Watching the Watchers: Clouds Rolling in

As much as we enjoy being the masters of the obvious, we don’t really need to discuss the move to cloud computing. It’s happening. It’s disruptive. Blah blah blah. People love to quibble about the details but it’s obvious to everyone. And of course, when the computation and storage behind your essential IT services might not reside in a facility under your control, things change a bit. The idea of a privileged user morphs in the cloud context, which adds another layer of abstraction via the cloud management environment. So regardless of your current level of cloud computing adoption, you need to factor the cloud into your PUM (privileged user management) initiative.

Or do you? Let’s play a little devil’s advocate here. When you think about it, isn’t cloud computing just more happening faster? You still have the same operating systems running as guests in public and/or private clouds, but with a greatly improved ability to spin up machines, faster than ever before. If you can provision and manage the entitlements of these new servers, it’s all good, right? In the abstract, yes. But the same old same old doesn’t work nearly as well in the new regime. Though we do respect the ostrich, burying your head in the sand doesn’t remove the need to think about cloud privileged users. So let’s walk through some ways cloud computing differs fundamentally from the classical world of on-premises physical servers.

Cloud Risks

First of all, any cloud initiative adds another layer of management abstraction. You manage cloud resources through either a virtualization console (such as vCenter or XenCenter) or a public cloud management interface. This means a new set of privileged users and entitlements which require management. Additionally, this cloud stuff is (relatively) new, so management capability lags well behind the traditional data center. It’s evolving rapidly but hasn’t yet caught up with the tools and processes for managing physical servers on a local physical network – and that immaturity poses a risk. For example, without entitlements properly configured, anyone with access to the cloud console can create and tear down any instance in the account. Or they can change access keys, add access or entitlements, change permissions, etc. – for the entire virtual data center. Again, this doesn’t mean you shouldn’t proceed and take full advantage of cloud initiatives. But take care to avoid unintended consequences stemming from the flexibility and abstraction of the cloud.

We also face a number of new risks driven by the flexibility of provisioning new computing resources. Any privileged user can spin up a new instance, which might not include the proper agentry and instrumentation to plug into the cloud management environment. You don’t have the same coarse control over network access you had before, so it’s easier for new (virtual) servers to pop up – which means it’s also easier for them to be exposed accidentally. Management and security largely need to be implemented within the instances – you cannot rely on the cloud infrastructure to provide them. So cloud consoles absolutely demand suitable protection – at least as much as the most important server under their control. You will want to take a similar lifecycle approach to protecting the cloud console as you do with traditional devices.
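To make the entitlement risk concrete, here is a minimal sketch of scoping down a console operator – assuming AWS with the boto3 SDK; the user name and policy name are purely illustrative – so a single compromised console login can’t tear down the whole virtual data center:

```python
# Sketch: constrain what a cloud console user can do via an inline IAM
# policy. Assumes AWS + boto3; "console-operator" is an illustrative name.
import json
import boto3

iam = boto3.client("iam")

# Allow read-only EC2 actions; explicitly deny the destructive ones.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ec2:Describe*"], "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["ec2:TerminateInstances", "ec2:RunInstances"],
         "Resource": "*"},
    ],
}

iam.put_user_policy(
    UserName="console-operator",              # hypothetical console user
    PolicyName="limit-console-blast-radius",  # illustrative policy name
    PolicyDocument=json.dumps(policy),
)
```

The same idea applies to any cloud console: grant the minimum set of console actions, and explicitly deny the destructive ones.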
The Lifecycle in the Clouds

To revisit our earlier research, the Privileged User Lifecycle involves restricting access, protecting credentials, enforcing entitlements, and monitoring P-user activity – but what does that look like in a cloud context?

Restrict Access (Cloud)

As in the physical world, you have a few options for restricting access to sensitive devices, and they vary dramatically between private and public clouds. To recap: you can implement access controls within the network, on the devices themselves (via agents), or by running all connections through a proxy and only allowing management connections from the proxy.

  • Private cloud console: The tactics we described in Restrict Access generally work, but there are a few caveats. Network access control gets a lot more complicated due to the inherent abstraction of the cloud. Agentry requires pre-authorized instances which include properly configured software. A proxy requires an additional agent of some kind on each instance, to restrict management connections to the proxy. That works much as in the traditional data center – but now it must be tightly integrated with the cloud console. As instances come and go, knowing which instances are running, and which policy groups each instance requires, becomes the challenge. To fill this gap, third party cloud management software providers are emerging to add finer-grained access control in private clouds.
  • Public cloud console: Restricting network access is an obvious non-starter in a public cloud. Fortunately you can set up specific security groups to restrict traffic, with some granularity over which IP addresses and protocols can access the instances – fine in a shared administrator context. But you cannot restrict access to specific users on specific devices (as required by most compliance mandates) at the network layer, because you have little control over the network in a public cloud. That leaves agentry on the instances, but with little ability to stop unauthorized parties from accessing instances. A proxy is even less viable – you can’t really restrict access per se, because the console literally lives on the Internet. To protect instances in a public cloud environment, you need to insert protections into other segments of the lifecycle.

Fortunately we are seeing some innovation in cloud management, including the ability to manage on demand. This means access to manage instances (usually via ssh on Linux instances) is off by default. Only when management is required does the cloud console open up management ports via policy, and only for authorized users at specified times. That approach addresses a number of the challenges of always-on and always-accessible cloud instances, so it’s a promising model for cloud management – sketched below.

Protect Credentials (Cloud)

When we think about protecting credentials for cloud computing resources, we need an expanded concept of credentials. We now need to worry about three types of credentials: Credentials for the cloud console(s) Credentials
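Returning to the on-demand management model described under Restrict Access: here is a minimal sketch – again assuming AWS security groups and boto3, with an illustrative group ID and admin address – of opening an SSH window only for an authorized admin, then closing it again:

```python
# Sketch of "management access off by default": open SSH for one admin,
# for one window, then revoke. A real deployment drives this from policy,
# not a script; the group ID and admin IP below are illustrative.
import time
import boto3

ec2 = boto3.client("ec2")
GROUP_ID = "sg-0123456789abcdef0"   # hypothetical security group
ADMIN_IP = "203.0.113.10/32"        # the authorized admin's address

ssh_rule = [{
    "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
    "IpRanges": [{"CidrIp": ADMIN_IP}],
}]

# Open SSH only for the approved admin, only for the approved window...
ec2.authorize_security_group_ingress(GroupId=GROUP_ID, IpPermissions=ssh_rule)
try:
    time.sleep(30 * 60)             # 30-minute management window
finally:
    # ...then close it, returning the instance to "dark" mode.
    ec2.revoke_security_group_ingress(GroupId=GROUP_ID, IpPermissions=ssh_rule)
```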


Friday Summary: April 13th, 2012

Happy Friday the 13th! I was thinking about superstition and science today, so I was particularly amused to notice that it’s Friday the 13th. Rich and I are both scientists of sorts; we both eschew superstition, but we occasionally argue about science. What’s real and what’s not. What’s science, what’s pseudoscience, and what’s just plain myth. It’s interesting to discuss root causes and what forces actually alter our surroundings. Do we have enough data to make an assertion about something, or is it just a statistical anomaly? I’m far more likely to jump to conclusions about stuff based on personal experience, and he’s more rigorous with the scientific method. And that’s true for work as well as life in general. For example, he still shuns my use of Vitamin C, while I’m convinced it has a positive effect. And Rich chides me when I make statements about things I don’t understand, or assertions that are completely ‘pseudoscience’ in his book. I’ll make an off-handed observation and he’ll respond with “Myth Busters proved that’s wrong in last week’s show”. And he’s usually right. We still have a fundamental disagreement about the probability of self-atomizing concrete, a story I’d rather not go into – but regardless, we are both serious tech geeks and proponents of science.

I regularly run across stuff that surprises me and challenges my fundamental perception of what’s possible. And I am fascinated by those things and the explanations ‘experts’ come up with for them – usually people with a financial incentive, hawking anything from food to electronic devices by claiming benefits we cannot measure, or for which we have no science to prove or disprove their claims. To keep things from getting all political or religious, I restrict my examples to my favorite hobby: HiFi.

I offer power cords as an example. I’ve switched most of the power cords for my television, iMac, and stereo to versions that run $100 to $300. Sounds deranged, I know, to spend that much on a piece of wire. But you know what? The colors on the television are deeper, more saturated, and far less visually ‘noisy’. Same for the iMac. And I’m not the only one who has witnessed this. It’s not subtle, and it’s completely repeatable. But I am at a loss to understand how the last three feet of copper between the wall socket and the computer can dramatically improve the quality of the display. Or the sound from my stereo. I can see it, and I can hear it, but I know of no test to measure it, and I just don’t find the explanations of “electron alignment” plausible.

Sometimes it’s simply that nobody thought to measure stuff they should have, because theoretically it shouldn’t matter. In college I thought most music sounded terrible and figured I had simply outgrown the music of my childhood. Turns out that in the 80s, when CDs were born, CD players introduced several new forms of distortion, and some of them were unmeasurable at the time. Listener fatigue became common, with many people getting headaches as a result of these poorly designed devices. Things like jitter, power supply noise, and noise created by different types of silicon gates and capacitors all produce sonic signatures audible to the human ear. Lots of this couldn’t be effectively measured, but it would send you running from the room. Fortunately, over the last 12 years or so audio designers have become aware of these new forms of distortion, and they now have devices that can measure them to one degree or another. I can even hear significant differences with various analog valves (i.e. ‘tubes’) where I cannot measure electrical differences.

Another oddity I have found is with vibration control devices. I went to a friend’s house and found his amplifiers and DVD players suspended high in the air on top of maple butcher blocks, which sat on top of what looked like a pair of hockey pucks separated by a ball bearing. The maple blocks are supposed to both absorb vibration and avoid electromagnetic interference between components. We did several A/B comparisons with and without each, and it was the little bearings that made a clear and noticeable difference in sound quality. The theory is that high frequency vibrations, which shake the electronic circuits of the amps and CD players, decrease resolution and introduce some form of distortion. Is that true? I have no clue. Do they work? Hell yes they do! I know that my mountain bike’s frame was designed to alter the tube circumference and wall thicknesses as a method of damping vibrations, and there is an improvement over previous generations of bike frames, albeit a subtle one. The reduction in vibrations on the bike can easily be measured, as can the vibrations and electromagnetic interference between A/V equipment. But the vibrational energy is so vanishingly small that it should never make a difference to audio quality.

Then there are the environmental factors that alter the user’s perception of events. Yeah, drugs and alcohol would be an example, but sticking to my HiFi theme: a creme that makes your iPod sound better, which ‘works’ by creating a positive impression with the user. Which again borders on the absurd. An unknown phenomenon, or snake oil? Sometimes it’s tough to tell superstition from science.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading paper on User Activity Monitoring.
  • Rich’s excellent Macworld article on the Flashback malware.
  • Adrian’s Dark Reading post on reverse database proxies.

Favorite Securosis Posts

  • Adrian Lane: The Myth of the Security-Smug Mac User. We get so many ‘news’ items – like how Android will capture the tablet market in 2015, or how Apple’s market share of smartphones is dwindling, or how smug Apple users will get their ‘comeuppance’ for rejecting AV solutions – that you wonder who’s coming up with this crap. Mac users may not have faith in AV to keep them secure, but they know eventually Macs will be targeted just as Windows has been. And I’m fairly certain most hackers run on


Incite 4/11/2012: Exchanging Problems

I figured an afternoon flight to the midwest would be reasonably peaceful. I was wrong. Things started on the wrong foot when I got an email notification from Delta that the flight was delayed, even though it wasn’t. The resulting OJ sprint through the terminal to make the flight was agitating. Then the tons of screaming kids on the flight didn’t help matters. I’m thankful for noise-isolating headphones, that’s for sure. But seeing the parents walking their kids up and down the aisle and dealing with the pain of ascent and descent on the kids’ eardrums got me thinking about my own situation.

As I mentioned, I was in Italy last week teaching our CCSK course, but the Boss took the kids up north for spring break to visit family. She flew with all of the kids by herself. 5 years ago that never would have happened. We actually didn’t fly as a family for years because it was just too hard. With infant/toddler twins and one three years older, the pain of getting all the crap through the airport and dealing with security and car seats and all the other misery just wasn’t worth it. It was much easier to drive, and for anything less than 6-7 hours it was probably faster to load up the van.

The Boss had no problems on the flight. The kids had their iOS devices and watched movies, played games, ate peanuts, enjoyed soda, and basically didn’t give her a hard time at all. They know how to equalize their ears, so the pain wasn’t an issue, and they took advantage of the endless supply of gum they can chew on a flight. So that problem isn’t really a problem any more. As long as they don’t go on walkabout through the terminal, it’s all good. But that doesn’t mean we haven’t exchanged one problem for another.

XX1 has entered the tween phase. Between the hormonally driven attitude and her general perspective that she knows everything (yeah, like every other kid), sometimes I long for the days of diapers. At least I didn’t have a kid challenging stuff I learned the hard way decades ago. And the twins have their own issues, as they deal with friend drama and the typical crap around staying focused.

When I see harried parents with multiples, sometimes I walk up and tell them it gets easier. I probably shouldn’t lie to them like that. It’s not easier, it’s just different. You constantly exchange one problem for another. Soon enough XX1 will be driving, and that creates all sorts of other issues. And then they’ll be off to college and the rest of their lives. So as challenging as it is sometimes, I try to enjoy the angst and keep it all in perspective. If life were easy, what fun would it be?

-Mike

Photo credits: “Problems are Opportunities” originally uploaded by Donna Grayson

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all of our content in its unabridged glory.

  • Vulnerability Management Evolution: Scanning the Infrastructure
  • Vulnerability Management Evolution: Scanning the Application Layer
  • Watching the Watchers (Privileged User Management): Enforce Entitlements
  • Watching the Watchers (Privileged User Management): Monitor Privileged Users
  • Understanding and Selecting DSP: Extended Features
  • Understanding and Selecting DSP: Administration
  • Malware Analysis Quant: Index of Posts

Incite 4 U

  • Geer on application security: no silent failures: Honestly, it’s pointless to try to summarize anything Dan Geer says. A summary misses the point. It misses the art of his words. And you’d miss priceless quotes like “There’s no government like no government,” and regarding data loss, “if I steal your data, then you still have them, unlike when I steal your underpants.” Brilliant. Just brilliant. So read this transcript of Dan’s keynote at AppSecDC and be thankful Dan is generous enough to post his public talks. Let me leave you with my main takeaway from Dan’s talk: “In a sense, our longstanding wish to be taken seriously has come; we will soon reflect on whether we really wanted that.” This is an opportunity to learn from a guy who has seen it all in security. Literally. Don’t squander it. Take the 15 minutes and read the talk. – MR
  • AppSec trio: Fergal Glynn of Veracode has started A CISO’s Guide to Application Security, a series on Threatpost. And it’s off to a good start, packed with a lot of good information, but the ‘components’ are all blending together. Secure software development, secure operations, and a software assurance program are three different things; and while they go hand in hand if you want a thorough program, it’s easier to think about them as three legs of the proverbial stool. Make no mistake, I have implemented secure coding techniques based purely on threat modeling because we had no metrics – or even an idea of what metrics were viable – for an assurance program. I’ve worked in finance, with little or no code development, relying purely on operational controls around pre-deployment and deployment phases of COTS software. At another firm I implemented metrics and risk analysis to inspire the CEO to allow secure code development to happen. So while these things get blurred together under the “application security” umbrella, remember they’re three different sets of techniques and processes, with three slightly different – and hopefully cooperating – audiences. – AL
  • It’s the economy, stupid: One of the weirdest things I’ve realized over years in the security industry is how much security is about economics and psychology, not about technology. No, I’m not flying off the deep end and ignoring the tech (I’m still a geek, after all), but if you want to make big changes you need to focus on things that affect the economics, not how many times a user clicks on links in email. One great example is the new database the government and cell phone providers are setting up to track stolen phones. Not only will they keep track of the stolen phones, they will make sure they can’t be


The Myth of the Security-Smug Mac User

I still consider myself a relative newcomer to the Mac community. Despite being the Security Editor at TidBITS and an occasional contributor to Macworld (print and online), and having spoken at Macworld Expo a couple times, I only really switched to Macs back in 2005. To keep this in perspective, TidBITS has been published electronically since 1990. Coming from the security world I had certain expectations of the Mac community. I thought they were naive and smug about security, and living in their own isolated world. That couldn’t have been further from the truth.

Over the past 7 years, especially the past 5+ since I left Gartner and could start writing for Mac publications, I have learned that Mac users care about security every bit as much as Windows users. I haven’t met a single Mac pundit who ever dismissed Mac security issues or the potential for malware, or who thought their Mac ‘immune’. From Gruber, to Macworld, to TidBITS, and even The Macalope (a close personal friend when he isn’t busy shedding on my couch, drinking my beer out of the cat’s water bowl, or ripping up my drapes with his antlers), not one person I’ve met or worked with has expressed any of the “security smugness” attributed to them by articles like the following:

  • Are MACS Safer then PCs
  • Flashback Mac Trojan Shakes Apple Rep of Invulnerability
  • Widespread Virus Proves Macs Are No Longer Safe From Hackers
  • Expert: Mac users more vulnerable than Windows users

And countless tweets and other articles. What’s more, the vast majority of Mac users worry about security. When I first started getting out into the Mac community people didn’t say, “Well, we don’t need to worry about security.” They asked, “What do I need to worry about?” Typical Mac users from all walks of life knew they weren’t being exploited on a daily basis, but were generally worried that there might be something they were missing. Especially relatively recent converts who had spent years running Windows XP. This is anecdotal, and I don’t have survey numbers to back it up, but I’ve been probably the most prominent writer on Mac security for the past 5 years, and I talk to a ton of people in person and over email. Nearly universally, Mac users are, and have been, concerned about security and malware.

So where does this myth come from? I think it comes from 3 sources:

1. An overly vocal minority who fill up the comments on blog posts and news articles. Yep – a big chunk of them are trolls and asshats. There are zealots like this for every technology, cause, and meme on the face of the planet. They don’t represent our community, no matter how many Apple stickers are on the backs of their cars and work-mandated Windows laptops.
2. One single advertisement where Apple made fun of the sick PC. One. Single. Singular. Unique. Apple only ever made that joke once, and it was in a single “I’m a Mac” spot. And it was 100% accurate at the time – there was no significant Mac malware then. But since then we have seen countless claims that Apple is ‘misleading’ users. Did Apple downplay security issues? Certainly… but nearly exclusively during a period when people weren’t being exploited. I’m not going to apologize for Apple’s security failings (especially their patching issues, which led to the current Flashback issue), but those are very different than actively misleading users. Okay – one of the Securosis staff believes there may have been some print references from pre-2005, but we are still talking small numbers and nothing current.
3. Antivirus vendors. I need to tread cautiously here because I have many friends at these companies who do very good work. Top-tier researchers who are vital to our community. But they have a contingent, just like the Mac4EVER zealots, who think people are stupid or naive if they don’t use AV. These are the same people who want Apple to loosen iOS security restrictions so they can run their AV products on your phones. Who took out full-page advertisements against Microsoft when MS was going to lock down parts of the Windows kernel (breaking their products) for better security. Who issue report after report designed only to frighten you into using their products. Who have been claiming that this year really will be the year of mobile malware (eventually they’ll be right, if we wait long enough).

Here’s the thing. The very worst quotes and articles attacking smug Mac users usually use a line similar to the following:

Mac users think they are immune because they don’t install antivirus.

Which is a logical fallacy of the highest order. These people promote AV as providing the same immunity they say Mac zealots claim for ‘unprotected’ Macs. They gloss over the limited effectiveness of AV products. How even the AV vendors didn’t have signatures for Flashfake until weeks after the infections started. How Windows users are constantly infected despite using AV, to the point where most enterprise security pros I work with see desktop antivirus as more a compliance tool and high-level filter than a reliable security control.

I’m not anti-AV. It plays a role, and some of the newer products (especially on the enterprise side) which rely less on signatures are showing better effectiveness (if you aren’t individually targeted). Plus most of those products include other security features, ranging from encryption to data loss prevention, that can be useful. I also recommend AV extensively for email and network filtering. Even on Macs, sometimes you need AV.

I am far more concerned about the false sense of immunity promoted by antivirus vendors than by smug Mac users. Because the security-smug Mac user community is a myth, but the claims of the pro-AV community (mostly AV vendors) are very real, and backed by large marketing budgets.

Update: Andrew Jaquith nailed this issue a while ago over at SecurityWeek: Note to readers: whenever you see or hear an author voicing contempt for customers by calling them arrogant, smug, complacent, oblivious, shiny-shiny obsessed members of a cabal, “living in a false paradise,” or


Responsible or Irresponsible Disclosure?—NFL Style

It’s funny to contrast this April to last April, at least as an NFL fan. Last year the lockout was in force, the negotiations were stalled, and fans wondered how billionaires could argue with millionaires when the economy was in the crapper. Between the Peyton Manning lottery, the upcoming draft, and the Saints bounty situation, there hasn’t been a dull moment for pro football fans since the Super Bowl ended.

Speaking of the Saints, even after suspensions and fines, more nasty aspects of the story keep surfacing. Last week we actually heard Gregg Williams, defensive coordinator of the Saints, implore his guys to target injured players, ‘affect’ the head, and twist ankles in the pile. Kind of nauseating. OK, very nauseating. I guess it’s true that most folks don’t want to see how the sausage is made – they just want to enjoy the taste.

But the disclosure was anything but clean. Sean Pamphilon, the director who posted the audio, did not have permission to post it. He was a guest of a guest at that meeting, there to capture the life of former Saints player Steve Gleason, who is afflicted with ALS. The director argues he had the right. The player (and the Saints) insist he didn’t. Clearly the audio put the bounty situation in a different light for fans of the game. Before, it was deplorable but abstract. After listening to the tape, it was real. He really said that stuff. Really paid money for his team to intentionally hurt opponents. Just terrible.

But there is still the dilemma of posting the tape without permission. Smart folks come down on both sides of this discussion. Many believe Pamphilon should have abided by the wishes of his host and not posted the audio. He wouldn’t have been there if not for the graciousness of both Steve Gleason and the Saints. But he was, and he clearly felt the public had a right to know, given the history of the NFL burying audio and video evidence of wrongdoing (Spygate, anyone?).

Legalities aside, this is a much higher profile example of the same responsible disclosure debate we security folks have every week. Does the public have a need to know? Is the disclosure of a zero-day attack doing a public service? Or should the researcher wait until the patch goes live, when they get to enjoy a credit buried in the patch notice? Cynically, some folks disclosing zero-days are in it for the publicity. Sure, they can blame unresponsive vendors, but at the end of the day, some folks seek the spotlight by breaking a juicy zero-day. Likewise, you can make a case that Pamphilon was able to draw a lot of attention to himself and his projects (past, current, and future) by posting the audio. Obviously you can’t buy press coverage like that. Does that make it wrong – that the discloser gets the benefit of notoriety?

There is no right or wrong answer here. There are just differing opinions. I’m not trying to open Pandora’s box and entertain a lot of discussion on responsible disclosure. Smart people have differing opinions and nothing I say will change that. My point was to draw the parallel between the Saints bounty tape disclosure and disclosing zero-day attacks. Hopefully that provides some additional context for the moral struggles of researchers deciding whether to go public with their findings or not.


Pain Comes Instantly—Fixes Come Later

Mary Ann Davidson’s recent post Pain Comes Instantly has been generating a lot of press. It’s being miscast by some media outlets as trashing the PCI Data Security Standard, but it’s really about the rules for vendors who want to certify commercial payment software and related products. The debate is worth considering, so I recommend giving it a read. It’s a long post, but I encourage you to read it all the way through before forming opinions, as she makes many arguments and provides some allegories along the way.

In essence she challenges the PCI Council on a particular requirement in the Payment Application Vendor Release Agreement (VRA), part of each vendor’s contractual agreement with the PCI Council to get their applications certified as PCI compliant. The issue is software vulnerability disclosure. Paraphrasing the issue at hand: let’s say Oracle becomes aware of a security bug. Under the terms of the agreement, Oracle must disseminate the information to the Council as part of the required information disclosure process. Her complaint is that the PCI Council insists on its right to leak (‘share’) this information even when Oracle has not yet provided a fix. Mary Ann argues that in this case the PCI Council is harming Oracle’s customers (who are also PCI Council customers) by making the vulnerability public. Hackers will of course exploit the vulnerability and try to breach the payment systems.

The real point of contention is that the PCI Council may decide to share this information with QSAs, partners, and other organizations, so those security experts can better protect themselves and PCI customers. Oracle’s position is, first, that the QSAs and others who may receive information from the Council are not qualified to make use of it. And second, the more people who know about the vulnerability, the more likely it is to leak.

I don’t have a problem with those points. I totally agree that if you tell thousands of people about the vulnerability, it’s as good as public knowledge. And it’s probably safe to wager that only a small percentage of Oracle customers have the initiative or knowledge to take vulnerability information and craft it into effective protection. Even a customer with Oracle’s database firewall won’t be able to turn this vulnerability information into a rule to protect the database. So from that perspective, I agree. But it’s a limited perspective. Just because few Oracle customers can generate a fix or a workaround doesn’t mean that a fix won’t or can’t be made available. Oracle customers have contributed workarounds in the past. Even if an individual customer can’t help themselves, others can – and have.

But here’s my real problem with that post: I am having trouble finding a substantial difference between her argument and the whole responsible disclosure debate. What’s the real difference from a security researcher finding an Oracle vulnerability? The information is outside Oracle’s control in both cases, and there is a likelihood of public disclosure. It’s something a determined hacker may discover, or have already discovered. It’s in Oracle’s best interest to fix the problem fast, before the rest of the world finds out. Historically the problem is that vendors, unless they have been publicly shamed into action, don’t react quickly to security issues. Oracle, among other vendors, has often been accused of sitting on vulnerabilities for months – even years – before addressing them.
Security researchers for years told basically the same story about Oracle flaws they found, which goes something like this: “We discovered a security flaw in Oracle. We told Oracle about it, and gave them details on how to reproduce it and some suggestions for how to fix it. Oracle a) never fixed it, b) produced a half-assed fix that causes other issues, or c) waited 9, 12, or 18 months before patching the issue – and that was only after I announced the bug to the world at the RSA/DefCon/Black Hat/OWASP conference. I gave Oracle information that anyone could discover, did not ask for any compensation, and Oracle tried to sue me when I disclosed the vulnerability after 12 months.”

I’m not Oracle bashing here – it’s an industry-wide issue – but my point is that with disclosure, timing matters… a lot. Since the Payment Application Vendor Release Agreement simply states you will ‘promptly’ inform the PCI Council of vulnerabilities, Oracle has a bit of leeway. Maybe ‘prompt’ means 30 days. Heck, maybe 60. That should be enough time to get a patch to those customers using certified payment products – or whatever term the PCI Council uses for vetted but not guaranteed software. If a vendor is a bit tardy getting detailed information to the PCI Council while they code and test a fix, I don’t think the Council will complain too much, so long as they are protected from liability. But make no mistake – timing is a critical part of this whole issue. Timing – particularly the lack of ‘prompt’ responses from Oracle – is why the security research community remains pissed off and critical to this day.


Understanding and Selecting DSP: Administration

Today’s post focuses on administering Database Security Platforms. Conceptually DSP is pretty simple: collect data from databases, analyze it according to established rules, and react when a rule has been violated. The administrative component of every DSP platform covers three basic tasks: data management, policy management, and workflow management. In addition to these three basic functions, we also need to administer the platform itself, as we do with any other application platform.

As we described in our earlier post on DSP technical architecture, DSP sends all collected data to a central server. The DAM precursors evolved from single servers, to two-tiered architectures, and finally into a hierarchical model, in order to scale up to enterprise environments. The good news is that system maintenance, data storage, and policy management are all available from a single console. While administration is now usually through a browser, the web application server that performs the work is built into the central management server. Unlike some other security products, little glue code or browser trickery is required to stitch things together.

System Management

  • User Management: With access to many different databases – most filtering and reporting on sensitive data – user management is critical for security. Establishing who can make changes to policies, read collected data, or administer the platform are all specialized tasks, and these groups of users are typically kept separate. All DSP solutions offer different methods for segregating users into different groups, each with differing granularity. Most of the platforms offer integration with directory services to aid in user provisioning and assignment of roles.
  • Collector/Sensor/Target Database Management: Agents and data collectors are managed from the central server. While data and policies are stored centrally, the collectors – which often enforce policy on the remote database – must periodically sync with the central server to update rules and settings. Some systems require the administrator to ‘push’ rules out to agents or remote servers, while others sync automatically.
  • Systems Management: DSP is, in and of itself, an application platform. It has web interfaces, automated services, and databases, like most enterprise applications. As such it requires some tweaking, patching, and configuration to perform its best. For example, the supporting database may need pruning to clear out older data, vendor assessment rules require updates, and the system may need additional resources for data storage and reports. The system management interface is provided via a web browser, but only available to authorized administrators.

Data Aggregation & Correlation

The one characteristic Database Activity Monitoring solutions share with log management, and even Security Information and Event Management (SIEM), tools is their ability to collect disparate activity logs from a variety of database management systems. They tend to exceed the capabilities of related technologies in their ability to go “up the stack” to gather deeper database activity and application layer data, and in their ability to correlate information. Like SIEM, DSP aggregates, normalizes, and correlates events across many heterogeneous sources. Some platforms even provide an optional ‘enrichment’ capability, linking audit, identity, and assessment data to event records – for example, providing both ‘before’ and ‘after’ data values for a suspect query.
Despite central management and correlation features, the similarities with SIEM end there. By understanding the Structured Query Language (SQL) of each database platform, these platforms can interpret queries and understand their meaning. While a simple SELECT statement might mean the same thing across different database platforms, each database management system (DBMS) is full of its own particular syntax. DSP understands the SQL for each platform and is able to normalize events, so the user doesn’t need to know the ins and outs of each DBMS. For example, if you want to review all privilege escalations on all covered systems, a DAM solution will recognize those events, regardless of platform, and present a complete report, without you having to understand the SQL. A more advanced feature is to correlate activity across different transactions and platforms, rather than looking only at single events. For example, some platforms recognize a higher than normal transaction volume by a particular user, or (as we’ll consider under policies) can link a privilege escalation event with a large SELECT query on sensitive data, which could indicate an attack – see the sketch below. All activity is also centrally collected in a secure repository, to prevent tampering or a breach of the repository itself. Since they collect massive amounts of data, DSPs must support automatic archiving. Archiving should support separate backups of system activity, configuration, policies, alerts, and case management; and encrypt them under separate keys to support separation of duties.

Policy Management

All platforms come with sets of pre-packaged policies for security and compliance. For example, every product contains hundreds, if not thousands, of assessment policies that identify vulnerabilities. Most platforms come with pre-defined policies for monitoring standard deployments of databases behind major applications such as Oracle Financials and SAP. Built-in policies for PCI, SOX, and other generic compliance requirements are also available to help you jump-start the process and save many hours of policy building. Every policy has the built-in capability of generating an alert if the rule is violated – usually through email, instant message, or some other messaging capability. Note that every user needs to tune or customize a subset of the pre-existing policies to match their environment, and create others to address specific risks to their data. Even so, they are far better than starting from scratch.

Activity monitoring policies include user/group, time of day, source/destination, and other important contextual options. These policies should offer different analysis techniques based on attributes, heuristics, context, and content analysis. They should also support advanced definitions, such as complex multi-level nesting and combinations. If a policy violation occurs, you can specify any number of alerting, event handling, and reactive actions. Ideally, the platform will include policy creation tools that limit the need to write everything out in SQL or some other definition language; it’s much better if your compliance team does not need to learn SQL programming to create policies. You can’t avoid having to do some things
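To illustrate the cross-event correlation described above, here is a toy sketch in plain Python – the event format is invented for illustration, and real DSP products do the per-DBMS normalization for you – that links a privilege escalation to a subsequent large SELECT against sensitive data:

```python
# Toy correlation rule: alert when a privilege escalation is followed,
# within a time window, by a large SELECT on sensitive data by the same
# user. Events below are hypothetical, pre-normalized records.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
ROW_THRESHOLD = 10_000

events = [
    {"time": datetime(2012, 4, 11, 9, 0), "user": "jdoe",
     "type": "privilege_escalation"},
    {"time": datetime(2012, 4, 11, 9, 4), "user": "jdoe",
     "type": "select", "rows": 250_000, "sensitive": True},
]

recent_escalations = {}  # user -> time of most recent escalation

for ev in sorted(events, key=lambda e: e["time"]):
    if ev["type"] == "privilege_escalation":
        recent_escalations[ev["user"]] = ev["time"]
    elif ev["type"] == "select" and ev.get("sensitive"):
        escalated = recent_escalations.get(ev["user"])
        if escalated and ev["time"] - escalated <= WINDOW \
                and ev["rows"] >= ROW_THRESHOLD:
            print(f"ALERT: {ev['user']} escalated at {escalated}, "
                  f"then read {ev['rows']} sensitive rows")
```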


How to Tell If Your Cloud Provider Can Read Your Data (Hint: They Can)

Over at TidBITS today I published a non-security-geek oriented article on how to tell whether your cloud provider can read your data. Since many of you are security geeks, here’s the short version (mostly cut and paste), plus some more technical info. The short version? If you don’t encrypt it and manage the keys yourself, of course someone on their side can read it (99+% of the time).

There are three easy indicators that your cloud provider (especially a SaaS provider) can read your data:

1. If you can see your data in a web browser after entering only your account password, the odds are extremely high that your provider can read it as well. The only way you could see your data in a web browser and still have it hidden from your provider would require complex (fragile) JavaScript code, or a Flash/Java/ActiveX control to decrypt and display the data locally.
2. If the service offers both web access and a desktop application, and you can access your data in both with the same account password, the odds are high that your provider can read your data. The common access indicates that your account password is probably being used to protect your data (usually your password unlocks your encryption key). While your provider could architect things so the same password is used in different ways to both encrypt data and allow web access, that doesn’t really happen.
3. If you can access the cloud service from a new device or application by simply providing your user name and password, your provider can probably read your data.

This is how I knew Dropbox could read my files long before that story hit the press. Once I saw that I could log in and see my files, or view them on my iPad, without using any encryption key other than my account password, I knew that my data was encrypted with a key that Dropbox manages. The same goes for the enterprise-focused file sharing service Box (even though it’s hard to tell from reading their site). Of course, since Dropbox stores just files, you can apply your own encryption before Dropbox ever sees your data, as I explained last year.

And iCloud? With iCloud I have a single user name and password. Apple offers a rich and well-designed web interface where I can manage individual email messages, calendar entries, and more. I can register new devices and computers with the same user name and password I use on the web site. So it has always been clear that Apple could read my content, just as Ars Technica reported recently (with quotes from me).

That doesn’t mean that Dropbox, iCloud, and similar services are insecure. They generally have extensive controls – both technical and policy restrictions – to keep employees from snooping. But such services aren’t suitable for all users in all cases – especially for businesses or governmental organizations that are contractually or legally obligated to keep certain data private.

Now let’s think beyond consumer services, about the enterprise side. Salesforce? Yep – of course they can read your data (unless you add an encryption proxy). SaaS providers nearly always can – that’s what lets them do stuff with your data. PaaS? Same deal (again, unless you do the encryption yourself). IaaS? Of course – your instance needs to boot up somehow, and if you want attached volumes to be encrypted you have to do it yourself. The main thing for Securosis readers to understand is that the vast majority of consumer and enterprise cloud services that mention encryption or offer encryption options manage your keys for you, and have full access to your data.
Why offer encryption at all then, if it doesn’t really improve security? Compliance. It wipes out one risk (lost hard drives) and reduces compliance scope for physical handling of the storage media. It also looks good on a checklist. Take Amazon S3 – Amazon is really clear that although you can encrypt data, they can still read it. I suppose the only reason I wrote this post and the article is that I’m sick of the “iWhatever service can read your data” non-stories that seem to crop up all the time. Duh.
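To make the “apply your own encryption before the provider sees it” option concrete, here is a minimal sketch – assuming Python with the third-party cryptography package; the file paths are illustrative:

```python
# Sketch: encrypt a file locally before it ever lands in a sync folder,
# so the provider only stores ciphertext. Requires the third-party
# 'cryptography' package; paths below are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this key OFF the cloud service
fernet = Fernet(key)

with open("taxes-2012.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

# Only the ciphertext goes into the synced folder.
with open("Dropbox/taxes-2012.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Decryption requires the locally held key, which the provider never sees.
plaintext = fernet.decrypt(ciphertext)
```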


Vulnerability Management Evolution: Scanning the Application Layer

In our last Vulnerability Management Evolution post we discussed scanning infrastructure, which remains an important part of vulnerability management. But we recognize that most attacks target applications directly, so we can no longer just scan the infrastructure and be done with it. We need to climb the stack and pay attention to the application layer, looking for vulnerabilities in applications as well as their supporting components. But that requires us to define an ‘application’, which is surprisingly difficult.

A few years ago the definition of an application was fairly straightforward. Even in an N-tier app, with a variety of application servers and data stores, you largely controlled all the components of the application. Nowadays, not so much. Pre-assembled web stacks, open source application servers, third party crypto libraries, and cloud-provided services all make for quick application development, but blur the line between your application and the supporting infrastructure. You have little visibility into what’s going on behind the curtain, but you’re still responsible for securing it.

For the purposes of our vulnerability/threat management discussion, we define the app as presentation plus infrastructure. The presentation layer focuses on assembling information from a number of different sources, internal or external to your enterprise. The user of the application couldn’t care less where the data comes from, so from a threat standpoint you need to assess the presentation code for issues that put devices at risk. But reducing the attack surface of applications also requires you to pay attention to the infrastructure: the application servers, interfaces, and databases that assemble the data presented by the application. So you scan application servers and databases to find problems. Let’s dig into the two aspects of the application layer to assess: databases and application infrastructure.

Database Layer

Assessing databases is more similar to scanning infrastructure than to scanning applications – you look for vulnerabilities in the DBMS (database management system). As with other infrastructure devices, databases can be misconfigured and might have improper entitlements, all of which pose risks to your environment. So assessment needs to focus on whether appropriate database patches have been installed, the configuration of the database, improper access control, entitlements, etc. Let’s work through the key steps in database assessment:

  • Discovery: First you need to know where your databases are. That means a discovery process, preferably automated, to find both known and unknown databases (see the sketch at the end of this post). You need to be wary of shadow IT, where lines of business and other groups build their own data stores – perhaps without the operational mojo of your data center group. You should also make sure you are continuously searching for new databases, because they can pop up anywhere, at any time, just like rogue access points – and they do.
  • Vulnerabilities: You will also look for vulnerabilities in your DBMS platform, which requires up-to-date tests for database issues. Your DB assessment provider should have a research team to keep track of the newest attacks on whatever database platforms you use. Once something is found, information about exposure, workarounds, and remediations is critical for making your job easier.
  • Configurations: Configuration checking a DBMS is slightly different – you are assessing mostly internals. Be sure you check the database both with credentials (as an authorized user) and without credentials (which more accurately represents a typical outside attacker). Both scenarios are common in database attacks, so make sure your configuration is sufficiently locked down against both.
  • Access Rights and Entitlements: Aside from default accounts and passwords, focus your efforts on making sure no users (neither humans nor applications) have additional entitlements that put the database platform at risk. For example, you need to ensure credentials of de-provisioned users have been removed, and that accounts which only need read access don’t have the ability to DROP TABLES. And you need to verify that users – especially administrators – cannot ‘backdoor’ the database through local system privileges.

Part of this is housekeeping, but you need to pay attention – make sure your databases are configured correctly to avoid unnecessary risk. Finally, we know this research focuses more on vulnerability/threat identification and assessment, but over time you will see even tighter integration between evolved vulnerability/threat management platforms and tactics to remediate problems. We have written a detailed research report on Database Assessment, and you should track our Database Security Platform research closely, so you can shorten your exposure window by catching problems and taking action more quickly.

Application Layer

Application assessment (especially of web applications) is a different animal, mostly because you have to actually ‘attack’ the application to find vulnerabilities, which might exist within the application code or the infrastructure components it is built on. Obviously you need to crawl through the app to find and fix issues. There are several different types of app security testing (as discussed in Building a Web App Security Program), so we will just summarize here:

  • Platform Vulnerabilities: This is the stuff we check for when scanning infrastructure and databases. Applications aren’t ‘stand-alone’ – they depend on infrastructure and inherit vulnerabilities from their underlying components. The clearest example is a content management system, where a web app built on Drupal inherits all the vulnerabilities of Drupal, unless they are somehow patched or worked around.
  • Static Application Security Testing (SAST): Also called “white box testing”, SAST involves developers analyzing source code to identify coding errors. This is not normally handled by security teams – it is normally part of a secure development lifecycle (SDLC).
  • Dynamic Application Security Testing (DAST): Also known as “black box testing”, DAST attempts to find application defects using bad inputs, fuzzing, and other techniques. It doesn’t require access to the source code, so some security teams get involved in DAST, but it is still largely seen as a development responsibility, because thorough DAST testing can be destructive to the app and so shouldn’t be used on production applications.

Web App Scanners

But the technology most relevant to the evolution of vulnerability management is the web application scanner. Many of the available vulnerability management offerings offer an add-on capability to scan applications and their underlying infrastructures to identify
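As a sketch of the automated database discovery step described under the Database Layer – plain Python with an illustrative subnet; real scanners add service fingerprinting and credentialed checks on top of this – consider:

```python
# Sketch: a TCP connect sweep for common DBMS listener ports, to surface
# known and unknown databases. The subnet is illustrative; a sequential
# sweep like this is slow, and real products parallelize it.
import socket

DB_PORTS = {1433: "SQL Server", 1521: "Oracle",
            3306: "MySQL", 5432: "PostgreSQL"}

def find_databases(subnet="10.0.0", timeout=0.3):
    found = []
    for host in (f"{subnet}.{i}" for i in range(1, 255)):
        for port, dbms in DB_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    found.append((host, port, dbms))
            except OSError:
                pass  # closed, filtered, or no host there
    return found

for host, port, dbms in find_databases():
    print(f"{host}:{port} looks like {dbms}")
```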


Watching the Watchers: Monitor Privileged Users

As we continue our march through the Privileged User Lifecycle, we have now locked down privileged accounts as tightly as needed. But that’s not the whole story – the lifecycle ends with a traditional audit, because verifying what administrators do with their privileges is just as important as the other steps. Admittedly, some organizations have a cultural issue with granular user monitoring, because they actually want to trust their employees. Silly organizations, right? But in this case there is no monitoring slippery slope – we aren’t talking about recording an employee’s personal Facebook interactions or checking out pictures of Grandma. We’re talking about capturing what an administrator has done on a specific device.

Before we get into the how of privileged user monitoring, let’s look at why you would monitor admins. There are two main reasons:

  • Forensics: In the event of a breach, you need to know what happened on the device, quickly. A detailed record of what an administrator did on a device can be instrumental in putting the pieces together – especially in the event of an inside job. Of course privileged user monitoring is not a panacea for forensics – there are a zillion other ways to get compromised – but if the breach began with administrator activity, you would have a record of what happened, and the proverbial smoking gun.
  • Audit: Another use is to make your auditor happy. Imagine the difference between showing the auditor a policy saying how you do things, and showing a screen capture of an account being provisioned or a change being committed. Monitoring logs are powerful for showing that the controls are in place.

Sold? Good, but how do you move from concept to reality? You have a couple of options, including:

  • SIEM/Log Management: As part of your other compliance efforts, you likely send most events from sensitive devices to a central aggregation point. That SIEM/Log Management infrastructure can also be used to monitor privileged users. By setting up some reports and correlation rules for administrator activity, you can effectively figure out what administrators are doing (a simple sketch appears at the end of this post). By the way, this is one of the main use cases for SIEM and log management.
  • Configuration Management: A similar approach is to pull data out of a configuration management platform which tracks changes on managed devices. A difference between using configuration management and a SIEM is the ability to go beyond monitoring and actually block unauthorized changes.

Screen Capture

If a picture is worth a thousand words, how much would you say a video is worth? An advantage of routing your administrative sessions through a proxy is the ability to capture exactly what admins are doing on every device. With a video screen capture of the session and the associated keystrokes, there can be no question of intent – no inference of what actually happened. You’ll know what happened – you just need to watch the playback. For screen capture you can deploy an agent on the managed device, or you can route sessions through a proxy.

We started discussing the P-User Lifecycle by focusing on how to restrict access to sensitive devices. After discussing a number of options, we explained why proxies make a lot of sense for making sure only the right administrators access the correct devices at the right times. So it’s appropriate that we come full circle and end our lifecycle discussion in a similar position. Let’s look at performance and scale first.
Video is pretty compute-intensive, and consumes a tremendous amount of storage. The good news is that an administrative session doesn’t require HD quality to catch a bad apple red-handed, so heavy compression is feasible and can save a significant chunk of storage – whether you capture with an agent or through a proxy. But there is a major difference in device impact between these approaches. An agent takes resources for screen capture from the managed device, which impacts the server’s performance – probably significantly. With a proxy, the resources are consumed by the proxy server rather than the managed device.

The other issue is the security of the video – ensuring there is no tampering with the capture. Either way, you can protect the video with secure storage and/or other means of making tampering evident, such as cryptographic hashing. The main question is how you get the video into secure storage. Using an agent, the system needs a secure transport between the device and the storage. Using a proxy, the storage can be integrated into (or placed very close to) the proxy device. We believe a proxy-based approach to monitoring privileged users makes the most sense, but there are certainly cases where an agent could suffice.

And with that we have completed our journey through the Privileged User Lifecycle, but we aren’t done yet. This “cloud computing” thing threatens to dramatically complicate how all devices are managed, with substantial impact on how privileged users need to be managed. So in the next post we will delve into the impact of the cloud on privileged users.
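As promised above, here is a simple sketch of the SIEM-style reporting idea: a toy Python report that pulls every sudo command, and who ran it, out of a syslog-style auth log. The log path and format are illustrative; a SIEM does this (plus correlation and alerting) at scale.

```python
# Toy privileged-activity report: extract sudo commands per admin from a
# syslog-style auth log. Path/format are illustrative examples.
import re
from collections import defaultdict

SUDO_RE = re.compile(
    r"sudo:\s+(?P<user>\S+)\s*:.*?COMMAND=(?P<command>.+)$")

commands_by_admin = defaultdict(list)

with open("/var/log/auth.log") as log:
    for line in log:
        m = SUDO_RE.search(line)
        if m:
            commands_by_admin[m.group("user")].append(m.group("command"))

for admin, commands in sorted(commands_by_admin.items()):
    print(f"{admin}: {len(commands)} privileged commands")
    for cmd in commands:
        print(f"  {cmd}")
```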


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.