Securosis Research

Incite 4/25/2012: Drafty Draft

It feels like Bizarro World to me. I woke up this morning freezing my backside off. We turned off the heat a few weeks ago and it was something like 65 this morning. Outside it was in the 40s, at the end of April. WTF? And the Northeast has snow. WTF? I had to bust out my sweatshirts, which I had hoped to shelve for the season. Again, WTF? But even a draft of cold weather can't undermine my optimism this week. Why? Because it's NFL Draft time. That's right, I made it through the dark time between the start of free agency and the Draft. You know it's a slow time – I have been watching baseball and even turned on a hockey game. But the drought is over. Now it's time to see who goes where. And to keep a scorecard of how wrong all the pundits are in their mock drafts.

Here's another thing I learned. There are pundits in every business, and the Internet seems to have enabled a whole mess of people to make their livings as pundits. If you follow the NFL you are probably familiar with Mel Kiper, Jr. (and his awesome hair) and Todd McShay, who man the draft desk at ESPN. They almost always disagree, which is entertaining. And Mike Mayock of NFL Network provides great analysis. They get most of the visibility this week, but through the magic of the Twitter I have learned that lots of other folks write for web sites, some big and most small, and seem to follow the NFL as their main occupation. Wait, what? I try not to let my envy gene flare up, but come on, man! I say I have a dream job and that I work with smart people doing what I really like. But let's be honest here – what rabid football fan wouldn't rather be talking football all day, every day? And make a living doing it.

But here's the issue. I don't really know anything about football. I didn't play organized football growing up, as my Mom didn't think fat Jewish kids were cut out for football. And rolling over neighborhood kids probably doesn't make me qualified to comment on explosiveness, change of direction, or fluid hips. I know very little about Xs and Os. Actually, I just learned that an offensive lineman with short arms can't play left tackle, as speed rushers would get around him almost every time. Who knew? But I keep wondering if my lack of formal training should deter me. I mean, if we make an analogy to the security business, we have a ton of folks who have never done anything starting up blogs and tweeting. Even better, some of them are hired by the big analyst firms and paraded in front of clients who have to make real decisions and spend real money based on feedback from some punk. To be fair, there was a time in my career when I was that punk, so I should know. 20 years later I can only laugh and hope I didn't cost my clients too much money. Maybe I should pull a Robin Sage on the NFL information machine. That would be kind of cool, eh? Worst case it works and I'll have a great Black Hat presentation.

-Mike

Photo credits: "Windy" originally uploaded by Seth Mazow

Heavy Research

We're back at work on a variety of our blog series, so here is a list of the research currently underway. Remember our Heavy RSS Feed, where you can access all our content in its unabridged glory.

  • Vulnerability Management Evolution: Core Technologies; Value-Add Technologies; Enterprise Features and Integration
  • Watching the Watchers (Privileged User Management): Clouds Rolling In; Integration
  • Understanding and Selecting DSP: Use Cases
  • Malware Analysis Quant: Index of Posts

Incite 4 U

Don't go out without your raincoat: I tip my hat to the folks at Sophos.
To figure out a way to compare the infection rate of Chlamydia to the prevalence of Mac malware is totally evil genius. That stat really resonates with me, and wasn't a good thing for some of my buddies at school. So do 20% of Macs really have malware? Not exactly – they include the presence of Windows malware, which obviously doesn't do much harm on Macs. Only 1 in 36 had actual Mac malware, and I'm sure a bunch of those were Flashback users who downloaded AV only after being infected. Though I guess the malware could spread to PCs via VMs and other unsafe computing practices. Of course the Sophos guys never miss an opportunity to make an impassioned plea for Mac AV, especially since it's free. Reminds me of something my Dad said when I came of age. He told me never to go out without my raincoat on. He was right – just ask my fraternity brothers. I wonder if "The Trojan Man for Mac" would work as the new Sophos tagline? – MR

Killer apps: Will (Mobile) Apps Kill Websites is Jeff Atwood's question, and one I have been mulling over the last few months. All Jeff's points are spot-on: well-designed apps provide a kick-ass user experience that few web sites can rival. Fast, simple, and tailored for the environment, they are often just better. And considering that mobile devices will outnumber desktops 10:1 in the future, replacement is not hard to imagine. But Jeff's list of disadvantages should contain a few security issues as well. Namely, none of the protections I use with my desktop browser (NoScript, Ghostery, Flashblock, Adblock, etc.) are available on mobile platforms. Nor do we have fine-grained control over what apps can do, and we cannot currently run outbound firewalls to make sure websites aren't secretly transmitting our data. Mobile platforms generally offer really good built-in security, but in practice it is gradually becoming harder to protect – and sandbox – apps, similar to the challenges we have already faced with desktop browsers. It looks like we get to play security catch-up


Vulnerability Management Evolution: Enterprise Features and Integration

We're in the home stretch of the Vulnerability Management Evolution research project. After talking mostly about the transition from an audit-centric tactical tool to a much more strategic platform providing security decision support, it is now time to look critically at what's required to make the platform work in your enterprise. That means providing both built-in tools to help manage your vulnerability management program, and supporting integration with existing security and IT management tools. Remember, it is very rare to have an opportunity to start fresh in a green field. So whether you select a new platform or stay with your incumbent provider, as you add functionality you'll need to play nicely in your existing sandbox.

Managing the Vulnerability Management Program

We have been around way too long to actually believe that any tool (or toolset) can ever entirely solve any problem, so our research tends to focus on implementing programs to address problems rather than selecting products. Vulnerability management is no different, so let's list what you need to actually manage the program internally. First you need basic information before you can attempt any kind of prioritization. That has really been the focus of the research to date: taking tactical scans and configuration assessments of the infrastructure and application layers, combining them with perceived asset value and the value-added technologies we discussed in the last post, and running some analytics to provide usable information. But the fun begins once you have an idea of what needs to be fixed and relative priorities.

Dashboards

Given the rate of change in today's organizations, wading through a 200-page vulnerability report or doing manual differential comparisons of configuration files isn't efficient or scalable. Add in cloud computing and everything is happening even faster, making automation critical to security operations. You need the ability to take information and visualize it in ways that make sense for a variety of constituencies. You need an Executive View, providing a high-level view of current security posture and other important executive-level metrics. You need an operational view to help guide the security team on what they need to do. And you can probably use views for application-specific vulnerabilities, and perhaps infrastructure and database visuals for those folks. Basically you need the flexibility to design an appropriate dashboard/interface for any staffer who needs to access the platform's information. Most vendors ship with a bunch of out-of-the-box options, but more importantly make sure they offer a user-friendly way to customize the interface for what each staffer needs.

Workflow

Unless your IT shop is a one-man (or one-woman) band, some level of communication is required to keep everything straight. With a small enough team a daily coffee discussion might suffice. But that doesn't scale, so the vulnerability/threat management platform should include the ability to open 'tickets', or whatever you call them, to get work done. It certainly doesn't need to include a full-blown trouble ticket system, but this capability comes in handy if you don't have an existing support/help desk system. As a base level of functionality, look for the ability to do simple ticket routing, approval/authorization, and a way to indicate work has been done (closing tickets).
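To make the workflow idea concrete, here is a minimal sketch (not modeled on any particular product) of a remediation ticket with simple routing, approval, and closure states. Every field name and state here is invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative states for a remediation ticket in a VM platform's built-in workflow.
STATES = ["open", "assigned", "awaiting_approval", "approved", "fixed", "closed"]

@dataclass
class RemediationTicket:
    vuln_id: str             # the finding this ticket tracks (e.g., a scanner check ID)
    asset: str               # affected system
    assignee: str = ""       # operator responsible for the fix
    state: str = "open"
    history: list = field(default_factory=list)

    def transition(self, new_state: str, actor: str) -> None:
        """Move the ticket along the workflow, keeping an audit trail."""
        if new_state not in STATES:
            raise ValueError(f"unknown state: {new_state}")
        # Separation of duties: whoever applies the fix should not self-approve.
        if new_state == "approved" and actor == self.assignee:
            raise PermissionError("approver must differ from assignee")
        self.history.append((datetime.utcnow().isoformat(), actor, self.state, new_state))
        self.state = new_state

ticket = RemediationTicket(vuln_id="check-1234", asset="db-prod-01", assignee="ops_admin")
ticket.transition("assigned", actor="secops_lead")
```

The separation-of-duties check in the sketch foreshadows the point below: proper authorization and an audit trail matter as much as the routing itself.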
Obviously you'll want extensive reporting on tickets and the ability to give specific staff members lists of the things they should be doing. Straightforward stuff. Don't forget that any program needs checks and balances, so an integral part of the workflow capability must be enforcement of proper separation of duties, to ensure no one individual has too much control over your environment. That means proper authorization before making changes or remediating issues, and a proper audit trail for everything administrators do with the platform.

Compliance Reporting

Finally you need to substantiate your controls for the inevitable audits, which means your platform needs to generate documentation to satisfy the auditor's appetite for information. Okay, it won't totally satisfy the auditor (as if that were even possible), but it should at least provide a good perspective on what you do and how well it works, with artifacts to prove it. Since most audits break down to some kind of checklist you need to follow, having those lists enumerated in the vulnerability management platform is important and saves a bunch of time. You don't want to be manually mapping reports on firewall configurations to PCI Requirement 1 – the tool should do that out of the box. Make sure whatever you choose offers the reports you need for the mandates you are subject to.

But reporting shouldn't end when the auditor goes away. You should also use the reports to keep everyone operationally honest. That means reports showing similar information to the dashboards we outlined above. You'll want your senior folks to get periodic reports covering open vulnerabilities and configuration problems, newly opened attack paths, and systems that can be exploited by the pen test tool. Similarly, operational folks might get reports of their overdue tasks, or efficiency reports showing how quickly they remediate their assigned vulnerabilities. Again, look for customization – everyone seems to want the information in their own format. Dashboards and reporting are really the yin and yang of managing any security-oriented program, so make sure the platform provides the flexibility to display and disseminate information however you need it.

Enterprise Integration

As we mentioned, in today's technology environment nothing stands alone, so when looking at this evolved vulnerability management platform, how well it integrates with what you already have is a strong consideration. But you have a lot of stuff, right? So let's prioritize integration a bit.

Patch/Config Management: In the value-add technologies piece, we speculated a bit on the future evolution of common platforms for vulnerability/threat and configuration/patch management. As hinted there, tight integration between these two functions is critical. You will probably hear the term vulnerability validation to describe this integration, but it basically means closing the loop between assessment and remediation. So when an issue is identified by the VM platform, the fix is made (presumably by the patch/config tool) and
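A rough sketch of what that closed assessment/remediation loop might look like in code. The objects and function names here are invented stand-ins, not any vendor's API:

```python
def closed_loop_remediation(finding, vm_platform, patch_tool):
    """Hypothetical loop: assess -> remediate -> rescan -> confirm or reopen."""
    ticket = vm_platform.open_ticket(finding)            # track the issue
    patch_tool.remediate(finding.asset, finding.fix)     # push the patch/config change
    result = vm_platform.rescan(finding.asset, checks=[finding.check_id])
    if result.still_vulnerable:
        vm_platform.reopen(ticket, reason="fix did not validate")
    else:
        vm_platform.close(ticket, evidence=result.report)  # audit artifact for compliance
```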


Vulnerability Management Evolution: Value-Add Technologies

So far we have talked about scanning infrastructure and the application layer, before jumping into some technology decisions you face, such as how to deal with cloud delivery and agents. But as much as these capabilities increase the value of the vulnerability management system, they still aren't enough to really help focus security efforts and prioritize the hundreds (if not thousands) of vulnerabilities or configuration problems you'll find. So let's look at a few emerging capabilities that make the information gleaned from scans and assessments more impactful to the operational decisions you make every day. These capabilities are not yet common to all the leading vulnerability management offerings. But we expect most (if not all) of them to be core capabilities of these platforms in some way over the next 2-3 years, so watch for increasing M&A and technology integration around these functions.

Attack Path Analysis

If no one hears a tree fall in the woods, has it really fallen? The same question can be asked about a vulnerable system. If an attacker can't get to the vulnerable device, is it really vulnerable? The answer is yes, it's still vulnerable, but clearly less urgent to remediate. So tracking which assets are accessible to a variety of potential attackers becomes critical for an evolved vulnerability management platform. Typically this analysis is based on ingesting firewall rule sets and router/switch configuration files. With some advanced analytics the tool determines whether an attacker could (theoretically) reach the vulnerable devices. This adds a critical third leg to the "oh crap, I need to fix it now" decision process. Obviously most enterprises have fairly complicated networks, which means an attack path analysis tool must be able to crunch a huge amount of data to work through all the permutations and combinations of possible paths to each asset. You should also look for native support of the devices (firewalls, routers, switches, etc.) you use, so you don't have to do a bunch of manual data entry – given the frequency of change in most environments, that would likely be a complete non-starter. Finally, make sure the visualization and reports on paths present the information in a way you can use.

By the way, attack path analysis tools are not really new. They have existed for a long time, but never achieved broad market adoption. As you know, we're big fans of Mr. Market, which means we need to get critical for a moment and ask what's different now that would enable the market to develop. First, integration with vulnerability/threat management platforms makes this information part of the threat management cycle rather than a stand-alone function, and that's critical. Second, current tools can finally offer analysis and visualization at enterprise scale. So we expect this technology to be a key part of these platforms sooner rather than later; we already see some early technical integration deals and expect more.
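As a toy illustration of the reachability idea (a real product crunches far more data), here is a sketch that treats permitted traffic flows distilled from firewall and router configurations as a graph and asks whether an attacker's starting zone can reach a vulnerable asset. The zones and rules are entirely made up:

```python
from collections import deque

# Each edge says traffic is permitted from one zone to another (in a real tool this
# would be distilled from firewall rule sets and router/switch configurations).
allowed = {
    "internet": ["dmz"],
    "dmz":      ["app_tier"],
    "app_tier": ["db_tier"],
    "corp_lan": ["app_tier", "db_tier"],
}

def reachable(src: str, dst: str) -> bool:
    """Breadth-first search over permitted paths between network zones."""
    seen, queue = {src}, deque([src])
    while queue:
        zone = queue.popleft()
        if zone == dst:
            return True
        for nxt in allowed.get(zone, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A vulnerability on the DB tier matters more if the internet can actually get there.
print(reachable("internet", "db_tier"))   # True via dmz -> app_tier -> db_tier
```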
Automated Pen Testing

Another key question raised by a long vulnerability report needs to be, "Can you exploit the vulnerability?" Like a vulnerable asset without a clear attack path, if a vulnerability cannot be exploited – thanks to some other control or the lack of a weaponized exploit – remediation becomes less urgent. For example, perhaps you have a HIPS product deployed on a sensitive server that blocks attacks against a known vulnerability. Obviously your basic vulnerability scanner cannot detect that, so the vulnerability will be reported just as urgently as every other one on your list. Having the ability to actually run exploits against vulnerable devices as part of a security assurance process can provide perspective on what is really at risk, versus just theoretically vulnerable. In an integrated scenario a discovered vulnerability can be tested for exploitability immediately, to either shorten the window of exposure or provide immediate reassurance. Of course there is risk with this approach, including the possibility of taking down production devices, so use pen testing tools with care. But to really know what can be exploited and what can't, you need to use live ammunition. And be sure to use fully vetted, professionally supported exploit code. You should have a real quality assurance process behind the exploits you try. It's cool to have an open source exploit, and using less stable code on a test/analysis network is fine. But you probably don't want to launch an untested exploit against a production system. Not if you like your job, anyway.

Compliance Automation

In the rush to get back to our security roots, many folks have forgotten that the auditor is still going to show up every month/quarter/year, and you need to be ready. That process burns resources that could otherwise be used on more strategic efforts, just like everything else. Vulnerability scanning is a critical part of every compliance mandate, so scanners have pumped out PCI, HIPAA, SOX, NERC/FERC, etc. reports for years. But that's only the first step in compliance automation. Auditors need plenty of other data to determine whether your control set is sufficient to satisfy the regulation. That includes things like configuration files, log records, and self-assessment questionnaires. So expect to see increasingly robust compliance automation in these platforms over time. That means a workflow engine to help you manage getting ready for your assessment, and a flexible integration model to allow storage of additional unstructured data in the system. The goal is to ensure that when the auditor shows up your folks have already assembled all the data they need and can easily access it. The easier that is, the sooner the auditor will go away and let your folks get back to work.

Patch/Configuration Management

Finally, you don't have to stretch to see the value of broader configuration and/or patch management capabilities within the vulnerability management platform. You are already finding what's wrong (either vulnerable or improperly configured), so why not just fix it? Clearly there is plenty of overlap with existing configuration and patching tools, and you could just as easily make the case that those tools can and should add vulnerability management
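Tying the capabilities in this post together, a back-of-the-napkin sketch of how reachability, exploit validation, and asset value might adjust remediation priority. The weights and fields are invented purely for illustration:

```python
def remediation_priority(finding: dict) -> float:
    """Toy scoring: severity tempered by whether the flaw is reachable and exploitable."""
    score = finding["cvss"]                                    # base severity from the scanner
    score *= 1.5 if finding["reachable"] else 0.5              # attack path analysis result
    score *= 1.5 if finding["exploit_validated"] else 0.7      # automated pen test result
    score *= finding.get("asset_value", 1.0)                   # business weighting of the asset
    return round(score, 1)

findings = [
    {"id": "vuln-A", "cvss": 9.0, "reachable": False, "exploit_validated": False, "asset_value": 1.0},
    {"id": "vuln-B", "cvss": 6.5, "reachable": True,  "exploit_validated": True,  "asset_value": 2.0},
]
for f in sorted(findings, key=remediation_priority, reverse=True):
    print(f["id"], remediation_priority(f))
# The lower-severity but reachable, exploitable issue on a valuable asset rises to the top.
```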


Watching the Watchers: Integration

As we wrap up Watching the Watchers, it's worth reminding ourselves of the reality of enterprise security today. Nothing stands alone – not in the enterprise management stack anyway – so privileged user management functions need to play nicely with the other management tools. There are levels of integration required: some functions need to be attached at the hip, while others can be mere acquaintances.

Identity Integration

Given that the 'U' in PUM stands for user, clearly identity infrastructure is one of the categories that needs to be tightly coupled. What does that mean? We described the provisioning/entitlements requirement in the Privileged User Lifecycle. But identity is a discipline in itself, so we cannot cover it in real depth in this series. In terms of integration, your PUM environment needs to natively support your enterprise directory. It doesn't really work to have multiple authoritative sources for users. Privileged users are, by definition, a subset of the user base, so they reside in the main user directory. This is critical for both provisioning new users and deprovisioning those who no longer need specific entitlements. Again, the PUM lifecycle needs to enforce entitlements, but the groupings of administrators are stored in the enterprise directory.

Another requirement for identity integration is support for two-factor authentication. PUM protects the keys to the kingdom, so if a proxy gateway is part of your PUM installation, it's essential to ensure a connecting privileged user is actually the real user. That requires some kind of multiple-factor authentication, to protect against an administrator's device being compromised and an attacker thereby gaining access to the PUM console. That would be a bad day. We don't have any favorites in terms of stronger authentication methods, though we note that most organizations opt for tried-and-true hard tokens.

Management Infrastructure

Another area of integration is the enterprise IT management stack – you know, the tools that manage data center and network operations. This may include configuration, patching, and performance management. The integration is mostly about pushing alerts to an ops console. For instance, if the PUM portal is under a brute force password attack, you probably want to notify the ops folks to investigate. The PUM infrastructure also consists of devices, so there will be some device health information that could be useful to ops. If a device goes down or an agent fails, alerts should be sent over to the ops console. Finally, you will want some kind of help desk integration. Some ops tickets may require access to the PUM console, so being able to address a ticket and close it out directly in the PUM environment could streamline operations.

Monitoring Infrastructure

The last area of integration is the monitoring infrastructure. Yes, your SIEM/Log Management platform should be the target for any auditable event in the PUM environment. First of all, a best practice for log management is to isolate the logs on a different device to ensure log records aren't tampered with in the event of a compromise. Frankly, if your PUM proxy is compromised you have bigger problems than log isolation, but you should still exercise care in protecting the integrity of the log files, and perhaps they can help you address those larger issues. Sending events over to the SIEM also helps provide more depth for user activity monitoring.
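For illustration, a minimal sketch of shipping a PUM audit event to a SIEM as structured syslog. The field names and collector address are assumptions, not any product's format:

```python
import json
import logging
import logging.handlers

# Forward PUM audit events to the SIEM's syslog collector (address is illustrative).
handler = logging.handlers.SysLogHandler(address=("siem.example.internal", 514))
logger = logging.getLogger("pum.audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def send_pum_event(user: str, target: str, action: str, session_id: str) -> None:
    """Emit one auditable privileged-user action as a JSON syslog message."""
    event = {
        "source": "pum-proxy",
        "user": user,          # the privileged user behind the shared credential
        "target": target,      # server the session touched
        "action": action,      # e.g., "session_start", "command", "session_end"
        "session": session_id,
    }
    logger.info(json.dumps(event))

send_pum_event("adm_jsmith", "db-prod-01", "session_start", "a1b2c3")
```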
Obviously a key aspect of PUM is privileged user monitoring, but that only applies when users access server devices with their enhanced privileges. The SIEM watches a much broader slice of activity, which includes access to applications, email, etc. Don't expect to start pumping PUM events into the SIEM and have fairy dust start drifting out of the dashboard. You still need to do the work to add correlation rules that leverage the PUM data, update reports, and so on. We discuss the process of managing SIEM rule sets fairly extensively in both our Understanding and Selecting SIEM/Log Management and Monitoring Up the Stack papers. Check them out if you want more detail on that process.

And with that, we wrap up this series. Over the next few weeks we will package up the posts into a white paper and have our trusty editor (the inimitable Chris Pepper) turn this drivel into coherent copy.


Vulnerability Management Evolution: Core Technologies

As we discussed in the last couple of posts, any VM platform must be able to scan infrastructure and scan the application layer. But that's still mostly tactical stuff: run the scan, get a report, fix stuff (or not), and move on. When we talk about a strategic and evolved vulnerability management platform, the core technology needs to evolve to serve more than merely tactical goals – it must provide a foundation for a number of additional capabilities. Before we jump into the details, we will reiterate the key requirements. You need to be able to scan/assess:

  • Critical Assets: This includes the key elements in your critical data path; it requires both scanning and configuration assessment/policy checking for applications, databases, server and network devices, etc.
  • Scale: Scalability requirements are largely in the eye of the beholder. You want to be sure the platform's deployment architecture will provide timely results without consuming all your network bandwidth.
  • Accuracy: You don't have time to mess around, so you don't want a report with 1,000 vulnerabilities, 400 of them false positives. There is no way to totally avoid false positives (aside from not scanning at all), so accuracy is a key selection criterion.

Yes, that was pretty obvious. With a mature technology like vulnerability management the question is less about what you need to do and more about how – especially when positioning for evolution and advanced capabilities. So let's first dig into the foundation of any kind of strategic platform: the data model.

Integrated Data Model

What's the difference between a tactical scanner and an integrated vulnerability/threat management platform? Data sharing, of course. The platform needs the ability to consume and store more than just scan results. You also need configuration data, third party and internal research on vulnerabilities, research on attack paths, and a bunch of other data types we will discuss in the next post on advanced technology. Flexibility and extensibility are key for the data schema. Don't get stuck with a rigid schema that won't allow you to add whatever data you need to most effectively prioritize your efforts – whatever data that turns out to be.

Once the data is in the foundation, the next requirement involves analytics. You need to set alerts and thresholds on the data, and be able to correlate disparate information sources to glean perspective and help with decision support. We are focused on more effectively prioritizing security team efforts, so your platform needs analytical capabilities to help turn all that data into useful information.

When you start evaluating specific vendor offerings you may get dragged into a religious discussion of storage approaches and technologies – you know, whether a relational backend, an object store, or even a proprietary flat file system provides the performance, flexibility, etc. to serve as the foundation of your platform. Understand that it really is a religious discussion. Your analysis efforts need to focus on the scale and flexibility of whatever data model underlies the platform. Also pay attention to evolution and migration strategies, especially if you plan to stick with your current vendor as they move to a new platform. This transition is akin to a brain transplant, so make sure the vendor has a clear and well-thought-out path to the new platform and data model. Obviously if your vendor stores their data in the cloud it's not your problem, but don't put the cart before the horse. We will discuss cloud versus customer premises later in this post.
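To make the flexible-data-model point concrete, a sketch of a single asset record as a schemaless document that different data sources can extend. Every field here is invented for illustration:

```python
# One asset's record in a document-style store: scan results, configuration
# assessments, and third-party research all hang off the same key, and new
# data types can be added later without a schema change.
asset_record = {
    "asset_id": "srv-0142",
    "hostname": "db-prod-01",
    "asset_value": "high",                       # business weighting for prioritization
    "scan_results": [
        {"check": "vuln-1234", "severity": 9.0, "first_seen": "2012-04-20"},
    ],
    "config_assessment": {
        "benchmark": "hardening-baseline-v2",    # hypothetical policy/benchmark name
        "failed_checks": ["password_policy", "remote_root_login"],
    },
    "attack_paths": ["internet->dmz->app_tier->db_tier"],   # from path analysis
    "threat_intel": {"exploit_available": True},             # third-party research
}

# Adding a brand-new data type later is just another key:
asset_record["pen_test"] = {"exploited": True, "date": "2012-04-22"}
```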
Discovery

Once you get to platform capabilities, first you need to find out what's in your environment. That means a discovery process to find devices on your network and make sure everything is accounted for. You want to avoid the "oh crap" moment, when a bunch of unknown devices show up – and you have no idea what they are, what they have access to, or whether they are steaming piles of malware. Or at least shorten the window between something showing up on your network and the "oh crap" discovery moment.

There are a number of techniques for discovery, including actively scanning your entire address space for devices and profiling what you find. That works well enough and tends to be the main way vulnerability management offerings handle discovery, so active discovery is still table stakes for VM offerings. You need to balance the network impact of active discovery against the need to quickly find new devices. Also make sure you can search your networks completely, which means both your IPv4 space and your emerging IPv6 environment. Oh, you don't have IPv6? Think again. You'd be surprised at the number of devices that ship with IPv6 active by default, and if you don't plan to discover that address space as well you'll miss a significant attack surface. You never want to hold up a network deployment while your VM vendor gets their act together.

You can supplement active discovery with a passive capability that monitors network traffic and identifies new devices based on network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities identified, but the primary goal of passive monitoring is to find new unmanaged devices faster. Once a new device is identified passively, you can then launch an active scan to figure out what it's doing. Passive discovery is also helpful for devices that use firewalls to block active discovery and vulnerability scanning.

But that's not all – depending on the breadth of your vulnerability/threat management program, you might want to include endpoints and mobile devices in the discovery process. We always want more data, so we are in favor of discovering all the assets in your environment. That said, for determining what's important in your environment (see the asset management/risk scoring section below), endpoints tend to be less important than databases with protected data, so prioritize the effort you expend on discovery and assessment. Finally, another complicating factor for discovery is the cloud. With the ability to spin up and take down instances at will, your platform needs to both track and assess
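A deliberately simple sketch of active discovery – a TCP connect sweep across a tiny, made-up address range. A real platform adds fingerprinting, scheduling, IPv6 support, and far more care about network impact:

```python
import socket

def host_responds(ip: str, ports=(22, 80, 443), timeout=0.5) -> bool:
    """Crude liveness check: does anything answer on a few common ports?"""
    for port in ports:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            continue
    return False

def sweep(prefix: str = "10.0.0.", start: int = 1, end: int = 24) -> list:
    """Walk a (tiny, illustrative) IPv4 range and report hosts that respond."""
    return [f"{prefix}{i}" for i in range(start, end + 1) if host_responds(f"{prefix}{i}")]

if __name__ == "__main__":
    for host in sweep():
        print("discovered:", host)
```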


Incite 4/18/2012: Camión de Calor

It was a Mr. Mom weekend, so I particularly appreciated settling in at the coffee shop on Monday morning and getting some stuff done. And it wasn't just trucking the kids around to their various activities. It was a big weekend for all of us to catch up on work. XX1 has the CRCT standardized test this week, which is a big deal in GA, so there was much prep for that. Both XX2 and the Boy have How-to presentations in class this week, so they each had to write and practice a presentation. And I had to finish up our taxes and update the Securosis financials. With the Boss in absentia, I was juggling knives trying to get everything done. I look back on an intense but fun weekend. But when you spend a large block of time with kids, they inevitably surprise you with their interrogation… I mean questions.

I was wearing my Hot Truck t-shirt (pictured at right), and the Boy was fascinated. What's a Hot Truck? Is it hot? That was just the beginning of the questioning, so the Boy needed a little context. The Hot Truck is an institution for those who went to Cornell. Basically a guy made French bread pizzas in a truck parked every night right off campus. Conveniently enough the truck parked around the corner from my fraternity house, and it was clearly the preferred late-night meal after a night of hard partying. At any time of year you had folks milling around the truck waiting for their order. Of course the truck itself was pretty cool. It was basically an old box truck fitted with a pizza oven. The city set up a power outlet right on the street, and he'd drive up at maybe 10pm, plug in, and start cooking.

Things didn't get exciting until 1 or 2 in the morning. Then the line would be 10-15 deep and the money guy would write your order on a paper bag. No name, nothing else. Just your order. Obviously there were plenty of ways to game such a sophisticated system. You could sneak a peek at the list and then say the sandwich was yours when it came up. Then wait until the real owner of the sandwich showed up and tried to figure out what happened while you munched on their food. The truck was there until 4am or so – basically until everyone got served. Over time, you got to know Bob (the owner) and he'd let you inside the truck (which was great on those 10-degree winter nights) to chat. You'd get your sandwich made sooner or could just take one of the unclaimed orders. He must have loved talking to all those drunk fools every night.

But best of all was the shorthand language that emerged from the Hot Truck. You could order the PMP (Poor Man's Pizza), MBC (meatballs and cheese), RoRo (roast beef with mushrooms), or even a Shaggy (a little bit of everything) – named after a fraternity brother of mine. And then you'd put on the extras, like Pep (pepperoni) or G&G (grease the garden – mayo and lettuce). All on a French bread pizza. My favorite was the MBC Pep G&G. Between the Hot Truck and beer, it's no wonder I gained a bunch of weight every year at school.

But all things end, and Bob sold the Truck a few years ago. It was bought by a local convenience store, which still runs the truck and also serves the sandwiches in its store in downtown Ithaca. It's just not the same experience though – especially since I don't eat meatballs anymore. But the memories of the Hot Truck live on, and I even have the t-shirt to prove it.

–Mike

Photo credits: "Hot Truck T-Shirt" taken by Mike Rothman

Heavy Research

We're back at work on a variety of our blog series, so here is a list of the research currently underway.
Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory.

  • Vulnerability Management Evolution: Scanning the Application Layer
  • Watching the Watchers (Privileged User Management): Monitor Privileged Users; Clouds Rolling In
  • Understanding and Selecting DSP: Use Cases
  • Malware Analysis Quant: Index of Posts

Incite 4 U

Stone cold responders: I recently did a session with a dozen or so CISOs at an IANS Forum, and one of the topics was incident response. I started to talk about the human toll of high-pressure incident response, and got a bunch of blank stares. Of course we dug in, and the bigger companies with dedicated response staff said they staff incident response teams with even-keeled folks. The kind who don't get too excited or depressed or much of anything. Which kind of aligns with Lenny Z's post on the kind of personality that detects security issues early. It seems anxious folks who are on edge all the time may not have an effective early warning system. Just more evidence that you need the right folks in the right spots for any chance at success. – MR

PCI: Living on borrowed time? Bob Carr of Heartland Payments says "Anyone that thinks they're not going to be breached is naive." This interview, posted just days after Heartland's financial settlement details went public, reinforces the notion that – just as cockroaches would be the only survivors of a nuclear holocaust – only lawyers win in lawsuits. It was expensive for Heartland, and CardSystems Solutions did not survive. That is topical in light of the Global Payments breach, which illustrates the risk to financial companies even as Visa offers to forgo PCI audits if a majority of merchant transactions originate from EMV terminals. Keep in mind that the breach of Global Payments – or Heartland, for that matter – and the fraud enabled by cloning credit cards are totally separate issues. So at a time when merchants and payment processors should be looking more aggressively at security and breach preparedness, as Mr. Carr advocates… Visa is backing off on audits to boost EMV. Some will say this is an exchange for back office security for


Understanding and Selecting DSP: Use Cases

Database Security Platforms are incredibly versatile – offering benefits for security, compliance, and even operations. The following are some classic use cases and ways we often see them used.

Monitoring and assessment for regulatory compliance

Traditionally the biggest driver for purchasing a DAM/DSP product was to assist with compliance, with Sarbanes-Oxley (SOX) almost single-handedly driving the early market. The features were mostly used for compliance in a few particular ways:

  • To assess in-scope databases for known security issues and policy compliance. Some regulations require periodic database assessment for security issues, policy (configuration) compliance, or both.
  • To assess databases for entitlement issues related to regulatory compliance. While all vulnerability tools can assess database platforms to some degree, no non-database-specific tools can perform credentialed scanning and assessment of user entitlements. This is now often required by certain regulations to ensure users cannot operate outside their designated scope, and to catch issues like users assigned multiple roles which create a conflict of interest. This can be evaluated manually, but it is far more efficient to use a tool if one is available.
  • To monitor database administrators. This is often the single largest reason to use a DSP product in a compliance project.
  • For comprehensive compliance reports spanning multiple databases and applications. Policy-level reports demonstrate that controls are in place, while other reports provide the audit trail necessary to validate the control. Most tools include such reports for a variety of major regulations, with tailored formats by industry.

Web application security

Almost all web applications are backed by databases, so SQL injection is one of the top three ways to remotely attack them. Web Application Firewalls can block some SQL injection, but a key limitation is that they don't necessarily understand the database they are protecting, so they are prone to false positives and negatives. DSPs provide a similar capability – at least for database attacks – but with detailed knowledge of both the database type and how the application uses it. For example, if a web application typically queries a database for credit card numbers, the DSP tool can generate an alert if the application requests more card numbers than a defined threshold (often 1). A DSP tool with content analysis can do the same thing without the operator having to identify the fields containing credit card numbers. Instead you can set a generic "credit card" policy that alerts any time a credit card is returned in a query to the web application server, as nearly no front-end applications ask for full card numbers anymore – those requests are typically left to transaction systems instead.

We have only scratched the surface of the potential security benefits for web apps. For example, query whitelisting can alert any time new queries or patterns appear. It is increasingly common for attackers to inject or alter stored procedures in order to take control of databases, and stored procedure monitoring picks up attacks that a WAF might miss. Some tools on the market even communicate violations back to a WAF, either for alerting or to terminate suspicious sessions and even block the offending IP address.
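A toy sketch of the card-number threshold rule described above – nowhere near a real DSP's content analysis, and the pattern and threshold are illustrative only:

```python
import re

# Very rough card-number pattern for illustration; real content analysis is far smarter.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
MAX_CARDS_PER_RESULT = 1   # the web app should never pull back more than one card number

def check_result_set(app_user: str, rows: list) -> list:
    """Return alerts if a query result sent to the web app leaks card numbers."""
    alerts = []
    hits = sum(len(CARD_PATTERN.findall(str(row))) for row in rows)
    if hits > MAX_CARDS_PER_RESULT:
        alerts.append(f"ALERT: {app_user} received {hits} card numbers in one result set")
    return alerts

print(check_result_set("webapp_svc", [("4111 1111 1111 1111",), ("4012 8888 8888 1881",)]))
```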
Change management

Critical databases go down more often due to poor change management than due to attacks. Unlike application code changes, administrators commonly jump right into production databases and directly manipulate data in ways that can easily cause outages. Adding closed-loop change management supported by DSP reduces the likelihood of a bad change and provides much deeper accountability – even if shared credentials are used. Every administrator action in the database can be tracked and correlated back to a specific change ticket, with monitoring showing the full log of every SQL command – and often return values as well.

Legacy system and service account support

Many older databases have terrible logging and auditing features that can crush database performance, when they are even available. Such older databases are also likely to include poorly secured service accounts (although we concede that stored plain-text credentials for application accounts are still all too common in general). DSP can generate an audit trail where the database itself does not offer one, and DSP tools tend to support older databases – even those no longer supported by the database vendor. Even modern databases with auditing tend to impose a greater performance impact than DSPs. DSP tools can also audit service accounts – generic accounts used by applications to speed up performance – and even alert on unusual activity. This can be especially useful with even a simple rule – such as alerting on any access attempt using service account credentials from anywhere other than the application server's IP address.

And with that, we have wrapped up our series on Database Security Platforms.
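To make that simple service-account rule concrete, a minimal sketch; the account names and addresses are invented:

```python
import ipaddress

# Service accounts and the only hosts they should ever connect from (illustrative).
SERVICE_ACCOUNT_SOURCES = {
    "app_svc":    {"10.1.2.10", "10.1.2.11"},   # application servers
    "report_svc": {"10.1.3.20"},                # reporting server
}

def check_login(account, source_ip):
    """Alert when a service account credential is used from an unexpected address."""
    allowed = SERVICE_ACCOUNT_SOURCES.get(account)
    if allowed is None:
        return None                              # not a service account; other rules apply
    ipaddress.ip_address(source_ip)              # sanity-check the address format
    if source_ip not in allowed:
        return f"ALERT: {account} login from unexpected host {source_ip}"
    return None

print(check_login("app_svc", "192.168.50.77"))   # app credentials used from a desktop
```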


Watching the Watchers: Clouds Rolling in

As much as we enjoy being the masters of the obvious, we don't really need to discuss the move to cloud computing. It's happening. It's disruptive. Blah blah blah. People love to quibble about the details, but it's obvious to everyone. And of course, when the computation and storage behind your essential IT services might not reside in a facility under your control, things change a bit. The idea of a privileged user morphs in the cloud context, with another layer of abstraction added via the cloud management environment. So regardless of your current level of cloud computing adoption, you need to factor the cloud into your PUM (privileged user management) initiative.

Or do you? Let's play a little Devil's advocate here. When you think about it, isn't cloud computing just more of the same, happening faster? You still have the same operating systems running as guests in public and/or private clouds, but with a greatly improved ability to spin up machines, faster than ever before. If you are able to provision and manage the entitlements of these new servers, it's all good, right? In the abstract, yes. But the same old same old doesn't work nearly as well in the new regime. Though we do respect the ostrich, burying your head in the sand doesn't really remove the need to think about cloud privileged users. So let's walk through some ways cloud computing differs fundamentally from the classical world of on-premise physical servers.

Cloud Risks

First of all, any cloud initiative adds another layer of management abstraction. You manage cloud resources through either a virtualization console (such as vCenter or XenCenter) or a public cloud management interface. This means a new set of privileged users and entitlements which require management. Additionally, this cloud stuff is (relatively) new, so management capability lags well behind the traditional data center. It's evolving rapidly but hasn't yet caught up with the tools and processes for managing physical servers on a local physical network – and that immaturity poses a risk. For example, without entitlements properly configured, anyone with access to the cloud console can create and tear down any instance in the account. Or they can change access keys, add access or entitlements, change permissions, etc. – for the entire virtual data center. Again, this doesn't mean you shouldn't proceed and take full advantage of cloud initiatives. But take care to avoid unintended consequences stemming from the flexibility and abstraction of the cloud.

We also face a number of new risks driven by the flexibility of provisioning new computing resources. Any privileged user can spin up a new instance, which might not include the proper agentry and instrumentation to plug into the cloud management environment. You don't have the same coarse control of network access we had before, so it's easier for new (virtual) servers to pop up, which means it's also easier to be exposed accidentally. Management and security largely need to be implemented within the instances – you cannot rely on the cloud infrastructure to provide them. So cloud consoles absolutely demand suitable protection – at least as much as the most important server under their control. You will want to take a similar lifecycle approach to protecting the cloud console as you do with traditional devices.
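As a purely illustrative sketch of reining in console entitlements, here is an IAM-style policy expressed as a Python dict that lets a limited operations role start and stop instances but not terminate them or touch identity settings. The action names mimic AWS conventions, but the policy and checker are hypothetical:

```python
# Hypothetical console/API policy for a limited "ops" role. A real deployment would
# express this in the provider's own policy language and attach it to a group.
ops_role_policy = {
    "role": "cloud-ops-limited",
    "allow": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:DescribeInstances",
    ],
    "deny": [
        "ec2:TerminateInstances",   # tearing down instances needs separate approval
        "iam:*",                    # no changing users, keys, or entitlements
    ],
}

def is_permitted(policy: dict, action: str) -> bool:
    """Deny wins; otherwise the action must be explicitly allowed."""
    if any(action == d or (d.endswith("*") and action.startswith(d[:-1])) for d in policy["deny"]):
        return False
    return action in policy["allow"]

print(is_permitted(ops_role_policy, "ec2:TerminateInstances"))   # False
```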
The Lifecycle in the Clouds

To revisit our earlier research, the Privileged User Lifecycle involves restricting access, protecting credentials, enforcing entitlements, and monitoring P-user activity – but what does that look like in a cloud context?

Restrict Access (Cloud)

As in the physical world, you have a few options for restricting access to sensitive devices, and they vary dramatically between private and public clouds. To recap: you can implement access controls within the network, on the devices themselves (via agents), or by running all connections through a proxy and only allowing management connections from the proxy.

Private cloud console: The tactics we described in Restrict Access generally work, but there are a few caveats. Network access control gets a lot more complicated due to the inherent abstraction of the cloud. Agentry requires pre-authorized instances which include properly configured software. A proxy requires an additional agent of some kind on each instance, to restrict management connections to the proxy. That is actually the same as in the traditional datacenter – but now it must be tightly integrated with the cloud console. As instances come and go, knowing which instances are running and which policy groups each instance requires becomes the challenge. To fill this gap, third party cloud management software providers are emerging to add finer-grained access control in private clouds.

Public cloud console: Restricting network access is an obvious non-starter in a public cloud. Fortunately you can set up specific security groups to restrict traffic, with some granularity over which IP addresses and protocols can access the instances, which would be fine in a shared administrator context. But you aren't able to restrict access to specific users on specific devices (as required by most compliance mandates) at the network layer, because you have little control over the network in a public cloud. That leaves agentry on the instances, but with little ability to stop unauthorized parties from accessing instances. Another, less viable, option is a proxy, but you can't really restrict access per se – the console literally lives on the Internet. To protect instances in a public cloud environment, you need to insert protections into other segments of the lifecycle.

Fortunately we are seeing some innovation in cloud management, including the ability to manage on demand. This means access to manage instances (usually via ssh on Linux instances) is off by default. Only when management is required does the cloud console open up the management port(s) via policy, and only for authorized users at specified times. That approach addresses a number of the challenges of always-on and always-accessible cloud instances, so it's a promising model for cloud management (see the sketch at the end of this excerpt).

Protect Credentials (Cloud)

When we think about protecting credentials for cloud computing resources, we need an expanded concept of credentials. We now need to worry about three types of credentials: credentials for the cloud console(s), credentials
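Returning to the on-demand management access described under Restrict Access above, a hedged sketch using AWS security groups via boto3 (used purely as an illustration; the group ID and addresses are made up, and a real PUM integration would wrap this in approval and expiry logic):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
MGMT_GROUP_ID = "sg-0123456789abcdef0"   # hypothetical security group for managed instances

def open_ssh_window(admin_ip: str) -> None:
    """Temporarily allow SSH from a single authorized administrator address."""
    ec2.authorize_security_group_ingress(
        GroupId=MGMT_GROUP_ID,
        IpProtocol="tcp",
        FromPort=22,
        ToPort=22,
        CidrIp=f"{admin_ip}/32",
    )

def close_ssh_window(admin_ip: str) -> None:
    """Revoke the rule once the management session is over."""
    ec2.revoke_security_group_ingress(
        GroupId=MGMT_GROUP_ID,
        IpProtocol="tcp",
        FromPort=22,
        ToPort=22,
        CidrIp=f"{admin_ip}/32",
    )

# A PUM workflow would call open_ssh_window() only after approval, then close it on a timer.
```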


Friday Summary: April 13th, 2012

Happy Friday the 13th! I was thinking about superstition and science today, so I was particularly amused to notice that it's Friday the 13th. Rich and I are both scientists of sorts; we both eschew superstition, but we occasionally argue about science. What's real and what's not. What's science, what's pseudoscience, and what's just plain myth. It's interesting to discuss root causes and what forces actually alter our surroundings. Do we have enough data to make an assertion about something, or is it just a statistical anomaly? I'm far more likely to jump to conclusions about stuff based on personal experience, and he's more rigorous with the scientific method. And that's true for work as well as life in general. For example, he still shuns my use of Vitamin C, while I'm convinced it has a positive effect. And Rich chides me when I make statements about things I don't understand, or assertions that are completely 'pseudoscience' in his book. I'll make an off-handed observation and he'll respond with "Myth Busters proved that's wrong in last week's show". And he's usually right. We still have a fundamental disagreement about the probability of self-atomizing concrete, a story I'd rather not go into – but regardless, we are both serious tech geeks and proponents of science.

I regularly run across stuff that surprises me and challenges my fundamental perception of what's possible. And I am fascinated by those things and the explanations 'experts' come up with for them – usually from people with a financial incentive, hawking anything from food to electronic devices by claiming benefits we cannot measure, or for which we don't have science that could prove or disprove their claims. To keep things from getting all political or religious, I restrict my examples to my favorite hobby: HiFi. I offer power cords as an example. I've switched most of the power cords on my television, iMac, and stereo to versions that run $100 to $300. Sounds deranged, I know, to spend that much on a piece of wire. But you know what? The colors on the television are deeper, more saturated, and far less visually 'noisy'. Same for the iMac. And I'm not the only one who has witnessed this. It's not subtle, and it's completely repeatable. But I am at a loss to understand how the last three feet of copper between the wall socket and the computer can dramatically improve the quality of the display. Or the sound from my stereo. I can see it, and I can hear it, but I know of no test to measure it, and I just don't find the explanations of "electron alignment" plausible.

Sometimes it's simply that nobody thought to measure stuff they should have, because theoretically it shouldn't matter. In college I thought most music sounded terrible and figured I had simply outgrown the music of my childhood. Turns out that in the 80s, when CDs were born, CD players introduced several new forms of distortion, and some of them were unmeasurable. Listener fatigue became common, with many people getting headaches as a result of these poorly created devices. Things like jitter, power supply noise, and noise created by different types of silicon gates and capacitors all produced sonic signatures audible to the human ear. Lots of this couldn't be effectively measured, but it would send you running from the room. Fortunately over the last 12 years or so audio designers have become aware of these new forms of distortion, and they now have devices that can measure them to one degree or another. I can even hear significant differences with various analog valves (i.e.
'tubes') where I cannot measure electrical differences. Another oddity I have found is with vibration control devices. I went to a friend's house and found his amplifiers and DVD players suspended high in the air on top of maple butcher blocks, which sat on top of what looked like a pair of hockey pucks separated by a ball bearing. The maple blocks are supposed to both absorb vibration and avoid electromagnetic interference between components. We did several A/B comparisons with and without each, and it was the little bearings that made a clear and noticeable difference in sound quality. The theory is that high frequency vibrations, which shake the electronic circuits of the amps and CD players, decrease resolution and introduce some form of distortion. Is that true? I have no clue. Do they work? Hell yes they do! I know that my mountain bike's frame was designed to alter the tube circumference and wall thicknesses as a method of dampening vibrations, and there is an improvement over previous generations of bike frames, albeit a subtle one. The reduction in vibrations on the bike can easily be measured, as can the vibrations and electromagnetic interference between A/V equipment. But the vibrational energy is so vanishingly small that it should never make a difference to audio quality.

Then there are the environmental factors that alter the user's perception of events. Yeah, drugs and alcohol would be an example, but sticking to my HiFi theme: a creme that makes your iPod sound better – by creating a positive impression with the user. Which again borders on the absurd. An unknown phenomenon, or snake oil? Sometimes it's tough to tell superstition from science.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's Dark Reading paper on User Activity Monitoring.
  • Rich's excellent Macworld article on the Flashback malware.
  • Adrian's Dark Reading post on reverse database proxies.

Favorite Securosis Posts

  • Adrian Lane: The Myth of the Security-Smug Mac User. We get so many 'news' items, like how Android will capture the tablet market in 2015, or how Apple's market share of smartphones is dwindling, or how smug Apple users will get their 'comeuppance' for rejecting AV solutions, that you wonder who's coming up with this crap. Mac users may not have faith in AV to keep them secure, but they know eventually Macs will be targeted just as Windows has been. And I'm fairly certain most hackers run on


Incite 4/11/2012: Exchanging Problems

I figured an afternoon flight to the midwest would be reasonably peaceful. I was wrong. Things started on the wrong foot when I got an email notification from Delta that the flight was delayed, even though it wasn't. The resulting OJ sprint through the terminal to make the flight was agitating. Then the tons of screaming kids on the flight didn't help matters. I'm thankful for noise-isolating headphones, that's for sure. But seeing the parents walking their kids up and down the aisle and dealing with the pain of ascent and descent on the kids' eardrums got me thinking about my own situation.

As I mentioned, I was in Italy last week teaching our CCSK course, but the Boss took the kids up north for spring break to visit family. She flew with all of the kids by herself. 5 years ago that never would have happened. We actually didn't fly as a family for years because it was just too hard. With infant/toddler twins and one three years older, the pain of getting all the crap through the airport and dealing with security and car seats and all the other misery just wasn't worth it. It was much easier to drive, and for anything less than 6-7 hours it was probably faster to load up the van.

The Boss had no problems on the flight. The kids had their iOS devices and watched movies, played games, ate peanuts, enjoyed soda, and basically didn't give her a hard time at all. They know how to equalize their ears, so the pain wasn't an issue, and they took advantage of the endless supply of gum they can chew on a flight. So that problem isn't really a problem any more. As long as they don't go on walkabout through the terminal, it's all good. But it doesn't mean we haven't exchanged one problem for another.

XX1 has entered the tween phase. Between the hormonally driven attitude and her general perspective that she knows everything (yeah, like every other kid), sometimes I long for the days of diapers. At least then I didn't have a kid challenging stuff I learned the hard way decades ago. And the twins have their own issues, as they deal with friend drama and the typical crap around staying focused. When I see harried parents with multiples, sometimes I walk up and tell them it gets easier. I probably shouldn't lie to them like that. It's not easier, it's just different. You constantly exchange one problem for another. Soon enough XX1 will be driving, and that will create all sorts of other issues. And then they'll be off to college and the rest of their lives. So as challenging as it is sometimes, I try to enjoy the angst and keep it all in perspective. If life was easy, what fun would it be?

-Mike

Photo credits: "Problems are Opportunities" originally uploaded by Donna Grayson

Heavy Research

We're back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all of our content in its unabridged glory.

  • Vulnerability Management Evolution: Scanning the Infrastructure; Scanning the Application Layer
  • Watching the Watchers (Privileged User Management): Enforce Entitlements; Monitor Privileged Users
  • Understanding and Selecting DSP: Extended Features; Administration
  • Malware Analysis Quant: Index of Posts

Incite 4 U

Geer on application security – no silent failures: Honestly, it's pointless to try to summarize anything Dan Geer says. A summary misses the point. It misses the art of his words.
And you'd miss priceless quotes like "There's no government like no government," and, regarding data loss, "if I steal your data, then you still have them, unlike when I steal your underpants." Brilliant. Just brilliant. So read this transcript of Dan's keynote at AppSecDC and be thankful Dan is generous enough to post his public talks. Let me leave you with my main takeaway from Dan's talk: "In a sense, our longstanding wish to be taken seriously has come; we will soon reflect on whether we really wanted that." This is an opportunity to learn from a guy who has seen it all in security. Literally. Don't squander it. Take the 15 minutes and read the talk. – MR

AppSec trio: Fergal Glynn of Veracode has started A CISO's Guide to Application Security, a series on Threatpost. And it's off to a good start, packed with a lot of good information, but the 'components' are all blending together. Secure software development, secure operations, and a software assurance program are three different things; and while they go hand in hand if you want a thorough program, it's easier to think about them as three legs of the proverbial stool. Make no mistake, I have implemented secure coding techniques based purely on threat modeling because we had no metrics – or even an idea of what metrics were viable – to do an assurance program. I've worked in finance, with little or no code development, relying purely on operational controls around the pre-deployment and deployment phases of COTS software. At another firm I implemented metrics and risk analysis to inspire the CEO to allow secure code development to happen. So while these things get blurred together under the "application security" umbrella, remember they're three different sets of techniques and processes, with three slightly different – and hopefully cooperating – audiences. – AL

It's the economy, stupid: One of the weirdest things I've realized over my years in the security industry is how much security is about economics and psychology, not about technology. No, I'm not flying off the deep end and ignoring the tech (I'm still a geek, after all), but if you want to make big changes you need to focus on things that affect the economics, not how many times a user clicks on links in email. One great example is the new database the government and cell phone providers are setting up to track stolen phones. Not only will they keep track of the stolen phones, they will make sure they can't be

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless they are used to provide context or contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, those quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote that appears in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free, without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.