
Vulnerability Management Evolution: Evolution or Revolution?

We have discussed the evolution of vulnerability management from a tactical tool to a much more strategic platform providing decision support, so folks can more effectively prioritize security operations and resource allocation. But some vendors may not manage to broaden their platforms sufficiently to remain competitive and fully satisfy their customers' requirements. So at some point you may face a replacement decision – or to put it more kindly, a decision of evolution or revolution – for your vulnerability/threat management platform. Last year we researched whether to replace your SIEM/Log Management platform. That research provides an in-depth process for revisiting your requirements, re-evaluating your existing tool, deciding whether to replace it, negotiating the deal, and migrating to the new platform. If and when you face a similar decision regarding your vulnerability management platform the process will be largely the same, so check out that research for detail on the replacement process. The difference is that, unlike SIEM platforms, most organizations are not totally unhappy with their current vulnerability tools. In most cases a revolution decision results from the need for additional capabilities available with a competing platform, rather than because the existing tool simply cannot be made to work.

The Replacement Decision

Let's start with the obvious: you aren't really making a decision on the vulnerability management offering – it's more of a recommendation. The final decision will likely be made in the executive suite. That's why your process focuses initially on gathering data (quantitative when possible) – because you will need to defend your recommendation until the purchase order is signed. And probably afterwards, especially if a large 'strategic' vendor currently provides your VM scanner. This decision generally isn't about technical facts – especially because there is an incumbent in play, which may be from a big company with important relationships with heavies in your shop. So to make any change you will need all your ducks in a row and a compelling argument. And even then you might not be able to push through a full replacement. In that case the best answer may be to supplement: you still scan with the existing tool, but handle the value-add capabilities (web app scanning, attack path analysis, etc.) on the new platform.

The replacement decision can really be broken into a few discrete steps:

• Introspection: Start by revisiting your requirements, both short and long term. Be particularly sensitive to how your adversaries' tactics are changing. Unfortunately we still haven't found a vendor of reliable crystal balls, but think about how your infrastructure is provisioned and will be provisioned (cloud computing). What will your applications look like, and who will manage them (SaaS)? How will you interact with your business partners? Most important, be honest about what you really need. Make a clear distinction between stuff you must have and stuff that would be nice to have. Everything looks shiny on a marketing spec sheet; that doesn't mean you'll really use those capabilities.
• Current Tool Assessment: Does your current product meet your needs? Be careful to keep emotion out of your analysis – most folks get pissed with their existing vendors from time to time. Do some research into the roadmap of your current vendor. Will they support the capabilities you need in the future? If so, when? Do you believe them? Don't be too skeptical, but if a vendor has a poor track record of shipping new functionality, factor that in.
• Alternatives and Substitutions: You should also survey the industry landscape to learn about other offerings that might meet your needs. It's okay to start gathering information from vendors – if a vendor can't convince you their platform will do what you need, they have no shot at actually solving your problem. But don't stop with vendors. Talk to other folks using the product. Talk to resellers and other third parties who can provide a more objective perspective on the technology. Do your due diligence, because if you push for a revolution it will be your fault if it doesn't meet expectations.
• Evaluate the Economics: Now that you know which vendors could meet your requirements, what would it cost to get there? How much to buy the software, or is it a service? How does that compare to your current offering? What kind of concessions can you get from the new player to get in the door, and what will the incumbent do to keep your business? Don't make the mistake of only evaluating the acquisition cost. Factor in training, integration, and support costs. And understand that you may need to run both offerings in parallel during a migration period, just to make sure you don't leave a gap in assessment.
• Document and Sell: At this point your decision will be clear – at least to you. But you'll need to document what you want to do and why, especially if it involves bringing in another vendor. Depending on the political situation, consensus might be required among the folks affected by the decision. And don't be surprised by pushback if you decide on replacement. You never know who plays golf with whom, or what other big deals are on the table that could have an impact on your project.

And ultimately understand that you may not get what you want. It's an unfortunate reality of life in the big city. Sometimes decisions don't go your way – no matter how airtight your case is. That's why we said earlier that you are really only making a recommendation. Many different factors go into a replacement decision for a key technology, and most of them are beyond your control. If your decision is to stay put and evolve the capabilities of your tool into the platform you need, then map out a plan to get there. When will you add the new features? Then you can map out your budgets and funding requests, and work through
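To put some structure around the "Evaluate the Economics" step, here is a minimal sketch (Python) of the kind of multi-year cost comparison worth assembling before you take your recommendation upstairs. The cost categories come from the discussion above; the class, field names, and dollar figures are entirely hypothetical placeholders for your own quotes and internal estimates.

```python
# Hypothetical cost model for comparing an incumbent VM tool against a
# challenger platform. All figures and categories are placeholders --
# substitute your own quotes and internal estimates.

from dataclasses import dataclass

@dataclass
class PlatformCosts:
    name: str
    acquisition: float       # license purchase or first-year subscription
    annual_support: float    # maintenance / subscription renewal
    training: float          # one-time staff training
    integration: float       # one-time integration with existing tools
    parallel_run: float = 0  # running old and new side by side during migration

    def total(self, years: int = 3) -> float:
        one_time = self.acquisition + self.training + self.integration + self.parallel_run
        return one_time + self.annual_support * years

incumbent = PlatformCosts("Incumbent scanner", acquisition=0, annual_support=40_000,
                          training=0, integration=0)
challenger = PlatformCosts("Challenger platform", acquisition=60_000, annual_support=30_000,
                           training=10_000, integration=15_000, parallel_run=8_000)

for p in (incumbent, challenger):
    print(f"{p.name}: 3-year cost ${p.total(3):,.0f}")
```

The point isn't the arithmetic – it's forcing every cost, including the parallel-run period, onto the table before the negotiation starts.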


[New White Paper] Watching the Watchers: Guarding the Keys to the Kingdom

Given the general focus of most organizations on external attackers, they may miss the attackers who actually have the credentials and knowledge to do some real damage. These are your so-called privileged users, and far too many organizations don't do much to protect themselves from an attack originating in that community. By the way, this doesn't necessarily require a malicious insider. It's entirely possible (if not probable) that a privileged user's device gets compromised, giving the attacker access to the administrator's credentials. Right, that's a bad day. Thus we've written a paper called Watching the Watchers: Guarding the Keys to the Kingdom to describe the problem and offer some ideas on solutions. A compromised P-user can cause all sorts of damage, and so needs to be actively managed.

Now let's talk about solutions. Most analysts favor models to describe things, and we call ours the Privileged User Lifecycle. But pretty as the lifecycle diagram is, first let's scope it to define beginning and ending points. Our lifecycle starts when the privileged user receives escalated privileges, and ends when they are no longer privileged or leave the organization, whichever comes first.

We would like to thank Xceedium for sponsoring the research. Check it out – we think it's a great overview of an issue facing every organization. At least those with administrators.

Download Watching the Watchers: Guarding the Keys to the Kingdom

The paper is based on the following posts:
• Keys to the Kingdom (Introduction)
• The Privileged User Lifecycle
• Restrict Access
• Protect Credentials
• Enforce Entitlements
• Monitor Privileged Users
• Clouds Rolling In
• Integration


Friday Summary, TSA Edition: April 26, 2012

Rich here. I'm writing this from an airport, so I will eschew my normal 'personal' intro and spend a little time on our favorite security show: Airport Screening Follies. (But before I do that, go buy Motherless Children by Dennis Fisher. Dennis is an actual writer, and despite him screwing up an EMT reference it's a great book (so far… nearly halfway through)).

It's easy to knock the TSA. But like kicking a puppy, it's also far from satisfying. And while it's also easy to criticize specific screening techniques, it might be more useful to understand them. Because if we really want our airport traveling experience to change, we need to attack the economics and stop wasting our time focusing on the value of particular security controls, or the failings of a small percentage of the workforce. If we look at the TSA, there are really three levels of people involved (not counting the public): policymakers (politicians), TSA executives (and high-level appointees), and TSA staff. Let's take a moment to look at the dynamics at each level.

Politicians only care about being reelected, and don't want any responsibility for their actions. To them the risk of changing the TSA is that, on the off chance something bad happens, they will be excoriated (worst case: not re-elected). The reward for actually changing TSA practices is low, while the reward for posturing is high. In other words: if a politician implements a reduction in security and something bad happens, they are likely to be held responsible even if it's a coincidence; but proposing bills that don't pass, loudly demanding tighter security (even if their demands are meaningless), and spending time complaining to the press all help them get reelected. So they all talk a lot without doing anything useful.

TSA execs – the high-level decision-makers – face the same risks as politicians. Drop a single pointless security 'control', and when the next event happens they will be stoned by politicians, press, and the public. There is no cost to them for implementing more security theater, but there is a high risk from removing anything. It's not an evil mindset, and not one they are necessarily conscious of, but the sad truth is that it is at least as important for them to look like they are doing something to address every potential visible risk as to actually stop an attack or improve transportation.

TSA staff mostly just want to keep their jobs. One important way to do that is to buy into the security theater. They also want to feel good about their work, so like an AV vendor hyping Mac malware, they believe that even low-value security is important – it's what they do, day to day. I don't mean this in an insulting way. There is actually a lot of value in screening, although certain TSA technologies and practices are basically pointless. When you are in the trenches, it is often hard to divest yourself emotionally and to understand the differences objectively. I'm fairly certain that many of our fine readers enforce plenty of IT security theater (especially when it comes to passwords), so you all know what I mean. As a guy who used to hand-search thousands of concert and football attendees, I get it.

What about the flying public? The only thing we can control is the political environment, and if we aren't going to hold our elected officials responsible for their economic foibles, we certainly aren't going to vote based on who will change the TSA. So our politicians really have nothing vested in reducing security theater.
We have executives and appointees who see only a downside to reducing it, because public complaints don't really affect them. And they are motivated to double down when challenged, to seem 'decisive' and knowledgeable. Last we have the staffers, who just want to keep their jobs and go home without feeling like asses. It's all risk/reward, and the odds certainly do not favor the flying public. Until the political climate for security theater becomes untenable nothing will change. And that won't happen as long as we have 24-hour news channels and talk radio. Oh – and this all applies to CISPA, and whatever else is pissing you off today.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Adrian's paper on User Activity Monitoring.

Favorite Securosis Posts
• Adrian Lane: Vulnerability Management Evolution: Value-Add Technologies. This is the type of graphic we need more of.
• Mike Rothman: Understanding and Selecting DSP: Use Cases. In case some of the theory behind DSP wasn't clear, these use cases should clarify things. This was a great series.
• Rich: Mike's Privileged User Management paper – this is heating up.

Other Securosis Posts
• Incite 4/25/2012: Drafty Draft.
• Watching the Watchers: Integration.
• Vulnerability Management Evolution: Core Technologies.
• Vulnerability Management Evolution: Value-Add Technologies.
• Vulnerability Management Evolution: Enterprise Features and Integration.

Favorite Outside Posts
• Mike Rothman: Motherless Children (buy it now!). Our friend Dennis Fisher published a novel. You can buy it on the Kindle, and within a week or so you'll be able to buy a paperback version. I'm getting my copy this weekend. You should too.
• Mike Rothman: The Mystery of the Flying Laptop. We all get security theater. Nice to see a mass market pub lampoon the idiocy of flying with electronics in the US.
• Rich: Bill Brenner on the TSA – tying into my intro.

Research Reports and Presentations
• Watching the Watchers: Guarding the Keys to the Kingdom.
• Network-Based Malware Detection: Filling the Gaps of AV.
• Tokenization Guidance Analysis: Jan 2012.
• Applied Network Security Analysis: Moving from Data to Information.
• Tokenization Guidance.
• Security Management 2.0: Time to Replace Your SIEM?
• Fact-Based Network Security: Metrics and the Pursuit of Prioritization.

Top News and Posts
• Mozilla Weighing Opt-In Requirement for Web Plugins. This is already available, if you use the Add-on tool to keep all this stuff turned off.
• US and China conduct cyber-war games.
• Hotmail Password Reset Bug Exploited in Wild.
• Critical 0day in Oracle.
• Backdoor


Incite 4/25/2012: Drafty Draft

It feels like Bizarro World to me. I woke up this morning freezing my backside off. We turned off the heat a few weeks ago and it was something like 65 this morning. Outside it was in the 40s, at the end of April. WTF? And the Northeast has snow. WTF? I had to bust out my sweatshirts, which I had hoped to shelve for the season. Again, WTF?

But even a draft of cold weather can't undermine my optimism this week. Why? Because it's NFL Draft time. That's right, I made it through the dark time between the start of free agency and the Draft. You know it's a slow time – I have been watching baseball and even turned on a hockey game. But the drought is over. Now it's time to see who goes where. And to keep a scorecard of how wrong all the pundits are in their mock drafts.

Here's another thing I learned. There are pundits in every business, and the Internet seems to have enabled a whole mess of people to make their livings as pundits. If you follow the NFL you are probably familiar with Mel Kiper, Jr. (and his awesome hair) and Todd McShay, who man the draft desk at ESPN. They almost always disagree, which is entertaining. And Mike Mayock of NFL Network provides great analysis. They get most of the visibility this week, but through the magic of the Twitter I have learned that lots of other folks write for web sites, some big and most small, and seem to follow the NFL as their main occupation. Wait, what? I try not to let my envy gene take over, but come on, man! I say I have a dream job and that I work with smart people doing what I really like. But let's be honest here – what rabid football fan wouldn't rather be talking football all day, every day? And make a living doing it.

But here's the issue. I don't really know anything about football. I didn't play organized football growing up, as my Mom didn't think fat Jewish kids were cut out for football. And rolling over neighborhood kids probably doesn't make me qualified to comment on explosiveness, change of direction, or fluid hips. I know very little about Xs and Os. Actually, I just learned that an offensive lineman with short arms can't play left tackle, as speed rushers would get around him almost every time. Who knew? But I keep wondering if my lack of formal training should deter me. I mean, if we make an analogy to the security business, we have a ton of folks who have never done anything starting up blogs and tweeting. Even better, some of them are hired by the big analyst firms and paraded in front of clients who have to make real decisions and spend real money based on feedback from some punk. To be fair, there was a time in my career when I was that punk, so I should know. 20 years later I can only laugh and hope I didn't cost my clients too much money. Maybe I should pull a Robin Sage on the NFL information machine. That would be kind of cool, eh? Worst case it works and I'll have a great Black Hat presentation. –Mike

Photo credits: "Windy" originally uploaded by Seth Mazow

Heavy Research

We're back at work on a variety of our blog series, so here is a list of the research currently underway. Remember our Heavy RSS Feed, where you can access all our content in its unabridged glory.
• Vulnerability Management Evolution: Core Technologies; Value-Add Technologies; Enterprise Features and Integration
• Watching the Watchers (Privileged User Management): Clouds Rolling In; Integration
• Understanding and Selecting DSP: Use Cases
• Malware Analysis Quant: Index of Posts

Incite 4 U

Don't go out without your raincoat: I tip my hat to the folks at Sophos.
To figure out a way to compare the infection rate of Chlamydia to the prevalence of Mac malware is totally evil genius. That stat really resonates with me, and wasn't a good thing for some of my buddies at school. So do 20% of Macs really have malware? Not exactly – they include the presence of Windows malware, which obviously doesn't do much harm on Macs. Only 1 in 36 had actual Mac malware, and I'm sure a bunch of those were Flashback users who downloaded AV only after being infected. Though I guess the malware could spread to PCs via VMs and other unsafe computing practices. Of course the Sophos guys never miss an opportunity to make an impassioned plea for Mac AV, especially since it's free. Reminds me of something my Dad said when I came of age. He told me never to go out without my raincoat on. He was right – just ask my fraternity brothers. I wonder if "The Trojan Man for Mac" would work as the new Sophos tagline? – MR

Killer apps: Will (Mobile) Apps Kill Websites? is Jeff Atwood's question, and one I have been mulling over for the last few months. All Jeff's points are spot-on: well-designed apps provide a kick-ass user experience that few web sites can rival. Fast, simple, and tailored for the environment, they are often just better. And considering that mobile devices will outnumber desktops 10:1 in the future, replacement is not hard to imagine. But Jeff's list of disadvantages should contain a few security issues as well. Namely, none of the protections I use with my desktop browser (NoScript, Ghostery, Flashblock, Adblock, etc.) are available on mobile platforms. Nor do we have fine-grained control over what apps can do, and we cannot currently run outbound firewalls to make sure websites aren't secretly transmitting our data. Mobile platforms generally offer really good built-in security, but in practice it is gradually becoming harder to protect – and sandbox – apps, similar to challenges we have already faced with desktop browsers. It looks like we get to play security catch-up


Vulnerability Management Evolution: Enterprise Features and Integration

We're in the home stretch of the Vulnerability Management Evolution research project. After talking mostly about the transition from an audit-centric tactical tool to a much more strategic platform providing security decision support, it is now time to look critically at what's required to make the platform work in your enterprise. That means providing built-in tools to help manage your vulnerability management program, as well as supporting integration with existing security and IT management tools. Remember, it is very rare to have an opportunity to start fresh in a green field. So whether you select a new platform or stay with your incumbent provider, as you add functionality you'll need to play nicely in your existing sandbox.

Managing the Vulnerability Management Program

We have been around way too long to actually believe that any tool (or toolset) can ever entirely solve any problem, so our research tends to focus on implementing programs to address problems rather than selecting products. Vulnerability management is no different, so let's list what you need to actually manage the program internally. First you need basic information before you can attempt any kind of prioritization. That has really been the focus of the research to date: taking tactical scans and configuration assessments of the infrastructure and application layers, combining them with perceived asset value and the value-add technologies we discussed in the last post, and running some analytics to provide usable information. But the fun begins once you have an idea of what needs to be fixed and the relative priorities.

Dashboards

Given the rate of change in today's organizations, wading through a 200-page vulnerability report or doing manual differential comparisons of configuration files isn't efficient or scalable. Add in cloud computing and everything happens even faster, making automation critical to security operations. You need the ability to take information and visualize it in ways that make sense for a variety of constituencies. You need an Executive View, providing a high-level view of current security posture and other important executive-level metrics. You need an operational view to help guide the security team on what they need to do. And you can probably use views for application-specific vulnerabilities, and perhaps infrastructure and database visuals for those folks. Basically you need the flexibility to design an appropriate dashboard/interface for any staffer needing to access the platform's information. Most vendors ship with a bunch of out-of-the-box options, but more importantly make sure they offer a user-friendly capability to easily customize the interface to what each staffer needs.

Workflow

Unless your IT shop is a one-man (or one-woman) band, some level of communication is required to keep everything straight. With a small enough team a daily coffee discussion might suffice. But that doesn't scale, so the vulnerability/threat management platform should include the ability to open 'tickets', or whatever you call them, to get work done. It certainly doesn't need to include a full-blown trouble ticket system, but this capability comes in handy if you don't have an existing support/help desk system. As a base level of functionality, look for the ability to do simple ticket routing, approval/authorization, and indicating work has been done (closing tickets). Obviously you'll want extensive reporting on tickets, and the ability to give specific staff members lists of the things they should be doing. Straightforward stuff. Don't forget that any program needs checks and balances, so an integral part of the workflow capability must be enforcement of proper separation of duties, to ensure no one individual has too much control over your environment. That means proper authorization before making changes or remediating issues, and a proper audit trail for everything administrators do with the platform.
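As a rough illustration of that separation-of-duties idea, here is a minimal sketch (Python) of a remediation ticket that records an audit trail and refuses to let the same person both approve and close the work. The class, roles, and field names are hypothetical – real platforms implement this inside their own workflow engines.

```python
# Minimal remediation-ticket model enforcing a simple separation-of-duties
# rule. Roles, states, and field names are illustrative only.

class SeparationOfDutiesError(Exception):
    pass

class RemediationTicket:
    def __init__(self, ticket_id: str, finding: str, assignee: str):
        self.ticket_id = ticket_id
        self.finding = finding
        self.assignee = assignee
        self.approved_by = None
        self.closed_by = None
        self.audit_trail = []                      # every action is recorded

    def _log(self, actor: str, action: str):
        self.audit_trail.append((actor, action))

    def approve(self, approver: str):
        if approver == self.assignee:
            raise SeparationOfDutiesError("assignee cannot approve their own work")
        self.approved_by = approver
        self._log(approver, "approved")

    def close(self, closer: str):
        if self.approved_by is None:
            raise SeparationOfDutiesError("ticket must be approved before closing")
        if closer == self.approved_by:
            raise SeparationOfDutiesError("approver cannot also close the ticket")
        self.closed_by = closer
        self._log(closer, "closed")

ticket = RemediationTicket("VM-1042", "Missing critical patch on web01", assignee="alice")
ticket.approve("bob")     # ok: bob did not do the work
ticket.close("alice")     # ok: alice remediated, bob approved
print(ticket.audit_trail)
```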
Compliance Reporting

Finally you need to substantiate your controls for the inevitable audits, which means your platform needs to generate documentation to satisfy the auditor's appetite for information. Okay, it won't totally satisfy the auditor (as if that were even possible), but it should at least provide a good perspective on what you do and how well it works, with artifacts to prove it. Since most audits break down to some kind of checklist you need to follow, having those lists enumerated in the vulnerability management platform is important and saves a bunch of time. You don't want to be mapping reports on firewall configurations to PCI Requirement 1 – the tool should do that out of the box. Make sure whatever you choose offers the reports you need for the mandates you are subject to.

But reporting shouldn't end when the auditor goes away. You should also use the reports to keep everyone operationally honest. That means reports showing similar information to the dashboards we outlined above. You'll want your senior folks to get periodic reports on open vulnerabilities and configuration problems, newly opened attack paths, and systems that can be exploited by the pen test tool. Similarly, operational folks might get reports of their overdue tasks, or efficiency reports showing how quickly they remediate their assigned vulnerabilities. Again, look for customization – everyone seems to want the information in their own format. Dashboards and reporting are really the yin and yang of managing any security-oriented program, so make sure the platform provides the flexibility to display and disseminate information however you need it.

Enterprise Integration

As we mentioned, in today's technology environment nothing stands alone, so when looking at this evolved vulnerability management platform, how well it integrates with what you already have is a strong consideration. But you have a lot of stuff, right? So let's prioritize integration a bit.

Patch/Config Management: In the value-add technologies piece we speculated a bit on the future evolution of common platforms for vulnerability/threat and configuration/patch management. As hinted there, tight integration between these two functions is critical. You will probably hear the term vulnerability validation to describe this integration, but it basically means closing the loop between assessment and remediation. So when an issue is identified by the VM platform, the fix is made (presumably by the patch/config tool) and
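Returning to the compliance reporting discussion above: the mapping of individual checks to mandate requirements (firewall configuration reports to PCI Requirement 1, and so on) is essentially a lookup table the platform should ship and maintain for you. A trivial sketch of that idea, with abbreviated, hypothetical check names and requirement references:

```python
# Illustrative mapping of assessment checks to compliance requirements.
# Check names and requirement references are abbreviated placeholders;
# a real platform ships and maintains these mappings out of the box.

CHECK_TO_REQUIREMENT = {
    "firewall_config_review":     ["PCI DSS Req 1"],
    "default_passwords_changed":  ["PCI DSS Req 2"],
    "quarterly_internal_scan":    ["PCI DSS Req 11"],
}

scan_results = {  # hypothetical pass/fail output from an assessment run
    "firewall_config_review": "fail",
    "default_passwords_changed": "pass",
    "quarterly_internal_scan": "pass",
}

def compliance_report(results: dict) -> dict:
    """Group check results under the requirement they substantiate."""
    report = {}
    for check, status in results.items():
        for req in CHECK_TO_REQUIREMENT.get(check, ["unmapped"]):
            report.setdefault(req, []).append((check, status))
    return report

for requirement, checks in sorted(compliance_report(scan_results).items()):
    print(requirement, checks)
```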


Vulnerability Management Evolution: Value-Add Technologies

So far we have talked about scanning infrastructure and the application layer, before jumping into some technology decisions you face, such as how to deal with cloud delivery and agents. But as much as these capabilities increase the value of the vulnerability management system, they still aren't enough to really focus security efforts and prioritize the hundreds (if not thousands) of vulnerabilities or configuration problems you'll find. So let's look at a few emerging capabilities that make the information gleaned from scans and assessments more useful for the operational decisions you make every day. These capabilities are not common to all the leading vulnerability management offerings today, but we expect most (if not all) to become core capabilities of these platforms in some way over the next 2-3 years, so watch for increasing M&A and technology integration around these functions.

Attack Path Analysis

If no one hears a tree fall in the woods, has it really fallen? The same question can be asked about a vulnerable system. If an attacker can't get to the vulnerable device, is it really vulnerable? The answer is yes, it's still vulnerable, but clearly less urgent to remediate. So tracking which assets are accessible to a variety of potential attackers becomes critical for an evolved vulnerability management platform. Typically this analysis is based on ingesting firewall rule sets and router/switch configuration files. With some advanced analytics, the tool determines whether an attacker could (theoretically) reach the vulnerable devices (a simplified sketch of this reachability analysis appears at the end of this post). This adds a critical third leg to the "oh crap, I need to fix it now" decision process.

Obviously most enterprises have fairly complicated networks, which means an attack path analysis tool must be able to crunch a huge amount of data to work through all the permutations and combinations of possible paths to each asset. You should also look for native support of the devices (firewalls, routers, switches, etc.) you use, so you don't have to do a bunch of manual data entry – given the frequency of change in most environments, manual entry is a complete non-starter. Finally, make sure the visualization and reports on paths present the information in a way you can use.

By the way, attack path analysis tools are not really new. They have existed for a long time, but never achieved broad market adoption. As you know, we're big fans of Mr. Market, which means we need to get critical for a moment and ask what's different now that would enable the market to develop. First, integration with the vulnerability/threat management platforms makes this information part of the threat management cycle rather than a stand-alone function, and that's critical. Second, current tools can finally offer analysis and visualization at enterprise scale. So we expect this technology to be a key part of the platforms sooner rather than later; we already see some early technical integration deals and expect more.

Automated Pen Testing

Another key question raised by a long vulnerability report needs to be, "Can you exploit the vulnerability?" Like a vulnerable asset without a clear attack path, if a vulnerability cannot be exploited – thanks to some other control or the lack of a weaponized exploit – remediation becomes less urgent. For example, perhaps you have a HIPS product deployed on a sensitive server that blocks attacks against a known vulnerability. Obviously your basic vulnerability scanner cannot detect that, so the vulnerability will be reported just as urgently as every other one on your list. Having the ability to actually run exploits against vulnerable devices as part of a security assurance process can provide perspective on what is really at risk, versus just theoretically vulnerable. In an integrated scenario a discovered vulnerability can be tested for exploitability immediately, to either shorten the window of exposure or provide immediate reassurance. Of course there is risk with this approach, including the possibility of taking down production devices, so use pen testing tools with care. But to really know what can be exploited and what can't, you need to use live ammunition. And be sure to use fully vetted, professionally supported exploit code. You should have a real quality assurance process behind the exploits you try. It's cool to have an open source exploit, and on a test/analysis network using less stable code is fine. But you probably don't want to launch an untested exploit against a production system. Not if you like your job, anyway.

Compliance Automation

In the rush to get back to our security roots, many folks have forgotten that the auditor is still going to show up every month/quarter/year, and you need to be ready. That process burns resources that could otherwise be used on more strategic efforts, just like everything else. Vulnerability scanning is a critical part of every compliance mandate, so scanners have pumped out PCI, HIPAA, SOX, NERC/FERC, etc. reports for years. But that's only the first step in compliance automation. Auditors need plenty of other data to determine whether your control set is sufficient to satisfy the regulation. That includes things like configuration files, log records, and self-assessment questionnaires. We expect to see increasingly robust compliance automation in these platforms over time. That means a workflow engine to help you manage getting ready for your assessment. It means a flexible integration model to allow storage of additional unstructured data in the system. The goal is to ensure that when the auditor shows up your folks have already assembled all the data they need and can easily access it. The easier that is, the sooner the auditor will go away and let your folks get back to work.

Patch/Configuration Management

Finally, you don't have to stretch to see the value of broader configuration and/or patch management capabilities within the vulnerability management platform. You are already finding what's wrong (either vulnerable or improperly configured), so why not just fix it? Clearly there is plenty of overlap with existing configuration and patching tools, and you could just as easily make the case that those tools can and should add vulnerability management
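To make the attack path analysis described earlier in this post a bit more concrete, here is the simplified sketch promised above (Python, with a hypothetical network model): given the connections permitted by firewall and router configurations, can an attacker starting on the Internet reach a vulnerable asset? Real products crunch far richer rule sets, but the underlying question is a graph reachability search like this.

```python
# Toy attack path analysis: nodes are network zones/hosts, edges are
# connections permitted by (hypothetical) firewall/router rules.
# Real tools derive this graph from actual device configurations.

from collections import deque

allowed = {                       # directed edges: source -> reachable destinations
    "internet":   ["dmz-web"],
    "dmz-web":    ["app-server"],
    "app-server": ["db-server"],
    "corp-lan":   ["app-server", "db-server"],
}

vulnerable_assets = {"db-server", "print-server"}

def reachable_from(start: str) -> set:
    """Breadth-first search over permitted connections."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in allowed.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

exposed = vulnerable_assets & reachable_from("internet")
print("Vulnerable and reachable from the Internet:", exposed or "none")
# db-server is exposed via internet -> dmz-web -> app-server -> db-server;
# print-server is vulnerable but has no path from the Internet, so it is a
# lower remediation priority.
```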


Watching the Watchers: Integration

As we wrap up Watching the Watchers, it's worth reminding ourselves of the reality of enterprise security today. Nothing stands alone – not in the enterprise management stack anyway – so privileged user management functions need to play nicely with the other management tools. There are levels of integration required: some functions need to be attached at the hip, while others can be mere acquaintances.

Identity Integration

Given that the 'U' in PUM stands for user, clearly identity infrastructure is one of the categories that needs to be tightly coupled. What does that mean? We described the provisioning/entitlements requirement in the Privileged User Lifecycle. But identity is a discipline in itself, so we cannot cover it in real depth in this series. In terms of integration, your PUM environment needs to natively support your enterprise directory. It doesn't really work to have multiple authoritative sources for users. Privileged users are, by definition, a subset of the user base, so they reside in the main user directory. This is critical, for both provisioning new users and deprovisioning those who no longer need specific entitlements. Again, the PUM Lifecycle needs to enforce entitlements, but the groupings of administrators are stored in the enterprise directory.

Another requirement for identity integration is support for two-factor authentication. PUM protects the keys to the kingdom, so if a proxy gateway is part of your PUM installation, it's essential to ensure a connecting privileged user is actually the real user. That requires some kind of multiple-factor authentication to protect against an administrator's device being compromised, and an attacker thereby gaining access to the PUM console. That would be a bad day. We don't have any favorites in terms of stronger authentication methods, though we note that most organizations opt for tried-and-true hard tokens.

Management Infrastructure

Another area of integration is the enterprise IT management stack. You know, the tools that manage data center and network operations. This may include configuration, patching, and performance management. The integration is mostly about pushing alerts to an ops console. For instance, if the PUM portal is under a brute force password attack, you probably want to notify the ops folks to investigate. The PUM infrastructure also consists of devices, so there will be some device health information that could be useful to ops. If a device goes down or an agent fails, alerts should be sent over to the ops console. Finally, you will want some kind of help desk integration. Some ops tickets may require access to the PUM console, so being able to address a ticket and close it out directly in the PUM environment could streamline operations.

Monitoring Infrastructure

The last area of integration is the monitoring infrastructure. Yes, your SIEM/Log Management platform should be the target for any auditable event in the PUM environment. First of all, a best practice for log management is to isolate the logs on a different device to ensure log records aren't tampered with in the event of a compromise. Frankly, if your PUM proxy is compromised you have bigger problems than log isolation, but you should still exercise care in protecting the integrity of the log files, and perhaps they can help you address those larger issues. Sending events over to the SIEM also helps provide more depth for user activity monitoring. Obviously a key aspect of PUM is privileged user monitoring, but that pertains only when users access server devices with their enhanced privileges. The SIEM watches a much broader slice of activity, which includes access to applications, email, etc. Don't expect to start pumping PUM events into the SIEM and have fairy dust start drifting out of the dashboard. You still need to do the work to add correlation rules that leverage the PUM data, update reports, etc. We discuss the process of managing SIEM rule sets fairly extensively in both our Understanding and Selecting SIEM/Log Management and Monitoring Up the Stack papers. Check them out if you want more detail on that process.
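As a rough example of the plumbing involved, here is a minimal sketch (Python) of forwarding a PUM audit event to a SIEM as a CEF-style syslog message over UDP. The collector address, event fields, and vendor/product strings are hypothetical, and this is not any particular vendor's format – the point is simply that privileged-session events become one more data source, and the correlation rules still have to be built on the SIEM side.

```python
# Forward a privileged-user audit event to a SIEM collector as a CEF-style
# syslog message over UDP. Address, fields, and values are illustrative only.

import socket
from datetime import datetime, timezone

SIEM_HOST, SIEM_PORT = "127.0.0.1", 514      # replace with your SIEM collector

def to_cef(event: dict) -> str:
    # CEF header: CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity|Extensions
    extensions = f"suser={event['admin']} dst={event['target']} act={event['action']}"
    return (f"CEF:0|ExamplePUM|Gateway|1.0|{event['event_id']}|"
            f"{event['name']}|{event['severity']}|{extensions}")

def send_to_siem(event: dict) -> None:
    msg = f"{datetime.now(timezone.utc).isoformat()} {to_cef(event)}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode(), (SIEM_HOST, SIEM_PORT))

send_to_siem({
    "event_id": 4201, "name": "Privileged session opened", "severity": 6,
    "admin": "alice", "target": "db-prod-03", "action": "ssh-session-start",
})
```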
And with that, we wrap up this series. Over the next few weeks we will package up the posts into a white paper and have our trusty editor (the inimitable Chris Pepper) turn this drivel into coherent copy.


Vulnerability Management Evolution: Core Technologies

As we discussed in the last couple of posts, any VM platform must be able to scan infrastructure and scan the application layer. But that's still mostly tactical stuff. Run the scan, get a report, fix stuff (or not), and move on. When we talk about a strategic and evolved vulnerability management platform, the core technology needs to evolve to serve more than merely tactical goals – it must provide a foundation for a number of additional capabilities. Before we jump into the details we will reiterate the key requirements. You need to be able to scan/assess:

• Critical Assets: This includes the key elements in your critical data path; it requires both scanning and configuration assessment/policy checking for applications, databases, server and network devices, etc.
• Scale: Scalability requirements are largely in the eye of the beholder. You want to be sure the platform's deployment architecture will provide timely results without consuming all your network bandwidth.
• Accuracy: You don't have time to mess around, so you don't want a report with 1,000 vulnerabilities, 400 of them false positives. There is no way to totally avoid false positives (aside from not scanning at all), so accuracy is a key selection criterion. Yes, that was pretty obvious.

With a mature technology like vulnerability management, the question is less about what you need to do and more about how – especially when positioning for evolution and advanced capabilities. So let's first dig into the foundation of any kind of strategic platform: the data model.

Integrated Data Model

What's the difference between a tactical scanner and an integrated vulnerability/threat management platform? Data sharing, of course. The platform needs the ability to consume and store more than just scan results. You also need configuration data, third-party and internal research on vulnerabilities, research on attack paths, and a bunch of other data types we will discuss in the next post on advanced technology. Flexibility and extensibility are key for the data schema. Don't get stuck with a rigid schema that won't allow you to add whatever data you need to most effectively prioritize your efforts – whatever data that turns out to be.

Once the data is in the foundation, the next requirement involves analytics. You need to set alerts and thresholds on the data, and be able to correlate disparate information sources to glean perspective and help with decision support. We are focused on more effectively prioritizing security team efforts, so your platform needs analytical capabilities to help turn all that data into useful information.

When you start evaluating specific vendor offerings you may get dragged into a religious discussion of storage approaches and technologies. You know – whether a relational backend, an object store, or even a proprietary flat file system provides the performance, flexibility, etc. to serve as the foundation of your platform. Understand that it really is a religious discussion. Your analysis efforts need to focus on the scale and flexibility of whatever data model underlies the platform. Also pay attention to evolution and migration strategies, especially if you plan to stick with your current vendor as they move to a new platform. This transition is akin to a brain transplant, so make sure the vendor has a clear and well-thought-out path to the new platform and data model. Obviously if your vendor stores their data in the cloud it's not your problem, but don't put the cart before the horse. We will discuss cloud versus customer premises later in this post.
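A minimal illustration of what "flexible and extensible" means in practice: the asset record below (Python, entirely hypothetical fields) keeps a small fixed core plus an open attribute bag, so new data sources – configuration state, attack path results, threat intelligence – can be attached later without a schema change, and then factored into prioritization. This is the property to look for, regardless of how a vendor physically stores the data.

```python
# Sketch of an extensible asset record: a small fixed core plus an open
# attribute map that new data sources can populate without schema changes.
# Field names, values, and the scoring weights are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    hostname: str
    asset_value: int                                  # business value for prioritization
    vulnerabilities: list = field(default_factory=list)
    attributes: dict = field(default_factory=dict)    # config state, attack path data,
                                                      # threat intel, anything else...

web01 = Asset("a-001", "web01.example.internal", asset_value=8)
web01.vulnerabilities.append({"id": "CVE-2012-0002", "severity": "critical"})
web01.attributes["config_baseline"] = "failed 3 policy checks"
web01.attributes["reachable_from_internet"] = True    # fed in by attack path analysis

def priority(asset: Asset) -> int:
    """Toy prioritization combining asset value, severity, and exposure."""
    score = asset.asset_value
    if any(v["severity"] == "critical" for v in asset.vulnerabilities):
        score += 5
    if asset.attributes.get("reachable_from_internet"):
        score += 3
    return score

print(web01.hostname, "priority:", priority(web01))
```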
Discovery

Once you get to platform capabilities, first you need to find out what's in your environment. That means a discovery process to find devices on your network and make sure everything is accounted for. You want to avoid the "oh crap" moment, when a bunch of unknown devices show up – and you have no idea what they are, what they have access to, or whether they are steaming piles of malware. Or at least shorten the window between something showing up on your network and the "oh crap" discovery moment.

There are a number of techniques for discovery, including actively scanning your entire address space for devices and profiling what you find. That works well enough and tends to be the main way vulnerability management offerings handle discovery, so active discovery is still table stakes for VM offerings. You need to balance the network impact of active discovery against the need to quickly find new devices. Also make sure you can search your networks completely, which means both your IPv4 space and your emerging IPv6 environment. Oh, you don't have IPv6? Think again. You'd be surprised at the number of devices that ship with IPv6 active by default, and if you don't plan to discover that address space as well, you'll miss a significant attack surface. You never want to hold up a network deployment while your VM vendor gets their act together.

You can supplement active discovery with a passive capability that monitors network traffic and identifies new devices based on network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities can be identified, but the primary goal of passive monitoring is to find new unmanaged devices faster. Once a new device is identified passively, you could then launch an active scan to figure out what it's doing. Passive discovery is also helpful for devices that use firewalls to block active discovery and vulnerability scanning.

But that's not all – depending on the breadth of your vulnerability/threat management program, you might want to include endpoints and mobile devices in the discovery process. We always want more data, so we are all for discovering every asset in your environment. That said, for determining what's important in your environment (see the asset management/risk scoring section below), endpoints tend to be less important than databases with protected data, so prioritize the effort you expend on discovery and assessment. Finally, another complicating factor for discovery is the cloud. With the ability to spin up and take down instances at will, your platform needs to both track and assess
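Returning to discovery: here is a bare-bones sketch (Python) of the combined active/passive flow described above – a passive sensor flags addresses that aren't in inventory, which then triggers an active scan to shorten the unmanaged window. The scan and traffic functions are stubs standing in for a real scanner engine and packet capture; everything here is illustrative.

```python
# Skeleton of combined active/passive discovery. The scanning and traffic
# observation functions are stubs standing in for a real scanner engine
# and a passive network sensor.

known_assets = {"10.0.1.10", "10.0.1.11"}    # inventory from previous active scans

def active_scan(address: str) -> dict:
    """Stub: profile a device (OS, open ports, vulnerabilities)."""
    return {"address": address, "profile": "unknown-device", "vulns": []}

def observed_sources(packet_stream) -> set:
    """Stub: source addresses seen by a passive sensor (IPv4 and IPv6)."""
    return {pkt["src"] for pkt in packet_stream}

def reconcile(packet_stream) -> list:
    """Flag devices seen on the wire but missing from inventory, then scan them."""
    new_devices = observed_sources(packet_stream) - known_assets
    results = []
    for address in new_devices:
        results.append(active_scan(address))    # shorten the unmanaged window
        known_assets.add(address)
    return results

sample_traffic = [{"src": "10.0.1.10"}, {"src": "10.0.1.42"}, {"src": "fe80::1c2a"}]
for finding in reconcile(sample_traffic):
    print("Newly discovered:", finding["address"])
```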


Incite 4/18/2012: Camión de Calor

It was a Mr. Mom weekend, so I particularly appreciated settling in at the coffee shop on Monday morning and getting some stuff done. And it wasn't just trucking the kids around to their various activities. It was a big weekend for all of us to catch up on work. XX1 has the CRCT standardized test this week, which is a big deal in GA, so there was much prep for that. Both XX2 and the Boy have "How To" presentations in class this week, so they each had to write and practice a presentation. And I had to finish up our taxes and update the Securosis financials. With the Boss in absentia, I was juggling knives trying to get everything done. I look back on an intense but fun weekend.

But when you spend a large block of time with kids, they inevitably surprise you with their interrogation… I mean questions. I was wearing my Hot Truck t-shirt (pictured at right), and the Boy was fascinated. What's a Hot Truck? Is it hot? That was just the beginning of the questioning, so the Boy needed a little context. The Hot Truck is an institution for those who went to Cornell. Basically a guy made French bread pizzas in a truck parked every night right off campus. Conveniently enough the truck parked around the corner from my fraternity house, and it was clearly the preferred late-night meal after a night of hard partying. At any time of year you had folks milling around the truck waiting for their order.

Of course the truck itself was pretty cool. It was basically an old box truck fitted with a pizza oven. The city set up a power outlet right on the street, and he'd drive up at maybe 10pm, plug in, and start cooking. Things didn't get exciting until 1 or 2 in the morning. Then the line would be 10-15 deep, and the money guy would write your order on a paper bag. No name, nothing else. Just your order. Obviously there were plenty of ways to game such a sophisticated system. You could sneak a peek at the list and then say the sandwich was yours when it came up. Then wait until the real owner of the sandwich showed up and tried to figure out what happened while you munched on their food. The truck was there until 4am or so – basically until everyone got served. Over time you got to know Bob (the owner) and he'd let you inside the truck (which was great on those 10-degree winter nights) to chat. You'd get your sandwich made sooner, or could just take one of the unclaimed orders. He must have loved talking to all those drunk fools every night.

But best of all was the shorthand language that emerged from the Hot Truck. You could order the PMP (Poor Man's Pizza), MBC (meatballs and cheese), RoRo (roast beef with mushrooms), or even a Shaggy (a little bit of everything) – named after a fraternity brother of mine. And then you'd put on the extras, like Pep (pepperoni) or G&G (grease the garden – mayo and lettuce). All on a French bread pizza. My favorite was the MBC Pep G&G. Between the Hot Truck and beer it's no wonder I gained a bunch of weight every year at school.

But all things end, and Bob sold the Truck a few years ago. It was bought by a local convenience store, and they still run the truck, as well as serve the sandwiches in their store in downtown Ithaca. It's just not the same experience though – especially since I don't eat meatballs anymore. But the memories of Hot Truck live on, and I even have the t-shirt to prove it. –Mike

Photo credits: "Hot Truck T-Shirt" taken by Mike Rothman

Heavy Research

We're back at work on a variety of our blog series, so here is a list of the research currently underway.
Remember you can get our Heavy Feed via RSS for all our content in its unabridged glory.
• Vulnerability Management Evolution: Scanning the Application Layer
• Watching the Watchers (Privileged User Management): Monitor Privileged Users; Clouds Rolling In
• Understanding and Selecting DSP: Use Cases
• Malware Analysis Quant: Index of Posts

Incite 4 U

Stone cold responders: I recently did a session with a dozen or so CISOs at an IANS Forum, and one of the topics was incident response. I started to talk about the human toll of high-pressure incident response, and got a bunch of blank stares. Of course we dug in, and the bigger companies with dedicated response staff said they staff incident response teams with even-keeled folks. The kind who don't get too excited or depressed or much of anything. Which kind of aligns with Lenny Z's post on the kind of personality that detects security issues early. Seems anxious folks on edge all the time may not have an effective early warning system. Just more evidence that you need the right folks in the right spots for any chance at success. – MR

PCI: Living on borrowed time? Bob Carr of Heartland Payments says "Anyone that thinks they're not going to be breached is naive." This interview, posted just days after Heartland's financial settlement details went public, reinforces the notion that – just as cockroaches would be the only survivors of a nuclear holocaust – only lawyers win in lawsuits. It was expensive for Heartland, and CardSystems Solutions did not survive. Which is topical in light of the Global Payments breach, illustrating the risk to financial companies, at a time when Visa is offering to forgo PCI audits if a majority of merchant transactions originate from EMV terminals. Keep in mind that the breach at Global Payments – or Heartland for that matter – and the fraud enabled by cloning credit cards are totally separate issues. So at a time when merchants and payment processors should be looking more aggressively at security and breach preparedness, as Mr. Carr advocates… Visa is backing off on audits to boost EMV. Some will say this is an exchange for back office security for


Understanding and Selecting DSP: Use Cases

Database Security Platforms are incredibly versatile – offering benefits for security, compliance, and even operations. The following are some classic use cases and ways we often see them used.

Monitoring and assessment for regulatory compliance

Traditionally the biggest driver for purchasing a DAM/DSP product was to assist with compliance, with Sarbanes-Oxley (SOX) almost single-handedly driving the early market. The features were mostly used for compliance in a few particular ways:
• To assess in-scope databases for known security issues and policy compliance. Some regulations require periodic database assessment for security issues, policy (configuration) compliance, or both.
• To assess databases for entitlement issues related to regulatory compliance. While all vulnerability tools can assess database platforms to some degree, no non-database-specific tools can perform credentialed scanning and assessment of user entitlements. This is now often required by certain regulations to ensure users cannot operate outside their designated scope, and to catch issues like users assigned multiple roles which create a conflict of interest. This can be evaluated manually, but it is far more efficient to use a tool if one is available.
• To monitor database administrators. This is often the single largest reason to use a DSP product in a compliance project.
• For comprehensive compliance reports spanning multiple databases and applications. Policy-level reports demonstrate that controls are in place, while other reports provide the audit trail necessary to validate the control. Most tools include such reports for a variety of major regulations, with tailored formats by industry.

Web application security

Almost all web applications are backed by databases, so SQL injection is one of the top three ways to remotely attack them. Web Application Firewalls can block some SQL injection, but a key limitation is that they don't necessarily understand the database they are protecting, and so are prone to false positives and negatives. DSPs provide a similar capability – at least for database attacks – but with detailed knowledge of both the database type and how the application uses it. For example, if a web application typically queries a database for credit card numbers, the DSP tool can generate an alert if the application requests more card numbers than a defined threshold (often 1). A DSP tool with content analysis can do the same thing without the operator having to identify the fields containing credit card numbers. Instead you can set a generic "credit card" policy that alerts any time a credit card is returned in a query to the web application server, as nearly no front-end applications ask for full card numbers anymore – they are typically left to transaction systems instead. We have only scratched the surface of the potential security benefits for web apps. For example, query whitelisting can alert any time new queries or patterns appear. It is increasingly common for attackers to inject or alter stored procedures in order to take control of databases, and stored procedure monitoring picks up attacks that a WAF might miss. Some tools on the market even communicate violations back to a WAF, either for alerting or to terminate suspicious sessions and even block the offending IP address.

Change management

Critical databases go down more often due to poor change management than due to attacks. Unlike application code changes, administrators commonly jump right into production databases and directly manipulate data in ways that can easily cause outages. Adding closed-loop change management supported by DSP reduces the likelihood of a bad change, and provides much deeper accountability – even if shared credentials are used. Every administrator action in the database can be tracked and correlated back to a specific change ticket, with monitoring showing the full log of every SQL command – and often return values as well.

Legacy system and service account support

Many older databases have terrible logging and auditing features that can crush database performance, when they are even available. Such older databases are also likely to include poorly secured service accounts (although we concede that stored plain-text credentials for application accounts are still all too common in general). DSP can generate an audit trail where the database itself does not offer one, and DSP tools tend to support older databases – even those no longer supported by the database vendor. Even modern databases with auditing tend to impose a greater performance impact than DSPs. They can also audit service accounts – generic accounts used by applications to speed up performance – and even alert on unusual activity. This can be especially useful with even a simple rule – such as alerting on any access attempt using service account credentials from anywhere other than the application server's IP address.

And with that, we have wrapped up our series on Database Security Platforms.
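To illustrate the kinds of policies described above, here is a toy sketch (Python) of two DSP-style rules: alert when a query issued by the web application returns more card numbers than a threshold, and alert when service account credentials are used from anywhere other than the application server. The thresholds, account names, addresses, and the crude card-number pattern are all hypothetical placeholders – real products do this with proper content analysis and policy engines.

```python
# Toy versions of two DSP-style monitoring rules described in this post.
# Thresholds, account names, addresses, and the regex are placeholders.

import re

CARD_PATTERN = re.compile(r"\b\d{13,16}\b")         # crude card-number match
CARD_THRESHOLD = 1                                  # front ends rarely need full PANs
SERVICE_ACCOUNTS = {"app_svc": "10.0.2.20"}         # account -> expected source IP

def check_result_set(user: str, rows: list) -> list:
    """Alert if a query result contains more card numbers than allowed."""
    hits = sum(len(CARD_PATTERN.findall(" ".join(map(str, row)))) for row in rows)
    if hits > CARD_THRESHOLD:
        return [f"ALERT: {user} retrieved {hits} card numbers in one query"]
    return []

def check_source(user: str, source_ip: str) -> list:
    """Alert if a service account connects from an unexpected address."""
    expected = SERVICE_ACCOUNTS.get(user)
    if expected and source_ip != expected:
        return [f"ALERT: service account {user} used from {source_ip}"]
    return []

alerts = (check_result_set("app_svc", [["4111111111111111"], ["4000056655665556"]])
          + check_source("app_svc", "192.168.5.77"))
for alert in alerts:
    print(alert)
```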


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.