Securosis Research

React Faster and Better: Initial Incident Data

In New Data for New Attacks we discussed why there is usually too much data early in the process. Then we talked about leveraging the right data to alert and trigger the investigative process. But once the incident response process kicks in, too much data is rarely the problem, so now let's dig deeper into the most useful data for the initial stages of incident response. At this early stage, when we don't yet know what we are dealing with, it's all about triaging the problem. That usually means confirming the issue with additional data sources and helping to isolate the root cause. We assume that at this stage of investigation a relatively unsophisticated analyst is doing the work, so these investigation patterns can and should be somewhat standard and based on common tools. At this point the analyst is trying to figure out what is being attacked, how the attack is happening, how many devices are involved, and ultimately whether (and what kind of) escalation is required. Once you understand the general concept behind the attack, you can dig a lot deeper with cool forensics tools. But at this point we are trying to figure out where to dig. The best way to stage this discussion is to focus on the initial alert, and then on the kinds of data that would validate the issue and provide the what, how, and how many answers we need at this stage. There are plenty of places we might see the first alert, so let's go through each in turn.

Network

If one of your network alerts fires, what then? It becomes all about triangulating the data to pinpoint what devices are in play and what the attack is doing. This kind of process isn't comprehensive, but it should illustrate the kinds of additional data you'd look for and why.

- Attack path: The first thing you'll do is check out the network map and figure out whether there is a geographic or segment focus to the network alerts. Basically you are trying to figure out what is under attack and how. Is this a targeted attack, where only specific addresses are generating the funky network traffic? Or is it reconnaissance that may indicate some kind of worm proliferating? Or is it command and control traffic, which might indicate zombies or persistent attackers?
- Device events/logs/configurations: Once we know what IP addresses are in play, we can dig into those specific devices and figure out what is happening and/or what changed. At this stage of investigation we are looking for obvious stuff. New accounts or executables, or configuration changes, are typical indications of some kind of issue with the device. For the sake of both automation and integrity, this data tends to be centrally stored in one or more system management platforms (SIEM, CMDB, Endpoint Protection Platform, Database Activity Monitor, etc.).
- Egress path and data: Finally, we want to figure out what information is leaving your network and (presumably) going into the hands of the bad guys, and how. While we aren't concerned with a full analysis of every line item, we want a general sense of what's headed out the door and an understanding of how it's being exfiltrated.

Endpoint

The endpoint may alert first if it's some kind of drive-by download or targeted social engineering attack. You can also see this kind of activity when a mobile device does something bad outside your network, then connects to your internal network and wreaks havoc.

- Endpoint logs/configurations: Once you receive an alert that something funky is happening on an endpoint, the first thing you do is investigate the device to figure out what's happening. You are looking for new executables on the device, or a configuration change that indicates a compromise.
- Network traffic: Another place to look when you get an endpoint alert is the network traffic originating from and terminating on the device. Analyzing that traffic can give you an idea of what is being targeted. Is it a back-end data store? Is it other devices? How and where is the device getting instructions? Also be aware of exfiltration activities, which indicate not only a successful compromise, but also a breach. The objective is to profile the attack and understand its objective and tactics.
- Application targets: Likewise, if it's obvious a back-end data store is being targeted, you can look at the transaction stream to decipher what the objective is and how widely the attack has spread. You also need to understand the target to figure out whether and how remediation should occur.

Upper Layers

If the first indication of an attack happens at the application layer (including databases, application servers, DLP, etc.) – which happens more and more, due to the nature of application-oriented attacks – then it's about quickly understanding the degree of compromise and watching for data loss.

- Network traffic: Application attacks are often all about stealing data, so at the network layer you are looking primarily for signs of exfiltration. Secondarily, understanding the attack path will help you discover which devices are compromised, and understand short- and longer-term remediation options.
- Application changes: Is your application functioning normally? Or is the bad guy inserting malware on pages to compromise your customers? While you won't perform a full application assessment at this point, you need to look for key indicators of the bad guy's activities that might not show up through network monitoring.
- Device events/logs/configurations: As with the other scenarios, understanding to what degree the devices involved in the application stack are compromised is important for damage assessment.
- Content monitors: Given the focus of most application attacks on data theft, you'll want to consult your content monitors (DLP, as well as outbound web and email filters) to gauge whether the attack has compromised data, and to what degree. This information is critical for determining the amount of escalation required.

Incident Playbook

Obviously there are infinite combinations of data you can look at to figure out what is going on (and
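The egress analysis described above lends itself to simple automation. Here is a minimal sketch of the idea – the flow tuples, IP addresses, and threshold are invented for illustration, not tied to any particular flow-export format – that sums outbound bytes per internal host and flags anything unusually large:

```python
import ipaddress
from collections import defaultdict

# Hypothetical flow records: (source IP, destination IP, bytes transferred).
# In practice these would come from NetFlow/IPFIX export or full packet capture.
flows = [
    ("10.1.1.20", "10.1.2.5", 120_000),     # internal to internal
    ("10.1.1.20", "8.8.4.4", 950_000_000),  # large transfer to an external host
    ("10.1.1.31", "1.2.3.4", 40_000),       # small external transfer
]

EGRESS_THRESHOLD = 500_000_000  # bytes per analysis window; tune per environment

def is_internal(ip: str) -> bool:
    """Treat RFC 1918 and other non-routable addresses as internal."""
    return ipaddress.ip_address(ip).is_private

def flag_egress(flows, threshold=EGRESS_THRESHOLD):
    """Sum outbound bytes per internal host; flag hosts over the threshold."""
    outbound = defaultdict(int)
    for src, dst, nbytes in flows:
        if is_internal(src) and not is_internal(dst):
            outbound[src] += nbytes
    return {host: total for host, total in outbound.items() if total > threshold}

print(flag_egress(flows))
```

This doesn't tell you *what* left the building – that's what DLP and content monitors are for – but it quickly narrows the list of devices worth a deeper look.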

Share:
Read Post

The Evolving Role of Vulnerability Assessment and Penetration Testing in Web Application Security

Yesterday I got involved in an interesting Twitter discussion with Jeremiah Grossman, Chris Eng, Chris Wysopal, and Shrdlu that was inspired by Shrdlu's post on application security over at Layer8. I sort of suck at 140-character responses, so I figured a blog post was in order. The essence of our discussion was that in organizations with a mature SDLC (security development lifecycle), you shouldn't need to prove that a vulnerability is exploitable. Once detected, it should be slotted for repair and prioritized based on available information. While I think very few organizations are this mature, I can't argue with that position (taken by Wysopal). In a mature program you will know what parts of your application the code affects, what potential data is exposed, and even the possible exploitability. You know the data flow, ingress/egress paths, code dependencies, and all the other little things that add up to exploitability. These flaws are more likely to be discovered during code assessment than in a vulnerability scan. And biggest of all, you don't need to prove every vulnerability to management and developers.

But I don't think this, in any way, obviates the value of penetration testing to determine exploitability. First we need to recognize that – especially with web applications – the line between a vulnerability assessment and a penetration test is an artificial construct created to assuage the fears of the market in the early days of VA. Assessment and penetration testing are on a continuum, and the boundary is a squishy matter of depth, rather than a hard line with clear demarcation. Effectively, every vulnerability scan is the early stage of a (potential) penetration test. And while this difference may be more distinct for a platform, where you check something like patch level, it's even more vague for a web application, where the mere act of scanning custom code often involves some level of exploitation techniques. 
I'm no pen tester, but this is one area where I've spent reasonable time getting my hands dirty – using various free and commercial tools against both test and (my own) production systems. I've even screwed up the Securosis site by misconfiguring my tool and accidentally changing site functionality during what should have been a "safe" scan. I see what we call a vulnerability scan as merely the first, incomplete step of a longer and more involved process. In some cases the scan provides enough information to make an appropriate risk decision, while in others we need to go deeper to determine the full impact of the issue. But here's the clincher – the more information you have on your environment, the less depth you need to make this decision. The greater your ability to analyze the available variables to determine risk exposure, the less you need to actually test exploitability. This all presumes some sort of ideal state, which is why I don't ever see the value of penetration testing declining significantly. I think even in a mature organization we will only ever have sufficient information to make exploitation testing unnecessary for a small number of our applications. It isn't merely a matter of cost or tools, but an effect of normal human behavior and attention spans. Additionally, we cannot analyze all the third-party code in our environment to the same degree as our own code. As we described a bit in our Building a Web Application Security Program paper, these are all interlocking pieces of the puzzle. I don't see any of these as in competition in the long term – once we have the maturity and resources to acquire and use these techniques and tools together. Code analysis and penetration testing are complementary techniques that provide different data to secure our applications. Sometimes we need one or the other, and often we need both.


Motivational Skills for Security Wonks: 2011 Edition

Ah yes, 2011 is here. A new year, which means it's time to put into action all of those wonderful plans you've been percolating over the holidays. Oh, you don't have plans, besides getting through the day, that is? I get that. The truth is things aren't likely to be better in 2011 – probably not even tolerable. But we persevere because that's what we do, although a lot of folks (including AndyITGuy, among others) continue talking about burnout risk. And that means we have to refocus. A while back I did a presentation called The Pursuit of Security Happyness. It was my thoughts on how to maintain your sanity while the world continues to burn down around you. But that was about you. If you drew the short straw, you may be in some kind of management position. That means you are not only responsible for your own happiness, but have a bunch of other folks looking to you for inspiration and guidance. I know, you probably don't feel like much of a role model, but you drew the short straw, remember? Own it, and work at it. The fact remains that most security folks aren't very good at managing – neither their security program (which is what the Pragmatic CSO is about) nor their people. With it being a new year and all, maybe it's a good idea to start thinking about your management skills as well. Where do you start? I'm glad you asked… I stumbled across a post from Richard Bejtlich over the break, which starts with a discussion about how Steve Jobs builds teams and why they are successful. Yes, you need good people. Yes, the bulk of your time must be spent finding these people. But that's not interesting. What's interesting is making the mission exciting. Smart, talented folks can work anywhere. As a manager, you need to get them excited about working with you and solving the problems you need to solve. 
LonerVamp highlighted a great quote at the bottom of Bejtlich's post: Real IT/security talent will work where they make a difference, not where they reduce costs, "align w/business," or serve other lame ends. So that's what you need to focus on. To be clear, someone has to align with business. Someone also has to reduce costs and serve all those lame ends, which was LonerVamp's point. Unfortunately as a manager, that is likely you. Your job as a manager is to give your people the opportunity to be successful. It means dealing with the stuff they shouldn't have to. That means making sure they understand the goal and getting them excited about it. Right, you need to be a Security Tony Robbins, and motivate your folks to continue jumping into the meat grinder every day. And all of this is easier said than done. But remember, it's a new year. If you can't get excited about what you do now, maybe you need to check out these tips on making your resume kick ass.


HP(en!s) Envy: Dell Buys SecureWorks

Well, it didn't take long to see the bankers and lawyers stayed busy over the holidays. Dell announced they are acquiring SecureWorks, the MSSP, for an undisclosed sum. Yeah, you are probably thinking the same thing I did initially. Dell? WTF? Now I can certainly rationalize the need for Big IT to expand their services capabilities. IBM started the trend (and got into security directly) with the ISS deal, and HP bought EDS to keep pace. Dell bought Perot as well, because that was the me-too thing to do. Dell also tried to buy their way into the storage market, but was foiled by HP's 3Par deal. Kind of makes you wonder if Dell spent stupid money on this deal (thus the undisclosed sum) because HP was bidding also. But I'll leave the speculation on bidding wars to others, and focus on the merits of the deal. For SecureWorks, it's pretty straightforward. Let me list the ways this deal makes sense for them:

- Cash
- Simoleons
- Tons of sales people
- Euros
- Global distribution
- Shekels
- Balance sheet (for expansion and more deals)
- Krugerrands

And I figure it was a lot of cash, because there was nothing forcing SecureWorks to sell now. Besides a big pile of money. Though there was always trepidation inside SecureWorks about doing a deal, because of the likelihood of screwing up the corporate culture built by Mike Cote and his team. I guess the bags of money Michael Dell hauled down to Atlanta must have been pretty compelling.

Channel Leverage

SecureWorks started by focusing on smaller companies. Dell sells computer equipment to a lot of small companies. So there is a clear upside here. Just to give you an idea of scale, I have a buddy who sells for Dell in GA. He has half the state, and his patch is about 5,000 companies. And about 2,500 have done business with Dell over the past two years. That's half of a relatively small state. But we've seen countless deals flounder, and this one will too unless SecureWorks can 1) educate Dell's field on why security services are a good thing (even if they don't get bolted into racks), and 2) package and provision the services efficiently and at scale to hit a broader market target. Dell is also global, and that is a key focus for SecureWorks in 2011. They bought a small player in the UK last year, and this gives them a business operations platform to scale globally a lot more quickly than they could on their own. So there is a lot of upside, assuming reasonable deal integration. No, that's not a good assumption. I know that.

Customer Impact

If you are currently a SecureWorks customer, it's unlikely you'll notice any difference. The early indications are that Dell won't be messing with how the security services are delivered. In fact, Dell plans to bring every single SecureWorks employee on board when the deal closes. That is good news for customers (and rather unique, considering today's cut-costs-above-all-else mentality). Optimistically, as the business scales, Dell has plenty of compute horsepower (and global operations centers) to facilitate building out new data centers. If you are a Dell customer, call your Dell rep right now and ask them about the new SecureWorks services. Odds are you'll hear crickets on the line. It's not like you are buying a lot of security from Dell now anyway, so does having SecureWorks on board change that? Will you take Dell seriously as a security player now? Yeah, me neither. Though I should be open-minded here, so if this does change your perception, drop a comment below. It would be interesting to hear if that's true. But I guess every company has to start somewhere as they enter new markets.

Little Competitive Impact

It's hard to see how this deal either helps or hinders SecureWorks' ability to compete. Some competitors were on Twitter today, hoping this puts a crimp in SWRX's momentum and provides them with an opportunity to gain share. There will inevitably be some integration hiccups, which may slow the move to a new services platform (integrating the heritage SecureWorks and the acquired VeriSign technologies), but Dell would be stupid to mess with it much at all. They've got nothing to speak of in security now, and the obvious strategy is to use SecureWorks as the linchpin of a security practice. SecureWorks will see more deals if they can leverage Dell's global channels, and having Dell behind them can't hurt in enterprise deals, but candidly it didn't seem like SWRX was losing deals because they weren't IBM or Verizon – so it's not clear how important a change this is. Although Dell may get some benefit from being able to package security into some of their bigger IT operations product and services deals.

Dell? Really?

Yet I'm still perplexed that Dell was the buyer. They do very little with security. They do even less with security services. Although I guess you could see Dell's partnership with SecureWorks, announced back in July, as a harbinger of things to come. The joint offering hasn't even hit the market yet. You still have to wonder why they'd buy the cow when they don't even know what the milk tastes like. It's not like the $120 million in revenue SecureWorks brings is going to move Dell's needle. Even if they triple it over the next 2 years, that would still be about 1% of Dell's business. Unless SWRX was in play and Dell needed to pull the trigger. Buying SecureWorks certainly won't hurt Dell, but it's not clear how it helps much, either. I guess every big IT shop needs some real estate in security, as they try to become a solutions provider, but the fit seems off. You can clearly see how SWRX would fit better with an HP or a Cisco or even a MFE/Intel. But Dell? It seems like the obvious motivation is envy of its bigger


Mobile Device Security: I can haz your mobile

As we start 2011, a friend pointed out that my endpoint research agenda (including much of my work on Positivity) is pretty PC platform focused. And relative to endpoint security that is on point. But the reality is that nowadays we cannot assume our only threat vectors remain PC-like devices. Given that pretty much all the smartphones out there are as powerful as the computers I used 5 years ago, we need to factor in that mobile devices are the next frontier for badness. Guys like Charlie Miller have already shown (multiple times) how to break Apple's mobile devices, and we can probably consider ourselves more lucky than good that we've been spared a truly problematic mobile attack to date. Luck is no better a strategy than hope. So based on the largesse of our friends at Fosforous, who are running a program with the fine folks at McAfee, I'm going to write a quick paper outlining some realities of mobile device security.

You've Lost Control: Accept It

First let's point out the elephant in the room: control. If you feel the need to control your end-user computing environment you are in the wrong profession. The good old days of dictating devices, platforms, and applications are gone – along with the KGB interrogation lights. You may have missed the obituary, but control of devices was pretty well staked through the heart by the advent of cool iDevices. Yes, I'm talking about iPhones, iPads, Androids, and Palms. OK, Palm not so much, but certainly the others. Some smart IT folks realized, when the CEO called and said she had an iPad and needed to get her email and look at those deal documents, that we were entering a different world. Lots of folks are calling this consumerization, which is fine. Just like anything else, it needs a name, but to me this is really just a clear indication that we have lost control. But you don't have to accept it. You can try to find a job with one of the five or ten government agencies that can still dictate their computing environment (and good luck as they move all their stuff to the cloud). But the rest of us need to accept that our employees will be bringing their own devices onto the network, and we can't stop them. So we first need to figure out what the big deal is. How many ways can this consumerization wave kill us? And yes, you need to know. Sticking your head in the sand like an ostrich isn't a viable option.

I Can Haz Your Mobile Devices

As always, you need to start any security-oriented program by understanding the risks you face. To be clear, a lot of these risks aren't necessarily caused by the bad guys, but security folks already knew that. Our own people tend to do more damage to our systems than any of the attackers. So we'll cover a mix of external and self-inflicted wounds.

Data Loss

The first issue with having key people (or even non-key people) access your company's stuff using their own devices is data security. Clearly things like email and the fancy iPhone app from your CRM vendor are driving 24/7 access via these devices in the first place. So thinking about data loss is tops on the hit parade:

- Device Loss: You'll be amazed at the number of ways your employees lose mobile devices. It's not impossible to leave a 17" laptop in an airplane seat, but it's hard. Leaving smartphones I-don't-know-where happens all the time. And Find My iPhone won't save you when the battery dies or the thief engages airplane mode. So you have to plan for the fact that these devices will be lost with sensitive data on them, and you need to protect that data.
- Device Sale: Oh yeah, these devices are owned by employees. So when they feel the urge to buy the new Shiny Object, they will. Those crazy employees usually find a buyer on eBay or Craigslist and send them the old device. Will it be cleaned? Definitely possible! Is it more likely to have your Q4 forecast on it? Don't answer that, but make sure you have some way to address this.

Malware

You can't discuss security without at least mentioning malware. So far attacks on smartphones have been relatively tame. But I wouldn't build my security strategy on a bet that they will remain tame. Again, hope is not a strategy.

- Weaponized Exploits: To date there hasn't been much malware targeting mobiles, although sites like jailbreak.me show what is possible. So it's not a matter of if, but when, some self-proliferating exploit will make the rounds and spread like wildfire.
- App Store Mayhem: Sure, all these app stores include controls to ensure malware doesn't make its way into authorized applications, but you have to expect that at some point one of these processes will experience a breakdown (even if it's just an obscure third-party store operator losing their keys), and something bad will get in. And if it's a widespread application? Right: mayhem and anarchy, which is always 'fun' for us security folks.
- Jailbreak: Remember, these devices are not owned by your organization. So employees can consciously decide to bypass whatever security controls are built into the platform. They don't necessarily care that jailbreaking basically obviates all the security controls you might be counting on. Are you having fun yet?

Manageability

Finally, we'll talk a bit about the complexities of managing thousands of devices – some you own and some you don't. And sure, that's not really a security issue until you mess up a configuration and open up a huge hole on the device(s). So managing and enforcing policies is critical to maintaining any semblance of security on these devices.

- Misconfiguration: What happens when you get 20 different device types with 5 different versions of operating systems, and 25 different apps (that you care about) running on each? Configuration nightmare. This is
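To make the configuration problem concrete, here is a hypothetical sketch of checking a device inventory against a minimal policy. The device fields, version numbers, and policy values are all invented for illustration – they aren't a recommendation or any vendor's actual MDM schema:

```python
# Hypothetical device inventory, as an MDM tool might report it.
devices = [
    {"id": "phone-001", "os": "ios", "os_version": "4.2",
     "passcode_set": True, "encrypted": True, "jailbroken": False},
    {"id": "phone-002", "os": "android", "os_version": "2.2",
     "passcode_set": False, "encrypted": False, "jailbroken": False},
    {"id": "phone-003", "os": "ios", "os_version": "4.1",
     "passcode_set": True, "encrypted": True, "jailbroken": True},
]

# Minimal policy: illustrative values only.
MIN_OS_VERSION = {"ios": (4, 2), "android": (2, 2)}

def violations(device):
    """Return the list of policy violations for one device."""
    problems = []
    if not device["passcode_set"]:
        problems.append("no passcode")
    if not device["encrypted"]:
        problems.append("storage not encrypted")
    if device["jailbroken"]:
        problems.append("jailbroken/rooted")
    version = tuple(int(x) for x in device["os_version"].split("."))
    if version < MIN_OS_VERSION.get(device["os"], (0,)):
        problems.append("outdated OS")
    return problems

for d in devices:
    issues = violations(d)
    if issues:
        print(d["id"], "->", ", ".join(issues))
```

The point isn't the code – it's that without some automated policy check, 20 device types times 5 OS versions times 25 apps is beyond what any human can eyeball.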


Mr. Cranky Faces Reality

There are some mornings I should not be allowed to look at the Internet. Those days when I think someone peed in my cornflakes. The mornings when every single media release, blog post, and news item looks like total BS. I think maybe they are just struggling for news during the holiday season, or maybe I am just unusually snarky. I don't know. Today was one of those days. I was combing through my feed reader and ran across Brian Prince's article, Database Security Reminder: Don't Let Your Guard Down. The gist is that if you move your database into the cloud you could be hacked, especially if you don't patch the database. Uh, come again? Brian's point is that if you don't have a firewall to protect against port scanning, you help hackers locate databases. And if you set Oracle to allow unlimited password attempts, your accounts can be brute-forced. And if you expose an unpatched version of Oracle to the Internet, vulnerabilities can be exploited. Now I am annoyed. Was this supposed to be news because the database was running on Amazon's EC2, and that's cloud, so it must be newsworthy? Was this a subtle way of telling us that the database vulnerability assessment and activity monitoring vendors are still important and relevant in the cloudy world? Was there a message in there about the quality of Amazon's firewall, such that databases can be located by port scans? Or perhaps a veiled criticism that Amazon's outbound monitoring failed to detect suspicious activity? I figure most companies by now have gotten the memo that databases get hacked. And they know you need to correctly configure and patch them prior to deployment. So how is this different from the database within your own IT data center, and why is this reminder newsworthy? Turns out it is. I continue to read more and more news, and see database hack after database hack after database hack. And that is right on the heels of the Gawker/Lifehacker/Gizmodo screwup. 
I have lost count of the other hospitals, universities, and Silverpop customers in the last month who are victims of database breaches. Okay, I concede Brian has a point. Maybe a reminder to get the basics right is worthy of a holiday post, because there are plenty of companies still messing this up. I was thinking this was pure hyperbole and telling us stuff we already know. Apparently I was wrong. I am calm now, though still depressed. Thanks for sharing, Brian. I think I'll go back to bed.


React Faster and Better: Chugging Along

As we described a while back, we have separated our heavier white paper research out into a complete feed, and slimmed down the main feed. But that means folks subscribing only to the main feed may miss some of the outstanding blog series we do. So every so often we'll cross-post links to a series as it develops, inviting those interested to check out the research and comment on what is right and wrong. As we recast the series Rich and I did earlier this year on Incident Response Fundamentals, our intention was to go deeper and more advanced on incident response in the React Faster and Better series. We are almost halfway through that series. Here are a few links to what we've posted. Check it out – it's good stuff.

- Introduction
- Incident Response Gaps: We identify why the fundamental process we described won't be enough as the attackers get better, more persistent, and more innovative.
- New Data for New Attacks: We start to analyze the kinds of data we need for these advanced techniques, where we can get it, and why.
- Alerts & Triggers: Data is good, but not enough to understand when the response process needs to be engaged. So we discuss how to figure out when to alert, covering both internal and external sources.

The next phase of the series will talk about how to leverage the additional data types to work through a tiered response process. First we'll deal with what a first-level analyst needs, and then proceed through the advanced tiers of analysis and response. Stay tuned.


React Faster and Better: Alerts & Triggers

In our last post New Data for New Attacks, we delved into the types of data we want to systematically collect, through both log record aggregation and full packet capture. As we’ve said many times, data isn’t the issue – it’s the lack of actionable information for prioritizing our efforts. That means we must more effectively automate analysis of this data and draw the proper conclusions about what is at risk and what isn’t. Automate = Tools As much as we always like to start with process (since that’s where most security professionals fail), automation is really about tools. And there plenty of tools to bring to bear on setting alerts to let you know when something is funky. You have firewalls, IDS/IPS devices, network monitors, server monitors, performance monitors, DLP, email and web filtering gateways … and that’s just the beginning. In fact there is a way to monitor everything in your environment. Twice. And many organizations pump all this data into some kind of SIEM to analyze it, but this continues to underscore that we have too much of the wrong kind of data, at least for incident response. So let’s table the tools discussion for a few minutes and figure out what we are really looking for… Threat Modeling Regardless of the tool being used to fire alerts, you need to 1) know what you are trying to protect; 2) know what an attack on it looks like; and 3) understand relative priorities of those attacks. Alerts are easy. Relevant alerts are hard. That’s why we need to focus considerable effort early in the process on figuring out what is at risk and how it can be attacked. So we will take a page from Security 101 and spend some time building threat models. We’ve delved into this process in gory detail in our Network Security Operations Quant research, so we won’t repeat it all here, but these are the key steps: Define what’s important: First you need to figure out what critical information/applications will create the biggest issues if compromised. 
Model how it can be attacked: It’s always fun to think like a hacker, so put on your proverbial black hat and think about ways to exploit and compromise the first of the most important stuff you just identified. Determine the data those attacks would generate: Those attacks will result in specific data patterns that you can look for using your analysis tools. This isn’t always an attack signature – it may be the effect of the attack, such as excessive data egress or bandwidth usage. Set alert thresholds: Once you establish the patterns, then figure out when to actually trigger an alert. This is an art, and most organization start with fairly broad thresholds, knowing they result in more alerts initially. Optimize thresholds: Once your systems start hammering you with alerts, you’ll be able to tune the system by tightening the thresholds to focus on real alerts and increase the signal-to-noise ratio. Repeat for next critical system/data: Each critical information source/application will have its own set of attacks to deal with. Once you’ve modeled one, go back and repeat the process. You can’t do everything at once, so don’t even try. Start with the most critical stuff, get a quick win, and then expand use of the system. Keep in mind that the larger your environment, the more intractable modeling everything becomes. You will never know where all the sensitive stuff is. Nor can you build a threat model for every known attack. That’s why under all our research is the idea of determining what’s really important and working hard to protect those resources. Once we have threat models implemented in our monitoring tool(s) – which include element managers, analysis tools like SIEM, and even content monitoring tools like DLP – these products can (and should) be configured to alert based on a scenario in the threat model. More Distant Early Warning We wish the threat models could be comprehensive, but inevitably you’ll miss something – accept this. 
And there are other places to glean useful intelligence, which can be factored into your analysis and potentially reveal attacks not covered by the threat models.

  • Baselines: Depending on the depth of monitoring, you can and should establish baselines for your critical assets. That could mean network activity on protected segments (using Netflow), or perhaps transaction types (SQL queries on a key database), but you need some way to define normal for your environment. Then you can start by alerting on activities you determine are not normal.
  • Vendor feeds: These feeds come from your vendors – mostly IDS/IPS – because they have research teams tasked with staying on top of emerging attacks. Admittedly this is reactive and depends on known attacks, but the vendors spend significant resources making sure their tools remain current. Keep in mind you'll want to tailor these signatures to your organization/industry – obviously you don't need to look for SCADA attacks if you don't have those control systems, though deciding what to include is a bit more involved.
  • Intelligence sharing: Larger organizations see a wide variety of attacks, mostly because they are frequently targeted and have the staff to recognize attack patterns. Many of these folks engage in a bit of co-opetition and participate in sharing groups (like FS-ISAC) to leverage each other's experiences. This could be a formal arrangement or just informal conversations over beers every couple of weeks. Either way, it's good to know what peer organizations are seeing.

The point is that there are many places to leverage data and generate alerts. No single information source can identify all emerging attacks. You're best served by using many, then establishing a method to prioritize the alerts that warrant investigation.

Visualization

Just about every organization – particularly large enterprises – generates more alerts than it has the capability to investigate.
If you don’t, there’s a good chance you aren’t alerting enough. So prioritization is a key


Web Application Firewalls Really Work

A couple months ago I decided to finally dig in and see whether WAFs (Web Application Firewalls) are really useful, or merely another crappy shiny object we spend a lot of money on to get the auditors off our backs. Sure, the WAF vendors keep telling me how well their products work and how many big clients they have, but that's not the best way to figure out whether something really does the job. I also talk with a bunch of end users who provide darn good info, but even that isn't always the best way to determine the security value of a tool. Not all users have the visibility and internal controls to measure the effectiveness of the tool, and many can't deploy it in an optimal manner due to all sorts of political and technical issues. In this case I started with users, then checked with a bunch of my penetration testing friends. While a pen tester doesn't necessarily understand the overall value of a tool (since they don't have to pay the same kind of attention to compliance/management issues), a good tester most definitely knows how much harder a security tool makes their life.

The end result was that WAFs do have value when used properly, and may provide value beyond pure security, but aren't a panacea. Since you could say the same about the value of a gerbil for defending against APT, here's a little more detail:

  • WAFs are best at protecting against known framework vulnerabilities (e.g., you run WordPress and haven't patched), known automated (script kiddie) attacks, or when configured with defensive application-specific rules (whitelisting, although almost no one really deploys them this way).
  • WAFs are moderately effective against general XSS/SQL injection. All the researchers said a WAF was a roadbump for custom attacks that added to the time it took them to generate a successful exploit, with varying effectiveness depending on many factors – particularly the target app behind the WAF. The better the configuration, based on deep application knowledge, the more difficult the attack. But they noted that increasing the time to exploit raises the attacker's costs, which might reduce the chances the attacker will devote time to the app and increase your probability of detecting them. Still, if someone really wants to get you and is knowledgeable, no WAF alone will stop them.
  • The products often provide great analytics value, because they are sometimes better than normal tracking/stats packages for understanding what's going on with your site.
  • They don't do anything for logic flaws (unless you hand-code/configure rules for them), or much beyond XSS/SQL injection.
  • They aren't as easy to use as is usually promised in the sales cycle. Gee, what a shock. Again, I could say this about gerbils.

In some ways, now that I've written this, I feel like I could have substituted "duh" for the entire post. Yet again we have a tool that promises a lot, is often misused, but (used properly) can provide a spectrum of value from "keeping the auditors off our backs" to "protects against some 1337 haxor in a leather bodysuit". But don't let anyone tell you they are a waste of money… just make sure you know what you're getting and use it right.
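As a concrete illustration of the application-specific whitelisting style of rule mentioned above, here is a minimal sketch. The parameter names and patterns are hypothetical, not drawn from any WAF product:

```python
import re

# Sketch of positive-security (whitelist) request filtering: every parameter
# must be known and match its allowed pattern; anything else is rejected.
# Field names and regexes below are hypothetical examples.

ALLOWED_PARAMS = {
    "user_id": re.compile(r"\d{1,10}"),
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "page":    re.compile(r"[a-z]{1,20}"),
}

def allow_request(params):
    """Return True only if every parameter is known and fully matches
    its whitelist pattern; unknown or malformed parameters are rejected."""
    for name, value in params.items():
        pattern = ALLOWED_PARAMS.get(name)
        if pattern is None or not pattern.fullmatch(value):
            return False
    return True
```

Note that `user_id` of `"1 OR 1=1"` never reaches the application at all, which is why this approach works without any injection signatures, and also why it requires the deep application knowledge almost nobody invests in.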


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.