
React Faster and Better: Initial Incident Data

In New Data for New Attacks we discussed why there is usually too much data early in the process. Then we talked about leveraging the right data to alert and trigger the investigative process. But once the incident response process kicks in, too much data is rarely the problem, so now let’s dig deeper into the most useful data for the initial stages of incident response. At this early stage, when we don’t yet know what we are dealing with, it’s all about triaging the problem. That usually means confirming the issue with additional data sources and helping to isolate the root cause.

We assume that at this stage of investigation a relatively unsophisticated analyst is doing the work, so these investigation patterns can and should be somewhat standard and based on common tools. At this point the analyst is trying to figure out what is being attacked, how the attack is happening, how many devices are involved, and ultimately whether (and what kind of) escalation is required. Once you understand the general concept behind the attack, you can dig a lot deeper with cool forensics tools. But at this point we are trying to figure out where to dig.

The best way to stage this discussion is to focus on the initial alert, and then on what kinds of data would validate the issue and provide the what, how, and how many answers we need at this stage. There are plenty of places we might see the first alert, so let’s go through each in turn.

Network

If one of your network alerts fires, what then? It becomes all about triangulating the data to pinpoint what devices are in play and what the attack is doing. This list isn’t comprehensive, but it should represent the kinds of additional data you’d look for and why.

Attack path: The first thing you’ll do is check out the network map and figure out whether there is a geographic or segment focus to the network alerts. Basically you are trying to figure out what is under attack and how. Is this a targeted attack, where only specific addresses are generating the funky network traffic? Or is it reconnaissance that may indicate some kind of worm proliferating? Or is it command and control traffic, which might indicate zombies or persistent attackers?

Device events/logs/configurations: Once we know what IP addresses are in play, we can dig into those specific devices and figure out what is happening and/or what changed. At this stage of investigation we are looking for obvious stuff. New accounts or executables, or configuration changes, are typical indications of some kind of issue with the device. For the sake of both automation and integrity, this data tends to be centrally stored in one or more system management platforms (SIEM, CMDB, Endpoint Protection Platform, Database Activity Monitor, etc.).

Egress path and data: Finally, we want to figure out what information is leaving your network and (presumably) going into the hands of the bad guys, and how. While we aren’t concerned with a full analysis of every line item, we want a general sense of what’s headed out the door and an understanding of how it’s being exfiltrated.
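To make that concrete, here is a minimal sketch (in Python, with hypothetical event types, field names, and an arbitrary egress threshold – not any particular product’s API) of the first-pass correlation described above: take the IPs from the network alert, pull obvious device-level indicators, and total up egress volume per device.

from collections import defaultdict

# Obvious device-level indicators worth flagging during triage.
SUSPICIOUS_EVENTS = {"account_created", "executable_added", "config_changed"}

def triage_alert(alert_ips, device_events, egress_records):
    """Summarize the what/how/how-many for devices named in a network alert.

    alert_ips: set of IP strings pulled from the firing alert
    device_events: dicts like {"ip": ..., "type": ..., "detail": ...}
    egress_records: dicts like {"src_ip": ..., "dest": ..., "bytes": ...}
    """
    summary = defaultdict(lambda: {"indicators": [], "egress_bytes": 0})
    for event in device_events:
        if event["ip"] in alert_ips and event["type"] in SUSPICIOUS_EVENTS:
            summary[event["ip"]]["indicators"].append(event["detail"])
    for record in egress_records:
        if record["src_ip"] in alert_ips:
            summary[record["src_ip"]]["egress_bytes"] += record["bytes"]
    # Devices showing both host changes and heavy egress get looked at first.
    return sorted(summary.items(),
                  key=lambda kv: (len(kv[1]["indicators"]), kv[1]["egress_bytes"]),
                  reverse=True)

Real tooling obviously has to normalize timestamps and identities across sources, which is where the central management platforms mentioned above earn their keep – but the triage logic is this simple at heart.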
Endpoint

The endpoint may alert first if it’s some kind of drive-by download or targeted social engineering attack. You can also see this kind of activity when a mobile device does something bad outside your network, then connects to your internal network and wreaks havoc.

Endpoint logs/configurations: Once you receive an alert that there is something funky happening on an endpoint, the first thing you do is investigate the device to figure out what’s happening. You are looking for new executables on the device, or a configuration change that indicates a compromise.

Network traffic: Another place to look when you get an endpoint alert is the network traffic originating from and terminating on the device. Analyzing that traffic can give you an idea of what is being targeted. Is it a back-end data store? Is it other devices? How and where is the device getting instructions? Also be aware of exfiltration activities, which indicate not only a successful compromise, but also a breach. The objective is to profile the attack and understand the objective and tactics.

Application targets: Likewise, if it’s obvious a back-end datastore is being targeted, you can look at the transaction stream to decipher what the objective is and how widely the attack has spread. You also need to understand the target to figure out whether and how remediation should occur.

Upper Layers

If the first indication of an attack happens at the application layer (including databases, application servers, DLP, etc.) – which happens more and more, due to the nature of application-oriented attacks – then it’s about quickly understanding the degree of compromise and watching for data loss.

Network traffic: Application attacks are often all about stealing data, so at the network layer you are looking primarily for signs of exfiltration. Secondarily, understanding the attack path will help discover which devices are compromised, and inform short and longer term remediation options.

Application changes: Is your application functioning normally? Or is the bad guy inserting malware on pages to compromise your customers? While you won’t perform a full application assessment at this point, you need to look for key indicators of the bad guy’s activities that might not show up through network monitoring.

Device events/logs/configurations: As with the other scenarios, understanding to what degree the devices involved in the application stack are compromised is important for damage assessment.

Content monitors: Given the focus of most application attacks on data theft, you’ll want to consult your content monitors (DLP, as well as outbound web and email filters) to gauge whether the attack has compromised data, and to what degree. This information is critical for determining the amount of escalation required.

Incident Playbook

Obviously there are infinite combinations of data you can look at to figure out what is going on (and


Motivational Skills for Security Wonks: 2011 Edition

Ah yes, 2011 is here. A new year, which means it’s time to put into action all of those wonderful plans you’ve been percolating over the holidays. Oh, you don’t have plans, besides getting through the day, that is? I get that. The truth is, things aren’t likely to be better in 2011 – probably not even tolerable. But we persevere because that’s what we do, although a lot of folks (including AndyITGuy, among others) continue talking about burnout risk. And that means we have to refocus.

A while back I did a presentation called The Pursuit of Security Happyness. It was my thoughts on how to maintain your sanity while the world continues to burn down around you. But that was about you. If you drew the short straw, you may be in some kind of management position. That means you are not only responsible for your own happiness, but have a bunch of other folks looking to you for inspiration and guidance. I know, you probably don’t feel like much of a role model, but you drew the short straw, remember? Own it, and work at it.

The fact remains that most security folks aren’t very good at managing – neither their security program (what the Pragmatic CSO is about), nor their people. With it being a new year and all, maybe it’s a good idea to start thinking about your management skills as well. Where do you start? I’m glad you asked…

I stumbled across a post from Richard Bejtlich over the break, which starts with a discussion about how Steve Jobs builds teams and why they are successful. Yes, you need good people. Yes, the bulk of your time must be spent finding these people. But that’s not interesting. What’s interesting is making the mission exciting. Smart, talented folks can work anywhere. As a manager, you need to get them excited about working with you and solving the problems you need to solve.

LonerVamp highlighted a great quote at the bottom of Bejtlich’s post: Real IT/security talent will work where they make a difference, not where they reduce costs, “align w/business,” or serve other lame ends.

So that’s what you need to focus on. To be clear, someone has to align with business. Someone also has to reduce costs and serve all those lame ends, which was LonerVamp’s point. Unfortunately, as a manager, that is likely you. Your job as a manager is to give your people the opportunity to be successful. It means dealing with the stuff they shouldn’t have to. That means making sure they understand the goal and getting them excited about it. Right, you need to be a Security Tony Robbins, and motivate your folks to continue jumping into the meat grinder every day.

And all of this is easier said than done. But remember, it’s a new year. If you can’t get excited about what you do now, maybe you need to check out these tips on making your resume kick ass.


HP(en!s) Envy: Dell Buys SecureWorks

Well, it didn’t take long to see that the bankers and lawyers stayed busy over the holidays. Dell announced they are acquiring SecureWorks, the MSSP, for an undisclosed sum. Yeah, you are probably thinking the same thing I did initially. Dell? WTF?

Now I can certainly rationalize the need for Big IT to expand their services capabilities. IBM started the trend (and got into security directly) with the ISS deal, and HP bought EDS to keep pace. Dell bought Perot as well, because that was the me-too thing to do. Dell also tried to buy their way into the storage market, but was foiled by HP’s 3Par deal. Kind of makes you wonder if Dell spent stupid money on this deal (thus the undisclosed sum) because HP was bidding also. But I’ll leave the speculation on bidding wars to others, and focus on the merits of the deal.

For SecureWorks, it’s pretty straightforward. Let me list the ways this deal makes sense for them:

  • Cash
  • Simoleons
  • Tons of sales people
  • Euros
  • Global distribution
  • Shekels
  • Balance sheet (for expansion and more deals)
  • Krugerrands

And I figure it was a lot of cash, because there was nothing forcing SecureWorks to sell now – besides a big pile of money. Though there was always trepidation inside SecureWorks about doing a deal, because of the likelihood of screwing up the corporate culture built by Mike Cote and his team. I guess the bags of money Michael Dell hauled down to Atlanta must have been pretty compelling.

Channel Leverage

SecureWorks started by focusing on smaller companies. Dell sells computer equipment to a lot of small companies. So there is a clear upside here. Just to give you an idea of scale, I have a buddy who sells for Dell in GA. He has half the state, and his patch is about 5,000 companies. And about 2,500 have done business with Dell over the past two years. That’s half of a relatively small state. But we’ve seen countless deals flounder, and this one will too unless SecureWorks can 1) educate Dell’s field on why security services are a good thing (even if they don’t get bolted into racks), and 2) package and provision the services efficiently and at scale to hit a broader market target.

Dell is also global, and that is a key focus for SecureWorks in 2011. They bought a small player in the UK last year, and this deal gives them a business operations platform to scale globally a lot more quickly than they could themselves. So there is a lot of upside, assuming reasonable deal integration. No, that’s not a good assumption. I know that.

Customer Impact

If you are currently a SecureWorks customer, it’s unlikely you’ll notice any difference. The early indications are that Dell won’t be messing with how security services are delivered. In fact, Dell plans to bring every single SecureWorks employee on board when the deal closes. That is good news for customers (and rather unique, considering today’s cut-costs-above-all-else mentality). Optimistically, as the business scales, Dell has plenty of compute horsepower (and global operations centers) to facilitate building out new data centers.

If you are a Dell customer, call your Dell rep right now and ask them about the new SecureWorks services. Odds are you’ll hear crickets on the line. It’s not like you are buying a lot of security from Dell now anyway, so does having SecureWorks on board change that? Will you take Dell seriously as a security player now? Yeah, me neither. Though I should be open-minded here, so if this does change your perception, drop a comment below. It would be interesting to hear if that’s true.
But I guess every company has to start somewhere as they enter new markets.

Little Competitive Impact

It’s hard to see how this deal either helps or hinders SecureWorks’ ability to compete. Some competitors were on Twitter today, hoping this puts a crimp in SWRX’s momentum and provides them with an opportunity to gain share. There will inevitably be some integration hiccups, which may slow the move to a new services platform (integrating the heritage SecureWorks and the acquired VeriSign technologies), but Dell would be stupid to mess with it much at all. They’ve got nothing to speak of in security now, and the obvious strategy is to use SecureWorks as the linchpin of a security practice. SecureWorks will see more deals if they can leverage Dell’s global channels, and having Dell behind them can’t hurt in enterprise deals, but candidly it didn’t seem like SWRX was losing deals because they weren’t IBM or Verizon – so it’s not clear how important a change this is. Although Dell may get some benefit from being able to package security into some of their bigger IT operations product and services deals.

Dell? Really?

Yet I’m still perplexed that Dell was the buyer. They do very little with security. They do even less with security services. Although I guess you could see Dell’s partnership with SecureWorks, announced back in July, as a harbinger of things to come. The joint offering hasn’t even hit the market yet. You still have to wonder why they’d buy the cow when they don’t even know what the milk tastes like. It’s not like the $120 million in revenue SecureWorks brings is going to move Dell’s needle. Even if they triple it over the next 2 years, that would still be about 1% of Dell’s business. Unless SWRX was in play and Dell needed to pull the trigger.

Buying SecureWorks certainly won’t hurt Dell, but it’s not clear how it helps much, either. I guess every big IT shop needs some real estate in security, as they try to become a solutions provider, but the fit seems off. You can clearly see how SWRX would fit better with an HP or a Cisco or even a MFE/Intel. But Dell? It seems like the obvious motivation is envy of its bigger


Mobile Device Security: I can haz your mobile

As we start 2011, a friend pointed out that my endpoint research agenda (including much of my work on Positivity) is pretty PC platform focused. And relative to endpoint security, that is on point. But the reality is that nowadays we cannot assume our only threat vectors remain PC-like devices. Given that pretty much all the smartphones out there are as powerful as the computers I used 5 years ago, we need to factor in that mobile devices are the next frontier for badness. Guys like Charlie Miller have already shown (multiple times) how to break Apple’s mobile devices, and we can probably consider ourselves more lucky than good that we’ve been spared a truly problematic mobile attack to date. Luck is no better a strategy than hope. So based on the largesse of our friends at Fosforous, who are running a program with the fine folks at McAfee, I’m going to write a quick paper outlining some realities of mobile device security.

You’ve Lost Control: Accept It

First let’s point out the elephant in the room: control. If you feel the need to control your end-user computing environment, you are in the wrong profession. The good old days of dictating devices, platforms, and applications are gone – along with the KGB interrogation lights. You may have missed the obituary, but control of devices was pretty well staked through the heart by the advent of cool iDevices. Yes, I’m talking about iPhones, iPads, Androids, and Palms. OK, Palm not so much, but certainly the others. Some smart IT folks realized, when the CEO called and said she had an iPad and needed to get her email and look at those deal documents, that we were entering a different world. Lots of folks are calling this consumerization, which is fine. Just like anything else, it needs a name, but to me this is really just a clear indication that we have lost control.

But you don’t have to accept it. You can try to find a job with one of the five or ten government agencies that can still dictate their computing environment (and good luck as they move all their stuff to the cloud). But the rest of us need to accept that our employees will be bringing their own devices onto the network, and we can’t stop them. So we first need to figure out what the big deal is. How many ways can this consumerization wave kill us? And yes, you need to know. Sticking your head into the sand like an ostrich isn’t a viable option.

I Can Haz Your Mobile Devices

As always, you need to start any security-oriented program by understanding the risks you face. To be clear, a lot of these risks aren’t necessarily caused by the bad guys, but security folks already knew that. Our own people tend to do more damage to our systems than any of the attackers. So we’ll cover a mix of external and self-inflicted wounds.

Data Loss

The first issue with having key people (or even non-key people) access your company’s stuff using their own devices is data security. Clearly things like email and the fancy iPhone app from your CRM vendor are driving 24/7 access via these devices in the first place. So thinking about data loss is tops on the hit parade:

Device Loss: You’ll be amazed at the number of ways your employees lose mobile devices. It’s not impossible to leave a 17” laptop in an airplane seat, but it’s hard. Leaving smartphones I-don’t-know-where happens all the time. And Find My iPhone won’t save you when the battery dies or the thief engages airplane mode.
So you have to plan for the fact that these devices will be lost with sensitive data on them, and you need to protect that data.

Device Sale: Oh yeah, these devices are owned by employees. So when they feel the urge to buy the new Shiny Object, they will. Those crazy employees usually find a buyer on eBay or Craigslist and send them the old device. Will it be cleaned? Definitely possible! Is it more likely to have your Q4 forecast on it? Don’t answer that, but make sure you have some way to address this.

Malware

You can’t discuss security without at least mentioning malware. So far attacks on smartphones have been relatively tame. But I wouldn’t build my security strategy on a bet that they will remain tame. Again, hope is not a strategy.

Weaponized Exploits: To date there hasn’t been much malware targeting mobiles, although sites like jailbreak.me show what is possible. So it’s not a matter of if, but when, some self-proliferating exploit will make the rounds and spread like wildfire.

App Store Mayhem: Sure, all these app stores include controls to ensure malware doesn’t make its way into authorized applications, but you have to expect that at some point one of these processes will experience a breakdown (even if it’s just an obscure third-party store operator losing their keys), and something bad will get in. And if it’s a widespread application? Right: mayhem and anarchy, which is always ‘fun’ for us security folks.

Jailbreak: Remember, these devices are not owned by your organization. So employees can consciously decide to bypass whatever security controls are built into the platform. They don’t necessarily care that jailbreaking basically obviates all the security controls you might be counting on. Are you having fun yet?

Manageability

Finally, we’ll talk a bit about the complexities of managing thousands of devices – some you own and some you don’t. And sure, that’s not really a security issue, until you mess up a configuration and open up a huge hole on the device(s). So managing and enforcing policies is critical to maintaining any semblance of security on these devices.

Misconfiguration: What happens when you get 20 different device types with 5 different versions of operating systems, and 25 different apps (that you care about) running on each? Configuration nightmare. This is


React Faster and Better: Chugging Along

As we described a while back, we have separated our heavier white paper research out into a complete feed, and slimmed down the main feed. But that means folks subscribing only to the main feed may miss some of the outstanding blog series we do. So every so often we’ll cross-post links to the series as they develop, inviting those interested to check out the research and provide comments on what is right and wrong.

As we recast the series Rich and I did earlier this year on Incident Response Fundamentals, our intention was to go deeper and more advanced on incident response in the React Faster and Better series. We are almost half-way through that series. Here are a few links to what we’ve posted. Check it out – it’s good stuff.

  • Introduction
  • Incident Response Gaps: We identify why the fundamental process we described won’t be enough as the attackers get better, more persistent, and more innovative.
  • New Data for New Attacks: We start to analyze the kinds of data we need for these advanced techniques, where we can get it, and why.
  • Alerts & Triggers: Data is good, but not enough to understand when the response process needs to be engaged. So we discuss how to figure out when to alert, covering both internal and external sources.

The next phase of the series will talk about how to leverage the additional data types to work through a tiered response process. First we’ll deal with what a first-level analyst needs, and then proceed through the advanced tiers of analysis and response. Stay tuned.


React Faster and Better: Alerts & Triggers

In our last post, New Data for New Attacks, we delved into the types of data we want to systematically collect, through both log record aggregation and full packet capture. As we’ve said many times, data isn’t the issue – it’s the lack of actionable information for prioritizing our efforts. That means we must more effectively automate analysis of this data and draw the proper conclusions about what is at risk and what isn’t.

Automate = Tools

As much as we always like to start with process (since that’s where most security professionals fail), automation is really about tools. And there are plenty of tools to bring to bear on setting alerts to let you know when something is funky. You have firewalls, IDS/IPS devices, network monitors, server monitors, performance monitors, DLP, email and web filtering gateways … and that’s just the beginning. In fact there is a way to monitor everything in your environment. Twice. And many organizations pump all this data into some kind of SIEM to analyze it, but this continues to underscore that we have too much of the wrong kind of data, at least for incident response. So let’s table the tools discussion for a few minutes and figure out what we are really looking for…

Threat Modeling

Regardless of the tool being used to fire alerts, you need to 1) know what you are trying to protect; 2) know what an attack on it looks like; and 3) understand the relative priorities of those attacks. Alerts are easy. Relevant alerts are hard. That’s why we need to focus considerable effort early in the process on figuring out what is at risk and how it can be attacked. So we will take a page from Security 101 and spend some time building threat models. We’ve delved into this process in gory detail in our Network Security Operations Quant research, so we won’t repeat it all here, but these are the key steps:

  • Define what’s important: First you need to figure out what critical information/applications will create the biggest issues if compromised.
  • Model how it can be attacked: It’s always fun to think like a hacker, so put on your proverbial black hat and think about ways to exploit and compromise the first of the most important stuff you just identified.
  • Determine the data those attacks would generate: Those attacks will result in specific data patterns that you can look for using your analysis tools. This isn’t always an attack signature – it may be the effect of the attack, such as excessive data egress or bandwidth usage.
  • Set alert thresholds: Once you establish the patterns, figure out when to actually trigger an alert. This is an art, and most organizations start with fairly broad thresholds, knowing they will result in more alerts initially.
  • Optimize thresholds: Once your systems start hammering you with alerts, you’ll be able to tune the system by tightening the thresholds to focus on real alerts and increase the signal-to-noise ratio. A simple sketch of this loop follows this list.
  • Repeat for the next critical system/data: Each critical information source/application will have its own set of attacks to deal with. Once you’ve modeled one, go back and repeat the process.

You can’t do everything at once, so don’t even try. Start with the most critical stuff, get a quick win, and then expand use of the system.
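To illustrate those last two steps, here is a minimal sketch in Python. The metric (outbound bytes per host per day), the starting threshold, and the 20%/80% cut-offs are assumptions for illustration only – every environment tunes differently.

def egress_alert(bytes_out, threshold):
    # Fire when a host's outbound volume exceeds the current threshold.
    return bytes_out > threshold

def tune_threshold(threshold, confirmed, total_alerts, step=1.25):
    """Tighten or loosen a threshold based on last period's alert quality.

    confirmed / total_alerts approximates the signal-to-noise ratio:
    mostly noise means raise the bar; almost all signal means lower it
    slightly, so attacks hovering just under the line aren't missed.
    """
    if total_alerts == 0:
        return threshold  # nothing fired, so leave it alone
    signal_ratio = confirmed / total_alerts
    if signal_ratio < 0.2:
        return threshold * step
    if signal_ratio > 0.8:
        return threshold / step
    return threshold

# Start deliberately broad, then tighten after a noisy week.
threshold = 5_000_000  # bytes/day per host
threshold = tune_threshold(threshold, confirmed=3, total_alerts=40)

The specific math doesn’t matter; what matters is that tuning is driven by how many alerts analysts actually confirmed, not by gut feel.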
Keep in mind that the larger your environment, the more intractable modeling everything becomes. You will never know where all the sensitive stuff is. Nor can you build a threat model for every known attack. That’s why underlying all our research is the idea of determining what’s really important and working hard to protect those resources. Once we have threat models implemented in our monitoring tool(s) – which include element managers, analysis tools like SIEM, and even content monitoring tools like DLP – these products can (and should) be configured to alert based on the scenarios in the threat model.

More Distant Early Warning

We wish the threat models could be comprehensive, but inevitably you’ll miss something – accept this. And there are other places to glean useful intelligence, which can be factored into your analysis and potentially show attacks not factored into the threat models.

  • Baselines: Depending on the depth of monitoring, you can and should establish baselines for your critical assets. That could mean network activity on protected segments (using NetFlow), or perhaps transaction types (SQL queries on a key database), but you need some way to define normal for your environment. Then you can start by alerting on activities you determine are not normal.
  • Vendor feeds: These feeds come from your vendors – mostly IDS/IPS – because they have research teams tasked with staying on top of emerging attack plans. Admittedly this is reactive, and needs to be built on known attacks, but the vendors spend significant resources making sure their tools remain current. Keep in mind you’ll want to tailor these signatures to your organization/industry – obviously you don’t need to look for SCADA attacks if you don’t have those control systems, but the inclusive side (figuring out what you do need) is a bit more involved.
  • Intelligence sharing: Larger organizations see a wide variety of stuff, mostly because they are frequently targeted and have the staff to see attack patterns. Many of these folks do a little bit of co-opetition and participate in sharing groups (like FS-ISAC) to leverage each other’s experiences. This could be a formal deal or just informal conversations over beers every couple weeks. Either way, it’s good to know what other peer organizations are seeing.

The point is that there are many places to leverage data and generate alerts. No single information source can identify all emerging attacks. You’re best served by using many, then establishing a method to prioritize the alerts which warrant investigation.

Visualization

Just about every organization – particularly large enterprises – generates more alerts than it has the capability to investigate. If you don’t, there’s a good chance you aren’t alerting enough. So prioritization is a key


Dealtime 2010: Remembering the Departed

As we approach Christmas time, quite a few folks will have gold bullion under their trees, courtesy of the security industry M&A machines. Of course the investment bankers and lawyers had a banner year, but let’s also hear it for some fortunate entrepreneurs, their VCs, and even some public company shareholders who were able to share in the wealth this year. You forget how long 12 months is, until you go back and start to revisit what happened in 2010. CRN helped me out a bit by doing one of their silly slideshows (page view hos) listing the Top 10 deals in security this year. Let’s take a quick run through each and think about the longer term impact (though we covered many of these during the year).

  • Intel/McAfee: Obviously having the biggest pure-play security company taken out is a big deal. We did some analysis of the deal (and here), and our perspectives haven’t changed. Though with the EU scrutinizing the deal, there is still some risk of it not closing. If it does, we expect business as usual, though McAfee may be a bit more acquisitive (and spend bigger $$’s) leveraging Intel’s balance sheet.
  • Symantec/PGP/GuardianEdge: Symantec had a huge hole relative to encryption, and they filled it. Twice. Why buy one, when you can buy two at twice the price? It’s the SYMC way! Though the initial integration ideas we’ve seen on the roadmap are promising, we are still talking about the Big Yellow here, so we remain cautious. Here is our deal analysis.
  • Symantec/VeriSign: This high dollar deal was a surprise, and clearly there is lots of risk. It does make sense and provide some leverage, especially relative to the enterprise authentication business. And this one requires less integration than most of SYMC’s deals. So this could end up being a net positive, if the SYMC field teams can figure out how to sell it.
  • SonicWall goes private: Thoma Bravo acquired SonicWall (our analysis here) and saved them from the quarterly scrutiny of being a public company. Big whoop. The real question is what they are going to fold into the operation (and no, Entrust is not a clean fit), because the company will need some additional heft and excitement to warrant another public offering or a higher value deal to a strategic acquirer.
  • Sophos goes private equity: Despite how ineffective traditional AV is at pretty much everything (except maybe passing PCI), it’s still a multi-billion-dollar market. We were reminded of that when APAX Partners acquired Sophos for $830 million. Basically a 2nd tier player in AV is bigger than the entire DLP market, though probably not for long (WIKILEAKS WIKILEAKS WIKILEAKS). Like SonicWall, Sophos will need to keep buying stuff to be able to generate excitement for an IPO.
  • HP/Fortify: HP got the application security bug, added Fortify to SPI Dynamics, and folded it all into its application tools business. Which is exactly where it belongs, because without tight linkages to IDEs and dev tools, developers won’t do much. Not that they will even with tight integration, but at least there is a chance. This also showed HP’s need to buy the biggest dog in any space, because you cannot move a needle that weighs more than $120 billion, $10 million at a time.
  • HP/ArcSight: HP also swallowed up the big dog of the SIEM space in 2010. We’ve been saying for a long time that SIEM and Log Management are going to be part of the big IT ops management stack, and this kind of move facilitates that.
Of course integration won’t be easy, but in the meantime we’re pretty sure an army of EDS services folks will keep very busy making ArcSight work.

  • McAfee/Trust Digital: McAfee did a few deals last year, and this one – acquiring Trust Digital to add some mobile security technology – may pay dividends when we see weaponized mobile attacks go mainstream. At some point it will happen, and folks will have to pay attention to what’s on those pesky smart phones and how to protect it all.
  • IBM/BigFix: After screwing the pooch on the ISS deal, IBM went back to the well to acquire BigFix, which is as much a big IT ops play as a security play. It fits nicely with Tivoli, and thus will be a lot cleaner to integrate and leverage than ISS. That doesn’t mean there won’t be a run for the exits by the BigFix brain trust, or that IBM won’t screw this one up too, but you can at least make a case that BigFix is a much better fit.
  • Trend Micro/Mobile Armor: Oh yeah, Trend had a big hole in mobile encryption as well. So they filled it, but only once. How silly. Though it’s not clear they could have filled it twice if they tried.

Bonus Round

The CRN folks left out a couple that bear mentioning.

  • RSA/Archer: This deal was announced on Jan 5, so it hardly feels like a 2010 deal. Given EMC’s move to drive more of their own services and push to solidify CIO-level relationships, buying Archer’s toolkit, I mean ‘platform’, makes a lot of sense. The question for next year is whether RSA will buy something to supplement enVision, which continues to fall behind technically in the SIEM/Log Management space.
  • Juniper/Altor: This one is fresh in our minds because it went down recently, but buying Altor was probably as much about Juniper getting access to the VMsafe API as about buying a spot in a market that isn’t one yet. How else do you justify paying in the neighborhood of 30x bookings? You can check out our pithy Incite on the deal (it’s bullet #3).

I’m sure by this point you want to know what’s going to happen in 2011. So let’s bust out the Magic 8 Ball and figure it out:

Will there be more deals in 2011 than 2010? My sources say no.

Will there be a bunch of fire sales? Without a doubt.


Incite 12/22/2010: Resolution

Pretty much every year, I spend the winter holidays up north visiting the Boss’s family. I usually take that week and try to catch up on all the stuff I didn’t get done, working frantically on whatever will launch right when everyone returns from their December hangover. But as I have described here, I’m trying to evolve. I’m trying to take some time to smell the proverbial roses, and appreciate things a bit. I know, quite novel.

I have to say, this has been a great year on pretty much all fronts. There was a bit of uncertainty this time last year, as I had left my previous job and we were rushing headlong into announcing the new Securosis. There were a lot of moving pieces and it was pretty stressful, with legal documents relating to the new company floating around, web sites to update, and pipelines to build. A year later, I can say things are great. I’ve told them each collectively, but I have to thank Rich and Adrian for letting me join their band of merry men. Also a big thanks to our contributors (Mort, Gunnar, Dave, and Jamie), who keep us on our toes and teach me something every time we talk. I won’t forget our editor Chris either, who actually helps to make my ramblings somewhat Strunk & White ready. I also want to thank all of you, for reading my stuff and not throwing anything at me during speaking gigs. I do appreciate that.

Mentally, I’m in a good place. Sure, I still have some demons, but who doesn’t? I keep looking to exorcise each one in its turn. Physically, I’m in pretty good shape. Probably the best shape I’ve been in since I graduated college. Yes, I had dark hair back then. The family is healthy and they seem to still like me. I have nothing to complain about on that front. Yes, I’m very lucky.

I’m also very excited for 2011. Rich alluded to our super sekret plans for world domination, and things are coming together on that front. No, it’s not fast enough, but when we get there it will be great. I’m looking forward to fleshing out my research agenda and continuing to work with our clients. Since this is the last Incite of 2010, I guess I’ll divulge my 2011 resolution: Don’t screw it up.

No, I’m not kidding. There will be ups and there will be downs. I expect that. But if I can look back 12 months from now and feel the way I do today, it will have been a fantastic year. I hope you have a safe and happy holiday season, and there will be plenty of Incite in 2011. Until then…

-Mike

Photo credits: “Resolution” originally uploaded by sneeu

Incite 4 U

Gawking at Security 101: Oh, how the PR top spins. After spending last week washing egg off their faces due to the massive pwnage Gawker suffered, now they are out talking about all the cool stuff they’ll do to make sure it doesn’t happen again. Like requiring employees to log into Google Apps with SSL. And telling them not to discuss sensitive stuff in chat rooms. Yeah, that’s the answer. Just be thankful that sites like Gawker don’t collect much information. Though we should commend folks like LinkedIn and Yahoo, who used the list of suckers, I mean commenters, and reset their passwords automagically. I’ve had issues with LinkedIn’s security processes before, but in this case they were on the ball. – MR

Fear the PM: Do project managers need to “lighten up” and give away some control over development projects? Maybe. Are they being forced to provide transparency into their projects because SaaS management tools allow access to outsiders? Mike Vizard and LiquidPlanner CEO Charles Seybold seem to think so.
Personally I think it’s total BS. With Agile becoming a standard development methodology, the trend is exactly the opposite. Agile with Scrum, by design, shields development efforts from outside influencers, leaving product managers more in control of feature sets than ever before. They are the gatekeepers. And when you manage tasks by 3×5 card and prioritize with Post-It notes, you don’t exactly provide transparency. Collaboration and persuasion are interpersonal skills, not an app. I recommend that project managers leverage software for task tracking over and above task cards, but I don’t think some cloud-based nag-ware is going to subjugate a skilled PM. – AL

Not your daddy’s DDoS: I’ve spent a heck of a lot of time explaining denial of service attacks to the media over the past few weeks, for some odd reason. While explaining straightforward flooding attacks is easy enough, I found it a bit tougher to talk about more complex DDoS. To be honest I don’t know why I tried, because for the general press it doesn’t really matter. But one area I never really covered much is application level DDoS, where you dig in and attack resource-intensive tasks rather than the platform. Craig Labovitz of Arbor Networks does a great job of explaining it in this SearchSecurity article (near the bottom). Definitely worth a read. – RM

No slimming the AV pig: Ed over at Security Curve makes the point (again) that the issues around AV, especially performance, aren’t going to get better. Sure, the vendors are working hard to streamline things, and for the most part they are making progress. Symantec went from a warthog to a guinea pig, but it’s still a pig. And they can’t change the math. No matter how much you put into the cloud, traditional AV engines cannot keep up. Reputation and threat intelligence help, but ultimately this model runs out of gas. Positivity, anyone? Yes, I’m looking for white listing to make slow and steady inroads in 2011. – MR

Live with it: This Incite isn’t a link, but a note on a call I had with a vendor recently (not a client) that highlighted 2 issues.


React Faster and Better: New Data for New Attacks, Part 1

As we discussed in our last post on Critical Incident Response Gaps, we tend to gather too much of the wrong kinds of information, too early in the process. To clarify that a little bit: we are still fans of collecting as much data as you can, because once you miss the opportunity to collect something, you’ll never get another chance. Our point is that there is a tendency to try to boil the ocean with analysis of all sorts of data. That causes failure, and has plagued technologies like SIEM, because customers try to do too much too soon. Remember, the objective from an operational standpoint is to react faster, which means discovering as quickly as possible that you have an issue, and then engaging your incident response process. But merely responding quickly isn’t useful if your response is inefficient or ineffective, which is why the next objective is to react better.

Collecting the Right Data at the Right Time

Balancing all the data collection sources available today is like walking a high wire, in a stiff breeze, after knocking a few back at the local bar. We definitely don’t lack for potential information sources, but many organizations find themselves either overloaded with data or missing key information when it’s time for investigation. The trick is to realize that you need three kinds of data:

  • Data to support continuous monitoring and incident alerts/triggers. This is the stuff you look at on a daily basis to figure out when to trigger an incident.
  • Data to support your initial response process. Once an incident triggers, these are the first data sources you consult to figure out what’s going on. This is a subset of all your data sources. Keep in mind that not all incidents will tie directly to one of these sources, so sometimes you’ll still need to dive into the ocean of lower-priority data.
  • Data to support post-incident investigation and root cause analysis. This is a much larger volume of data, some of it archived, used for the full in-depth investigation.

One of the Network Security Fundamentals I wrote about early in the year was called Monitor Everything, because I fundamentally believe in data collection and driving issue identification from the data. Adrian pushed back pretty hard, pointing out that monitoring everything may not be practical, and focus should be on monitoring the right stuff. Yes, there is a point in the middle. How about collect (almost) everything and analyze the right stuff? That seems to make the most sense.

Collection is fairly simple. You can generate a tremendous amount of data, but with the log management tools available today, scale is generally not an issue. Analysis of that data, on the other hand, is still very problematic; when we mention too much of the wrong kinds of information, that’s what we are talking about. To address this issue, we advocate segmenting your network into vaults, and analyzing traffic and events within the critical vaults at a deep level.

So basically it’s about collecting all you can within the limits of reason and practicality, then analyzing the right information sources for early indications of problems, so you can then engage the incident response process. You start with a set of sources to support your continuous monitoring and analysis, followed by a set of prioritized data to support initial incident management, and close with a massive archive of different data sources, again based on priorities.
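Here is a minimal sketch of that tiering in Python, with hypothetical source names – the actual sets depend entirely on your environment and which vaults you deem critical:

# Everything lands in the archive; smaller prioritized subsets feed
# initial response and continuous monitoring respectively.
CONTINUOUS_SOURCES = {"firewall", "ids", "netflow", "dlp"}
INITIAL_RESPONSE_SOURCES = CONTINUOUS_SOURCES | {"auth", "dns", "proxy", "endpoint"}

def route_event(event, archive, response_index, monitor_queue):
    """Route a collected event (a dict with at least a 'source' key)."""
    archive.append(event)  # collect (almost) everything
    if event["source"] in INITIAL_RESPONSE_SOURCES:
        response_index.append(event)  # first places to look when an incident fires
    if event["source"] in CONTINUOUS_SOURCES:
        monitor_queue.append(event)  # drives the daily alerts/triggers

archive, response_index, monitor_queue = [], [], []
route_event({"source": "netflow", "bytes": 512}, archive, response_index, monitor_queue)

In practice the “archive” is your log management tier and the “monitor queue” is whatever feeds your SIEM, but the point is the same: one collection stream, several analysis priorities.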
Continuous Monitoring

We have done a lot of research into SIEM and Log Management, as well as advanced monitoring (Monitoring up the Stack). That’s the kind of information to use in your ongoing operational analysis. For those vaults (trust zones) you deem critical, you want to monitor and analyze:

  • Perimeter networks and devices: Yes, the bad guys tend to be out there, so they need to cross the perimeter to get to the good stuff. So we want to look for issues on those devices.
  • Identity: Who is as important as what, so analyze access to specific resources – especially within a privileged user context.
  • Servers: We are big fans of anomaly detection and white listing on critical servers such as domain controllers and app servers, so you can be alerted to funky stuff happening at the server level – which usually indicates something that warrants investigation.
  • Database: Likewise, correlating database anomalies against other types of traffic (such as reconnaissance and network exfiltration) can indicate a breach in progress. Better to know that early, before your credit card brand notifies you.
  • File Integrity: Most attacks involve some change to key system files, so by monitoring their integrity you can pinpoint when an attacker is trying to make changes. You can even block these attacks using technology like HIPS, but that’s another story for another day.
  • Application: Finally, you should be able to profile normal transactions and user interactions for your key applications (those accessing protected data) and watch for non-standard activities. Again, they don’t always indicate a problem, but they do allow you to prioritize investigation.

We recommend focusing on your most important zones, but keep in mind that you need some baseline monitoring of everything. The two most common sources we see for baselines are network monitoring and endpoint & server logs (or whatever security tools you have on those systems).

Full Packet Capture Sandwich

One emerging advanced monitoring capability – the most interesting to us – is full packet capture. Rich wrote about this earlier this year. Basically these devices capture all the traffic on a given network segment. Why? In a nutshell, it’s the only way you can really piece together exactly what happened, because this way you have the actual traffic. In a forensic investigation this is absolutely crucial, and provides detail you cannot get from log records. Going back to our Data Breach Triangle, you need some kind of exfiltration for a real breach. So we advocate heavy perimeter egress filtering and monitoring, to (hopefully) prevent valuable data from escaping


Infrastructure Security Research Agenda 2011—Part 4: Egress and Endpoints

In the first three posts of my 2011 Research Agenda (Positivity, Posturing and RFAB, Vaulting and Assurance), I mostly talked about how we security folks need to protect our stuff from them. You know, outside attackers trying to reach our stuff. Now let’s move on to people on the inside. Although most of us prefer to focus on folks trying to break in, it’s also important to put some forethought into protecting people inside the perimeter. Whether an employee loses a device (and compromises data), clicks the wrong link (resulting in a compromised device, giving attackers a foothold on the internal network), or even maliciously tries to exfiltrate data (WikiLeaks, anyone?), all of these attack scenarios are very real.

So we have to think from the inside out about protecting endpoint devices, because nowadays that is probably the most common way for attackers to begin a multi-faceted attack. They’ll pwn an endpoint and then use it to pivot and find other interesting stuff. Yet we also have to focus a bit on breaking one of the legs of Rich’s Data Breach Triangle – the egress leg. Unless the attackers can get the data out, it’s not a breach. So a lot of what we’ll do as part of the egress research agenda is focus on content filtering at the edge, to ensure our sensitive stuff doesn’t escape.

Endpoints

The good news is that we did a bunch of research to lay the foundation for endpoint security in 2010. Looking at 2011, we want to dig deeper and start thinking about dealing with all of these newfangled devices like smartphones, and examine technologies like application white listing, which implements our positivity model on endpoint devices.

  • Background: Endpoint Security Fundamentals
  • Endpoint Protection Suite Evolution: Using the Endpoint Fundamentals content as a base, we need to delve into what the EPP suite looks like moving forward, and how capabilities like threat intelligence, HIPS, and cloud services will remake what we think of as the endpoint suite.
  • Application White Listing: Where, When, and Why? We’ve written a bit about application white listing concepts, but it’s still not necessarily a general purpose control – yet. So we’ll dig into specific use cases where white listing makes sense, and some deployment advice to make sure your implementation is successful (and avoids breaking too much).
  • Mobile device security: There is a lot of hype but not much by way of demonstrable weaponized threats to our smartphones, so we’ll document what you need to know and what to ignore, and discuss some options for protecting mobile devices.
  • Quick Wins with Full Disk Encryption: Everyone is buying FDE, but how do you choose it, and how do you get quick value?

Again, lots of stuff to think about for protecting endpoints, so we’ll be pretty busy on these topics in 2011.

Egress

Egress filtering on the network will be covered by the Positivity research. But as Adrian mentions in his research agenda, there is plenty of content that goes out of your organization via email and web protocols, and we need to filter that traffic (before you have a breach).

  • Understanding and Selecting DLP, v2: Rich’s recent update to this paper is a great base, and we may dig into specific endpoint or gateway DLP to prevent critical content from leaving the organization – which plays directly into this egress theme.
  • Web Security Evolution: Web filters and their successors have been around for years, so what is the future of the category, and how can/should customers with existing web security implementations move forward? And how will SaaS impact how customers provide these services?
  • Email Security Evolution: Very similar conceptually to web security evolution, but of course the specifics are very different.

So there you have it. Yes, I’ll be pretty busy next year, and that’s a good thing. I’m still looking for feedback on these ideas, so if one (or more) of these research projects resonates, please let me know. Or if some things don’t, that would be interesting as well.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.