
Whitepaper Released: Quick Wins with Data Loss Prevention

Two of the most common criticisms of Data Loss Prevention (DLP) that come up in user discussions are a) its complexity and b) the fear of false positives. Security professionals worry that DLP is an expensive widget that will fail to deliver the expected value – turning into yet another black hole of productivity. But when used properly, DLP provides rapid assessment and identification of data security issues not available with any other technology.

We don't mean to play down the real complexities you might encounter as you roll out a complete data protection program. Business use of information is itself complicated, and no tool designed to protect that data can simplify or mask the underlying business processes. But there are steps you can take to obtain significant immediate value and security gains without blowing your productivity or wasting important resources. In this paper we highlight the lowest hanging fruit for DLP, refined in conversations with hundreds of DLP users. These aren't meant to cover the entire DLP process, but to show you how to get real and immediate wins before you move on to more complex policies and use cases.

I like this paper, and not just because I wrote it. It is short, to the point, and focused on deriving immediate value as opposed to kicking off some costly and complex process. This paper is the culmination of the Quick Wins in DLP blog series I posted, all compiled together with a pretty picture or two. Special thanks to McAfee for licensing the report. You can download the paper directly, or visit the landing page, where you can leave comments or criticism and track revisions.


Database Security Fundamentals: Auditing Events

I realized I made a mistake in my last post. In Auditing Transactions, while attempting to simplify database auditing, I instead made it more complicated. What I want to do is differentiate database auditing through the native database transactional audit trail from other forms of logging and event collection. The native database audit trail provides a sequence of associated events, and records whether and when those events were committed to disk. Simple events do not provide the same degree of context and are not as capable of conveying database state. If you need application context and state – perhaps for Sarbanes-Oxley – you need the audit log. Make no mistake: there are simpler and less invasive ways of collecting data, and they provide an alternative – in some cases clearer – picture of events. For example, it's a heck of a lot easier to get data from syslog than from native audit. If all you are interested in is when patches are installed, syslog is a better source of information. If you are only interested in failed login attempts, a login trigger is far more efficient.

The entire purpose of this Database Security Fundamentals series is to create a set of steps, each of which can be performed in about an afternoon, to secure your database. I believe the entire sequence can be completed in a week. My goal is to provide clarity and simplicity for database and IT administrators who do not have time to learn and deploy advanced security measures, and are instead interested in raising the security bar without spending weeks or months on the project. So I want to step back and clarify that the last post is aimed specifically at those who must use native database audit, primarily to populate reports or fulfill regulatory controls, with security as a secondary goal. And yes, compliance of some sort has become a fundamental requirement for the majority of DBAs. For the rest of you, we'll dig into simple event collection for security events. If you are interested in a few simple events, but not enough to justify the burden of audit, this phase will be more useful to you.

Define events: The goal here is to figure out what you need, or what others want from you. Installation of patches, alteration of specific permission settings, granting of public roles, insertion of stored procedures, ad hoc database access, use of management tools like Toad, adding views, 3 or more failed login attempts, and just about anything involving DBA capabilities are common concerns. These are all simple events, with frequency rates low enough not to overwhelm you.

Determine collection methods: Based on which events you want, select a data collection method or two that gathers the data you need. There are a lot of ways to gather event data: system tables, command line tools, triggers, syslog, trace options, and so on.

Write scripts: To make this easier on yourself, script your queries, turn them into stored procedures, or both. Create the scripts to collect the events and, if needed, filter out what you don't need. Use whatever scripting language you are comfortable with. Keep in mind that it is often useful to have the scripts make follow-up queries against other data sources; being able to recursively gather additional information based on simple if-then or where comparisons will save you a lot of work.
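As a concrete but deliberately generic example, here is a minimal sketch of such a collection script in Python, pulling one of the events listed above (repeated failed logins) through a standard DB-API connection. The view and column names are placeholders rather than any real platform's catalog, and the email alert and file paths are assumptions you would adapt to your environment.

```python
#!/usr/bin/env python3
# Minimal sketch of a nightly collection script (assumptions: a DB-API driver
# for your platform, and a read-only account that can see the relevant system
# views -- the view and column names below are placeholders, not any real
# platform's catalog).
import csv
import smtplib
import sys
from email.message import EmailMessage

FAILED_LOGIN_QUERY = """
    SELECT event_time, username, client_host
    FROM   security_event_view               -- placeholder view name
    WHERE  event_type = 'FAILED_LOGIN'
      AND  event_time > CURRENT_TIMESTAMP - INTERVAL '1' DAY
"""
ALERT_THRESHOLD = 3  # flag accounts with 3 or more failures, per the list above


def collect(conn, outfile="failed_logins.csv"):
    """Run the query, append the results to a CSV file, and return the rows."""
    cur = conn.cursor()
    cur.execute(FAILED_LOGIN_QUERY)
    rows = cur.fetchall()
    with open(outfile, "a", newline="") as fh:
        csv.writer(fh).writerows(rows)
    return rows


def alert_if_needed(rows, mailhost="localhost", rcpt="dba@example.com"):
    """Email yourself only when a single account trips the threshold."""
    counts = {}
    for _time, user, _host in rows:
        counts[user] = counts.get(user, 0) + 1
    noisy = {u: n for u, n in counts.items() if n >= ALERT_THRESHOLD}
    if not noisy:
        return
    msg = EmailMessage()
    msg["Subject"] = "Repeated failed logins: " + ", ".join(noisy)
    msg["From"] = rcpt
    msg["To"] = rcpt
    msg.set_content(repr(noisy))
    with smtplib.SMTP(mailhost) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    # conn = your_driver.connect(...)  # e.g. cx_Oracle, pyodbc, psycopg2
    print("Wire up a DB-API connection for your platform first.", file=sys.stderr)
```

Scheduled from cron under a locked-down account, something like this covers the Implement step described below with very little ongoing effort.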
User permission mapping is one such example, as the groups and roles a user belongs to could require a complex set of queries, depending on which platform you are using. You may also want to send yourself an email for more critical events that need urgent attention.

Implement: Deploy your scripts and test. Annoying though it may be, you will want to set up a specific user account with just enough privileges to perform the data collection, and secure these scripts so unprivileged users cannot use or modify them. You will also want to set up a secure place to dump the results, and if necessary archive and remove files so they don't take up too much disk space.

Set review schedule: The data you collect is only valuable if you use it, so get in the habit of reviewing the results for anomalies. If security is your goal, plan on spending a few minutes every day on this, and set alerts on the one or two events that absolutely, positively look suspicious.

Archive the scripts and document: Keep a copy of the scripts, and notes on what you implemented, for future reference.

For a single database I find that I can create and test the scripts in an afternoon, plus another few hours to set up the user accounts, cron jobs, and archive scripts. After that the entire process is pretty much self-sustaining, as long as you stay on top of event review. Some of you who are the lone DBA at your job will consider this step in the Fundamentals series silly. I have had DBAs ask me, "Why would you set up a script to track your own work? Why would I send myself a reminder that I just added a table view?" Remember that this is meant to catch stuff that should not be happening, or events you were not aware of – like someone else in IT making changes to be 'helpful', or an attacker trying to compromise the database. This afternoon's effort will seem worth it when you have your first 'WTF?' moment a few months from now, when some web programmer changes the database without telling you.

More advanced methods: I intended to leave database activity monitoring out of this discussion. Monitoring is an advanced database security option, and does not fit into this simpler Fundamentals series. But those tools provide far more advanced data collection and storage capabilities, policies, and reporting. If the number of events to collect, or of databases grows, or if the policies and reports you


Who DAT McAfee Fail?

There are a lot of grumpy McAfee customers out there today. Yesterday, little Red issued a faulty DAT file update that mistakenly flagged svchost.exe as a bad file and blew it away. This, of course, results in all sorts of badness on Windows XP SP3, causing an endless reboot loop and rendering those machines inoperable. Guess they forgot the primary imperative: do no harm…

To McAfee's credit, they did own the issue and made numerous apologies. Personally, I think the apology should have come from DeWalt, the CEO, on the blog. But they aren't making excuses and are working diligently to fix the problem. That is little consolation for those folks spending the next few days cleaning up machines and implementing the fix. There is plenty of coverage out there explaining the issue, how it happened, and how to fix it, from LifeHacker and McAfee. You'll also get some perspective on how this provided an opportunity to test those incident response chops. What I want to talk about is understanding the risk profile of anti-malware updates, and whether and how your internal processes should change in the face of this problem.

First off, no one is immune to this type of catastrophic failure. It happened to be McAfee this time, but anti-malware products work at the lowest layers of the operating system, and a faulty update can really screw things up. Yes, the AV vendors have mature QA processes, which is why you don't see this stuff happening much at all. But it can happen, and likely will again at some point. Yes, you could decide to ditch McAfee, although I'd imagine they'll be retooling their QA processes to ensure this type of problem doesn't recur. But that's a short-term emotional reaction. The real question revolves around how to deal with anti-malware updates. It's always been about balancing the speed of detection against the risk of unintended consequences (breaking something). You basically have three choices for how to deal with anti-malware updates:

Automatic updates – This represents the common status quo. The AV vendor issues a release; you get it and install it with no testing or any other mechanisms on your end. To be clear, the vast majority of end users are in this bucket.

Test first – You can take the update and run it through a battery of tests to see if there is a problem before you deploy. This option is pretty resource intensive, because you tend to get multiple updates per day from the vendor; it also extends the window of vulnerability by the length of your testing and acceptance pipeline.

Wait and listen – The last approach is basically to wait a day or two before installing updates. You peruse the message boards and other sources to see if there are any known issues. If not, you install. This also extends the window of exposure, but would have avoided the McAfee issue.

There is no right answer. Most organizations opt for the quickest protection possible, which means automatic updates to minimize the window of vulnerability. But it gets back to your organization's threshold for risk. I don't think the "test first" option is really viable for most organizations – there are too many updates. I do think "wait and listen" can make sense for the vast majority of companies out there. But how does wait and listen work against a zero-day attack? In this case it still works okay, because you can always do a manual test, or take the risk of sending out an update before the waiting period is over. And in reality, the signature updates for a 0-day usually take 8-18 hours to arrive anyway.
There is a risk you might get nailed in the time between when an update arrives and when you deploy it. In that case, hopefully you've managed expectations with the senior team regarding this scenario. I'd be remiss if I didn't at least mention the need for layers beyond anti-malware, especially when deciding whether to install an AV update. There are alternative mitigations (at the perimeter or on the network, for example) for most 0-day attacks, which could lessen the impact and spread of an attack. Those can often be put in place immediately, and are easier to reverse than an install that touches every desktop.

So it's unfortunate for McAfee, and they'll be cleaning up the mess (in market perception and customer frustration) for a while. And as I told the AP yesterday, fortunately this kind of issue is very rare. But when these things do happen, it's a train wreck.
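As an aside, "wait and listen" doesn't have to be a manual chore. Here is a minimal sketch of a promotion gate: updates land in a staging directory and are only copied to the share your endpoints pull from once they have aged, unless someone drops a HOLD file after the message boards light up. The paths, the 24-hour soak window, and the HOLD convention are all assumptions, not any vendor's documented mechanism.

```python
#!/usr/bin/env python3
# Minimal sketch of a "wait and listen" gate for signature updates.
# Assumptions: a staging directory fed by the vendor, a deploy share the
# endpoints pull from, and a 24-hour soak window. None of this is vendor
# functionality -- adapt names and paths to your environment.
import shutil
import time
from pathlib import Path

STAGING = Path("/updates/staging")   # where the vendor feed drops new files
DEPLOY = Path("/updates/deploy")     # what your endpoints actually pull from
SOAK_HOURS = 24                      # the "wait" in wait-and-listen
HOLD_FLAG = STAGING / "HOLD"         # touch this file to freeze promotion


def promote_aged_updates():
    if not STAGING.is_dir() or not DEPLOY.is_dir():
        print("Staging or deploy directory missing -- nothing to do.")
        return
    if HOLD_FLAG.exists():
        print("HOLD flag present -- not promoting anything.")
        return
    cutoff = time.time() - SOAK_HOURS * 3600
    for update in STAGING.iterdir():
        if update.name == "HOLD" or not update.is_file():
            continue
        if update.stat().st_mtime <= cutoff:
            shutil.copy2(update, DEPLOY / update.name)  # old enough: promote
            print(f"Promoted {update.name} after the {SOAK_HOURS}h soak window.")
        else:
            print(f"Holding {update.name}; still inside the soak window.")


if __name__ == "__main__":
    promote_aged_updates()  # run from cron every hour or so
```

If a genuine 0-day shows up, you can still promote an update early by hand; the gate just makes waiting the default rather than an afterthought.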


Incite 4/21/2010: Picky Picky

My kids are picky eaters. Two out of the three, anyway. XX1 (oldest daughter) doesn't like pizza or hamburgers. How do you not like pizza or hamburgers? Anyway, she let us know over the weekend that her favorite foods are cake frosting and butter. Awesome. XY (boy) is even worse. He does like pretty much all fruits and carrots, but will only eat cheese sticks, yogurt, and some kinds of chicken nuggets – mostly the Perdue brand. Over the weekend, the Boss and I decided we'd had enough. Basically, he asked for lunch at the cafe in our fitness center and said he'd try the nuggets. They are baked and relatively healthy (for nuggets, anyway). The Boss warned him that if he didn't eat them there would be trouble. But he really wanted the chips that came with the nuggets, so he agreed. And, of course, he decided he wasn't going to eat the nuggets. And trouble did find him. We basically dictated that he would eat nothing else until he finished two out of the three nuggets. But he's heard this story before, and he'd usually just wait us out. And to date, that was always a good decision, because eventually we'd fold like a house of cards. What kind of parents would we be if we didn't feed the kid?

So we took the boy to his t-ball game, and I wouldn't let him have the mini-Oreos and juice bag they give as snacks after the game. He mentioned he was hungry on the way home. "Fantastic," I said. "I'll be happy to warm your nuggets when we get home." Amazingly enough, he wasn't hungry anymore when we got home. So he went on his merry way and played outside. It was a war of attrition, and he is a worthy adversary. But we were digging in. If I had to lay odds, it was 50-50 best case. The boy just doesn't care about food. He must be an alien or something. At dinnertime, he came in and said he was really, really hungry and would eat the nuggets. Jodi dutifully warmed them up and he dug in. Of course, it took him 20 minutes to eat two nuggets, and he consumed most of a bottle of ketchup in the process. But he ate the two nuggets and some carrots, and was able to enjoy his mini-Oreos for dessert. The Boss and I did a high five, knowing that we had stood firm and won the battle. But the war is far from over. That much I know. – Mike

Photo credits: "The biggest chicken nugget in the known universe" originally uploaded by Stefan

Incite 4 U

From fear, to awareness, to measurement… – Last week I talked about the fact that I don't have enough time to think. Big thoughts drive discussion, which drives new thinking, which helps push things forward. Thankfully we security folks have Dan Geer to think, present cogent, very big thoughts, and spur discussion. Dan's latest appeared in the Harvard National Security Journal and tackles how the national policy on cyber-security is challenged by definition. But Dan is constructive as he dismantles the underlying structure of how security policies get made in the public sector, and explains why it's critical for nations and industries on a global basis to share information – something we are crappy at. Bejtlich posted his perspectives on Dan's work as well. But I'd be remiss if I didn't at least lift Dan's conclusion verbatim – it's one of the best pieces of writing I've seen in a long, long time… "For me, I will take freedom over security and I will take security over convenience, and I will do so because I know that a world without failure is a world without freedom. A world without the possibility of sin is a world without the possibility of righteousness.
A world without the possibility of crime is a world where you cannot prove you are not a criminal. A technology that can give you everything you want is a technology that can take away everything that you have. At some point, in the near future, one of us security geeks will have to say that there comes a point at which safety is not safe." Amen, Dan. – MR

Phexting? – Researchers over at the Intrepidus Group published a new vulnerability for Palm WebOS devices (the Pre) that works over SMS (text messaging). These are the kinds of vulnerabilities that have kept me up at night since I started using smart phones. As with Charlie Miller's iPhone exploit from last year, sending a malicious text message could trigger actions on the phone. Charlie's attack was actually more complex (and concerning) since it operated at a lower level, but none of these sound fun. For those of you who don't know, an SMS is limited to 160 characters of text, but modern phones use it to support more complex actions – like photo and video messages. Those work by specially encoding the SMS message with the address of the photo or video, which the phone then automatically downloads. SMS messages are also used to trigger a variety of other actions on phones without user interaction, which opens up room for manipulation and exploits… all without anything for you to notice, except maybe the radiation burns in your pocket. – RM

Time to open source Gaia – With additional details coming out regarding the social engineering/hack on Google, we are being told that the source code to the Gaia SSO module was a target, and that social engineering against Gaia team members had been ongoing for two years. While the attackers may not have succeeded in inserting a Trojan, easter egg, or other backdoor into the source code, the thieves will certainly perform a very thorough review looking for exploitable defects. If I ran Google I would open up the source code to the public and ask for help reviewing it for defects. I can't help laughing at


Google: An Example of Why Single Sign on Sucks

According to the New York Times, when Google was hacked during the recent China incident, their single sign on system was specifically targeted. The attackers may have accessed the source code, which gives them some good intel to look for other vulnerabilities. There's speculation they could have also added a back door to the source code, but I suspect that even if they did, given how quickly Google detected the intrusion, any back doors probably didn't make it into backups and might be easy for Google to spot and remove.

I've never been a fan of single sign on (SSO). Its only purpose is to make life easier for the users at the expense of security. All you need to do is compromise one password and you get access to everything. It's okay if you use strong authentication (like tokens), but crap if you run it solo. Not that we can expect all our users to remember 25 complex passwords – that's why passwords alone as an authentication mechanism also suck. If you can't roll out strong authentication, I tend to recommend reduced sign on: instead of one password, you have the user remember somewhere between 3 and 5, to compartmentalize access. Drop the 90-day rotations, because they only make life harder without actually improving security, and encourage passphrases rather than the silly 10-character, must-have-a-number, non-alpha-character, and 3-Wingdings-symbols-drawn-in-crayon crap (a back-of-the-envelope comparison is at the end of this post).

Personally I use a password vault (1Password). Technically it's close to SSO, in that if someone gets the password to my vault I'm in deep trouble, but to do that they would need to take over my personal system, and it's pretty much game over at that point anyway. I don't have to worry that someone who compromises a web forum will use my password there to access my bank account, since they all use different credentials, and I don't even know what they all are.

Update: Two points I forgot. I don't do much with Google, but I do have different accounts set up for when I need to compartmentalize services. And my bank passwords are not in 1Password – those I keep memorized, because I'm a paranoid freak.
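To put a rough number on the passphrase point above, here is a quick back-of-the-envelope entropy comparison. The character-set size and word-list size are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope entropy comparison behind the passphrase advice above.
# The character-set size and word-list size are assumptions for illustration.
import math

def bits(choices_per_symbol, length):
    """Entropy in bits for `length` independent picks from a set of choices."""
    return length * math.log2(choices_per_symbol)

# A 10-character "complex" password drawn from roughly 72 printable characters:
print(f"10-char complex password: ~{bits(72, 10):.0f} bits")

# Five common words chosen at random from a 10,000-word list:
print(f"5-word passphrase:        ~{bits(10_000, 5):.0f} bits")
```

Roughly 62 bits versus 66 bits: the five-word phrase is at least as strong as the crayon-Wingdings monstrosity, and people can actually remember it.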


FireStarter: You Don’t Need Central Key Management

If you are using encryption, somewhere you have encryption keys. Where you store them, and how they are managed and shared, are legitimate concerns. It is fashionable to store all keys in a single centralized key management server. As the name implies, this means putting all of your keys, of different types and for multiple use cases, into a single key management server. Rich likes to call these 'uber' key managers, which handle any and all key functions; they are distinct from external key management servers that unify instances of a single application, or that provide key services across the disks in your storage array. Conceptually, a single product that handles all my key needs from a unified interface sounds great. But the real question is: why do you need it?

Central key management is not simplified key management. Central key management requires architectural and deployment changes. Consolidation of key storage and use policies does not ensure easier management, but does mean increased cost. Few people want or need centralized key management. Putting all your keys from every application or service into a single monolithic key management store offers few advantages, and creates a number of problems of its own. The implication is that a central server will offer easier management, increased redundancy, and greater functionality; but these are often illusory benefits, based on solving a problem you did not have to begin with. In practice the internal and external key services that came with the products you already own are likely not only sufficient, but better. Here's why:

No reason to share keys – Databases, disks, applications, file systems, and wherever else you encrypt seldom (if ever) need to share keys across these different services. Even if the encryption algorithm, key type, and key size are all consistent, there is no need to share keys between your tape drive and your web application server. Using encryption to provide data integrity and privacy is a common goal, but the use cases and technical constraints are radically different.

Redundant – Why use a central key manager if key management is already built into your application? Internal key management is built into most applications and cryptographic systems, such as storage products and file-level encryption tools. External key management – for products that really need external support for good key security and proper separation of duties (SoD) – is provided by application libraries and database encryption products. Failover, backup, management interfaces, rotation, and cipher strength are all common features, so why centralize? Multiple services mean more interfaces to learn, but inherently provide SoD and focused policy management.

Cost – Central key management servers are standalone, dedicated products. They excel in areas such as key security, ease of use, and key sharing. But they are still an additional investment.

Policy management – A single management console sounds like a great convenience, but I don't always want the same policies across dissimilar applications and use cases. In fact, I usually want different key lengths, different rotation schedules, and different ciphers, depending on the data I am protecting, and prefer the granularity to specify them at the level of an individual use case.

Single administration console – Having a single location to manage keys is conceptually useful. It may actually be useful if you have a very large number of users, or must distribute keys and data across a large organization.
Most of you reading this, however, work for small shops, and the one or two areas where you have deployed encryption do not require centralization. Few of you are at organizations large enough to worry about thousands of users, each with hundreds of keys – and thus to need central key management to address data access issues across a dispersed environment.

Key rotation – Having a central key server to automate key management, especially the complexity of key rotation, is a common motivator for central services. Rotation, or key cycling, is a common feature whereby key management products periodically issue new keys, on the premise that with sufficient time and effort someone will be able to discover the encryption keys in use. Theoretically you would issue a new key and then re-encrypt all data under the new key, but in practice it would take months or even years to re-encrypt everything, as the data sets are simply too large, and the media might even be off-line or off-site. In this scenario data is only re-encrypted under new keys opportunistically, when it's rewritten (and perhaps when it's reread). But there is no guarantee that data will be re-encrypted. With every key rotation cycle a new set of keys is generated, and the old ones must be retained to decrypt older data. Over time data will be encrypted under so many different keys that you must use a key manager just to keep track of what's what. It's a side effect of the encryption scheme, and for a modicum of extra security you get a bloated key server. Better to keep this compartmentalized than centralized.

Don't go looking for central key management when external key management is all you need. Central key management is occasionally necessary – most often for existing systems with really bad built-in key management, geographically dispersed servers that require key sharing, or thousands of users each with multiple keys. A single point of management is very much a secondary advantage, however, and should not drive your decision. So why do you think you need it? What's the advantage to you?
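To make the key-rotation bookkeeping concrete, here is a minimal sketch using the pyca/cryptography Fernet recipe. The key-ring structure and version tagging are illustrative assumptions rather than any product's storage format; the point is simply that old keys pile up until every record written under them has been re-encrypted.

```python
# Minimal sketch of key-rotation bookkeeping, using the pyca/cryptography
# Fernet recipe purely for illustration. The key ring and version tagging are
# assumptions, not any product's storage format -- they show why old keys must
# be retained until everything written under them is re-encrypted.
from cryptography.fernet import Fernet

key_ring = {1: Fernet.generate_key()}  # version -> key; old versions linger
current_version = 1


def rotate():
    """Issue a new key version; note that nothing gets deleted."""
    global current_version
    current_version += 1
    key_ring[current_version] = Fernet.generate_key()


def encrypt(plaintext: bytes):
    """New writes always use the current key; the record carries the version."""
    token = Fernet(key_ring[current_version]).encrypt(plaintext)
    return current_version, token


def decrypt(record):
    """Reads must look up whichever key version the record was written under."""
    version, token = record
    return Fernet(key_ring[version]).decrypt(token)


old_record = encrypt(b"written before rotation")
rotate()
new_record = encrypt(b"written after rotation")
assert decrypt(old_record) == b"written before rotation"  # old key still needed
assert decrypt(new_record) == b"written after rotation"
print(f"{len(key_ring)} keys retained after one rotation cycle")
```

A dedicated key manager automates exactly this sort of tracking; the question the post raises is whether the products you already own aren't doing it for you.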


Level 4 Apathy

I was perusing some of my saved links from the past few weeks and came across Shimmy's dispatch from the ETA (Electronic Transaction Association) show, which is a big conference for payment processors. As Alan summarized, here are the key takeaways from the processors:

• They view the PCI Council as not caring about Level 3 and 4 merchants – basically a shark with no teeth.
• They don't see smaller merchants as a big risk.
• They believe their responsibility ends when a 'program' is in place.

Alan uses the rest of his post to beat on the PCI scanning shylocks, who are offering services for $1 per merchant to check the vulnerability scan box and fill out the SAQ. But my perspective is a bit different. Right there, in the flesh, is the compliance-centric mindset. It's not about outcomes, it's about checking the box. And we could decide to get all upset about it, but that would be a waste of time. You see, apathy is usually the result of some kind of analysis (either conscious or unconscious). I suspect the processors have done the math and decided to focus their risk management on the places where they lose the most money – presumably the Level 1 and 2 merchants. Now I haven't seen the fraud reports from any of these folks, but I presume they do a bit of analysis on where their 'shrinkage' occurs, and if a large portion of it came from Level 3 and 4 merchants, then Mr. Market would expect them to be much more aggressive about driving real security changes at that level. But they aren't, so the only conclusion I can draw is that even though (as Alan says) 85% of the incidents take place at smaller merchants, it's probably only a small portion of the total dollars in fraud. To be clear, I could be making that up, and/or the processors could just be crappy at understanding their risk profiles. But I don't think so.

I think as an industry we really have to start thinking about the point of diminishing returns. Where is the line beyond which increasing our efforts to secure small companies just doesn't matter – where the economic benefit of reduced fraud is outweighed by the cost of making those security improvements? It seems the PCI Council is already there. Of course, the trade press will still get all aflutter about the builder or shop owner whose accounts are looted for $100K or $500K, who then goes out of business. That's sad, but it seems the card value chain is focused on stopping the $100M losses, and is willing to accept the $100K fraud. Predictably, the system is figuring out how to game the lower levels of the regulation, where the focus is non-existent. Though it probably pisses you off, you shouldn't be surprised. After all, it's just simple economics, right?
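To see the shape of that "simple economics", here is a toy expected-loss comparison. Every number is hypothetical and exists only to illustrate the calculation, not to describe any processor's actual book of business.

```python
# A toy expected-loss comparison to make the diminishing-returns argument
# concrete. Every number here is hypothetical -- the point is the shape of
# the math, not the values.
small_merchants = 1_000_000        # Level 3/4 merchants in a processor's book
incident_rate = 0.001              # assumed fraction breached per year
loss_per_incident = 100_000        # the "$100K fraud" case from the post
control_cost_per_merchant = 200    # assumed annual cost to raise the bar

expected_loss = small_merchants * incident_rate * loss_per_incident
control_cost = small_merchants * control_cost_per_merchant

print(f"Expected annual fraud loss across the tier: ${expected_loss:,.0f}")
print(f"Annual cost of the additional controls:     ${control_cost:,.0f}")
print("Worth it to the processor?", control_cost < expected_loss)
```

Plug in your own guesses; as long as the control-cost line sits above the expected-loss line, Mr. Market behaves exactly as described above.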


Friday Summary: April 16, 2010

I am sitting here staring at power supplies and empty cases, cleaning out the garage and closets, looking at the remnants of my PC-building days. I used to love going out to select new motherboard and chipset combinations, hand-selecting each component to build just the right database server or video game machine. Over the years one sad acknowledgement needed to be made: after a year or so, the only pieces worth a nickel were the power supply and the case. Sad, but you spend $1,500 and after a few months the freaking box that housed the parts is the only remaining item of value.

I was thinking about this during some of our recent meetings with clients and would-be clients. Rich, Mike, and I are periodically approached by investors to review portfolio companies. We look at both their technology and their market opportunities, and determine whether we feel the product is hitting the mark and reaching buyers with the right product and message. We are engaged either in response to mis-aligned vision between investors and company operators (shocker, I know), or more commonly to give the investors some understanding of whether the company is worth salvaging through additional investment and a change of focus. Sometimes the company has followed market trends to preserve value, but quite a few turn out to be just a box of old parts. If this is not a direct consequence of Moore's law, it should be. We see cases where technologies were obsolete before they hit the market. In a handful of instances we do find one or two worth salvaging, but not for the reasons they thought. In those cases the engineering staff was smart enough, or lucky enough, to build a deployment model or architecture that is currently relevant. The core technology? Forget it. Pitch it in the dust bin and take the write-off, because that $20M investment is spent. But the 'box' was valuable enough to salvage, invest in, or sell. It's ironic, and it goes to show how tough technology start-ups are to get right, and that luck is often better than planning. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences: Living with Windows: security. Rich wrote this up for Macworld. Rich's Putting the Fun in Dysfunctional presentation at RSA. Adrian's PCI Database Security Primer, part 1, at Dark Reading.

Favorite Securosis Posts: Mike Rothman: FireStarter: No User Left Behind. User education is critical, but it needs to be done right, with the right incentives. David Mortman: ESF: Building the Endpoint Security Program. Rich: Anti-Malware Effectiveness: The Truth Is out There. Lies, damn lies, and testing. Adrian Lane: ESF: Full Disk Encryption.

Other Securosis Posts: ESF: Endpoint Compliance Reporting. Incite 4/14/2010: Just Think. ESF: Controls: Firewalls, HIPS, and Device Control. FireStarter: No User Left Behind. ESF: Controls: Anti-Malware. Friday Summary: April 9, 2010. Database Security Fundamentals: Auditing Transactions.

Favorite Outside Posts: Mike Rothman: if security wasn't hard, everyone would do it. Lonervamp nails this one. Tools are great, but it's people who have to implement the programs. David Mortman: For Thirty Pieces of Silver My Product Can Beat Your Product. Lori MacVittie brings the Rothman-esque smackdown! Pepper: apache.org server cracked: some passwords sniffed; others brute-forced. Time to change some passwords… Rich: The Spider That Ate My Site. Okay, I hate to admit this, but I did something similar to our back end management interface.
When admin buttons like "delete" are merely links, a simple spider can cause some serious damage. Adrian Lane: Thinking About The Apache.org Attacks. Actually raises more questions than it answers. What does it say about security when software as prevalent as Apache has a flaw? SHA 256 has been around forever. And I get that fewer attacks mean less exposure of inherent flaws, but software complexity plays a similar role in masking flaws.

Project Quant Posts: Project Quant: Database Security – Change Management. Project Quant: Database Security – Patch.

Top News and Posts: The Spy in the Middle. SSL Certificate Authority trust is a hot topic lately – scary because browsers inherently trust so many CAs, and we know some of them are untrustworthy. US government finally admits most piracy estimates are bogus. Not that it will stop them. Personally, I welcome our RIAA and MPAA thought police overlords. Google to Reveal Research into Fake AV Operations. Hackers exploit new Java zero-day bug. Apple Patches Pwn2Own Flaw That Hacked Safari. Rsnake on the significance of CSRF. If you don't know who Immunet is and what they do, now's a good time to read Brian Krebs' interview with Adam O'Donnell and Adam Huger. Oracle's April CPU advisory. Nothing cataclysmic. Microsoft, on the other hand, announced a couple of critical patches, and the Authenticode bypass hack looks serious.

Blog Comment of the Week: Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to LonerVamp, in response to Security Incite: Just Think.

Nice opening post, especially with the kicker at the end that you're actually writing it on a plane! 🙂 I definitely find myself purposely unplugging at times (even if I'm still playing with something electronic) and protecting my private time when reasonable. I wonder if this ultimately has to do with the typically American concept of super-efficiency…milking every waking moment with something productive…at the expense of the great, relaxing, leisure things in life. I could listen to a podcast during this normally quiet hour in my day! And so on…

@Porky Risk…Pig: It doesn't help that every security professional shotguns everyone else's measurements…and with good reason! At some point, I think we'll have to accept that every company CEO is different, and it takes a person to distill the necessary information down for her consumption. No tool will ever be enough, just like no tool ever determines if a product will be successful.

@My dad can beat up your dad: Ugh…I have some pretty strong opinions about cyberbullying and home internet monitoring…but I tend to


Public Goods

Chris Pepper tweeted a very cool post on Why Content is a Public Good. The author, Milena Popova, provides an economist's perspective on market forces and digital goods. Her premise is that in economic terms, many types of electronic content are "public goods" – a technical term for goods with effectively infinite supply and no good way to control consumption. She makes the economic concepts of 'rival' and 'excludable' very easy to understand, and by breaking the issue down into rudimentary components, makes a compelling argument that content is a public good:

It means that old business models based on content being a club good simply don't work. It means we have to rethink our relationship with content – as creators, as distributors and as consumers. It means that there are a lot of giants in the content distribution industry whose livelihoods (profit margins) are being pulled out from under them faster than they can say "illegal downloads", and they are fighting it. Of course they're fighting it. They've had an incredibly profitable business model for about a century and suddenly they don't. Let's face it, human beings don't like change at the best of times, and we sure as hell don't like it when it means less cash in our pockets.

I have written many posts on how economics affect DRM, the RIAA, and 'piracy', and on the difference between actual security and security marketing, so I won't rehash those subjects here. But note the common theme: a busted business model is the root of the problem. Right now I want to stay away from some of the negativity of those posts, and instead focus on the economic drivers. Ms. Popova does a much better job than I do of isolating the underlying forces, and discusses the factors in a way that helps us begin to visualize possible solutions.

A lot of people have a hard time with the concept of free, and with how you can actually make money in a world with so much free stuff. In a capitalist society we all have trouble with this. I talk to people in IT who still don't think Linux and Java are viable technologies, or that anyone could make money with those products. But the availability of free stuff requires you to think a little differently about value – fewer people will pay money for the everyday and ordinary stuff because they don't have to, but they will pay for things they perceive as special. In fact, I don't think I fully grasped the concept and its implications until I started working at Securosis. We are a research company that gives away most of our products for free, but charges for services and engagements.

One area where I was at odds with Popova was the concept of "price discrimination". From my perspective this looks more like the market being able to set the price, but doing so far more efficiently: person to person, item by item, and adjusted over time. This is a very cool concept if you think about something like television: if you paid channel by channel, how many channels would you pay for? You have 400 or so, but I bet very few would get your hard-earned dollars. The NFL knows this, as football not only drives huge ad revenue, but also single-handedly drives the bulk of hi-def television sales and additional add-on packages. If it were not for bundling into programming packages, many (most?) other channels would not be able to survive. All in all, one of the better posts I have seen on the problems of dealing with consumer media.


ESF: Endpoint Incident Response

Nowadays the endpoint is the path of least resistance for the bad guys to get a foothold in your organization, which means we need a structured plan and process for dealing with endpoint compromises. The high-level process we'll lay out here focuses on confirming the attack, containing the damage, and then performing a post-mortem. To be clear, incident response and forensics are very specialized disciplines, and hairy issues are best left to the experts. That being said, there are things you as a security professional need to understand to ensure the forensics guys can do their jobs.

Confirming the attack

There are lots of ways your spidey-sense should start tingling that something is amiss. Maybe it's the user calling up and saying their machine is slow. Maybe it's your SIEM detecting some weird log records. It could be your configuration management system reverting inexplicable changes or noting the presence of strange executables. Or perhaps your network flow analysis shows some reconnaissance activity from the device. A big part of the security management process is being able to fire alerts when something suspicious is happening. Then we make like bloodhounds and investigate the issue. We've got to find the machine and isolate it. Yes, that usually means interrupting the user and 'inviting' them to grab a cup of coffee while you figure out what a mess they've made. The first step is likely to do a scan and compare against your standard builds (you remember the standard build, right?). Basically we look for obvious changes that cause issues. If it's not an obvious issue (think tons of pop-ups), then you've got to go deeper. This usually requires forensics tools, including tools to analyze disks and memory for corruption or other compromise. There are lots of good tools – both open source and commercial – available for your forensics toolkit. We do recommend you take a course in basic forensics as you get started, for a simple reason: you can really screw up an investigation by doing something wrong, in the wrong order, or using the wrong tools. If it's truly an attack, your organization may want to prosecute at some point, and that means you have to maintain chain of custody on any evidence you gather. You should consult a forensics expert, and probably your general counsel, to identify what you need to gather from a prosecution standpoint.

Containing the damage

"Houston, we have a problem…" Yup, your fears were justified and an endpoint or 200 have been compromised – so what to do? First off, you should inherently know what to do, because you have a documented incident response plan, you've practiced the process countless times, and your team springs into action without prompting, right? OK, this is the real world, so hopefully you have a plan and your team doesn't look at you like an alien when you take it to DEFCON 4. In all seriousness, you need to have an incident response plan. And you need to practice it. The time to figure out that your plan stinks is not while a worm is proliferating through your innards at an alarming rate. We aren't going to go into depth on that process (we'll be doing a series later this year on incident response), but the general process is as follows:

Quarantine – Bad stuff doesn't spread through osmosis. Malware needs a network to find new targets and spread like wildfire, so first isolate the compromised device. Yes, user grumpiness may follow, but whatever.
They got pwned, so they can grab a coffee while you figure out how to contain the damage.

Assess – How bad is it? How far has it spread? What are your options to fix it? The next step in the process is to understand what you are dealing with. When you confirmed the attack, you probably got a pretty good idea of what's going on. But now you have to figure out the best options for fixing it.

Workaround – Are there settings that can be deployed on the perimeter or at the network layer to provide a short-term fix? Maybe it's blocking communication to the botnet's command and control. Or possibly blocking inbound traffic on a certain port, or on some specific non-standard protocol that is the issue. Obviously be wary of the ripple effect of any workaround (what else does it break?), but allowing folks to get back to work quickly is paramount, so long as you can avoid the risk of further damage.

Remediate – Is it a matter of changing a setting or uninstalling some bad stuff? That would be optimistic, eh? Now is when you figure out how to fix the issue, and increasingly these days re-imaging is the best answer. Today's malware hides so well it's almost impossible to entirely inoculate a compromised device, and impossible to know you got it all. Which means part of your incident response plan should be a leveraged way to re-image machines.

At some point you have to figure out whether this is an incident you can handle yourself, or whether you need to bring in the artillery, in the form of forensics experts or law enforcement. Your IR plan needs to identify which scenarios call for the experts, and which call for the law. You don't want that to be a judgment call in the heat of battle. So define the scenarios, establish the contacts (at both forensics firms and law enforcement), and be ready. That's what IR is all about.

Post-mortem

Once most folks get done cleaning up an incident, they think the job is done. Well, not so much. The reality is that the job has just begun, since you need to figure out what happened and make sure it doesn't happen again. It's OK to get nailed by something you haven't seen before (fool me once, shame on you). It's


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.