Friday Summary: April 16, 2010

I am sitting here staring at power supplies and empty cases, cleaning out the garage and closets and looking at the remnants from my PC building days. I used to love going out to select new motherboard and chipset combinations, hand-selecting each component to build just the right database server or video game machine. Over the years one sad acknowledgement needed to be made: after a year or so, the only pieces worth a nickel were the power supply and the case. Sad, but you spend $1,500.00 and after a few months the freaking box that housed the parts is the only remaining item of value.

I was thinking about this during some of our recent meetings with clients and would-be clients. Rich, Mike, and I are periodically approached by investors to review portfolio companies. We look at both their technology and their market opportunities, and determine whether we feel the product is hitting the mark and reaching buyers with the right product and message. We are engaged in response to either misaligned vision between investors and company operators (shocker, I know), or more commonly to give the investors some understanding of whether the company is worth salvaging through additional investment and a change of focus. Sometimes the company has followed market trends to preserve value, but quite a few turn out to be just a box of old parts. If this is not a direct consequence of Moore's law, it should be. We see cases where technologies were obsolete before they hit the market. In a handful of instances we do find one or two worth salvaging, but not for the reasons they thought. In those cases the engineering staff was smart enough, or lucky enough, to build a deployment model or architecture that is currently relevant. The core technology? Forget it. Pitch it in the dust bin and take the write-off, because that $20M investment is spent. But the 'box' was valuable enough to salvage, invest in, or sell. It's ironic, and it goes to show how tough technology start-ups are to get right, and that luck is often better than planning.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Living with Windows: security. Rich wrote this up for Macworld.
  • Rich's Putting the Fun in Dysfunctional presentation at RSA.
  • Adrian's PCI Database Security Primer, part 1, at Dark Reading.

Favorite Securosis Posts

  • Mike Rothman: FireStarter: No User Left Behind. User education is critical, but it needs to be done right, with the right incentives.
  • David Mortman: ESF: Building the Endpoint Security Program.
  • Rich: Anti-Malware Effectiveness: The Truth Is out There. Lies, damn lies, and testing.
  • Adrian Lane: ESF: Full Disk Encryption.

Other Securosis Posts

  • ESF: Endpoint Compliance Reporting.
  • Incite 4/14/2010: Just Think.
  • ESF: Controls: Firewalls, HIPS, and Device Control.
  • FireStarter: No User Left Behind.
  • ESF: Controls: Anti-Malware.
  • Friday Summary: April 9, 2010.
  • Database Security Fundamentals: Auditing Transactions.

Favorite Outside Posts

  • Mike Rothman: if security wasn't hard, everyone would do it. Lonervamp nails this one. Tools are great, but it's people who have to implement the programs.
  • David Mortman: For Thirty Pieces of Silver My Product Can Beat Your Product. Lori MacVittie brings the Rothman-esque smackdown!
  • Pepper: apache.org server cracked: some passwords sniffed; others brute-forced. Time to change some passwords….
  • Rich: The Spider That Ate My Site. Okay, I hate to admit this, but I did something similar to our back end management interface. When admin buttons like "delete" are merely links, a simple spider can cause some serious damage.
  • Adrian Lane: Thinking About The Apache.org Attacks. Actually raises more questions than it answers. What does it say about security when software as prevalent as Apache has a flaw? SHA-256 has been around forever. And I get that fewer attacks mean less exposure of inherent flaws, but complexity of software plays a similar role in masking flaws.

Project Quant Posts

  • Project Quant: Database Security – Change Management.
  • Project Quant: Database Security – Patch.

Top News and Posts

  • The Spy in the Middle. SSL Certificate Authority trust is a hot topic lately – scary because browsers inherently trust so many CAs, and we know some of them are untrustworthy.
  • US government finally admits most piracy estimates are bogus. Not that it will stop them. Personally, I welcome our RIAA and MPAA thought police overlords.
  • Google to Reveal Research into Fake AV Operations.
  • Hackers exploit new Java zero-day bug.
  • Apple Patches Pwn2Own Flaw That Hacked Safari.
  • Rsnake on the significance of CSRF.
  • If you don't know who Immunet is and what they do, now's a good time to read Brian Krebs' interview with Adam O'Donnell and Adam Huger.
  • Oracle's April CPU advisory. Nothing cataclysmic. Microsoft, on the other hand, announced a couple of critical patches, and the Authenticode bypass hack looks serious.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to LonerVamp, in response to Security Incite: Just Think.

Nice opening post, especially with the kicker at the end that you're actually writing it on a plane! 🙂 I definitely find myself purposely unplugging at times (even if I'm still playing with something electronic) and protecting my private time when reasonable. I wonder if this ultimately has to do with the typically American concept of super-efficiency…milking every waking moment with something productive…at the expense of the great, relaxing, leisure things in life. I could listen to a podcast during this normally quiet hour in my day! And so on…

@Porky Risk…Pig: It doesn't help that every security professional shotguns everyone else's measurements…and with good reason! At some point, I think we'll have to accept that every company CEO is different, and it takes a person to distill the necessary information down for her consumption. No tool will ever be enough, just like no tool ever determines if a product will be successful.

@My dad can beat up your dad: Ugh…I have some pretty strong opinions about cyberbullying and home internet monitoring…but I tend to


Public Goods

Chris Pepper tweeted a very cool post on Why Content is a Public Good. The author, Milena Popova, provides an economist's perspective on market forces and digital goods. Her premise is that in economic terms, many types of electronic content are "public goods" – a technical term for goods with effectively infinite supply and no good way to control consumption. She makes the economic concepts of 'rival' and 'excludable' very easy to understand, and by breaking the problem down into rudimentary components, makes a compelling argument that content is a public good:

It means that old business models based on content being a club good simply don't work. It means we have to rethink our relationship with content – as creators, as distributors and as consumers. It means that there are a lot of giants in the content distribution industry whose livelihoods (profit margins) are being pulled out from under them faster than they can say "illegal downloads", and they are fighting it. Of course they're fighting it. They've had an incredibly profitable business model for about a century and suddenly they don't. Let's face it, human beings don't like change at the best of times, and we sure as hell don't like it when it means less cash in our pockets.

I have written many posts on how economics affect DRM, RIAA, and 'piracy', and on the difference between actual security and security marketing, so I won't rehash those subjects here; but note the common theme: a busted business model is the root of the problem. Right now I want to stay away from some of the negativity of those posts, and instead focus on the economic drivers. Ms. Popova does a much better job than I of isolating the underlying forces, and discusses the factors in a way that helps us begin to visualize possible solutions.

A lot of people have a hard time with the concept of free, and how you can actually make money in a world with so much free stuff. In a capitalist society we all have trouble with this. I talk to people in IT who still don't think Linux and Java are viable technologies, or that anyone could make money with those products. But the availability of free stuff requires you to think a little differently about value – fewer people will pay money for the everyday and ordinary stuff because they don't have to, but they will pay for things they perceive as special. In fact, I don't think I fully grasped the concept and its implications until I started working at Securosis. We are a research company that gives away most of our products for free, but charges for services and engagements.

One area where I was at odds with Popova was the concept of "price discrimination". From my perspective this looks more like the market being able to set the price, but doing so far more efficiently: person to person, item by item, and adjusted over time. This is a very cool concept if you think about something like television: if you paid channel by channel, how many channels would you pay for? You have 400 or so, but I bet when it came to spending money, very few would get your hard-earned dollars. The NFL knows this, as football not only drives huge ad revenue, but also single-handedly drives the bulk of hi-def television sales and additional add-on packages. If it were not for bundling into programming packages, many (most?) other channels would not be able to survive (a toy illustration of this follows below). All in all, one of the better posts I have seen on the problems of dealing with consumer media.
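To make the bundling point concrete, here is a toy calculation with entirely made-up numbers (my illustration, not Popova's): two viewers with opposite tastes, two channels, and a comparison of the revenue a distributor captures selling channels individually versus as a single bundle.

```python
# Toy bundling arithmetic with invented valuations -- illustrative only.
valuations = {
    "viewer_A": {"sports": 10.0, "home_garden": 2.0},
    "viewer_B": {"sports": 3.0, "home_garden": 9.0},
}

def best_single_price(channel):
    """Best uniform a-la-carte price for one channel: try each viewer's
    valuation as the price and keep whichever yields more revenue
    (a viewer buys only if their valuation >= price)."""
    candidate_prices = [v[channel] for v in valuations.values()]
    return max(
        (price * sum(1 for v in valuations.values() if v[channel] >= price), price)
        for price in candidate_prices
    )

def best_bundle_price():
    """Best uniform price for both channels sold together."""
    totals = [sum(v.values()) for v in valuations.values()]
    return max(
        (price * sum(1 for t in totals if t >= price), price)
        for price in totals
    )

a_la_carte = sum(best_single_price(c)[0] for c in ["sports", "home_garden"])
bundle_revenue, bundle_price = best_bundle_price()
print(f"Best a-la-carte revenue: ${a_la_carte:.0f}")    # $19: sports at $10, home_garden at $9
print(f"Best bundle revenue:     ${bundle_revenue:.0f} at a ${bundle_price:.0f} bundle price")  # $24 at $12
```

With these invented numbers the bundle raises total revenue and, more to the point, the niche channel gets funded by a viewer who would never buy it on its own – which is the dynamic keeping many of those 400 channels alive.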


ESF: Endpoint Incident Response

Nowadays, the endpoint is the path of least resistance for the bad guys to get a foothold in your organization, which means we need a structured plan and process for dealing with endpoint compromises. The high-level process we'll lay out here focuses on confirming the attack, containing the damage, and then performing a post-mortem. To be clear, incident response and forensics is a very specialized discipline, and hairy issues are best left to the experts. That said, there are things you as a security professional need to understand to ensure the forensics guys can do their jobs.

Confirming the attack

There are lots of ways your spidey-sense should start tingling that something is amiss. Maybe it's the user calling up and saying their machine is slow. Maybe it's your SIEM detecting some weird log records. It could be your configuration management system reverting inexplicable changes or noting the presence of strange executables. Or perhaps your network flow analysis shows some reconnaissance activities from the device. A big part of the security management process is being able to fire alerts when something suspicious is happening. Then we make like bloodhounds and investigate the issue.

We've got to find the machine and isolate it. Yes, that usually means interrupting the user and 'inviting' them to grab a cup of coffee while you figure out what a mess they've made. The first step is likely a scan and comparison against your standard builds (you remember the standard build, right?). Basically we look for obvious changes that cause issues – a minimal sketch of this kind of baseline comparison appears at the end of this post. If it's not an obvious issue (think tons of pop-ups), then you've got to go deeper. This usually requires forensics tools, including tools to analyze disks and memory for corruption or other compromise. There are lots of good tools – both open source and commercial – available for your forensics toolkit.

We do recommend you take a course in basic forensics as you get started, for a simple reason: you can really screw up an investigation by doing something wrong, in the wrong order, or using the wrong tools. If it's truly an attack, your organization may want to prosecute at some point, and that means you have to maintain chain of custody on any evidence you gather. You should consult a forensics expert, and probably your general counsel, to identify what you need to gather from a prosecution standpoint.

Containing the damage

"Houston, we have a problem…" Yup, your fears were justified and an endpoint or 200 have been compromised – so what to do? First off, you should inherently know what to do, because you have a documented incident response plan, you've practiced the process countless times, and your team springs into action without prompting, right? OK, this is the real world, so hopefully you have a plan and your team doesn't look at you like an alien when you take it to DEFCON 4. In all seriousness, you need to have an incident response plan. And you need to practice it. The time to figure out that your plan stinks is not while a worm is proliferating through your innards at an alarming rate. We aren't going to go into depth on that process (we'll be doing a series later this year on incident response), but the general process is as follows:

  • Quarantine – Bad stuff doesn't spread through osmosis – malware needs a network to find new targets and spread like wildfire, so first isolate the compromised device. Yes, user grumpiness may follow, but whatever. They got pwned, so they can grab a coffee while you figure out how to contain the damage.
  • Assess – How bad is it? How far has it spread? What are your options to fix it? The next step is to understand what you are dealing with. When you confirm the attack, you probably have a pretty good idea of what's going on, but now you have to figure out the best option(s) to fix it.
  • Workaround – Are there settings that can be deployed on the perimeter or at the network layer to provide a short-term fix? Maybe it's blocking communication to the botnet's command and control. Or possibly blocking inbound traffic on a certain port or some specific non-standard protocol that is the issue. Obviously be wary of the ripple effect of any workaround (what else does it break?), but allowing folks to get back to work quickly is paramount, so long as you can avoid the risk of further damage.
  • Remediate – Is it a matter of changing a setting or uninstalling some bad stuff? That would be optimistic, eh? Now is when you figure out how to fix the issue, and increasingly these days re-imaging is the best answer. Today's malware hides so well it's almost impossible to entirely clean a compromised device, and impossible to know you got it all. Which means part of your incident response plan should be an efficient, repeatable way to re-image machines.

At some point you have to figure out whether this is an incident you can handle yourself, or whether you need to bring in the artillery, in the form of forensics experts or law enforcement. Your IR plan needs to identify the scenarios that call for experts and those that call for the law. You don't want that to be a judgement call in the heat of battle. So define the scenarios, establish the contacts (at both forensics firms and law enforcement), and be ready. That's what IR is all about.

Post mortem

Once most folks get done cleaning up an incident, they think the job is done. Well, not so much. The reality is that the job has just begun, since you need to figure out what happened and make sure it doesn't happen again. It's OK to get nailed by something you haven't seen before (fool me once, shame on you). It's
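As promised above, here is a minimal, hypothetical sketch of the "scan and compare with your standard builds" idea: it hashes files under a directory on the suspect machine and diffs them against a previously recorded manifest of the standard build. The manifest format and paths are assumptions for illustration only; real endpoint agents and forensics suites check far more than this (registry, memory, loaded drivers, binary signatures, and so on).

```python
# Minimal sketch: compare files on a suspect endpoint against a standard-build
# manifest of SHA-256 hashes. Illustrative only -- the manifest layout and the
# paths below are assumptions, not any product's real interface.
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path) -> dict:
    """Walk a directory tree and record a hash for every regular file."""
    return {str(p.relative_to(root)): hash_file(p)
            for p in root.rglob("*") if p.is_file()}

def compare_to_baseline(root: Path, manifest_file: Path) -> None:
    """Report files that are new, missing, or modified versus the baseline."""
    baseline = json.loads(manifest_file.read_text())
    current = build_manifest(root)
    for name in sorted(set(baseline) | set(current)):
        if name not in baseline:
            print(f"NEW:      {name}")
        elif name not in current:
            print(f"MISSING:  {name}")
        elif baseline[name] != current[name]:
            print(f"MODIFIED: {name}")

if __name__ == "__main__":
    # Hypothetical inputs: a golden-image manifest captured when the standard
    # build was created, and the directory to check on the suspect machine.
    compare_to_baseline(Path("C:/Program Files"), Path("standard_build_manifest.json"))
```

Anything flagged NEW or MODIFIED is a starting point for investigation, not proof of compromise – which is exactly why the deeper disk and memory forensics tools mentioned above come next.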


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.