Incite 1/4/2012: Shaking things up

For a football fan, there is nothing like the New Year holiday. You get to shake off your hangover with a full day of football. This year was even better because New Year’s Day fell on a Sunday, so we had a full slate of Week 17 NFL games (including a huge win for the G-men over the despised Cowboys) and then a bunch of college bowl games on Monday the 2nd.

Both of my favorite NFL teams (the Giants and Falcons) qualified for the playoffs, which is awesome. They play each other on Sunday afternoon, which is not entirely awesome. It means the season will end for one of my teams on Sunday. Bummer. It also means the other will play on, giving me someone to root for in the Divisional round. Yup, that’s awesome again. Many of my friends ask who I will root for, and my answer is both. Or neither. All I can hope for is an exciting and well-played game, and that whoever wins carries some momentum into the next round and pulls an upset in Green Bay.

The end of the football season also means that many front offices (NFL) and athletic departments (college) figure it’s time to shake things up. If the team hasn’t met expectations, they make a head coaching change. Or swap out a few assistants. Or inform the front office they’ve been relieved of their duties, which is a nice way of saying they get fired. Perhaps in the offseason they blow up the roster, or try to fill a hole via the draft or free agency, to get to the promised land. But here’s the deal – as with everything else, the head coach is usually the fall guy when things go south. It’s not like you can fire the owner (though many Redskins fans would love to do that). And it’s not really fair. So much is out of the control of the head coach, like injuries. Jacksonville lost a dozen defensive backs to injury. St. Louis lost all their starting wide receivers throughout the year. Indy lost their Hall of Fame QB. Most likely the head coaches of all these teams will take the bullet. But I guess that’s why they make the big bucks.
BTW, most NFL owners (and big college boosters) expect nothing less than a Super Bowl (or BCS) championship every year. And of course only two teams end each year happy.

I’m all for striving for continuous improvement. Securosis had a good year in 2011, but we will take most of this week to figure out (as a team) how to do better in 2012. That may mean growth. It may mean leverage and/or efficiency. Fortunately I’m pretty sure no one is getting fired, but we still need to ask the questions and do the work, because we can always improve. I’m also good with accountability. If something isn’t getting done, someone needs to accept responsibility and put a plan in place to fix it. Sometimes that does mean shaking things up. But remember that organizationally, shaking the tree doesn’t need to originate in the CEO’s office or the boardroom. If something needs to be fixed, you can fix it. Agitate for change. What are you waiting for? I’m pretty sure no one starts the year with a resolution to do the same ineffective stuff (again) and strive for mediocrity. It’s the New Year, folks. Get to work. Make 2012 a great one.

-Mike

Photo credits: “drawing with jo (2 of 2)” originally uploaded by cuttlefish

Heavy Research

We’ve launched the latest Quant project, digging deeply into Malware Analysis. Here are the posts so far:

  • Introduction
  • Process Map Draft 1
  • Confirm Infection
  • Build Testbed
  • Static Analysis

Given its depth, we will be posting it on the Project Quant blog. Check it out, or follow our Heavy Feed via RSS.

Incite 4 U

Baby steps: I have been writing and talking a lot more about cloud security automation recently (see the kick-ass cloud database security example and this article). What’s the bottom line? The migration to cloud computing brings new opportunities for automated security at scale that we have never seen before, allowing us to build new deployment and consumption models on existing platforms in very interesting ways.
All cloud platforms live and die based on automation and APIs, which let us do things like automatically provision and adapt security controls on the fly. I sometimes call it “Programmatic Security.” But the major holdup today is our security products – few of which use or supply the necessary APIs. One example of a product moving this way is Nessus (based on this announcement post). Now you can load Nessus with your VMware SOAP API certs and automatically enumerate some important pieces of your virtualized environment (like all deployed virtual machines). Pretty basic, but it’s a start. – RM

Own It: It seems these two simple words might be the most frequently used phrase in my house. Any time the kids (or anyone else, for that matter) mess something up – and the excuses, stories, and other obfuscations start flying – the Boss and I just blurt out “own it.” And 90% of the time they do. So I just loved to see our pal Adam own a mistake he made upgrading the New School blog. But he also dove into his mental archives and wrote a follow-up delving into an upgrade FAIL on one of his other web sites, which resulted in some pwnage. Through awstats, of all things. Just goes to show that upgrading cleanly (and quickly) is important and hard, especially given the number of disparate packages running on a typical machine. But again, hats off to Adam for sharing and eating his own dog food – the entire blog is about how we don’t share enough information in the security business, and it hurts us. So learn from Adam’s situation, and share your own stories of pwnage. We won’t

Read Post

Network-based Malware Detection: Where to Detect the Bad Stuff?

We spent the first two posts in this series on the why (Introduction) and how (Detecting Today’s Malware) of detecting malware on the network. But that all assumes the network is the right place to detect malware. As Hollywood types tend to do, let’s divulge the answer at the beginning, in a transparent ploy. Drum roll please… You want to do malware detection everywhere you can: on the endpoints, at the content layer, and also on the network. It’s not an either/or decision. But each approach has strengths and weaknesses, so let’s dig into those pros and cons to give you enough information to figure out what mix of these options makes sense for you.

Recall from the last post (Detecting Today’s Malware) that you have a malware profile of something bad. Now comes the fun part: actually looking for it, and perhaps even blocking it before it wreaks havoc in your environment. You also need to be sure you aren’t flagging things unnecessarily (the dreaded false positives), so care is required when you decide to actually block something. Let’s weigh the advantages and disadvantages of all the different places we can detect malware, and put together a plan to minimize the impact of malware attacks.

Traditional Endpoint-centric Approaches

If we jump in the time machine and go back to the beginning of the Age of Computer Viruses (about 1991?), the main threat vector was ‘sneakernet’: viruses spreading via floppy disks. Back then, detection on the actual endpoint made sense, as that’s where viruses replicated. That started an almost 20-year fiesta (for endpoint protection vendors, anyway) of anti-virus technologies becoming increasingly entrenched on endpoints, evolving three or four steps behind the attacks. After that consistent run, endpoint protection is widely considered ineffective. Does that mean it’s not worth doing anymore? Of course not, for a couple reasons.
First and foremost, most organizations just can’t ditch their endpoint protection, because it’s a mandated control in many regulatory hierarchies. Additionally, endpoints are not always connected to your network, so they can’t rely on protection from the mothership – at minimum you still need some kind of endpoint protection on mobile devices. Of course network-based controls (just like all other controls) aren’t foolproof, so having another (even mostly ineffective) layer of protection generally doesn’t hurt. That said, keeping anything up to date on thousands of endpoints is a challenge, and you can’t afford to ignore those complexities. Finally, by the time your endpoint protection takes a crack at detection, the malware has already entered your network, which historically has not ended well. Obviously the earlier (and closer to the perimeter) you can stop malware, the better.

Detecting malware is one thing, but how can you control it on endpoints? You have a few options:

  • Endpoint Protection Suite: Traditional AV (and anti-spyware and anti-everything-else). The reality is that most of these tools already use some kind of advanced heuristics, reputation matching, and cloud assistance to help them detect malware. But tests show these offerings still don’t catch enough, and even if the detection rate is 80% (which it probably isn’t) across your 10,000 endpoints, you would be spending 30-40 hours per day cleaning up infected endpoints.
  • Browser Isolation: Running a protected browser logically isolated from the rest of the device basically puts the malware in a jail where it can’t hurt your legitimate applications and data. When malware executes, you just reset the browser without impacting the base OS or device. This is more customer-friendly than forcing browsing in a full virtual machine, but can the browser ever be completely isolated? Of course not, but this helps prevent stupid user actions from hurting users (or the organization, or you).
  • Application Whitelisting: A very useful option for truly locking down particular devices, application whitelisting implements a positive security model on an endpoint. You specify all the things that can run, and block everything else. Malware can’t run because it’s unauthorized, and alerts can be fired if malware-type actions are attempted on the device. For devices which can be subjected to draconian lockdown, AWL makes a difference. But they tend to be a small fraction of your environment, relegating AWL to a niche.

Remember, we aren’t talking about an either/or decision. You’ll use one or more of these options, regardless of what you do on the network for malware detection.

Content Security Gateways

The next layer we saw develop for malware detection was the content security gateway. This happened as LAN-based email was becoming pervasive, when folks realized that sneakernet was horribly inefficient when the bad guys could just send viruses around via email. Ah, the good old days of self-propagating worms. So a set of email (and subsequently web) gateway devices were developed, embedding anti-virus engines to move detection closer to the perimeter.

Many attacks continue to originate as email-based social engineering campaigns, in the form of phishing email – either with the payload attached to the message, or (more often) as a link to a malware site, and sometimes even embedded within the HTML message body. Content security gateways can detect and block the malware at any point during the attack cycle by stopping attached malware, blocking users from navigating to compromised sites, or inspecting web content coming into the organization and detecting attack code. Many of these gateways also use DLP-like techniques to ensure that sensitive files don’t leave the network via email or web sessions, which is all good. The weakness of content gateways is similar to the issue with endpoint-based techniques: keeping up with the rapid evolution of malware.
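The cleanup-burden claim in the endpoint options above (30-40 hours per day across 10,000 endpoints at an 80% detection rate) is easy to sanity-check with back-of-envelope numbers. A minimal sketch, where the daily attempt rate and per-endpoint cleanup time are illustrative assumptions rather than figures from the post:

```python
# Back-of-envelope estimate of daily malware cleanup burden.
# The attempt rate and cleanup time are assumed, not measured.

endpoints = 10_000
daily_attempt_rate = 0.01    # assume 1% of endpoints see a malware attempt per day
detection_rate = 0.80        # the (optimistic) 80% detection rate from the text
hours_per_cleanup = 1.75     # assumed average reimage/cleanup time per infection

attempts = endpoints * daily_attempt_rate       # 100 attempts per day
infections = attempts * (1 - detection_rate)    # ~20 slip through daily
cleanup_hours = infections * hours_per_cleanup  # ~35 hours of cleanup per day

print(f"{infections:.0f} infections/day, {cleanup_hours:.0f} cleanup hours/day")
```

Under those assumptions the math lands squarely in the 30-40 hour range, so even an unrealistically good detection rate leaves a full-time cleanup crew’s worth of work every day.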
Email and web gateways do have a positive impact, by stopping the low-hanging fruit of malware (specimens which are easy to detect due to known signatures), by blocking spam to keep users from clicking something stupid, and by preventing users from navigating to compromised sites. But these devices, along with email and web-based cloud services, don’t stand much chance against sophisticated malware, because their detection mechanisms are primarily based on old-school signatures. And once

Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.