Securosis Research

More On Why I Think Free Microsoft AV Will Be Good For Consumers

Last week I talked a bit about the decision by Microsoft to kill OneCare and release a new, free antivirus package later in 2009. Overall, I stated that I believe this will be good for consumers:

I consider this an extremely positive development, and no surprise at all. Back when Microsoft first acquired an AV company I told clients and reporters that Microsoft would first offer a commercial service, then eventually include it in Windows. Antivirus and other malware protections are really something that should be included as an option in the operating system, but due to past indiscretions (antitrust) Microsoft is extremely careful about adding major functionality that competes with third party products.

Not everyone shares my belief that this is a positive development for consumers. Kurt Wismer expressed it best:

i doubt you need to be a rocket scientist to see the parallels between that scenario and what microsoft did back in the mid-90’s with internet explorer, and i don’t think i need to remind anyone that that was actually not good for users (it resulted in microsoft winning the first browser war and then, in the absence of credible competition, they literally stopped development/innovation for years)… what we don’t want or need is for microsoft (or anyone else, technically, though microsoft has the most potential due to their position) to win the consumer anti-malware war in any comparable sense… it’s bad on a number of different levels – not only is it likely to hurt innovation by taking out the little guys (who tend to be more innovative and less constrained by the this is the way we’ve always done things mindset), but it also creates another example of a technological monoculture… granted we’re only talking about the consumer market, but the consumer market is the low-hanging fruit as far as bot hosts go and while it may sound good to increase the percentage of those machines running av (as graham cluley suggests) if they’re all using the same av it makes it much, much easier for the malware author to create malware that can evade it…

That’s an extremely reasonable argument, but I think the market around AV is different. Kurt assumes that there is innovation in today’s AV, and that the monoculture will make AV evasion easier. My belief is that we essentially have both conditions today (low innovation, easy evasion), and the nature of attacks will continue to change rapidly enough to exceed the current capabilities of AV. An attacker, right now, can easily create a virus to evade all current signature and heuristic based AV products. The barrier to entry is extremely low, with malware creation kits with these capabilities widely available. And while I think we are finally starting to see a little more innovation out of AV products, this innovation is external to the signature based system.

Here’s why I think Morro will be very positive for consumers:

  • Signature based AV, the main engine I suspect Morro runs on, is no longer particularly effective and is not where the real innovation will take place.
  • Morro will be forced to innovate like any AV vendor due to the external pressures of the extensive user base of existing AV solutions, changing threats/attacks, and continued pressure from third party AV.
  • Morro will force AV companies to innovate more. Morro essentially kills the signature based portion of the market, forcing the vendors to focus on other areas.
  • The enterprise market will still lean toward third party products, even if AV is included for free in the OS, keeping the innovation pipeline open and ripe to cross back to the consumer market if needed.

Since the threat landscape is ever evolving, I don’t think we’ll ever hit the same situation we did with Internet Explorer. Yes, we may have a relative monoculture for signatures, but those are easily evadable as it is. At a minimum, Morro will expand the coverage of up-to-date signature based AV and force third party companies to innovate.
In a best case scenario, this then feeds back and forces Microsoft to innovate. The AV market isn’t like the browser market; it faces additional external pressures that prevent stagnation for very long. I personally feel the market stagnated for a few years even without Microsoft’s involvement, but it is in the midst of self-correcting thanks to new/small vendor innovation, external threats, and customer demand (especially with regard to performance). Morro will only drive even more innovation and consumer benefits, even if it fails to innovate itself.
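To make the easy-evasion point concrete, here is a toy hash-based detector (a deliberately simplified sketch; real AV engines use wildcard signatures and heuristics, which are harder but still practical to evade, and the "malware" here is just a placeholder string):

```python
import hashlib

def signature(data: bytes) -> str:
    """Toy 'signature': a cryptographic hash of the sample's bytes."""
    return hashlib.md5(data).hexdigest()

# A known-bad sample and its signature in our toy signature database.
malware = b"...placeholder malicious payload bytes..."
known_bad = {signature(malware)}

def detected(sample: bytes) -> bool:
    return signature(sample) in known_bad

# The original sample is caught...
assert detected(malware)

# ...but appending a single padding byte yields a new hash, so the
# 'repacked' variant sails straight past the signature check.
variant = malware + b"\x00"
assert not detected(variant)
```

This is exactly what commodity malware kits automate: trivially re-encoding the same payload until no deployed signature matches it.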


Politics And Protocols

Catching up from last week, I saw this article in Techworld (from NetworkWorld) about an IETF meeting to discuss the impact of Dan Kaminsky’s DNS exploit and potential strategies for hardening DNS. The election season may be over, but it’s good to see politics still hard at work:

One option is for the IETF to do nothing about the Kaminsky bug. Some participants at the DNS Extensions working group meeting this week referred to all of the proposals as a “hack” and argued against spending time developing one of them into a standard because it could delay DNSSEC deployment. Other participants said it was irresponsible for the IETF to do nothing about the Kaminsky bug because large sections of the DNS will never deploy DNSSEC. “We can do the hack and it might work in the short term, but when DNSSEC gets widely used, we’ll still be stuck with the hack,” said IETF participant Scott Rose, a DNSSEC expert with the US National Institute of Standards and Technology (NIST).

Look, any change to DNS is huge and likely ugly, but it’s disappointing that there seems to be a large contingent that wants to use this situation to push the DNSSEC agenda without exploring other options. DNSSEC is massive, complex, ugly, and prone to its own failures. You can read more about DNSSEC problems in this older series over at Matasano (Part 1, Part 2; the site is currently experiencing some problems, but should be back soon). The end of the article does offer some hope:

The co-chairs of the DNS Extensions working group said they hope to make a decision on whether to change the DNS protocols in light of the Kaminsky bug before the group’s next meeting in March. “We want to avoid creating a long-term problem that is caused by a hasty decision,” Sullivan said. “There are big reasons to be careful here. The DNS is a really old protocol and it is fundamental to the Internet. We’re not talking about patching software. We’re talking about patching a protocol. We want to make sure that whatever we do doesn’t break the Internet.”

Good: at least the chairs understand that rushing headlong into DNSSEC may not be the answer. We might end up there anyway, but let’s make damn sure it’s the right thing to do first.
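For a rough sense of why the interim “hack” still matters, consider the off-path spoofing math. This is a back-of-the-envelope sketch; the usable port count varies by resolver and OS, so the figures are illustrative only:

```python
# An off-path attacker forging a DNS response must guess the fields
# the resolver will check before it accepts the answer.
txids = 2 ** 16       # 16-bit transaction ID in the DNS header
ports = 64_000        # roughly the usable ephemeral source ports

p_fixed_port = 1 / txids             # resolver reuses one source port
p_random_port = 1 / (txids * ports)  # resolver randomizes the port too

# Source port randomization makes each forged packet about 64,000x
# less likely to be accepted -- a big practical win, but still only
# probabilistic, unlike DNSSEC's cryptographic validation.
assert round(p_fixed_port / p_random_port) == ports
```

That gap is the whole debate in miniature: the hack raises the attacker's cost dramatically without changing the protocol, while DNSSEC removes the guessing game entirely at the price of a massive deployment effort.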


Upgrading to Parallels 4.0

I installed Parallels 4.0 on the iMac last week, upgraded my licenses, and converted my bootable images to the new format. It took a while, as the conversion process is slow. While the installation itself was trivial, I had 4 different bootable images to convert, which took a good 3 hours to migrate even though they were only a couple of gigabytes apiece and only have a handful of applications installed. But I had no problems and everything worked fine. There are a couple of subtle changes to the interface that make management of the images a little easier. I have not witnessed the performance enhancements that are claimed to be present, but I have not had performance issues in the past, so your mileage may vary.

As the build I used was the one provided right after the official announcement, I was expecting that a new one would be released soon to clear up some small issues that had popped up. Sure enough, build 4.0.3540.209168 popped up today. Problem is, I cannot install it. The ‘Continue’ button is grayed out; I tried a couple of times, but there are really no options other than to accept and continue, and still I cannot proceed. I cannot imagine something this simple not getting picked up by QA. Anyone else out there having this issue?


How To Become An Analyst

Since I get asked this question a lot:

  • Call yourself an analyst.
  • Convince someone to call you an analyst.
  • Business cards don’t hurt.

(P.S. Being a good analyst? Totally different story, although you still start the same way.)


Idiocy

Experts: Cyber-crime as Destructive as Credit Crisis

Bullshit.


Security Bloggers Network Revived

Last week the SBN died as Google decided to drop support for Feedburner groups during their transition of Feedburner to Google’s platform. Alan Shimel worked hard behind the scenes, and the new SBN is hosted over here at Lijit. Huge thanks to Alan and Lijit for saving the SBN, and please redirect your browsers and readers to http://security.lijitnetworks.com/. It’s a little rough right now, but more updates and fixes should be out soon.


Friday Summary – 11-21-08

After this week, Rich and I are “Home for the Holidays”, with the last of the year’s travel behind us. We have started work on our Web Application Security Program, and in keeping with our dedication to transparency in our research, we will be posting research notes for comment here on the blog during the next couple of weeks. We’re the first to admit that more of our revenue comes from sponsors/vendors than end users, but we believe that total transparency in our research process can help weed out any overt or subconscious bias and keep us honest. And let’s face it: we want to give you free stuff, and this is the only way I can do that and keep all my dogs fed. Rich and I are looking forward to avoiding the airports during the holidays, and we should be pumping out a ton of research to close out our year. Now on to the week’s security summary:

Webcasts, Podcasts, Outside Writing, and Conferences:

  • Rich was in Minnesota this week, meeting with clients and giving his DLP pitch, at a T-Wolves game before returning. (No, he didn’t wear a gorilla suit, and no flaming rings were involved.)
  • On the Network Security Podcast this week, Martin and Rich interviewed Glenn Fleishman on the recent WPA crack and more.
  • CSO Magazine published seven of Rich’s predictions for 2009. Not one involves Hoff or SCADA.
  • Rich wrote a TidBITS article on how the new anti-phishing features work (or don’t) in Safari. This one really isn’t Apple’s fault; he’s just not a fan of Extended Validation certificates, and hopes users don’t rely on a blacklist filter to completely protect themselves.

Favorite Securosis Posts:

  • Rich: Gives his perspective on the evolution of, and current challenges facing, Building a Web Application Security Program.
  • Adrian: Rich’s post on Microsoft’s move to give AV away to Windows users.

Favorite Outside Posts:

  • Adrian: Amrit Williams’ humorous look at great Tech Failures.
  • Rich: Gunnar Peterson’s lecture on security, economics, and breaches: The Economics of Finding and Fixing Vulnerabilities in Distributed Systems. I may not agree with all of it, but this is exactly the kind of perspective we need to develop more in security professionals.

Top News:

  • The big news all week has been the automobile manufacturers in Washington looking for bailout loans. The political game has been high drama, with both sides accusing each other of ineptitude.
  • Oh yeah, that whole Stock Market bug-a-boo. Anyone think we will drop to 6k before this is all over? 5k? You didn’t own stocks, did you?
  • Deja Vu all over again… IT functions being outsourced during tough economic conditions. What’s next, call centers in India?
  • The Metasploit Framework, version 3.2, has been released.
  • Not security related, but this parody of the real estate crisis is just too funny not to share.
  • The Chinese Hacker Flowchart. Nothing new, but interesting anyway.
  • Google is supporting OAuth for secure mashups. I’d like to dig into the model more and see if a malicious gadget can use this to compromise credentials. At a minimum, it will likely enable easier CSRF. We finally have users suspicious about installing desktop apps, but now we have to explain why online gadgets/widgets are also dangerous. Sigh.
  • Massachusetts privacy law includes security standards. Most of which just require documentation, and other than encryption very little security.

Blog Comment of the Week: From ‘ds’, on Building A Web Application Security Program:

Looking forward to this series. I undertook this process last year with much success. It was something that benefited the business, with an ability to conduct testing more regularly than could be done with externals, as well as more affordably. It also provided a nice career path for the technical team members and raised the profile of security as something more than just a specialized system administrator. We’ve gotten more “good press” with our business leadership on this than most anything else we’ve done.


Sensitive Data Dumped

I swore that I was not going to cover data ‘breach’ events unless there was something really interesting or unique about them. There are too many, and the general public has grown desensitized, as the number of records and the overall number of breaches is, well, mind numbing. But this caught my eye, as I think I may have taken photos of this house when it went back to the bank:

Boxes containing loan applications, Social Security numbers and bank account information for residents of a Gilbert neighborhood have been discovered in a ransacked model home abandoned by a bankrupt developer. Several Higley Park model homes have been broken into since builder Randall Martin ceased operations. One home even had its garage door stolen, residents say. Julio Gonzalez, member of an ad hoc committee of Higley Park residents, found the boxes of paperwork when he was surveying the damage to the model homes.

This sort of thing is going to be a lot more common in the coming months: bankrupt businesses throwing their files in the trash, or in this case, just leaving them behind in the building and walking away. It’s weird that the police wanted nothing to do with the files, as I would think this is evidence of a crime.


The Impact Of Free Antivirus From Microsoft

Well, they’ve finally done it. Microsoft announced they will be dropping OneCare and will start providing antivirus for free to all Windows users late next year, in a product called Morro. I consider this an extremely positive development, and no surprise at all. Back when Microsoft first acquired an AV company I told clients and reporters that Microsoft would first offer a commercial service, then eventually include it in Windows. Antivirus and other malware protections are really something that should be included as an option in the operating system, but due to past indiscretions (antitrust) Microsoft is extremely careful about adding major functionality that competes with third party products.

The move to free AV for all Windows users helps on two fronts. First, it’s a good way to navigate the antitrust allegations that will likely surface from the consumer AV companies. By not including AV with the default installation of Windows, it keeps the competitive environment open and provides Microsoft a good defense against monopoly allegations. Second, I suspect this will only be available to legitimate, activated copies of Windows, which provides additional incentive to purchase a legal copy and stems a small part of the home piracy market. This won’t matter to the street vendors in China, but will encourage friends and family to buy their own damn copy of Windows.

The major AV companies have long expected this move. Both McAfee and Symantec have been buffering themselves through diversification and acquisition for the past few years. My personal belief was that Symantec acquired Veritas in large part to prepare for the eventual dissolution of the consumer AV market when Microsoft eventually builds it into the OS. Will this hurt? Absolutely, but they probably won’t see any market erosion at all for 2 years, and the real pain will likely only start to hit in around 3 years. This gives them enough time to avoid suddenly losing 40% (don’t quote me on that, I’m on an airplane and just guessing) of profits over 12 months. The real losers will be the consumer-only AV companies without portfolio diversification or a larger enterprise base.

I don’t expect to see material erosion of the enterprise AV market anytime soon. Major vendors like Symantec, McAfee, and Trend are including growing functionality in their endpoint products, and improving central management. These additional features will likely protect their enterprise client base, although there may be some price erosion. Any consumer oriented AV product will need to seriously innovate to survive once Morro is released. Users won’t be willing to pay the $70-$99 a year AV tax once a viable, easy to download and use, product appears. Microsoft already includes a good firewall in the OS, the Malicious Software Removal Tool, anti-phishing, and other security controls. Vista is much more secure than previous versions of the OS, and it sounds like Windows 7 will actually be usable. This combination means that any consumer “AV” company will need to either protect against new threats not covered by Windows, or offer materially better security than the built in tools. Both situations rely heavily on the threat environment, making accurate predictions difficult. My rough guess is that within 5-7 years most consumer-level Windows users won’t need third party desktop security. I’m not sure if it will be in Windows 7, but it’s clear that it’s inevitable that AV will be included in Windows.

In summary, this is good for users, will really hurt any consumer-only AV company, will only moderately hurt enterprise and diversified AV companies, and is an extremely positive step. Unless, of course, they screw it up or the product is crap. Those are always options. The flight attendant is giving me a nasty look, so it’s time to upload this and turn off my laptop…


Building a Web Application Security Program: Part 1, Introduction

I realize this might shock our fair readers, but once upon a time I used to get my hands dirty with a little hands-on web application development. Back in the heady early days of the mid-1990’s Internet, I accidentally transitioned from a systems and network administrator to a web application developer and DBA at the University of Colorado’s Graduate School of Business. It all started when I made the mistake of making an incredibly ugly home page for the school, complete with a tiled background of my Photoslop-embossed version of the CU logo (but, thankfully, no BLINK tag). The University took note, and I slowly migrated out of keeping the network running into developing database driven web applications for a few thousand users. Eventually I ran my own department before setting off into the big bad world of private consulting. To this day I’m still proud of our online education tools that could totally kick Blackboard’s ass, but I think I developed my last application around 2001.

I’ll be the first to admit that my skills are beyond stale, and the tools and techniques now available to web application developers are simply astounding. When I first started out in Boulder, Colorado, I’d say the majority of web site developers I met were more focused on graphics skills than database design and proper user authentication. Today’s web application developers need a background in everything from structured programming, to application design, to a working knowledge of multiple frameworks and programming languages.

Current web applications exist in an environment that is markedly different from the early days of businesses entering the Internet. They’ve become essential business tools, interconnecting organizations in ways never anticipated when the first web browsers were designed. These changes have occurred so rapidly that, in many ways, we’ve failed to adapt our operational processes to meet current needs.
This is especially apparent with web application security: although most organizations have some security controls in place, few have a comprehensive web application security program. This is a concern for two reasons. First, the lack of a complete program materially increases the chance of failure, resulting in a loss-bearing security breach. Second, the lack of a coordinated program is likely to increase overall costs: not just losses from a breach, but the long term costs of maintaining an adequate security level (adequate being defined as meeting all compliance obligations and reducing losses to an acceptable level).

This series of posts will show you how to build a pragmatic web application security program that constrains costs while still providing effective security. Rather than digging into the specific details of any particular technology, we’ll show you all the basic pieces and how to put them together. We’ll start with some background on how web applications are different from traditional enterprise applications or commercial off-the-shelf products. We’ll provide basic business justifications for investments in web application security you can use to gain management support (although we’ll be covering these in more depth in future research). The bulk of this series will then focus on the particular security needs of web applications, before delving into details on the major security components and how to pull them together into a complete program.

Eventually we plan on releasing this as a white paper, and we already have one sponsor lined up (sponsors can redistribute the content and are acknowledged in the paper, but have no influence on content; it’s just an advertising spot within the larger paper). As with all of our research, we rely on you, our readers, to keep us honest and accurate as we develop the research. Technically all analysts do that, but we actually admit it and prefer to engage you directly out in the open, so please comment away as we post. Since I’ve already wasted a ton of space setting up the series, today we’ll just cover the web application security problem. Our next post will provide business justifications for investing in a web application security program, and guidance on building a structured program.

The Web Application Security Problem

Enterprise web applications evolved in such a way that they’ve created a bit of a conundrum for security. Although we’ve always been aware of them, we initially treated them as low-risk endeavors almost fully under the control of the developers creating them. But before we knew it, they transitioned from experimental programs to critical business applications. Ask any web application developer and they can tell you the story of their small internal project that became an essential business application once they made the mistake of showing it off to a business unit or the outside world. We can break this general evolution down into some key trends creating the current security situation:

  • Before web applications, few businesses exposed their internal transactional systems to the outside world. Even those businesses which did expose systems to business partners on a restricted basis rarely exposed them directly to customers.
  • Web applications grew organically, starting from informational websites that were little more than online catalogs, through basic services, to robust online applications connected to back end systems. In many cases, this transition was the classic “frog in a frying pan” problem. Drop a frog into a hot frying pan, and it will hop right out. Slowly increase the heat, and it will fry to death without noticing or trying to escape. Our applications developed slowly over time, with increased functionality leading to increased reliance, often without the oversight they might have gotten had they been scoped as massive projects from the beginning.
  • Web application protocols were designed to be lightweight and flexible, and lacked privacy, integrity, and security checks.
  • Web application development tools and techniques evolve rapidly, but we still rely on massive amounts of legacy code. Both internal and external systems, once deployed, are nearly impossible to simply shut down and migrate to new systems.
  • Web application threats evolve as quickly as our applications, and apply to everything we’ve done in the past.
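The protocol point is easy to see in a sketch. Below is a raw HTTP request with a hypothetical host, cookie, and transaction values: everything an early web application relied on for “security” traveled in cleartext, with nothing in the protocol itself to detect tampering.

```python
# A raw HTTP request as it crosses the wire: the session cookie that
# authenticates the user, and the transaction parameters, are all
# plainly visible to any on-path party. Host and values are made up.
request = (
    "GET /account/transfer?to=12345&amount=500 HTTP/1.1\r\n"
    "Host: bank.example.com\r\n"
    "Cookie: session=8f3a2c9d\r\n"
    "\r\n"
)

# An eavesdropper can simply read the credential...
assert "session=8f3a2c9d" in request

# ...and nothing in the protocol stops them from replaying it or
# rewriting the parameters; privacy and integrity have to come from
# layers the early web never mandated (TLS, signed session tokens).
tampered = request.replace("amount=500", "amount=9999")
assert "amount=9999" in tampered
```

That gap between what the protocol guarantees and what the application assumes is exactly where a web application security program has to start.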


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.