Securosis Research

The Rumor Is True … I’m Joining Rich At Securosis.

Believe it or not, I’m going to work with Rich Mogull at Securosis. Worse yet, I’m excited about it! From the outside looking in, Rich and I have dissimilar backgrounds. I have been working in product development and IT over the last ten years, and Rich has been an analyst and market strategist. But during the four years I have known Rich, we have shown an uncanny similarity in our views on data security across the board. We are both tech guys at the core, and have independently arrived at the same ideas and conclusions about security and what it will look like in the years to come. Since our backgrounds are both diverse and complementary, my joining Securosis will let us take on additional clients and slowly expand the types of services we provide. I will contribute to strategy, evaluations, architecture, and end-user guidance, as well as projects that involve more “hands-on” assistance. I will also be contributing to the blog on a regular basis. Anyway, I am really looking forward to working with Rich on a daily basis. And yes, before Amrit Williams has a chance to ask, I am a card-carrying NAHMLA (North American Hoff-Mogull Love Association) member. We may even sell polo shirts on the web site.


Is Rootkit Detection Worth It?

An interesting debate/panel over at Matasano, with perspectives from a pundit, a researcher, and an honest-to-goodness in-the-trenches security pro.


New Identity Theft Stats

One of my biggest annoyances in the industry is the lack of good metrics for making informed decisions, and the overuse of crappy metrics (like ROI) that drive poor decisions. Of those valid metrics that wistfully dance with rainbows, unicorns, and pony-unicorns in my happiest dreams, those that correlate real-world fraud with real-world incidents stand alone on the peak of the rainbow bridge to metrics nirvana. I’ve written about our need for fraud statistics, not breach statistics, but often feel like I’m just banging my head against the hard, thick walls of big money. Thanks to Debix, today there’s a bit of rainbow light at the end of the tunnel (have I killed that analogy yet? Really? Even with the unicorns?). As many of you know, since they sponsored a contest here at Securosis, Debix is an identity theft prevention company. They place credit locks with the credit agencies for you, and route all new account requests through their call center to you for approval or disapproval. Today they released some very interesting statistics. Since they pass a lot of credit query traffic through their call center, they closely track new account fraud attempts against their client base. Many of their clients enroll as a protective measure after data breaches, so for those customers they can also track at least one of the breach origins (nothing says that’s the only time they’ve been a victim). Some of this information is based on my briefing with them, and is not available in the report.

  • According to this report from the Identity Theft Resource Center, new credit account fraud is 57% of financial identity theft.
  • Many of the 259,761 accounts included in the study were the result of major incidents involving lost backup tapes.
  • There were 30,618 authorization attempts for new credit lines. Of those, 380 were fraudulent (and stopped).
  • There were 4 incidents of new account creation that circumvented the Debix controls (all detailed in the report).
This gives us a bit of meat to work with. The fraud rate is about 1.25% of new accounts, which is about average. Since most of the participants were exposed due to lost backup tapes, it shows either that those losses are not resulting in increased fraud, or that the bad guys are holding onto the information for longer than the (public) 1 year of protection. Debix also added a new feature recently that may lead to more interesting results. When you decline to open a new account, you have the option to immediately route your case to a private investigator on their staff, who collects the information and engages law enforcement. While I doubt we’ll get hard numbers out of that, we might get some good anecdotes on the fraud origins. On our call Debix committed to providing more statistics down the road (all anonymized, of course). We gave them a few suggestions, including some ways to add controls to their analysis, and I’m really looking forward to seeing what numbers pop out in the coming years. Ideally we’ll see more stats like this coming out of the credit agencies and financial institutions, but I’m not holding my breath. (Full disclosure: I have no business relationship with Debix, but am currently enrolled with them via a free press/pundit account.)
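As a sanity check, the fraud rate above is simple back-of-the-envelope arithmetic on the figures from the report:

```python
# Figures from the Debix report discussed above.
new_credit_attempts = 30_618   # authorization attempts for new credit lines
fraudulent_attempts = 380      # fraudulent attempts (all stopped)
circumvented = 4               # new accounts that bypassed the controls

fraud_rate = fraudulent_attempts / new_credit_attempts
print(f"Fraud rate on new credit attempts: {fraud_rate:.2%}")  # ~1.24%
```

Which rounds to the "about 1.25%" figure cited in the post.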


A Most Concise, Accurate Description Of The Problem With GRC

A good post to read over at the Burton Blog. A snippet: “Of course, the elements of G, R, C are not dead. Governing, managing risk, and responding to compliance obligations are ongoing and critical organizational tasks. The problem is conflating them into a single term. As Burton Group is inclined to say, GRC is a four-letter word that shouldn’t be spoken among polite company. Each function is deserving of its own, complete, and separate word. There’s no organization in which compliance activities, risk management, and executive governance are rolled into a single person, group, or tool. No sense creating an acronym that implies it.” My favorite part, one of those things I’m jealous I didn’t put into writing first: “If everything is ‘GRC,’ then nothing is.” Amen.


Making The Move To Multiple Browsers

For a while now I’ve been using different web browsers to compartmentalize my risk. Most of my primary browsing is in one browser, but I use another for potentially risky activities I want to isolate more. Running different browsers for different sessions isolates certain types of attacks. For example, unless someone totally pwns you with malware, they can’t execute a CSRF attack if you’re on the malicious site in one browser, but using a totally separate browser to check your bank balance. Actually, to be totally safe you shouldn’t even run both browsers at the same time. Last night I was talking with Robert “RSnake” Hansen of SecTheory about this and he finally convinced me to take my paranoia to the next level. Here’s the thing: what I’m about to describe may be overkill for many of you. Because of what I do for a living my risk is higher, so take this as an example of how far you can take things; many of you don’t need to be as paranoid as I am. On the other hand, Robert is at even higher risk, and takes even more extreme precautions. I also purposely use a combination of virtualization and browser diversity to further limit my exposure. In all cases these are completely different applications, not just instances of the same platform. My web browsers break out like this (I won’t list which specific browsers I use, except in a few cases):

  • Everyday browsing: low risk, low value sites. I use one of the main browsers, and even use it to manage my low value passwords.
  • Everyday browsing 2: slightly higher risk, but even lower value. Basically, it’s the browser in my RSS reader.
  • Blog management: a third browser dedicated to running Securosis. This is the bit Robert convinced me to start. I use it for nothing else.
  • Banking: Internet Explorer running in a Windows XP virtual machine. I only use it for visiting financial sites. To be honest, this is as much a reflection of my bank’s web app as anything else; I can deposit using my scanner at home, but only in IE on Windows.
  • High risk/research: a browser running in a non-persistent Linux virtual machine. Specifically, it’s Firefox running off the BackTrack read-only ISO. Nothing is saved to disk, and that virtual machine doesn’t even have a virtual hard drive attached.

This setup isn’t really all that hard to manage, since it’s very task-based. Now, the truth is this only protects me from some (major) web-based attacks. If my system is compromised at the host OS level, the attacker can just capture everything I’m doing and totally own me. It doesn’t prevent the browser from being that vector, so, like everyone, I take the usual precautions to limit the possibility of malware on my system (but no AV, at least not yet). For average users I recommend the following if you don’t want to go as far as I have:

  • One browser for everyday browsing. I like Firefox with NoScript.
  • Another for banking/financial stuff.
  • If you go to “those” sites, stick with a virtual machine. Oh, don’t pretend you don’t know what I’m talking about.
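The reason separate browsers blunt CSRF can be sketched in a toy example (this `Browser` class is purely illustrative, not a real browser API): each browser keeps its own cookie store, so a forged request fired from the risky browser never carries the banking browser’s session cookie.

```python
# Illustrative sketch: why separate browsers blunt CSRF.
# Each browser instance has its own cookie jar, so a forged request made
# from one browser cannot ride on another browser's authenticated session.

class Browser:
    def __init__(self, name):
        self.name = name
        self.cookies = {}          # per-browser cookie store, keyed by site

    def log_in(self, site, session_token):
        self.cookies[site] = session_token

    def request(self, site):
        # A browser attaches whatever cookies *it* holds for the site.
        return self.cookies.get(site)  # None means "not authenticated"

banking = Browser("banking-only")
everyday = Browser("everyday")

banking.log_in("bank.example.com", "session-abc123")

# A malicious page in the everyday browser forges a request to the bank.
# Because the cookie stores are separate, the forged request arrives
# without a session cookie and is useless to the attacker.
assert everyday.request("bank.example.com") is None
assert banking.request("bank.example.com") == "session-abc123"
```

Real CSRF is about the browser automatically attaching cookies to cross-site requests; splitting sessions across browser applications simply ensures there is no cookie to attach.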


The Good (Yes, Good) And Bad Of PCI

I’m still out at SANS, in a session dedicated to PCI and web application security. Now, as you readers know, I’m not the biggest fan of PCI. The truth is (this is the “bad” part) it’s mostly a tool to minimize the credit card companies’ risk by transferring as much risk and cost as possible to the merchants and processors. On the other hand (the “good” side), it’s clear that PCI is slowly driving organizations that would otherwise ignore security to take it more seriously. I’ve met with a bunch of security admins out here who tell me they are finally getting resources from the business that they didn’t have before. Sure, many of them also complain those resources cover only the bare minimum needed for compliance, but even that is significant in these cases. When it comes to web application security, it’s also a mixed bag. On the “good” side, including web application defense in requirement 6.6 is driving significant awareness that web applications are a major vector for successful attacks. On the “bad” side, 6.6 positions code review and WAFs as competing, not complementary, technologies. These tools solve very different problems, something I hope PCI eventually recognizes. I don’t totally blame them on this one, since requiring both in every organization within the compliance deadlines isn’t reasonable, but I’d like to see PCI publicly acknowledge that the “either/or” decision is one of limited resources, not that the technologies themselves are equivalent. One take-away from the event, based on conversations with end users and other experts, is that WAFs are your best quick fix, while secure coding is the way to go for long-term risk reduction.


Live at SANS WhatWorks: App Sec and Pen Testing

I’m out in Vegas at the SANS WhatWorks Summits on application security and penetration testing. I like the format of these events, which mix a few expert talks with a whole slew of user panels. I’ve previously spoken at the DLP and Mobile Encryption Summits. If you’re in Vegas, drop me a line. Otherwise, stay tuned for some posts on these topics. One of the nice things about these events is that there are actually power outlets for the audience, so between that and my EVDO card I can write live at the event. Right now I’m sitting in Jeremiah Grossman’s keynote session. His statistics on the probable number of 0-days in web applications are simply astounding. I’ve seen this content before, and it never ceases to stun me. More on that in a minute, as I dedicate a post to how we need to change our perspective on web applications…


Web Application Security: We Need Web Application Firewalls To Work. Better.

Jeremiah Grossman is just finishing up his keynote at the SANS conference on web application security. Jeremiah and I have talked a few times about the future of web application security, and we both agree that many current approaches just can’t solve the problem. It’s increasingly clear that no matter how good we are at secure programming (SDLC), and no matter how effective our code scanning and vulnerability analysis tools are, neither approach can “solve” our web application security problem. Not only do we develop code at a staggering pace, we also have a massive legacy code base. While many leading organizations follow secure software development lifecycles, and many more will adopt at least some level of code scanning over the next few years thanks to PCI 6.6, it’s naive to think that even a majority of software will go through secure development any time soon. On top of that, we are constantly discovering new vulnerability classes that affect every bit of code written in the past. And, truth be told, no tool will ever catch everything, and even well-educated people still make mistakes. Since these same issues affect non-web software, we’ve developed some reasonably effective ways to protect ourselves on that side. The key mantra is shield and patch. When we discover a new vulnerability, we (if possible) shield ourselves through firewalls and other perimeter techniques to buy time to fix (patch) the underlying problem. No, it doesn’t always work, and we still have a heck of a lot of progress to make, but it is a fundamentally sound approach. We’re not really doing this much in the web application world. The web application firewall (WAF) market is growing, but has struggled for years. Even when WAFs are deployed, they still struggle to provide effective security. If you think about it, this is one big difference between a WAF and a traditional firewall or IPS.
With old school vulnerabilities we know the details of the specific vulnerability and (usually) the exploit mechanism. With WAFs, we are trying to block vulnerability classes instead of specific vulnerabilities. This is a HUGE difference. The WAF doesn’t know the details of the application or any application-specific vulnerabilities, and thus is much more limited in what it can block. I don’t think stand-alone external WAFs will ever be effective enough to provide the security we need for web applications. Rather, we need to change how we view WAFs. They can no longer be merely external boxes protecting against generic vulnerabilities; they need tighter integration into our applications. In the long term, I’ve branded this Application and Database Monitoring and Protection (ADMP), as we create a dedicated application and database security stack that links from the browser on the front end to the DB server on the back. There are a few companies exploring these new approaches today. Jeremiah’s company, WhiteHat Security, has teamed up with F5 to provide specific vulnerability data from a web application to the F5 WAF. Fortify is moving down the anti-exploitation path with real-time blocking (and other actions) directly on the web application server. Imperva is tying together their WAF and database activity monitoring. (I’m sure there are more, but these are the web-application-specific companies taking this path that I can remember offhand.) They are all taking different approaches, but all recognize that “static” WAFs or code scanning alone are pretty limited.
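The difference between blocking vulnerability classes and blocking specific vulnerabilities can be sketched roughly like this (the rules, endpoint, and parameter names here are hypothetical, purely for illustration):

```python
import re

# A class-level rule tries to catch an entire vulnerability class (here, a
# deliberately crude SQL-injection pattern) with no knowledge of the app.
# A "virtual patch" targets one known-vulnerable parameter on one known
# endpoint -- the kind of rule that scanner-to-WAF integrations enable.

CLASS_RULE = re.compile(r"('|%27)\s*(or|union)\b", re.IGNORECASE)

VIRTUAL_PATCHES = {
    # (path, parameter): validation the parameter must satisfy
    ("/account/view", "id"): re.compile(r"^\d+$"),
}

def waf_allows(path, params):
    """Return True if the request passes both generic and targeted rules."""
    for name, value in params.items():
        if CLASS_RULE.search(value):
            return False                      # generic, app-agnostic block
        patch = VIRTUAL_PATCHES.get((path, name))
        if patch and not patch.match(value):
            return False                      # app-specific virtual patch
    return True

assert waf_allows("/account/view", {"id": "42"})
assert not waf_allows("/account/view", {"id": "42 OR 1=1"})  # virtual patch
assert not waf_allows("/search", {"q": "' OR '1'='1"})       # class rule
```

Note how the second blocked request slips past the generic class rule entirely; only the application-specific knowledge catches it, which is the whole argument for tighter WAF/application integration.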


Emergency SunSec This Wednesday! Rothman Hits Phoenix!

The legendary Mike Rothman will be in Phoenix this week, so we’re going to call an emergency session of SunSec on Wednesday to celebrate the occasion. Rumor is we might also have another surprise guest or two. I realize I’ve been a total slacker on organizing these; we really need to figure out a regular schedule at some point. We’ll be starting at Furio in Old Town Scottsdale for happy hour at 6 (we’ll probably head down early at 5), and possibly move someplace cheaper after happy hour ends. As always, email me with any questions, and we hope to see you there. SunSec is an informal gathering of anyone with an interest in security. We hang out, drink beverages, and just generally socialize.


Webcast On Tuesday: Encryption And Key Management

This Tuesday I’ll be giving a webcast for RSA on encryption and key management. It’s heavy on the data center side, focusing on SAN/NAS/tape, databases, and applications. There’s not much discussion of mobile or email, but a bit of file and folder (server based). Here’s the official description, and you can register here:

Encryption Considerations for the Enterprise: Business Trends, Impact, and Solutions

Government regulations and internal policies drive your need to secure information wherever it lives and travels. Get the facts on encryption and key management technologies during this seminar series and Q&A featuring Rich Mogull, founder of Securosis.com, who will discuss:

  • Why encrypt data?
  • Where to encrypt data?
  • What are the pros and cons of different solutions?
  • What role should enterprise key management play as part of an overall encryption strategy?
  • What is the value of centralizing encryption key management?


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3-day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts, and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.