My First MacWorld Article Is Up!

I have to admit that although Apple’s handling of security issues is often a train wreck, I’m still a big fan of Macs and other Apple products. I covered a lot of the firewall issues on this blog and over at TidBITS, but I was still excited when MacWorld asked me to write an article on using the Leopard firewall.

I really try to walk the middle ground when discussing Mac issues, which tend to get a little emotional for some people. Some of my security friends accuse me of selling out when I write an article like this, while Mac zealots cry havoc at any criticism of their favorite platform. As with everything, the truth is somewhere in the middle. Apple has a long way to go with security, but we do see them taking some baby steps in the right direction. Beating Apple over the head clearly doesn’t work, so I try to take a reasoned approach to criticism, giving them credit for the work they’ve done while offering specific suggestions for improvement where they fall short.

The truth is, even with all their faults and the critical vulnerabilities (including 0-days) we’ve seen, the average Mac user is safer than the average Windows XP user as they go through their computing days. But we also need to recognize that this won’t hold true as the popularity of the platform continues to grow. We’re seeing early signs that the bad guys are gaining interest in Macs, and there are flaws in the platform they can eventually use to cause some damage. I suspect that once this starts occurring on a large enough scale, Apple will have to respond and adopt some of the development processes and security features we see at Microsoft. If only Microsoft would learn a little about usability from Apple… then we’d have a serious fight.

Anyway, you can check it out here.

Network Security Podcast: The Hoff

Chris Hoff returned to the podcast this week to discuss the little awareness campaign we cooked up (no, he didn’t really hack me) and talk about the future of security over the next few years. I think this is one of our best episodes ever. If you’re interested in how we pundits look at the industry and recognize trends, you’ll want to listen to this one. Chris, Martin, and I dig deep into where security is headed and why. As always, you can find it at netsecpodcast.com.

Definitions: Content Monitoring and Protection, and Application and Database Monitoring and Protection

More on this later, but I’m starting to see the data security market split along two lines. One line focuses on protecting content in user workspaces and productivity applications; it starts with DLP but is moving toward what I call Content Monitoring and Protection (CMP). The other focuses on protecting content in business applications, from your web application stack to internal applications and databases. I’m starting to call this Application and Database Monitoring and Protection (ADMP), and Database Activity Monitoring is where it starts.

Since we need definitions, here’s my first stab for ADMP: products that monitor all activity in a business application and database, identify and audit users and content, and, based on central policies, protect data based on content, context, and/or activity.

For CMP, I’m sticking with my DLP definition (DLP is a terrible term, but I’m not going to fight the market): products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis.
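
To give a rough feel for what “based on central policies … through deep content analysis” means in practice, here’s a minimal Python sketch. The policy names, regex patterns, and actions are hypothetical placeholders, and real CMP/ADMP products do far deeper analysis (file cracking, fingerprinting, and database or application context) than a couple of regular expressions.

```python
import re
from dataclasses import dataclass


# Hypothetical central policies: each ties a content pattern to an action.
@dataclass
class Policy:
    name: str
    pattern: re.Pattern
    action: str  # "monitor" or "block"


CENTRAL_POLICIES = [
    Policy("us-ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "block"),
    Policy("credit-card", re.compile(r"\b(?:\d[ -]?){13,16}\b"), "monitor"),
]


def inspect(content: str) -> list[tuple[str, str]]:
    """Return (policy name, action) for every central policy the content matches.

    Real products layer on contextual analysis (user, application, destination);
    this only shows the shared monitor-then-protect control flow.
    """
    return [(p.name, p.action) for p in CENTRAL_POLICIES if p.pattern.search(content)]


if __name__ == "__main__":
    print(inspect("emailing card 4111 1111 1111 1111 to a partner"))
```

Running it flags the sample message under the hypothetical credit-card policy with a “monitor” action, which is the same evaluate-against-central-policy loop both product categories share.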

End Of Year Humor And Awareness: No Folks, Hoff Didn’t Pwn Me

Chris Hoff and I decided to have a little fun and fake some back-and-forth exploits to highlight some security risks. It’s nearing the end of the year: crunch time for some of you, boring time for the rest. We figured a little humor couldn’t hurt in either case, and we decided to blow this open early so it doesn’t get away from us.

The attack Chris described could clearly work, but I’m surprised more people didn’t pick up on the holes. While I do have a home automation system (but no cameras), I don’t know of any that use SCADA-based technologies. Then again, SCADA is going all IP, so it might not be a stretch to define my system that way. For the record, I use an Insteon system but haven’t finished the implementation yet. Bonus points to the commenters who noticed there’s no way I’d have a yard with that much green in Phoenix.

The idea of the QuickTime rtsp attack was completely real. Until Apple released the patch a day or so ago, the only defense was avoiding clicking on potentially hostile links, and I trust Chris and would click on most things he sends me. Outbound filtering (which I do on one of my machines) could block the request unless it directed me to an unusual port, something Chris is capable of. The idea of pwning my workstation is dead on, and it’s one reason I often recommend SCADA workstations be isolated from the Internet: I don’t have to take over your SCADA network if I can take over the workstation and do whatever I want when you aren’t looking.

We were planning on highlighting a few other attack vectors in the next few days. Among them were a fake pretexting of Chris’s phone (we had a viable way for me to get his SSN) and username/password sniffing from wireless access points. All are common vectors that even we security pros are a little lax with sometimes. I suspect most of you enjoyed this, and we’ll come up with something more creative for April 1.
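
Since outbound filtering carries most of the weight in that scenario, here’s a minimal sketch of the egress-allowlist idea. The port set and function are hypothetical, and real enforcement lives in the host firewall (ipfw on a Mac, for example) rather than in application code; the sketch just shows why a payload calling back on an unusual port trips such a policy.

```python
# Hypothetical egress policy: only these destination ports may leave the host.
ALLOWED_OUTBOUND_PORTS = {80, 443, 554}  # web traffic plus standard RTSP


def outbound_allowed(dest_port: int) -> bool:
    """Return True if the egress policy permits a connection to dest_port."""
    return dest_port in ALLOWED_OUTBOUND_PORTS


if __name__ == "__main__":
    for port in (443, 554, 31337):
        verdict = "allow" if outbound_allowed(port) else "block"
        print(f"outbound to port {port}: {verdict}")
```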

Dark Reading Column Up: The Perils of Predictions & Predicting Perils

My second monthly column is up over at Dark Reading: The Perils of Predictions & Predicting Perils. This is not your ordinary year-end prediction special. Here’s an excerpt:

“As the end of the year approaches, a strange phenomenon begins. As we relax and prepare for the holidays, we feel a strange compulsion to predict the future. For some, this compulsion is so overwhelming that it bursts the bounds of late night family dinners and explodes onto the pages of blogs, magazines, newspapers and the ever-dreaded year-end specials on TV. Ah, year’s end. Legions of armchair futurists slobber over their keyboards, spilling obvious dribble that they either predict every year until it finally happens or is so nebulous that they claim success if a butterfly flaps its wings in Liechtenstein.”

As you can tell, I’ve never been the biggest fan of these year-end predictions, especially in the security business. Since the days of the slide rule, scores of pundits have consistently and inaccurately predicted a devastating SCADA attack or the next big worm. Instead, I focus on two major threat trends and the security innovation they are inspiring.

My favorite line in the column is near the end, so I’ll pull it out: “Vulnerability scanning, secure software development, and programmer security training cannot solve the Web application security problem.” I’ll leave you with two words: anti-exploitation. But you should really go read the article.

Off Topic: Argh! Smart House Went Stupid

Here I am, about 30 hours away from home, and my home automation system is freaking out. Why does stuff like this only happen when I’m on the road? Time to whip out my copy of How To Prepare For The Robot Uprising. I guess I know what I’ll be fixing this weekend…

Never Bring A Knife To A Gun Fight

Oh no he didn’t! http://rationalsecurity.typepad.com/blog/2007/12/breaking-news-s.html

I should be crossing the border back to the US in about 12 hours.

Network Security Podcast Up: With Special Guest Chris Hoff

Ah, the wonders of year-end predictions. We just couldn’t help ourselves, so we invited Chris Hoff, our favorite prognosticator, to join us. This week focuses on the negative trends affecting security; Chris will join us again next week to finish up with the positive ones. As always, show notes and the podcast link are over at NetSecPodcast.com.

Permanent Link For ipfw Rules

Looks like the ipfw rules project Chris is leading is pretty popular. We’ve set up a permanent link that we’ll redirect to the latest version as we keep refining this thing. You can find it here. Thanks again to everyone who has helped on this project:

  • windexh8er: http://www.slash32.com/
  • Rob Lee: http://thnetos.wordpress.com/
  • Josh
  • Chris Pepper: http://www.extrapepperoni.com/

Data Security Lifecycle: Technologies, Part 3

There’s been a lot going on in the industry since we last covered the Data Security Lifecycle, and it’s been far too long since the previous post. Today we’ll finish off our discussion of the control technologies, and in our next post we’ll cover supportive technologies, like Identity and Access Management and network encryption, that don’t fit neatly into the lifecycle itself. Since it’s been a while, here are links to the rest of the series:

  • The Data Security Lifecycle
  • Create and Store Technologies
  • Use and Share Technologies

The final two phases, Archive and Destroy, involve fewer technologies, making this one of the shorter posts. I’m sure at least a few of you will appreciate the brevity.

Archive

Encryption: As data migrates to archived storage, especially tape and other removable media, the risk of exposure through physical loss increases. In most cases losing a copy of data doesn’t result in any disclosure, but since you can’t definitively confirm the data is safe, you have to act as if it has been disclosed. This often leads to breach disclosures or other regulatory and reputation consequences.

  • Inline Tape Encryption: An inline network appliance that automatically compresses and encrypts data as it is transferred to a tape drive or library. Solutions currently exist for Fibre Channel, iSCSI, and TCP/IP, with support for all major tape protocols. Support for mainframe protocols may be possible with virtual tape adapters. Best suited for quickly encrypting an existing infrastructure.
  • Tape Drive Encryption: Hardware encryption built into the tape drive, sometimes requiring special tapes. Key management is typically more difficult than with an inline appliance, mostly due to weak vendor offerings. Users state a strong preference for drive encryption in the long term, and key management is expected to improve over time, especially with the adoption of interoperability standards.
  • Backup Software Encryption: Software encryption built into the backup tool. Performance is significantly worse than with hardware encryption, but for lower-volume backups (especially in distributed environments) it’s often sufficient. Be careful when choosing this option to make sure you can effectively retain and manage keys over the life of the tapes.
  • Mainframe Tape Encryption (Hardware Accelerated): Some mainframes can use hardware cryptographic accelerator cards in combination with tape encryption software, eliminating the need for adapters or encrypted drives when creating mainframe tapes. Accelerator card support is included as an option in backup software from multiple vendors, often obviating the need for additional encryption software.
  • Third-Party Software Encryption: Third-party encryption software designed to work with one or more backup software packages. Some products offer performance that exceeds the encryption built into backup software, superior key management, or support for multiple backup packages in a heterogeneous environment.
  • Inline SAN/NAS Encryption: An inline network appliance, or a feature of a SAN controller, that encrypts all data moving to mass storage. Protects against physical loss of drives when SAN or NAS is used for archival storage, but offers neither separation of duties nor protection from network and software attacks.
  • Hard Drive Encryption (Drive Level): When hard drives are used for archival storage, drive-level encryption can protect data from physical loss. As with inline SAN/NAS encryption, it does not protect against network or software attacks. Requires external key management.
  • Field-Level Encryption: Data already encrypted in a database remains secure in archives. In some cases you may consider encrypting data normally left unencrypted in a live database when it moves to an archived database.
  • Software Encryption: For file and media encryption; covered in the Store section. Also usable for archived storage, including CDs/DVDs.

Asset Management: Simply losing track of archival media can have consequences similar to a breach, since you don’t know whether it’s been lost or misplaced. The majority of public breach disclosures are the result of lost media (including laptops and tapes) that may or may not have ended up in the hands of the bad guys. Asset management tools, including software, tagging, and tracking technologies, reduce the risk of lost media.

Destroy

  • Crypto-Shredding: Deliberate destruction of all encryption keys essentially destroys the data, until (if ever) the encryption algorithm used is broken or can be brute-forced within a reasonable time period. This is sufficient for nearly every use case in a private enterprise, but shouldn’t be considered acceptable for highly sensitive government data. Encryption tools must offer this as a specific feature to absolutely ensure the keys are unrecoverable, and dedicated enterprise key management tools may be needed. (There’s a small sketch of the idea after this list.)
  • Disk/Free-Space Wiping: Software or hardware designed to destroy data on hard drives and other media. At a minimum the tool should overwrite all possible space on the media 1-3 times; 7 passes are recommended for especially sensitive data. Merely formatting over the data is not sufficient. Secure wiping is highly recommended for any systems with sensitive data that are sold or reused, especially laptops and desktops. File-level secure deletion tools exist for destroying just a portion of data in active storage, but they are not as reliable as a full media wipe.
  • Physical Destruction: The possibilities for physically destroying media are limited only by your imagination, but they break out into two categories:
      • Degaussing: Use of strong magnets to scramble magnetic media like hard drives and backup tapes. Dedicated solutions should be used to ensure data is unrecoverable, and it’s highly recommended you confirm the effectiveness of a degaussing tool by randomly performing forensic analysis on wiped media.
      • Physical Destruction: Complete physical destruction of the media, focusing on shredding the actual magnetic media (platters or tape).
  • Content Discovery: When truly sensitive data reaches end of life, you need to make sure the destroyed data is really destroyed. Content discovery tools help ensure that no copies or versions of the data remain accessible in the enterprise. Considering how complex our storage, archive, and backup strategies are today, this can’t absolutely guarantee the data is unrecoverable, but it does reduce the risk of subsequent retrieval.
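
To make the crypto-shredding entry concrete, here’s a minimal sketch, assuming the third-party Python `cryptography` package; the `ShreddableArchive` class and its method names are hypothetical illustrations, not any vendor’s feature. The point is simply that if data only ever exists as ciphertext, destroying every copy of the key is equivalent to destroying the data.

```python
# Minimal crypto-shredding sketch (assumes the third-party 'cryptography' package).
from cryptography.fernet import Fernet


class ShreddableArchive:
    """Toy archive that stores only ciphertext and is 'shredded' by key destruction."""

    def __init__(self) -> None:
        # In practice the key lives in a dedicated key management system.
        self._key = Fernet.generate_key()
        self._records: list[bytes] = []

    def write(self, plaintext: bytes) -> None:
        self._records.append(Fernet(self._key).encrypt(plaintext))

    def read_all(self) -> list[bytes]:
        if self._key is None:
            raise RuntimeError("key destroyed; ciphertext is unrecoverable")
        return [Fernet(self._key).decrypt(token) for token in self._records]

    def crypto_shred(self) -> None:
        # Destroying the key renders the data unreadable; the ciphertext itself
        # never has to be touched.
        self._key = None


if __name__ == "__main__":
    archive = ShreddableArchive()
    archive.write(b"customer record 42")
    print(archive.read_all())   # readable while the key exists
    archive.crypto_shred()
    try:
        archive.read_all()
    except RuntimeError as err:
        print(err)
```

The hard part in a real enterprise is the “every copy of the key” clause, which is why dedicated key management tooling matters so much for this technique.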

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.