Off Topic: A Little Perspective

This has nothing to do with security other than the fact that Mike Rothman is a security analyst. Sometimes it’s worth sitting back and evaluating why you’re in the race in the first place. It’s all too easy to get caught up in the insanity of day-to-day demands or the incredibly deceptive priorities of the corporate and government rat races. A few months ago I took a step back and decided to reduce travel, stay healthy, and start this blog. I wanted a more personal outlet for writing on topics, and in a style, that would be inappropriate at my day job (in other words, more fun). My challenge is running this site in a way that doesn’t create a conflict of interest with my employer, so I don’t publish anything here that I should be publishing there. Mike just went off and started his own company to support his real priorities. You should really read this.


Experiences with FileVault- Mac Encryption

Believe it or not, despite accusations that my coverage of the Mac wireless hack is all part of some anti-Apple black PR conspiracy, I’m a Mac user. One that’s so addicted I bought my Mom one and had it shipped to me so I could “configure” it. Okay, really I had to send mine in for service and I needed another Intel Mac so I could run it off an external hard drive with an image of my MacBook Pro. I mean I might have been without it for, like, 5-7 days and that’s just not acceptable. How can I carry out my anti-Apple black PR conspiracy without a Mac to write my blog entries on?

But I have something I need to admit. It’s sort of embarrassing, but it’s time to share. You see, I’m a security professional. Not just a security professional, but one that focuses on data security. The kind that gets paid to run around telling the media how stupid everyone is for not protecting their data and doing things like, uh, encrypting their hard drives. Not that I… um… was encrypting my laptop.

You see, I was in a bit of denial. At first it was because I still used my corporate PC and didn’t have access to good encryption software that wouldn’t mess up my configuration. Which was really me just lying to myself. Later I told myself I was so good at physical security, and paranoid in general, that I’d never let my laptop get stolen. Yep, another lie. Finally, the ultimate in self-deception: “well, I really don’t have anything sensitive on there in the first place”. Right. None of those “not for disclosure” PowerPoint presentations from vendors are really sensitive, are they? I mean how much personal stuff like Social Security numbers or credit card info could really be hiding in Outlook (in my Parallels virtual machine) or Mail.app? I mean really!

When I decided to attend Black Hat and Defcon (home of the world’s most hostile network) right after an international trip to Australia and China, I figured it might be a good time to get off my ass and finally encrypt my laptop. For those of you not familiar with Macs, Apple has included encryption in the OS X operating system for a few years now in a feature called FileVault. But there’s been a lot of debate over how “safe” FileVault is; not from a security standpoint, but from a reliability/recovery standpoint. In a recent thread on the TidBITS mailing list it didn’t seem that many people had much experience with FileVault, and perhaps some of the rumors were unfounded. Or not. Eventually the guilt caught up and it was time to take the encryption plunge. And so far FileVault is working like a 128-bit AES charm. (details after the jump)

FileVault isn’t the whole-drive encryption I typically recommend to enterprise clients. Rather than encrypting the entire hard drive, FileVault encrypts the user’s entire home directory. It’s a model well suited to Unix-style operating systems like OS X, where nearly any personal file or setting lives in the home directory, as opposed to Windows systems where data tends to be more distributed throughout the OS. OS X also includes an option to encrypt virtual memory, so even temporary swap files are protected. The combination of encrypting the home directory and all virtual memory isn’t perfect security, but it’s good for most of us mere mortals worried about losing a laptop or hard drive. FileVault works by creating an encrypted disk image for your home folder (an encrypted sparse image file).
When you log in, the image mounts and data is transparently encrypted and decrypted using 128-bit AES (Advanced Encryption Standard) as it moves to and from disk. Log out and it unmounts, appearing as one big encrypted file. That’s where most people’s fears arise- your entire home directory, every file including photos, music, video, email, and everything else, sits in one big file just waiting for a few corrupt bits to make it unreadable. If your system suddenly crashes and corrupts the image (yes, even Macs crash) there’s the possibility of losing everything. For more details on the inner workings of FileVault check out this article at macdevcenter.com.

After doing some research I took a few steps to prep my system. To help with performance I moved my iTunes library to /Users/Shared so it would be out of my home directory and keep the image file smaller. My photos were already on an external drive and I only have a few videos. That dropped around 30 GB from my home directory. I then created a new user account for running backups. I use the excellent SuperDuper to back up my Macs to external drives. By using a separate backup account, the entire encrypted disk image is backed up and thus protected even on the external drive. Since SuperDuper creates bootable copies of hard drives you get the nice option of being able to run completely off the external drive, on any Mac, should you lose your primary drive or even the entire computer. No restore needed. At this point I also committed to backing up nightly instead of weekly.

From there it was a simple matter of going into the Security preference pane, setting a master password (just in case I forget/screw up my primary), enabling virtual memory encryption, and turning on FileVault. An hour or so later it finished encrypting my home directory and I was good to go.

So how did it work? Good. Maybe even great. I’ve been running it for about 6 weeks now and haven’t had any problems. Performance seems as good as before, although I do have 2 GB of memory and a 7200 rpm hard drive. Even with a few system crashes I haven’t experienced any corruption. What’s also nice is that since I do most of my work-related computing in a Windows XP Parallels virtual machine, even my Windows data now lives inside the encrypted image.
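
As an aside, you can play with the same kind of container FileVault uses right from the command line with OS X’s hdiutil tool. Here’s a minimal sketch in Python that wraps hdiutil to create and mount an AES-128 encrypted sparse image. It’s illustrative only: this is not how Apple implements FileVault, and the image name, volume name, and size are made up for the example, though the hdiutil flags shown are the standard documented ones (check man hdiutil on your system):

    # Illustrative sketch: create and mount an AES-128 encrypted sparse
    # image, the same kind of container FileVault builds for a home folder.
    import getpass
    import subprocess

    def create_encrypted_image(path, volname, size="10g"):
        password = getpass.getpass("Image password: ")
        # -stdinpass reads the passphrase from stdin instead of a GUI prompt
        subprocess.run(
            ["hdiutil", "create", "-size", size, "-type", "SPARSE",
             "-encryption", "AES-128", "-fs", "HFS+",
             "-volname", volname, "-stdinpass", path],
            input=password.encode(), check=True)

    def mount_image(path):
        # While attached, files are encrypted/decrypted transparently
        password = getpass.getpass("Image password: ")
        subprocess.run(["hdiutil", "attach", "-stdinpass", path],
                       input=password.encode(), check=True)

    if __name__ == "__main__":
        create_encrypted_image("vault.sparseimage", "Vault")
        mount_image("vault.sparseimage")

Detach the image and all that remains on disk is one opaque encrypted file, which is exactly why a few corrupt bits can take everything with it.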


Voting Machine Idiocy- and a Proposal for a Reasonable Standard

Ah Diebold, how we’ve missed you. In yet another example of gross negligence with our most sacred political process we find our favorite manufacturer of ATMs and voting machines yet again in the news, this time with a series of failures in the Alaskan primary.

From Slashdot: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/15859396/article.pl
From Engadget: http://www.engadget.com/2006/08/24/diebold-machines-fail-in-alaska-primary/

For those of you that don’t follow the twists and turns of this seriously shady company, Diebold has a long history of insecure voting machines, battling any attempt to regulate better voting security, and attacking anyone who suggests they might have any teensy-weensy wittle problem that might let someone, you know, hijack an election. For more on the past check out the work by Black Box Voting and the very respected Avi Rubin.

This really pisses me off. Voting, whatever your political party (except maybe you anarchists and fascists), is the ultimate expression of a democracy. If we can’t protect the voting process, we might as well give up and just sell the country to the highest bidder (and yes, I feel the same way about poll taxes, gerrymandering, and anything else that interferes with the right to vote).

I have two simple suggestions to resolve this idiocy:

  • Require a voter-verified paper trail, with random audits, at the federal level for all elections (right now only certain states require it).
  • Hold voting machines to the same security standards as gambling machines!

Think about how highly secure gambling machines are. I first heard this suggestion from Ray Wagner (a fellow analyst at my day job) and it was so simple in concept it amazes me every time someone claims higher standards are just too hard. Heck, we already have testing labs, protocols, and procedures in place. I’m not a conspiracy theorist, despite many hours dedicated to watching the X-Files, but sometimes ya just gotta wonder….


Home Security Tip of the Day: SpamSieve for Mac

One of the advantages of being a paranoid security geek is that you slowly acquire a familiarity with consumer security tools to prevent any of the bad nastiness you comment on from happening to your own system. While I’m sure some of my remotely hosted servers will get cracked on occasion, since I don’t have full control over them, I’ve taken it as a personal point of honor to defend my personal computers from www.youvebeenhacked.ru to the bitter end. Every now and then on slow news days I’ll highlight some of these tools and techniques to help readers protect their own systems. Since I use Macs, PCs, and even a dash of Linux there should be some good nuggets for all platforms.

Disclaimer- I do not accept any advertising (or anything else) from any vendor, anywhere, end of story. If I discuss a vendor on this site it’s because I think the product is actually useful. I will also NEVER endorse any vendor I cover professionally on Securosis!

And I’m going to start with spam. I really hate spam. Seriously. And if you want to skip to the end just go buy SpamSieve (Mac only), which is one of those gems very familiar to you Mac geeks. But for those of you that like to read…

Like everyone on the Internet not sending this crap, I despise spam. I still remember the early days when commercial business was forbidden on the Internet. No spam. No popups. No phishing. No Amazon. No Google. No ThinkGeek. No… oh wait, never mind. Spam is more than an annoyance; it’s a pretty serious security issue. Most phishing attacks, Internet fraud, and viruses spread using spam. While I don’t know the exact economics involved, I suspect more spam today is for fraudulent businesses and goods than for legitimate, but annoying, marketing. Sorry, even the porn spam guys. Spam is apparently so darn profitable that a serious chunk of the botnets today are dedicated to spreading it. But most of you already know this.

For a while I was reasonably immune to spam. My work email was protected with a commercial server-based product and the not-too-bad Outlook junk mail filters. Yahoo does a good job, as do the other public servers where I keep accounts. The real problem was my long-time personal email on a private domain. That account was hard to guess and off the map for a long time, and spam was rare. What did make it through was caught by the server filter we used (SpamAssassin). But one tragic day I ended up on a political email list and my blissful childhood ended. One bad list administrator managed to get everyone on that list firmly in the sights of the evil spammers. Within weeks 70% of my email on my once-pristine account was spam. Until I finally downloaded SpamSieve.

SpamSieve is what’s known as a Bayesian filter- which means it uses all sorts of math I’ll never understand to recognize patterns. I won’t review it or dig into details. All you need to know is that if you are on a Mac and have spam in your Inbox, you need to go buy this. It took me only 5 days of the free 30-day trial before I whipped out the credit card and paid my $25.00. I get less than one spam message in my Inbox per week, and it’s only ever blocked one message I wanted to read (you can check what it filters). It takes a few days to a week to train, but that’s really easy. Unlike most computer software, it just works. ‘nuff said.
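
For the curious, the core idea behind a Bayesian filter is simple even if production implementations aren’t. Here’s a toy sketch in Python of word-frequency Bayesian scoring; it’s illustrative only and is not SpamSieve’s actual algorithm (the class name, smoothing choices, and training samples are all made up for the example):

    import math
    import re
    from collections import Counter

    class ToyBayesFilter:
        # Toy word-frequency Bayesian classifier; illustrative only.
        def __init__(self):
            self.spam, self.ham = Counter(), Counter()
            self.nspam = self.nham = 0

        def _words(self, text):
            return re.findall(r"[a-z']+", text.lower())

        def train(self, text, is_spam):
            (self.spam if is_spam else self.ham).update(self._words(text))
            if is_spam:
                self.nspam += 1
            else:
                self.nham += 1

        def spam_probability(self, text):
            # Prior odds from message counts, then a log-likelihood ratio
            # per word, with +1 (Laplace) smoothing so an unseen word
            # doesn't zero out the whole score.
            log_odds = math.log((self.nspam + 1) / (self.nham + 1))
            for w in self._words(text):
                p_spam = (self.spam[w] + 1) / (sum(self.spam.values()) + 2)
                p_ham = (self.ham[w] + 1) / (sum(self.ham.values()) + 2)
                log_odds += math.log(p_spam / p_ham)
            return 1 / (1 + math.exp(-log_odds))

    f = ToyBayesFilter()
    f.train("cheap meds buy now", is_spam=True)
    f.train("meeting notes attached", is_spam=False)
    print(f.spam_probability("buy cheap meds"))  # well above 0.5

You train it on your own spam and good mail, and the per-word probabilities do the rest; that’s why it takes a few days of corrections before a filter like this settles down.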


Another Take on the Mac Wireless Hack

On Friday the Mac wireless hack issue exploded again after Apple PR issued a carefully worded press release. Next thing you know one of my favorite sites, The Unofficial Apple Weblog, posts a headline that’s just wrong. There have been a lot of really bad posts on this topic, but John Gruber at Daring Fireball winds his way through the press and blog hype in a well-reasoned article, The Curious Case of the Supposed MacBook Wi-Fi Hack. John’s reasoning is strong, but I believe we can take his assumptions in a different direction and finish with essentially the opposite results.

First some full disclosure- I was at Black Hat and Defcon, talked with Maynor and Ellch, and have followed up with Maynor and SecureWorks since the event. I won’t be revealing any secret information here; I’ll just analyze John Gruber’s assumptions and see how his conclusions might change. John and I also emailed a bit on this issue over the weekend (he’s on vacation this week, so might not be able to respond). For those of you with short attention spans: I believe Maynor and Ellch will emerge with their reputations intact, and that they have been trying to do the right thing from the start. If I’m wrong I’ll be the first to call myself on it and apologize, but I really don’t expect that to happen.

John’s first assumption is: “What’s notable about this disclosure is that it is about the driver. We already know, just from watching the demonstration video, that it was also based on a third-party card. This means that either (a) the exploit they discovered uses neither the MacBook’s built-in card nor Mac OS X’s built-in driver; (b) the exploit they discovered works against both the third-party driver demonstrated in the video and against Apple’s standard driver, and they have inexplicably decided to post this disclaimer to explicitly describe only what is being demonstrated in the video; or (c) that the ‘experts’ at SecureWorks do not understand the difference between a driver and a card. My money is on (a).”

Let’s explore option (b), especially the last part: “…they have inexplicably decided to post this disclaimer to explicitly describe only what is being demonstrated in the video” (bold added). I propose an alternative: that they purposely posted the disclaimer to explicitly describe only what is being demonstrated in the video. Why would they do this? Not all security researchers believe in full disclosure. If you are one of those researchers and you don’t want to disclose the details of an unpatched vulnerability, but do want to demonstrate the class of vulnerability (device driver exploits), you might choose to demonstrate it using an unidentified device. In the background you would notify any affected vendors and give them time to respond. If you show the attack on the built-in wireless device you instantly identify the vendor involved. An anonymous third-party card avoids this exposure.

Let’s move to the next few points, which focus on Brian Krebs. John states: “The reason this is notable is that if (a) is true (that the vulnerability they discovered does not apply to the standard AirPort driver software from Apple) it entirely contradicts Brian Krebs’s original and much-publicized story. Krebs wrote (emphasis added): ‘The video shows Ellch and Maynor targeting a specific security flaw in the Macbook’s [sic] wireless “device driver,” the software that allows the internal wireless card to communicate with the underlying OS X operating system. While those device driver flaws are particular to the MacBook – and presently not publicly disclosed – Maynor said the two have found at least two similar flaws in device drivers for wireless cards either designed for or embedded in machines running the Windows OS. Still, the presenters said they ultimately decided to run the demo against a Mac due to what Maynor called the “Mac user base aura of smugness on security.”’”

Brian is a reporter, and as such has different motivations than a security researcher. Brian posted this information and stands by it. Maynor and Ellch have followed a policy of not commenting on the potential vulnerability of native MacBook wireless drivers, so we have a situation where Brian reported something his sources will neither confirm nor deny. Why might they do this? If the vulnerability was real, they may not wish to disclose it until the vendor involved issues a patch. For this to be true they would have to have informed Apple (and the anonymous third-party device vendor), and said vendor wasn’t ready, for whatever reasons, to issue a patch. Since we’re only a few weeks from the initial disclosure, we’re still in a reasonable timeframe. Remember, if they confirm Brian’s post they release enough details that the vulnerability could be replicated. But they haven’t denied the statement either, which means either it’s true, Brian is wrong, or they lied. I don’t believe this is something they would lie about. Brian is now in the unenviable position of trying to justify his reporting without confirmation from his sources. While I’m not a reporter (just an analyst and blogger), I’ve come close to similar situations and they’re no fun.

Next we have to look at Apple’s official response. John states: “In response to SecureWorks’s admission that their demonstration did not exploit the built-in driver, Apple on Friday released a statement regarding the supposed vulnerability. Lynn Fox, Apple’s director of Mac PR, told Macworld: ‘Despite SecureWorks being quoted saying the Mac is threatened by the exploit demonstrated at Black Hat, they have provided no evidence that in fact it is. To the contrary, the SecureWorks demonstration used a third party USB 802.11 device – not the 802.11 hardware in the Mac – a device which uses a different chip and different software drivers than those on the Mac. Further, SecureWorks has not shared or demonstrated any code in relation to the Black Hat-demonstrated exploit…’”


Concerts vs. Airports: The Role and Effectiveness of Security Screening in Public Spaces

As previously posted, I have a fair bit of experience with security screening in large facilities. With all the hype about airports these days it’s a good time to review the screening process and the role it plays in securing public areas. While one of the risks of security is believing expertise in one domain means expertise in all areas, I believe large facilities/events and airports are related closely enough that we can apply the lessons of one to the other.

In summary: the security screening process is an effective tool for reducing risk in controlled spaces, but an extremely ineffective tool for completely eliminating risk. Screening only works when used in conjunction with other security controls, both inside and outside the area/facility being protected, in a layered model. Good old defense in depth applies just as much to the physical world as the electronic one. Screening has improved a bit in U.S. airports since 9/11, but it seems to be relied on too heavily and just can’t provide the level of protection either the public or politicians seem to demand from it. Continually increasing the scrutiny of airport screening beyond normal levels, or increasing ID requirements, will not significantly reduce the risk of a successful attack. At this point the best way to reduce the chances of a successful attack is to use additional security controls spread throughout the air travel system.

I’ll talk a bit about how we screened at events, but if you just want to know about airports you might want to skip to the end. Back in the 90’s we used to perform physical searches on most of the people coming to concerts and sports events. Screening was just one of the many tools we used to ensure public safety at the event, and it worked well for what we expected from it. Our screening met four goals:

  • Check tickets or authenticate ID (depending on the event)
  • Reduce prohibited items, like alcohol or cameras
  • Reduce dangerous items like weapons/drugs
  • Profile the person for entry (stopping extreme drunks, identifying problem children)

(more after the jump…)

Over the 10 years I worked event security I probably hand searched tens of thousands of people. It’s definitely not as fun as it sounds (and is surprisingly hard on the knees). We would vary screening considerably based on the risk profile of the event. A Barry Manilow show might get a quick visual check, while the latest skinhead gig would border on a strip search. We didn’t use metal detectors due to a combination of cost and limited effectiveness. If all you’re looking for is metal they’re not too bad, but they suck for snagging boda bags full of Jack or vials of coke.

One of the nice things about our events is that they were technically private- every ticket included a disclaimer that entry was solely at our discretion. You didn’t have to agree to a search, but we didn’t have to let you in. It seems harsh, but some events can get a little rough. If we let everyone in with whatever they wanted, things could get out of control in some very dangerous ways. Not for us, but for the people attending. I think it was a little worse back then; I rarely see the same kinds of brawls and injuries when I go to a show these days.
Screening for any particular show was designed using a few factors:

  • Size of the event
  • Number of staff and their experience level
  • Nature of the event/audience
  • Special requests from the performer or event sponsor (like no cameras)
  • Budget

A football game is huge- Folsom Field at the University of Colorado seats around 53,000, while the old Mile High Stadium held just over 70,000. To cover an event of that size you need a large pool of cheap labor willing to work a single event (or a handful), screening at a dozen or more gates with 5-15 screeners per gate. There’s no way to train an inconsistent group like that, so we’d give them a quick briefing and place one or more experienced supervisors and regular staff with them. In this case the goal of screening is mostly to reduce the amount of booze making its way inside.

When you convert a stadium to a concert venue that same facility might increase capacity by 30,000 while increasing the risk profile. Guns N’ Roses/Metallica was around 100,000. As much as we’d have liked otherwise, we had to accept that screening at an event of that size would be only somewhat effective, so we beefed up security inside the event itself. We might run 50% more staff at a stadium concert than a football game. Smaller shows usually had smaller budgets.

For shows of all sizes we had to try to match staffing to get the attendees in as quickly as possible. Most people show up within an hour of a show starting, so we might have to hand search 3,000-10,000 people in that time, and we’d balance staff and budget as best we could. For high-risk shows we’d use the most experienced staff possible and rotate frequently to keep people fresh. There’s no manual for this stuff, but after working enough shows you get a good feel for the different kinds of crowds: when they’ll show up at the doors, what they’ll try to sneak in (everyone sneaks something in), and what to look for in terms of behavior and dress. You’d be surprised at which shows were the best or worst. Buffett concerts and Dead shows had plenty of fights and injuries, while speed metal shows were usually pretty tame aside from the mosh pits.

But screening was only one small part of our security controls. Security typically started in the parking lots and roads around the event, with a mix of identified staff and a few people in regular clothes. Out in the parking lots we’d look for problems, take care of the drunks, and give early verbal instructions…
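
As an aside, the staffing math behind all this is simple enough to sketch. The numbers below are purely hypothetical (a made-up sustained search rate, not actual figures from any of the events above), but they show why gate staffing balloons so quickly for big shows:

    # Rough gate-staffing estimate: how many screeners do you need to
    # hand search a crowd that mostly arrives in the hour before showtime?
    def screeners_needed(expected_crowd, arrival_window_hours=1.0,
                         searches_per_screener_hour=120):
        # Hypothetical rate: roughly 30 seconds per search, sustained
        arrivals_per_hour = expected_crowd / arrival_window_hours
        return int(arrivals_per_hour / searches_per_screener_hour) + 1

    for crowd in (3000, 10000):
        print(crowd, "attendees ->", screeners_needed(crowd), "screeners")
    # 3000 -> 26 screeners; 10000 -> 84. This is why the biggest events
    # get quicker per-person screening and more security inside instead.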


Concerts vs. Airports- the Really Short Version

After posting Concerts vs. Airports: The Role and Effectiveness of Security Screening in Public Spaces I realized it was a tad long and I might bore some of you, so here are the crib notes. For about ten years I worked, and eventually directed, security for large events like concerts and football games. There are some lessons we can apply to airline screening, since both involve securing public spaces and large crowds:

  • Screening is just one layer of security, but in airports it’s treated as practically the only layer.
  • At concerts we relied more heavily on inside security to fill the gaps in screening.
  • At concerts we used more behavioral profiling, earlier in the system. We never stopped profiling and watching once someone was inside.
  • Technology is only good at catching certain things, and can’t catch everything.
  • Increasing screening, but not the rest of security, will only piss people off and won’t significantly improve security.
  • 80% or more of airport security seems to start and stop with screening. This might look good, but isn’t really secure.
  • Computers suck at profiling and are much more likely to catch a good guy than a bad guy.

No security is perfect, and a determined and intelligent attacker could probably defeat most security where we allow public access. But by adding additional non-intrusive security controls we can rely less on screening and increase security while improving the flight experience. We need to build more layers into air transport security, not try to build a single really big wall. We have defense in depth at concerts and football games; why not airports? There’s more in the main post, but you get the idea.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.