Another Take on the Mac Wireless Hack

On Friday the Mac wireless hack issue exploded again after Apple PR issued a carefully worded press release. The next thing you know, one of my favorite sites, The Unofficial Apple Weblog, posts a headline that's just wrong. There have been a lot of really bad posts on this topic, but John Gruber at Daring Fireball winds his way through the press and blog hype in a well-reasoned article, The Curious Case of the Supposed MacBook Wi-Fi Hack. John's reasoning is strong, but I believe we can take his assumptions in a different direction and finish with essentially the opposite results.

First, some full disclosure: I was at Black Hat and Defcon, talked with Maynor and Ellch, and have followed up with Maynor and SecureWorks since the event. I won't be revealing any secret information here; I'll just analyze John Gruber's assumptions and see how his conclusions might change. John and I also emailed a bit on this issue over the weekend (he's on vacation this week, so he might not be able to respond).

For those of you with short attention spans: I believe Maynor and Ellch will emerge with their reputations intact, and that they have been trying to do the right thing from the start. If I'm wrong I'll be the first to call myself on it and apologize, but I really don't expect that to happen.

John's first assumption is:

"What's notable about this disclosure is that it is about the driver. We already know, just from watching the demonstration video, that it was also based on a third-party card. This means that either (a) the exploit they discovered uses neither the MacBook's built-in card nor Mac OS X's built-in driver; (b) the exploit they discovered works against both the third-party driver demonstrated in the video and against Apple's standard driver, and they have inexplicably decided to post this disclaimer to explicitly describe only what is being demonstrated in the video; or (c) that the 'experts' at SecureWorks do not understand the difference between a driver and a card. My money is on (a)."

Let's explore option (b), especially the last part: "…they have inexplicably decided to post this disclaimer to explicitly describe only what is being demonstrated in the video" (emphasis added). I propose an alternative: that they purposely posted the disclaimer to explicitly describe only what is being demonstrated in the video.

Why would they do this? Not all security researchers believe in full disclosure. If you are one of those researchers and you don't want to disclose the details of an unpatched vulnerability, but you do want to demonstrate the class of vulnerability (device driver exploits), you might choose to run the demonstration on an unidentified device. In the background, you would notify any affected vendors and give them time to respond. If you show the attack on the built-in wireless device you instantly identify the vendor involved; an anonymous third-party card avoids that exposure.

Let's move to the next few points, which focus on Brian Krebs. John states:

"The reason this is notable is that if (a) is true (that the vulnerability they discovered does not apply to the standard AirPort driver software from Apple) it entirely contradicts Brian Krebs's original and much-publicized story. Krebs wrote (emphasis added): 'The video shows Ellch and Maynor targeting a specific security flaw in the Macbook's [sic] wireless "device driver," the software that allows the internal wireless card to communicate with the underlying OS X operating system. While those device driver flaws are particular to the MacBook – and presently not publicly disclosed – Maynor said the two have found at least two similar flaws in device drivers for wireless cards either designed for or embedded in machines running the Windows OS. Still, the presenters said they ultimately decided to run the demo against a Mac due to what Maynor called the "Mac user base aura of smugness on security."'"

Brian is a reporter, and as such has different motivations than a security researcher. Brian posted this information and stands by it. Maynor and Ellch, meanwhile, have followed a policy of not commenting on the potential vulnerability of the native MacBook wireless drivers. So we have a situation where Brian reported something, and his sources have yet to either confirm or deny it.

Why might they do this? Because if the vulnerability is real, they may not wish to disclose it until the vendor involved issues a patch. For this to be true they would have to have informed Apple (and the anonymous third-party device vendor), and said vendor wasn't ready, for whatever reason, to issue a patch. Since we're only a few weeks from the initial disclosure, we're still in a reasonable timeframe. Remember, if they confirm Brian's post they release enough detail about the vulnerability that it could be replicated. But they haven't denied the statement either, which means one of three things: it's true, Brian is wrong, or they lied. I don't believe this is something they would lie about.

Brian is now in the unenviable position of trying to justify his reporting without confirmation from his sources. While I'm not a reporter (just an analyst and blogger), I've come close to similar situations and they're no fun.

Next we have to look at Apple's official response. John states:

"In response to SecureWorks's admission that their demonstration did not exploit the built-in driver, Apple on Friday released a statement regarding the supposed vulnerability. Lynn Fox, Apple's director of Mac PR, told Macworld: 'Despite SecureWorks being quoted saying the Mac is threatened by the exploit demonstrated at Black Hat, they have provided no evidence that in fact it is. To the contrary, the SecureWorks demonstration used a third party USB 802.11 device – not the 802.11 hardware in the Mac – a device which uses a different chip and different software drivers than those on the Mac. Further, SecureWorks has not shared or demonstrated any code in relation to the Black Hat-demonstrated […]'"

Concerts vs. Airports: The Role and Effectiveness of Security Screening in Public Spaces

As previously posted, I have a fair bit of experience with security screening at large facilities. With all the hype about airports these days, it's a good time to review the screening process and the role it plays in securing public areas. One of the risks in security is believing that expertise in one domain means expertise in all of them, but I believe large facilities/events and airports are related closely enough that we can apply the lessons of one to the other.

In summary: security screening is an effective tool for reducing risk in controlled spaces, but an extremely ineffective tool for completely eliminating risk. Screening only works when used in conjunction with other security controls, both inside and outside the area/facility being protected, in a layered model. Good old defense in depth applies just as much to the physical world as to the electronic one. Screening has improved a bit in U.S. airports since 9/11, but it seems to be relied on too heavily, and it just can't provide the level of protection the public and politicians seem to demand from it. Continually increasing the scrutiny of airport screening beyond normal levels, or piling on ID requirements, will not significantly reduce the risk of a successful attack. At this point the best way to reduce the chances of a successful attack is to use additional security controls spread throughout the air travel system.

I'll talk a bit about how we screened at events, but if you just want to know about airports you might want to skip to the end.

Back in the 90s we used to perform physical searches on most of the people coming to concerts and sports events. Screening was just one of the many tools we used to ensure public safety at the event, and it worked well for what we expected of it. Our screening met four goals:

  • Check tickets or authenticate ID (depending on the event)
  • Reduce unallowed items, like alcohol or cameras
  • Reduce dangerous items, like weapons and drugs
  • Profile each person on entry (stopping extreme drunks, identifying problem children)

Over the 10 years I worked event security I probably hand searched tens of thousands of people. It's definitely not as fun as it sounds (and is surprisingly hard on the knees). We would vary screening considerably based on the risk profile of the event: a Barry Manilow show might get a quick visual check, while the latest skinhead gig would border on a strip search. We didn't use metal detectors, due to a combination of cost and limited effectiveness. If all you're looking for is metal they're not too bad, but they suck at snagging boda bags full of Jack or vials of coke.

One of the nice things about our events is that they were technically private: every ticket included a disclaimer that entry was solely at our discretion. You didn't have to agree to a search, but we didn't have to let you in. It seems harsh, but some events can get a little rough, and if we let everyone in with whatever they wanted, things could get out of control in some very dangerous ways. Not for us, but for the people attending. I think it was a little worse back then; I rarely see the same kinds of brawls and injuries when I go to a show these days.
Screening for any particular show was designed around a few factors:

  • Size of the event
  • Number of staff, and their experience level
  • Nature of the event/audience
  • Special requests from the performer or event sponsor (like no cameras)
  • Budget

A football game is huge: Folsom Stadium at the University of Colorado seats around 53,000, while the old Mile High Stadium held just over 70,000. To cover an event of that size you need a large pool of cheap labor willing to work a single event (or a handful of them), screening at a dozen or more gates with 5-15 screeners per gate. There's no way to train an inconsistent group like that, so we'd give them a quick briefing and place one or more experienced supervisors and regular staff with them. In this case the goal of screening is mostly to reduce the amount of booze making its way inside.

When you convert a stadium into a concert venue, that same facility might gain 30,000 in capacity while its risk profile goes up. Guns N' Roses/Metallica drew around 100,000. As much as we'd have liked otherwise, we had to accept that screening at an event of that size would be only somewhat effective, so we beefed up security inside the event itself. We might run 50% more staff at a stadium concert than at a football game. Smaller shows usually had smaller budgets.

For shows of all sizes we had to match staff to get attendees in as quickly as possible. Most people show up within an hour of a show starting, so we might have to hand search 3,000-10,000 people in that window, and we'd balance staff against budget as best we could (see the sketch below). For high risk shows we'd use the most experienced staff possible and rotate them frequently to keep people fresh. There's no manual for this stuff, but after working enough shows you get a good feel for the different kinds of crowds: when they'll show up at the doors, what they'll try to sneak in (everyone sneaks something in), and what to look for in terms of behavior and dress. You'd be surprised which shows were the best or worst. Buffett concerts and Dead shows had plenty of fights and injuries, while speed metal shows were usually pretty tame aside from the mosh pits.

But screening was only one small part of our security controls. Security typically started in the parking lots and roads around the event, with a mix of identified staff and a few people in regular clothes. Out in the parking lots we'd look for problems, take care of the drunks, and give early verbal instructions […]
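To make that staffing tradeoff concrete, here's a minimal back-of-envelope sketch. The crowd size matches the range above, but the arrival window and per-search times are illustrative assumptions, not measurements from any event:

```python
# Rough screening-throughput model (illustrative assumptions only).

def screeners_needed(crowd_size: int, arrival_window_min: int,
                     seconds_per_search: int) -> int:
    """Estimate screeners required to hand-search a crowd that
    mostly arrives within a fixed window before showtime."""
    searches_per_screener = (arrival_window_min * 60) / seconds_per_search
    return round(crowd_size / searches_per_screener)

# Assume 10,000 attendees, most arriving in the hour before the show.
print(screeners_needed(10_000, 60, 20))  # quick 20-second search: ~56 screeners
print(screeners_needed(10_000, 60, 60))  # thorough 60-second search: ~167 screeners
```

The point is the tradeoff: search depth, entry speed, and headcount pull against each other, and budget caps the headcount, which is exactly why search intensity varied with the risk profile of the show.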


Concerts vs. Airports: The Really Short Version

After posting Concerts vs. Airports: The Role and Effectiveness of Security Screening in Public Spaces I realized it was a tad long and might bore some of you, so here are the crib notes. For about ten years I worked, and eventually directed, security for large events like concerts and football games. There are some lessons we can apply to airline screening, since both involve securing public spaces and large crowds:

  • Screening is just one layer of security, but in airports it's treated as practically the only layer.
  • At concerts we relied more heavily on inside security to fill the gaps left by screening.
  • At concerts we used more behavioral profiling, earlier in the system, and we never stopped profiling and watching once someone was inside.
  • Technology is only good at catching certain things, and can't catch everything.
  • Increasing screening, but not the rest of security, will only piss people off and won't significantly improve security.
  • 80% or more of airport security seems to start and stop with screening. This might look good, but isn't really secure.
  • Computers suck at profiling, and are much more likely to catch a good guy than a bad guy.
  • No security is perfect, and a determined and intelligent attacker could probably defeat most security where we allow public access. But by adding additional non-intrusive security controls we can rely less on screening and increase security while improving the flight experience.
  • We need to build more layers into air transport security, not try to build a single really big wall. We have defense in depth at concerts and football games; why not at airports?

There's more in the main post, but you get the idea.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and to provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3-day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts, and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content, including the right to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.