Securosis

Research

The Myth of the Security-Smug Mac User

I still consider myself a relative newcomer to the Mac community. Despite being the Security Editor at TidBITS and an occasional contributor to Macworld (print and online), and having spoken at Macworld Expo a couple of times, I only really switched to Macs back in 2005. To keep this in perspective, TidBITS has been published electronically since 1990. Coming from the security world I had certain expectations of the Mac community. I thought they were naive and smug about security, and living in their own isolated world. That couldn't have been further from the truth. Over the past 7 years, especially the past 5+ since I left Gartner and could start writing for Mac publications, I have learned that Mac users care about security every bit as much as Windows users. I haven't met a single Mac pundit who ever dismissed Mac security issues or the potential for malware, or who thought their Mac was 'immune'. From Gruber, to Macworld, to TidBITS, and even The Macalope (a close personal friend when he isn't busy shedding on my couch, drinking my beer out of the cat's water bowl, or ripping up my drapes with his antlers), not one person I've met or worked with has expressed any of the "security smugness" attributed to them by articles like the following:

  • Are MACS Safer then PCs
  • Flashback Mac Trojan Shakes Apple Rep of Invulnerability
  • Widespread Virus Proves Macs Are No Longer Safe From Hackers
  • Expert: Mac users more vulnerable than Windows users

And countless tweets and other articles. In fact, the vast majority of Mac users worry about security. When I first started getting out into the Mac community, people didn't say, "Well, we don't need to worry about security." They asked, "What do I need to worry about?" Typical Mac users from all walks of life knew they weren't being exploited on a daily basis, but were generally worried that there might be something they were missing – especially relatively recent converts who had spent years running Windows XP.
This is anecdotal, and I don't have survey numbers to back it up, but I've probably been the most prominent writer on Mac security for the past 5 years, and I talk to a ton of people in person and over email. Nearly universally, Mac users are, and have been, concerned about security and malware. So where does this myth come from? I think there are three sources:

  • An overly vocal minority who fill up the comments on blog posts and news articles. Yep – a big chunk of them are trolls and asshats. There are zealots like this for every technology, cause, and meme on the face of the planet. They don't represent our community, no matter how many Apple stickers are on the backs of their cars and work-mandated Windows laptops.
  • One single advertisement where Apple made fun of the sick PC. One. Single. Singular. Unique. Apple only ever made that joke once, and it was in a single "I'm a Mac" spot. And it was 100% accurate at the time – there was no significant Mac malware then. But since then we have seen countless claims that Apple is 'misleading' users. Did Apple downplay security issues? Certainly… but nearly exclusively during a period when people weren't being exploited. I'm not going to apologize for Apple's security failings (especially their patching issues, which led to the current Flashback issue), but those are very different from actively misleading users. Okay – one of the Securosis staff believes there may have been some print references from pre-2005, but we are still talking small numbers and nothing current.
  • Antivirus vendors. I need to tread cautiously here, because I have many friends at these companies who do very good work – top-tier researchers who are vital to our community. But they have a contingent, just like the Mac4EVER zealots, who think people are stupid or naive if they don't use AV. These are the same people who want Apple to remove iOS security restrictions so they can run their AV products on your phones.
The same people who took out full-page advertisements against Microsoft when Microsoft was going to lock down parts of the Windows kernel (breaking their products) for better security. Who issue report after report designed only to frighten you into using their products. Who have been claiming that this year really will be the year of mobile malware (eventually they'll be right, if we wait long enough). Here's the thing: the very worst quotes and articles attacking smug Mac users usually include a line like "Mac users think they are immune because they don't install antivirus." Which is a logical fallacy of the highest order. These people promote AV as providing the same immunity they say Mac zealots claim for 'unprotected' Macs. They gloss over the limited effectiveness of AV products. How even the AV vendors didn't have signatures for Flashfake until weeks after the infections started. How Windows users are constantly infected despite using AV, to the point where most enterprise security pros I work with see desktop antivirus more as a compliance tool and high-level filter than as a reliable security control. I'm not anti-AV. It plays a role, and some of the newer products (especially on the enterprise side) which rely less on signatures are showing better effectiveness (if you aren't individually targeted). Plus most of those products include other security features, ranging from encryption to data loss prevention, which can be useful. I also recommend AV extensively for email and network filtering. Even on Macs, sometimes you need AV. I am far more concerned about the false sense of immunity promoted by antivirus vendors than about smug Mac users. Because the security-smug Mac user community is a myth, but the claims of the pro-AV community (mostly AV vendors) are very real, and backed by large marketing budgets.
Update: Andrew Jaquith nailed this issue a while ago over at SecurityWeek: Note to readers: whenever you see or hear an author voicing contempt for customers by calling them arrogant, smug, complacent, oblivious, shiny-shiny obsessed members of a cabal, "living in a false paradise," or…


Responsible or Irresponsible Disclosure?—NFL Style

It's funny to contrast this April with last April, at least as an NFL fan. Last year the lockout was in force, the negotiations were stalled, and fans wondered how billionaires could argue with millionaires while the economy was in the crapper. Between the Peyton Manning lottery, the upcoming draft, and the Saints bounty situation, there hasn't been a dull moment for pro football fans since the Super Bowl ended. Speaking of the Saints, even after suspensions and fines, more nasty aspects of the story keep surfacing. Last week we actually heard Gregg Williams, Defensive Coordinator of the Saints, implore his guys to target injured players, 'affect' the head, and twist ankles in the pile. Kind of nauseating. OK, very nauseating. I guess it's true that most folks don't want to see how the sausage is made – they just want to enjoy the taste. But the disclosure was anything but clean. Sean Pamphilon, the director who posted the audio, did not have permission to post it. He was a guest of a guest at that meeting, there to capture the life of former Saints player Steve Gleason, who is afflicted with ALS. The director argues he had the right; the player (and the Saints) insist he didn't. Clearly the audio put the bounty situation in a different light for fans of the game. Before, it was deplorable but abstract. After listening to the tape, it was real. Williams really said that stuff. He really paid money for his team to intentionally hurt opponents. Just terrible. But there is still the dilemma of posting the tape without permission. Smart folks come down on both sides of this discussion. Many believe Pamphilon should have abided by the wishes of his hosts and not posted the audio. He wouldn't have been there if not for the graciousness of both Steve Gleason and the Saints. But he was, and he clearly felt the public had a right to know, given the NFL's history of burying audio and video evidence of wrongdoing (Spygate, anyone?).
Legalities aside, this is a much higher-profile example of the same responsible disclosure debate we security folks have every week. Does the public have a need to know? Is the disclosure of a zero-day attack a public service? Or should the researcher wait until the patch goes live, and settle for a credit buried in the patch notice? Cynically, some folks disclosing zero-days are in it for the publicity. Sure, they can blame unresponsive vendors, but at the end of the day some folks seek the spotlight by breaking a juicy zero-day. Likewise, you can make a case that Pamphilon was able to draw a lot of attention to himself and his projects (past, current, and future) by posting the audio. Obviously you can't buy press coverage like that. Does that make it wrong – that the discloser gets the benefit of notoriety? There is no right or wrong answer here, just differing opinions. I'm not trying to open Pandora's box and kick off a long discussion of responsible disclosure. Smart people have differing opinions, and nothing I say will change that. My point was to draw the parallel between the Saints bounty tape disclosure and disclosing zero-day attacks. Hopefully that provides some additional context for the moral struggles of researchers deciding whether or not to go public with their findings.


Pain Comes Instantly—Fixes Come Later

Mary Ann Davidson's recent post Pain Comes Instantly has been generating a lot of press. It's being miscast by some media outlets as trashing the PCI Data Security Standard, but it's really about the rules for vendors who want to certify commercial payment software and related products. The debate is worth considering, so I recommend giving it a read. It's a long post, but I encourage you to read it all the way through before forming opinions, as she makes many arguments and provides some allegories along the way. In essence she challenges the PCI Council on a particular requirement in the Payment Application Vendor Release Agreement (VRA), part of each vendor's contractual agreement with the PCI Council to get their applications certified as PCI compliant. The issue is software vulnerability disclosure. Paraphrasing the issue at hand: let's say Oracle becomes aware of a security bug. Under the terms of the agreement, Oracle must disseminate the information to the Council as part of the required disclosure process. Her complaint is that the PCI Council insists on its right to leak ('share') this information even when Oracle has not yet provided a fix. Mary Ann argues that in this case the PCI Council is harming Oracle's customers (who are also PCI Council customers) by making the vulnerability public. Hackers will of course exploit the vulnerability and try to breach the payment systems. The real point of contention is that the PCI Council may decide to share this information with QSAs, partners, and other organizations, so those security experts can better protect themselves and PCI customers. Oracle's position is, first, that the QSAs and others who may receive information from the Council are not qualified to make use of it; and second, that the more people who know about the vulnerability, the more likely it is to leak. I don't have a problem with those points.
I totally agree that if you tell thousands of people about a vulnerability, it's as good as public knowledge. And it's probably safe to wager that only a small percentage of Oracle customers have the initiative or knowledge to take vulnerability information and craft it into effective protection. Even a customer with Oracle's database firewall won't be able to turn the vulnerability information into a rule that protects the database. So from that perspective, I agree. But it's a limited perspective. Just because few Oracle customers can generate a fix or a workaround doesn't mean that a fix won't or can't be made available. Oracle customers have contributed workarounds in the past. Even if an individual customer can't help themselves, others can – and have. But here's my real problem with the post: I am having trouble finding a substantial difference between her argument and the whole responsible disclosure debate. What's the real difference from a security researcher finding an Oracle vulnerability? The information is outside Oracle's control in both cases, and there is a likelihood of public disclosure. It's something a determined hacker may discover, or may have already discovered. It's in Oracle's best interest to fix the problem fast, before the rest of the world finds out. Historically the problem is that vendors don't react quickly to security issues unless they have been publicly shamed into action. Oracle, among other vendors, has often been accused of sitting on vulnerabilities for months – even years – before addressing them. For years, security researchers told basically the same story about the Oracle flaws they found, which goes something like this: We have discovered a security flaw in Oracle. We told Oracle about it, and gave them details on how to reproduce it and some suggestions for how to fix it.
Oracle a) never fixed it, b) produced a half-assed fix that caused other issues, or c) waited 9, 12, or 18 months before patching the issue – and that was only after I announced the bug to the world at the RSA/DefCon/Black Hat/OWASP conference. I gave Oracle information that anyone could discover, did not ask for any compensation, and Oracle tried to sue me when I disclosed the vulnerability after 12 months. I'm not Oracle bashing here – it's an industry-wide issue – but my point is that with disclosure, timing matters… a lot. Since the Payment Application Vendor Release Agreement simply states that you will 'promptly' inform the PCI Council of vulnerabilities, Oracle has a bit of leeway. Maybe 'prompt' means 30 days. Heck, maybe 60. That should be enough time to get a patch to those customers using certified payment products – or whatever term the PCI Council uses for vetted but not guaranteed software. If a vendor is a bit tardy in getting detailed information to the PCI Council while they code and test a fix, I don't think the Council will complain too much, so long as they are protected from liability. But make no mistake – timing is a critical part of this whole issue. Timing – particularly the lack of 'prompt' responses from Oracle – is why the security research community remains pissed off and critical to this day.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.