Securosis Research

On the Month of Apple Bugs, Backdoor Drama, and Why Security Researchers Need Exceptional Ethics

Being on the road this week, I missed the latest drama at the Month of Apple Bugs pointed out in this post by Chris Pepper. (One thing Chris doesn’t mention is that the backdoor was only included in a pre-release version of the exploit, not the released proof of concept code.) I read LMH’s response and explanation, spoke with him directly, and feel he’s unfortunately damaged the reputation of an already controversial project.

Basically, LMH noticed some individuals scanning the directories where he posted the exploit code samples before the accompanying blog entry was posted. There were no public pointers to these files. He planted the backdoor code in a file at that location, which the people scanning his server picked up and, in some cases, executed. His goal was to identify these individuals and prevent further scraping of his server. I think it was a bad idea, but the backdoor code was not in any released version of the exploit. Basically it was some wacky arms race- the individuals downloading the code were poking around someone else’s server, and LMH took a vigilante-style response.

As I’ve said many times, I really don’t like dropping 0day exploit code; it damages the innocent more than the guilty. But if you are going to drop code as part of full disclosure, it should be to prove the concept and allow people to test and evaluate the vulnerability. Your code should never make the situation worse, no matter what kind of point you’re trying to make. Putting in a backdoor to track and expose who downloads the code, released or not, is never the right way to go. It’s unethical.

LMH responded on his blog:

The disclaimer is clear enough, and if they go around downloading and voluntarily executing random code (read, a exploit), it’s certainly their responsibility to set up a properly isolated environment. Otherwise you’re total jackass (although, why would you “worry if the bugs are fake”?).
Yes- you should set up a test environment before messing with ANY exploit code. I blew 4 hours this weekend fixing my Metasploit copy (still can’t get msfweb running) and creating virtual targets to test a new exploit someone sent me. Then again, most people can’t virtualize OS X legally, so you have to buy a spare Mac to test anything.

That said, I don’t think anyone ever has the right to place backdoor code in anything. If you want to track who is downloading or leaking pre-release code, you check your audit logs or take other actions on your own end. You have no right to do anything malicious to someone else’s system, even when they aren’t playing nice.

This did nothing but hurt the project. Apple security is a serious issue that needs real debate, but games like this destroy credibility and marginalize the individuals involved. Vendors generally dislike security researchers as it is. Giving them any opening, no matter how small, makes it that much harder on the research community. It allows PR departments to make a legitimate researcher look like nothing more than a malicious criminal, demonizing them in the press even when they play strictly by the book. I don’t consider LMH malicious at all, and after a few conversations I have no doubt his goal is to improve security, but I do disagree with his methods on this particular project. Security researchers need exceptional ethics to withstand vendor attempts at marginalization, or all their work goes for naught.
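Spotting who is pulling down unlinked files doesn’t require anything exotic- a quick pass over your own access logs does it. A minimal sketch in Python, assuming an Apache-style combined log and a hypothetical unlinked /prerelease/ directory (the paths, IPs, and log lines here are made up for illustration):

```python
import re
from collections import Counter

# Matches the common/combined Apache log format:
# client IP, identd, user, [timestamp], "METHOD path PROTO", status, bytes
LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) \S+')

def requests_for(path_prefix, log_lines):
    """Count requests per client IP for paths under an unlinked prefix."""
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group(4).startswith(path_prefix):
            hits[m.group(1)] += 1
    return hits

# Who fetched files under /prerelease/ before the post went live?
sample = [
    '203.0.113.7 - - [07/Jan/2007:03:14:15 -0700] "GET /prerelease/exploit.rb HTTP/1.1" 200 4096',
    '198.51.100.2 - - [07/Jan/2007:03:20:01 -0700] "GET /index.html HTTP/1.1" 200 1024',
]
print(requests_for("/prerelease/", sample))  # Counter({'203.0.113.7': 1})
```

The same logic works with a grep one-liner; the point is that everything happens on your own server, with your own logs, with nothing malicious pushed to anyone else’s machine.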


Heading to MA

Tomorrow morning I’m off on the wonderful 6-hour flight from Phoenix to Boston. I probably don’t have time to meet up, but if any of you are in the area and want to give it a shot, let me know.


Defending My Privacy- One Beer at a Time

The BCS Championship is in Phoenix tonight (that’s the college football championship game, for our overseas and raging-geek readers) and Ohio State seems to have brought around 60,000 of their fans into town. A couple of buddies of mine from Colorado are in town for the game and we spent the weekend out and about. Last night we were heading into one of the bigger Buckeye bar-parties in town, and I was totally stunned when it came time to give the bouncer our IDs. As everyone walked up, he grabbed driver’s licenses and ran the mag stripes through a handheld scanner. Being both moderately sober and the security paranoid I am, the conversation went like this:

Me: So, are you just checking for fakes or storing any of the info?
Bouncer: Both, it’s mostly for our database statistics.
Me: Like how many people came in?
Bouncer: Just your name, date of birth, and driver’s license number.
Me: Ah. Umm… Okay. Any chance you can skip it and just give my ID a visual check?

At that point he, looking at me like I was some Unabomber-like freak, checked my ID the old-fashioned way and let me in. Of course, while watching him I noticed that he was so intent on scanning IDs into his little machine that he sort of, you know, didn’t check the faces of the people handing them over. That might explain the one 20-year-old girl running around stealing drinks, grabbing any guy within reach in a manner that’s rarely free, and putting the Vegas showgirls to shame.

I can’t really think of any good reason a bar named Mickey’s Hangover needs that info. And in the process they reduced their security by relying on a machine to find fakes, and forgetting to see if the ID handed over even slightly resembled the patron at the door. Seriously folks- I don’t think I’m overly paranoid, but it’s hard to justify letting a random bar keep my vital stats just to buy an overpriced beer.


Maynor is Free… And Blogging

I’m catching up from being out (or sick) most of the holidays, so this is a bit of old news. Dave Maynor is no longer with SecureWorks (his decision) and has joined us in the blogosphere over at Errata Security (his new employer). I suspect he’s still bound to keep details of the Mac WiFi fiasco under wraps, so don’t expect any new insight on that issue. He also got a bit of fame in this article. Dave’s a good guy who got caught in an extremely bad position. It’s nice to see him in public again, and nice to see another professional researcher hit the blogs. In the past he’s been against full disclosure, so it will be interesting to see how he reacts to the Month of Apple Bugs after his recent experiences.


Keeping it Real

I had the opportunity to review Rothman’s Pragmatic CSO before the holidays, and it got me thinking about complexity. (Oh yeah, and it’s really good, but I’m not allowed to endorse anything, so that’s all I’ll say.)

One thing I realized after spending a few years wandering into people’s homes and vehicles during the most stressful events of their lives (legally, being a paramedic and all) is that we have this incredible ability to make our lives more complicated than they need to be. It’s as if the human creature, by dint of our apparently complex consciousness, builds nearly insurmountable mental constructs that shield us from that which is straightforward and simple. It’s like our brains are these high-performance sports cars that just have to run at full speed no matter what the road. And let’s be honest, not all sports cars are built alike, sending those of lower performance flying off the cliff edge of intelligence to land in a mangled heap when they hit the hard pavement of reality. Time and time again I saw people destroy themselves by failing to follow the path of simplicity- sometimes losing a relationship or their long-term health, other times losing their lives. Come on, you all know the drama kings and queens who crave complexity in their lives despite their protestations to the contrary. Or the motormouths who keep their lips moving to prevent their brains from having a moment of quiet reflection that might show them how much they’ve screwed themselves up.

We (and I really mean we; all of us are guilty) often make similar mistakes in the professional world. We spend more time building an RFP and testing each widget in a product than we’ll actually spend using it, totally ignoring the fact that it doesn’t have the one critical feature we really need. We spend more time building frameworks, models, architectures, and checklists than building the necessary systems.
I’m not saying we should toss all paperwork and planning to the winds, but we very often lose perspective and create unnecessary complexity. Just look at the COSO ERM framework as the shining example of CTSCS (crap to sell consulting services), or the government paperwork bottlenecks of accreditation and certification. In mountain rescue our goal was to keep every rescue system and operational plan as simple as possible- because the more pieces you add to the chain, the greater the likelihood of failure (literally).

I like Rothman’s work because he’s trying to pull us back to basics. Yes, we need assessments, strategies, policies, and plans, but the practicality of security is complex enough as it is; we shouldn’t let the business of security compound the problem. We need to be realists and know that we’ll never solve everything, but by focusing on the pragmatic, simple, and direct we can best protect our organizations without going totally batshit.

Don’t make life harder than it needs to be. Don’t add complexity. Keep it real. One of the best ways to be effective in security is to look for the simplest and most pragmatic solutions to complex problems.
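The rescue-chain point is literal arithmetic: a serial system works only if every piece works, so the reliabilities multiply. A quick illustration with hypothetical numbers:

```python
# Reliability of a serial chain: every component must work,
# so the per-component probabilities multiply.
def chain_reliability(p: float, n: int) -> float:
    """Chance the whole chain works if each of n pieces works with probability p."""
    return p ** n

# Even 99%-reliable pieces compound quickly as the chain grows.
for n in (1, 5, 10, 20):
    print(n, round(chain_reliability(0.99, n), 3))
```

Ten 99%-reliable links already fail roughly one time in ten- the same math that argues against piling extra pieces onto any system, rescue rigging or security program alike.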


SAS 70 Has Nothing To Do With Security

Richard expresses a little shock upon discovering that SAS 70 audits don’t evaluate security. I’d be shocked if any service provider, or any other organization for that matter, claimed to me that a SAS 70 made them secure. As in, I’d consider them totally fracking worthless. All a SAS 70 does is certify that a control works as documented. Kind of like Common Criteria (my other favorite puppy to kick). If you document a single control, a SAS 70 will certify it works as documented. Nothing more. A lot less if it’s a Type I, since the auditor just signs off on management’s assertion that the control works as management documented it (cool, eh?). SAS 70 has nothing to do with security.

For SOX, some orgs are certifying using the COSO Internal Controls Framework, which is as close as you can get to a SOX audit. It works for that purpose, since they certify to the same standard used for the SOX audits. Sort of; it can be grey depending on the auditor. For security, the best we have is the imperfect ISO 27001 and 27002. If nothing else, they’re a good baseline. I’d also ask your provider for their latest penetration test results from a third party.

Really, none of these checklists prove you’re secure. But they are very useful tools in designing and evaluating your security program. Except SAS 70- at least where security is concerned.


Privacy Update- No Warrant Needed to Open Mail

To be honest, this is just a signing statement and, from what little constitutional law I know, kind of illegal. Basically, when Bush signed a law into effect that prohibits warrantless opening of citizens’ mail, he added a statement that says the feds can still open mail without a warrant. Wacky, huh?

Bush asserted the new authority Dec. 20 after signing legislation that overhauls some postal regulations. He then issued a “signing statement” that declared his right to open mail under emergency conditions, contrary to existing law and contradicting the bill he had just signed, according to experts who have reviewed it.

Still, it’s fun to get all hot under the collar about it, and if we start hearing about opened mail, we’ll know that we don’t live in a democracy. Anyway, the original article is here.


February is the Month of No Bugs

Securosis is officially declaring February the “Month of No Bugs”. This follows the trend started by HD Moore with the Month of Browser Bugs, continued by LMH with the Month of Kernel Bugs, and now the Month of Apple Bugs. During the month of February no security researcher will release any vulnerabilities on any systems, giving IT departments and vendors valuable time to make a dent in their backlog of existing vulnerabilities to fix and patch. All cybercriminals will refrain from using any of their 0-day exploits and limit themselves to previously reported public vulnerabilities.

“We feel that the Month of No Bugs will force improvements in information security by giving vendors time to create patches for existing flaws while allowing users to catch up on updating their systems,” stated Securosis. “An additional advantage is providing security researchers a full month off to relax, recharge, and explore new hobbies or scan the Microsoft Robotics Studio for any back-door code from Skynet.” The Month of No Bugs will not release a bug on each day in February.

Seriously folks, while I have tremendous respect for security researchers, I think this “Month of” stuff is getting out of hand. HD started with hacks that disclosed a flaw without a direct path to remote code execution, but it looks like a number of the flaws released by LMH will come with working exploits. I’ve had positive discussions with him in the past, and think his heart’s in the right place, but this isn’t the way to make things better. As messed up as the industry’s disclosure approaches may be, dumping code isn’t the answer. One of my first posts was on the dirty little secrets of disclosure, and while there is sometimes a time and place for releasing code, this clearly isn’t it.
Apple, or any vendor for that matter, that doesn’t respond well to reported vulnerabilities isn’t about to change its practices just because it ends up in the crosshairs of a lone gunman (or even several), whatever their intentions. It’s only when the end users start getting hurt and either complain enough, or start switching to other products, that a vendor starts to think differently. It’s what moved Microsoft, and it’s what will move Apple when the time comes. Releasing code without reporting it to the vendor does little more than garner attention and place end users at risk. I highly doubt it will change any vendor’s patching policies.

This is turning into the cyber equivalent of a self-declared vigilante smashing everyone’s doors down while they’re away on vacation, leaving them as burglar-bait, to prove to them how weak their lock vendor is. Either that, or handing out bump keys and instructional videos in the worst part of town and pretending that the lock vendors will get it all fixed before the bad guys watch the DVD and put it to work.

I’ve never hidden that I think our disclosure process, if we can even call it that, needs serious work. And I’ve called some big vendors on the carpet more than once. But spending a month dumping exploit code is only going to make us end users less secure, and make it even harder to deal with those vendors. It might be the right intent, but it’s definitely the wrong approach.


Welcome to 2007: ‘06 Recap and Predictions

Yep, I’m usually late to parties. The holidays were pretty intense with various family events this year, so I blogged and worked less than expected on my vacation. I’ve also managed to come down with a nasty case of strep, which is an annoying way to start the year. Thus it’s only now, on January 2nd, that I can finally respond to Alex’s challenge/tag for my 2007 predictions.

Let’s start with the 2006 recap:

  • Some good stuff happened
  • Some bad stuff happened
  • Some things got better
  • Some things got worse
  • Everything else stayed the same

Hmm, did I miss anything? Now for my 2007 predictions:

  • Some good stuff will happen
  • Some bad stuff will happen
  • Some things will get better
  • Some things will get worse
  • Everything else will stay the same

While I do think the end of the year can be a good time to reflect on the recent past and look towards the future, I also think we in the security world can’t always afford to make these arbitrary divisions of time. We live on a non-cyclical continuum that, vacations aside, doesn’t begin or end on annual or quarterly cycles (except for some of you on the vendor side, maybe). I think this cynicism is probably an artifact of working so many holidays as a paramedic or physical security guy (for the record, Xmas was usually slow, with a few tragic calls, and New Year’s Eve usually busy). Thus I’m using this arbitrary black line at the end of the year to remind you that there are no arbitrary black lines.

Actually, there is one prediction I want to make for 2007. It isn’t about any markets, threats, or technology developments. In 2007 the job of a security professional will be neither materially more difficult, nor materially less difficult, than it was in 2006. My fellow bloggers, and my coworkers, have already done a good job of predicting specifics and I don’t see much to add. Threats, tools, and technology will change, but the net balance for 2007 will stay even. Sorry folks, you’ll still have job security into 2008…


The Three Laws of Data Encryption

Lately (as in, most of the year) I’ve been seeing a lot of chatter around encryption- driven primarily by PCI and concerns about landing on the front page of every major newspaper. It cracks me up that the PCI Data Security Standard calls encryption “the ultimate security technology” (I think they pulled that line out of the 1.1 version). Encryption is just another tool in the box, albeit a useful one. There is no “ultimate” technology. Unless, of course, you’d like to pay me a very reasonable fee and I’ll provide it to you. Just sign this little EULA agreeing not to disclose any benchmark or… oh heck, not to disclose anything at all.

Earlier this year I published a note with my employer entitled “The Three Laws of Data Encryption”. While I can’t release the note content here (because of the whole wanting-to-stay-employed thing; if they don’t make money, I don’t), here are the three laws as a teaser (since they’ve been published in a few public news articles). Basically, there are only three reasons to encrypt:

  • If data moves, physically or virtually. E.g., laptops, backup tapes, email, and EDI.
  • To enforce separation of duties beyond what’s possible with access controls. Usually this only means protecting against administrators, since access controls can stop everyone else. Examples include credit card or Social Security numbers in databases (when you separate keys from admins) and files in shared storage.
  • Because someone tells you you have to. I call this “mandated encryption”.

You Gartner clients should check out the note if you want more details (actually, if any of you start using Gartner because of this blog, please let me know via email). While the “laws” are totally fracking obvious, I’ve found a lot of people run around trying to encrypt without taking the time to figure out what the threats are and whether encryption will offer any real value. Like encrypting a column in a database and having the DBA manage the keys. What are you protecting against?
And “hackers” isn’t the answer.
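To make the separation-of-duties law concrete: the application encrypts the column, and the key lives somewhere the DBA can’t reach. A dependency-free sketch, using a one-time pad as a stand-in for a real cipher (in production you’d use AES-GCM or similar; the names here are hypothetical):

```python
import os

def xor_otp(data: bytes, key: bytes) -> bytes:
    # One-time pad: secure only if the key is random, as long as the
    # data, and never reused. Used here as a stand-in for a real cipher
    # so the sketch needs nothing beyond the standard library.
    return bytes(d ^ k for d, k in zip(data, key))

ssn = b"123-45-6789"
key = os.urandom(len(ssn))   # held by a key service or HSM, NOT by the DBA
stored = xor_otp(ssn, key)   # only this ciphertext lands in the column

# The application, which holds the key, can round-trip the value...
assert xor_otp(stored, key) == ssn
# ...while a DBA querying the table sees only the opaque bytes in `stored`.
```

If the DBA also manages the key, you’ve added cost and complexity without changing the threat model at all- which is exactly the point of asking what you’re protecting against.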


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.