Defending My Privacy- One Beer at a Time

The BCS Championship is in Phoenix tonight (that’s the college football championship game for our overseas and raging-geek readers) and Ohio State seems to have brought around 60,000 of their fans into town. A couple of buddies of mine from Colorado are in town for the game and we spent the weekend out and about.

Last night we were heading into one of the bigger Buckeye bar parties in town and I was totally stunned when it came time to give the bouncer our IDs. As everyone walked up he grabbed driver’s licenses and ran the mag stripes through a handheld scanner. Being both moderately sober and the security paranoid that I am, the conversation went like this:

Me: So, are you just checking for fakes or storing any of the info?
Bouncer: Both, it’s mostly for our database statistics.
Me: Like how many people came in?
Bouncer: Just your name, date of birth, and driver’s license number.
Me: Ah. Umm… Okay. Any chance you can skip it and just give my ID a visual check?

At that point he, looking at me like I was some Unabomber-like freak, checked my ID the old-fashioned way and let me in. Of course, while watching him I noticed he was so intent on scanning IDs into his little machine that he sort of, you know, didn’t check the faces of the people handing them over. That might explain the one 20-year-old girl running around stealing drinks, grabbing any guy within reach in a manner that’s rarely free, and putting the Vegas showgirls to shame.

I can’t really think of any good reason a bar named Mickey’s Hangover needs that info. And in the process they reduced their security by relying on a machine to find fakes while forgetting to check whether the ID handed over even slightly resembled the patron at the door.

Seriously folks- I don’t think I’m overly paranoid, but it’s hard to justify letting a random bar keep my vital stats just to buy an overpriced beer.


Maynor is Free… And Blogging

I’m catching up from being out (or sick) most of the holidays, so this is a bit of old news. Dave Maynor is no longer with SecureWorks (his decision) and has joined us in the blogosphere over at Errata Security (his new employer). I suspect he’s still bound to keep details of the Mac WiFi fiasco under wraps, so don’t expect any new insight on that issue. He also got a bit of fame in this article.

Dave’s a good guy who got caught in an extremely bad position. It’s nice to see him in public again, and nice to see another professional researcher hit the blogs. In the past he’s been against full disclosure, so it will be interesting to see how he reacts to the Month of Apple Bugs after his recent experiences.


Keeping it Real

I had the opportunity to review Rothman’s Pragmatic CSO before the holidays, and it got me thinking about complexity. (Oh yeah, and it’s really good, but I’m not allowed to endorse anything so that’s all I’ll say.)

One thing I realized after spending a few years wandering into people’s homes and vehicles during the most stressful events of their lives (legally, being a paramedic and all) is that we have this incredible ability to make our lives more complicated than they need to be. It’s as if the human creature, by dint of our apparently complex consciousness, builds nearly insurmountable mental constructs that shield us from that which is straightforward and simple. It’s like our brains are these high-performance sports cars that just have to run at full speed no matter what the road. And let’s be honest, not all sports cars are built alike, sending those of lower performance flying off the cliff edge of intelligence to land in a mangled heap when they hit the hard pavement of reality. Time and time again I saw people destroy themselves by failing to follow the path of simplicity- sometimes losing a relationship or their long-term health, other times losing their lives. Come on, you all know the drama kings and queens that crave complexity in their lives despite their protestations to the contrary. Or the motormouths that keep their lips moving to prevent their brains from having a moment of quiet reflection that might show them how much they’ve screwed themselves up.

We (and I really mean we; all of us are guilty) often make similar mistakes in the professional world. We spend more time building an RFP and testing each widget in a product than we’ll actually spend using it, totally ignoring the fact that it doesn’t have the one critical feature we really need. We spend more time building frameworks, models, architectures, and checklists than building the necessary systems. I’m not saying we should toss all paperwork and planning to the winds, but we very often lose perspective and create unnecessary complexity. Just look at the COSO ERM framework as the shining example of CTSCS (crap to sell consulting services), or the government paperwork bottlenecks of accreditation and certification. In mountain rescue our goal was to keep every rescue system and operational plan as simple as possible- because the more pieces you add to the chain, the greater the likelihood of failure (literally).

I like Rothman’s work because he’s trying to pull us back to basics. Yes, we need assessments, strategies, policies, and plans, but the practice of security is complex enough as it is; we shouldn’t let the business of security compound the problem. We need to be realists and know that we’ll never solve everything, but by focusing on the pragmatic, simple, and direct we can best protect our organizations without going totally batshit.

Don’t make life harder than it needs to be. Don’t add complexity. Keep it real. One of the best ways to be effective in security is to look for the simplest and most pragmatic solutions to complex problems.


SAS 70 Has Nothing To Do With Security

Richard expresses a little shock upon discovering that SAS 70 audits don’t evaluate security. I’d be shocked if any service provider, or any other organization for that matter, claimed to me that a SAS 70 made them secure. As in, I’d consider them totally fracking worthless.

All a SAS 70 does is certify that a control works as documented. Kind of like Common Criteria (my other favorite puppy to kick). If you document a single control, a SAS 70 will certify that it works as documented. Nothing more. A lot less if it’s a Type I, since the auditor just signs off on management’s assertion that the control works as management documented it (cool, eh?). SAS 70 has nothing to do with security.

For SOX, some orgs are certifying against the COSO Internal Controls Framework, which is as close as you can get to a SOX audit. It works for that since they certify to the same standard used for the SOX audits. Sort of; it can be grey depending on the auditor. For security the best we have are the imperfect ISO 27001 and 27002. If nothing else, they’re a good baseline. I’d also ask your provider for their latest third-party penetration test results.

Really, none of these checklists prove you’re secure. But they are very useful tools in designing and evaluating your security program. Except SAS 70- at least where security is concerned.


Privacy Update- No Warrant Needed to Open Mail

To be honest, this is just a signing statement and, from what little constitutional law I know, kind of illegal. Basically, when Bush signed a law that prohibited the warrantless opening of citizens’ mail, he added a statement saying the feds can still open mail without a warrant. Wacky, huh?

Bush asserted the new authority Dec. 20 after signing legislation that overhauls some postal regulations. He then issued a “signing statement” that declared his right to open mail under emergency conditions, contrary to existing law and contradicting the bill he had just signed, according to experts who have reviewed it.

Still, it’s fun to get all hot under the collar about it, and if we start hearing about opened mail, we’ll know that we don’t live in a democracy. Anyway, the original article is here.


February is the Month of No Bugs

Securosis is officially declaring February the “Month of No Bugs”. This follows the trend started by HD Moore with the Month of Browser Bugs, then continued by LMH with the Month of Kernel Bugs, and now the Month of Apple Bugs. During the month of February no security researcher will release any vulnerabilities in any systems, giving IT departments and vendors valuable time to make a dent in their backlog of existing vulnerabilities to fix and patch. All cybercriminals will refrain from using any of their 0-day exploits and limit themselves to previously reported public vulnerabilities.

“We feel that the Month of No Bugs will force improvements in information security by giving vendors time to create patches for existing flaws while allowing users to catch up on updating their systems,” stated Securosis. “An additional advantage is providing security researchers a full month off to relax, recharge, and explore new hobbies or scan the Microsoft Robotics Studio for any back-door code from Skynet.” The Month of No Bugs will not release a bug on each day in February.

Seriously folks, while I have tremendous respect for security researchers, I think this “Month of” stuff is getting out of hand. HD started with hacks that disclosed a flaw without a direct path to remote code execution, but it looks like a number of the flaws released by LMH will come with working exploits. I’ve had positive discussions with him in the past, and think his heart’s in the right place, but this isn’t the way to make things better. As messed up as the industry’s disclosure approaches may be, dumping code isn’t the answer.

One of my first posts was on the dirty little secrets of disclosure, and while there is sometimes a time and place for releasing code, this clearly isn’t it. Apple, or any vendor for that matter, that doesn’t respond well to reported vulnerabilities isn’t about to change its practices because it ends up in the crosshairs of a lone gunman (or even several), whatever their intentions. It’s only when end users start getting hurt and either complain enough, or start switching to other products in sufficient numbers, that a vendor starts to think differently. It’s what moved Microsoft, and it’s what will move Apple when the time comes. Releasing code without reporting it to the vendor does little more than garner attention and place end users at risk. I highly doubt it will change any vendor’s patching policies.

This is turning into the cyber equivalent of a self-declared vigilante smashing everyone’s doors down while they’re away on vacation, leaving them as burglar-bait, to prove how weak their lock vendor is. Either that, or handing out bump keys and instructional videos in the worst part of town and pretending the lock vendors will get it all fixed before the bad guys watch the DVD and put it to work.

I’ve never hidden that I think our disclosure process, if we can even call it that, needs serious work. And I’ve called some big vendors to the carpet more than once. But spending a month dumping exploit code is only going to make us end users less secure, and make it even harder to deal with those vendors. It might be the right intent, but it’s definitely the wrong approach.


Welcome to 2007: ‘06 Recap and Predictions

Yep, I’m usually late to parties. The holidays were pretty intense with various family events this year, so I blogged and worked less than expected on my vacation. I’ve also managed to come down with a nasty case of strep, which is an annoying way to start the year. Thus it’s only now, on January 2nd, that I can finally respond to Alex’s challenge/tag for my 2007 predictions.

Let’s start with the 2006 recap:

  • Some good stuff happened
  • Some bad stuff happened
  • Some things got better
  • Some things got worse
  • Everything else stayed the same

Hmm, did I miss anything? Now for my 2007 predictions:

  • Some good stuff will happen
  • Some bad stuff will happen
  • Some things will get better
  • Some things will get worse
  • Everything else will stay the same

While I do think the end of the year can be a good time to reflect on the recent past and look towards the future, I also think we in the security world can’t always afford to make these arbitrary divisions of time. We live on a non-cyclical continuum that, vacations aside, doesn’t begin or end on annual or quarterly cycles (except for some of you on the vendor side, maybe). I think this cynicism is probably an artifact of working so many holidays as a paramedic or physical security guy (for the record, Xmas was usually slow, with a few tragic calls, and New Year’s Eve usually busy). Thus I’m using this arbitrary black line of the end of the year to remind you that there are no arbitrary black lines.

Actually, there is one prediction I want to make for 2007. It isn’t about any markets, threats, or technology developments. In 2007 the job of a security professional will be neither materially more difficult, nor materially less difficult, than it was in 2006. My fellow bloggers, and my coworkers, have already done a good job of predicting specifics and I don’t see much to add. Threats, tools, and technology will change, but the net balance for 2007 will stay even. Sorry folks, you’ll still have job security into 2008…


The Three Laws of Data Encryption

Lately (as in, most of the year) I’ve been seeing a lot of chatter around encryption- driven primarily by PCI and concerns about landing on the front page of every major newspaper. It cracks me up that the PCI Data Security Standard calls encryption “the ultimate security technology” (I think they pulled that line out of the 1.1 version). Encryption is just another tool in the box, albeit a useful one. There is no “ultimate” technology. Unless, of course, you’d like to pay me a very reasonable fee and I’ll provide it to you. Just sign this little EULA agreeing not to disclose any benchmark or… oh heck, not to disclose anything at all.

Earlier this year I published a note with my employer entitled “The Three Laws of Data Encryption”. While I can’t release the note content here (because of the whole wanting to stay employed thing, and if they don’t make money I don’t), here are the three laws as a teaser (since they’ve been published in a few public news articles). Basically, there are only three reasons to encrypt:

  • If data moves, physically or virtually. E.g. laptops, backup tapes, email, and EDI.
  • To enforce separation of duties beyond what’s possible with access controls. Usually this only means protecting against administrators, since access controls can stop everyone else. Examples include credit card or social security numbers in databases (when you separate keys from admins) and files in shared storage.
  • Because someone tells you you have to. I call this “mandated encryption”.

You Gartner clients should check out the note if you want more details (actually, if any of you start using Gartner because of this blog, please let me know via email). While the “laws” are totally fracking obvious, I’ve found a lot of people run around trying to encrypt without taking the time to figure out what the threats are and whether encryption will offer any real value. Like encrypting a column in a database and having the DBA manage the keys. What are you protecting against? And “hackers” isn’t the answer.
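
To make the separation-of-duties point concrete, here’s a minimal sketch- assuming Python with the third-party cryptography package, a local SQLite database, and a hypothetical customers table (none of which come from the original post)- of encrypting a sensitive column in the application so the key never lives where the DBA can reach it:

    import os
    import sqlite3
    from cryptography.fernet import Fernet

    # The key is held by the application (ideally supplied via a key manager
    # or environment variable), never stored in the database the DBA controls.
    key = os.environ.get("CARD_KEY") or Fernet.generate_key()
    cipher = Fernet(key)

    conn = sqlite3.connect("example.db")
    conn.execute("CREATE TABLE IF NOT EXISTS customers (name TEXT, card BLOB)")

    # Encrypt before insert: anyone reading the table sees only ciphertext.
    conn.execute(
        "INSERT INTO customers (name, card) VALUES (?, ?)",
        ("Alice", cipher.encrypt(b"4111111111111111")),
    )
    conn.commit()

    # Only the key holder (the application, not the database admin) can decrypt.
    row = conn.execute("SELECT card FROM customers WHERE name = ?", ("Alice",)).fetchone()
    print(cipher.decrypt(row[0]).decode())

The library doesn’t matter; what matters is that the threat model (a curious or compromised admin) dictates where the keys live. Hand the keys to the DBA and the exercise above buys you nothing.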


Security Often Has Little To Do With Safety

I’m catching up after all of last week’s travel and saw a good post by Dave over at Matasano on Safety vs. Security. Dave basically states that although one operating system might have better security than another, it doesn’t really matter if it’s more of a target. Vista might be more inherently secure than OS X, but it doesn’t matter if you are less likely to be attacked on your Mac. At least until someone decides it’s time to change targets.

But what’s really interesting is that Dave’s post got me thinking about the whole concepts of safety and security. I realized that in the IT security world we tend to always correlate the two, but in the physical security world we know that safety and security are two totally separate issues, often at odds. It’s an easy mistake to make, especially when the New Oxford American Dictionary defines security as: the state of being free from danger or threat.

To be honest, that’s not the definition I expected. A significant part of my job as a security professional has absolutely nothing to do with safety or “threats” in the sense most of you are probably thinking. Unless you consider protecting liquor revenue “safety”. For example:

  • At some venues our searches were to reduce the overall volume of alcohol in the event. In other cases, it was to stop booze from coming in so people had to buy it inside.
  • Stopping cameras and recording devices from coming into a concert has nothing to do with safety.
  • DRM reduces the security of your computer while failing to prevent piracy. It’s a tool to restrict how you use content, not to stop copying.
  • Checking boarding passes at airport security reduces lines, but doesn’t improve security.
  • While URL filtering does provide a little security against certain web-based attacks, it’s more typically deployed to keep employees from wasting time on corporate resources. A productivity issue, not a security one.

I can think of countless times in the physical security world where safety played second fiddle to some other security goal. I suppose we could sometimes make some loose correlation between the threat of reduced alcohol sales and gate searches, but really we’re talking about using security as a tool for a goal other than safety. I remember doing a facility walk-through with a facilities management inspector and a rep from the concert promoter before a Beastie Boys show. The promoter was willing to pay for ticket takers and gate searchers, but seemed confused when the inspector and I told him we’d have to hire security guards for all the emergency exits and couldn’t just chain them shut to keep people out.

On another occasion I was supervising at a Guns N’ Roses/Metallica show back when G&R was inciting riots to support their drug habits. Axl decided to go for a drive after the opening song, and Slash was up to about 15 minutes on his guitar solo while we (and the Denver police) tracked down the limo. Quiet word was spread to us supervisor types that if we got the word, we were to pull all our people backstage to protect the gear. There’d already been one nasty riot on this tour. Now I’ll admit that there was a personal safety aspect, but the decision was to let the house go and just protect the gear and people backstage. Rather than set up safe zones for the innocent public, we were going to let the house tear itself apart. So even when security is about safety, it might not be about your safety.

We got Axl back and man-handled (no joke) him back on stage, where a few biker/bouncer types stood just offstage to keep him there at all costs. No riot, but a really crappy show after a great start by Metallica. Maybe that makes a better story than proof of my case, but I think you get the point.

Security is a tool to enforce controls. Despite what the dictionary says, this often has little to do with safety as we commonly think about it, or may even sacrifice your safety for someone else’s.


What a Silly Search

I went to the Broncos vs. Cardinals game yesterday here in Phoenix (Broncos won, in case you were wondering). On the way in we were subjected to a pat-down of the type I discussed here. What a joke.

Basically, it looks like the employees at the gate were given strict, rote guidelines on how to search. Some of it good (no use of the palm of the hand, to limit accusations of groping), but most of it bad. I’m fairly certain that you don’t need to brush the entire length of someone’s arm when they’re wearing a t-shirt. Also, it’s probably kind of important to check someone’s coat pockets.

While an untrained observer might look at one of these searches next to one of the ones we used to perform and think they’re the same, a trained observer will pick up on a stark difference. These guys moved their hands by rote in a pre-set pattern. The searchers never adjusted based on the person, and never used their eyes. It’s like they were magnetometers without brains. Our teams, even the untrained temporary help, were instructed to use their eyes and heads. Don’t just follow the same pattern over and over (although we had a minimum pattern to start); use a little judgement. Most of the time it might look the same, but the odds of finding something are significantly higher.

Then again, maybe I’m just waxing nostalgic and we weren’t any better. But seriously- if any of you are senior managers in the NFL, give me a call- you’re wasting everyone’s time and increasing your risk of a lawsuit with what I saw.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.