SAS 70 Has Nothing To Do With Security

Richard expresses a little shock upon discovering that SAS 70 audits don’t evaluate security. I’d be shocked if any service provider, or any other organization for that matter, claimed to me that a SAS 70 made them secure. As in, I’d consider them totally fracking worthless.

All a SAS 70 does is certify that a control works as documented. Kind of like Common Criteria (my other favorite puppy to kick). If you document a single control, a SAS 70 will certify it works as documented. Nothing more. A lot less if it’s a Type I, since the auditor just signs off on management’s assertion that the control works as management documented it (cool, eh?). SAS 70 has nothing to do with security.

For SOX, some orgs are certifying against the COSO Internal Controls Framework, which is as close as you can get to a SOX audit. It works for that purpose since they certify to the same standard used for the SOX audits themselves. Sort of; it can be grey depending on the auditor.

For security, the best we have is the imperfect ISO 27001 and 27002. If nothing else, they’re a good baseline. I’d also ask your provider for their latest penetration test results from a third party.

Really, none of these checklists prove you’re secure, but they are very useful tools in designing and evaluating your security program. Except SAS 70, at least where security is concerned.


Privacy Update- No Warrant Needed to Open Mail

To be honest, this is just a signing statement and, from what little constitutional law I know, kind of illegal. Basically, when Bush signed a law into effect that prohibited warrantless opening of citizens’ mail, he added a statement that said the feds can still open mail without a warrant. Wacky, huh?

Bush asserted the new authority Dec. 20 after signing legislation that overhauls some postal regulations. He then issued a “signing statement” that declared his right to open mail under emergency conditions, contrary to existing law and contradicting the bill he had just signed, according to experts who have reviewed it.

Still, it’s fun to get all hot under the collar about it, and if we start hearing about opened mail, we’ll know that we don’t live in a democracy. Anyway, original article here.


February is the Month of No Bugs

Securosis is officially declaring February as the “Month of No Bugs”. This follows the trend started by HD Moore with the Month of Browser Bugs, then continued by LMH with the Month of Kernel Bugs, and now the Month of Apple Bugs. During the month of February, no security researcher will release any vulnerabilities in any systems, giving IT departments and vendors valuable time to make a dent in their backlog of existing vulnerabilities to fix and patch. All cybercriminals will refrain from using any of their 0-day exploits and limit themselves to previously reported public vulnerabilities.

“We feel that the Month of No Bugs will force improvements in information security by giving vendors time to create patches for existing flaws while allowing users to catch up on updating their systems,” stated Securosis. “An additional advantage is providing security researchers a full month off to relax, recharge, and explore new hobbies or scan the Microsoft Robotics Studio for any back-door code from Skynet.” The Month of No Bugs will not release a bug on each day in February.

Seriously folks, while I have tremendous respect for security researchers, I think this “Month of” stuff is getting out of hand. HD started with hacks that disclosed a flaw without a direct path to remote code execution, but it looks like a number of the flaws released by LMH will come with working exploits. I’ve had positive discussions with him in the past, and think his heart’s in the right place, but this isn’t the way to make things better. As messed up as the industry’s disclosure approaches may be, dumping code isn’t the answer. One of my first posts was on the dirty little secrets of disclosure, and while there is sometimes a time and place for releasing code, this clearly isn’t it.

Apple, or any vendor for that matter, that doesn’t respond well to reported vulnerabilities isn’t about to change its practices because it ends up in the crosshairs of a lone gunman (or even several), whatever their intentions. It’s only when the end users start getting hurt and either complain enough, or start switching to other products in large enough numbers, that a vendor starts to think differently. It’s what moved Microsoft, and it’s what will move Apple when the time comes. Releasing code without reporting it to the vendor does little more than garner attention and place end users at risk. I highly doubt it will change any vendor’s patching policies.

This is turning into the cyber equivalent of a self-declared vigilante smashing everyone’s doors down while they’re away on vacation, leaving them as burglar bait, to prove to them how weak their lock vendor is. Either that, or handing out bump keys and instructional videos in the worst part of town and pretending that the lock vendors will get it all fixed before the bad guys watch the DVD and put it to work.

I’ve never hidden that I think our disclosure process, if we can even call it that, needs serious work. And I’ve called some big vendors to the carpet more than once. But spending a month dumping exploit code is only going to make us end users less secure, and make it even harder to deal with those vendors. It might be the right intent, but it’s definitely the wrong approach.


Welcome to 2007: ‘06 Recap and Predictions

Yep, I’m usually late to parties. The holidays were pretty intense with various family events this year, so I blogged and worked less than expected on my vacation. I’ve also managed to come down with a nasty case of strep, which is an annoying way to start the year. Thus it’s only now, on January 2nd, that I can finally respond to Alex’s challenge/tag for my 2007 predictions.

Let’s start with the 2006 recap:

• Some good stuff happened
• Some bad stuff happened
• Some things got better
• Some things got worse
• Everything else stayed the same

Hmm, did I miss anything? Now for my 2007 predictions:

• Some good stuff will happen
• Some bad stuff will happen
• Some things will get better
• Some things will get worse
• Everything else will stay the same

While I do think the end of the year can be a good time to reflect on the recent past and look towards the future, I also think we in the security world can’t always afford to make these arbitrary divisions of time. We live on a non-cyclical continuum that, vacations aside, doesn’t begin or end on annual or quarterly cycles (except for some of you on the vendor side, maybe). I think this cynicism is probably an artifact of working so many holidays as a paramedic or physical security guy (for the record, Xmas was usually slow, with a few tragic calls, and New Year’s Eve usually busy). Thus I’m using this arbitrary black line of the end of the year to remind you that there are no arbitrary black lines.

Actually, there is one prediction I want to make for 2007. It isn’t about any markets, threats, or technology developments. In 2007 the job of a security professional will be neither materially more difficult, nor materially less difficult, than it was in 2006. My fellow bloggers, and my coworkers, have already done a good job of predicting specifics and I don’t see much to add. Threats, tools, and technology will change, but the net balance for 2007 will stay even. Sorry folks, you’ll still have job security into 2008…


HTTP Authentication: a Primer

The HTTP protocol includes built-in authentication mechanisms, “Basic HTTP Authentication” and “Digest HTTP Authentication”, which are well supported by current browsers. Using either, every time you log your browser into a website with a username & password, the browser stores three pieces of information: the site’s hostname, your username, and your password. From then on, until you quit your browser, every time you visit any page on that site, your browser sends that username & password to the server. This works the same way over both HTTP & HTTPS, but doesn’t apply to custom login code, such as forms and cookies. Normally the easiest way to recognize Basic or Digest authentication is the separate window that pops up over the web page, prompting for username and password, and possibly “realm”; if the prompt has logos or is inside the web page itself, it isn’t Basic or Digest authentication.

There are a lot of tricks, including scripting techniques to grab passwords from other sites, or to fool a password manager (built into the browser or a separate program) into providing a password with little or no human confirmation, but it’s simply not possible to completely prevent people from sending their passwords for any site to ‘rogue’ pages on the same site. In the simplest case, someone could copy the official Site X login page, make a private copy, and store the passwords entered or send them to a remote server. For bonus points, forward the credentials to the real login page, so the user gets logged in successfully and doesn’t notice anything is wrong.

For a long and interesting review of the issues, see the Firefox bug for a specific MySpace password capturing attack: https://bugzilla.mozilla.org/show_bug.cgi?id=360493. A shorter page by the original reporter is at http://www.info-svc.com/news/11-21-2006/. Note how difficult it is to solve a specific problem in a specific browser, and keep in mind that many browsers are in use (Firefox, IE6, IE7, Safari, Opera, Lynx, Konqueror, Nokia’s new browser, IE Mobile, the Palm browser, etc.). There will always be attacks that get past many of these browsers – the possibilities are too wide open, and there’s too much human desire for quick and convenient access to the web (otherwise password managers wouldn’t exist, and we’d use a different password for every site). The same issues apply to cookies, although they are more flexible and thus more complicated.
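
To make the mechanics concrete, here is a minimal sketch (in Python, purely for illustration; the username and password are made up) of what a browser effectively does for Basic authentication: it base64-encodes “username:password” and attaches the result to every subsequent request to that host. Note that base64 is encoding, not encryption, so anyone who can read the request can trivially recover the credentials unless the connection itself is protected by HTTPS.

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # The browser concatenates "username:password" and base64-encodes it,
    # then resends this header with every request to the same host.
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Authorization: Basic {token}"

# Hypothetical credentials, for illustration only.
print(basic_auth_header("alice", "s3cret"))
# -> Authorization: Basic YWxpY2U6czNjcmV0

# Anyone who sees the header can reverse it just as easily:
print(base64.b64decode("YWxpY2U6czNjcmV0"))  # b'alice:s3cret'
```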


When Community Is Bad: Community and Commerce—Don’t Cross the Streams!

Note: For some background on HTTP authentication and username/password caching, see HTTP Authentication: a Primer.

I was reading Schneier yesterday, and it reminded me of all those MySpace and similar worms going around. Why are they so bad? How will they get worse in the future? Their biggest problem is that they welcome everyone, making it easy for bad people to establish themselves. The second is that even though the sites themselves are not high-security, they have security implications for other sites, including high-security sites.

MySpace is scary because it enables a very large number of people to post content your browser will parse and possibly execute. Further, they’re casual sites, so they don’t have the same level of security urgency or corporate paranoia as a bank obviously needs (the reality of bank security is a different matter, but the expectation is higher for Citibank than MySpace). The other concern is that people and their browsers (often on auto-pilot, for both people and browsers) enter login information routinely to access these sites. This makes community sites a rich target for attackers – especially since many people use the same username & password for MySpace and electronic banking (and everything else)!

Those people who get hacked on MySpace, and then immediately on their electronic banking sites, are screwed. But at least everybody can say “You should have known better.” It doesn’t help much, but it is important to both MySpace and the banks for liability reasons. And it’s true – in 2006 you’re asking for trouble if you use the same password for your bank as for a low-security site like MySpace. This isn’t to confuse the victim with the perpetrator, but we have to expect more self-defense than that. We can’t provide all the security everybody needs – they have to help! But site developers must assume that every user has exactly one username and password, which they use everywhere, and make every effort to protect that password (this means not storing accounts in a plaintext MySQL table, not showing passwords to customer service/support staff, and not emailing passwords on request – reset them and email the new random password to the address on file).

Crossing the Line

So there are high-security sites, and low-security sites, and people can understand that banks are better guarded (physically and electronically) than lots of other places (perhaps not as well as sports & concert venues, though). When this line is erased or faded, risk increases. For example, Apple hosts user web pages through its .Mac online service. Amazon encourages people to post reviews of books and other products. Do these have security implications? Of course (everything does, actually).

Scenario: Harry the Hacker gets a free .Mac trial account, posts an evil JavaScript that records usernames, passwords, and all cookies to his new disposable site on Apple’s .Mac servers, and starts collecting data. Community sites often attempt to block JavaScript, but it’s an ongoing struggle. What does Harry get? With a working JavaScript, it’s easy to capture usernames & passwords from all the .Mac subscribers who logged into their own sites or just auto-enter credentials (disclaimer: I don’t know how .Mac uses cookies). Some substantial percentage of these passwords work on banks, as we’ve already discussed. Can Harry grab people’s store.apple.com accounts this way, to order a brand new Mac Pro?

Not directly, at least, because store.apple.com and homepage.mac.com are different domain names, so their account information is stored and accessed separately; the same goes for the iTunes Store. Naturally, Apple doesn’t give .Mac users posting access to store.apple.com. So Apple’s okay here. Not great, but I don’t see a solution aside from not authenticating to .Mac at all, and that wouldn’t work. Note that the problem is made worse by the fact that Apple pushes people towards using a single “Apple ID” for all Apple services, and further requires 1-click ordering to be enabled for at least iPhoto print orders, which makes .Mac password compromises more severe by linking them to active credit cards.

There’s an ongoing dynamic tension between making a site richer and more capable, and limiting it to prevent ‘mischief’. MySpace is clearly possible partly because it’s so flexible. Every security restriction is necessarily weighed as a potential detractor from ease of use, power, and popularity.

What about Amazon? They put user content on the main amazon.com shopping site! But I believe what they allow is much more restricted HTML, and there’s at least some editorial review, so they’re probably okay.

I’m sure more problems will appear over time, as people add community functionality to more and more sites, for feedback, documentation, etc. I recently had to think long and hard about sending a financial document to an ISP based on a publicly-editable wiki page (it’s no longer editable, but there’s still plenty of potential for mischief there). It’s a wiki – I didn’t even have a guarantee that the offer on the page was real. I eventually decided the page and fax number were almost certainly legitimate (they were), and the risk was very small, but that doesn’t mean the next such page will be copacetic. People have asked me a couple of times to add CMS-type functionality to the website for our 31-student co-operative pre-school, so it’s clear that interactive “community” features are not stuck in a MySpace-type-only ghetto.

This is an issue that all web developers must keep in mind as they consider user contributions and interactivity on their sites: What does it do to our existing security model? Do we need to draw a stronger line between official/ecommerce/corporate and public/collaborative/non-binding/untrusted?

Note that this isn’t a Web 2.0 issue. Various companies have been mixing employee and customer email domains for years. I believe Netscape used to provide @netscape.com email addresses to subscribers. What a great opportunity for spam!

From: update-service@netscape.com
Subject: Urgent Netscape Upgrade
Please go to update-service.netscrape.com to get an important security patch for your Netscape browser!

It’s quaint now, but 5 years ago lots of people used
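
A quick aside on the “don’t store accounts in a plaintext table” advice above: here is a minimal sketch, in modern Python and purely for illustration (the function names, iteration count, and sample password are my own assumptions, not anything from the original post), of storing a salted derived key instead of the password itself.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    # Store (salt, key) in the accounts table instead of the password.
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes,
                    iterations: int = 100_000) -> bool:
    # Re-derive the key from the supplied password and compare in constant time.
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(key, stored_key)

salt, key = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, key)
assert not verify_password("wrong guess", salt, key)
```

The point of the sketch is simply that support staff, backups, and curious DBAs only ever see salts and derived keys, never the reusable password the customer also uses at their bank.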


The Three Laws of Data Encryption

Lately (as in, most of the year) I’ve been seeing a lot of chatter around encryption, driven primarily by PCI and concerns about landing on the front page of every major newspaper in the U.S. It cracks me up that the PCI Data Security Standard calls encryption “the ultimate security technology” (I think they pulled that line out of the 1.1 version). Encryption is just another tool in the box, albeit a useful one. There is no “ultimate” technology. Unless, of course, you’d like to pay me a very reasonable fee and I’ll provide it to you. Just sign this little EULA agreement not to disclose any benchmark or… oh heck, not to disclose anything at all.

Earlier this year I published a note over with my employer entitled “The Three Laws of Data Encryption”. While I can’t release the note content here (because of the whole wanting to stay employed thing, and if they don’t make money I don’t), here are the three laws as a teaser (since they’ve been published in a few public news articles). Basically, there are only three reasons to encrypt:

• If data moves, physically or virtually. E.g. laptops, backup tapes, email, and EDI.
• To enforce separation of duties beyond what’s possible with access controls. Usually this only means protecting against administrators, since access controls can stop everyone else. Examples include credit card or social security numbers in databases (when you separate keys from admins) and files in shared storage.
• Because someone tells you you have to. I call this “mandated encryption”.

You G clients should check out the note if you want more details (actually, if any of you start using Gartner because of this blog, please let me know via email). While the “laws” are totally fracking obvious, I’ve found a lot of people run around trying to encrypt without taking the time to figure out what the threats are and whether encryption will offer any real value. Like encrypting a column in a database and having the DBA manage the keys. What are you protecting against? And “hackers” isn’t the answer.
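
As an illustration of the second law (separation of duties), here is a minimal sketch, mine rather than anything from the note, of encrypting a sensitive column with a key held by the application layer instead of the DBA. It assumes the third-party Python cryptography library; the function names, sample card number, and inline key generation are purely illustrative.

```python
from cryptography.fernet import Fernet

# In practice the key lives in a key-management system or application config
# the DBA cannot read; generating it inline is only for the example.
app_key = Fernet.generate_key()
cipher = Fernet(app_key)

def encrypt_card_number(card_number: str) -> bytes:
    # This opaque token is what actually lands in the database column.
    return cipher.encrypt(card_number.encode("utf-8"))

def decrypt_card_number(stored: bytes) -> str:
    # Only code holding app_key can recover the value -- not the DBA.
    return cipher.decrypt(stored).decode("utf-8")

token = encrypt_card_number("4111111111111111")
assert decrypt_card_number(token) == "4111111111111111"
```

If the DBA also manages the key, as in the example I complain about above, the encryption adds cost without changing who can read the data.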


Security Often Has Little To Do With Safety

I’m catching up after all of last week’s travel and saw a good post by Dave over at Matasano on Safety vs. Security. Dave basically states that although one operating system might have better security than another, it doesn’t really matter if it’s more of a target. Vista might be more inherently secure than OS X, but it doesn’t matter if you are less likely to be attacked on your Mac. At least until someone decides it’s time to change targets.

But what’s really interesting is that Dave’s post got me thinking about the concepts of safety and security. I realized that in the IT security world we tend to always correlate the two, but in the physical security world we know that safety and security are two totally separate issues, often at odds. It’s an easy mistake to make, especially when the New Oxford American Dictionary defines security as: the state of being free from danger or threat.

To be honest, that’s not the definition I expected. A significant part of my job as a security professional has absolutely nothing to do with safety or “threats” in the sense most of you are probably thinking. Unless you consider protecting liquor revenue “safety”. For example:

• At some venues our searches were to reduce the overall volume of alcohol in the event. In other cases, it was to stop booze from coming in so people had to buy it inside.
• Stopping cameras and recording devices from coming into a concert has nothing to do with safety.
• DRM reduces the security of your computer while failing to prevent piracy. It’s a tool to restrict how you use content, not to stop copying.
• Checking boarding passes at airport security reduces lines, but doesn’t improve security.
• While URL filtering does provide a little security against certain web-based attacks, it’s more typically deployed to keep employees from wasting time on corporate resources. A productivity issue, not a security one.

I can think of countless times in the physical security world where safety played second fiddle to some other security goal. I suppose we could sometimes make some loose correlation between the threat of reduced alcohol sales and gate searches, but really we’re talking about using security as a tool for a goal other than safety.

I remember doing a facility walk-through with a facilities management inspector and a rep from the concert promoter before a Beastie Boys show. The promoter was willing to pay for ticket takers and gate searchers, but seemed confused when the inspector and I told him we’d have to hire security guards for all the emergency exits and couldn’t just chain them shut to keep people out.

On another occasion I was supervising at a Guns and Roses/Metallica show back when G&R was inciting riots to support their drug habits. Axl decided to go for a drive after the opening song, and Slash was up to about 15 minutes on his guitar solo while we (and the Denver police) tracked down the limo. Quiet word was spread to us supervisor types that if we got the word, we were to pull all our people backstage to protect the gear. There’d already been one nasty riot on this tour. Now I’ll admit that there was a personal safety aspect, but the decision was to let the house go and just protect the gear and people backstage. Rather than set up some safe zones for the innocent public, we were going to let the house tear itself apart. So even when security is about safety, it might not be about your safety.

We got Axl back and man-handled (no joke) him back on stage, where a few biker/bouncer types stood just off stage to keep him there at all costs. No riot, but a really crappy show after a great start by Metallica. Maybe that makes a better story than proof of my case, but I think you get the point.

Security is a tool to enforce controls. Despite what the dictionary says, this often has little to do with safety as we commonly think about it, or may even sacrifice your safety for someone else’s.


What a Silly Search

I went to the Broncos vs. Cardinals game yesterday here in Phoenix (the Broncos won, in case you were wondering). On the way in we were subject to a pat down of the type I discussed here. What a joke.

Basically, it looks like the employees at the gate were given strict, rote guidelines on how to search. Some of it good (no use of the palm of the hand, to limit accusations of groping), but most of it bad. I’m fairly certain that you don’t need to brush the entire length of someone’s arm when they’re wearing a t-shirt. Also, it’s probably kind of important to check someone’s coat pockets.

While an untrained observer might look at one of these searches next to one of the ones we used to perform and think they’re the same, a trained observer will pick up on a stark difference. These guys moved their hands by rote in a pre-set pattern. The searchers never adjusted based on the person, and never used their eyes. It’s like they were magnetometers without brains. Our teams, even the untrained temporary help, were instructed to use their eyes and heads. Don’t just follow the same pattern over and over (although we had a minimum pattern to start with); use a little judgement. Most of the time it might look the same, but the odds of finding something are significantly higher.

Then again, maybe I’m just waxing nostalgic and we weren’t any better. But seriously: if any of you are senior managers in the NFL, give me a call. You’re wasting everyone’s time and increasing your risk of lawsuit with what I saw.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.