Securosis

Research

Friday Summary: June 26, 2009

Yesterday I had the opportunity to speak at a joint ISSA and ISACA event on cloud computing security down in Austin (for the record, when I travel I never expect it to be hotter AND more humid than Phoenix). I’ll avoid my snarky comments on the development and use of the term “cloud”, since I think we are finally hitting a coherent consensus on what it means (thanks in large part to Chris Hoff). I’ve always thought the fundamental technologies now being lumped into the generic term are extremely important advances, but the marketing just kills me some days. Since I flew in and out the same day, I missed a big chunk of the event before I hopped on stage to host a panel of cloud providers – all of whom are also cloud consumers (mostly on the infrastructure side).

One of the most fascinating conclusions of the panel was that if the data or application is critical, don’t send it to a public cloud (private may be okay). Keep in mind, every one of these panelists sells external and/or public cloud services, and not a single one recommended sending something critical to the cloud (hopefully they’re all still employed on Monday).

By the end of a good Q&A session, we seemed to come to the following consensus, which aligns with a lot of the other work published on cloud computing security:

- In general, the cloud is immature. Internal virtualization and SaaS are higher on the maturity end, with PaaS and IaaS (especially public/external) on the bottom. This is consistent with what other groups, like the Cloud Security Alliance, have published.
- Treat external clouds like any other kind of outsourcing – your SLAs and contracts are your first line of defense.
- Start with less-critical applications/uses to dip your toes in the water and learn the technologies.
- Everyone wants standards, especially for interoperability, but you’ll be in the cloud long before the standards are standard.
The market forces don’t support independent development of standards, and you should expect standards-by-default to emerge from the larger vendors. If you can easily move from cloud to cloud it forces the providers to compete almost completely on price, so they’ll be dragged in kicking and screaming. What you can expect is that once someone like Amazon becomes the de facto leader in a certain area, competitors will emulate their APIs to steal business, thus creating a standard of sorts. As much as we talk SLAs, a lot of users want some starting templates. There might be some opportunities for open projects here.

I followed the panel with a presentation – “Everything You Need to Know About Cloud Security in 30 Minutes or Less”. Nothing Earth-shattering in it, but the attendees told me it was a good, practical summary for the day. It’s no Hoff’s Frogs, and is more at the tadpole level. I’ll try to get it posted on Monday.

And one more time, in case you wanted to take the Project Quant survey and just have not had time: stop what you are doing and hit the SurveyMonkey. We are over 70 responses, and will release the raw data when we hit 100.

-Rich

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

- Rich provides a quote on his use of AV for CSO magazine.
- Rich & Martin on the Network Security Podcast #155.
- Rich hosted a panel and gave a talk at the Austin ISSA/ISACA meeting.
- Rich was quoted on database security over at Network Computing.

Favorite Securosis Posts

- Rich: Cyberskeptic: Cynicism vs. Skepticism. I’ve been addicted to The Skeptics’ Guide to the Universe podcast for a while now, and am looking for more ways to apply scientific principles to the practice of security.
- Adrian: Rich’s post on How I Use Social Media. I wish I could say I understood my own stance towards these media as well as Rich does. Appropriate use was the very subject Martin McKeay and I discussed one evening during RSA, and neither of us was totally comfortable, for various reasons of privacy and paranoia. Good post!

Other Securosis Posts

- You Don’t Own Yourself
- Database Patches, Ad Nauseum
- Mike Andrews Releases Free Web and Application Security Series
- SIEM, Today and Tomorrow
- Kindle and DRM Content
- Database Encryption: Fact vs. Fiction

Project Quant Posts

- Project Quant: Deploy Phase
- Project Quant: Create and Test Deployment Package
- Project Quant: Test and Approve Phase

Favorite Outside Posts

- Adrian: Adam’s comment in The emergent chaos of fingerprinting at airports post: ‘… additional layers of “no” will expose conditions unimagined by their designers’. This statement describes most software and a great number of the processes I encounter. Brilliantly captured!
- Rich: Jack Daniel nails one of the biggest problems with security metrics. Remember, the answer is always 42-ish.

Top News and Posts

- TJX down another $9.75M in breach costs. Too bad they grew, like, a few billion dollars after the breach. I think they can pull $9.75M from the “need a penny/leave a penny” trays at the stores.
- Boaz talks about how Nevada mandates PCI – even for non-credit-card data. I suppose it’s a start, but we’ll have to see the enforcement mechanism. Does this mean companies that collect private data, but not credit card data, have to use PCI assessors?
- The return of the all-powerful L0pht Heavy Industries.
- Microsoft releases a beta of Morro, their free AV. I talked about this once before.
- Lori MacVittie on clickjacking protection using X-Frame-Options in Firefox. Once we put up something worth protecting, we’ll have to enable that.
- German police totally freak out and clear off a street after finding a “nuke” made by two 6-year-olds.
- Critical security patch for Shockwave.
- Spam ‘King’ pleads guilty.
- Microsoft AntiVirus beta software was announced, and informally reviewed over at Digital Soapbox.
- Clear out of business, and they’re selling all that biometric data.

Blog Comment of the Week

This week’s best comment comes from Andrew in response to Science, Skepticism, and Security:

I’d love to see skepticism


Mildly Off Topic: How I Use Social Media

This post doesn’t have a whole heck of a lot to do with security, but it’s a topic I suspect all of us think about from time to time. With the continuing explosion of social media outlets, I’ve noticed myself (and most of you) bouncing around from app to app as we figure out which ones work best in which contexts, and which are even worth our time. The biggest challenge I’ve found is compartmentalization – which tools to use for which jobs, and how to manage my personal and professional online lives. Again, I think it’s something we all struggle with, but for those of us who use social media heavily as part of our jobs it’s probably a little more challenging. Here’s my perspective as an industry analyst. I really believe I’d manage these differently if I were in a different line of work (or with a different analyst firm), so I won’t claim my approach is the right one for anyone else.

Blogs: As an analyst, I use the Securosis blog as my primary mechanism for publishing research. I also think it’s important to develop a relationship (platonic, of course) with readers, which is why I mix a little personal content and context in with the straighter security posts. For blogging I deliberately use an informal tone which I strip out of content that is later incorporated into research reports and such. Our informal guidelines are that while not everything needs to be directly security related, over 90% of the content should be dedicated to our coverage areas. Of our research content, 80% should be focused on helping practitioners get their jobs done, with the remaining 20% split between news and more forward-looking thought leadership. We strive for a minimum of 1 post a day, with 3 “meaty” content posts each week, a handful of “drive-by” quick responses/news items a week, and our Friday summary. Yes, we really do think about this stuff that much.

I don’t currently have a personal blog outside of the site due to time, and (as we’ll get to) Twitter takes care of a lot of that. I also read a ton of other blogs, and try to comment and link to them as much as possible. I also consider the blog the most powerful peer-review mechanism for our research on the face of the planet. It’s the best way to be open and transparent about what we do, while getting important feedback and perspectives we never could otherwise. As an analyst, it’s absolutely invaluable.

Podcasts: My primary podcast is co-hosting The Network Security Podcast with Martin McKeay. This isn’t a Securosis-specific thing, and I try not to drag too much of my work onto the show. Adrian and I plan on doing some more podcasts/webcasts, but those will be oriented towards specific topics and filling out our other content. Running a regular podcast is darn hard. I like the NetSecPodcast since it’s more informal and we get to talk about any off the wall topic (generally in the security realm) that comes to mind.

Twitter: After the blog, this is my single biggest outlet. I initially started using Twitter to communicate with a small community of friends and colleagues in the Mac and security communities, but as Twitter exploded I’ve had to change how I approach it. Initially I described Twitter as a water cooler where I could hang out and chat informally with friends, but with over 1200 followers (many of them PR, AR, and other marketing types) I’ve had to be a little more careful about what I say. Generally, I’m still very informal on Twitter and fully mix in professional and personal content. I use it to share and interact with friends, highlight some content (but not too much, I hate people who use Twitter only to spam their blog posts), and push out my half-baked ideas. I’ve also found Twitter especially powerful to get instant feedback on things, or to rally people towards something interesting.

I really enjoy being so informal on Twitter, and hope I don’t have to tighten things down any more because too many professional types are watching. It’s my favorite way to participate in the wider online community, develop new collaboration, toss out random ideas, and just stay connected with the outside world as I hide in my home office day after day. The bad side is I’ve had to reduce using it to organize meeting up with people (too many random followers in any given area), and some PR types use it to spy on my personal life (not too many; some of them are also in the friends category, but it’s happened). The @Securosis Twitter account is designed for the corporate “voice”, while the @rmogull account is my personal one. I tend to follow people I either know or who contribute positively to the community dialog. I only follow a few corporate accounts, and I can’t possibly follow everyone who follows me. I follow people who are interesting and I want to read, rather than using it as a mass-networking tool. With @rmogull there’s absolutely no split between my personal and professional lives; it’s for whatever I’m doing at the moment, but I’m always aware of who is watching.

LinkedIn: I keep going back and forth on how I use LinkedIn, and recently decided to use it as my main business networking tool. To keep the network under control I generally only accept invitations from people I’ve directly connected with at some point. I feel bad turning down all the random connections, but I see social networks as having power based on quality rather than quantity (that’s what groups are for). Thus I tend to turn down connections from people who randomly saw a presentation or listened to a podcast. It isn’t an ego thing; it’s that, for me, this is a tool to keep track of my professional network, and I’ve never been one of those business card collectors.

Facebook:


Science, Skepticism, and Security

This is part 2 of our series on skepticism in security. You can read part 1 here.

Being a bit of a science geek, over the past year or so I’ve become addicted to The Skeptics’ Guide to the Universe podcast, which is now the only one I never miss. It’s the Skeptics’ Guide that first really exposed me to the scientific skeptical movement, which is well aligned with what we do in security. We turn back to Wikipedia for a definition of scientific skepticism:

Scientific skepticism or rational skepticism (also spelled scepticism), sometimes referred to as skeptical inquiry, is a scientific or practical, epistemological position in which one questions the veracity of claims lacking empirical evidence. … Scientific skepticism utilizes critical thinking and inductive reasoning while attempting to oppose claims made which lack suitable evidential basis. … Characteristics: Like a scientist, a scientific skeptic attempts to evaluate claims based on verifiability and falsifiability rather than accepting claims on faith, anecdotes, or relying on unfalsifiable categories. Skeptics often focus their criticism on claims they consider to be implausible, dubious or clearly contradictory to generally accepted science. This distinguishes the scientific skeptic from the professional scientist, who often concentrates their inquiry on verifying or falsifying hypotheses created by those within their particular field of science.

The skeptical movement has expanded well beyond merely debunking fraudsters (such as that Airborne garbage or cell phone radiation absorbers) into the general promotion of science education, science advocacy, and the use of the scientific method in the exploration of knowledge. Skeptics battle the misuse of scientific theories and statistics, and it’s this aspect I consider essential to the practice of security.
In the security industry we never lack for theories or statistics, but very few of them are based on sound scientific principles, and often they cannot withstand scientific scrutiny. For example, the historic claim that 70% of security attacks were from the “insider threat” never had any rigorous backing. That claim was a munged-up “fact” based on the free headline from a severely flawed survey (the CSI/FBI report), and an informal statement from one of my former coworkers made years earlier. It seems every day I see some new numbers about how many systems are infected with malware, how many dollars are lost due to the latest cybercrime (or people browsing ESPN during lunch), and so on.

I believe that the appropriate application of skepticism is essential in the practice of security, but we are also in the position of often having to make critical decisions without the amount of data we’d like. Rather than saying we should only make decisions based on sound science, I’m calling for more application of scientific principles in security, and increased recognition of doubt when evaluating information. Let’s recognize the difference between guesses, educated guesses, facts, and outright garbage.

For example – the disclosure debate. I’m not claiming I have the answers, and I’m not saying we should put everything on hold until we get the answers, but all sides do need to recognize we have no effective evidentiary basis for defining general disclosure policies. We have personal experience and anecdote, but no sound way to measure the potential impact of full disclosure vs. responsible disclosure vs. no disclosure.

Another example is the Annualized Loss Expectancy (ALE) model. The ALE model takes the loss from a single event and multiplies it by the annual rate of occurrence, to give the probable annual loss. That works great for defined assets with predictable loss rates, such as lost laptops and physical theft (e.g., retail shrinkage). It’s nearly worthless in information security. Why? Because we rarely know the value of an asset, or the annual rate of occurrence. Thus we multiply a guess by a guess to produce a wild-assed guess. In scientific terms neither input value has precision or accuracy, and thus any result is essentially meaningless.

Skepticism is an important element of how we think about security because it helps us make decisions based on what we know, while providing the intellectual freedom to change those decisions as what we know evolves. We don’t get as hung up on sticking with past decisions merely to continue to validate our belief system.

In short, let’s apply more science and formal skepticism to security. Let’s recognize that just because we have to make decisions from uncertain evidence, we aren’t magically turning guesses and beliefs into theories or facts. And when we’re presented with theories, facts, and numbers, let’s apply scientific principles and see which ones hold up.
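The guess-times-guess problem is easy to see with numbers. Here’s a minimal sketch (all figures are invented for illustration, not real loss data):

```python
# The ALE model: Annualized Loss Expectancy = Single Loss Expectancy (SLE)
# multiplied by the Annual Rate of Occurrence (ARO).
def ale(sle, aro):
    return sle * aro

# With measurable inputs (e.g., lost laptops) the output means something:
laptops = ale(2_000, 30)  # $2,000 per laptop, 30 lost per year
print(laptops)  # 60000

# With guessed inputs, carry each guess as a (low, high) range and watch
# the uncertainty compound: a guess times a guess is a wild-assed guess.
def ale_range(sle_low, sle_high, aro_low, aro_high):
    return (sle_low * aro_low, sle_high * aro_high)

# "Somewhere between $10K and $1M per breach, somewhere between once a
# decade and ten times a year":
print(ale_range(10_000, 1_000_000, 0.1, 10))  # (1000.0, 10000000)
```

The second result spans four orders of magnitude, which is exactly the point above: when neither input has precision or accuracy, the output is essentially meaningless no matter how precise the single number you report looks.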


Cyberskeptic: Cynicism vs. Skepticism

Note: This is the first part of a two-part series on skepticism in security; click here for part 2.

Securosis: A mental disorder characterized by paranoia, cynicism, and the strange compulsion to defend random objects.

For years I’ve been joking about how important cynicism is to being an effective security professional (and analyst). I’ve always considered it a core principle of the security mindset, but recently I’ve been thinking a lot more about skepticism than cynicism. My dictionary defines a cynic as:

- a person who believes that people are motivated purely by self-interest rather than acting for honorable or unselfish reasons: some cynics thought that the controversy was all a publicity stunt.
- a person who questions whether something will happen or whether it is worthwhile: the cynics were silenced when the factory opened.
- (Cynic) a member of a school of ancient Greek philosophers founded by Antisthenes, marked by an ostentatious contempt for ease and pleasure. The movement flourished in the 3rd century BC and revived in the 1st century AD.

Cynicism is all about distrust and disillusionment; and let’s face it, those are pretty important in the security industry. As cynics we always focus on an individual’s (or organization’s) motivation. We can’t afford a trusting nature, since that’s the fastest route to failure in our business. Back in my physical security days I learned the hard way that while I’d love to trust more people, the odds are they would abuse that trust for self-interest, at my expense. Cynicism is the ‘default deny’ of social interaction.

Skepticism, although closely related to cynicism, is less focused on individuals, and more focused on knowledge. My dictionary defines a skeptic as:

- a person inclined to question or doubt all accepted opinions.
- a person who doubts the truth of Christianity and other religions; an atheist or agnostic.
- Philosophy: an ancient or modern philosopher who denies the possibility of knowledge, or even rational belief, in some sphere.

But to really define skepticism in modern society, we need to move past the dictionary into current usage. Wikipedia does a nice job with its expanded definition:

an attitude of doubt or a disposition to incredulity either in general or toward a particular object; the doctrine that true knowledge or knowledge in a particular area is uncertain; or the method of suspended judgment, systematic doubt, or criticism that is characteristic of skeptics (Merriam-Webster).

Which brings us to the philosophical application of skepticism:

In philosophy, skepticism refers more specifically to any one of several propositions. These include propositions about: an inquiry, a method of obtaining knowledge through systematic doubt and continual testing, the arbitrariness, relativity, or subjectivity of moral values, the limitations of knowledge, a method of intellectual caution and suspended judgment.

In other words, cynicism is about how we approach people, while skepticism is about how we approach knowledge. For a security professional, both are important, but I’m realizing it’s becoming ever more essential to challenge our internal beliefs and dogmas, rather than focusing on distrust of individuals. I consider skepticism harder than cynicism, because we are often forced to challenge our own internal beliefs on a regular basis.

In part 2 of this series I’ll talk about the role of skepticism in security.


Mike Andrews Releases Free Web and Application Security Series

I first met Mike Andrews about 3 years ago at a big Black Hat party. Turns out we both worked in the concert business at the same time. Despite being located nowhere near each other, we each worked some of the same tours and had a bit of fun swapping stories. Mike managed to convince his employer to put up a well-designed series of webcasts on the basics of web and web application security. Since Mike wrote one of the books, he’s a great resource. Here’s Mike’s blog post, and a direct link to the WebSec 101 series hosted by his employer (he also gives out the slides if you don’t want to listen to the webcast). This is 101-level stuff, which means even an analyst can understand it.


Elephants, the Grateful Dead, and the Friday Summary – June 12, 2009

Back before Jerry Garcia moved on to the big pot cloud in the sky, I managed security at a couple of Dead shows in Boulder/Denver. In those days I was the assistant director for event security at the University of Colorado (before a short stint as director), and the Dead thought it would be better to bring us Boulder guys into Denver to manage the show there since we’d be less ‘aggressive’. Of course we all also worked as regular staff or supervisors for the company running the shows in Denver, but they never really asked about that.

I used to sort of like the Dead until I started working Dead shows. While it might have seemed all “free love and mellowness” from the outside, if you’ve ever gone to a Dead show sober you’ve never met a more selfish group of people. By “free” they meant “I shouldn’t have to pay no matter what because everything in the world should be free, especially if I want it”, and by mellow they meant, “I’m mellow as long as I get to do whatever I want and you are a fascist pig if you tell me what to do, especially if you’re telling me to be considerate of other people”. We had more serious injuries and deaths at Dead shows (and other Dead-style bands) than anywhere else. People tripping out and falling off balconies, landing on other people and paralyzing them, then wandering off to ‘spin’ in a fire aisle. Once we had something like a few hundred counterfeit tickets sold for the same dozen or so seats, leading to all sorts of physical altercations. (The amusing part of that was hearing what happened to the counterfeiter in the parking lot after we kicked out the first hundred or so.)

Running security at a Dead show is like eating an elephant, or running a marathon. When the unwashed masses (literally – we’re talking Boulder in the 90s) fill the fire aisles, all you can do is walk slowly up and down the aisle, politely moving everyone back near their seats, before starting all over again.
Yes, my staff were fascist pigs, but it was that or let the fire marshal shut the entire thing down (for real – they were watching). I’d tell my team to keep moving slowly, don’t take it personally, and don’t get frustrated when you have to start all over again. The alternative was giving up, which wasn’t really an option. Because then I wouldn’t pay them.

It’s really no different in IT security. Most of what we do is best approached like trying to eat an elephant (you know, one bite at a time, for the 2 of you who haven’t heard that one before). Start small, polish off that spleen, then move on to the liver. Weirdly enough, in many of my end user conversations lately, people seem to be vapor locking on tough problems. Rather than taking them on a little bit at a time as part of an iterative process, they freak out at the scale or complexity, write a bunch of analytical reports, and complain to vendors and analysts that there should be a black box to solve it all for them. But if you’ve ever done any mountaineering, or worked a Dead show, you know that all big jobs are really a series of small jobs. And once you hit the top, it’s time to turn around and do it all over again. Yes, you all know that, but it’s something we all need to remind ourselves of on a regular basis. For me, it’s about once a quarter, when I get caught up on our financials.

One additional reminder: the Project Quant Survey is up. Yeah, I know it’s SurveyMonkey, and yeah, I know everyone bombards you with surveys, but this one is pretty short and the results will be open to everyone.

(Picture courtesy of me on safari a few years ago.)

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

- A ton of articles referenced my TidBITS piece on Apple security, but most of them were based on a Register article that took bits out of context, so I’m not linking to them directly.
- I spoke at the TechTarget Financial Information Security Decisions conference on Pragmatic Data Security.

Favorite Securosis Posts

- Rich: I flash back to my paramedic days in The Laws of Emergency Medicine—Security Style.
- Adrian: How Market Forces will Affect Payment Processing.

Other Securosis Posts

- Application vs. Database Encryption
- Database Encryption, Part 2: Selection Process Overview
- iPhone Security Updates
- Facebook Monetary System

Project Quant Posts

- Project Quant: Acquire Phase
- Project Quant: Patch Evaluation Phase
- Details: Monitor for Advisories

Favorite Outside Posts

- Adrian: Rsnake’s RFC1918 Caching Problems post.
- Rich: Rothman crawls out from under the rock, and is now updating the Daily Incite on a more-regular basis again. Keep it up Mike!

Top News and Posts

- Microsoft Office security updates.
- iPhone 3G S. I smell another Securosis Holiday coming up.
- T-Mobile confirms data theft.
- Snow Leopard is coming!
- No penalty apparently, but the Sears data leak fiasco is settled.
- Black Hat founder appointed to DHS council. Congrats Jeff, well done.
- VMs busting out.
- Symantec and McAfee fined over automatic renewals.
- China mandating a bot on everyone’s computer. Maybe that isn’t how they see it, but that’s what’s going to happen with the first vulnerability.
- Security spending is taking a hit.
- Critical Adobe patches out.
- Mike Andrews points us to a Firefox web app testing plugin set.
- Bad guys automating Twitter phishing via trending topics.

Blog Comment of the Week

This week’s best comment comes from Allen in response to the State of Web Application and Data Security post:

… I bet (a case of beers) that if there was no PCI DSS in place that every vendor would keep credit card details for all transactions for every customer forever, just in case. It is only now that they are forced to apply “pretty-good” security restrictions


Application vs. Database Encryption

There’s a bit of debate brewing in the comments on the latest post in our database encryption series. That series is meant to focus only on database encryption, so we weren’t planning to talk much about other options, but it’s an important issue. Here’s an old diagram I use a lot in presentations to describe potential encryption layers.

What we find is that the higher up the stack you encrypt, the greater the overall protection (since it stays encrypted through the rest of the layers), but this comes at the cost of increased complexity. It’s far easier to encrypt an entire hard drive than a single field in an application, at least in real-world implementations. By giving up granularity, you gain simplicity. For example, to encrypt the drive you don’t have to worry about access controls, tying in database or application users, and so on.

In an ideal world, encrypting sensitive data at the application layer is likely your best choice. Practically speaking, it’s not always possible, or may be implemented entirely wrong. It’s really freaking hard to design appropriate application-level encryption, even when you’re using crypto libraries and other adjuncts like external key management. Go read this post over at Matasano, or anything by Nate Lawson, if you want to start digging into the complexity of application encryption.

Database encryption is also really hard to get right, but is sometimes slightly more practical than application encryption. When you have a complex, multi-tiered application with batch jobs, OLTP connections, and other components, it may be easier to encrypt at the DB level and manage access based on user accounts (including service accounts). That’s why we call this “user encryption” in our model. Keep in mind that if someone compromises a user account with access, any encryption is worthless. Additional controls like application-level logic or database activity monitoring might be able to mitigate a portion of that risk, but once you lose the account you’re at least partially hosed.

For retail/PCI kinds of transactions I prefer application encryption (done properly). For many users I work with that’s not an immediate option, and they at least need to start with some sort of database encryption (usually transparent/external) to deal with compliance and risk requirements. Application encryption isn’t a panacea – it can work well, but it brings additional complexities and is really easy to screw up. Use with caution.
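The layering tradeoff is easier to see in a sketch. This toy example is entirely hypothetical: the table and function names are invented, and base64 stands in for a real cipher purely so the snippet runs without third-party libraries (base64 is an encoding, NOT encryption). The structural point it shows is that a field encrypted in the application before the INSERT stays opaque to anyone who only has database access:

```python
# Toy sketch of application-layer field protection. The "cipher" here is a
# placeholder; a real implementation would use a vetted crypto library plus
# external key management, which the post notes is genuinely hard to get right.
import base64
import sqlite3

def app_encrypt_field(plaintext: str) -> str:
    # Placeholder for a real cipher call (e.g., an AEAD encrypt under a managed key).
    return base64.b64encode(plaintext.encode()).decode()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, card TEXT)")

# The application encrypts the sensitive field before it ever reaches the DB:
conn.execute("INSERT INTO customers VALUES (?, ?)",
             ("alice", app_encrypt_field("4111-1111-1111-1111")))

# A DBA, batch job, or compromised service account querying the table sees
# only the protected form. Contrast transparent database-level encryption,
# which hands back plaintext to any authorized database session:
stored = conn.execute("SELECT card FROM customers").fetchone()[0]
print(stored == "4111-1111-1111-1111")  # False
```

This also illustrates the caveat above: whoever holds the application’s decrypt capability (or compromises an account that does) still gets plaintext, so the encryption layer only helps against parties below it in the stack.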


The Laws of Emergency Medicine—Security Style

Thanks to some bad timing on the part of our new daughter, I managed to miss the window to refresh my EMT certification and earned the privilege of spending two weekends in a refresher class. The class isn’t bad, but I’ve been riding this horse for nearly 20 years (and have the attention span of a garden gnome), so it’s more than a little boring. On the upside, it’s bringing back all sorts of fun memories from my days as a field paramedic. One of my favorite humorous/true anecdotes is the “Rules of Emergency Medicine”. I’ve decided to translate them into security speak:

1. All patients die… eventually. Security equivalent: You will be hacked… eventually. It sucks when you kill^H^H^H^Hfail to save a patient, but all you’re ever doing is delaying the inevitable. In the security world, you’ll get breached someday. Maybe not at this job, but it’s going to happen. Get over it, and make sure you also focus on what you need to do after you’re breached. React faster and better.

2. All bleeding stops… eventually. Security equivalent: If you don’t fix the problem, it will fix itself. You can play all the games you want, and sponsor all the pet projects you want, but if you don’t focus on the real threats they’ll take care of your problems for you. Take vulnerability scanning – if it isn’t in your budget, don’t worry about it. I’m sure someone on the Internet will take care of it for you. This one also applies to management – if they want to ignore data breaches, web app security, or whatever… eventually it will take care of itself.

3. If you drop the baby, pick it up. Security equivalent: If you screw up, move on. None of us is perfect and we all screw up on a regular basis. When something bad happens, rather than freaking out, it’s best to move on to the next task. Fix the mistake, and carry on. The key to this parable is to fix the problem rather than doing all the hand-wringing/blame-pushing we tend to do when we make mistakes.
I think I’m inspired to write a new presentation – “The Firefighter’s Guide to Data Security”.


Hackers 1, Marketing 0

You ever watch a movie or TV show where you know the ending, but you keep viewing in suspense to find out how it actually happens? That’s how I felt when I read this:

Break Into My Email Account and Win $10,000

“StrongWebmail.com is offering $10,000 to the first person that breaks into our CEO’s email account… and to make things easier, we’re giving you his username and password.”

No surprise, it only took a few days for this story to break:

“On Thursday, a group of security researchers claimed to have won the contest, which challenged hackers to break into the Web mail account of StrongWebmail CEO Darren Berkovitz and report back details from his June 26 calendar entry. The hackers, led by Secure Science Chief Scientist Lance James and security researchers Aviv Raff and Mike Bailey, provided details from Berkovitz’s calendar to IDG News Service. In an interview, Berkovitz confirmed those details were from his account.”

Reading deeper, they say it was a cross-site scripting attack. However, Berkovitz could not confirm that the hackers had actually won the prize. He said he would need to check that the hackers had abided by the contest rules, adding, “if someone did it, we’ll kind of put our heads down.”

Silly rules; this is good enough for me. (Thanks to @jeremiahg for the pointer.)


Boaz Nails It – The Encryption Dilemma

Boaz Gelbord wrote a thoughtful response (as did Mike Andrews) to my post earlier this week on the state of web application and data security. In it was one key tidbit on encryption:

“The truth is that you just don’t mitigate that much risk by encrypting files at rest in a reasonably secure environment. Of course if a random account or service is compromised on a server, having those database files encrypted would sure come in handy. But for your database or file folder encryption to actually save you from anything, some other control needs to fail.”

I wouldn’t say this is always true, but it’s generally true. In fact, this situation was the inspiration behind the Three Laws of Data Encryption I wrote a few years ago. The thing is, access controls work really freaking well, and the only reason to use encryption instead of them is if the data is moving, or you need to restrict the data with greater granularity than access controls allow. For most systems, this means protecting data from administrators, since you can manage everyone else with access controls.

Also keep in mind that many current data encryption systems tie directly to the user’s authentication, and thus are just as prone to compromised user accounts as access controls are. Again, not true in all cases, but true in many.

The first step in encryption is to know what threat you are protecting against, and whether other controls would be just as effective. Seriously, we toss encryption around as the answer all the time, without knowing what the question is. (My favorite question/answer? Me: Why are you encrypting? Them: To protect against hackers. Me: Um. Cool. You have a bathroom anywhere?)
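That point about encryption tied to user authentication is easy to see in code. Here’s a deliberately toy sketch (all names hypothetical, with XOR standing in for a real cipher – never do this in production, use an authenticated cipher like AES-GCM) showing that when the at-rest key is derived from the user’s password, a compromised password defeats the encryption exactly as it defeats the access control:

```python
# Toy illustration only: at-rest encryption keyed to login credentials.
import hashlib
import itertools

def derive_key(password: str, salt: bytes = b"demo-salt") -> bytes:
    # Key derived from the user's password via PBKDF2 (Python stdlib).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR "encryption" purely for illustration -- not real cryptography.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

record = b"SSN: 123-45-6789"
key = derive_key("correct horse battery staple")
ciphertext = xor_cipher(record, key)

# An attacker who steals the user's password derives the exact same key,
# so the at-rest encryption adds nothing beyond the access control
# (the password) that has already failed.
stolen_key = derive_key("correct horse battery staple")
recovered = xor_cipher(ciphertext, stolen_key)
assert recovered == record
```

The encryption layer here only helps against an attacker who gets the ciphertext without the credentials – which is precisely why the first question should be what threat the encryption is meant to counter.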


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.