Securosis Research

Science, Skepticism, and Security

This is part 2 of our series on skepticism in security. You can read part 1 here.

Being a bit of a science geek, over the past year or so I’ve become addicted to The Skeptics’ Guide to the Universe podcast, which is now the only one I never miss. It’s the Skeptics’ Guide that first really exposed me to the scientific skeptical movement, which is well aligned with what we do in security. We turn back to Wikipedia for a definition of scientific skepticism:

Scientific skepticism or rational skepticism (also spelled scepticism), sometimes referred to as skeptical inquiry, is a scientific or practical, epistemological position in which one questions the veracity of claims lacking empirical evidence. … Scientific skepticism utilizes critical thinking and inductive reasoning while attempting to oppose claims made which lack suitable evidential basis. … Characteristics: Like a scientist, a scientific skeptic attempts to evaluate claims based on verifiability and falsifiability rather than accepting claims on faith, anecdotes, or relying on unfalsifiable categories. Skeptics often focus their criticism on claims they consider to be implausible, dubious or clearly contradictory to generally accepted science. This distinguishes the scientific skeptic from the professional scientist, who often concentrates their inquiry on verifying or falsifying hypotheses created by those within their particular field of science.

The skeptical movement has expanded well beyond merely debunking fraudsters (such as that Airborne garbage or cell phone radiation absorbers) into the general promotion of science education, science advocacy, and the use of the scientific method in the exploration of knowledge. Skeptics battle the misuse of scientific theories and statistics, and it’s this aspect I consider essential to the practice of security.
In the security industry we never lack for theories or statistics, but very few of them are based on sound scientific principles, and often they cannot withstand scientific scrutiny. For example, the historic claim that 70% of security attacks came from the “insider threat” never had any rigorous backing. That claim was a munged-up “fact” based on the headline finding of a severely flawed survey (the CSI/FBI report) and an informal statement one of my former coworkers made years earlier. It seems every day I see some new numbers about how many systems are infected with malware, how many dollars are lost to the latest cybercrime (or to people browsing ESPN during lunch), and so on.

I believe the appropriate application of skepticism is essential in the practice of security, but we are also often in the position of having to make critical decisions without the amount of data we’d like. Rather than saying we should only make decisions based on sound science, I’m calling for more application of scientific principles in security, and increased recognition of doubt when evaluating information. Let’s recognize the difference between guesses, educated guesses, facts, and outright garbage.

For example – the disclosure debate. I’m not claiming I have the answers, and I’m not saying we should put everything on hold until we get them, but all sides do need to recognize we have no effective evidentiary basis for defining general disclosure policies. We have personal experience and anecdote, but no sound way to measure the potential impact of full disclosure vs. responsible disclosure vs. no disclosure.

Another example is the Annualized Loss Expectancy (ALE) model, which takes the loss from a single event and multiplies it by the annualized rate of occurrence to give the probable annual loss. It works great for defined assets with predictable loss rates, such as lost laptops and physical theft (e.g., retail shrinkage).
Nearly worthless in information security. Why? Because we rarely know the value of an asset or the annual rate of occurrence, so we multiply a guess by a guess to produce a wild-assed guess. In scientific terms, neither input value has precision or accuracy, and thus any result is essentially meaningless.

Skepticism is an important element of how we think about security because it helps us make decisions based on what we know, while providing the intellectual freedom to change those decisions as what we know evolves. We don’t get as hung up on sticking with past decisions merely to continue validating our belief system.

In short, let’s apply more science and formal skepticism to security. Let’s recognize that just because we have to make decisions from uncertain evidence, we aren’t magically turning guesses and beliefs into theories or facts. And when we’re presented with theories, facts, and numbers, let’s apply scientific principles and see which ones hold up.
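To make the ALE arithmetic concrete, here’s a minimal sketch. All dollar figures and rates are invented for illustration; the point is that the same multiplication that works for lost laptops produces a meaningless range when both inputs are guesses.

```python
# ALE (Annualized Loss Expectancy) is just multiplication, so garbage
# inputs make garbage outputs. All figures below are invented for
# illustration.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized Loss Expectancy = SLE x ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Lost laptops: both inputs are measurable, so the output is useful.
laptops = ale(2_500, 12)  # 12 lost laptops/year at $2,500 each
print(laptops)  # 30000

# A data breach: both inputs are guesses spanning orders of magnitude,
# so the "probable annual loss" spans a factor of 2,000.
low = ale(50_000, 0.1)      # optimistic guesses
high = ale(5_000_000, 2.0)  # pessimistic guesses
print(low, high)  # 5000.0 10000000.0
```

A guess times a guess: the “answer” for the breach scenario lands anywhere between $5,000 and $10,000,000, which is exactly the wild-assed guess described above.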


Cyberskeptic: Cynicism vs. Skepticism

Note: This is the first part of a two-part series on skepticism in security; click here for part 2.

Securosis: A mental disorder characterized by paranoia, cynicism, and the strange compulsion to defend random objects.

For years I’ve been joking about how important cynicism is to being an effective security professional (and analyst). I’ve always considered it a core principle of the security mindset, but recently I’ve been thinking a lot more about skepticism than cynicism. My dictionary defines a cynic as:

  • a person who believes that people are motivated purely by self-interest rather than acting for honorable or unselfish reasons: some cynics thought that the controversy was all a publicity stunt.
  • a person who questions whether something will happen or whether it is worthwhile: the cynics were silenced when the factory opened.
  • (Cynic) a member of a school of ancient Greek philosophers founded by Antisthenes, marked by an ostentatious contempt for ease and pleasure. The movement flourished in the 3rd century BC and revived in the 1st century AD.

Cynicism is all about distrust and disillusionment; and let’s face it, those are pretty important in the security industry. As cynics we always focus on an individual’s (or organization’s) motivation. We can’t afford a trusting nature, since that’s the fastest route to failure in our business. Back in my physical security days I learned the hard way that while I’d love to trust more people, the odds are they would abuse that trust for self-interest, at my expense. Cynicism is the ‘default deny’ of social interaction.

Skepticism, although closely related to cynicism, is less focused on individuals and more focused on knowledge. My dictionary defines a skeptic as:

  • a person inclined to question or doubt all accepted opinions.
  • a person who doubts the truth of Christianity and other religions; an atheist or agnostic.
  • Philosophy: an ancient or modern philosopher who denies the possibility of knowledge, or even rational belief, in some sphere.

But to really define skepticism in modern society, we need to move past the dictionary into current usage. Wikipedia does a nice job with its expanded definition:

an attitude of doubt or a disposition to incredulity either in general or toward a particular object; the doctrine that true knowledge or knowledge in a particular area is uncertain; or the method of suspended judgment, systematic doubt, or criticism that is characteristic of skeptics (Merriam-Webster).

Which brings us to the philosophical application of skepticism:

In philosophy, skepticism refers more specifically to any one of several propositions. These include propositions about: an inquiry; a method of obtaining knowledge through systematic doubt and continual testing; the arbitrariness, relativity, or subjectivity of moral values; the limitations of knowledge; a method of intellectual caution and suspended judgment.

In other words, cynicism is about how we approach people, while skepticism is about how we approach knowledge. For a security professional both are important, but I’m realizing it’s becoming ever more essential to challenge our internal beliefs and dogmas, rather than focusing on distrust of individuals. I consider skepticism harder than cynicism, because it forces us to challenge our own internal beliefs on a regular basis. In part 2 of this series I’ll talk about the role of skepticism in security.


Mike Andrews Releases Free Web and Application Security Series

I first met Mike Andrews about 3 years ago at a big Black Hat party. It turns out we both worked in the concert business at the same time; despite being located nowhere near each other, we each worked some of the same tours, and had a bit of fun swapping stories. Mike managed to convince his employer to put up a well-designed series of webcasts on the basics of web and web application security. Since Mike wrote one of the books on the subject, he’s a great resource. Here’s Mike’s blog post, and a direct link to the WebSec 101 series hosted by his employer (he also provides the slides if you don’t want to listen to the webcasts). This is 101-level stuff, which means even an analyst can understand it.


Elephants, the Grateful Dead, and the Friday Summary – June 12, 2009

Back before Jerry Garcia moved on to the big pot cloud in the sky, I managed security at a couple of Dead shows in Boulder/Denver. In those days I was the assistant director for event security at the University of Colorado (before a short stint as director), and the Dead thought it would be better to bring us Boulder guys into Denver to manage the show there, since we’d be less ‘aggressive’. Of course we all also worked as regular staff or supervisors for the company running the shows in Denver, but they never really asked about that.

I used to sort of like the Dead until I started working Dead shows. While it might have seemed all “free love and mellowness” from the outside, if you’ve ever gone to a Dead show sober, you’ve never met a more selfish group of people. By “free” they meant “I shouldn’t have to pay no matter what, because everything in the world should be free, especially if I want it”, and by mellow they meant, “I’m mellow as long as I get to do whatever I want, and you are a fascist pig if you tell me what to do, especially if you’re telling me to be considerate of other people”. We had more serious injuries and deaths at Dead shows (and shows by other Dead-style bands) than anywhere else. People tripping out and falling off balconies, landing on other people and paralyzing them, then wandering off to ‘spin’ in a fire aisle. Once we had something like a few hundred counterfeit tickets sold for the same dozen or so seats, leading to all sorts of physical altercations. (The amusing part of that was hearing what happened to the counterfeiter in the parking lot after we kicked out the first hundred or so.)

Running security at a Dead show is like eating an elephant, or running a marathon. When the unwashed masses (literally – we’re talking Boulder in the 90s) fill the fire aisles, all you can do is walk slowly up and down the aisle, politely moving everyone back near their seats, before starting all over again.
Yes, my staff were fascist pigs, but it was that or let the fire marshal shut the entire thing down (for real – they were watching). I’d tell my team to keep moving slowly, don’t take it personally, and don’t get frustrated when you have to start all over again. The alternative was giving up, which wasn’t really an option. Because then I wouldn’t pay them.

It’s really no different in IT security. Most of what we do is best approached like trying to eat an elephant (you know, one bite at a time, for the 2 of you who haven’t heard that one before). Start small, polish off that spleen, then move on to the liver. Weirdly enough, in many of my end user conversations lately, people seem to be vapor locking on tough problems. Rather than taking them on a little bit at a time as part of an iterative process, they freak out at the scale or complexity, write a bunch of analytical reports, and complain to vendors and analysts that there should be a black box to solve it all for them. But if you’ve ever done any mountaineering, or worked a Dead show, you know that all big jobs are really a series of small jobs. And once you hit the top, it’s time to turn around and do it all over again. Yes, you all know that, but it’s something we all need to remind ourselves of on a regular basis. For me, it’s about once a quarter, when I get caught up on our financials.

One additional reminder: the Project Quant Survey is up. Yeah, I know it’s SurveyMonkey, and yeah, I know everyone bombards you with surveys, but this one is pretty short and the results will be open to everyone. (Picture courtesy of me on safari a few years ago.)

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

  • A ton of articles referenced my TidBITS piece on Apple security, but most of them were based on a Register article that took bits out of context, so I’m not linking to them directly.
  • I spoke at the TechTarget Financial Information Security Decisions conference on Pragmatic Data Security.
Favorite Securosis Posts

  • Rich: I flash back to my paramedic days in The Laws of Emergency Medicine—Security Style.
  • Adrian: How Market Forces will Affect Payment Processing.

Other Securosis Posts

  • Application vs. Database Encryption
  • Database Encryption, Part 2: Selection Process Overview
  • iPhone Security Updates
  • Facebook Monetary System

Project Quant Posts

  • Project Quant: Acquire Phase
  • Project Quant: Patch Evaluation Phase
  • Details: Monitor for Advisories

Favorite Outside Posts

  • Adrian: Rsnake’s RFC1918 Caching Problems post.
  • Rich: Rothman crawls out from under the rock, and is now updating the Daily Incite on a more regular basis. Keep it up, Mike!

Top News and Posts

  • Microsoft Office security updates.
  • iPhone 3G S. I smell another Securosis holiday coming up.
  • T-Mobile confirms data theft.
  • Snow Leopard is coming!
  • No penalty apparently, but the Sears data leak fiasco is settled.
  • Black Hat founder appointed to DHS council. Congrats Jeff, well done.
  • VMs busting out.
  • Symantec and McAfee fined over automatic renewals.
  • China mandating a bot on everyone’s computer. Maybe that isn’t how they see it, but that’s what’s going to happen with the first vulnerability.
  • Security spending is taking a hit.
  • Critical Adobe patches out.
  • Mike Andrews points us to a Firefox web app testing plugin set.
  • Bad guys automating Twitter phishing via trending topics.

Blog Comment of the Week

This week’s best comment comes from Allen, in response to the State of Web Application and Data Security post:

… I bet (a case of beers) that if there was no PCI DSS in place that every vendor would keep credit card details for all transactions for every customer forever, just in case. It is only now that they are forced to apply “pretty-good” security restrictions


Application vs. Database Encryption

There’s a bit of debate brewing in the comments on the latest post in our database encryption series. That series is meant to focus only on database encryption, so we weren’t planning to talk much about other options, but it’s an important issue. Here’s an old diagram I use a lot in presentations to describe the potential encryption layers.

What we find is that the higher up the stack you encrypt, the greater the overall protection (since the data stays encrypted through the rest of the layers), but this comes at the cost of increased complexity. It’s far easier to encrypt an entire hard drive than a single field in an application, at least in real-world implementations. By giving up granularity, you gain simplicity. For example, to encrypt the drive you don’t have to worry about access controls, tying in database or application users, and so on.

In an ideal world, encrypting sensitive data at the application layer is likely your best choice. Practically speaking, it’s not always possible, and may be implemented entirely wrong. It’s really freaking hard to design appropriate application-level encryption, even when you’re using crypto libraries and other adjuncts like external key management. Go read this post over at Matasano, or anything by Nate Lawson, if you want to start digging into the complexity of application encryption.

Database encryption is also really hard to get right, but it is sometimes slightly more practical than application encryption. When you have a complex, multi-tiered application with batch jobs, OLTP connections, and other components, it may be easier to encrypt at the DB level and manage access based on user accounts (including service accounts). That’s why we call this “user encryption” in our model. Keep in mind that if someone compromises a user account with access, any encryption is worthless.
Additional controls like application-level logic or database activity monitoring might be able to mitigate a portion of that risk, but once you lose the account you’re at least partially hosed. For retail/PCI kinds of transactions I prefer application encryption (done properly). For many of the users I work with that’s not an immediate option, and they at least need to start with some sort of database encryption (usually transparent/external) to deal with compliance and risk requirements. Application encryption isn’t a panacea – it can work well, but it brings additional complexity and is really easy to screw up. Use with caution.
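As a rough illustration of “the higher up the stack you encrypt, the longer the data stays protected,” here’s a minimal sketch of application-layer field encryption. The XOR cipher is a dependency-free stand-in, not real cryptography; in practice you’d use a vetted library and external key management, as discussed above.

```python
# Sketch: encrypting a single field at the application layer means every
# lower layer (ORM, database, filesystem, disk, backups) only ever sees
# ciphertext. The XOR "cipher" here is a toy stand-in -- NOT real crypto.

import base64

def toy_encrypt(plaintext: str, key: bytes) -> str:
    # Illustrative only: XOR with a repeating key, then base64-encode.
    data = plaintext.encode()
    xored = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    return base64.b64encode(xored).decode()

def toy_decrypt(token: str, key: bytes) -> str:
    xored = base64.b64decode(token)
    data = bytes(b ^ key[i % len(key)] for i, b in enumerate(xored))
    return data.decode()

KEY = b"app-layer-key"  # in real life: external key management, not a constant

# Application-layer ("user") encryption of one sensitive field.
record = {"name": "Alice", "card": toy_encrypt("4111-1111-1111-1111", KEY)}

# The database, its backups, and the disk only see the token...
assert record["card"] != "4111-1111-1111-1111"

# ...and only code holding the application key can recover the field.
assert toy_decrypt(record["card"], KEY) == "4111-1111-1111-1111"
```

The flip side, as the post notes, is complexity: this only protects the field if key management, access to the decrypting code, and the surrounding application logic are all handled correctly, which is exactly the part that’s hard to get right.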


The Laws of Emergency Medicine—Security Style

Thanks to some bad timing on the part of our new daughter, I managed to miss the window to refresh my EMT certification, and earned the privilege of spending two weekends in a refresher class. The class isn’t bad, but I’ve been riding this horse for nearly 20 years (and have the attention span of a garden gnome), so it’s more than a little boring. On the upside, it’s bringing back all sorts of fun memories from my days as a field paramedic. One of my favorite humorous/true anecdotes is the “Rules of Emergency Medicine”. I’ve decided to translate them into security speak:

All patients die… eventually. Security equivalent: You will be hacked… eventually. It sucks when you kill^H^H^H^Hfail to save a patient, but all you’re ever doing is delaying the inevitable. In the security world, you’ll get breached someday. Maybe not at this job, but it’s going to happen. Get over it, and make sure you also focus on what you need to do after you’re breached. React faster and better.

All bleeding stops… eventually. Security equivalent: If you don’t fix the problem, it will fix itself. You can play all the games you want, and sponsor all the pet projects you want, but if you don’t focus on the real threats, they’ll take care of your problems for you. Take vulnerability scanning – if it isn’t in your budget, don’t worry about it. I’m sure someone on the Internet will take care of it for you. This one also applies to management – if they want to ignore data breaches, web app security, or whatever… eventually it will take care of itself.

If you drop the baby, pick it up. Security equivalent: If you screw up, move on. None of us is perfect, and we all screw up on a regular basis. When something bad happens, rather than freaking out, it’s best to move on to the next task. Fix the mistake, and carry on. The key to this parable is to fix the problem, rather than all the hand-wringing/blame-pushing we tend to do when we make mistakes.
I think I’m inspired to write a new presentation – “The Firefighter’s Guide to Data Security”.


Hackers 1, Marketing 0

You ever watch a movie or TV show where you know the ending, but you keep watching in suspense to find out how it actually happens? That’s how I felt when I read this:

Break Into My Email Account and Win $10,000. StrongWebmail.com is offering $10,000 to the first person who breaks into our CEO’s email account… and to make things easier, we’re giving you his username and password.

No surprise – it only took a few days for this story to break:

On Thursday, a group of security researchers claimed to have won the contest, which challenged hackers to break into the Web mail account of StrongWebmail CEO Darren Berkovitz and report back details from his June 26 calendar entry. The hackers, led by Secure Science Chief Scientist Lance James and security researchers Aviv Raff and Mike Bailey, provided details from Berkovitz’s calendar to IDG News Service. In an interview, Berkovitz confirmed those details were from his account.

Reading deeper, they say it was a cross-site scripting attack. However, Berkovitz could not confirm that the hackers had actually won the prize. He said he would need to check that the hackers had abided by the contest rules, adding, “if someone did it, we’ll kind of put our heads down.”

Silly rules – this is good enough for me. (Thanks to @jeremiahg for the pointer.)


Boaz Nails It- The Encryption Dilemma

Boaz Gelbord wrote a thoughtful response (as did Mike Andrews) to my post earlier this week on the state of web application and data security. In it was one key tidbit on encryption:

The truth is that you just don’t mitigate that much risk by encrypting files at rest in a reasonably secure environment. Of course if a random account or service is compromised on a server, having those database files encrypted would sure come in handy. But for your database or file folder encryption to actually save you from anything, some other control needs to fail.

I wouldn’t say this is always true, but it’s generally true. In fact, this situation was the inspiration behind the Three Laws of Data Encryption I wrote a few years ago. The thing is, access controls work really freaking well, and the only reasons to use encryption instead of them are if the data is moving, or if you need to somehow restrict the data with greater granularity than is possible with access controls. For most systems, this means protecting data from administrators, since you can manage everyone else with access controls. Also keep in mind that many current data encryption systems tie directly to the user’s authentication, and are thus just as prone to compromised user accounts as access controls are. Again, not true in all cases, but true in many.

The first step in encryption is to know what threat you are protecting against, and whether other controls would be just as effective. Seriously, we toss encryption around as the answer all the time, without knowing what the question is. (My favorite question/answer exchange? Me: Why are you encrypting? Them: To protect against hackers. Me: Um. Cool. You have a bathroom anywhere?)
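The “know your threat first” logic above can be sketched as a toy lookup. The threat categories and recommendations paraphrase the post; this is an illustration of the reasoning, not a complete decision framework.

```python
# Toy encoding of the post's decision logic: access controls handle most
# live-system threats, and encryption only earns its keep in a few
# specific cases. Categories/recommendations paraphrase the post.

def pick_control(threat: str) -> str:
    recommendations = {
        # Access controls work really well for live systems, and
        # auth-tied encryption falls along with a compromised account.
        "compromised user account": "access controls (auth-tied "
                                    "encryption fails here too)",
        # The cases where encryption is actually the answer:
        "data in motion": "encryption",
        "malicious administrator": "encryption, with keys held outside "
                                   "the admin's reach",
    }
    return recommendations.get(
        threat, "define the threat before choosing a control")

# The post's favorite non-answer:
print(pick_control("hackers"))  # -> define the threat before choosing a control
```

Trivial as code, but it captures the point: “hackers” isn’t a threat model, so it can’t tell you whether encryption beats the access controls you already have.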


Join the Open Patch Management Survey—Project Quant

Are you tired of all those BS vendor surveys designed to sell products, which never even show you the raw data? Yeah, us too. Today we’re taking the next big step for Project Quant by launching an open survey on patch management. Our goal here is to gain an understanding of what people are really doing with regard to patch management, to better align the metrics model with real practices.

We’re doing something different with this survey: all the results will be made public. We don’t mean just the summary results, but the raw data (minus any private or identifiable information that could reveal the source person or organization). Once we hit 100 responses we will release the data in spreadsheet formats. Then, either every week or for every 100 additional responses, we will release updated data. We don’t plan on closing this for quite some time, but as with most surveys we expect an initial rush of responses and want to get the data out there quickly. As with all our material, the results will be licensed under Creative Commons. We will, of course, provide our own analysis, but we think it’s important for everyone to be able to evaluate the results for themselves.

All questions are optional, but the more you complete, the more accurate the results will be. In two spots we ask if you are open to a direct interview, which we will start scheduling right away. Please spread the word far and wide, since the more responses we collect, the more useful the results. If you fill out the survey as a result of reading this post, please use SECUROSISBLOG as the registration code (it helps us figure out which channels are working best). If you came to this post via Twitter, use TWITTER as the reg code. This won’t affect the results, but we think it might be interesting to track how people found the survey, and which social media channels are more effective. Thanks for participating, and click here to fill it out.
(This is a SurveyMonkey survey, so we can’t disable the JavaScript like we do for everything here on the main site. Sorry.)


Five Ways Apple Can Improve Their Security Program

This is an article I’ve been thinking about for a long time. Sure, we security folks seem to love bashing Apple, but I thought it would be interesting to take a more constructive approach. From the TidBITS article:

With the impending release of the next versions of both Mac OS X and the iPhone operating system, it seems a good time to evaluate how Apple could improve their security program. Rather than focusing on narrow issues of specific vulnerabilities or incidents, or offering mere criticism, I humbly present a few suggestions on how Apple can become a leader in consumer computing security over the long haul.

The short version of the suggestions:

  • Appoint and empower a CSO
  • Adopt a secure software development program
  • Establish a security response team
  • Manage vulnerabilities in included third-party software
  • Complete the implementation of anti-exploitation technologies


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE, with no obligations for payment. Once content is complete, LICENSEE will have a 3-day review period to determine whether the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment, and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts, and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content, and the right to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.