
Friday Summary – November 20, 2009

Ironically, I was calling to activate my new credit card yesterday – as the number was considered compromised by BofA – when I read about the credit card scam in Spain. Very little information is coming out about the EU Credit Card Breach. It seems to be Visa specific; some 100k cards are being recalled in Germany, and police efforts are focused in Spain. And it seems every news agency and security blog in the country is reliant on this tiny amount of data provided by the BBC. Given this is a multi-country effort, I would have bet some tangible news would have slipped out somewhere, but nothing more than these nuggets of almost nothing yet. On the home front it is pretty much the same: no news of what happened. I was pretty sure that BofA recalling the Visa card meant a serious breach, because this is a card I have not used in more than a year. Yes, I am making some assumptions here, but this was not an issue with skimming at a local restaurant or gas station. So someone was breached. Going back through two years of records of very limited use, there are two large firms who had this number in their databases (without my consent), and I am guessing one of them leaked it. This is not directly related to the Citigroup/BofA breach. I was trying to find out what their disclosure responsibilities were here in Arizona, but you could drive a big truck full o' sensitive data through the holes in the Breach Notification Bill. And the BofA Disclosure Page basically says "we don't know 'nuthin 'bout 'nuthin'", but don't worry, your money will be returned to you. Let's hope the Europeans get more data than we do.

On a more lighthearted note, this video is pretty funny, but I bring it up because I want a third opinion. Do you think a crime was committed? The Mogull pointed something out to me after I watched this … that the girl in the white shirt appears to shoplift in the video. I was skeptical, but I think he's right. At 2:14 in, the girl drops a shopping bag off her shoulder, grabs something off the table, and places it into the bag. She then shoves what looks like a pad of paper on top and pulls the strap back onto her shoulder, dancing the entire time. She even performs this maneuver the moment the rest of the 'dance troupe' has their backs turned. She is one of the few without a badge, so I assume she was not an employee. Anyway, the whole thing is a little like a car wreck … it's hard to look away.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's post on A Peek at Transparent Database Encryption.
  • Rich quoted on the InZero launch.
  • The dog ate my podcast. No, really! Sorry Martin!
  • Adrian on Encryption 'Gotchas' that hinder implementation. (Podcast)
  • Rich and Adrian on Truth, Lies and Fiction with Data Encryption.

Favorite Securosis Posts

  • Rich: The Anonymization of Losses: A Market Forces Failure.
  • Adrian: Why You Should Take the Adobe Flash Origin Issues Seriously.
  • Meier: Microsoft Encryption and the Cloud.
  • Mort: Ur C0de Sux.

Other Securosis Posts

  • What the Renegotiation Bug Means to You
  • Critical Infrastructure, 60 Minutes, and Missing the Point
  • Three acquisitions, two visions
  • ADMP Market Acceptance
  • Why Successful Risk Management is Still a Failure
  • New Thoughts On The CIO Is Your Friend

Favorite Outside Posts

  • Rich: Not security-specific, but lasers on fighter jets!
  • Adrian: Not really a single post, but a collection of posts on Microsoft Azure. It's probably just me, but this feels like 1997, when MS did an about-face on their acceptance of the Internet … only this time they are a little late to the Cloud party.
  • Mort: Google Books Settlement 2.0: Evaluating the Pros and Cons.
  • Meier: Whose customers are they?
  • Pepper: Researcher busts into Twitter via SSL reneg hole.

Top News and Posts

  • Verizon admits employees sold private data.
  • Most security products fail to perform.
  • Good analysis by Larry Walsh on the Fortinet IPO and some market risks, and for those of you tracking these things, the current stock price.
  • Incite Rides Again.
  • NIST updates infosec guidelines.
  • Four in UK sentenced in connection to banking trojan.
  • Inside the botnet hunters.
  • Metasploit 3.3 released.
  • Hoff launches A6 working group for cloud audits/assessments.
  • Brazilian power company hacked (for real this time).
  • Background checks in an iPhone app.
  • Pentesting Adobe Flex Applications with a Custom AMF Client.
  • Customers have a unique way of capturing your product's nuances.

Blog Comment of the Week

It was hard to pick this week, but this week's best comment comes from our own David Mortman, in response to David Meier's post What the Renegotiation Bug Means to You:

Okay I tried it:

openssl s_client -connect ebay.com:443 -ssl2
New, SSLv2, Cipher is DES-CBC3-MD5
Server public key is 1024 bit
SSL-Session:
    Protocol  : SSLv2
    Cipher    : DES-CBC3-MD5
    Session-ID: D5F3FA4A3750154014CE495E96E36139
    Session-ID-ctx:
    Master-Key: 35F5ED93B6FC890AA84EBFCE849E9EE54919C8D3FA38D35F
    Key-Arg   : 63826612A872A6AD
    Start Time: 1258654301
    Timeout   : 300 (sec)
    Verify return code: 21 (unable to verify the first certificate)

So something thinks it can speak SSLv2; however, if I force my browser to use only SSLv2 it loops before dying, so there's some business logic stopping it. On the other hand, yahoo and hotmail/live.com both allow SSLv2 connections no problem, as do twitter and lenovo. Btw, so do Bank of America and Fidelity. So while clearly some folks are getting it (because of PCI?), there are some major players who don't. Btw, even the security vendors don't do it right: McAfee allows SSLv2-only connections (Symantec doesn't), as does HiTrust (gotta love an organization dedicated to security that screws it up). And my all-time favorite, the IRS allows SSLv2 connections and has an invalid cert. So lots of potentially vulnerable sites, which in general make MitM attacks much easier, renegotiation bug or not.


Microsoft Encryption and the Cloud

I was reading PC Magazine's recap of Ray Ozzie's announcement of the Azure cloud computing platform. The vision of Azure, said Ozzie, is "… three screens and a cloud," meaning Internet-based data and software that plays equally well on PCs, mobile devices, and TVs. I am already at a stage where almost everything I want to do on the road I can accomplish with my smartphone; any heavy lifting waits for the desktop. I am sure we will quickly reach a point where there is no longer a substantial barrier, and I can perform most tasks (with varying degrees of agility) with whatever device I have handy.

"We're moving into an era of solutions that are experienced by users across PCs, phones and the Web, and that are delivered from datacenters we refer to as private clouds and public clouds."

But I read this just after combing through the BitLocker specifications, and the old-school model and the new cloud vision seemed at odds. With cloud computing we are going to see data encryption become common. We are going to be pushing data into the cloud, where we do not know what security will be provided, and we may not have thoroughly screened the contents prior to moving it. Encryption, especially when the data is stored separately from the keys and encryption engine, is a very good approach to keeping data private and secure. But given the generic nature of the computing infrastructure, the solutions will need to be flexible enough to support many different environments. Microsoft's data security solution set includes several ways to encrypt data:

  • BitLocker is available for full drive encryption on laptops and workstations.
  • Windows Mobile Device Manager handles security for mobile storage and mobile application data encryption.
  • Exchange can manage email and TLS encryption.
  • SQL Server offers transparent and API-level encryption.

But BitLocker's architecture seems a little odd when compared to the others, especially in light of the cloud-based vision. It has hardware and BIOS requirements to run. BitLocker's key management, key recovery, and backup interfaces are different from those for mobile devices and applications. BitLocker's architecture does not seem like it could be stretched to support other mobile devices. Given that this is a major new launch, something a little more platform-neutral would make sense. If you are an IT manager, do you care? Is it acceptable to you? Does your device security belong to a different group than platform security? The offerings seem scattered to me. Rich does not see this as an issue, as each solves a specific problem relevant to the device in question, and key management is localized. I would love to hear your thoughts on this.

I also learned that there is no current plan for Transparent Database Encryption with SQL Azure. That means developers using SQL Azure who want data encryption will need to take on the burden at the application level. This is fine, provided your key management and encryption engine are not in the cloud. But as this is geared for use with the Azure application platform, you will probably have those in the cloud as well. Be careful.
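If you do end up rolling your own application-layer encryption for SQL Azure, the core idea is simple: encrypt before the data leaves your environment, and keep the key somewhere the cloud provider never sees. The following is only a rough sketch using the OpenSSL command line – the file names and key location are placeholders, not a recommendation for any specific deployment:

  # Generate a key and keep it on storage you control, outside the cloud
  openssl rand -hex 32 > /secure/local/customer.key

  # Encrypt the record before it is written to the cloud database
  openssl enc -aes-256-cbc -salt -in customer_record.json \
      -out customer_record.enc -pass file:/secure/local/customer.key

  # Only customer_record.enc goes into the cloud; decryption happens back
  # on your side, with the same locally held key
  openssl enc -d -aes-256-cbc -in customer_record.enc \
      -out customer_record.json -pass file:/secure/local/customer.key

The specific cipher is beside the point; what matters is that the key and the encryption engine stay with the application owner, so only ciphertext lives in the cloud.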


Three acquisitions, two visions

I had to laugh when I read Alan Shimel's post "Where does Tipping Point fit in the post-3Com ProCurve?" His comment:

I found it insightful that nowhere among all of this did security or Tipping Point get a mention at all. Does HP realize it is part of this deal?

Which was exactly what I was thinking when reading the press releases. One of 3Com's three pillars is completely absent from the HP press clippings I've come across in the last couple of days. Usually there is some mention of everything, to assuage any fears among the employees and avoid having half the headcount leave for 'new opportunities'. And the product line does not include the all-important cloud or SaaS-based models so many firms are looking for, so selling it off is a plausible course of action.

It was easy to see why Barracuda purchased Purewire: it filled the biggest hole in their product line. And the entire market has been moving to a hybrid model, outsourcing many of the resource-intensive features and functions while keeping the core email and web security functions in house. This allows customers to reduce cost with the SaaS service and increase the longevity of existing investments. Cisco's acquisition of ScanSafe is similar in that it provides customers with a hybrid model to keep existing IronPort customers happy, as well as a pure SaaS web security offering. I could see this becoming a standard security option for cloud-based services, ultimately a cloud component, and part of a much larger vision than Barracuda's.

Which gets me back to Tipping Point and Alan's question: "Will they just spin it out, so as not to upset some of their security partners?" My guess is not. If I were king, I would roll this up with the EDS division acquired last year for a comprehensive managed security services offering. Tipping Point is well entrenched and respected as a product, and both do a lot of business with the government. My guess is this is what they will do. But they need to have the engineering team working on a SaaS offering, and I would like to see them leverage their content analysis capabilities more, and perhaps offer what BlueLane did for VMware.


ADMP Market Acceptance

Rich and I were on a data security Q&A podcast today. I was surprised when the audience asked questions about Application & Database Monitoring and Protection (ADMP), as it was not on our agenda, nor have we written about it in the last year. When Rich first sketched out the concept, he listed the specific market forces behind ADMP and presented a couple of ADMP models, but those were really discussions of the technical challenges to management and security, and the synergies we projected if the pieces were linked. When we were asked about ADMP today, I was able to name a half dozen vendors implementing parts of the model, each with customers who have deployed their solutions. ADMP is no longer a philosophical discussion of technical synergies but a reality, due to customer acceptance.

I see the evolution of ADMP being very similar to what happened with web and email security. Just a couple of years ago there was a sharp division between email security and web security vendors. That market has evolved from the point solutions of email security, anti-virus, email content security, anti-malware, web content filtering, URL filtering, TLS, and gateway services into single platforms. In customers' minds the problem is monitoring and controlling how employees use the Internet. The evolution of Symantec, Websense, Proofpoint, and Barracuda are all examples, and it is nearly impossible for any collection of point technologies to compete with these unified platforms.

ADMP is about monitoring and controlling use of web applications. A year ago I would have made the case for ADMP on its technical benefits: having all the products under one management interface, the ability to write one policy to direct multiple security functions, the ability for discovery by one component to configure other features, and the ability to select the most appropriate tool or feature to address a threat, or even provide some redundancy. ADMP became a reality when customers began viewing web application monitoring and control as a single problem. Successful relationships between database activity monitoring vendors, web app firewall companies, pen testers, and application assessment firms are showing value and customer acceptance. We have a long, long way to go in linking these technologies together into a robust solution, but the market has evolved a lot over the last 14 months.


Ur C0de Sux

I was working at Unisys two decades ago when I first got into the discussion of what traits, characteristics, or skills to look for in the programmer candidates we interviewed. One of the elder team members shocked me when he said he tried to hire musicians regardless of prior programming experience. His feeling was that anyone could learn a language, but people who wrote music understood composition and flow, far harder skills to teach. At the time I thought I understood what he meant: that good code has very little to do with individual statements or the programming language used. And the people he hired did make mistakes with the language, but their applications were well thought out. Still, it took 10 years before I fully grasped why this approach worked.

I got to thinking about this today when Rich forwarded me the link to Esther Schindler's post "If the comments are ugly, the code is ugly".

Perhaps my opinion is colored by my own role as a writer and editor, but I firmly believe that if you can't take the time to learn the syntax rules of English (including "its" versus "it's" and "your" versus "you're"), I don't believe you can be any more conscientious at writing code that follows the rules. If you are sloppy in your comments, I expect sloppiness in the code.

Thoughtful and well written, but horseshit nonetheless! Worse, this is a red herring. The quality of code lies in its suitability to perform the task it was designed to do. The goal should not be to please a spell checker. Like it or not, there are very good coders who are terrible at putting comments into the code, and what comments they provide are gibberish. They think like coders. They don't think like English majors. And yes, I am someone who writes like English was my second language, and codes like Java was my first. I am just more comfortable with the rules and uses of the latter. We call Java and C++ 'languages', which seems to invite comparison, or causes some to equate the two things. But make no mistake: trying to extrapolate some common metric of quality is simply nuts. It is both a terrible premise, and the wrong perspective for judging a software developer's skills. Any relevance of human language skill to code quality is purely accidental.

I have gotten to the point in my career where a lack of comments in code can mean the code is of higher quality, not lower. Why? Likely a document-first, code-later process was followed. When I started working with seasoned architects for the first time, we documented everything long before any code was written. And we had an entire hierarchy of documents, with the first layer covering the goals of the project, the second layer covering the major architectural components and data flow, the third layer covering design issues and choices, and finally documentation at the object level. These documents were checked into the source code control system along with the code objects, for reference during development. There were fewer comments in the code, but a lot more information was readily available.

Good programs may have spelling errors in the comments. They may not have comments at all. They may have one or two logic flaws. Mostly irrelevant. I call the above post a red herring because it tries to judge software quality using spelling as a metric, as opposed to more relevant attributes such as:

  • The number of bugs in any given module (on a per-developer basis if I can tell).
  • The complexity or effort required to fix these bugs.
  • How closely the code matches the design specifications.
  • Uptime during stress testing.
  • How difficult it is to alter or add functionality not provided for in the original design.
  • The inclusion of debugging flags and tools.
  • The inclusion of test cases with source code.

The number of bugs is far more likely to be an indicator of sloppiness, misreading the design specification, bad assumptions, or bogus use cases. The complexity of the fix usually tells me, especially with new code, whether the error was a simple mistake or a major screw-up. Logic errors need to be viewed in the same way. Finally, test cases and debugging built into the code are a significant indicator that the coder was thinking about the important data points in the code. Witnessing code behavior has been far more helpful for debugging code than inline comments. Finding 'breadcrumbs' and debugging flags is a better indication of a skilled coder than concise, grammatically correct comments.

I know some very good architects whose code and comments are sloppy. There are a number of reasons for this, primarily that coding is no longer their primary job. Most follow coding practices because, heck, they wrote them. And if they are responsible for peer review, this is a form of self-preservation and education for their reviewees. But their most important skill is an understanding of business goals, architecture, and 4GL design. These are the people I want laying out my object models. These are the people I want stubbing out objects and prototyping workflow. These are the people I want choosing tools and platforms. Attention to detail is a prized attribute, but some details are more important than others. The better code I have seen comes from those who have the big picture in mind, not those who fuss over coding standards. Comments save time if professional code review (outsourced or peer) is being used, but a design specification is more important than inline comments.

There is another angle to consider here: coding in the open source community is a bit different from working for "The Man", because the eyes of your peers are on you. Not just one or two co-workers, but an entire community. Peer pressure is a great way to get better quality code. Misspellings will earn you a few private email messages pointing out your error,


Layman’s view of X.509

A couple weeks ago, we began an internal discussion about DNS security and X.509 certificates. It dawned on me that those of you who have never worked with certificates may not understand what they are or what they are for. Sure, you can go to the X.509 Wiki, where you get the rules for usage and certificate structure, but that's a little like trying to figure out football by reading the rule book. If you are asking, "What the heck is it and what is it used for?", you are not alone.

An X.509 certificate is used to make an authoritative statement about something. A real-life equivalent would be "Hi, I'm David, and I live at 555 Main Street." The certificate holder presents it to someone/something in order to prove they are who they say they are, in order to establish trust. X.509 and other certificates are useful because the certificate provides the necessary information to validate the presenter's claim and to authenticate the certificate itself. Like a driver's license with a hologram, but much better. The recipient examines the certificate's contents to decide if the presenter is who they say they are, and then whether to trust them with some privilege.

Certificates are used primarily to establish trust on the web, and rely heavily on cryptography to provide the built-in validation. Certificates are always signed by a chain of authority. If the root of the chain is trusted, the user or application can extend that level of trust to some other domain/server/user. If the recipient doesn't already trust the top signing authority, the certificate is ignored and no trust is established. In a way, an X.509 certificate is a basic embodiment of data-centric security, as it contains both information and some rules of use. Most certificates state within themselves what they are used for, and yes, they can be used for purposes other than validating web site identity/ownership, but in practice we don't see diverse uses of X.509 certificates. You will hear that X.509 is an old format, and that it's not particularly flexible or adaptable. All of which is true, and why we don't see it used very often in different contexts. Considering that X.509 certificates are used primarily for network security, but were designed a decade before most people had even heard of the Internet, they have worked considerably better than we had any right to expect.
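If you want to see one of these 'authoritative statements' for yourself, the OpenSSL command line will dump the certificate any TLS-enabled site presents. A quick sketch – example.com and the CA bundle path below are placeholders, so adjust for your own platform:

  # Grab the certificate the server presents
  openssl s_client -connect example.com:443 </dev/null 2>/dev/null | openssl x509 > server.pem

  # Who does it claim to be, who vouched for that claim, and how long is it valid?
  openssl x509 -in server.pem -noout -subject -issuer -dates

  # Walk the chain of authority back to a root you already trust
  # (you may also need the intermediate certificates the server sent)
  openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt server.pem

The subject is the "Hi, I'm David" part, the issuer is the authority vouching for it, and the verify step is the recipient deciding whether it trusts the top of that chain.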


Google Dashboard Comments

I was playing around with Google Dashboard this morning. After reading the cnet post on Google's Data Liberation Project, and Google's announcement of DataLiberation.org, I could not help but get excited about what they were doing. Trying to be 'open' and 'liberate' data sounds great!

Many web services make it difficult to leave their services – you have to pay them for exporting your data, or jump through all sorts of technical hoops – for example, exporting your photos one by one, versus all at once. We believe that users – not products – own their data, and should be able to quickly and easily take that data out of any product without a hassle. We'd rather have loyal users who use Google products because they're innovative – not because they lock users in. You can think of this as a long-term strategy to retain loyal users, rather than the short-term strategy of making it hard for people to leave. We've already liberated over half of all Google products, from our popular blogging platform Blogger, to our email service Gmail, and Google developer tools including App Engine. In the upcoming months, we also plan to liberate Google Sites and Google Docs (batch-export).

Awesome! I jumped right in, as I had two very specific things to address. I wanted to see if I could remove some information from Google that would change Google search behavior. Those issues are:

1. After I responded to a friend's email inquiry a few months ago (sent to my Gmail account) regarding a piece of electronics equipment, I started to see ads for that product in my search results. I have no interest in the product and it does not belong in my search results.

2. I do a lot of driving and I use Google and Amazon maps. Google has started altering my route endpoints arbitrarily. I own a home, but the address is not registered as my home address anywhere except tax records, and it has never been used in any online search, much less a Google map search (for very specific reasons). But Google Maps has been altering the endpoints of my routes to direct me to this property; it's not an address I want to travel to, and I did not enter it. How Google found it and then associated it with me is interesting in and of itself, but to arbitrarily assume I want to go there is both annoying and disconcerting.

So I plunged right in and found: zero. Nothing that showed any of that data, nor how it was being used. Oh well. I guess my expectations were far too high. So I took a step back and looked at exactly what Google is offering. Digging in, what does the concept of "liberated" data get me? To "… easily take that data out of any product without a hassle" is a nice idea. Medical records, photos, and social media site contents would be great to have copies of. But making digital copies is trivial, and I don't think Google is talking about removal from its products or services, but rather about taking a copy and importing that copy into another app or service. Looking at the Dashboard, control and management are absent. To put this into context, when I think of data management, I think of the Data Security Lifecycle concept that Rich and I present at conferences. Data ownership and management are totally different from getting a copy. Most people will read this 'take' in a non-digital, real-world analog sense, meaning to 'remove'. Google is using the digital sense, where 'take' is closer to 'propagate'. Furthermore, I am not sure just what exactly they mean by "an open web run on open standards". Is Google offering an open data format? An open API to control or manage data? Or do they mean all web data being open to web search (Google), and available to as many applications (Google) and services (Google) as you care to use?

It sounded so good, but unfortunately there does not seem to be anything of substance behind the press releases! That's why I think this is all window dressing. Call me a skeptical security guy, but it looks like Google is taking a page out of Microsoft's handbook, in that they are creating a tool to combat user fears and concerns, while data storage and management become tied more closely to Google, not less. Taking data from one place to another provides additional attributes and context that increase its value. Google remains in control, and it will be very difficult to argue about who owns that data.


Compliance vs. Security

Reading Bill Brenner's PCI Security a Devil, 'Like No Child Left Behind', I had the impression Brenner's summary of Joshua Corman's presentation would be: Joshua was %#!*$ crazy. In a nutshell:

"Organizations have made PCI DSS and compliance in general the basis of their information security policies," he said. "They're basing security on sloppy logic from Visa and MasterCard, and in the process are ignoring some very bad state-sponsored threats. As a community, we have not evolved at all."

You have to read the whole article to fully grasp Corman's nuances, and note that some of the inflammatory additions seem to be Bill's, rather than direct quotes from Joshua. Still, while there are points I agree with, Corman seems to have connected the dots arbitrarily. Not only do I not see general security policies being based on compliance initiatives, I don't buy the argument that compliance comes at the expense of security. Is there overlap? Absolutely. But the recognized lack of security is motivated by completely different forces. In the presence of evidence that many organizations are doing the absolute minimum to comply with regulations, how can you suppose that they would voluntarily invest in security without compliance requirements? Why would companies take a risk-based approach to spending efficiently, when they really don't want to spend at all? To me, companies embody the approach of the Three Wise Monkeys: "See no evil. Hear no evil. Speak no evil." Regulations espouse the ideals of safety, security, and efficacy, while companies want tasks performed cheaply, quickly, and easily. Regulation is supposed to alter the way companies do business, providing guidance on how to realize the ideal. Companies often handle compliance as just another task, and try to address it from within the same processes the compliance mandate is designed to reform. If companies could be trusted to come close to the ideals and intentions, we would not have auditors.

Part of Corman's presentation seems to be a derivative of his 8 Dirty Secrets presentation (summarized), where part 6 discusses how "Compliance Threatens Security". Do I think that security product vendors are "… offering products that do everything from offer PCI compliance out of the box to ultimate cure-alls for healthcare entities coping with the demands of HIPAA"? Absolutely. But this was the cheapest, fastest, and easiest way to comply. Take Sarbanes-Oxley as an example: products like Database Activity Monitoring and Log Management are the only way to achieve some of the required controls over automated financial systems that process millions of transactions a day. The fact that these unique data collection and analysis capabilities came from a security vendor is incidental. The security investment was made to satisfy a compliance mandate, not for the sake of security. The fact that the tools provide security as well is a by-product, often considered unimportant or incidental by many vendors and customers.

If I were going to create my own Dirty Little Secrets list, I would say most companies treat security as "Don't Ask, Don't Tell". Security tools that are bought to fulfill compliance have a bad habit of illuminating threats companies really don't want to know about. They want to pass their compliance audits and not worry about the other problems discovered … those just lead to additional expenses. If you doubt my cynical perspective, look at how most firms react when told their corporate network is host to 5,000 bots that just commenced a DDoS attack on another company: they tend to threaten suit for invasion of privacy or libel. Another example we see is that a high percentage of companies have web application firewalls for PCI, but run them as monitors rather than proxies! They need a WAF to comply with PCI, so they bought one, but no one mandated that they use it effectively. Security professionals really care about security, but executive management cares precisely as much as legal and finance tell them to. I think security is a really hard problem, and far too often our attempts at security are flawed. I just don't see any evidence that risk management is being subjugated to compliance.


Major SSL Flaw Discovered

A major flaw has been found that enables man-in-the-middle attacks against SSL connections. Several other media outlets are reporting on it, but Kelly Jackson Higgins has a nice summary over at Dark Reading, and betanews has a much more detailed discussion. According to Marsh Ray at PhoneFactor:

"The bug results in a set of related attacks that allow a man-in-the-middle to do bad things to your SSL/TLS connection. The (attacker) in the middle is able to inject his own chosen text into what your application believes is an encrypted, secure communications channel," says Ray, a senior software development engineer for PhoneFactor. "This has implications for all protocols that run on top of SSL/TLS, such as HTTPS … What's different with this (bug) is that both the client and server need to be patched to restore the full security guarantees that are expected with TLS."

The communication process two parties go through to establish a trusted connection inadvertently leaves some response information in clear text during part of the dialogue. Basically, when they agree to change some of the session attributes, the protocol leaves some information exposed:

"Methods exist for one or the other party to request a change in the parameters of their transactions, perhaps to switch to a different, stronger cipher suite … In a situation similar to someone's e-mail application replying to your e-mail with a message whose subject line begins RE:, the conversation between client and server over what to change to contains a reference to the request for renegotiation – the request that had, when sent earlier, been encrypted. Now it's not, and that's the problem."

The fix for this should be relatively straightforward and, from what I understand, should be available within the next few days. The issue becomes deploying a patch to a piece of code used for just about any secure communication session. So plan on patching a lot of applications in the coming weeks! PhoneFactor named their effort 'Project Mogul', which has nothing to do with The Mogull, so far as I know.
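In the meantime, if you are curious whether one of your own servers still honors client-initiated renegotiation – the behavior these attacks lean on – openssl s_client offers a quick, crude check. This is only a sketch, the hostname is a placeholder, and you should only point it at servers you are responsible for:

  # Open an interactive TLS session to the server in question
  openssl s_client -connect www.example.com:443

  # Once connected, send a line containing only the letter:
  R
  # 'R' tells s_client to request renegotiation. If the handshake completes,
  # the server still accepts client-initiated renegotiation; a patched or
  # hardened server will reject the request or drop the connection.

It is a blunt instrument – accepting renegotiation is not by itself proof of exploitability – but it is a reasonable first look while you wait for vendor patches.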


Friday Summary – November 6, 2009

When I was in college, I figured every professor assumed I had only one class: the one they were teaching. They seemed to assume I dedicated days and nights solely to their coursework, and was no less interested in the subject they had dedicated their lives to. And they allocated my time accordingly, giving me enough work to consume 40 hours a week. But I was taking 5 classes! WTF! Berkeley was especially bad this way. By noon each Monday I felt like I was a week behind the curve. For the first few weeks I was quite angry about the selfishness of those professors: how could they possibly be so callous as to give us far more work than any two people could perform? Were they encouraging shoddy work? Were they nuts?!? After a few weeks I grudgingly acknowledged that the profs were not in their positions because they were stupid or ignorant, but because they were smart. Well, maybe one was stupid and ignorant, but most of them were really freakin' intelligent. And consciously or not, this overburdening forced you to work faster, prioritize, and be more efficient. Handling an overburden of requirements is a skill that has served me better than the subject matter of any one of those courses. I am not talking about time management here, like some motivational seminar might teach; I am talking about strategy. When you have 5 times more work than you can do, tasks become self-selecting. You do those things that you must do to survive. If you're lucky, some of the things that you want to do overlap with what must be done. You learn to select the opportunities that are most in line with success, and not look back when you walk away from good ideas that don't support your goals or the requirements on you. Your choices will differ from your peers', but you make choices and you do the best you can. For those of you who have participated in startups, I expect you have a full appreciation of this viewpoint.

That's the way I approach my project work here, and my goal is that our research makes it easier for you to do this as well. With Rich and me being the only full-time guys here, we go through this process a lot. There are simply not enough hours in the day to do some things that look like great ideas at first. On the bright side, it forces us to re-evaluate projects and come up with much more streamlined versions, which improves the quality and the usability of the research. And frankly I want to get away from this computer and, I dunno, have a life, so it's important on several levels. A big portion of this blog's readers are not security professionals, but deal with an aspect of security in their daily jobs. They don't necessarily want to be experts, just to understand how to find answers to their security questions and get the job done. This is a bit of a tease, but as a result of viewing our research calendar in this light, we are reconsidering what we had planned to create. In the coming weeks we are going to be adding a lot of new material to the research library, fitting our new, more streamlined approach, as our plans grew too big for us to handle. More importantly, it was too cumbersome for part-time security practitioners to benefit from.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich's presentation on Pragmatic Data Security and Pragmatic Database Security from Information Security Decisions 2009.
  • Adrian's Dark Reading post What DAM Does.
  • Rich and Martin on The Network Security Podcast, Episode 172.

Favorite Securosis Posts

  • Rich and Adrian: Myths Surrounding Databases in Virtual Environments.
  • Meier & Mort: Verizon Has Most of the Web Application Security Pieces… But Do They Know It?

Other Securosis Posts

  • Major SSL Flaw Discovered

Favorite Outside Posts

  • Rich & Mortman: Gunnar's Thinking Person's Guide to the Cloud series. It's 4 parts, but excellent.
  • Adrian: The Thinking Person's Guide to the Cloud, part 3 B. For the "just too busy" theme…
  • Meier: Jailbroken iPhones hacked via UMTS network – This is my favorite simply because the 'hacker' publicly apologized after his PayPal account was removed.
  • Chris: Windows 7 vulnerable to 8 of 10 Viruses.

Top News and Posts

  • Google Dashboard lets you control some of your own data. I've already cleaned up mine – will be interesting to see how complete this really is.
  • Hypervisor-Based Tool for Blocking Rootkits.
  • Browser cookies allow attackers to widen attack space.
  • Josh Corman: PCI Security a Devil, 'Like No Child Left Behind'.
  • Ivan Arce on Talk the Walk, an interesting perspective on our use of language in security.
  • Elections system is pulled from IBM data center contract in Texas. 88% of the 27 agencies involved with the master contract are dissatisfied with IBM… those be bad numbers.
  • Gartner's magic hydrant.
  • Anti-Counterfeiting Trade Agreement and some commentary.
  • Money Mule Move Mo' Money.
  • Cracking Password in the Cloud.
  • Shimmy … Solo.
  • OK, it's finance, not security, but to echo Gunnar Peterson's post, here is a ridiculously good interview with Charlie Munger. The video actually got me to change several long-held opinions regarding the current financial crisis in an elegant and disarming way.
  • Cross-subdomain Cookie Attacks.
  • Man Sues Over Leaky Baby Monitor.
  • …and obviously: Renegotiating TLS.

Blog Comment of the Week

This week's best comment comes from Stacy Shelley, in response to Verizon Has Most of the Web Application Security Pieces… But Do They Know It?:

Hi Rich – Yes, SecureWorks offers managed WAF and web app scanning services. We also have the capability to leverage the web app scanning data in the management of WAF policies. Our Web App Sec services align pretty well with the components you guys cover in your "Building a Web Application Security Program" paper. Our Consulting group has been doing web app pen testing and code audits for a few years now. In the spring, we launched the managed


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.