Incite 4/6/2011: Do Work

We spent last weekend up north visiting friends and family while the kids are on Spring Break. We decided to surprise them on Sunday by going to a baseball game. It was opening weekend and our home team was in town. We got cheap seats in the upper deck, but throughout the game we kept moving downwards, and by the 9th inning we were literally in the front row on the dugout. The Boss turned to me and asked if the kids had any idea how lucky they are. Yeah, right.

And that’s a huge problem for me. Given a lot of luck and a little talent, I make a pretty good living, which means my kids can do things that weren’t possible for me growing up. But where do you draw the line? You want the kids to have great experiences, but you also want them to understand the work involved to provide those experiences.

The best answer I have right now is to do work. I think I saw Chris Nickerson say that on Twitter one day and it resonated with me. It’s basically leading by example. I get up every morning and do work. Even though most of the time what I do all day doesn’t feel like work. The kids know that I work hard, and I’m good about reminding them when they get a little uppity.

One of the best parts of the weekend was seeing our twin nephews. They are 3 months old and a lot of fun. But each time I got my hands on one of them, I’d start working them out. You know, getting them to start supporting their weight – both sitting and standing. I also had them doing some tummy time, which brought back plenty of memories from when my kids were babies. Just like I remembered, newborns don’t like to do work. They like to eat and sleep and crap their pants. And when they would bark at me I’d just look them in the eye and say “stop bitching and do work!” Though maybe it is a bit early to push them out of their comfort zone. Although they do have to get into that fancy pre-school, after all… Yes, I know kids need to be kids too.
They need to play and have fun, because lord knows once they get out of school it’s not as much fun. But they can work at having fun. They can work on their ball skills, on being a good friend, or even on Angry Birds. If you want to be good, you need to work at it. That’s right. Do work!

Working at home creates some challenges, because every so often one of the kids will want to play during the work day. I politely (or sometimes not so politely) decline and remind them that Dad is doing work. Then I make sure they did work before letting them go do their own thing. You see, working hard is a habit, and I know I can be a bit relentless with them, but if they don’t learn a good work ethic now, life will be pretty tough later.

So I’ll assume that reading my drivel is work for you, so you can feel good about spending 10 minutes with us each day. And no, I won’t reimburse you for those 10 minutes you’ll never get back. Now get back to work! That’s what I’m going to do.

-Mike

Photo credits: “Do work, son!” originally uploaded by Lee Huynh

Incite 4 U

Bully? I’m good with that: We haven’t spoken about Stuxnet recently, so let me point to an interesting post from VC David Cowan (the first money into VeriSign, among others), who argues that Ralph Langner – the guy who decomposed and published all the gory details of Stuxnet – is misguided in calling the US a cyber-bully. You see, whether Langner wants to admit it or not, a nuclear-capable Iran isn’t in anyone’s best interests. Regardless of your politics, it’s hard to make a case otherwise. So presumably the US (and other partners) came up with a way to meet their requirements without bombing the crap out of somewhere. That’s innovation, folks. And innovation can’t be stopped. Remember the Manhattan Project? How long was it before the USSR had their own nuclear weapons? Once Pandora’s box is open, it’s open. And I’m glad the US got to open this one. – MR

Advanced Persistent Service Providers: Ever hear of Epsilon?
Not the Greek letter – the email marketing company. Me neither, until the breach notifications started rolling in. I bet the Secret Service had never heard of them either. Evidently they are a pretty successful company, and that made them a target. As our emails and names start circulating among the botnets, one interesting point is emerging. If you read one of the emails sent to affected folks, you realize the lost data included not only people who had opted out, but leftover data from prior corporate customers. That’s right: they kept everything. Forever. This provides a new perspective on the idea of persistence, eh? Perhaps it’s time to check your contracts with your service providers, so you aren’t exposed by their mistakes after you switch to a competitor. – RM

Consumerization FTW: ZDNet discussed an interesting use case for Pano Logic virtual client terminals at public libraries. I am a big fan of desktop virtualization, both for security – it’s easier to patch and implement policy centrally – and because it makes your virtual session available regardless of your location or device. This is not an endorsement of any product – just of this type of technology in general. The use case makes sense, particularly for schools which need controlled environments. At the same time, I realize this will probably never catch on, for the same reason phone booths are gone: cell phones made them obsolete. The organizations with the most to gain from this service model are least likely to be able to afford it. In the long run schools and public libraries will likely require people to


Security Benchmarking, Going Beyond Metrics: Sharing Data Safely

The best definition of a security benchmarking effort I am aware of is in Chapter 11 of my book, The Pragmatic CSO, which provides a good perspective on why benchmarking is important. Since it is very hard to develop objective, defensible measures of security effectiveness and impact, a technique that can yield very interesting insight into the performance of your security program is to compare it to others. If you can get a sample set of standard questions, you can get a feel for where you are off the reservation on some activities and out ahead on others.

Benchmarking has been in use in other IT disciplines for decades. Whether for data center performance or network utilization, companies have always felt compelled to compare themselves to others. It’s part of the competitive, win-at-all-costs mentality that pervades business. So one of the best ways to figure out how good your security is, and to get a feel for various other operational aspects of your security program, is to figure out how you compare to someone else.

The objective is not to come up with a “security number” or “risk score”, but to present information in the context of other companies that face the same kinds of attacks. This provides management with what they always want: perspective on the level of risk they are willing to take. If you are behind a reasonable peer group, they can decide to invest more or to accept the risks of a less effective security program. If you are ahead, maybe they will opt to maintain or even accelerate investment (in the unlikely event they can differentiate on security). Or, yes, they might decide to scale back on security ‘overhead’. Either way, it’s a win for you as the practitioner, because you know where you stand and the decision makers are actually making informed decisions with data. How novel!
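To make the peer-group comparison concrete, here is a minimal sketch of positioning one metric against a set of peers. The metric (mean days to patch critical vulnerabilities) and the peer values are invented purely for illustration; a real benchmark would gather such values systematically and consistently, as discussed below.

```python
# Hypothetical example: where does our metric fall relative to a peer group?
# PEER_PATCH_DAYS is made-up sample data, not real benchmark results.

PEER_PATCH_DAYS = [12, 30, 45, 18, 22, 60, 15, 28, 35, 40,
                   19, 25, 33, 50, 21, 27, 38, 14, 29, 44]

def share_of_peers_slower(own_days, peer_days):
    """Percentage of peers with a worse (higher) time-to-patch than ours."""
    return 100.0 * sum(1 for v in peer_days if v > own_days) / len(peer_days)

own_days = 20
rank = share_of_peers_slower(own_days, PEER_PATCH_DAYS)
print(f"We patch critical vulnerabilities faster than {rank:.0f}% of our peers")
# → We patch critical vulnerabilities faster than 75% of our peers
```

Note that the output is a relative position, not a “security number” – exactly the kind of context-laden answer management can act on.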
But before we can start comparing all the metrics we’ve decided are important and are now collecting systematically, we need some kind of infrastructure and mechanism to share this data safely and securely. A few years ago I did a lot of research into building a security benchmark, and customers clearly agreed that any sharing mechanism would need to ensure:

  • Anonymity: First and foremost, these customers wanted to make sure the data wasn’t attributed back to them. No way, no how. Of all the things I discussed with these customers, this was non-negotiable. There could be no way another customer could identify source data or derive which company provided any of the data.
  • Integrity: The next issue was making sure the data was meaningful. That means it must be objectively and consistently gathered. Obviously there would need to be some level of agreement on what to count and how to count it, and that would likely be the purview of a third party.
  • Security: This goes hand in hand with anonymity, but it’s different in that potential customers need to understand how the data would be protected (at a granular level) before they’d be comfortable sharing.

Given all that, is it any wonder that security benchmarking remains in its infancy? When talking to any potential community aggregator or commercial benchmark offering, be sure to dig very deeply into how the data is both secured and aggregated to calculate the benchmarks. You need to ensure proper data encryption and segregation, so your data doesn’t get mixed with others’ – and so that even if it somehow does, it isn’t accessible. Additionally, you’ll want to make sure any device uploading data (this must be systematic and automated, remember) is mutually authenticated and authorized, so no one can game the benchmark. From an infrastructure protection standpoint, make sure all the proper controls are in place.
Things like strong identity management, egress filtering, and HIPS (if not whitelisting) on the devices with access to the data, as well as significant monitoring of the network and database. Given some recent high-profile breaches, it’s not unreasonable to expect full network packet capture as well. Ultimately you need to be comfortable with how your data is protected, so ask as many questions as you need to.

From an application standpoint, it’s also reasonable to expect the code to be built using some kind of secure development methodology. So learn about the threat models the vendor (or community) used to design the protection, as well as the degree to which automated and non-automated testing mechanisms were used to scrutinize the application at all points during the development process. Learn about audits and pen tests, and basically crawl into very dark places in the provider’s infrastructure to get comfortable.

This is a tall order, and it adds substantially to the due diligence required to get comfortable participating in a security benchmark. We understand this will be too high a hurdle for some. But keep your eyes on the prize: making security decisions based on actual data, within the context of your peer group – as opposed to relying on your gut, politics, or prayer. Once you clear this intellectual hurdle it’s time to define your peer groups for comparison and how to analyze the data. That’s next.
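One way to picture the anonymity requirement is a keyed-hash pseudonym: each participant derives a stable ID from a secret only they hold, so the aggregator can link an organization’s submissions over time without ever learning who the organization is. This is a hypothetical sketch; `ORG_SECRET`, the field names, and `prepare_submission()` are all invented for illustration, and no real benchmark service or API is implied. Transport-level protection (such as mutually authenticated TLS for the uploading device) would sit on top of this.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of an anonymized benchmark submission. All names here
# (ORG_SECRET, the record fields) are illustrative assumptions, not a real API.

ORG_SECRET = b"kept-inside-your-org-never-shared"  # known only to the participant

def pseudonym(secret: bytes, period: str) -> str:
    # Keyed hash yields a stable per-period participant ID: submissions from
    # the same org are linkable within a reporting period, but the aggregator
    # (lacking the secret) cannot map the ID back to the organization.
    return hmac.new(secret, period.encode(), hashlib.sha256).hexdigest()

def prepare_submission(metrics: dict, period: str) -> bytes:
    record = {
        "participant": pseudonym(ORG_SECRET, period),
        "period": period,
        "metrics": metrics,  # only agreed-upon, consistently counted values
    }
    return json.dumps(record, sort_keys=True).encode()

payload = prepare_submission({"mean_days_to_patch_critical": 20}, "2011-Q1")
```

The design choice matters: because the ID is derived from a secret the aggregator never sees, even a breach of the benchmark database exposes metrics tied to pseudonyms, not to company names; consistent field definitions (what to count and how) would still need to come from the third party running the benchmark.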


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.