Incite 5/26/2010: Funeral for a Friend

I don’t like to think of myself as a sentimental guy. I have very few possessions that I really care about, and I don’t really fall into the nostalgia trap. But I was shaken this week by the demise of a close friend. We were estranged for a while, but about a year ago we got back in touch, and now that’s gone. I know it’s surprising, but I’m talking about my baseball glove, a Wilson A28XX, vintage mid-1980s. You see, I got this glove from my Dad when I entered little league, some 30+ years ago. It was as big as most of my torso when I got it. The fat left-handed kid always played first base, so I had a kick-ass first baseman’s glove and it served me well. I stopped playing in middle school (something about being too slow as the bases extended to 90 feet), played a bit of intramural in college, and was on a few teams at work through the years.

A few of my buddies here in ATL are pretty serious softball players. They play in a couple of leagues and seem to like it. So last year I started playing for my temple’s team in the Sunday morning league with lots of other old Jews. I dug my glove out of the trunk, and amazingly enough it was still very workable. It was broken in perfectly and fit my hand like a glove (pun intended). It was like a magnet – if the ball was within reach, that glove swallowed it and didn’t give it up. But the glove was showing signs of age. I had replaced the laces in the webbing a few times over the years, and the edges of the leather were starting to fray. Over this weekend the glove had a “leather stroke”, when the webbing fell apart. I could have patched it up a bit and probably made it through the summer season, but I knew the glove was living on borrowed time.

So I made the tough call to put it down. Well, not exactly down, since the leather is already dead, but I went out and got a new glove. Like with a trophy wife, my new glove is very pretty. A black leather Mizuno. No scratches. No imperfections. It even has a sort-of new-car smell. I’ll be breaking it in all week and hopefully it’ll be ready for practice this weekend. For an anti-nostalgia guy, this was actually hard, and it will be weird taking the field with a new rig. I’m sure I’ll adjust, but I won’t forget. – Mike

Photo credits: “Leather and Lace” originally uploaded by gfpeck

Incite 4 U

I want to personally thank Rich and the rest of the security bloggers for really kicking it into gear over the past week. Where my feed reader had been barren of substantial conversations and debate for (what seemed like) months, this week I saw way too much to highlight in the Incite. Let’s keep the momentum going. – Mike.

Focus on the problem, not the category – Stepping back from my marketing role has given me the ability to see how ridiculous most of security marketing is. And how we expect the vendors to lead us practitioners out of the woods, then blame them when they find another shiny object to chase. I’m referring to NAC (network access control), and I was a bit chagrined by Joel Snyder’s and Shimmy’s attempts to point the finger at Cisco for single-handedly killing the NAC business. It’s a load of crap. To be clear, NAC struggled because it didn’t provide must-have capabilities for customers. Pure and simple. Now clearly Cisco did drive the hype curve for NAC, but amazingly enough end users don’t buy hype. They spend money to solve problems. It’s a cop-out to say that smaller vendors and VCs lost because Cisco didn’t deliver on the promise of NAC. If the technology solved a big enough problem, customers would have found these smaller vendors and Cisco would have had to respond with updated technology. – MR

I can haz your ERP crypto – Christopher Kois noted on his blog that he had ‘broken’ the encryption on Microsoft Dynamics GP, the accounting package in the Dynamics suite from the Great Plains acquisition. While poking at encrypted data fields in the database, he noticed odd behavior when he altered the data: changing a single character changed only two bytes of the encrypted output. With most block ciphers, if you change a single character in the plaintext, you get radically different output. Through trial and error he figured out that the encryption used was a simple substitution cipher – and without too much trouble Kois was able to map the substitution keys (see the sketch after these Incite items for why that single observation gives the game away). While Microsoft Dynamics does run on MS SQL Server, there are some components that still rely upon Pervasive SQL. Christopher’s discovery does not mean that MS SQL Server is secretly using the ancient Caesar Cipher, but rather that some remaining portion of Great Plains does. It does raise some interesting questions: how do you verify sensitive data has been removed from Pervasive? If the data remains in Pervasive, even under a weak cipher, will your data discovery tools find it? Does your discovery tool even recognize Pervasive SQL? – AL

Blame the addicts – When I was working at Gartner, nothing annoyed me more than those client calls where all they wanted me to do was read them the Magic Quadrant and confirm that yes, that vendor really is in the upper right corner. I could literally hear them checking their “talked to the analyst” box. An essential part of the due diligence process was making sure their vendor was a Leader, even if it was far from the best option for them. I guess no one gets fired for picking the upper right. Rocky DeStefano nails how people see the Magic Quadrant in his Tetragon of Prestidigitation post. Don’t blame the analyst for giving you what you demand – they are just giving you your fix, or you would go someplace else. – RM
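To see why Kois’s single observation (change one character, watch only two bytes move) points so directly at a substitution cipher, here is a minimal Python sketch. It is purely illustrative: the toy substitution table and the SHA-256 stand-in for a diffusing block cipher are my own assumptions, not the scheme Dynamics GP or Pervasive SQL actually uses, and this is not Kois’s code.

    # Illustrative only: compare how a per-byte substitution cipher and a
    # diffusing "block transform" react to a one-character plaintext change.
    import hashlib

    # Toy substitution table: a fixed permutation of byte values
    # (167 is coprime to 256, so b -> 167*b + 13 mod 256 is a bijection).
    SUB_TABLE = bytes((167 * b + 13) % 256 for b in range(256))

    def substitution_encrypt(plaintext: bytes) -> bytes:
        # Each output byte depends only on the matching input byte.
        return bytes(SUB_TABLE[b] for b in plaintext)

    def diffusing_transform(plaintext: bytes) -> bytes:
        # Stand-in for a real block cipher: every output byte depends on
        # every input byte, so one changed character scrambles everything.
        return hashlib.sha256(plaintext).digest()

    def bytes_changed(a: bytes, b: bytes) -> int:
        return sum(x != y for x, y in zip(a, b))

    p1 = b"ACCOUNT-PASSWORD"
    p2 = b"ACCOUNT-PASSWORE"  # differs from p1 in exactly one character

    print("substitution:", bytes_changed(substitution_encrypt(p1), substitution_encrypt(p2)))  # -> 1
    print("diffusing:   ", bytes_changed(diffusing_transform(p1), diffusing_transform(p2)))    # -> typically 31-32 of 32

Seeing only a byte or two change per altered character is the fingerprint of a per-character substitution, which is why mapping the table by trial and error – feed in known values, record what comes out – was straightforward.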

Code Re-engineering

I just ran across a really interesting blog post by Joel Spolsky from last April: Things You Should Never Do, Part 1. Actually, the post pissed me off. This is one of those hot-button topics that I have had to deal with several times in my career, and have had to manage in the face of entrenched beliefs. His statement is that you should never rewrite a code base from scratch. The reasoning is “No major firm has ever successfully survived a product rewrite. Just look at Netscape … ” Whatever.

I am a fixer. I was the guy who was able to make code reliable. I was the guy who found and fixed the obscure bugs. As I progressed in my career and started to manage teams of developers, more often than not I was handed the really crummy re-engineering projects because I could fix the problems and make customers happy. Sometimes success is its own penalty. I have inherited code so bad that bug fixes cost 4x in time and usually created new bugs in the process. I have inherited huge bodies of Java code written entirely as if Java were a third-generation procedural language – ignoring the object-oriented paradigm completely. I have been tasked with fixing code that – for a simple true/false comparison – made 12 comparisons, 8 database insertions, and 7 deletions, causing a 180x performance penalty. I have inherited code so bad it broke the compiler. I have inherited code so bad that you could not change a back-end database query without breaking the GUI! It takes a real gift for bad programming to do these things.

There are times when the existing code – all or part – simply needs to be thrown away. There are times when code is so tightly intertwined that you cannot simply fix one piece at a time. And in some cases there are really good business reasons, like your major customers saying your code is crap and needs to be thrown away. Bad code can bleed a company to death with lost sales, brand impairment, demoralization, and employee turnover.

That said, I agree with Joel’s basic premise that rewriting your product can kill your company. And I even agree about a lot of the social behaviors he describes that create failure. There is absolutely no reason to believe that the people who developed bad code the first time will not do the same thing the next time. But I don’t agree that you should never rewrite. I don’t agree that it has never been done successfully. I know because I have done it successfully. Twice. Out of three attempts, but hey, I got the important projects right. We tend not to hear about successful rewrites because the companies that carried it off really don’t want everyone knowing that previous versions were terrible. They would rather focus on happy customers and competitive products.

It’s very likely that companies who need to rewrite code will screw up a second time. Honestly, there are a lot more historic rewrite flameouts than success stories. Companies know what they want to fix in the code, but they don’t understand what they need to fix in the company. I contend this is because there are company behaviors that promote failure, and if they did it once, they are likely to do it again. And again. Until, mercifully, the company goes down in flames. There are a lot of reasons why re-architecture and re-implementation projects fail. In no particular order …

Big eyes: You are the chief developer and you hate your current product. You have catalogued everything that is wrong with it and how you would fix it. You have extensive lists of features you would like to implement. You have a grand vision of how this product should function, how it should be architected, and how it will be implemented. This causes your re-engineering effort to fail because you think that you are going to build perfect software, tackle every problem, and build every feature, in the first revision. And you commit to do so, just to get the project green-lighted.

Resources: Your current product sucks. It really sucks. It has atrocious quality and low performance, and is miserable to manage. It’s so freaking bad that customers ask for their money back, and sales falter. This causes your re-engineering effort to fail because there is simply not enough time, and not enough revenue, to pay for your rebuild. Not with customers breathing down management’s neck, and investors looking for the quick “liquidity event”. So marketing keeps on marketing, sales keeps on selling, and you keep on supporting the old mess you have.

Bad blood: When your car gets old and dies, you don’t expect someone to give you a new one for free. When your crappy old code no longer supports your customers, in essence you need to pay for new code. Yes, it is unfortunate that you bought a lemon last time, but you need to make additional investments in time and development resources, and fix the problems that led you down the wrong path. Your project fails because management is so bitter about the failure that they muck around with development practices, apply more pressure, and try to get more involved with day-to-day development, when the opposite is needed.

Expectations: Not only is the development team excited at not having to work on the atrocious code you have now, but they are really looking forward to working on a product with a semi-modern design. The whole department is buzzing, and so is management! This causes your re-engineering effort to fail because the Chickens think that not only are you going to deliver perfect software, but you are going to deliver every feature and function of the old crappy product, plus a handful of new and extraordinary features as well. And it’s unlikely that management will let you adjust the ship date to accommodate the new demands.

Gaming the Tetragon

Rich highlighted a great post from Rocky DeStefano of Visible Risk in today’s Incite:

Blame the addicts – When I was working at Gartner, nothing annoyed me more than those client calls where all they wanted me to do was read them the Magic Quadrant and confirm that yes, that vendor really is in the upper right corner. I could literally hear them checking their “talked to the analyst” box. An essential part of the due diligence process was making sure their vendor was a Leader, even if it was far from the best option for them. I guess no one gets fired for picking the upper right. Rocky DeStefano nails how people see the Magic Quadrant in his Tetragon of Prestidigitation post. Don’t blame the analyst for giving you what you demand – they are just giving you your fix, or you would go someplace else. – RM

Rocky is dead on – there are a number of constituencies that leverage information like the Magic Quadrant, and they all have different perspectives on the report. I don’t need to repeat what Rocky said, but I want to add a little more depth about each of the constituencies and provide some anecdotes from my travels.

To be clear, Gartner (and Forrester, for that matter) place all sorts of caveats on their vendor rankings. They say not to use them to develop a short list, and they want clients to call to discuss their specific issues. But here’s the rub: they know far too many organizations use the MQ as a crutch to support either their own laziness and stupidity, or to play the game and support decisions they’ve already made. Institutionally they don’t care. As Rich pointed out, (most of) the analysts hate it. But the vendor rankings represent enough revenue that they don’t want to mess with them. Yes, that’s a cynical view, but at the end of the day both of the big IT research shops are public companies and they have to cater to shareholders. And shareholders love licensing 10-page documents for $20K each to 10 vendors.

Rocky uses three cases to illuminate his point. The first is the veteran information security professional. Those folks (if they have a clue) know that they’ve got to focus their short list on vendors close to the Leader quadrant. If not, they’ll spend more time justifying another lesser-ranked vendor than implementing the technology. It’s just not worth the fight. So they don’t. They pick the best vendor from the Leader quadrant and move on. This leads us to the second case, the executive, who basically doesn’t care about the technology, but has a lot of stuff on his/her plate and figures that if a vendor is a Leader, they must have lots of customers calling Gartner and their stuff can’t be total crap. Most of the time, they’d be right.

The third case is vendors. Rocky makes some categorizations about the different quadrants, which are mostly accurate. Vendors in the “niche” space (bottom left) don’t play in the large enterprise market – or shouldn’t be. Those in the “challenger” quadrant (top left) are usually big companies with products they bundle into broad suites, so the competitiveness of a specific offering is less important. Those in the “visionary” sector (bottom right) delude themselves into thinking they’ve got a chance. They are small, but Gartner thinks they understand the market. In reality it doesn’t matter, because the vast majority of the market – dumb and/or lazy information security professionals – see the MQ like this: in most enterprise accounts the only vendors with a chance are the ones in the Leader quadrant, so placement in this quadrant is critical.
I’ve literally had CEOs and Sales VPs take out a ruler and ask why our arch-nemesis was 2mm to the right of our dot. 2 frackin millimeters. You may think I’m kidding, but I’m not. So many of the high-flying vendors make it their objective to spend whatever resources it takes to get into the Leader quadrant. They have customers call into Gartner with inquiries about their selection process (even though the selection is already made) to provide data points about the vendor. Yes, they do that, and the vendors provide talking points to their clients. They show up at the conferences and take full advantage of their 1-on-1 meeting slots. They buy strategy days. To be clear, you cannot buy a better placement on the MQ. But you can buy access, which gives a vendor a better opportunity to tell their story, which in many cases results in better placement. Sad but true. Vendors can game the system to a degree.

Which is why Rich, Adrian, and I made a solemn blood oath that we at Securosis would never do a vendor ranking. We’d rather focus our efforts on the folks who want advice on how to do their job better. Not those trying to maximize their Tetris time.

Quick Wins with DLP Presentation

Yesterday I gave this presentation as a webcast for McAfee, but somehow my last 8 slides got dropped from the deck. So, as promised, here is a PDF of the slides. McAfee is hosting the full webcast deck over at their blog. Since we don’t host vendor materials here at Securosis, here is the subset of my slides. (You might still want to check out their full deck, since it also includes content from an end user.)

Presentation: Quick Wins with DLP

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.