Securosis Research

Piracy Fighting Dog FUD

OK, I have to call Bull$%} on this: Anti-piracy pup sniffs out 35,000 illegal DVDs. A piracy fighting dog. Really. From Yahoo! News: The black Labrador helped enforcement officials who carried out raids last week in southern Johor state which neighbours Singapore, the Motion Picture Association (MPA) said in a statement. Paddy was given to Malaysia by the MPA to help close down piracy syndicates who churn out vast quantities of illegal DVDs. The dog is specially trained to detect chemicals in the discs. So the dog can detect chemicals used in DVDs. Call me a cynic, but I suspect that 'Paddy' cannot tell the difference between Best Buy, an adult video store, and an underground DVD warehouse. So unless someone has figured out how to install laser diodes and detection software onto a Labrador, it's not happening. Of course, when they do, the pirates will be forced to escalate the confrontation with the unstoppable "Fuzzy, bouncy, piracy tennis ball of mayhem". Seriously, this is an illustration of the huge difference between marketing security and actual security. It looks to me like someone is trying to create the MPA version of Sexual Harassment Panda, and it's just wrong!


How Market Forces Can Fix PCI

It's no secret that I haven't always been the biggest fan of PCI (the Payment Card Industry Data Security Standard). I believe that rather than blowing massive amounts of cash trying to lock down an inherently insecure system, we should look at building a more fundamentally secure way of performing payment transactions. Not that I think anything is ever perfectly secure, but there is a heck of a lot of room for progress, and our current focus has absolutely no chance of doing more than slightly staving off the inevitable. It's like a turtle trying to outrun the truck that's about to crush it- the turtle might buy itself an extra microsecond or two, but the outcome won't change.

That said, I've also (fairly recently, and due in no small part to debates with Martin McKeay) come to believe that as flawed as PCI is, it's also the single biggest driver of security improvements throughout the industry. It's the primary force helping security pros gain access to executive management, even more so than any other regulation out there. And while I'd really like to see us focus more on a secure transaction ecosystem, until that wonderful day I recognize we need to live with what we have, and use it to the best of our ability.

Rather than merely adding new controls to the PCI standard, I think the best way to do this is to fix some of the inherent problems with how the program is currently set up. If you've ever been involved with other types of auditing, PCI looks totally bass ackwards. The program itself isn't designed to produce the most effective results. Here are a few changes that I think could make material improvements to PCI, possibly doubling the number of microseconds we have until we're a steaming mass of roadkill:

  • Eliminate conflicts of interest by forbidding assessors from offering security tools/services to their clients: This is one of the single biggest flaws in PCI- assessors may also offer security services to their clients. This is a massive conflict of interest, one that's illegal in financial audits due to all the problems it creates. It will royally piss off a few of the companies involved, but this change has to happen.
  • Decertify QSAs (assessors) that certify non-compliant companies: Although the PCI Council has a few people on its probation list, despite failure after failure we haven't seen anyone penalized. Without these penalties, and a few sacrificial lambs, QSAs are not accountable for their actions.
  • Eliminate the practice of "shopping" for the cheapest/easiest QSA: Right now, if a company isn't happy with its PCI assessment results, it can fire its QSA and go someplace else. Let's steal a rule from the financial auditing playbook (not that that system is perfect) and force companies to disclose when they change assessors, and why.

There are other actions we could take, such as publicly disclosing all certified companies on a quarterly basis or requiring unscheduled spot-checks, but I don't think we necessarily need those if we introduce more accountability into the system and reduce conflicts of interest.

Another very interesting development comes via @treyford on Twitter, pointing to a blog post by James DeLuccia and an article over at Wired. It seems that the auditor for CardSystems (you know, one of the first big breaches, from back in 2005) is being sued for improperly certifying the company. If the courts step in and establish a negligence standard for security audits, the world might suddenly become mighty interesting.
But as interesting (and hopefully motivating, at least to the auditors/assessors) as this is, I think we should focus on what we can do with PCI today to allow market forces to drive security and program improvements.


Macworld Security Article Up- The Truth About Apple Security

Right when the Macalope was sending along his take on the recent Computerworld editorial calling for the FTC to investigate Apple, Macworld asked me to write a more somber take. Here's an excerpt:

On May 26, Macworld republished a controversial Computerworld article by Ira Winkler suggesting that Apple is "grossly negligent" when it comes to security, and should be investigated by the Federal Trade Commission for false advertising. The author was motivated to write this piece based on Apple's recent failure to patch a known Java security flaw that was fixed on other platforms nearly six months ago. While the article raises some legitimate issues, it's filled with hyperbole and inaccurate interpretations, and reaches the wrong conclusions. Here's what you really need to know about the Java situation, Mac security in general, and the important lesson on how we control Apple's approach to security.

…

The real failure of this, and many other, calls for Mac security is that they fail to accurately identify those who are really responsible for Apple's current security situation. It isn't security researchers, malicious attackers, or even Apple itself, but Apple's customers. Apple is an incredibly successful company because it produces products that people purchase. We still buy MacBooks despite the lack of a matte screen, for example. And until we tell Apple that security will affect our buying decisions, there's little motivation for the company to change direction. Think of it from Apple's perspective: Macs may be inherently less secure, but they are safer than the competition in the real world, and users aren't reducing what they spend on Apple because of security problems. There is reasonable coverage of Mac security issues in the mainstream press (contrary to Mr. Winkler's claim), but without demonstrable losses it has yet to affect consumer behavior.

Don't worry- I rip into Apple for their totally irresponsible handling of the Java flaw, but there really isn't much motivation for Apple to make any major changes to how they handle things, as bad as they often are.


The State of Web Application and Data Security—Mid 2009

One of the more difficult aspects of the analyst gig is sorting through all the information you get, and isolating any inherent biases. The kinds of inquiries we get from clients can all too easily skew our perceptions of the industry, since people tend to come to us for specific reasons, and those reasons don't necessarily represent the mean of the industry. Aside from all the vendor updates (and customer references), our end user conversations usually involve helping someone with a specific problem – ranging from vendor selection, to basic technology education, to strategy development/problem solving. People call us when they need help, not when things are running well, so it's all too easy to assume a particular technology is being used more widely than it really is, or that a problem is bigger or smaller than it really is, because everyone calling us is asking about it. Countering this takes a lot of outreach to find out what people are really doing even when they aren't calling us.

Over the past few weeks I've had a series of opportunities to work with end users outside the context of normal inbound inquiries, and it's been fairly enlightening. These included direct client calls, executive roundtables such as one I participated in recently with IANS (with a mix from Fortune 50 to mid-size enterprises), and some outreach on our part. They reinforced some of what we've been thinking, while breaking other assumptions. I thought it would be good to compile these together into a "state of the industry" summary. Since I spend most of my time focused on web application and data security, I'll only cover those areas:

  • When it comes to web application and data security, if there isn't a compliance requirement, there isn't budget – Nearly all of the security professionals we've spoken with recognize the importance of web application and data security, but they consistently tell us that unless there is a compliance requirement it's very difficult for them to get budget. That's not to say it's impossible, but non-compliance projects (however important) are way down the priority list in most organizations. In a room of a dozen high-level security managers from (mostly) large enterprises, all reinforced that compliance drove nearly all of their new projects, and there was little support for non-compliance-related web application or data security initiatives. I doubt this surprises any of you.
  • "Compliance" may mean more than compliance – Activities that are positioned as helping with compliance, even if they aren't a direct requirement, are more likely to gain funding. This is especially true for projects that could reduce compliance costs. They will have a longer approval cycle, often 9 months or so, compared to the 3-6 months for directly-required compliance activities. Initiatives directly tied to limiting potential data breach notifications are the most cited driver. Two technology examples are full disk encryption and portable device control.
  • PCI is the single biggest compliance driver for web application and data security – I may not be thrilled with PCI, but it's driving more web application and data security improvements than anything else.
  • The term Data Loss Prevention has lost meaning – I discussed this in a post last week. Even those who have gone through a DLP tool selection process often use the term to encompass more than the narrow definition we prefer.
  • It's easier to get resources to do some things manually than to buy a tool – Although tools would be much more efficient and effective for some projects, in terms of costs and results, manual projects using existing resources are easier to get approved. As one manager put it, "I already have the bodies, and I won't get any more money for new tools." The most common example cited was content discovery (we'll talk more about this a few points down).
  • Most people use DLP for network (primarily email) monitoring, not content discovery or endpoint protection – Even though we tend to think discovery offers equal or greater value, most organizations with DLP use it for network monitoring.
  • Interest in content discovery, especially DLP-based, is high, but resources are hard to get for discovery projects – Most security managers I talk with are very interested in content discovery, but they are less educated on the options and don't have the resources. They tell me that finding the data is the easy part – getting resources to do anything about it is the limiting factor. (See the sketch after this list.)
  • The Web Application Firewall (WAF) and security source code tools markets are nearly equal in size, with more clients on WAFs, and more money spent on source code tools per client – While it's hard to fully quantify, we think the source code tools cost more per implementation, but WAFs are in slightly wider use.
  • WAFs are a quicker hit for PCI compliance – Most organizations deploying WAFs do so for PCI compliance, and they're seen as a quicker fix than secure source code projects.
  • Most WAF deployments are out of band, and false positives are a major problem for default deployments – Customers are installing WAFs for compliance, but are generally unable to deploy them inline (initially) due to the tuning requirements.
  • Full drive encryption is mature, and well deployed in the early mainstream – Full drive encryption, while not perfect, is deployable in even large enterprises. It's now considered a level-setting best practice in financial services, and usage is growing in healthcare and insurance. Other asset recovery options, such as remote data destruction and phone home applications, are now seen as little more than snake oil. As one CISO told us, "I don't care about the laptop; we just encrypt it and don't worry about it when it goes missing."
  • File and folder encryption is not in wide use – Very few organizations are performing any wide-scale file/folder encryption, outside of some targeted encryption of PII for compliance requirements.
  • Database encryption is hard, and not widely used – Most organizations are dissatisfied with
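Since content discovery came up so often in these conversations, here's a minimal sketch of what the first pass of a discovery scan looks like, assuming a simple regex-plus-checksum approach. The paths, patterns, and function names are all invented for this post rather than taken from any particular DLP product; real tools layer on document fingerprinting, exact data matching, and proper file-type handling.

```python
import os
import re

# Illustrative patterns only -- real DLP products use far deeper
# content analysis than a couple of regular expressions.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(candidate: str) -> bool:
    """Luhn checksum to weed out number-like strings that aren't cards."""
    digits = [int(c) for c in candidate if c.isdigit()]
    digits.reverse()
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_file(path: str) -> list:
    """Return (pattern_name, match) hits found in one file."""
    hits = []
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read()
    except OSError:
        return hits
    hits.extend(("ssn", m) for m in SSN_RE.findall(text))
    hits.extend(("card", m) for m in CARD_RE.findall(text) if luhn_ok(m))
    return hits

def discover(root: str) -> None:
    """Walk a file share and report files containing regulated data."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            findings = scan_file(path)
            if findings:
                print(f"{path}: {len(findings)} potential hits")

if __name__ == "__main__":
    discover("/mnt/fileshare")  # hypothetical mount point
```

Even this toy version makes the managers' point for them: generating hits is the easy part; triaging the thousands of flagged files it spits out is where the resources go.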


Database Security Mass-Market Update and Friday Summary – May 29, 2009

I ran across a lot of little tidbits in the world of database security this week, so I figured I would share them for the Friday Summary:

Idera has been making a lot of noise this week, with seemingly two dozen TechTarget 'KnowledgeAlerts' hitting my inbox. Yes, they are still around, but it's hard to consider them a database security vendor. Customers mostly know them as a DB tools vendor, but they do additionally offer backup encryption, a form of activity monitoring, and what I call "permission mapping" solutions. Not a comprehensive security suite, but handy tools. They really only support the SQL Server platform, but they do in fact offer security products, so bad on me for thinking they were dead and buried. I may not hear about them very often, but the one or two customers I hear from seem to be happy, and that's what counts. And it's a challenge to put security tools into the hands of DBAs and non-security personnel and make them happy.

And speaking of "I thought they were dead": NGS Software entered into a partnership with Secerno recently. NGS has always been incredibly savvy about database security but product-deficient, focusing more on their professional services capabilities than on product development. It shows. Secerno is a small DAM firm with a novel approach to detecting anomalous queries. I would like to see them able to compete on an even footing to demonstrate what they can do, as they need more proof points and customer successes to show how this technology performs in the real world. To do that they are going to need to offer an assessment capability, or they will get relegated to the sidelines as a 'feature' rather than a database security solution. Secerno is too small and probably does not want to sink in the time and money required to develop a meaningful body of assessment policies, so being able to leverage the NGS team and their products will help with preventative security measures. Ideally Secerno will put an updated face on the 'Squirrel' and leverage the expanded body of policies, but better to have the capability now and improve later. I have said it before and I will say it again: any customer needs assessment to baseline database configurations, and monitoring to enforce policy and detect threats. The compliance buyers demand it, and that's your buying center in this market. I am eager to see what this UK tag team can do.

LogLogic announced their database security intentions a little while back, but shipped their Database Security Manager this week. This is not a scruffy startup entering the database security arena, but a successful and polished firm with an established customer base. Granted, we have seen similar attempts botched, but this is the addition of a complementary technology by a firm with a much better understanding of customer buying requirements. LogLogic is touting the ability to perform privileged user monitoring, fully integrated with their existing audit log collection and analysis. But everyone they will be competing with has something similar, so that's not very interesting. What is significant to me is a log management vendor providing the near-real-time monitoring and event blocking capabilities that need to be present for a security product to be taken seriously. Additionally, it is done in a way that addresses console and privileged users, which is necessary for compliance.
The speed of the integration implies that the product architecture is conducive to both, and if you have ever tried implementing a solution of this type, you understand that it is difficult because the two functions pose diametrically opposed technical challenges in data storage and processing. Keep in mind that they just acquired Exaprotect to accomplish similar goals for SEM, so I expect we will see that integration happen soon as well. Now let's see if their customers find it compelling.

Thanks to one of our readers for the heads-up on this one: the Netezza Corporation investor relations transcript. Interesting details came out of their end-of-quarter investor call. It turns out that the $3M acquisition price I quoted was slightly off; the real total was slightly higher, at $3.1 million. Given Netezza's nominal head-count increase since January 1, 2009 (9 people), it looks as if they kept just a handful of the Tizor staff. What shocked me is that Tizor is being credited with 23 customers – less than half the number I thought they had. I am not sure what their average deal size was, but I am willing to bet it was sub-$200K, so revenues must have been very small. This deal was better for their investors than I realized.

Lumigent continues to thrive as the contra-database-security platform. While I find most things GRC to be little more than marketing doublespeak, Lumigent has done a good job of locating and mining their 'AppGRC' niche. It's not my intention to marginalize what they provide, because there is customer need, there has been for some time, and the platform is suitable for the task. It is interesting that none of their (former?) competitors had success with that marketing angle, and reverted to security and compliance messages, but Lumigent is making it work. The segment needs to move up from generic database security to business policy analysis and enforcement, but the 'what' and how to get there are not always clear. I confess I find it funny that in most of their articles, such as this one, I could substitute "database security" for 'AppGRC' and they would still work. Does the need to move beyond reliance on DBA scripts, to a more comprehensive assessment and audit platform with separation of duties, sound like DB security? You bet it does. It goes to show that messaging & positioning is an art form. So bravo on the re-branding, the appropriate new partnerships, and the intense focus on GRC buyers in the back-office application space.

And now for the week in review: Webcasts, Podcasts,
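Since both halves of this summary touch on query monitoring, here's a minimal sketch of the general technique behind anomalous query detection: baseline the structure of the SQL a database normally sees, then flag anything new. This is my own toy illustration of the learn-then-enforce pattern, not Secerno's or LogLogic's actual implementation.

```python
import re

class QueryAnomalyDetector:
    """Toy illustration: baseline the SQL a database normally sees,
    then flag statements whose structure was never observed."""

    def __init__(self):
        self.known = set()
        self.learning = True

    @staticmethod
    def fingerprint(sql: str) -> str:
        # Normalize literals so "WHERE id = 7" and "WHERE id = 9"
        # share one fingerprint; structure matters, values don't.
        fp = sql.lower()
        fp = re.sub(r"'[^']*'", "?", fp)   # string literals
        fp = re.sub(r"\b\d+\b", "?", fp)   # numeric literals
        return re.sub(r"\s+", " ", fp).strip()

    def observe(self, sql: str) -> bool:
        """Return True if the query is allowed, False if anomalous."""
        fp = self.fingerprint(sql)
        if self.learning:
            self.known.add(fp)
            return True
        return fp in self.known

det = QueryAnomalyDetector()
det.observe("SELECT name FROM users WHERE id = 7")  # baseline traffic
det.learning = False                                # switch to enforcement
print(det.observe("SELECT name FROM users WHERE id = 9"))    # True: same structure
print(det.observe("SELECT * FROM users; DROP TABLE users"))  # False: block/alert
```

Real products obviously do far more (proper SQL parsing rather than regex normalization, user and session context, inline blocking), but the baseline-then-enforce idea is the core of it.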


The Government Must Save Our Children from Apple!

Editor's Note: This morning I awoke in my well-secured hotel room to find a sticky note on my laptop that said, "The Securosis site is now under my control. Do not attempt to remove me or you will suffer my wrath. Best regards, The Macalope."

Computerworld has published an interesting opinion piece from Ira Winkler entitled "Man selling book writes incendiary Mac troll bait". Oh, wait, that's not the title! Ha-ha! That would be silly! What with it being so overly frank. No, the title is "It's time for the FTC to investigate Mac security". You might be confused about the clumsy phrasing because the FTC, of course, doesn't investigate computer security; it investigates the veracity of advertising claims. What Winkler believes the FTC should investigate is whether Apple is violating trade laws by claiming in its commercials that Macs are less affected by viruses than Windows.

"Apple gives people the false impression that they don't have to worry about security if they use a Mac."

Really? The ads don't say Macs are invulnerable. They say that Macs don't have the same problem with exploits that Windows has. And it's been the Macalope's experience that people get that. The switchers he's come into contact with seem to know exactly the score: more people use Windows, so malicious coders have, to date, almost exclusively targeted Windows. Some people – many of them security professionals like Winkler – find this simple fact unfair. Sadly, life isn't fair. Well, "sadly" for Windows users. Not so much for Mac users. We're kind of enjoying it.

"And perhaps because the company is invested in fostering that impression, Apple is grossly negligent in fixing problems. The proof-of-concept code in this case is proof that Apple has not provided a fix for a vulnerability that was identified six months ago. There is no excuse for that."

On this point, the Macalope and Winkler are in agreement. There is no excuse for that. The horny one thinks the company has been too lax about implementing a serious security policy, and was one of many Mac bloggers to take the company to task for laughing off shipping infected iPods. He's hopeful the recent hire of security architect Ivan Krstic signals a new era for the company. But let's get back to Winkler's call for an FTC investigation. Because that's funnier.

"The current Mac commercials specifically imply that Windows PCs are vulnerable to viruses and Macs are not."

Actually, no. What they say is that Windows PCs are plagued by viruses and Macs are not.

"I can't disagree that PCs are frequent victims of viruses and other attacks…"

Ah, so we agree!

"…but so are Macs."

Oops, no we don't. The Macalope would really love to have seen a citation here, because it would have been hilarious.

"In fact, the first viruses targeted Macs."

So "frequent" in terms of the Mac here is more on a geologic time scale. Got it.

"Apple itself recommended in December 2008 that users buy antivirus software. It quickly recanted that statement, though, presumably for marketing purposes."

OK, let's set the story straight here, because Winkler's version reads like something from alt.microsoft.fanfic.net. The document in question was a minor technical note created in June of 2007 that got updated in December. The company did not "recant" the statement; it pulled the note after it got picked up by the BBC, the Washington Post, and CNet as some kind of shocking double-faced technology industry scandal.
By the way, did you know that Apple also markets Macs as easier to use, yet continues to sell books on how to use Macs in its stores? It's true! But if it's so easy to use, why all the books, Apple? Why? All? The? Books?

"A ZDNet summary of 2007 vulnerabilities showed that there were five times more vulnerabilities for Mac OS than for all types of Windows PC operating systems."

No citation, but the Macalope knows what he's talking about. He's talking about this summary by George Ou. George loved to drag these stats out because they always made Apple look worse than Microsoft. But he neglected to mention the many problems with this comparison, most importantly that Secunia, the source of the data, expressly counseled against using it to compare the relative security of the products listed, because they're tracked differently. But buy Winkler's book! The Macalope's sure the rigor of the research in it is better than in this piece!

"How can Apple get away with this blatant disregard for security?"

How can Computerworld get away with printing unsourced accusations that were debunked a year and a half ago?

"Its advertising claims seem comparable to an automobile manufacturer implying that its cars are completely safe and its competitors' cars are death traps, when we all know that all cars are inherently unsafe."

That's a really lousy analogy. But to work with it, it's not that Apple's saying its car is safer, it's saying the roads in Macland are safer. Get out of that heavy city traffic and into the countryside.

"The mainstream press really doesn't cover Mac vulnerabilities…"

The real mainstream press doesn't cover vulnerabilities for any operating system. It covers attacks (even lame Mac attacks). The technology press, on the other hand, loves to cover Mac vulnerabilities, despite Winkler's claim to the contrary, even though exploits of those vulnerabilities have never amounted to much.

"When I made a TV appearance to talk about the Conficker worm, I mentioned that there were five new Mac vulnerabilities announced the day before. Several people e-mailed the station to say that I was lying, since they had never heard of Macs having any problems. (By the way, the technical press isn't much better in covering Mac vulnerabilities.)"

So, let's get this straight. Winkler gets on TV and talks up Mac vulnerabilities in a segment about a Windows attack. But because he got five mean emails, the story we're supposed to get is about how the coverage is all pro-Apple? Were


Sarbanes-Oxley Is Here to Stay

This is an off-topic post. It has a bit to do with compliance, but nothing to do with security, so read no further if you are offended by such things.

I am surprised that we have not been reading more about the off-balance-sheet 'assets' that were brought to light last week. In a nutshell, over $900 billion in 'assets', spread across the 19 largest US banks, was not part of the normal 10K/10Q reporting, and the SEC is telling banks they need to be brought back onto the balance sheets. This is an issue because these 'assets' are mostly comprised of real estate and credit card debt owed to the banks.

The change could result in about $900 billion in assets being brought onto the balance sheets of the 19 largest U.S. banks, according to federal regulators. The information was provided by Citigroup Inc., JPMorgan Chase & Co. and 17 other institutions during the government's recent "stress tests," which were designed to determine which banks would need more capital if the economy worsened. … In general, companies transfer assets from balance sheets to special purpose entities to insulate themselves from risk or to finance a large project.

Given the accelerating rate at which credit card debt is going bad, and the fact that real estate values in states like Arizona have dropped as much as 70% since 2006, it's likely we are looking at the majority of these 'assets' simply vanishing. Across the board, 12% of all homeowners are behind on payments or in foreclosure, and the remaining assets are worth far less than they were originally.

It was ironic that I ran across an article about the need to repeal the Sarbanes-Oxley Act of 2002 on the very morning I saw this news item. There has been a methodical drumbeat for several years now about the need to repeal SOX: that it makes it harder to fill out company boards of directors, going so far as to claim the reversal could help stimulate the economy. Of course corporate executives never liked SOX, as there were additional costs associated with keeping accurate records, and it's hard to balance the perception of financial performance against the potential for jail time as a consequence of rule violations. The scandals at Worldcom, Enron, Tyco, and others prompted this regulation to ensure accuracy and completeness in financial reporting, which might enable us to avoid another similar fiasco. But we find ourselves in the same place we were in 2001, where many companies are in worse financial shape than was readily apparent – many of them the same firms requesting more money from the government (taxpayer) to stay afloat.

Section 302 is specifically about controls and procedures to ensure that financial statements are accurate, and it looks to me like moving hundreds of billions of dollars in high-risk real estate & credit card loans "off balance sheet" would violate the spirit of the act. I would have thought that given the current economic situation, and with the motivating events for Sarbanes-Oxley still in recent memory, there would be greater outcry, but maybe people are just worried about keeping the roofs over their heads. But the call will come for additional regulation and controls over financial systems as more banks fail. Clearly there needs to be refinement and augmentation of the PCAOB guidelines on several accounting practices, but to what degree will not be determined for a long time. Will this mean new business for vendors who collect data and enforce policies in and around SOX? Nope.
Instead it will underscore the core value that these vendors cannot provide. Security and compliance vendors who offer help with SOX policy enforcement cannot analyze a balance sheet. While there were a couple of notable examples where internal auditors monitored accounting and database systems to show fraud, this is not a skill you can bottle up for sale. Collection of the raw data and simple policy enforcement can be provided, but there is no way any product vendor could have assisted in detecting the shuffling of balance sheet assets. Still, I bet we will see it in someone's marketing collateral come RSA 2010!


The CIS Consensus Metrics and Project Quant

Just before release, the Center for Internet Security sent us a preview copy of the CIS Consensus Metrics. I'm a longtime fan of the Center, and once I heard they were starting on this project, I was looking forward to the results. Overall I think they did a solid job on a difficult problem. Is it perfect? Is it complete? No, but it's a heck of a good start.

There are a few things that stand out:

  • They do a great job of interconnecting different metrics, and showing how you can leverage a single collected data attribute across multiple higher-level metrics. For example, a single "technology" (product/version) is used in multiple places, for multiple metrics. It's clear they've designed this to support a high degree of automation across multiple workflows, supporting technologies, and operational teams.
  • I like how they break out data attributes from security metrics. Attributes are the feeder data sets we use to create the security metrics. I've seen other systems that intermix the data with the metrics, creating confusion.
  • Their selected metrics are a reasonable starting point for characterizing a security program. They don't cover everything, but that makes it more likely you can collect them in the first place. They make it clear this is a start, with more metrics coming down the road. The metrics are broken out by business function – this version covering incident management, vulnerability management, patch management, application security, configuration management, and financial.
  • The metric descriptions are clear and concise, and show the logic behind them. This makes it easy to build your own moving forward.

There are also a few things that could be improved:

  • The data attributes are exhaustive. Without automated tool support, they will be very difficult to collect.
  • The document suggests prioritization, but doesn't provide any guidance. A companion paper would be nice.

This isn't a mind-bending document, and we've seen many of these metrics before, but not usually organized together, freely available, well documented, or from a respected third party. I highly recommend you go get a copy.

Now on to the CIS Consensus Metrics and Project Quant. I've had some people asking me if Quant is dead thanks to the CIS metrics. While there's the tiniest bit of overlap, the two projects have different goals, and are totally complementary. The CIS metrics are focused on providing an overview of an entire security program, while Quant is focused on building a detailed operational metrics model for patch management. In terms of value, Quant should provide:

  • Detailed costs associated with each step of a patch management process, and a model to predict the costs of operational changes.
  • Measurements of operational efficiency at each step of patch management, to identify bottlenecks/inefficiencies and improve the process.
  • Overall efficiency metrics for the entire patch management process.

CIS and Quant overlap on the last goal, but not the first two. If anything, Quant will be able to feed the CIS metrics. The CIS metrics for patch management include:

  • Patch Policy Compliance
  • Patch Management Coverage
  • Mean Time to Patch

I highly suspect all of these will appear in Quant, but we plan on digging into much greater depth to help the operational folks directly measure and optimize their processes.
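To make the overlap concrete, here's a back-of-the-envelope sketch of how those three patch metrics might be computed from collected data attributes. The record fields and formulas are one plausible reading of the metrics, invented for illustration; they are not the CIS schema itself.

```python
from datetime import datetime

# Hypothetical per-host patch records; field names are illustrative,
# not the CIS data attribute definitions.
hosts = [
    {"host": "web01", "in_scope": True,  "patched": True,
     "released": datetime(2009, 5, 1), "applied": datetime(2009, 5, 4)},
    {"host": "db01",  "in_scope": True,  "patched": False,
     "released": datetime(2009, 5, 1), "applied": None},
    {"host": "lab07", "in_scope": False, "patched": False,
     "released": datetime(2009, 5, 1), "applied": None},
]

scoped = [h for h in hosts if h["in_scope"]]
patched = [h for h in scoped if h["patched"]]

# Patch Management Coverage: share of systems the patch process touches.
coverage = len(scoped) / len(hosts)

# Patch Policy Compliance: scoped systems actually patched per policy.
compliance = len(patched) / len(scoped)

# Mean Time to Patch: average days from patch release to deployment.
mttp = sum((h["applied"] - h["released"]).days for h in patched) / len(patched)

print(f"Coverage: {coverage:.0%}, Compliance: {compliance:.0%}, MTTP: {mttp:.1f} days")
```

Quant aims to go several layers deeper than this, attaching cost and time measurements to each step of the process that produces numbers like these.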


Acquisitions and Strategy

There have been a couple of acquisitions in the last two weeks that I wanted to comment on; one by Oracle and one by McAfee. But between a minor case of food poisoning followed shortly by a major case of influenza, pretty much everything I wanted to do in the last 12 days, blogging included, was halted. I am feeling better and trying to catch up on the stuff I wanted to talk about.

At face value, neither of the acquisitions I want to mention is all that interesting. In the big picture, the investments do spotlight product strategy, so I want to comment on that. But before I do, I wanted to make some comments about how I go about assessing the value of an acquisition. I always try to understand the basic value proposition to the acquiring company, as well as other contributing factors. There are always reasons why company A acquires company B, but understanding those reasons is much harder than you might expect. The goals of the buyer and the seller are not always clear. The market strategy and self-perception of each firm come into play when considering what they buy, why they bought it, and how much they were willing to pay. The most common motivators are as follows:

  • Strategic: You want to get into a new market, and it is either cheaper or faster to acquire a company already in that segment than to organically develop and sell your own product. Basically this is paving the road for a strategic vision: buying the major pieces to get into a new market, or new growth opportunities in existing markets. No surprises here.
  • Tactical: Filling in competitive gaps. A tactical effort to fill in a piece of the puzzle that your existing customers really need, or to complete a product portfolio to address competitive deficiencies in your product. For example, having network DLP was fine up until a point, and then endpoint became a de facto requirement. We saw this with email security vendors who had killer email security platforms, but were still getting hammered in the market for not having complete web security offerings as well.

Neither is surprising, but there are many more than these two basic reasons. And this is where things can get weird. Other motivating factors that make a deal go forward may not always be entirely clear. A few that come to mind:

  • Accretive Acquisition: Buying a solid company to foster your revenue growth curve. Clear value from the buyer's perspective, but not so clear why profitable companies are willing to sell themselves for 2-4 times revenue when investor hopes, dreams, and aspirations are often worth much more than that. You have to view this from the seller's side to make sense of it. There are many small, profitable companies out there in the $15-35M range with no hope of going public, because their market is too small and their revenue growth curve too shallow. But the investors are pushing for an IPO that will take years, or may never happen. So what is your exit strategy? Which firms decide they want the early exit vs. betting their fortunes on a brighter future? You would think that in difficult economic times it is often based upon the stability of their revenue over the next couple of quarters. More often it comes down to which crazy CEOs still swear their firm is at the cusp of greatness in a multi-billion-dollar-a-year market and can convince their boards, vs. pragmatists who are ready to move on.
I am already aware of a number of mid-sized companies and investment firms trying to separate the wheat from the chaff and target viable candidates, and a handful of pragmatic CEOs willing to look for their next challenge. Look for a lot more of these acquisitions in the next 12 months.

  • Leveraged/Platform Enabler: Not quite strategic, not quite tactical, but a product or feature that multiple products can leverage. For example, a web application server, a policy management engine, or a reporting engine may not be a core product offering, but could provide a depth of service that makes all your other products perform better. Better still, where a small firm could not achieve profitability, a large company might realize value across its larger customer base and product suite far in excess of the acquisition price.
  • Good Tech, Bad Company: These firms are pretty easy to spot in this economy. The technology is good and the market is viable, but the company that produces the technology sucks. Wrong sales model, bad positioning, bad leadership decisions, or whatever – they simply cannot execute. I also call this "bargain bin" shopping, because this is one of the ways mid-sized and larger firms can get cutting-edge technology at firesale prices, and cash shortfalls force vendors to sell quickly! Still, it's not always easy to distinguish the "over-sold bad tech" or "overfunded and poorly managed bad technology" firms from the "good tech, bad management" gems you are after. We have seen a few of these in the last 12 months, and we will see more in the coming 12 months as investors balk and lose confidence.
  • The Hedge: This is where you want into a billion-dollar market, but you cannot afford to buy one of the leaders, or your competitors have already bought all of them. What do you do? You practice the art of fighting without fighting: you buy any other player that is a long way from being the front-runner, and market that solution like crazy! Sure, you're not the leader in the category, but it's good enough not to lose sales, and you paid a fraction of the price. It may even give you time to build a suitable product if you want to, but more often than not, you ride the positive perception train till it runs off the rails. Sellers know this game as well, and you will often see firms


Is the Term “DLP” Finally Meaningless?

As most of you know, I've been covering DLP for entirely too long. It's a major area of our research, with an entire section of our site dedicated to it. To be honest, I never really liked the term "Data Loss Prevention". When this category first appeared, I used the term Content Monitoring and Filtering. The vendors didn't like it, but since I wrote (with a colleague) the Gartner Magic Quadrant, they sort of rolled with it. The vendors preferred DLP since it sounded better for marketing purposes (I have to admit, it's sexier than CMF). Once market momentum took over and end users started using DLP more than CMF, I rolled with it and followed the group consensus.

I never liked Data Loss Prevention since, in my mind, it could mean pretty much anything that "prevents data loss". Which is, for the most part, any security tool on the market. My choice was to either jump on the DLP bandwagon, or stick to my guns and use CMF even though no one would know what I was talking about. Thus I transitioned over, started using DLP, and focused my efforts on providing clear definitions and advice related to the technology.

Over the past 2 weeks I've come to realize that DLP, as a term for a specific category of technology, is pretty much dead. I've been invited to multiple DLP conferences/speaking opportunities, none of which are focused on what I'd consider DLP tools. I've been asked to help work on DLP training materials that don't even have a chapter on DLP tools. I've had multiple end-user conversations on DLP… almost always referring to a different technology. The DLP vendors did such a good job of coming up with a sexy name for their technology that the rest of the world decided to use it… even when they had nothing to do with DLP.

Thus, any vendor reading this can consider this post my official recommendation that you drop the term DLP, and move to Content Monitoring and Protection (CMP – a term Chris Hoff first suggested, which I've glommed onto). Or just make something else up. I'll continue using DLP on this site, but the non-DLP vendors have won, and the term is completely diluted and no longer refers to a specific technology. Thus I'll stop being incredibly anal about it, and you might see me associated with "DLP" when it has nothing to do with pure-play DLP as I've historically defined it. That said, in my own writing I still intend to use the term DLP in accordance with my very specific definition (below), and will start using 'CMP' more heavily.

Data Loss Prevention/Content Monitoring and Protection is: Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use, through deep content analysis.

For the record, I get all uppity about mangled definitions because all too often they're used to create market confusion, and reduce value to users. People end up buying things that don't do what they expected.
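To show why I keep harping on the "central policies" and "deep content analysis" parts of that definition, here's a minimal sketch of what distinguishes CMP/DLP from a channel-specific filter: one central policy, evaluated against content wherever it moves. The policy names, patterns, and action labels are invented for illustration, not from any shipping product.

```python
import re

# One central policy applied to any channel -- the thing that makes a
# tool "DLP/CMP" in my definition rather than a channel-specific filter.
POLICY = {
    "pci-card-data": {
        "pattern": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "channels": {"email": "block", "web": "block", "endpoint": "alert"},
    },
    "customer-ssn": {
        "pattern": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "channels": {"email": "quarantine", "web": "alert", "endpoint": "alert"},
    },
}

def analyze(content: str, channel: str) -> list:
    """Run content analysis (here: trivial regex matching) and return
    the actions the central policy dictates for this channel."""
    actions = []
    for name, rule in POLICY.items():
        if rule["pattern"].search(content):
            actions.append((name, rule["channels"].get(channel, "monitor")))
    return actions

# Data in motion: an outbound email gets inspected before delivery.
print(analyze("Card number 4111 1111 1111 1111, exp 10/11", "email"))
# -> [('pci-card-data', 'block')]
```

The content analysis here is a trivial regex so the sketch stays short; the "deep" in real products comes from techniques like partial document matching and database fingerprinting layered under the same central policy structure.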


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.