Securosis

Research

Bit9 Bets on (Carbon) Black

In an advanced endpoint and server protection consolidation play, Bit9 and Carbon Black announced a merger this morning. Simultaneously, the combined company raised another $38 million in investment capital to fund the integration, pay the bankers, and accelerate their combined product evolution. Given all the excitement over anything either advanced or cyber, this deal makes a lot of sense as Bit9 looks to fill in some holes in its product line, and Carbon Black gains a much broader distribution engine.

But let’s back up a bit. As we have been documenting in our Advanced Endpoint and Server Protection series, threat management has evolved to require assessment, prevention, detection, investigation, and remediation. Bit9’s heritage is in prevention, but they have been building out a much broader platform, including detection and early investigation capabilities, over the past 18 months. But pulling detailed telemetry from endpoints and servers is difficult, so they had a few more years of work to build out and mature their offering. Integrating Carbon Black’s technology gives them a large jump ahead, toward a much broader product offering for dealing with advanced malware.

Carbon Black was a small company, and despite impressive technology they were racing against the clock. With FireEye’s acquisition of Mandiant, endpoint forensic and investigation technology is becoming much more visible in enterprise accounts as FireEye’s sales machine pushes the new toy into existing customers. Without a means to really get into that market, Carbon Black risked losing ground and drowning in the wake of the FireEye juggernaut. Combined with Bit9, at least they have a field presence and a bunch of channel relationships to leverage. So we expect them to do exactly that.

Speaking of FireEye, the minute they decided to buy Mandiant, the die was cast on the strategic nature of their Bit9 partnership. As in, it instantly became not so strategic.
Not that the technology overlapped extensively, but clearly FireEye was going to go its own way in terms of endpoint and server protection. So Bit9 made a shrewd move, taking out one of the main competitors to the MIR (now FireEye HX) product. With the CB technology Bit9 can tell a bigger, broader story than FireEye about prevention and detection on devices for a while.

We also like the approach of bundling both the Bit9 and Carbon Black technologies for one price per protected endpoint or server. This way they remove any disincentive to protect devices across their entire lifecycle. They may be leaving some money on the table, but all their competitors require multiple products (with multiple license fees) to provide comparably broad protection. Bundling makes it much easier to tell a differentiated story.

We got one question about whether Bit9 is now positioned to go after the big endpoint protection market. Many security companies have dancing fairies in their eyes, thinking of the multiple billions companies spend on endpoint protection that doesn’t work. Few outfits have been able to break the inertia of the big EPP vendors and build a business on alternative technology. But it will happen at some point. Bit9 now has most of the pieces and could OEM the others pretty cheaply, because it’s not like an AV signature engine or FDE product is novel today. It is too early to tell whether they will go down that path – to be candid, they have a lot of runway to sell protection for critical devices, and follow that with detection/investigation capabilities across the enterprise.

In a nutshell, we are positive on this deal. Of course there are always pesky details to true technical integration and building a consistent and integrated user experience. But Bit9 + CB has a bunch of the pieces we believe are central to advanced endpoint and server protection.
Given FireEye’s momentum, it is just a matter of time before one of the bigger network players takes Bit9 out to broaden their own protection to embrace endpoints and servers.


RSA Conference Guide 2014 Deep Dive: Network Security

As we begin deeper dives into our respective coverage areas, we will start with network security. We have been tracking the next generation (NG) evolution for 5 years, during which time it has fundamentally changed the meaning of the perimeter – as we will discuss below. Those who moved quickly to embrace NG have established leadership positions, at the expense of those that didn’t. Players who were leaders 5 short years ago have become non-existent, and there is a new generation of folks with innovative network security approaches to handle advanced attacks. After many years of stagnation, network security has come back with a vengeance.

Back to Big Swinging (St)icks

The battle for the perimeter is raging right now in network security land. In one corner you have the incumbent firewall players, who believe that because the future of network security has been anointed ‘NGFW’ by those guys in Stamford, it is their manifest destiny to subsume every other device in the perimeter. Of course the incumbent IPS folks have a bit to say about that, and are happy to talk about how NGFW devices keel over when you turn on IPS rules and SSL decryption. So we come back to the age-old battle when you descend into the muck of the network. Whose thing is bigger? Differentiation on the network security front has gone from size of the application library in 2012, to migrating from legacy port/protocol policies in 2013, to who has the biggest and fastest gear in 2014. As they work to substantiate their claims, we see a bunch of new entrants in the security testing business. This is a good thing – we still don’t understand how to read NSS Labs’ value map. Besides the size of the equipment, there is another more impactful differentiation point for NGXX boxes: network-based malware detection (NBMD). All the network security leaders claim to detect malware on the box, and then sling mud about where analysis occurs.
Some run analysis on the box (or more often, a set of boxes) while others run in the cloud – and yes, they are religious about it. So if you want to troll a network security vendor, tell them their approach is wrong. You will also hear the NGXX folks who continue to espouse consolidation, but not in a UTM-like way, because UTM is so 2003. But in a much cooler and shinier NGXX way. No, there is no difference – but don’t tell the marketeers that. They make their money ensuring things are sufficiently shiny on the RSAC show floor.

More Bumps (in the Wire)

Speaking of network-based malware detection (NBMD), that market continues to be red hot. Almost every organization we speak to either has one or is testing one. Or they are pumping some threat intelligence into network packet capture devices to look for callbacks. Either way, enterprises have gotten religion about looking for malware on the way in – before it wreaks havoc. One area where they continue to dawdle, though, is putting devices inline. Hold up a file for a microsecond, and employees start squealing like stuck pigs. The players in this market who offer this capability standalone find most of their devices deployed out-of-band in monitor mode. With the integration of NBMD into broader NG network security platforms, the capability is deployed inline because the box is inherently inline. This puts standalone devices at a competitive disadvantage, and likely means there won’t be any standalone players for much longer. By offering capabilities that must be inline (like IPS), vendors like FireEye will force the issue and get their boxes deployed inline. Problem solved, right? Of course going inline requires a bunch of pesky features like fail open, hot standby, load balancing, and redundant hardware. And don’t forget the flak jacket for when a device keels over and takes down a Fortune 10 company’s call center.

ET Phone Home

Another big theme you will see at this year’s RSA is the attack of Threat Intelligence (TI).
You know, kind of like when ET showed up all those years ago, got lost, and figured out how to send a network ping zillions of light years with a Fisher Price toy. We are actually excited about how TI offerings are developing – with more data on things like callbacks, IP reputation, attack patterns, and all sorts of other cool indicators of badness. Even better, there is a specific drive to integrate this data more seamlessly into security monitoring and eventually update blocking rules on network security devices in an automated fashion. Of course automatic blocking tends to scare the crap out of security practitioners. Mostly because they saw Terminator too many times. But given the disruption of cloud computing and this whole virtualization thing, security folks will get much more comfortable with having a machine tune their rules, because it’s going to happen fast. There is no alternative – carbon-based units just can’t keep up. Though we all know how that story featuring Skynet turned out, so there will be a clear focus on ensuring false positives are minimized, probably to the point of loosening up the blocking rules just to make sure. And that’s fine – the last thing you want is a T1000 showing up to tell you that sessions you knocked down caused a missed quarter.

Network and Endpoints: BFF

When it comes to advanced malware, the network and the endpoints are not mutually exclusive. In fact over the past year we have seen integration between endpoint folks like Bit9 and network-based malware detection players such as FireEye and Palo Alto Networks. This also underlies the malware defense stories coming from Sourcefire (now Cisco) and McAfee, and pushed the FireEye/Mandiant acquisition announced in January. You can bet the Mandiant folks were drinking some high-end champagne as they welcomed 2014. There is method to the madness, because network folks need visibility on endpoints.
These network detection devices are going to miss at some point, both due to new attack tactics (those notorious 0-days) and devices that escape the comfy confines of the corporate network and
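The automated TI-to-blocking pipeline described above – ingest indicators, then update blocking rules only where confidence is high enough to keep false positives down – can be sketched roughly as follows. The feed format, field names, and confidence threshold are illustrative assumptions, not any vendor's actual API:

```python
# Sketch: turn a threat intelligence feed into firewall block rules,
# gating on a confidence threshold so low-confidence indicators are
# monitored rather than blocked. Indicator format is an assumption.

def rules_from_feed(indicators, min_confidence=0.9):
    """Emit iptables-style drop rules for high-confidence malicious IPs."""
    rules = []
    for ind in indicators:
        # Only block what we are highly confident about; weaker signals
        # would be routed to security monitoring instead of inline blocking.
        if ind["type"] == "ip" and ind["confidence"] >= min_confidence:
            rules.append(f"-A INPUT -s {ind['value']} -j DROP")
    return rules

feed = [
    {"type": "ip", "value": "203.0.113.7", "confidence": 0.97},   # known callback
    {"type": "ip", "value": "198.51.100.4", "confidence": 0.55},  # weak: monitor only
    {"type": "domain", "value": "bad.example", "confidence": 0.99},
]

print(rules_from_feed(feed))  # ['-A INPUT -s 203.0.113.7 -j DROP']
```

Loosening the threshold (a lower `min_confidence`) blocks more but risks the false positives practitioners worry about – which is exactly the tuning trade-off in play.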


Security Management 2.5: Replacing Your SIEM Yet? [New Paper]

Security Information and Event Management (SIEM) systems create a lot of controversy among security folks – they are a pain, but the technology is instrumental for security, compliance, and operations management. The problem is – given the rapid evolution of SIEM/Log Management over the past 4-5 years – that product obsolescence is a genuine issue. The problems caused by products that have failed to keep pace with technical evolution and customer requirements cannot be trivialized. This pain becomes more acute when a SIEM fails to collect the essential information during an incident – and even worse when it completely fails to detect a threat. Customers spend significant resources (both time and money) on caring for and feeding their SIEM. If they don’t feel the value is commensurate with their investment they will move on – searching for better, easier, and faster products. It is only realistic for these customers to start questioning whether their incumbent offerings make sense moving forward.

We are happy to announce the launch of our latest research paper: Security Management 2.5. We discuss changing customer demands, and how vendors are altering their platforms to address them. We then provide a detailed process to help determine whether you need to swap providers, and if so how. We would like to thank IBM and McAfee for licensing this research. Support from the community enables us to bring you our Totally Transparent Research free of charge, so we are happy IBM and McAfee chose to license this report. You can get the full paper: Security Management 2.5: Replacing Your SIEM Yet?


RSA Conference Guide 2014 Watch List: DevOps

We have covered the key themes we expect to see at the RSA Conference, so now we will cover a theme or two you probably won’t see at the show (or not enough of, at least), but really should. The first is this DevOps thing guys like Gene Kim are pushing. It may not be obvious yet, but DevOps promises to upend everything you know about building and launching applications, and make a fundamental mark on security. Or something I like to call “SecOps”.

DevOps, Cloud, and the Death of Traditional IT

Recently in one of my cloud security classes I had a developer in attendance from one of those brand-name consumer properties all of you, and your families, probably use. When he writes a code update he checks it in and marks it for production; then a string of automated tools and handoffs runs it through test suites and security checks, and eventually deploys it onto their infrastructure/platform automatically. The infrastructure itself adjusts to client demands (scaling up and down), and the concept of an admin accessing a production server is an anachronism.

At the latest Amazon Web Services conference, Adobe (I believe the speaker was on the Creative Cloud team) talked about how they deploy their entire application stack using a series of AWS templates. They don’t patch or upgrade servers, but use templates to provision an entirely new stack, slowly migrate traffic over, and then shut down the old one when they know everything works okay. The developers use these templates to define the very infrastructure they run on, then deploy applications on top of it.

Microsoft Office? In the cloud. Your CRM tool? In the cloud. HR? Cloud. File servers? Cloud. Collaboration? Cloud. Email? Cloud. Messaging? Get the picture? Organizations can move almost all (and sometimes all) their IT operations onto cloud-based services. DevOps is fundamentally transforming IT operations. It has its flaws, but if implemented well it offers clear advantages for agility, resiliency, and operations.
At the same time, cloud services are replacing many traditional IT functions. This powerful combination has significant security implications. Currently many security pros are completely excluded from these projects, as DevOps and cloud providers take over the most important security functions. Only a handful of security vendors are operating in this new model, and you will see very few sessions address it. But make no mistake – DevOps and the Death of IT will show up as a key theme within the next couple years, following the same hype cycle as everything else. But like the cloud these trends are real and here to stay, and have an opportunity to become the dominant IT model in the future.
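The template-driven pattern Adobe described – provision a fresh stack, shift traffic over gradually, retire the old stack once everything works – can be sketched abstractly. The class and method names here are illustrative assumptions, not any cloud provider's API:

```python
# Sketch of blue-green deployment: rather than patching servers in place,
# provision a whole new stack from a template, shift traffic over in
# increments, then shut down the old stack. Names are illustrative.

class Stack:
    def __init__(self, template_version):
        self.template_version = template_version  # the template this stack was built from
        self.running = True
        self.traffic_share = 0.0  # fraction of client traffic routed here

def deploy(old_stack, new_template_version, steps=4):
    new_stack = Stack(new_template_version)  # provision fresh from the template
    for i in range(1, steps + 1):
        # Migrate traffic gradually so problems surface on a small slice first.
        new_stack.traffic_share = i / steps
        old_stack.traffic_share = 1.0 - new_stack.traffic_share
    old_stack.running = False  # everything works: retire the old stack
    return new_stack

old = Stack("v1")
old.traffic_share = 1.0
new = deploy(old, "v2")
print(new.traffic_share, old.running)  # 1.0 False
```

The point of the pattern is that "upgrade" becomes "replace": no admin ever logs into a production server, which is exactly why this model squeezes out traditional IT operations.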


Friday Summary: February 14, 2014

Bacon as a yardstick: This year will see the 6th annual Securosis Disaster Recovery Breakfast, and I am measuring attendance in required bacon reserves. Jillian’s at the Metreon has been a more than gracious host each year for the event. But when we order food we (now) do it in increments of 50 people. At the moment we are ordering bacon for 250, and we might need to bump that up! We have come a long way since 2009, when we had about 35 close friends show up, but we are overjoyed that so many friends and associates will turn out. Regardless, we expect a quiet, low-key affair. It has always been our favorite event of the week because of that. Bring your tired, your hungry, your hungover, or just plain conference-weary self over and say ‘Howdy’. There will be bacon, good company, and various OTC pharmaceuticals to cure what ails you.

Note from Rich: Actually we had a solid 100 or so that first year. I know – I had to pay the bill solo.

Big Spin: More and more firms are spinning their visions of big data, which in turn makes most IT folks’ heads spin. These visions look fine within a constrained field of view, but the problem is what is left unsaid: essentially the technologies and services you will need but which are not offered – and which vendors don’t talk about. Worse, you have to filter through non-standard terminology deployed to support vendor spin – so it’s extremely difficult to compare apples against apples. You cannot take vendor big data solutions at face value – at this early stage you need to dig in a bit. But to ask the right questions, you need to know what you probably don’t yet understand. So the vendor product demystification process begins with translating their materials out of vendor-speak. Then you can determine whether what they offer does what you need, and finally – and most importantly – identify the areas they are not discussing, so you can discover their deficiencies. Is this a pain in the ass? You betcha!
It’s tough for us – and we do this all day, for a living. So if you are just learning about big data, I urge you to look at the essential characteristics defined in the introduction to our Securing Big Data Clusters paper – it is a handy tool to differentiate big data from big iron, or just big BS.

Lying in wait: I have stated before that we will soon stop calling it “big data”, and instead just call these platforms “modular databases”. Most new application development projects do not start with a relational repository – instead people now use some form of NoSQL. Which should be very troubling to any company that derives a large portion of its revenue from database sales. Is it odd that none of the big three database vendors has developed a big data platform (a real one – not a make-believe version)? Not at all. Why jump in this early, when developers are still trying to decide whether Couch or Riak or Hadoop or Cassandra or something else entirely is best for their projects? So do the big three database vendors endorse big data? Absolutely. To varying degrees they encourage customer adoption, with tools to support integration with big data – usually Hadoop. It is only smart to play it slow, lying in wait like a giant python, and later swallow the providers that win out in the big data space. Until then you will see integration and management tools, but very tepid development of NoSQL platforms from the big relational players. Yes, I expect hate mail on this from vendors, so feel free to chime in.

Hunter or hunted? On the Securosis internal chat board we were talking about open security job positions around the industry. Some are very high-profile meat grinders that we wouldn’t touch with asbestos gloves and a 20’ pole. Some we recommend to friends with substantial warnings about mental health and marital status. Others not at all.
Invariably our discussion turned to the best job you never took: jobs that sounded great until you got there – firms often do a great job of hiding dirty laundry until after you come on board. Certain positions provide a learning curve for a company: whoever takes the job, no matter how good, fails miserably. Only after the post-mortem can the company figure out what it needs and how to structure the role to make it work. Our advice: be careful and do your homework. Security roles are much more difficult than, say, programmer or generic IT staffer. Consult your network of friends, seek out former employees, and look at the firm’s overall financial health for some obvious indicators. Who held the job before you, and what happened? And if you get a chance to see Mike Rothman present “A day in the life of a CISO”, check it out – he captures the common pitfalls in a way that will make you laugh – or cry, depending on where you work.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted in “Building the security bridge to the Millennials”.

Favorite Securosis Posts

Dave Lewis: After-School Special: It’s Time We Talked – about Big Data Security.
David Mortman: RSA Conference Guide 2014 Watch List: DevOps.
Adrian Lane: RSA Conference Guide 2014 Watch List: DevOps. Just a great post.
Mike Rothman: RSA Conference Guide 2014 Watch List: DevOps. Sometimes it’s helpful to look into the crystal ball and think a bit about what’s coming. You won’t see that much at the RSAC, but we can at least give you some food for thought.
Rich: Bit9 Bets on (Carbon) Black. Mike always does the best deal posts in the industry.

Other Securosis Posts

Security Management 2.5: Replacing Your SIEM Yet? [New Paper].
Advanced Endpoint and Server Protection: Prevention.
Incite 2/12/2014: Kindling.
Firestarter: Mass Media Abuse.
RSA Conference Guide 2014 Deep Dive: Network Security.
RSA Conference Guide 2014 Key Theme: Cloud Everything.
RSA Conference Guide 2014 Key Theme: Crypto and Data Protection.
RSA


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.