
Research Revisited: POPE Analysis on the New Securosis

Since we’re getting all nostalgic and stuff, I figured I’d dust off the rationale I posted the day we announced that I was joining Securosis. That was over 4 years ago and it has been a great ride. Rich and Adrian haven’t seen fit to fire me for cause yet, and I think we’ve done some great work. Of course the plans you make typically aren’t worth the paper they’re written on. We have struggled to launch that mid-market research offering. And we certainly couldn’t have expected the growth we have seen with our published research (using our unique Totally Transparent Research process) or our retainer business. So on balance it’s still all good.

By the way, I still don’t care about an exit, as I mentioned in the piece below. I am having way too much fun doing what I love. How many folks really get to say that? So we will continue to take it one day at a time, and one research project at a time. We will try new stuff because we’re tinkerers. We will get some things wrong, and then we’ll adapt. And we’ll drink some beer. Mostly because we can…

The Pope Visits Security Incite + Securosis

(Originally published on the Security Incite blog – Jan 4, 2010.)

When I joined eIQ, I did a “POPE” analysis on the opportunity, to provide a detailed perspective on why I made the move. The structure of that analysis was pretty well received, so as I make another huge move, I may as well dust off the POPE and use that metaphor to explain why I’m merging Security Incite with Securosis.

People

Analyzing every “job” starts with the people. I liked the freedom of working solo, but ultimately I knew that model was inherently limiting. So thinking about the kind of folks I’d want to work with, a couple of attributes bubbled to the top. First, they need to be smart. Smart enough to know when I’m full of crap. They also need to be credible. Meaning I respect their positions and their ability to defend them, so when they tell me I’m full of crap – I’m likely to believe them. Any productive research environment must be built on mutual respect.

Most importantly, they need to stay on an even keel. Being a pretty excitable type (really!), when around other excitable types the worst part of my personality surfaces. Yet when I’m around guys who go with the flow, I’m able to regulate my emotions more effectively. As I’ve been working long and hard on personal development, I didn’t want to set myself back by working with the wrong folks.

For those of you who know Rich and Adrian, you know they are smart and credible. They build things and they break them. They’ve both forgotten more about security than most folks have ever known. Both have been around the block, screwed up a bunch of stuff, and lived to tell the story. And best of all, they are great guys. Guys you can sit around and drink beer with. Guys you look forward to rolling your sleeves up with and getting some stuff done. Exactly the kind of guys I wanted to work with.

Opportunity

Securosis will be rolling out a set of information products targeted at accelerating the success of mid-market security and IT professionals. Let’s just say the security guy/gal in a mid-market company may have the worst job in IT. They have many of the same problems as larger enterprises, but no resources or budget. Yeah, this presents a huge opportunity. We also plan to give a lot back to the community. Securosis publishes all its primary research for free on the blog. We’ll continue to do that. So we have an opportunity to make a difference in the industry as well.

To be clear, the objective isn’t to displace Gartner or Forrester. We aren’t going to build a huge sales force. We will focus on adding value and helping to make our clients better at their jobs. If we can do that, everything else works itself out.

Product

To date, no one has really successfully introduced a syndicated research product targeted at the mid-market, certainly not in security. That fact would scare some folks, but for me it’s a huge challenge. I know hundreds of thousands of companies struggle on a daily basis and need our help. So I’m excited to start figuring out how to get the products to them. In terms of research capabilities, all you have to do is check out the Securosis Research Library to see the unbelievable productivity of Rich and Adrian. The library holds a tremendous amount of content and it’s top notch. As with every business trying something new, we’ll run into our share of difficulties – but generating useful content won’t be one of them.

Exit

Honestly, I don’t care about an exit. I’ve proven I can provide a very nice lifestyle for my family as an independent. That’s liberating, especially in this kind of economic environment. That doesn’t mean I question the size of the opportunity. Clearly we have a great opportunity to knock the cover off the ball and build a substantial company. But I’m not worried about that. I want to have fun, work with great guys, and help our clients do their jobs better. If we do this correctly, there is no lack of research and media companies that will come knocking.

Final thoughts

On the first working day of a new decade, I’m putting the experiences (and road rash) gained over the last 10 years to use. Whether starting a business, screwing up all sorts of things, embracing my skills as an analyst, or understanding the importance of balance in my life, this is the next logical step for me. Looking back, the past 10 years have been very humbling. It started with me losing a fortune during the Internet bubble, selling the company I founded for the cash on our balance sheet because


Research Revisited: Off Topic: A Little Perspective

As I was crawling through the old archives for some posts, I found my very first reference to Mike here at Securosis. I timed this Revisited post to fire off when Mike’s post on joining Securosis goes live, and the title now seems to have more meaning.

Off Topic: A Little Perspective

This has nothing to do with security other than the fact Mike Rothman is a security analyst. Sometimes it’s worth sitting back and evaluating why you’re in the race in the first place. It’s all too easy to get caught up in the insanity of day-to-day demands or the incredibly deceptive priorities of the corporate and government rat races. A few months ago I took a step back and decided to reduce travel, stay healthy, and start this blog. I wanted a more personal outlet for writing on topics and in a style that’s inappropriate at my day job (in other words, more fun). My challenge is running this site in a way that doesn’t create a conflict of interest with my employer, and thus I don’t publish anything here that I should be publishing there. Mike just went off and started his own company to support his real priorities. You should really read this.


Research Revisited: Apple, Security, and Trust

Update: After publishing this, I realized I should have taken more time editing, especially after Apple released their iOS Security paper this week. My intention was to refer to situations where, often due to attacks, vulnerabilities, or other events, Apple is pushed into responding. They can still struggle to balance the lines between what they want to say and what outsiders want to hear. They have very much improved communications with researchers and the media, and the level of security information they publish in the open. It is the crisis situations that knock things off kilter at times.

I am sometimes called an Apple apologist for frequently defending their security choices, but it wasn’t always that way. I first started writing about Apple security because those were the products I used, and I was worried Apple didn’t take security seriously. I was very personally invested in their choices, and there were a lot of reasons when I first posted this back in 2006 to think we were headed for disaster.

In retrospect, my post was both on and off target. I thought at the time that Apple needed to focus more on communications. But Apple, as always, chose their own path. They have improved communications significantly, but not nearly as much as someone like Microsoft. But at the same time they tripled down on security. iOS is now one of the most secure platforms out there (yes, even despite the patch last week). OS X is also far more secure than it was, and Apple continues to invest in new security options for users.

I was right and I was wrong. Apple recognized, due to the massive popularity of iOS, that building customer trust was essential to maintaining a market lead. They acted on that with dramatic improvements in security. iOS has yet to suffer any major wide-scale exploitation. OS X added features like FileVault 2 (encryption for the masses) and Gatekeeper (wrecking malware markets). Apple most definitely sees security as essential to trust. But they still struggle with communications. Not that I expect them to ever not act like Apple, but they are still feeling their way around the lines to find a level they are comfortable with culturally, which still avoids negative spin cycles like I talk about below.

This post originally appeared on October 18, 2006.

Apple, Security, and Trust

Before I delve into this topic I’d like to remind readers that I’m a Mac user and Apple fan. We are a 2 person, 2 Mac, 3 iPod, 2 Airport Express household, with another Mac in the plans this spring. By the same token I don’t think Microsoft is evil, and consider some of their products to be quite good. That said, I prefer OS X and have no plans to switch to Vista, although I’ll probably run it in a virtual machine on my Mac. What I’m about to say is in the nature of protecting, not attacking, one of my favorite vendors.

Apple faces a choice. Down one path is the erosion of trust, lost opportunities, and customers facing increased risk. On the other path is increased trust, greater opportunities, and happy, safe customers. I have a lot vested in Apple, and I’d like to keep it that way.

As most of you probably know by now, Apple shipped a limited number of video iPods loaded with a Windows virus that could infect an attached PC. The virus is well known and all antivirus software should stop it, but the reality is this is an extremely serious security failure on the part of Apple. The numbers are small and damages limited, but there was obviously some serious breakdown in their security controls and QA process.

As with many recent Apple security stories, this one was about to quietly fade into the night were it not for Apple PR. In Apple’s statement they said, “As you might imagine, we are upset at Windows for not being more hardy against such viruses, and even more upset with ourselves for not catching it.” As covered by George Ou and Amrit Williams, this statement is embarrassing, childish, and irresponsible. It’s the technical equivalent of blaming a crime victim for their own victimization. I’m not defending the security problems of XP, which are a serious epidemic unto themselves, but this particular mistake was Apple’s fault, and easily preventable.

While Mike Rothman agrees with Ou and Williams, he correctly notes that this is just Apple staying on message. That message, incorporated into all major advertising and marketing, is that Macs are more secure, and if you’d just switch to a Mac you wouldn’t have to worry about spyware and viruses. It’s a good message, today, because it’s true. I bought my mom a Mac and talked my sister into switching her small business to Macs primarily because of security. I’m overprotective and no longer feel my friends and family can survive on the Internet on XP. Vista is a whole different animal, fundamentally more secure than its predecessors, but it’s not available yet so I couldn’t consider that option. Thus it was iMac and Mac mini city. But when Apple sticks to this message in the face of a contradictory reality they expose themselves, and their customers, to greater risks. Reality is starting to change and Apple isn’t, and therein lies my concern.

All relationships are founded on trust and need. (Amrit has another good post on this topic in business relationships.) One of the keystones of trust is security. I like to break trust into three components:

Intent: How do you intend to treat participants in a relationship?
Capability: Can you behave in compliance with your intent?
Communication: Can you effectively communicate both your intent and capability?

Since there’s no perfect security we always need to make security tradeoffs. Intent decides how far you need to go with security, capability defines whether you’re really that secure, and communication is how you get customers to believe both your intent and capability. Recent actions by Apple are breaking their foundations of trust. As a business this is a


Research Revisited: Hammers vs. Homomorphic Encryption

We are running a retrospective during RSA because we cannot blog at the show. We each picked a couple posts we like and still think relevant enough to share. I picked a 2011 post on Hammers and Homomorphic Encryption, because a couple times a year I hear about a new startup which is going to revolutionize security with a new take on homomorphic encryption. Over and over. And perhaps some day we will get there, but for now we have proven technologies that work to the same end.

(Original post with comments)

Researchers at Microsoft are presenting a prototype in which encrypted data can be used without decrypting it. Called homomorphic encryption, the idea is to keep data in a protected state (encrypted) yet still useful. It may sound like Star Trek technobabble, but this is a real working prototype. The set of operations you can perform on encrypted data is limited to a few things like addition and multiplication, but most analytics systems are limited as well. If this works, it would offer a new way to approach data security for publicly available systems.

The research team is looking for a way to reduce encryption operations, as they are computationally expensive – encryption and decryption demand a lot of processing cycles. Performing calculations and updates on large data sets becomes very expensive, as you must decrypt the data set, find the data you are interested in, make your changes, and then re-encrypt the altered items. The ultimate performance impact varies with the storage system and method of encryption, but overhead and latency might typically range from 2x to 10x compared to unencrypted operations. It would be a major advancement if they could dispense with the encryption and decryption operations while still enabling reporting on secured data sets.

The promise of homomorphic encryption is predictable alteration without decryption. The possibility of being able to modify data without sacrificing security is compelling. Running basic operations on encrypted data might remove the threat of exposing data in the event of a system breach or user carelessness. And given that every company even thinking about cloud adoption is looking at data encryption and key management deployment options, there is plenty of interest in this type of encryption. But like a lot of theoretical lab work, practicality has an ugly way of pouring water on our security dreams. There are three very real problems for homomorphic encryption and computation systems:

Data integrity: Homomorphic encryption does not protect data from alteration. If I can add, multiply, or change a data entry without access to the owner’s key, that becomes an avenue for an attacker to corrupt the database. Alteration of pricing tables, user attributes, stock prices, or other information stored in a database is just as damaging as leaking information. An attacker might not know what the original data values were, but that’s not enough to provide security.

Data confidentiality: Homomorphic encryption can leak information. If I can add two values together and come up with a consistent value, it’s possible to reverse engineer the values. The beauty of encryption is that when you make a very minor change to the plaintext – the data you are encrypting – you get radically different output. With CBC variants of encryption, the same plaintext has different encrypted values. The question with homomorphic encryption is whether it can be used while still maintaining confidentiality – it might well leak data to determined attackers.

Performance: Performance is poor and will likely remain no better than classical encryption. As homomorphic performance improves, so does that of more common forms of encryption. This is important when considering the cloud as a motivator for this technology, as acknowledged by the researchers. Many firms are looking to “The Cloud” not just for elastic pay-as-you-go services, but also as a cost-effective tool for handling very large databases. As databases grow, the performance impact grows in a super-linear way – layering on a security tool with poor performance is a non-starter.

Not to be a total buzzkill, but I wanted to point out that there are practical alternatives that work today. For example, data masking obfuscates data but allows computational analytics. Masking can be done in such a way as to retain aggregate values while masking individual data elements. Masking – like encryption – can be poorly implemented, enabling the original data to be reverse engineered. But good masking implementations keep data secure, perform well, and facilitate reporting and analytics.

Also consider the value of private clouds on public infrastructure. In one of the many possible deployment models, data is locked into a cloud as a black box, and only approved programmatic elements ever touch the data – not users. You import data and run reports, but do not allow direct access to the data. As long as you protect the management and programmatic interfaces, the data remains secure. There is no reason to look for isolinear plasma converters or quantum flux capacitors when a hammer and some duct tape will do.
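The additive property described above is easier to see with a concrete, if toy, construction. Below is a minimal Python sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is an illustration of the general property the post discusses, not the Microsoft prototype it references; the tiny hard-coded primes and helper names are my own and are far too small for real use.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Tiny primes for readability only -- real deployments need large keys.
import math
import random

def keygen(p=61, q=53):
    n = p * q
    lam = math.lcm(p - 1, q - 1)   # Carmichael function for n = p*q
    g = n + 1                      # standard simplification for the generator
    mu = pow(lam, -1, n)           # inverse of L(g^lam mod n^2), valid for g = n+1
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    u = pow(c, lam, n2)
    l = (u - 1) // n               # L(x) = (x - 1) / n
    return (l * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 123), encrypt(pub, 456)

# Multiplying ciphertexts adds the underlying plaintexts -- no decryption needed.
c_sum = (c1 * c2) % (pub[0] ** 2)
assert decrypt(pub, priv, c_sum) == 123 + 456
```

Note that the integrity problem from the list is visible even in this sketch: anyone holding only the public key can multiply a stored ciphertext by encrypt(pub, 1000) and silently shift the protected value by 1000, without ever decrypting it.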
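The masking alternative mentioned at the end of the post can likewise be illustrated with a small sketch. The example below assumes a simple zero-sum noise approach, one of several possible masking techniques rather than any particular product’s method: individual records are perturbed, but the aggregate used for reporting is preserved.

```python
import random

def mask_preserving_sum(values):
    """Add random noise to each element, then subtract the total noise
    from the last element so the aggregate (sum) is unchanged."""
    noise = [random.randint(-50, 50) for _ in values]
    masked = [v + n for v, n in zip(values, noise)]
    masked[-1] -= sum(noise)
    return masked

salaries = [72000, 85000, 64000, 91000]
masked = mask_preserving_sum(salaries)
assert sum(masked) == sum(salaries)   # aggregate reporting still works
```

As the post notes, a weak masking scheme (small noise, reversible transforms) can still be reverse engineered, so the technique matters as much as the idea.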


Research Revisited: Security Snakeoil

Wow! Sometimes we find things in the archives that still really resonate. This is a short one but I’ll be damned if I don’t expect to see this exact phrase used on the show floor at RSA this week. This was posted September 25, 2006. I guess some things never change…

How to Smell Security Snake Oil in One Sentence or Less

If someone ever tells you something like the following:

“We defend against all zero day attacks using a holistic solution that integrates the end-to-end synergies in security infrastructure with no false positives.”

Run away.


New Paper: The Future of Security: The Trends and Technologies Transforming Security

This paper originally started with a blog post called Inflection. Sure, many of our papers start as a series of posts, but this time the post came long before I thought of a paper. I started seeing a bunch of interrelated trends, and what appeared to be some likely unavoidable outcomes. Unlike most predictive pieces, I focused as much on inherent security trends as on disruptive forces. Less “new attacks” and more “new ways we are doing things”.

The research continued, but I never expected a chance to write it up as a paper. Out of nowhere the folks at Box contacted me to see if I had an interest in writing up and licensing something on where security is headed. I pointed them toward Inflection, and it hit exactly what they were looking for. So I got a chance to pull together the additional research I have been thinking about since that post back in 2012, and compile everything into a paper. As an analyst it isn’t often I get a chance to focus on far-field research, so I am excited to get this one out the door.

This paper is also being co-released by the Cloud Security Alliance, who reviewed and approved its findings. I hope you find it useful, and please keep in mind that everything I discuss is in practice someplace today, but I expect it to take ten or more years for these practices to become widespread and their full implications to kick in.

The Future of Security (Full Report, PDF)
Executive Overview (PDF)


Research Revisited: RSA/NetWitness Deal Analysis

As we continue our journey down memory lane I want to take a look at what I said about the RSA/NetWitness deal back in April 2011, when it was announced. In hindsight the NetWitness technology has become the underlying foundation of RSA’s security management and security analytics offerings, so I underplayed that a bit. EnVision is pretty much dead. And we haven’t really seen a compelling alternative on the full packet capture and analytics front, although a bunch of bigger SIEM players started introducing that technology this year. As with most everything, some prognostications were good and some not so good. And if I had a crystal ball that worked I would have invested in WhatsApp, rather than trying to figure out the future of security. Fool us once…

EMC/RSA Buys NetWitness

(Published on the Securosis blog April 4, 2011)

To no one’s surprise (after NetworkWorld spilled the beans two weeks ago), RSA/EMC formalized its acquisition of NetWitness. I guess they don’t want to get fooled again the next time an APT comes to visit. Kidding aside, we have long been big fans of full packet capture, and believe it’s a critical technology moving forward. On that basis alone, this deal looks good for RSA/EMC.

Deal Rationale

APT, of course. Isn’t that the rationale for everything nowadays? Yes, that’s a bit tongue in cheek (okay, a lot), but for a long time we have been saying that you can’t stop a determined attacker, so you need to focus on reacting faster and better. The reality remains that the faster you figure out what happened and remediate (as much as you can), the more effectively you contain the damage. NetWitness gear helps organizations do that. We should also tip our collective hats to Amit Yoran and the rest of the NetWitness team for a big economic win, though we don’t know for sure how big a win. NetWitness was early into this market and did pretty much all the heavy lifting to establish the need, stand up an enterprise-class solution, and show the value within a real attack context. They also showed that having a llama at a conference party can work for lead generation. We can’t minimize the effect that will have on trade shows moving forward.

So how does this help EMC/RSA? First of all, full packet capture solves a serious problem for obvious targets of determined attackers. Regardless of whether the attack was a targeted phish/Adobe 0-day or Stuxnet type, you need to be able to figure out what happened, and having the actual network traffic helps the forensics guys put the pieces together. Large enterprises and governments have figured this out, and we expect them to buy more of this gear this year than last. Probably a lot more. So EMC/RSA is buying into a rapidly growing market early.

But that’s not all. There is a decent amount of synergy with the rest of RSA’s security management offerings. Though you may hear some SIEM vendors pounding their chests as a result of this deal, NetWitness is not SIEM. Full packet capture may do some of the same things (including alerting on possible attacks), but its analysis is based on what’s in the network traffic – not logs and events. More to the point, the technologies are complementary – most customers pump NetWitness alerts into a SIEM for deeper correlation with other data sources. Additionally, some of NetWitness’ new visualization and malware analysis capabilities supplement the analysis you can do with SIEM.

Not coincidentally, this is how RSA positioned the deal in the release, with NetWitness and EnVision data being sent over to Archer for GRC (whatever that means). Speaking of EnVision, this deal may take some of the pressure off that debacle. Customers now have a new shiny object to look at, while maybe focusing a little less on moving off the RSA log aggregation platform. It’s no secret that RSA is working on the next generation of the technology, and being able to offer NetWitness to unhappy EnVision customers may stop the bleeding until the next version ships. A side benefit is that the sheer amount of network traffic to store will drive some back-end storage sales as well. For now, NetWitness is a stand-alone platform, but it wouldn’t be too much of a stretch to see some storage/archival integration with EMC products. EMC wouldn’t buy technology like NetWitness just to drive more storage demand, but it won’t hurt.

Too Little, Too Late (to Stop the Breach)

Lots of folks drew the wrong conclusion: that RSA bought NetWitness because of their recent breach. But these deals don’t happen overnight, so this acquisition has been in the works for quite a while. But what could better justify buying a technology than helping to detect a major breach? I’m sure EMC is pretty happy to control that technology. The trolls and haters focus on the fact that the breach still happened, so the technology couldn’t work that well, right? Actually, the biggest issue is that EMC didn’t have enough NetWitness throughout their environment. They might have caught the breach earlier if they had the technology more widely deployed. Then again, maybe not, because you never know how effective any control will be at any given time against any particular attack, but EMC/RSA can definitely make the case that they could have reacted faster if they had NetWitness everywhere. And now they likely will.

Competitive Impact

The full packet capture market is still very young. There are only a handful of direct competitors to NetWitness, all of whom should see their valuations skyrocket as a result of this deal. Folks like Solera Networks are likely grinning from ear to ear today. We also expect a number of folks in adjacent businesses (such as SIEM) to start dipping their toes into this water. Speaking of SIEM, NetWitness did have partnerships with the major SIEM providers to send them data, and this deal is unlikely to change much in the short term. But we


Research Revisited: The Data Breach Triangle

This has always been one of my favorite posts, and it is one I still use regularly. I even have a slide on it in my RSA presentation for this week. The triangle still guides a lot of my thinking on data security. I am also now starting to think in terms of workload security, about which you will be hearing more soon. In this age of increased focus on egress filtering and incident response, I think the triangle does a good job of capturing a direction many security professionals realize we need to head. I originally posted this May 12, 2009.

The Data Breach Triangle

I’d like to say I first became familiar with fire science back when I was in the Boulder County Fire Academy, but it really all started back in the Boy Scouts. One of the first things you learn when you’re tasked with starting, or stopping, fires is something known as the fire triangle. Fire is a pretty fascinating process when you dig into it. It demonstrates many of the characteristics of life (consumption, reproduction, waste production, movement), but is just a nifty chemical reaction that’s all sorts of fun when you’re a kid with white gas and a lighter (sorry Mom). The fire triangle is a simple model used to describe the elements required for fire to exist: heat, fuel, and oxygen. Take away any of the three, and fire can’t exist. (In recent years the triangle was updated to a tetrahedron, but since that would ruin my point, I’m ignoring it.) In wildland fires we create backburns to remove fuel, in structure fires we use water to remove heat, and with fuel fires we use chemical agents to remove oxygen.

With all the recent breaches, I came up with the idea of a Data Breach Triangle to help prioritize security controls. The idea is that, just like fire, a breach needs three elements. Remove any of them and the breach is prevented. It consists of:

Data: The equivalent of fuel – information to steal or misuse.
Exploit: The combination of a vulnerability and/or an exploit path to allow an attacker unapproved access to the data.
Egress: A path for the data to leave the organization. It could be digital, such as a network egress, or physical, such as portable storage or a stolen hard drive.

Our security controls should map to the triangle, and technically only one side needs to be broken to prevent a breach. For example, encryption or data masking removes the data (depending a lot on the encryption implementation). Patch management and proactive controls prevent exploits. Egress filtering or portable device control prevents egress. This assumes, of course, that these controls actually work – which we all know isn’t always the case. When evaluating data security I like to look for the triangle – will the controls in question really prevent the breach? That’s why, for example, I’m a huge fan of DLP content discovery for data cleansing – you get to ignore a whole big chunk of expensive security controls if there’s no data to steal. For high-value networks, egress filtering is a key control if you can’t remove the data or absolutely prevent exploits (exploits being the toughest part of the triangle to manage). The nice bit is that exploit management is usually our main focus, but breaking the other two sides is often cheaper and easier.
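For readers who think in code, here is a minimal sketch of the triangle as a checklist, in Python. The names and the control-to-side mapping are illustrative assumptions rather than anything from the original post; the point is simply that removing any one side prevents the breach.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    has_sensitive_data: bool   # Data side: is there anything worth stealing?
    exploitable: bool          # Exploit side: can an attacker reach the data?
    egress_open: bool          # Egress side: can the data leave the organization?

def breach_possible(asset: Asset) -> bool:
    # A breach requires all three sides of the triangle at once.
    return asset.has_sensitive_data and asset.exploitable and asset.egress_open

# Example: DLP content discovery scrubbed the sensitive data, so even an
# exploitable server with open egress cannot produce a data breach.
server = Asset(has_sensitive_data=False, exploitable=True, egress_open=True)
assert not breach_possible(server)
```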


Research Revisited: 2006 Incites

All of us Securosis folks will be at the RSA Conference this week, so we figured we’d pre-load some old stuff to get a feel for how our research positions turned out. Mine is really old, digging back into the archives from when I had just started Security Incite. Each year I put together a set of Incites that reflected what I expected to happen that year. I basically copied the idea (and format) from my META Group days, where each year we obsessed over our 12 META Trends. The idea was to come up with a paragraph for each of our main coverage areas and provide some guidance. No percentages or anything like that. The innovation I introduced was to actually go back later in the year and assess how well the projection worked out. We never did that at META.

But I figured it would be a hoot to look back at what I thought was going to be big in 2006, so here are the Incites and some more tactical predictions. Some stuff was good. Some stuff was, um, not so good. At least it should provide some laughs. And if you want to check out the grades I gave myself on each Incite a year later, check out my 2006 report card. I can tell you my predictions stunk very badly. You can also check out the 2007 report card while you’re at it, which will ensure you never ask me to prognosticate about anything…

2006 Incites and Predictions

(These originally appeared on the Security Incite blog, Jan 9, 2006.)

What are the Security Incites? Annually, Security Incite will publish a list of the key “trends” and expectations in the security business for the next year. Called “Security Incites” and written from the perspective of the end user (or security consumer), Incites provide direction on what to expect, assisting the decision making process as budgets and technology adoption plans are finalized for the upcoming year. Each Incite provides a clear position and distills the impact on buying dynamics and architectural constructs. Incites also set the stage for Security Incite’s upcoming research agenda.

What’s the difference between a “Security Incite” and a “Prediction”? Predictions are things we expect to happen within the next 12 months, and tend to be more event-oriented. The Security Incites provide a broader perspective across the security domains and can take a longer than 12 months view.

1. No Mas Box (Less Boxes, More Functionality)
Users will increasingly revolt against adding yet another narrowly focused security appliance to their network and actively examine new “simplification” architectures. New Unified Threat Management (UTM) products, using blade servers and virtualization technologies, appear in 2006, putting vendors that license key intellectual property at a disadvantage. Management of the integrated UTM environment will remain difficult through 2007.

2. Get the NAC!
The increasing number of ingress points into corporate networks (mobile, contractors, VPN) forces users to migrate to a virtual network infrastructure with a secure net and an unsecured net. Network Admission Control (NAC) architectures gain traction in 2006 to facilitate this architectural construct, but do require homogeneity of equipment, pushing the pendulum away from best-of-breed providers.

3. Who are you?
Identity Management (IDM) breaks out in 2006, as ROI-driven password management and single sign-on (SSO) initiatives are deployed en masse. Smart users increasingly figure out that strong and centralized IDM provides “good enough” authentication and authorization for compliance purposes, accelerating market growth in 2H 2006. Yet identity federation continues to lag in a cloud of useless vendor bickering and standards immaturity until mid-2007. Token-based authentication finally hits the wall, as passwords remain good enough and no compelling alternative appears.

4. Stay Out of Jail
Compliance continues to generate tremendous hype, but largely remains a red herring throughout 2006. Smart users will use the compliance word to get funding for critical imperatives (perimeter redesign, identity management) and sufficiently document their processes to keep regulators happy. The not-so-smart users figure encryption is a panacea and buy some, ultimately realizing that making encryption work on a large scale hasn’t gotten any easier.

5. Losing The Religion
Everyone finally realizes in 2006 that regardless of technical approach (IDS vs. IPS vs. firewalls vs. anomaly detection) it’s all about detecting and blocking malware quickly and effectively. Users expect to see multiple techniques implemented, spurring another wave of consolidation as vendors look to bring complete enterprise-class UTM solutions to market.

6. Endpoint Hostile Takeover
Driven by the prevalence of unwanted applications, internal zombie outbreaks, and documented information leaks enabled by key loggers and spyware, users will increasingly lock down endpoint devices, despite pushback from business users. Limitations of the Windows XP security model make lockdown difficult in 2006, but much easier when Microsoft’s Vista operating system is ready for deployment beginning in 2007.

7. Bad Content is Bad Content
Given “innovation” by spammers and fraudsters, keeping content filtering algorithms accurate and timely is proving very difficult for content-focused security vendors. In 2006, heuristics-based detection cocktails fall out of favor, pushing the pendulum back towards signatures that favor entrenched AV vendors. Users increasingly embrace “in the cloud” content filtering for e-mail, IM, and web traffic because it allows them to get rid of another box in the perimeter and stop worrying about exponentially increasing message volumes.

8. Security Management (oxy)Moron
Stand-alone security information management (SIM) plateaus in 2006, as consolidation continues and the need for large-scale system integration puts acceptable “time to value” out of reach for all but the largest enterprises. Closed correlation systems increasingly take root as users swing towards homogeneity and ratchet back expectations on which devices really need to be integrated into the management system, while leveraging the reporting infrastructure for compliance purposes.

9. Services
Managed Security Services provide increasing value in terms of both operational capabilities and content filtering. Users realize that removing threats “in the cloud” provides better bang for the buck for mature technologies (firewalls, IPS, anti-spam, gateway AV, web filtering). The biggest challenge in 2006 will


Research Revisited: The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About

This post doesn’t hold up that well, but it goes back to 2006 and the first couple weeks the site was up. And I think it is interesting to reflect on how my thinking has evolved, as well as the landscape around the analysis.

In 2006 the debate was all about full vs. responsible disclosure. While that still comes up from time to time, the debate has clearly shifted. Many bugs aren’t disclosed at all, now that there is a high-stakes market where researchers can support entire companies just discovering and selling bugs to governments and other bidders. The legal landscape and prosecutorial abuse of laws have pushed some researchers to keep things to themselves. The adoption of cloud services also changes things, requiring completely different risk assessment around bug discovery.

Some of what I wrote below is still relevant, and perhaps I should have picked something different for my first flashback, but I like digging into the archives and reflecting on something I wrote back when I was still with Gartner, wasn’t even thinking about Securosis as more than a blog, and was in a very different place in my life (i.e., no kids). Also, this is a heck of an excuse to include a screenshot of what the site looked like back then.

The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About

As a child one of the first signs of my budding geekness was a strange interest in professional “lingo”. Maybe it was an odd side effect of learning to walk at a volunteer ambulance headquarters in Jersey. Who knows what debilitating effects I suffered due to extended childhood exposure to radon, the air imbued with the random chemicals endemic to Jersey, and the staccato language of the early Emergency Medical Technicians whose ranks I would feel compelled to join later in life. But this interest wasn’t limited to the realm of lights and sirens; it extended to professional subcultures ranging from emergency services, to astronauts, to the military, to professional photographers. As I aged and even joined some of these groups I continued to relish the mechanical patois reserved for those earning expertise in a domain.

Lingo is often a compression of language; a tool for condensing vast knowledge or concepts into a sound byte easily communicated to a trained recipient, slicing through the restrictive ambiguity of generic language. But lingo is also used as a tool of exclusion or to mask complexity. The world of technology in general, and information security in particular, is as guilty of lingo abuse as any doctor, lawyer, or sanitation specialist. Nowhere is this more apparent than in our discussions of “Disclosure”. A simple term evoking religious fervor among hackers, dread among vendors, and misunderstanding among normal citizens and the media who wonder if it’s just a euphemism for online dating (now with photos!). Disclosure is a complex issue worthy of full treatment; but today I’m going to focus on just 3 dirty little secrets. I’ll cut through the lingo to focus on the three problems of disclosure that I believe create most of the complexity. After the jump that is…

“Disclosure” is a bizarre process nearly unique to the world of information technology. For those of you not in the industry, “disclosure” is the term we use to describe the process of releasing information about vulnerabilities (flaws in software and hardware that attackers use to hack your systems). These flaws aren’t always discovered by the vendors making the products. In fact, after a product is released they are usually discovered by outsiders who either accidentally or purposely find the vulnerabilities. Keeping with our theme of “lingo”, they’re often described as “white hats”, “black hats”, and “agnostic transgender grey hats”.

You can think of disclosure as a big-ass product recall where the vendor tells you “mistakes were made” and you need to fix your car with an updated part (except they don’t recall the product, you can only get the part if you have the right support contract and enough bandwidth, you have to pay all the costs of the mechanic (unless you do it yourself), you bear all responsibility for fixing your car the right way, if you don’t fix it or fix it wrong you’re responsible for any children killed, and the car manufacturer is in no way actually responsible for the car working before the fix, after the fix, or in any related dimensions where they may sell said product). It’s really all your fault, you know.

Conceptually “disclosure” is the process of releasing information about the flaw. The theory is consumers of the product have a right to know there’s a security problem, and with the right level of detail can protect themselves. With “full disclosure” all information is released, sometimes before there’s a patch, sometimes after; sometimes the discoverer works with the vendor (not always), but always with intense technical detail. “Responsible disclosure” means the researcher has notified the vendor, provided them with details so they can build a fix, and doesn’t release any information to anyone until a patch is released or they find someone exploiting the flaw in the wild. Of course some vendors use the concept of responsible disclosure as a tool to “manage” researchers looking at their products. “Graphic disclosure” refers to either full disclosure with extreme prejudice, or online dating (now with photos!).

There’s a lot of confusion, even within the industry, as to what we really mean by disclosure and whether it’s good or bad to make this information public. Unlike many other industries we seem to feel it’s wrong for a vendor to fix a flaw without making it public. Some vendors even buy flaws in other vendors’ products; just look at the controversy around yesterday’s announcement from TippingPoint. There was a great panel with all sides represented at the recent Black Hat conference.

So what are the dirty little secrets?

Full disclosure helps the bad guys
It’s about ego, control, and competition
We need the threat of full disclosure or vendors


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.