Securosis Research

Low Risk Doesn’t Mean It Won’t Kill You

Got an interesting link from my friend Don, who prefers to stay behind the scenes, pointing to a thought-provoking perspective from Jared Diamond, an older guy evaluating the risks of his daily activities. Consider: “If you’re a New Guinean living in the forest, and if you adopt the bad habit of sleeping under dead trees whose odds of falling on you that particular night are only 1 in 1,000, you’ll be dead within a few years. In fact, my wife was nearly killed by a falling tree last year, and I’ve survived numerous nearly fatal situations in New Guinea.” Most folks won’t bat an eyelash about a 1 in 1,000 event. But Jared hopes to have 15 years of life left, so if he averages one shower per day that’s 5,475 showers. If he were to fall once every thousand showers, he would still take 5 or more spills. Obviously falling in a confined area is problematic for the elderly. So the small risk is quite real. But the real point isn’t to forget about personal hygiene – it’s to be constructively paranoid. Build on-the-fly threat models, and mitigate those risks. Regardless of what you are doing. “My hypervigilance doesn’t paralyze me or limit my life: I don’t skip my daily shower, I keep driving, and I keep going back to New Guinea. I enjoy all those dangerous things. But I try to think constantly like a New Guinean, and to keep the risks of accidents far below 1 in 1,000 each time.” Can you see the applicability to security? Photo credit: US 12 – White Pass – Watch for falling trees #2, originally uploaded by WSDOT


Oracle takes another SIP of Hardware

Evidently there aren’t any interesting software companies to buy, so Oracle just dropped a cool $2B (as in Billion, sports fans) on Acme Packet. These guys build session border controllers (SBC), VoIP telecom gear. As Andy Abramson says: This is an interesting grab by one of the tech world’s true giants because it squarely puts Oracle into a game where they begin to compete with the giants of telecom, many of whom run Oracle software to drive things including SBCs, media gateways and firewall technology that’s sold. This is an interesting turn of events. Obviously Oracle dipped their feet into the hardware waters when they put Sun Microsystems out of its misery a few years back. But this is different. This isn’t directly related to their existing businesses, but instead subsuming a key technology for one of their major customer segments: telecom carriers. So how long will it be before Oracle decides they want a security technology? They have some identity management stuff – actually a bunch of it. Both their own and stuff they inherited from Sun. They don’t currently have security hardware or even core security software. But since security is important to pretty much every large enterprise segment Oracle plays in, you have to figure they’ll make a move into the market at some point. Clearly money isn’t an issue for these guys, so paying up for a high multiple security player seems within reach. Yes, I’m throwing crap against the wall. But the security investment bankers must be licking their chops thinking about another deep-pocketed buyer entering the fray. Photo credit: “straws” originally uploaded by penguincakes


Network-based Threat Intelligence: Following the Trail of Bits

Our first post in Network-based Threat Intelligence delved into the kill chain. We outlined the process attackers go through to compromise a device and steal its data. Attackers are very good at their jobs, so it’s best to assume any endpoint is compromised. But with recent advances in obscuring attacks (through tactics such as VM awareness) and the sad fact that many compromised devices lie in wait for instructions from their C&C network, you need to start thinking a bit differently about finding these compromised devices – even if they don’t act compromised. Network-based threat intelligence is all about using information gleaned from network traffic to determine which devices are compromised. We call that following the Trail of Bits, to reflect the difficulty of undertaking modern malware activities (flexible and dynamic malware, various command and control infrastructures, automated beaconing, etc.) without leveraging the network. Attackers try to hide in plain sight and obscure their communications within the tens of billions of legitimate packets traversing enterprise networks. But they always leave a trail of evidence of the attack, if you know what to look for. It turns out we learned most of what we need in kindergarten. It’s about asking the right questions. The five key questions are Who?, What?, Where?, When?, and How?, and they can help us determine whether a device may be compromised. So let’s dig into our questions and see how this would work. Where? The first key set of indicators to look for is based on where devices are sending requests. This is important because modern command and control requires frequent communication with each compromised device. So the malware downloader must first establish contact with the C&C network; then it can get new malware or other instructions. The old reliable network indicator is reputation. First established in the battle against spam, we tag each IP address as either ‘good’ or ‘bad’. Yes, this looks an awful lot like the traditional black list/negative security approach of blocking bad. History has shown the difficulty of keeping a black list current, accurate, and comprehensive over time. Combined with advances by attackers, we are left with blind spots in reputation’s ability to identify questionable traffic. One of these blind spots results from attackers using legitimate sites as C&C nodes or for other nefarious uses. In this scenario a binary reputation (good or bad) is inadequate – the site itself is legitimate but not behaving correctly. For instance, if an integrated ad network or other third party web site is compromised, a simplistic reputation system could flag the entire site as malicious. A recent example of that was the Netseer hack, where browser-based web filters flagged traffic to legitimate sites as malicious due to integration with a compromised ad network. They threw the proverbial baby out with the bathwater. Another issue with IP reputation is the fact that IP addresses change constantly based on what command and control nodes are operational at any given time. Much of the sophistication in today’s C&C infrastructure has to do with how attackers associate domains with IP addresses on a dynamic basis. With the increasing use of domain generation algorithms (DGA), malware doesn’t need to be hard-coded with specific IP addresses – instead it cycles through a set of domains (based on the DGA) searching for a C&C controller.
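To make the DGA mechanics concrete, here is a minimal sketch (Python, with a made-up seed and TLD rotation – purely illustrative, not modeled on any particular malware family) of how a bot might generate its daily list of candidate C&C domains. The defender’s opportunity is that every one of these lookups hits DNS, whether or not the attacker ever registers the name.

```python
import hashlib
from datetime import date

def candidate_domains(seed, day, count=50):
    """Illustrative DGA: derive pseudo-random domains from a shared seed and the date.
    The malware and its operator compute the same list, so the operator only needs
    to register one of these names to re-establish command and control."""
    domains = []
    for i in range(count):
        material = "{}:{}:{}".format(seed, day.isoformat(), i).encode()
        digest = hashlib.sha256(material).hexdigest()
        label = digest[:12]                      # 12 hex characters as the host label
        tld = (".com", ".net", ".info")[i % 3]   # rotate through a few TLDs
        domains.append(label + tld)
    return domains

# A compromised host walks this list, issuing DNS queries until one resolves to a
# live C&C node -- leaving behind a burst of lookups for never-before-seen domains.
print(candidate_domains("example-botnet-seed", date(2013, 2, 6))[:5])
```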
This dynamic cycling provides tremendous flexibility, enabling attackers to preserve the ability of newly compromised devices to establish contact, despite domain takedowns and C&C interruptions. This makes the case for DNS traffic analysis in the identification of C&C traffic, along with monitoring the packet stream. Ultimately domain requests (to find active C&C nodes) will be translated into IP addresses, which requires a DNS request. By monitoring these DNS requests across massive amounts of traffic (as you would see in a very large enterprise or a carrier network), patterns associated with C&C traffic and domain generation algorithms can be identified. When? Looking at the basics of network anomaly detection: by tracking and trending all ingress and egress traffic, flow patterns can be used to map network topology, track egress points, etc. By identifying a baseline of normal communication patterns we can pinpoint new destinations, communications outside ‘normal’ activity, and perhaps spikes in traffic volume. For example, if you see traffic originating from the marketing group during off hours, without a known reason (such as a big product launch or ad campaign), that might warrant investigation. What? The next question involves what kind of requests and/or files are coming in and going out. We have written a paper on Network-based Malware Detection, so we won’t revisit it here. But we need to point out that by analyzing and profiling how each piece of malware uses the network, you can monitor for those traffic patterns on your own network. In addition, this enables you to work around VM-aware malware. The malware escapes detection as it enters the network, because it doesn’t do anything when it detects it’s running in a sandbox VM. But on a bare-metal device it executes the malicious code to compromise the device. To take the analysis to the next level, you can track the destination of the suspicious file, and then monitor specifically for evidence that the malware has executed and done damage. Again, it’s not always possible to block the malware on the way in, but you can shorten the window between compromise and detection by searching for the identifying communication patterns that indicate a successful attack. How? You can also look for types of connection requests which might indicate command and control, or other malicious traffic. This could include looking for strange or unusual protocols, untrusted SSL, spoofed headers, etc. You can also try to identify requests from automated actors, which have predictable patterns even when randomized to simulate a human being. But this means all egress and ingress traffic is in play; it all needs to be monitored and analyzed in order to isolate patterns and answer the where, when, what, and how questions. Of course
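As a rough illustration of what mining those DNS requests might look like, here is a small sketch (hypothetical record format and thresholds, not a product feature) that flags internal hosts generating lots of failed lookups or lookups for random-looking names – the sort of pattern DGA-driven C&C discovery tends to leave behind.

```python
import math
from collections import Counter, defaultdict

def label_entropy(label):
    """Shannon entropy of a domain label; DGA-generated labels tend to look random."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def flag_suspect_hosts(dns_records, count_threshold=50, entropy_threshold=3.5):
    """dns_records: iterable of (src_ip, queried_domain, rcode) tuples, e.g. from a
    passive DNS collector. Flags hosts with many NXDOMAIN responses or many
    high-entropy lookups within the observation window."""
    nxdomain_counts = defaultdict(int)
    random_looking = defaultdict(int)
    for src_ip, domain, rcode in dns_records:
        label = domain.split(".")[0]
        if rcode == "NXDOMAIN":
            nxdomain_counts[src_ip] += 1
        if len(label) >= 10 and label_entropy(label) > entropy_threshold:
            random_looking[src_ip] += 1
    return {
        ip for ip in set(nxdomain_counts) | set(random_looking)
        if nxdomain_counts[ip] >= count_threshold or random_looking[ip] >= count_threshold
    }
```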


RSA Conference Guide 2013: Network Security

After many years in the wilderness of non-innovation, there has been a lot of activity in the network security space over the past few years. Your grand-pappy’s firewall is dead and a lot of organizations are in the process of totally rebuilding their perimeter defenses. At the same time, the perimeter gradually becomes even more of a mythical beast of yesteryear, forcing folks to ponder how to enforce network isolation and segmentation while the underlying cloud and virtualized technology architectures are built specifically to break isolation and segmentation. The good news is that there will be lots of good stuff to see and talk about at the RSA Conference. But, as always, it’s necessary to keep everything in context to balance hype against requirements, with a little reality sprinkled on top. Whatever the question, the answer is NGFW… For the 4th consecutive year we will hear all about how NGFW solves the problem. Whatever the problem may be. Of course that’s a joke, but not really. All the vendors will talk about visibility and control. They will talk about how many applications they can decode, and how easy it is to migrate from your existing firewall vendor and instantaneously control the scourge that is Facebook chat. As usual they will be stretching the truth a bit. Yes, NGXX network security technology is maturing rapidly. But unfortunately it’s maturing much faster than most organizations’ ability to migrate their rules to the new application-aware reality. So the catchword this year should be operationalization. Once you have the technology, how can you make best use of it? That means talking about scaling architectures, policy migration, and ultimately consolidation of a lot of separate gear you already have installed in your network. The other thing to look out for this year is firewall management. This niche market is starting to show rapid growth, driven by the continued failure of the network security vendors to manage their boxes, and accelerated by the movement towards NGFW – which is triggering migrations between vendors, and driving a need to support heterogeneous network security devices, at least for a little while. If you have more than a handful of devices you should probably look at this technology to improve operational efficiency. Malware, malware, everywhere. The only thing hotter than NGFW in the network security space is network-based malware detection devices. You know, the boxes that sit out on the edge of your network and explode malware to determine whether each file is bad or not. Some alternative approaches have emerged that don’t actually execute the malware on the device – instead sending files to a cloud-based sandbox, which we think is a better approach for the long haul, because exploding malware takes a bunch of computational resources that would be better used to enforce security policy. Unless you have infinite rack space – then by all means continue to buy additional boxes for every niche security problem you have. Reasonable expectations about how much malware these network-resident boxes can actually catch are critical, but there is no question that network-based malware detection provides another layer of defense against advanced malware. At this year’s show we will see the first indication of a rapidly maturing market: the debate between best of breed and integrated solution. That’s right, the folks with standalone gateways will espouse the need for a focused, dedicated solution to deal with advanced malware.
And Big Network Security will argue that malware detection is just a feature of the perimeter security gateway, even though it may run on a separate box. Details, details. But don’t fall hook, line, and sinker for this technology to the exclusion of other advanced malware defenses. You may go from catching 15% of the bad stuff to more than 15%. But you aren’t going to get near 90% anytime soon. So layered security is still important regardless of what you hear. RIP, Web Filtering For network security historians, this may be the last year we will be able to see a real live web filter. The NGFW meteor hit a few years ago, and it’s causing a proverbial ice age for niche products including web filters and on-premise email security/anti-spam devices. The folks who built their businesses on web filtering haven’t been standing still, of course. Some moved up the stack to focus more on DLP and other content security functions. Others have moved whole hog to the cloud, realizing that yet another box in the perimeter isn’t going to make sense for anyone much longer. So consolidation is in, and over the next few years we will see a lot of functions subsumed by the NGFW. But in that case it’s not really an NGFW, is it? Hopefully someone will emerge from Stamford, CT with a new set of stone tablets calling the integrated perimeter security device something more relevant, like the Perimeter Security Gateway. That one gets my vote, anyway, which means it will never happen. Of course the egress filtering function for web traffic, and enforcement of policies to protect users from themselves, are more important than ever. They just won’t be deployed as a separate perimeter box much longer. Protecting the Virtually Cloudy Network We will all hear a lot about ‘virtual’ firewalls at this year’s show. For obvious reasons – the private cloud is everywhere, and cloud computing inherently impacts visibility at the network layer. Most of the network security vendors will be talking about running their gear in virtual appliances, so you can monitor and enforce policies on intra-datacenter traffic, and even traffic within a single physical chassis. Given the need to segment protected data sets, and how things like vMotion screw with our ability to know where anything really is, the ability to insert yourself into the virtual network layer to enforce security policy is a good thing. At some point, that is. But that’s the counterbalance you need to apply at the conference. A lot of this technology is still glorified science experiments, with much


The Increasing Irrelevance of Vulnerability Disclosure

Gunter Ollmann (now of IOActive) offers a very interesting analysis of why vulnerability disclosures don’t really matter any more. But I digress. The crux of the matter as to why annual vulnerability statistics don’t matter and will continue to matter less in a practical sense as time goes by is because they only reflect ‘Disclosures’. In essence, for a vulnerability to be counted (and attribution applied) it must be publicly disclosed, and more people are finding it advantageous to not do that. This is a good point. With an increasingly robust market for weaponized exploits, it’s very unwise to assume that the number of discovered software vulnerabilities bears any resemblance to the number of reported vulnerabilities. Especially given how much more attack surface we expose beyond the traditional operating system. But Gunter isn’t done yet. With today’s ubiquitous cloud-based services – you don’t own the software and you don’t have any capability (or right) to patch the software. Whether it’s Twitter, Salesforce.com, Dropbox, Google Docs, or LinkedIn, etc. your data and intellectual property is in the custodial care of a third-party who doesn’t need to publicly disclose the nature (full or otherwise) of vulnerabilities lying within their backend systems – in fact most would argue that it’s in their best interest to not make any kind of disclosure (ever!). Oh man, Gunter is opening up the cloudy Pandora’s Box. With the advent of SaaS, these vulnerabilities won’t be disclosed. Unless it’s a hacktivist exploiting the vulnerability, you won’t hear about the exploit either. The data will be lost and the breach will happen. There is nothing for you to patch, nothing for enterprises to control, nothing but cleaning up the mess when these SaaS providers inevitably suffer data losses. We haven’t seen a major SaaS breach yet. But we have all been around way too long to believe that can last. A lot of food for thought here. Photo credit: “Funeral Procession in Crossgar” originally uploaded by Burns Library, Boston College


Network-based Threat Intelligence: Understanding the Kill Chain

Our recently published Early Warning paper put forth the idea of leveraging external threat intelligence to better utilize internal data collection, further shortening the window between a weaponized attack and the ability to detect said attack. But of course, the Devil is in the details, and making this concept a reality means actually putting these ideas into practice. There are a number of different types of “threat intelligence” that can (and should) be utilized in an Early Warning context. We’ve already documented a detailed process map and metric model for undertaking malware analysis (check out our Malware Analysis Quant research). Being able to identify and search for those specific indicators of compromise on your devices can be invaluable for determining the extent of an outbreak. But what can be done to identify malicious activity if you don’t have the specific IoCs for the malware in question? That’s when we can look at the network to yield information about what may be a problem, even if the controls on the specific device fail. Why look at the network? Obviously it’s very hard to stage attacks, move laterally within an organization, and achieve the objective of data exfiltration without relying on the network. This means the attackers will necessarily leave a trail of bits on the network, which can provide a powerful indication of the kinds of attacks you’re seeing and which devices on your network are already compromised. In Network-based Threat Intelligence: Searching for the Smoking Gun, we’re going to dig into these network-based indicators and share tactics for leveraging them quickly to identify compromised devices. Hopefully shortening this detection window helps to contain imminent damage and prevent data loss. Finally we’ll discuss how this approach allows you to iterate towards a true Early Warning System. We’d like to thank our friends at Damballa for licensing the content at the end of the project, but as always we’ll be developing the research independently in accordance with our Totally Transparent Research methodology. With that preamble done, in order to understand how to detect signs of malware on your network, you need to understand how malware gains a presence in a network, spreads within that network, and finally moves the data outside of the network. That’s become known in industry parlance as The Kill Chain. Describing the Attack There has been plenty of research done through the years about how malware does its nefarious dealings. The best description of the Kill Chain we’ve seen was done back in 2009 by Mike Cloppert, which we recommend you check out for yourself. To highlight Mike’s terminology, let’s describe (at a high level) how malware works. Source: Security Intelligence: Attacking the Kill Chain Reconnaissance: The attackers first profile their targets. This means understanding how the target organization is structured, gleaning information about the control set, and assembling information that can be used in social engineering attacks. Weaponization: Next comes preparing malware to exploit a vulnerability on the device. This involves the R&D efforts to find these exploits, which allow the attacker to gain control of the victim’s device, and the development of a delivery system to get the exploit onto the target device. Delivery: Once the exploit is weaponized, it needs to be delivered to the target.
This usually means some kind of effort to get the target to take an action (usually clicking on a link, or via an application attack) that renders a web page to deliver the malware. Exploitation: This is the actual running of the exploit code on the target device to provide the attacker with control of the device. This can be a pretty complicated process and take advantage of known or unknown vulnerabilities in either the operating system or application code. Nowadays this tends to be a multi-stage process where a downloader gains control of the machine and then downloads additional exploit code. Another focus of this step is obfuscation of the attack to hide the trail of the attackers and stay below the radar. C2: Known nowadays as Command and Control, this is the process of the newly compromised device establishing contact with the attacker’s network to receive further instructions. Exfiltration: Once the attackers achieve the goals of their mission, they must package up the spoils and move them to a place where they can pick them up. Again, this can be a rather sophisticated endeavor to evade detection of the stolen data leaving the organization. There has been significant innovation in a number of aspects of the kill chain, but overall the process remains largely the same. Let’s talk a bit about how each step in the process has evolved over the past 3 years. Let’s start with reconnaissance, since that’s become far easier now that lots of targets seem to publish their life story and sordid details on public social networks. There are tools today (like Maltego) that can automatically assemble a fairly detailed profile of a person by mining social networks. Despite the protestations of many security professionals, folks aren’t going to stop sharing their information on social networks, and that is going to make the attackers’ recon efforts that much easier. In terms of weaponization, we’ve seen increasing sophistication and maturity in terms of how the exploits are developed and updated. Besides a third party market for good exploits creating a significant economic opportunity for those willing to sell their exploits, you see attackers using modern software development techniques like Agile programming, as well as undertaking sophisticated testing of the attack against not only the targets, but the majority of security software products designed to stop the attack. Finally, attackers now package up their code into “kits” for use by anyone with a checkbook (or Bitcoin account). So sophisticated malware is now within reach of unsophisticated attackers. Awesome. In terms of the delivery step, as mentioned above, given the rapid change inherent to malware many attackers opt to deliver a very small downloader onto the compromised device. Once C&C contact is established, the downloader will receive a
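For reference, here is a small sketch (our own summarization for illustration – not Cloppert’s code, and not a detection product) that captures the stages described above as a simple data structure, along with the kind of network-visible evidence each stage tends to leave. Weaponization happens on the attacker’s side, which is part of why the later stages are where network monitoring pays off.

```python
from enum import Enum

class KillChainStage(Enum):
    RECONNAISSANCE = "reconnaissance"
    WEAPONIZATION = "weaponization"
    DELIVERY = "delivery"
    EXPLOITATION = "exploitation"
    C2 = "command_and_control"
    EXFILTRATION = "exfiltration"

# Example network-observable evidence per stage (illustrative, not exhaustive).
NETWORK_EVIDENCE = {
    KillChainStage.RECONNAISSANCE: ["scans of exposed services", "targeted phishing pretexts"],
    KillChainStage.WEAPONIZATION: [],  # happens on attacker infrastructure; little to see locally
    KillChainStage.DELIVERY: ["inbound mail or web traffic carrying suspicious files or links"],
    KillChainStage.EXPLOITATION: ["a small downloader fetching second-stage payloads"],
    KillChainStage.C2: ["beaconing to new domains", "DGA-style bursts of DNS lookups"],
    KillChainStage.EXFILTRATION: ["large or unusual outbound transfers to unfamiliar destinations"],
}
```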


Incite 2/6/2013: The Void

It’s over. Sunday night, when the confetti fell on the Ravens and we finished cleaning up the residual mess from the Super Bowl party, the reality set in. No NFL for months. Yeah, people will start getting fired up about spring training, but baseball just isn’t my thing. Not as a spectator sport. I can take some comfort in the NFL being a 12-month enterprise now. In a few weeks the combine will give us a look at the next generation of football stars. Then we’ll start following free agency in early March to see who is going to be in and who is out. It’s like Project Runway, but with much higher stakes (and no Tim Gunn). I guess there are other sports to follow, like NCAA Basketball. The March Madness tournament is always fun. Until I’m blown out of all my brackets – then it’s not so fun anymore. But it’s not football. There will be flurries of activity throughout the year. Like when the schedule makers publish the 2013 NFL matchups in mid-April. I dutifully spend a morning putting all the games in my calendar. If only to make sure I don’t schedule business travel around those times. Lord knows, I only get 10-12 opportunities a year to see NFL football live, and no business trip is going to impact that. A man must have his priorities. Then the draft happens at the end of April. Between free agency and the draft you can start to envision what your favorite team will look like next season. Even through the void of no games, there are always shiny football objects to obsess about. If you are a Patriots fan, you can live vicariously through the Gronk throughout the offseason. First he’s making out with some girl, then he’s doing some wacky dance and falling on his $54 million forearm. It’s good to be the Gronk, evidently. Though you figure if he’s making $9MM a year, he could afford a T-shirt, right? There is also an NFL punditry machine that never sleeps. It’s like the security echo chamber times eleventy billion. Hundreds of bloggers, writers, and pontificators stirring the pot every day. They tweet incessantly and keep our attention focused on even the most minute details. If they aren’t covering the exploits of the Gronk, they are worrying about this guy’s contract negotiations, that guy’s salary cap number, which sap ended up on the waiver wire, some new dude’s endorsement deal, or that other guy’s rehab. No detail is too small to be tweeted and retweeted 20 times in the offseason. Then the real void sets in. After the draft analysis and re-analysis finishes up sometime in May, and they do the OTAs and other activities, things go dead until August. But by that point summer has begun, the kids are off at camp, and life is good. I’m trying to live more in the present, so taking a respite and maybe getting some work done won’t be a bad thing. Before we blink it will be time for training camp in August. At least it’s not hot in Atlanta that time of year. But we persevere anyway and pack up the car, lather on the sunscreen, and watch our modern-day gladiators installing new plays and scheming up ways to keep us on the edge of our seats for another season. We’ll wait in line to get the signature of some 3rd-string linebacker and be ecstatic. Why? Because it means the void will be ending soon. And soon enough Labor Day will usher in another season. –Mike Photo credits: Void originally uploaded by Jyotsna Sonawane Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway.
Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. Understanding Identity Management for Cloud Services Architecture and Design Integration Newly Published Papers Building an Early Warning System Implementing and Managing Patch and Configuration Management Defending Against Denial of Service Attacks Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments Pragmatic WAF Management: Giving Web Apps a Fighting Chance Incite 4 U Remembering the basics: Peter Wayner offers a great set of code development security tips. The cynic in me immediately asked, “Where do you get the time to implement these tips?” and “When does the time come to build tools, or when is it time to spend money on security testing products?” But when you look closer, he has chosen tips which are simply good development practices that make code more robust and more stable … they lead to higher-quality code. Rigorous input testing, modular (read: insulated) design, avoiding too many trust assumptions, building on certified code libraries, and so on, are all simply good programming methods (see the short sketch at the end of this post). This advice is not “bolt security on”, but instead to embrace good design and implementation techniques to improve security. Good stuff! – AL A new disclosure FAIL: Imagine you are a product vendor who actually cares about security. Someone reports a very serious exploit, says it’s being used in common exploit kits, and it could allow attackers to bypass all your security controls and pwn whoever they want. Not a good day. But you are a proactive type, so you engage your product security incident team and get cracking. Except, as recently discussed by Adobe, all you have is a video of the exploit, no vulnerability details, and eventually the researchers cut off contact. Alrighty then, what next? Rather than forgetting about it, Adobe tried their best to run down potential exploit options and bump up some security fixes that may or may not fix the potential problem, which may or may not be real. I give Adobe a ton of crap for all the security problems in their products, but the security folks definitely deserve some credit for trying their best
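As a tiny illustration of the “rigorous input testing” tip mentioned above (our own example, not taken from Wayner’s article), here is a sketch of validating untrusted input against explicit expectations instead of trusting the caller:

```python
import re

# Whitelist what a username may look like, rather than trying to blacklist bad input.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def parse_transfer_request(fields):
    """Validate an untrusted request (e.g. parsed form fields) before acting on it."""
    username = str(fields.get("username", ""))
    if not USERNAME_RE.match(username):
        raise ValueError("invalid username")

    try:
        amount = int(fields.get("amount", ""))
    except (TypeError, ValueError):
        raise ValueError("amount must be an integer")
    if not 1 <= amount <= 10000:
        raise ValueError("amount out of allowed range")

    return {"username": username, "amount": amount}
```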


RSA Conference Guide 2013: Key Themes

It’s that time of year again. Time to get ready for a week of mayhem, debauchery, and the hunt for tchotchkes. OK, there isn’t a lot of debauchery at the RSA Conference besides the Barracuda party at the Gold Club, which we hear is an establishment of high repute. Realistically, you’ll spend most of your week fending off sales droids, gawking at booth babes (much to the chagrin of the security echo chamber), and maybe learning something about what’s new and exciting in security. As in previous years, your pals at Securosis have put together our 4th annual RSA Guide to give you some perspective on what to expect at the show and some of our key trends for the upcoming year. And we even include the snark for free. These themes are compiled and written by the entire Securosis team, so don’t pay too much attention to the posting author when you call us out. We’ll give you blog-reading faithful an early look, over the next 10 days, at what we expect to see at the show. So today we start with the key themes… Anti-Malware Everywhere Security folks have been dealing with malicious software since the days when your networking gear came with a swoosh on it. Yes, you young whippersnappers – back when sneakernet was the distribution vector for viruses. But what’s old is new again, and driven by advanced attackers who figured out that employees like to click on things, we expect almost every vendor at the show to be highlighting their ability to not block advanced attacks. Oh, was that a Freudian slip? Yes, you’ll hear a lot about newfangled approaches to stop advanced malware. The reality remains that sophisticated attackers can and will penetrate your defenses, regardless of how many shiny objects you buy to stop them. That doesn’t mean you should use 5-year-old technology to check the compliance box, but that’s another story for another day. Of course, kidding aside, there will be some innovative technologies in play to deal with this malware stuff. The ability to leverage cloud-based sandboxes that block malware on the network, advanced endpoint agents that look an awful lot like HIPS that works better, and threat intelligence services to learn who else got pwned and by what, are poised to improve detection. Of course these new tools aren’t a panacea, but they aren’t the flaming pile of uselessness that traditional AV has become. Many of the emerging products and services are quite young, so there won’t be much substantiation beyond outrageous claims about blocking this attack or that attack. So leave your checkbook at home but spend some time learning about the different approaches to stopping advanced malware. This will be an area of great interest to everyone through 2013. BYOD Is No BS We may not all be Anonymous, but we are certainly all consumers. It seems a little fruit company in Cupertino sparked the imaginations of technology users everywhere, so now the rest of us have to put out the fire. Technology used to be something you used at work, but now it is embedded into the fabric of our daily lives. So we shouldn’t be surprised as the workforce continually demands work tools that keep up with the things the kids are playing with in the back seat. While consumerization of IT is the trend of people bringing consumer-class devices and services into the workplace, BYOD encompasses the policies, processes, and technologies to safely enable this usage. 
In the past year we have moved beyond the hype stage, and we see more and more companies either developing or implementing their BYOD and general consumerization strategies. This trend won’t go away, you can’t stop it, and if you think you can block it you will get to find a new job. Even the government and financial services companies are starting to crack and take hard looks at supporting consumer devices and services. On the device side we see the core as Mobile Device Management, but MDM is merely the hook to enable all the other interesting technologies and controls. The constantly changing nature of BYOD and varied enterprise cultures will likely keep the market from ever maturing around a small set of options. We will see a huge range of options, from the mostly-mature MDM, to network access gateways (the rebirth of NAC), to containerized apps and security wrappers, to new approaches to encryption and DRM. And each of them is right… for someone. There is no silver bullet, but wandering the show floor is a great opportunity to see all the different approaches in one place and think about where they fit into your strategy and culture. Are you lockdown artists? Free-loving tech hippies? Odds are you can find the pieces to meet your requirements, but it definitely isn’t all completely there yet, regardless of what the sales droids say. The main thing to focus on is whether the approach is really designed for BYOD, or whether it’s just marketed as BYOD. There is a huge difference, and a fair number of vendors haven’t yet adjusted their products to this new reality beyond cosmetic changes. Think hard about which controls and deployment models will fit your corporate culture and, especially, workflows. Don’t look at approaches that take these wonderful consumer experiences and suck the life out of them, reverting to the crappy corporate tech you know you hate yourself. Yes, there will be a lot of hype, but this is a situation where we see more demand than supply at this point. Viva la revolucion! Security Big Data In the past two years at RSA we have heard a lot about risk management and risk reduction, which basically mean efficiently deploying security to focus on threats you face – rather than hypothetical threat scenarios or buying more protection than you need. This year’s risk management will be security analytics. Analytics is about risk identification, but the idea is that big data clusters mine the sea


The Data Breach Triangle in Action

I refer back to Rich’s Data Breach Triangle over and over again. It’s such a clear and concise way to describe a data breach – past or potential. And we continue to see examples of how focusing on breaking one leg of the triangle works. From How the RSA Attackers Swung and Missed at Lockheed Martin on Threatpost: “But instead of closing the door and shutting the attackers out, Lockheed’s team began monitoring their activities to see what they were doing, where they were going and what tactics they used.” The typical incident response playbook involves finding a compromised device and fixing it, but with today’s advanced attacks you can’t be sure you actually have eliminated the threat with a single remediation activity. So in some cases it makes more sense to observe the attackers, rather than [trying to] clean them up immediately. “The lesson, Adegbite said, is that preventing attackers from getting anything useful off a network is far more important than trying to prevent every attacker from getting in. “The investment to stop people from coming in is too high,” he said.” Break the egress leg of the triangle and there is no breach. And that’s why we focus on egress filtering and active protections like DLP in an effort to prevent exfiltration.
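As a rough sketch of what breaking the egress leg can look like in practice (hypothetical allowlist and thresholds – not Lockheed’s actual controls), here is a minimal egress review that flags outbound flows to unapproved destinations or with unusually large payloads before the data actually leaves the building:

```python
APPROVED_DESTINATIONS = {"203.0.113.10", "198.51.100.25"}  # example allowlist (documentation IPs)
MAX_BYTES_PER_FLOW = 50 * 1024 * 1024                      # flag anything over ~50 MB per flow

def review_egress(flows):
    """flows: iterable of dicts like {"src": ip, "dst": ip, "bytes_out": int}.
    Returns the flows worth a second look before they complete."""
    suspicious = []
    for flow in flows:
        unknown_destination = flow["dst"] not in APPROVED_DESTINATIONS
        large_transfer = flow["bytes_out"] > MAX_BYTES_PER_FLOW
        if unknown_destination or large_transfer:
            suspicious.append(flow)
    return suspicious

# Example: a single large push to an unapproved address is exactly the kind of
# exfiltration attempt the Data Breach Triangle says we need to break.
print(review_egress([{"src": "10.1.2.3", "dst": "192.0.2.99", "bytes_out": 120000000}]))
```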


Improving the Hype Cycle

Gartner’s Hype Cycle is one of my favorite market models. It very succinctly describes the ridiculous way PR and other external hype factors make more of a technology than it really is. When many of us show up at the RSA Conference at the end of the month, we will get our best view of the Hype Cycle in action. Most of the stuff heavily hyped at the show tends to be (roughly) 12 to 18 months from hitting, if it ever does. But as with any industry research, where something lands in the Hype Cycle is open to interpretation and opinion. It’s as squishy as can be – there are no real attributes that can pinpoint where in the hype cycle any technology fits. These factors may exist, but the Big G certainly doesn’t talk about them. And when I see someone saying an over-hyped technology like Big Data has hit the “trough of disillusionment”, I scratch my head. Gartner Inc. said that Big Data has fallen into a “trough of disillusionment,” due to its complexity. The research firm said current obstacles to adoption could be eliminated, though, as Big Data tools such as Hadoop are integrated into mainstream analytic applications. You can’t really disagree with that statement, but I’d still say Big Data is closer to the peak of inflated expectations than to the trough. It’s not like a zillion companies have tried and failed at their Big Data deployments. These folks are still trying to figure out what Hadoop and MapReduce are. This kind of pronouncement is meant to push the market along and facilitate the integration that needs to happen over time to provide more demonstrable and sustainable value to customers. But there is still something missing from this kind of analysis. It would be very helpful to overlay some kind of growth and/or market size numbers onto the Hype Cycle. So when a technology is climbing the cycle, market size is relatively small but growth is off the chart. As it descends into the trough the number of customers grows, but market growth slows as companies struggle to effectively institutionalize the technology. And then as it exits the trough into the plateau of productivity, revenues start to grow again significantly but at a more predictable rate. The problem is that public revenue and growth metrics for private companies are about as much hype as the technologies themselves. But that’s the missing piece (IMO) to really understand where these markets are in their development. Or am I still hammered from yesterday’s Super Bowl festivities and talking nonsense? Photo credit: “Hype” originally uploaded by nouspique


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.