RSAC wrap-up. Same as it ever was.

The RSA Conference is over, and it put up some massive numbers (for security). But what does it all mean? Can all 450 vendors on the show floor possibly survive? Do any of them add value? Do bigger numbers mean we are any better off than last year? And how can we possibly balance being an industry, a community, and a profession simultaneously? Not that we answer any of that, but we can at least keep you entertained for 13 minutes. Watch or listen.


Network-based Threat Detection: Looking for Indicators

Now that RSAC is behind us, it’s time to get back to our research agenda, so we pick up Network-based Threat Detection where we left off. In that first post we made the case that math and context are the keys to detecting attacks from network activity, given that we cannot totally prevent endpoint compromise. Attackers always leave a trail on the network, so we need to collect and analyze network telemetry to determine whether the communication between devices, and the content of those communications, is legitimate or warrants additional investigation.

Modern malware relies heavily on the network to initiate the connection between the device and its controller, download attacks, perform automated beaconing, and so on. Fortunately these activities show deterministic patterns, which enable you to pinpoint malicious activity and identify compromised systems. Attackers bet they can obscure their communications within the tens of billions of legitimate packets traversing enterprise networks on any given day, and that defenders lack the sophistication to identify the giveaway patterns. But if you can identify the patterns, you have an opportunity to detect attacks.

Command and Control

Command and Control (C&C) traffic is communication between compromised devices and botnet controllers. Once the device executes malware (by whatever means) and the dropper is installed, the device searches for its controller to receive further instructions. There are two main ways to identify C&C activity: the destination of the traffic, and the communication patterns between devices and controllers.

The industry has been using IP reputation for years to identify malicious destinations on the Internet. Security researchers evaluate each IP address and determine whether it is ‘good’ or ‘bad’ based on activity they observe across a massive network of sensors. IP reputation turns out to be a pretty good indicator that an address has been used for malicious activity at some point. Traffic to known-bad destinations is definitely worth checking out, and perhaps even blocking. But malicious IP addresses (and even domains) are not active for long, as attackers cycle through addresses and domains frequently. Attackers also use legitimate sites as C&C nodes, which can leave innocent (but compromised) sites with a bad reputation. So the downside to blocking traffic to sites with bad reputations is the risk of irritating users who want to use the legitimate site. Our research shows increasing comfort with blocking, because the great majority of addresses with bad reputations have legitimately earned them.

Keep in mind that IP reputation is not sufficient to identify all the C&C traffic on your network – many malicious sites don’t show up on IP reputation lists. So next look for other indications of malicious activity on the network, which depend on how compromised devices find their controllers. With the increasing use of domain generation algorithms (DGAs), malware doesn’t need to be hard-coded with specific domains or IP addresses – instead it cycles through a set of domains according to its DGA, searching for a dynamically addressed C&C controller; the addresses cycle daily. This gives attackers tremendous flexibility to ensure newly compromised devices can establish contact, despite frequent domain takedowns and C&C interruptions. But these algorithms look for controllers in a predictable way, making frequent DNS calls in specific patterns, so DNS traffic analysis has become critical for identifying C&C traffic, along with monitoring packet streams.
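To make the DGA pattern concrete, here is a minimal sketch of one common heuristic, entirely our own illustration rather than any product’s detection logic: score DNS query names by character entropy, and flag hosts issuing bursts of lookups for high-entropy domains. Real products combine many more signals, and both thresholds below are assumptions you would tune against your own DNS baseline.

```python
import math
from collections import Counter, defaultdict

def entropy(label: str) -> float:
    """Shannon entropy of a domain label; DGA-generated names tend to score high."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

ENTROPY_THRESHOLD = 3.5   # assumption: tune against your own DNS traffic
BURST_THRESHOLD = 20      # assumption: suspicious lookups per host before alerting

def score_dns_log(queries):
    """queries: iterable of (source_host, queried_domain) tuples from DNS logs."""
    suspicious = defaultdict(list)
    for host, domain in queries:
        label = domain.split(".")[0]   # leftmost label carries the DGA noise
        if len(label) >= 10 and entropy(label) > ENTROPY_THRESHOLD:
            suspicious[host].append(domain)
    # Hosts hammering many random-looking domains are DGA candidates
    return {h: d for h, d in suspicious.items() if len(d) >= BURST_THRESHOLD}
```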
Outliers

Identifying C&C traffic before the compromised device becomes a full-fledged member of the botnet is optimal. But if you miss that window, once the device is part of the botnet you can look for indications that it is being used as part of an attack chain. You do this by looking for outliers: devices acting atypically. Does this sound familiar? It should – anomaly detection has been used to find attackers for over a decade, typically using NetFlow. You profile normal traffic patterns for users on your network (source/destination/protocol), and then look for situations where traffic varies outside your baseline and exceeds tolerances. Network-based anomaly detection was reasonably effective, but as adversaries got more sophisticated, detection needed to dig more deeply into traffic. Deep packet inspection and better analytics enabled detection offerings to apply context to traffic. Attack traffic tends to occur in a few cycles:

• Command and Control: As described above, devices communicate with botnet controllers to join the botnet.
• Reconnaissance: After compromising the device and gaining access via the botnet, attackers communicate with internal devices to map the network and determine the most efficient path to their target.
• Lateral Movement: Once the best path to the target is identified, attackers systematically move through your network toward their intended target, compromising additional devices along the way.
• Exfiltration: Once the target device is compromised, the attacker needs to move the data from the target device to somewhere outside the network. This can be done using tunnels, staging servers, and other means to obfuscate the activity.

Each of these cycles includes patterns you can look for to identify potential attacks. But this still isn’t a smoking gun – at some point you will need to apply additional context to understand intent. Analyzing content in the communication stream is the next step in identifying attacks.

Content

One way to glean more context from network traffic is to understand what is being moved. With deep packet inspection and session reassembly, you can perform file-based analysis on content as well, then compare against baselines to look for anomalies in the movement of content within your network:

• File size: For example, if a user moved 2GB of traffic over a 24-hour period when they normally move no more than 100MB, that should trigger an alert. Perhaps it’s nothing, but it should be investigated (a minimal sketch of this kind of check follows below).
• Time of day: Similarly, if a user doesn’t normally work in the middle of the night, but does so two days in a row by themselves, that could indicate malicious activity. Of course it might be just a big project, but it bears investigation.
• Simple DLP: You can fingerprint files to look for sensitive content, or regular expressions which match account numbers or other protected data. That isn’t full DLP-style classification and analysis, but it could flag something malicious without the…
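The file size check above is just a baseline comparison, so here is a minimal sketch of the idea, again our own illustration with made-up numbers: build a per-user volume baseline, then alert when today’s movement exceeds it by several standard deviations.

```python
from statistics import mean, stdev

def build_baseline(history):
    """history: {user: [daily_bytes_moved, ...]} from a prior observation window."""
    return {u: (mean(v), stdev(v)) for u, v in history.items() if len(v) > 1}

def flag_outliers(baseline, today, sigmas=3.0):
    """Alert on users whose volume today exceeds their baseline by `sigmas` std devs."""
    alerts = {}
    for user, moved in today.items():
        if user not in baseline:
            continue   # no history yet; handle new users separately
        mu, sd = baseline[user]
        if sd and moved > mu + sigmas * sd:
            alerts[user] = (moved, mu)
    return alerts

# A user who normally moves ~100MB a day suddenly moves 2GB
history = {"alice": [90e6, 110e6, 95e6, 105e6]}
print(flag_outliers(build_baseline(history), {"alice": 2e9}))
```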


RSA Conference Guide 2015 Deep Dives: Security Management

Last year Big Data was all the rage at the RSAC in terms of security monitoring and management. So the big theme this year will be… (drum roll, please)… Big Data. Yes, it’s more of the same, though we will see security big data called a bunch of different things—including insider threat detection, security analytics, situational awareness, and probably two or three more where we have no idea what they even mean. But they all have one thing in common: math. That’s right—remember those differential equations you hated in high school and college? Be glad that helpful freshman in AP Calculus actually liked math. Those are the folks who will save your bacon, because their algorithms are helping detect attackers doing their thing.

Detecting the Insider

It feels a bit like we jumped into a time machine and ended up back in 1998. Or 2004. Or 2008. You remember—that year when everyone was talking about insiders and how they were robbing your organization blind. We still haven’t solved the problem, because it’s hard. So every 4-5 years the vendors get tired of using black-masked external-attacker icons in their corporate PowerPoint decks, and start talking about catching insiders instead. This year will be no different—you will hear a bunch of noise at RSAC about the insider threat. The difference this year is that the math folks we mentioned earlier have put their algorithms to work finding anomalous behaviors inside your network, and profiling what insiders typically do while they are robbing you blind. You might even be able to catch them before Brian Krebs calls to tell you all about your breach. These technologies and companies are pretty young, so you will see them on the outside rings of the conference hall and in the RSAC Innovation Sandbox, but they are multiplying like [name your favorite pandemic]. It won’t be long before the big SIEM players and other security management folks (yes, vulnerability management vendors, we’re looking at you) start talking about users and insiders to stay relevant. Don’t you just love the game?

Security Analytics: Bring Your PhD

The other epiphany many larger organizations had over the past few years is that they already have a crapton of security data. You can thank PCI-DSS for making them collect and aggregate all sorts of logs over the past few years. Then the forensics guys wanted packets, so you started capturing those too. Then you had the bright idea to put everything into a common data model. Then what? Your security management strategy probably looked something like this:

1. Collect data.
2. Put all data in one place.
3. ???
4. Detect attacks.

This year a bunch of vendors will be explaining how they can help you with step 3, using their analytical engines to answer questions you didn’t even know to ask. They’ll use all sorts of buzzwords like ElasticSearch and Cassandra, talk about how cool their Hadoop is, and convince you they have data scientists thinking big thoughts about how to solve the security problem, and that their magic platform will do just that. Try not to laugh too hard at the salesperson. Then find an SE and have them walk you through setup and tuning of the analytics platform. Yes, it needs to be tuned, regardless of what the salesperson tells you. How do you start? What data do you need? How do you refine queries? How do you validate a potential attack? Where can you send data for more detailed forensic analysis? If the SE has on dancing shoes, the product probably isn’t ready yet—unless you have your own group of PhDs you can bring to the table.
Make sure the analytics tool actually saves time, rather than just creating more detailed alerts you don’t have time to handle. We’re not saying PhDs aren’t cool—we think it’s great that math folks are rising in prominence. But understand that when your SOC analyst wants you to call them a “Data Scientist”, it’s so they can get a 50% raise for joining another big company.

Forensication

We have finally reached the point as an industry where practitioners no longer actually believe they can stop all attacks. We always knew that story was less real than the tooth fairy, but way too many folks actually believed it. Now that the ruse is done, we can focus on the fact that at some point soon you will be investigating an incident. So you will have forensics professionals onsite, trying to figure out what actually happened. The forensicators will ask to see your data. It’s good you have a crapton of security data, right? But you will increasingly be equipping your internal team for the first few steps of the investigation. So you will see a lot of forensics tools at the RSAC, and forensics companies repositioning as security shops. They will show their forensics hooks within your endpoint security products and your network security controls. Almost every vendor will have something to say about forensics. Mostly because it’s shiny.

Even better, most vendors are fielding their own incident response service. It is a popular belief that if a company can respond to an incident, they are well positioned to sell product at the back end of the remediation/recovery. Of course that creates a bull market for folks with forensics skills. These folks can jump from company to company, driving up compensation quickly. They are on the road 5 days a week anyway, if not more, so why would they care which company is on their business cards? This wave of focus on forensics, and the resulting innovation, has been a long time coming. The tools are still pretty raw and cater to very sophisticated customers, but we see progress. This progress is absolutely essential – there aren’t enough skilled forensics folks, so you need a way to make your less skilled folks more effective with tools and automation. Which is a theme throughout the RSAC-G this year.

SECaaS or SUKRaaS

The other downside to an overheated security environment is…


RSA Conference Guide 2015 Deep Dives: Identity and Access Management

No Respect

Identity is one of the more difficult topics to cover in our yearly RSAC Guide, because identity issues and trends don’t grab headlines. Identity and Access Management vendors tend to be light-years ahead of most customers. You may be thinking “Passwords and Active Directory: what else do I need to know?” which is pretty typical. IAM responsibilities sit in a no-man’s land between security, development, and IT… and none of them wants ownership. Most big firms now have a CISO, a CIO, and a VP of Engineering, but when was the last time you heard of a VP of Identity? Director of Identity? No, we haven’t either. That means customers—and cloud providers, as we will discuss in a bit—are generally not cognizant of important advancements. But those identity systems are used by every employee and customer. Unfortunately, despite ongoing innovation, much of what gets attention is somewhat backwards.

The Cutting Edge—Role-Based Access Control for the Cloud

Roles, roles, and more roles. You will hear a lot about Role-Based Access Control from the ‘hot’ product vendors in cloud, mobile management, and big data. It’s ironic—these segments may be cutting-edge in most ways, but they are decidedly backwards for IAM. Kerberos, anyone? The new identity products you will hear most about at this year’s RSAC show—Azure Active Directory and AWS Access Control Lists—are things most of the IAM segment has been trying to push past for a decade or more. We are afraid to joke about it, because an “identity wizard” to help you create ACLs “in the cloud” could become a real thing. Despite being outdated, RBAC keeps popping up unwanted, like that annoying paper clip, because customers are comfortable with it and even look for those types of solutions. Attribute-Based Access Control, Policy-Based Access Control, real-time dynamic authorization, and fully cloud-based IDaaS are all impressive advances, available today. Heck, even Jennifer Lawrence knows why these technologies are important—her iCloud account was apparently hacked because there was no brute-force checker to protect her. Regardless, these vendors sit unloved, on the outskirts of the convention center floor.
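To show the difference between a static role check and the dynamic authorization mentioned above, here is a minimal sketch of an attribute-based access decision. It is our illustration only; the attribute names and the policy rule are hypothetical.

```python
# Illustrative ABAC: each policy rule is a predicate over subject, resource,
# and request-context attributes, evaluated at decision time.
POLICIES = [
    lambda subj, res, ctx: subj["department"] == res["owning_department"]
                           and subj["clearance"] >= res["sensitivity"]
                           and ctx["device_managed"]
                           and 8 <= ctx["hour_utc"] <= 18,
]

def authorize(subject, resource, context):
    """Grant access if any policy rule evaluates true; deny by default."""
    return any(rule(subject, resource, context) for rule in POLICIES)

print(authorize(
    {"department": "finance", "clearance": 3},
    {"owning_department": "finance", "sensitivity": 2},
    {"device_managed": True, "hour_utc": 14},
))  # True; flip device_managed to False and the same user is denied
```

Contrast that with a pure RBAC check such as `"finance_admin" in user.roles`, which cannot account for device state or time of request without minting ever more roles.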
Standard Bearer

We hear it all the time from identity vendors: “Standards-based identity instills confidence in customers.” But the vendors cannot seem to agree on a standard. OpenID vs. SAML vs. OAuth, oh my! Customers do indeed want standards-based identity, but they fall asleep when this debate starts. There are dozens of identity standards in the CSA Guidance, but which one is right for you? They all suffer from the same issue: they are filled with too many options. As a result, interoperability is a nightmare, especially for SAML. Getting any two SAML implementations to talk to each other demands engineering time from both product teams. IAM in general, and SAML specifically, beautifully illustrate Tanenbaum’s quip: “The nice thing about standards is that you have so many to choose from.” Most customers we speak with don’t really care which standard is adopted—they just want the industry to pick one and be done with it. Until then they will focus on something more productive, like firewall rules and password resets. They are waiting for it to be over so they can push a button to interoperate—you do have an easy button, right?

Good Dog, Have a Biscuit

We don’t like to admit it, but in terms of mobile payments and mobile identity, the U.S. is a laggard. Many countries we consider ‘backwards’ were using mobile payments as their principal means to move money long before Apple Pay was announced. But those solutions tend to be carrier-specific; U.S. adoption was slowed by turf wars between banks, carriers, and mobile device vendors. Secure elements or HCE? Generic wallets or carrier payment infrastructure? Tokens or credit cards? Who owns the encryption keys? Do we need biometrics, and if so which are acceptable? Each player has a security vision which depends on, and only supports, its own business model. Other than a shared desire to stop sending credit card numbers to merchants over SSL, there has been little agreement. For several years now the FIDO Alliance has been working on an open and interoperable set of standards to promote mobile security. The standard does not just establish a level playing field for identity and security vendors—it defines a user experience that makes mobile identity and payments easier. So the FIDO standard is becoming a thing. It enables vendors to hook into the framework and provide their solutions as part of the ecosystem. You will notice a huge number of vendors on the show floor touting support for the FIDO standard. Many demos will look pretty similar, because they all follow the same privacy, security, and ease-of-use standards, but all oars are finally pulling in the same direction.


RSA Conference Guide 2015 Deep Dives: Endpoint Security

What you’ll see at the RSAC in terms of endpoint security is really more of the same. Advanced attacks blah, mobile devices blah blah, AV-vendor hatred blah blah blah. Just a lot of blah… But we are still recovering from the advanced attacker hangover, which made painfully clear that existing approaches to preventing malware just don’t work. So a variety of alternatives have emerged to do it better. Check out our Advanced Endpoint and Server Protection paper to learn more about where the technology is going. None of these innovations has really hit the mainstream yet, so it looks like the status quo will prevail again in 2015. But the year of endpoint security disruption is coming—perhaps 2016 will be it…

Whitelisting becomes Mission: POSsible

Since last year’s RSAC many retailers have suffered high-profile breaches. But don’t despair—if your favorite retailer hasn’t yet sent you a disclosure notice, it will arrive with your new credit card just as soon as they discover the breach. And why are retailers so easy to pop? Mostly because many Point-of-Sale (POS) systems use modern operating systems like Embedded Windows XP. These devices are maintained using state-of-the-art configuration and patching infrastructures—except when they aren’t. And they all have modern anti-malware protection, unless they don’t have even ineffective signature-based AV. POS systems have been sitting ducks for years. Quack quack. Clearly this isn’t an effective way to protect devices that capture credit cards and handle money, yet happen to run on circa-1998 operating systems. So retailers, and everyone else dealing with kiosks and POS systems, have gotten the whitelisting bug, big-time. And this bug doesn’t send customer data to carder exchanges in Eastern Europe. What should you look for at the RSAC? Basically a rep who isn’t taking an order from some other company.
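The core of application whitelisting is simple: hash a binary before it runs, and allow only known-good hashes. The sketch below is our own illustration of that decision logic only; real products hook process creation in the kernel and manage the allow list centrally, and the sample hash is a placeholder.

```python
import hashlib
from pathlib import Path

# Placeholder allow list: SHA-256 hashes of approved POS binaries, which a
# real product would distribute and update from a central console.
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash the binary in chunks so large files don't blow up memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: Path) -> bool:
    """Deny by default: only binaries on the allow list get to run."""
    return sha256_of(path) in ALLOWED_HASHES
```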
Calling Dr. Quincy…

We highlighted a concept last year which we call endpoint monitoring: a method for collecting detailed and granular telemetry from endpoints, to facilitate forensic investigation after a device compromise. As it turned out, that actually happened—our big research friends who shall not be named have dubbed this function ETDR (Endpoint Threat Detection and Response). And ETDR is pretty shiny nowadays. As you tour the RSAC floor, pay attention to ease of use. The good news is that some of these ETDR products have been acquired by big companies, so they will have a bunch of demo pods in their huge booths. If you want to check out a startup you might have to wait—you can only fit so much in a 10’ by 10’ booth, and we expect these technologies to garner a lot of interest. And since the RSAC has outlawed booth babes (which we think is awesome), maybe the crowded booths will feature cool and innovative technology rather than spandex and leather. While you are there you might want to poke around a bit, to figure out when your ETDR vendor will add prevention to their arsenal, so you can finally look at alternatives to EPP. Speaking of which…

Don’t look behind the EPP curtain…

The death of endpoint protection suites has been greatly exaggerated. Which continues to piss us off, to be honest. In what other business can you be largely ineffective, cost too much, and slow down the entire system, and still sell a couple billion dollars’ worth of product annually? The answer is none, but the reason companies still spend the money is compliance. If EPP were a horse we would have shot it a long time ago. So what is going to stop the EPP hegemony? We need something that can protect devices and drive down costs, without killing endpoint performance. It will take a vendor with some cojones. Companies offering innovative solutions tend to be content positioning them as complementary to EPP suites, so they don’t have to deal with things like signature engines (to keep QSAs who are stuck in 2006 happy) or full disk encryption. Unfortunately cojones will be in short supply at the 2015 RSAC—even in a heavily male-dominated crowd. But at some point someone will muster the courage to acknowledge the EPP emperor has been streaking through RSAC for 5 years, and finally offer a compelling package that satisfies compliance requirements. Can you do us a favor on the show floor? Maybe drop some hints that you would be happy to divert the $500k you plan to spend renewing EPP this year to something that doesn’t suck instead.

Mobility gets citizenship…

As we stated last year, managing mobile devices is quite the commodity now. The technology keeps flying off the shelves, and MDM vendors continue to pay lip service to security. But last year devices were not really integrated into the organization’s controls and defenses. That has started to change. Thanks to a bunch of acquisitions, most MDM technology is now controlled by big IT shops, so we will start to see the first linkages between managing and protecting mobile devices and the rest of the infrastructure. Leverage is wonderful, especially now, when we have such a severe skills gap in security. Now that mobile devices are full citizens, what does that even mean? It means MDM environments are now expected to send alerts to the SIEM and integrate with the service/operations infrastructure. They need to speak enterprise language and play nice with other enterprise systems. Even though there have been some high-profile mobile app problems (such as providing access to a hotel chain’s customer database), there still isn’t much focus on assessing apps and ensuring security before they hit an app store. We don’t get it. You might check out the folks assessing mobile apps (mostly for privacy issues, rather than mobile malware) and report back to your developers, so they can ignore you. Again.

IoT: Not so much

It wouldn’t be an RSAC-G if we didn’t do at least a little clickbaiting. Mostly just to annoy people who are hoping for all sorts of groundbreaking research on protecting the Internet of Things (IoT). At…


RSA Conference Guide 2015 Deep Dives: Network Security

We had a little trouble coming up with a novel and pithy backdrop for what you will see in the Network Security space at RSAC 2015. We wonder whether this year we will see the first IoT firewall, because hacking thermostats and refrigerators has made threat models go bonkers. The truth is that most customers are trying to figure out what to do with the new next-generation devices they already bought. We shouldn’t wonder why the new emperor looks a lot like the old emperor, when we dress our new ruler (the NGFW) up in clothes (rules) that look so similar to our old-school port- and protocol-based rulesets. But there will be some shiny stuff at this year’s conference, largely focused on detection. This is a very productive and positive trend—for years we have been calling for a budget shift away from ineffective prevention technologies toward detecting and investigating attacks. We see organizations with mature security programs making this shift, but far too many others continue to buy the marketing hyperbole: “of course you can block it.” Given that no one really knows what ‘it’ is, we have a hard time understanding how we can make real progress blocking more stuff in the coming year. Which means you need to respond faster and better. Huh, where have we heard that before?

Giving up on Prevention…

Talking to many practitioners over the past year, we sensed a capitulation of sorts. There is finally widespread acknowledgement that it is hard to reliably prevent attacks. And we are not just talking about space alien attacks from a hacking UFO—it’s hard enough for most organizations to deal with Metasploit. Of course we are not going all Jericho on you and advocating giving up on prevention on the network. Can you hear the sigh of relief from all the QSAs? Especially the ones feeling pressure to push full isolation of protected data (as opposed to segmentation) during assessments. Most of those organizations cannot even manage one network, so let’s have them manage multiple isolated environments. That will work out just great. There will still be a lot of the same old same old—you still need a firewall and IPS to enforce both positive (access control) and negative (attack) policies on your perimeter. You just need to be realistic about what they can block—even the shiny NGFW models. Remember that network security devices are not just for blocking attacks. We still believe segmentation is your friend—you will continue to deploy those boxes, both to keep the QSAs happy and to make sure critical data is separated from not-so-critical data. And you will also hear all about malware sandboxes at the RSAC this year. Again. Everyone has a sandbox—just ask them. Except some don’t call them sandboxes. We guess they are discriminating against kids who like sand, in today’s distinctly un-politically-correct world. They might be called malware detonation devices or services. That sounds shinier, no? But if you want to troll the reps on the show floor (and who doesn’t?), get them to debate an on-premise versus a cloud-based approach to detonation. It doesn’t really matter which side of the fence they are on—it’s fun seeing them get all red in the face when you challenge them. Finally, you may hear some lips flapping about data center firewalls. Basically just really fast segmentation devices. If they try to convince you they can detect attacks on a 40Gbps data center network, and flash their hot-off-the-presses NSS Labs results, ask what happens when they turn on more than 5 rules at a time. If they bother you, say you plan to run SSL on your internal networks, so the device needs to inspect all traffic. But make sure an EMT is close by, as that strategy has been known to cause aneurysms in sales reps.

To Focus on Detection…

So if many organizations have given up trying to block all attacks, what the hell are they supposed to do? Spend tons of money on more appliances to detect the attacks they missed at the perimeter, of course. And the security industrial complex keeps chugging along. You will see a lot of focus on network-based threat detection at the show. We ourselves are guilty of fanning the flames a bit, with our new research on that topic. The fact is, the technology is moving forward. Analyzing network traffic patterns, profiling and baselining normal communications, and then looking for stuff that’s not normal gives you a much better chance of finding compromised devices on your networks—before your new product schematics wind up in some nondescript building in Shanghai, Chechnya, Moscow, or Tel Aviv. What’s new is the depth of analysis possible with today’s better analytics. Booth personnel will bandy about terms like “big data” and “machine learning” as if they understand what they even mean. But baselines aren’t based only on NetFlow records or DNS queries anymore—they can now incorporate very granular metadata from network traffic, including identity, content, frequency of communication, and various other attributes that get math folks all hot and bothered. The real issue is making sure these detection devices can work with your existing gear, and aren’t just a flash in the pan about to be integrated as features into your perimeter security gateway. Okay, we would be pulling your leg if we said any aspect of detection won’t eventually become an integrated feature of other network security gear. That’s just the way it goes. But if you really need to figure out what’s happening on your network, visit these vendors on the floor.
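As a taste of the math involved, consider one cheap baseline check, simplified by us for illustration: compromised hosts phoning home tend to contact the same destination at suspiciously regular intervals, so a low coefficient of variation in the gaps between connections is a useful signal. The threshold is an assumption.

```python
from statistics import mean, stdev

def beacon_score(timestamps):
    """Coefficient of variation of inter-connection gaps for one host/destination
    pair; machine-driven beaconing is far more regular than human traffic."""
    if len(timestamps) < 4:
        return None   # too few samples to judge
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mu = mean(gaps)
    return (stdev(gaps) / mu) if mu else None

# Connections every ~300 seconds: near-zero score, very regular, worth a look
flow_times = [0, 299, 601, 900, 1202, 1499]
score = beacon_score(flow_times)
if score is not None and score < 0.1:   # illustrative threshold
    print(f"possible beacon, CV={score:.3f}")
```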
While Consolidating Functions…

What hasn’t changed is that big organizations think they need separate devices for all their key functions. Or has it? Is best of breed (finally) dead? Well, not exactly, but it has more to do with politics than technology. Pretty much all the network security players have technologies that allow authorized traffic and block attacks. Back when category…


RSA Conference Guide 2015 Deep Dives: Application Security

Coming Soon to an Application Near You: DevOps

For several years you have been hearing about the wonders of Agile development, and the wondrous things it has done for software development companies. Agile development isn’t a product—it is a process change, a new way for developers to communicate and work together. It has been effective enough to pull almost every firm we speak with away from traditional waterfall development. Now there is another major change on the horizon, called DevOps. Like Agile, it is mostly a process change. Unlike Agile, it is more operationally focused, relying heavily on tools and automation for success. That means not just your developers will be Agile—your IT and security teams will be, too! The reason DevOps is important at the RSA Conference—the reason you will hear a lot about it—is that it offers a very clear and positive effect on security. Perhaps for the first time, we can automate many security requirements, embedding them into the daily development, QA, and operational tasks we already perform. DevOps typically goes hand in hand with continuous integration and continuous deployment. For software development teams this means code changes go from idea to development to live production in hours rather than months. Sure, users are annoyed that the customer portal never works the same way twice, but IT can deliver new code faster than sales and marketing wanted it, which is itself something of a miracle. Deployment speed makes a leap in the right direction, but the new pipeline provides an even more important foundation for embedding security automation into processes. It’s still early, but you will see the first security tools reworked for DevOps at this year’s RSA Conference.
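What does embedding security into the pipeline actually look like? Here is a minimal sketch, entirely our own and with hypothetical file and package names: a build step that fails the deployment when a pinned dependency appears on a known-vulnerable list. In practice the list would come from a vulnerability scanner or advisory feed wired into the same gate.

```python
import json
import sys

# Hypothetical feed of known-bad (package, version) pairs; a real pipeline
# would pull this from a vulnerability scanner or advisory database.
KNOWN_VULNERABLE = {("openssl-wrapper", "1.0.1e"), ("struts-lib", "2.3.15")}

def gate(manifest_path="dependencies.json"):
    """Fail the CI build (nonzero exit) if any pinned dependency is bad."""
    with open(manifest_path) as f:
        deps = json.load(f)   # e.g. {"openssl-wrapper": "1.0.1e", ...}
    bad = [(n, v) for n, v in deps.items() if (n, v) in KNOWN_VULNERABLE]
    for name, version in bad:
        print(f"BLOCKED: {name} {version} is on the vulnerable list")
    sys.exit(1 if bad else 0)

if __name__ == "__main__":
    gate()
```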
I Can Hardly Contain Myself

Containers. They’re cool. They’re hot. They… wait, what are they exactly? The new developer buzzword is Docker—the name of both the company and the product—which provides a tidy container for an application and all the associated stuff it needs to do its job. The beauty of this approach is that much of the complexity around configuration, supporting libraries, OS support, and the like is abstracted away from users, inside the container. In the same way we treat abstract concepts like ‘compute’ and ‘storage’ as simple quantities from cloud service providers, a Docker container is an abstract run-anywhere unit of ‘application’. Plug it in wherever you want and run it. Most of the promise of virtualization, without most of the overhead or cost. Sure, some old-school developers think it’s the same “write once, crash anywhere” concept Java did so well with 20 years ago, and of course security pros fear containers as the 21st-century Trojan Horse. But containers do offer some security advantages: they wrap accepted versions of software up with secure configuration settings, and they narrowly define how to interact with the container—all of which reduces the dreaded application “threat surface”. You are even likely to find a couple vendors who now deploy a version of their security appliance as a Docker container for virtualized or cloud environments.

All Your Code-base Belong to Us

As cloud services continue to advance, outsourced security services are getting better, faster, and cheaper than your existing on-premise solutions. Last year we saw this at the RSA Conference with anti-malware and security analytics. This year we will see it again with application development. We have already seen general adoption of the cloud for quality assurance testing; now we see services which validate open source bundles, API-driven patching, cloud-based source code scanning, and more dynamic application scanning services. For many, the idea of letting anyone outside your company look at your code—much less upload it to a multi-tenant cloud server—is insane. But lower costs have a way of changing opinions, and the automated, API-driven cloud model fits very well with the direction development teams are pulling.


RSA Conference Guide 2015 Deep Dives: Data Security

Data security is the toughest coverage area to write up this year. It reminds us of those bad apocalypse films, where everyone runs around building DIY tanks and improvising explosives to “save the children,” before driving off to battle the undead hordes—leaving the kids with a couple spoons, some dirt, and a can of corned beef hash. We have long argued for information-centric security—protecting data needs to be an equal or higher priority than defending the infrastructure itself. Thanks to a succession of major breaches, and a country or two treating our corporate intellectual property like a Metallica song during Napster’s heyday, CEOs and Directors now get it: data security matters. It not only matters—it permeates everything we do across the practice of security (except for DDoS). But that also means data security appears in every section of this year’s RSAC Guide. It doesn’t mean anyone has the slightest clue how to stop the hemorrhaging.

Anyone Have a Bigger Hammer?

From secret-stealing APTs to credit-card-munching cybercrime syndicates, our most immediate response is… more network and endpoint security. That’s right—the biggest trends in data security are network and endpoint security. Better firewalls, sandboxes, endpoint whitelisting, and all the other stuff in those two buckets. When a company gets breached, the first step (after hiring an incident response firm to quote in the press release, saying this was a “sophisticated attack”) is to double down on new anti-malware and analytics. It makes sense: that’s how the bad guys most frequently get in. But it also misses the point. Years ago we wrote up something called the “Data Breach Triangle.” A breach requires three things: an exploit (a way in), something to steal (data), and an egress (a way out). Take away any side of that triangle, and there is no breach. But stopping the exploit is probably the hardest, most expensive side to crack—especially because we have spent the last thirty years working on it… unsuccessfully. The vast majority of data security you’ll see at this conference, from presentations to the show floor, will be more of the same stuff we have always seen, just newer and shinier. As if throwing more money at the same failed solutions will really solve the problem. Look—you need network and endpoint security, but doubling down doesn’t seem to be changing the odds. Perhaps a little diversification is in order.

The Cloud Ate My Babies

Data security is still one of the top two concerns we run into when working with clients on cloud projects—the other is compliance. Vendors are listening, so you will see no shortage of banners and barkers offering to protect your data in the cloud. Which is weird, because if you pick a decent cloud provider, the odds are your data is far safer with them than in your self-managed data center. Why? Economics. Cloud providers know they can easily lose vast numbers of customers if they are breached. The startups aren’t always there yet, but the established providers really don’t mess around—they devote far more budget and effort to protecting customer data than nearly any enterprise we have worked with. Really, how many of you require dual authorization to access any data? Exclusively through a monitored portal, with all activity completely audited and two-factor authentication enforced? That’s table stakes for these guys. Before investing in extra data security for the cloud, ask yourself what you are protecting it from. If the data is regulated you may need extra assurance and logging for compliance. Maybe you aren’t using a major provider. But for most data, in most situations, we bet you don’t need anything too extreme. If a cloud data protection solution mostly protects you from an administrator at your provider, you might want to just give them a fake number.

BYOD NABD

One area trending down is concern over data loss from portable devices. It is hard to justify spending money here when we can find almost no cases of material losses or public disclosures from someone using a properly secured phone or tablet. Especially on iOS, which is so secure the FBI is begging Congress to force Apple to add a back door (we won’t make a joke here—we don’t want to get our editor fired). You will still see it on the show floor, and maybe in a few sessions (probably panels) with a lot of FUD, but we mostly see this being wrapped up into Mobile Device Management and Cloud Security Gateways, and handled by the providers themselves. It’s still on the list—just not a priority.

Encrypt, Tokenize, or Die (well, look for another job)

Many organizations are beginning to realize they don’t need to encrypt every piece of data in their data centers and at cloud providers, but there are still a couple massive categories where you’d better encrypt or you can kiss your job goodbye. Payment data, some PII, and some medical data demand belt and suspenders. What’s fascinating is that we see encryption of this data being pushed up the stack into applications. Whether in the cloud or on-premise, there is increasing recognition that merely encrypting some hard drives won’t cut it. Organizations are increasingly encrypting or tokenizing at the point of collection. Tokenization is generally preferred for existing apps, and encryption for new ones—unless you are looking at payment networks, which use both. You might actually see this more in sessions than on the show floor. While there are some new encryption and tokenization vendors, it is mostly the same names we have been working with for nearly 10 years. Because encryption is hard. Don’t get hung up on different tokenization methods; the security and performance of the token vault itself matters more (a minimal sketch of the idea follows below). Walk in with a list of your programming languages and architectural requirements, because each of these products has very different levels of support for integrating with your projects. The lack of a good SDK in the language you need, or of a REST API, can set you back months.
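Here is that sketch: a toy token vault, our illustration of why the vault is the part that matters. The token is random and carries no mathematical relationship to the original value, so the only way back is a lookup in the vault, which is exactly why its security and performance dominate the design.

```python
import secrets

class TokenVault:
    """Toy token vault. A real vault encrypts stored values, enforces strict
    access control, replicates for availability, and audits every call."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = secrets.token_urlsafe(16)   # random: no math recovers the value
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]           # the vault is the only way back

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                     # safe to store in downstream systems
print(vault.detokenize(token))   # only privileged callers should reach this
```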
Cloud Encryption Gets Funky

Want to…


RSA Conference Guide 2015 Deep Dives: Cloud Security

Before delving into the world of cloud security, we’d like to remind you of a little basic physics. Today’s lesson is on velocity vs. acceleration. Velocity is how fast you are going; acceleration is how fast velocity increases. They affect our perceptions differently. No one thinks much of driving at 60mph. Ride a motorcycle at 60mph, or plunge down a ski slope at 50mph (not that uncommon), and you get a thrill. But accelerate from 0 to 60mph in 2.7 seconds in a sports car (yep, they do that), and you might need new underwear. That’s pretty much the cloud security situation right now. Cloud computing is still the most disruptive force hitting all corners of IT, including security. It has pretty well become a force of nature at this point, and we still haven’t hit the peak. Don’t believe us? That’s cool—not believing in that truck barreling toward you is always a good way to ensure you make it into work tomorrow morning. (Please don’t try that—we don’t want your family to sue us.)

Clouds Everywhere

The most surprising cloud security phenomena are how widely cloud computing has spread, and the increasing involvement of security teams… sort of. Last year we mentioned seeing ever more large organizations dipping their toes into cloud computing; this year it’s hard to find any large organization without some active cloud projects. Including some with regulated data. Companies that told us they wouldn’t use public cloud computing a year or two ago are now running multiple active projects. Not unapproved shadow IT, but honest-to-goodness sanctioned projects. Every one of these cloud consumers also tells us they are planning to move more and more to the cloud over time. Typically these start as well-defined projects rather than move-everything initiatives. A bunch we are seeing involve either data analysis (the cloud is perfect for bursty workloads) or new consumer-facing web projects. We call these “cloud native” projects, because once the customer digs in, they design the architectures with the cloud in mind. We also see some demand to move existing systems to the cloud, but frequently those are projects where the architecture isn’t going to change, so the customer won’t gain the full agility, resiliency, and economic benefits of cloud computing. We call these “cloud tourists”, and consider their projects ripe for failure, because all they typically end up doing is virtualizing already-paid-for hardware, adding the complexity of remote management, and increasing operational costs to manage the cloud environment on top of just as many servers and apps as before. Not that we don’t like tourists. They spend a lot of money. One big surprise is that we are seeing security teams engaging more deeply, quickly, and positively than in past years, when they sat still and watched the cloud rush past. There is definitely a skills gap, but we meet many more security pros who are quickly coming up to speed on cloud computing. The profession is moving past denial and anger, through bargaining (for budget, of course), deep into acceptance and… DevOps. Perhaps we pushed that analogy. But the upshot is that this year we feel comfortable saying cloud security is becoming part of mainstream security. It’s the early edge, but the age of denial and willful ignorance is coming to a close.

Wherever You Go, There You Aren’t

Okay, you get it: the cloud is happening, security is engaging, and now it’s time for some good standards and checklists to keep the auditors happy and get those controls in place. Wait, containers, what? Where did everybody go? Not only is cloud adoption accelerating, so is cloud technology. Encryption in the cloud too complex? That’s okay—Amazon just launched a simple and cheap key management service, fully integrated with their services. Nailed down your virtual server controls for VMware? How well do those work with Docker? Okay, but which networking stack did you pick for your Docker on AWS deployment? It uses a different management structure than your Docker on VMware deployment. Your security vendor finally offers their product as a virtual appliance? Great! How does it work in Microsoft Azure, now that you have moved to a PaaS model where you don’t control network flow? You finally got CloudTrail data into your SIEM? Nice job, but your primary competitor now offers live alerts on streaming API data via Lambda. Got those Chef and Puppet security templates set? Darn—the dev team switched everything to custom images and rollouts via autoscaling groups. None of that makes sense? Too bad—those are all real issues from real organizations.
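To give a flavor of that Lambda pattern, here is a heavily simplified sketch of our own: a function triggered when CloudTrail delivers a log file to S3, flagging a few obviously risky API calls. The watch list is a placeholder, and the alert is just a print; you would wire it to SNS or your SIEM.

```python
import gzip
import json

import boto3

s3 = boto3.client("s3")

# Placeholder watch list: API calls that deserve an immediate look
RISKY_CALLS = {"DeleteTrail", "StopLogging", "AuthorizeSecurityGroupIngress"}

def lambda_handler(event, context):
    """Triggered by S3 object-created notifications for CloudTrail log files."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))
        for entry in trail.get("Records", []):
            if entry.get("eventName") in RISKY_CALLS:
                who = entry.get("userIdentity", {}).get("arn", "unknown")
                print(f"ALERT: {entry['eventName']} by {who}")  # send to SNS/SIEM
```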
Everything is changing so quickly that even vendors trying to keep up are constantly dancing to fit new deployment and operations models. We are past the worst cloudwashing days, but we will still see companies on the floor struggling to talk about new technologies (especially containers), how they offer value beyond the capabilities Amazon, Microsoft, and other major providers have added to their services, and why their products are still necessary under new architectural models. The good news is that not everything lives on the bleeding edge. The bad news is that this rate of change won’t let up any time soon, and the bleeding edge seems to become early mainstream more quickly than it used to. This theme is more about what you won’t see than what you will. SIEM vendors won’t be talking much about how they compete with a cloud-based ELK stack, encryption vendors will struggle to differentiate themselves from Amazon’s Key Management Service, AV vendors sure won’t be talking about immutable servers, and network security vendors won’t really talk about the security value of their products in a properly designed cloud architecture. On the upside, not everyone lives on the leading edge. If you attend the cloud security sessions, or talk to people actively engaged in cloud projects, you will see some really interesting, practical ways of managing security for cloud computing that don’t rely on ‘traditional’ approaches.

Bump in…


RSA Conference Guide 2015 Deep Dives: Overview

With lots of folks (including us) at the RSA Conference this week, we figured we’d post the deep dives we wrote for the RSAC Guide, to give those of you not attending a taste of what you’re missing. We haven’t figured out how to relay the feel of the meat market at the W bar after 10 PM, the ear-deafening bass at any number of conference parties, or the sharp pain in your gut after a night of being way too festive. We’re working on that for next year’s guide.

Overview

While everyone likes to talk about the “security market” or the “security industry,” in practice security is more a collection of markets, tools, and practices all competing for our time, attention, and dollars. Here at Securosis we have a massive coverage map (just for fun, which doesn’t say much, now that you’ve experienced some of our sense of humor), which includes seven major focus areas (such as network, endpoint, and data security) and dozens of different practice and product segments. It’s always fun to whip out the picture when vendors are pitching us on why CISOs should spend money on their single-point defense widget instead of the hundreds of other things on the list, many of them mandated by auditors using standards that get updated once a decade or so. In the next sections we dig into the seven major coverage areas and detail what you can expect to see, based in large part on what users and vendors have been talking to us about for the past year. You’ll notice a bunch of overlap: Cloud and DevOps, for example, affect multiple coverage areas in different ways, and cloud is also a coverage area of its own. When you walk into the conference, you are likely there for a reason. You already have some burning issues you want to figure out, or specific project needs. These sections tell you what to expect and what to look for. The information is based in many cases on dozens of vendor briefings and discussions with security practitioners. We try to help illuminate which questions to ask, where to watch for snake oil, and which key criteria to focus on, based on successes and failures from your peers who tried it first.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless they are used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.