Securosis Research

Network-based Threat Detection: Prioritizing with Context

During speaking gigs we ask how many in the audience actually get through their to-do list every day. Usually we get one or two jokers in the crowd between jobs, or maybe just trying to troll us a bit. But nobody in a security operations role gets everything done every day. So the critical success factor is making sure you get the right things done, and don’t burn time on activities that neither reduce risk nor contain attack damage. Underpinning this series is the fact that prevention inevitably fails at some point. Along with a renewed focus on network-based detection, that means your monitoring systems will detect a bunch of things. But which alerts are important? Which represent active adversary activity? Which are just noise to be ignored? Figuring out which is which is where you need the most help. To use a physical security analogy, a security fence will alert regularly. But you need to figure out whether it’s a confused squirrel, a wayward bird, a kid on a dare, or the offensive maneuver of an adversary. Just looking at the alert won’t tell you much. But if you add other details and additional context into your analysis, you can figure out which is which. The stakes for getting this right are high: the postmortems of many recent high-profile breaches indicated that alerts did fire – in some cases multiple times from multiple systems – but the organizations failed to take action… and suffered the consequences.

Our last post listed network telemetry you could look for to indicate potential malicious activity. Let’s say you like the approach laid out in that post and decide to implement it in your own monitoring systems. So you flip the switch and the alerts come streaming in. Now comes the art: separating signal from noise and narrowing your focus to the alerts that matter and demand immediate attention. You do this by adding context to general network telemetry and then using an analytics engine to crunch the numbers. To add context you can leverage both internal and external information. At this point we’ll focus on internal data, because you already have it and can implement it right away. Our next post will tackle external data, typically accessible via a threat intelligence feed.

Device Behavior

You start by figuring out what’s important – not all devices are created equal. Some store very important data. Some are issued to employees with access to important data, typically executives. But not all devices present a direct risk to your organization, so categorizing them provides the first filter for prioritization. You can use the following hierarchy to kickstart your efforts:

  • Critical devices: Devices with access to protected information and/or particularly valuable intellectual property should bubble to the top. Fast. If a device on a protected and segmented network shows indications of compromise, that’s bad and needs to be dealt with immediately. Even if the device is dormant, traffic on a protected network that looks like command and control constitutes smoke, and you need to act quickly to ensure any fire doesn’t spread. Or enjoy your disclosure activities…
  • Active malicious devices: If you see device behavior which indicates an active attack (perhaps reconnaissance, moving laterally within the environment, blasting bits at internal resources, or exfiltrating data), that’s your next order of business. Even if the device isn’t considered critical, if you don’t deal with it promptly the compromise might find an exploitable hole to a higher-value device and move laterally within the organization. So investigate and remediate these devices next.
  • Dormant devices: These devices at some point showed behavior consistent with command and control traffic (typically staying in communication with a C&C network), but aren’t doing anything malicious at the moment. Given the number of other fires raging in your environment, you may not have time to remediate these dormant devices immediately.

These priorities are fairly coarse but should be sufficient (a toy sketch of this triage logic appears at the end of this post). You don’t want a complicated multi-tier rating system which is too involved to use on a daily basis. Priorities should be clear. If you have a critical device showing malicious activity, that’s a red alert. Critical devices that throw alerts need to be investigated next, and then non-critical devices showing malicious activity. Finally, after all the other stuff is done, you can get around to dealing with devices you’re pretty sure are compromised. Of course this last bucket might show malicious activity at any time, so you still need to watch it. The question is when you remediate. This categorization helps, but within each bucket you likely have multiple devices. So you still need additional information and context to make decisions.

Who and Where

Not all employees are created equal either. Another source of context is user identity, and there are a bunch of groups you need to pay attention to. The first is people with elevated privileges, such as administrators and others with entitlements to manage devices that hold critical information. They can add, delete, and change accounts and access rules on the servers, and manipulate data. They have access to tamper with logs, and can basically wreck an environment from the inside. There are plenty of examples of rogue or disgruntled administrators making a real mess, so when you see an administrator’s device behaving strangely, it should bubble up to the top of your list. The next group of folks to watch closely are executives with access to financials, company strategy, and other key intellectual property. These users are attacked most frequently via phishing and other social engineering, so they need to be watched closely – even trained, they aren’t perfect. This may trigger organizational backlash – some executives get cranky when they are monitored. But that’s not your problem, and without this kind of context it’s hard to do your job. So dig in and make your case to the executives for why it’s important. As you look for indicators that devices are connecting to a C&C server or performing reconnaissance, you are protecting the organization, and
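To make the triage hierarchy above concrete, here is a minimal sketch of how it might be expressed in code. Everything in it (the category names, weights, and Alert fields) is an illustrative assumption, not any particular product’s schema.

```python
# A toy sketch of the triage hierarchy described above. Category names,
# weights, and the Alert fields are illustrative assumptions, not any
# product's schema.
from dataclasses import dataclass

DEVICE_CRITICALITY = {"critical": 3, "standard": 1}    # access to protected data?
USER_PRIVILEGE = {"admin": 2, "executive": 2, "standard": 1}
BEHAVIOR_URGENCY = {"active_malicious": 3, "c2_dormant": 1}

@dataclass
class Alert:
    device: str
    device_class: str   # "critical" or "standard"
    user_role: str      # "admin", "executive", or "standard"
    behavior: str       # "active_malicious" or "c2_dormant"

def triage_score(alert: Alert) -> int:
    """Higher score = investigate sooner."""
    return (DEVICE_CRITICALITY[alert.device_class]
            * BEHAVIOR_URGENCY[alert.behavior]
            + USER_PRIVILEGE[alert.user_role])

alerts = [
    Alert("db01", "critical", "admin", "active_malicious"),      # red alert
    Alert("lt42", "standard", "standard", "c2_dormant"),         # watch list
    Alert("lt07", "standard", "executive", "active_malicious"),
]
for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{a.device}: score {triage_score(a)}")
```

A real deployment would derive device class and user role from asset inventory and directory data rather than hardcoding them, but the ordering logic is the same.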


Incite 5/6/2015: Just Be

I’m spent after the RSAC. By Friday I have been on for close to a week. It’s nonstop, from the break of dawn until the wee hours of the morning. But don’t feel too bad – it’s one of my favorite weeks of the year. I get to see my friends. I do a bunch of business. And I get a feel for how close our research is to reflecting the larger trends in the industry. But it’s exhausting. When the kids were smaller I would fly back early Friday morning and jump back into the fray of the Daddy thing. I had very little downtime and virtually no opportunity to recover. Shockingly enough, I got sick or cranky or both. So this year I decided to do it differently. I stayed in SF through the weekend to unplug a bit.

I made no plans. I was just going to flow. There was a little bit of structure. Maybe I would meet up with a friend and get out of town to see some trees (yes, Muir Woods was on the agenda). I wanted to catch up with a college buddy who isn’t in the security business, at some point. Beyond that, I’d do what I felt like doing, when I felt like doing it. I wasn’t going to work (much) and I wasn’t going to talk to people. I was just going to be.

Turns out my friend wasn’t feeling great, so I was solo on Friday after the closing keynote. I jumped in a Zipcar and drove down to Pacifica. Muir Woods would take too long to reach, and I wanted to be by the water. Twenty minutes later I was sitting by the ocean. Listening to the waves. The water calms me, and I needed that. Then I headed back to the city and saw an awesome comedian was playing at the Punchline. Yup, that’s what I did. He was funny as hell, and I sat in the back with my beer and laughed. I needed that too. Then on Saturday I did a long run on the Embarcadero. Turns out a cool farmer’s market is there on Saturdays. So I got some fruit to recover from the run, went back to the hotel to clean up, and then headed back to the market. I sat in a cafe and watched people. I read a bit. I wrote some poetry. I did a ZenTangle. I didn’t speak to anyone (besides a quick check-in with the family) for 36 hours after RSA ended. It was glorious. Not that I don’t like connecting with folks. But I needed a break. Then I had an awesome dinner with my buddy and his wife, and flew back home the next day in good spirits, ready to jump back in.

I’m always running from place to place. Always with another meeting to get to, another thing to write, or another call to make. I rarely just leave myself empty space with no plans to fill it. It was awesome. It was liberating. And I need to do it more often. This is one of the poems I wrote, watching people rushing around the city.

Rush

You feel them before you see
They have somewhere to be
It’s very important
Going around you as quickly as they can.
They are going places.

Then another
And another
And another
Constantly rushing
But never catching up.
They are going places.

Until they see
that right here
is the only place they need to be.

– MSR, 2015

–Mike

Photo credit: “65/365: be. [explored]” originally uploaded by It’s Holly

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It’s Not My Fault!
  • January 26 – 2015 Trends
  • January 15 – Toddler
  • December 18 – Predicting the Past
  • November 25 – Numbness
  • October 27 – It’s All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can’t Handle the Gartner

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Network-based Threat Detection
  • Looking for Indicators
  • Overcoming the Limits of Prevention

Applied Threat Intelligence
  • Building a TI Program
  • Use Case #3, Preventative Controls
  • Use Case #2, Incident Response/Management
  • Use Case #1, Security Monitoring
  • Defining TI

Network Security Gateway Evolution
  • Introduction

Recently Published Papers
  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • Securing Enterprise Applications
  • Secure Agile Development
  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Future of Security

Incite 4 U

Threat intel still smells like poop? I like colorful analogies. I’m sad that my RSAC schedule doesn’t allow me to see some of the more interesting sessions by my smart friends. But this blow-by-blow of Rick Holland’s Threat Intelligence is Like Three-Day Potty Training makes me feel like I was there. I like the maturity model, and know many large organizations invest a boatload of cash in threat intel, and as long as they take a process-centric view (as Rick advises) they can get great value from that investment. But I’m fixated on the not-Fortune-500. You know, organizations with a couple folks on the security team


RSAC wrap-up. Same as it ever was.

The RSA conference is over, and it put up some massive numbers (for security). But what does it all mean? Can all 450 vendors on the show floor possibly survive? Do any of them add value? Do bigger numbers mean we are any better off than last year? And how can we possibly balance being an industry, community, and profession simultaneously? Not that we answer any of that, but we can at least keep you entertained for 13 minutes. Watch or listen:


Network-based Threat Detection: Looking for Indicators

Now that RSAC is behind us, it’s time to get back to our research agenda. So we pick up Network-based Threat Detection where we left off. In that first post, we made the case that math and context are the keys to detecting attacks from network activity, given that we cannot totally prevent endpoint compromise. Attackers always leave a trail on the network. So we need to collect and analyze network telemetry to determine whether the communication between devices, and the content of those communications, is legitimate or warrants additional investigation. Modern malware relies heavily on the network to initiate the connection between the device and the controller, download attacks, perform automated beaconing, etc. Fortunately these activities show a deterministic pattern, which enables you to pinpoint malicious activity and identify compromised systems. Attackers bet they will be able to obscure their communications within the tens of billions of legitimate packets traversing enterprise networks on any given day, and that defenders’ general lack of sophistication will prevent them from identifying the giveaway patterns. But if you can identify the patterns, you have an opportunity to detect attacks.

Command and Control

Command and Control (C&C) traffic is communication between compromised devices and botnet controllers. Once the device executes malware (by whatever means) and the dropper is installed, the device searches for its controller to receive further instructions. There are two main ways to identify C&C activity: traffic destination, and patterns in the communication between devices and controllers.

The industry has been using IP reputation for years to identify malicious destinations on the Internet. Security researchers evaluate each IP address and determine whether it is ‘good’ or ‘bad’ based on activity they observe across a massive network of sensors. IP reputation turns out to be a pretty good indicator that an address has been used for malicious activity at some point. Traffic to known-bad destinations is definitely worth checking out, and perhaps even blocking. But malicious IP addresses (and even domains) are not active for long, as attackers cycle through addresses and domains frequently. Attackers also use legitimate sites as C&C nodes, which can leave innocent (but compromised) sites with a bad reputation. So the downside to blocking traffic to sites with bad reputations is the risk of irritating users who want to use the legitimate site. Our research shows increasing comfort with blocking, because the great majority of addresses with bad reputations have legitimately earned them.

Keep in mind that IP reputation is not sufficient to identify all the C&C traffic on your network – many malicious sites don’t show up on IP reputation lists. So next look for other indications of malicious activity on the network, which depend on how compromised devices find their controllers. With the increasing use of domain generation algorithms (DGA), malware doesn’t need to be hard-coded with specific domains or IP addresses – instead it cycles through a set of domains according to its DGA, searching for a dynamically addressed C&C controller; the addresses cycle daily. This provides tremendous flexibility, ensuring newly compromised devices can establish contact despite frequent domain takedowns and C&C interruptions. But these algorithms look for controllers in a predictable pattern, making frequent DNS calls in specific patterns. So DNS traffic analysis has become critical for identification of C&C traffic, along with monitoring packet streams.
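As an illustration of why DGA-generated names stand out, here is a toy sketch of one common heuristic: algorithmically generated labels tend toward higher character entropy than human-chosen names. The domains and the threshold are invented for demonstration; real detectors combine many signals (query volume, NXDOMAIN rates, timing of lookups).

```python
# A toy illustration of one DGA heuristic: algorithmically generated labels
# tend toward higher character entropy than human-chosen names. The domains
# and the 3.5-bit threshold are invented for demonstration; real detectors
# combine many signals (query volume, NXDOMAIN rates, timing of lookups).
import math
from collections import Counter

def label_entropy(domain: str) -> float:
    """Shannon entropy (bits per character) of the leftmost DNS label."""
    label = domain.split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

for d in ["google.com", "securosis.com", "xkqzlrvwpmthg.info"]:
    e = label_entropy(d)
    verdict = "suspicious" if e > 3.5 else "ok"
    print(f"{d}: entropy={e:.2f} -> {verdict}")
```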
Outliers

Identifying C&C traffic before the compromised device becomes a full-fledged member of the botnet is optimal. But if you miss that window, once the device is part of the botnet you can look for indications that it is being used as part of an attack chain. You do this by looking for outliers: devices acting atypically. Does this sound familiar? It should – anomaly detection has been used to find attackers for over a decade, typically using NetFlow. You profile normal traffic patterns for users on your network (source/destination/protocol), and then look for situations where traffic varies outside your baseline and exceeds tolerances. Network-based anomaly detection was reasonably effective, but as adversaries got more sophisticated, detection needed to dig more deeply into traffic. Deep packet inspection and better analytics enabled detection offerings to apply context to traffic. Attack traffic tends to occur in a few phases:

  • Command and Control: As described above, devices communicate with botnet controllers to join the botnet.
  • Reconnaissance: After compromising the device and gaining access via the botnet, attackers communicate with internal devices to map the network and determine the most efficient path to their target.
  • Lateral Movement: Once the best path to the target is identified, attackers systematically move through your network to approach their intended target, compromising additional devices along the way.
  • Exfiltration: Once the target device is compromised, the attacker needs to move the data from the target device to outside the network. This can be done using tunnels, staging servers, and other means to obfuscate the activity.

Each of these phases includes patterns you can look for to identify potential attacks. But this still isn’t a smoking gun – at some point you will need to apply additional context to understand intent. Analyzing content in the communication stream is the next step in identifying attacks.

Content

One way to glean more context from network traffic is to understand what is being moved. With deep packet inspection and session reassembly, you can perform file-based analysis on content as well. Then you can compare against baselines to look for anomalies in the movement of content within your network.

  • File size: For example, if a user moves 2GB of traffic over a 24-hour period when they normally move no more than 100MB, that should trigger an alert (a minimal version of this check is sketched below). Perhaps it’s nothing, but it should be investigated.
  • Time of day: Similarly, if a user doesn’t normally work in the middle of the night, but does so two days in a row by themselves, that could indicate malicious activity. Of course it might just be a big project, but it bears investigation.
  • Simple DLP: You can fingerprint files to look for sensitive content, or regular expressions which match account numbers or other protected data. Of course that isn’t full DLP-style classification and analysis. But it could flag something malicious without the
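Here is the minimal version of the volume check referenced in the File size bullet above. The baseline history, user names, and three-sigma threshold are assumptions for illustration only.

```python
# A minimal version of the volume-outlier check from the File size bullet.
# Baseline history, user names, and the three-sigma threshold are assumptions
# for illustration only.
from statistics import mean, stdev

history_mb = {"alice": [80, 95, 60, 110, 75], "bob": [40, 35, 55, 30, 45]}
today_mb = {"alice": 2048, "bob": 50}   # alice moved ~2GB today

def is_outlier(user: str, observed_mb: float, sigma: float = 3.0) -> bool:
    baseline = history_mb[user]
    return observed_mb > mean(baseline) + sigma * stdev(baseline)

for user, moved in today_mb.items():
    if is_outlier(user, moved):
        print(f"ALERT: {user} moved {moved}MB today, far above their baseline")
```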


RSA Conference Guide 2015 Deep Dives: Security Management

Last year Big Data was all the rage at the RSAC in terms of security monitoring and management. So the big theme this year will be… (drum roll, please)… Big Data. Yes, it’s more of the same, though we will see security big data called a bunch of different things—including insider threat detection, security analytics, situational awareness, and probably two or three more where we have no idea what they even mean. But they all have one thing in common: math. That’s right—remember those differential equations you hated in high school and college? Be glad that helpful freshman in AP Calculus actually liked math. Those are the folks who will save your bacon, because their algorithms are helping detect attackers doing their thing.

Detecting the Insider

It feels a bit like we jumped into a time machine and ended up back in 1998. Or 2004. Or 2008. You remember—that year when everyone was talking about insiders and how they were robbing your organization blind. We still haven’t solved the problem, because it’s hard. So every 4-5 years the vendors get tired of using black-masked external-attacker icons in their corporate PowerPoint decks, and start talking about catching insiders instead. This year will be no different—you will hear a bunch of noise at RSAC about the insider threat. The difference this year is that the math folks I mentioned earlier have put their algorithms to work on finding anomalous behaviors inside your network, and profiling what insiders typically do while they are robbing you blind. You might even be able to catch them before Brian Krebs calls to tell you all about your breach. These technologies and companies are pretty young, so you will see them on the outside rings of the conference hall and in the RSAC Innovation Sandbox, but they are multiplying like [name your favorite pandemic]. It won’t be long before the big SIEM players and other security management folks (yes, vulnerability management vendors, we’re looking at you) start talking about users and insiders to stay relevant. Don’t you just love the game?

Security Analytics: Bring Your PhD

The other epiphany many larger organizations had over the past few years is that they already have a crapton of security data. You can thank PCI-DSS for making them collect and aggregate all sorts of logs over the past few years. Then the forensics guys wanted packets, so you started capturing those too. Then you had the bright idea to put everything into a common data model. Then what? Your security management strategy probably looked something like this:

  1. Collect data.
  2. Put all data in one place.
  3. ???
  4. Detect attacks.

This year a bunch of vendors will be explaining how they can help you with step 3, using their analytical engines to answer questions you didn’t even know to ask. They’ll use all sorts of buzzwords like ElasticSearch and Cassandra, talk about how cool their Hadoop is, and convince you they have data scientists thinking big thoughts about how to solve the security problem, and that their magic platform will do just that. Try not to laugh too hard at the salesperson. Then find an SE and have them walk you through setup and tuning of the analytics platform. Yes, it needs to be tuned, regardless of what the salesperson tells you. How do you start? What data do you need? How do you refine queries? How do you validate a potential attack? Where can you send data for more detailed forensic analysis? If the SE has on dancing shoes, the product probably isn’t ready yet—unless you have your own group of PhDs you can bring to the table.
Make sure the analytics tool actually saves time, rather than just creating more detailed alerts you don’t have time to handle. We’re not saying PhDs aren’t cool—we think it’s great that math folks are rising in prominence. But understand that when your SOC analyst wants you to call them a “Data Scientist”, it’s so they can get a 50% raise for joining another big company.

Forensication

We have finally reached the point as an industry where practitioners no longer believe they can stop all attacks. We knew that story was less real than the tooth fairy, but way too many folks actually believed it. Now that ruse is done, so we can focus on the fact that at some point soon you will be investigating an incident. So you will have forensics professionals onsite, trying to figure out what actually happened. The forensicators will ask to see your data. It’s good you have a crapton of security data, right? But you will increasingly be equipping your internal team for the first few steps of the investigation. So you will see a lot of forensics tools at the RSAC, and forensics companies repositioning as security shops. They will show their forensics hooks within your endpoint security products and your network security controls. Almost every vendor will have something to say about forensics. Mostly because it’s shiny.

Even better, most vendors are fielding their own incident response services. It is a popular belief that if a company can respond to an incident, they are well positioned to sell product at the back end of the remediation/recovery. Of course that creates a bull market for folks with forensics skills. These folks can jump from company to company, driving up compensation quickly. They are on the road 5 days a week anyway, if not more, so why would they care which company is on their business cards? This wave of focus on forensics, and the resulting innovation, has been a long time coming. The tools are still pretty raw and cater to overly sophisticated customers, but we see progress. This progress is absolutely essential – there aren’t enough skilled forensics folks, so you need a way to make your less skilled folks more effective with tools and automation. Which is a theme throughout the RSAC-G this year.

SECaaS or SUKRaaS

The other downside to an overheated security environment is


RSA Conference Guide 2015 Deep Dives: Identity and Access Management

No Respect

Identity is one of the more difficult topics to cover in our yearly RSAC Guide, because identity issues and trends don’t grab headlines. Identity and Access Management vendors tend to be light-years ahead of most customers. You may be thinking “Passwords and Active Directory: what else do I need to know?” which is pretty typical. IAM responsibilities sit in a no-man’s land between security, development, and IT… and none of them wants ownership. Most big firms now have a CISO, CIO, and VP of Engineering, but when was the last time you heard of a VP of Identity? Director? No, we haven’t either. That means customers—and cloud providers, as we will discuss in a bit—are generally not cognizant of important advancements. But those identity systems are used by every employee and customer. Unfortunately, despite ongoing innovation, much of what gets attention is somewhat backwards.

The Cutting Edge—Role-Based Access Control for the Cloud

Roles, roles, and more roles. You will hear a lot about Role-Based Access Control from the ‘hot’ product vendors in cloud, mobile management, and big data. It’s ironic—these segments may be cutting-edge in most ways, but they are decidedly backwards for IAM. Kerberos, anyone? The new identity products you will hear most about at this year’s RSAC show—Azure Active Directory and AWS Access Control Lists—are things most of the IAM segment has been trying to push past for a decade or more. We are afraid to joke about it, because an “identity wizard” to help you create ACLs “in the cloud” could become a real thing. Despite RBAC being outdated, it keeps popping up unwanted, like that annoying paper clip, because customers are comfortable with it and even look for those types of solutions. Attribute-Based Access Control, Policy-Based Access Control, real-time dynamic authorization, and fully cloud-based IDaaS are all impressive advances, available today. Heck, even Jennifer Lawrence knows why these technologies are important—her iCloud account was apparently hacked because there was no brute-force replay checker to protect her. Regardless, these vendors sit unloved, on the outskirts of the convention center floor. (For a toy illustration of how attribute-based checks differ from role checks, see the sketch below.)
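Here is the promised sketch. The policy, attributes, and function names are invented for illustration and do not correspond to any vendor’s implementation; the point is only that role checks answer one question, while attribute-based checks can weigh device state, location, and time as well.

```python
# A toy contrast of the two models discussed above. The policy, attributes,
# and names are invented for illustration, not any vendor's implementation.
from datetime import datetime

def rbac_allows(user_roles: set, required_role: str) -> bool:
    # RBAC asks one question: does the user hold the required role?
    return required_role in user_roles

def abac_allows(subject: dict, resource: dict, context: dict) -> bool:
    # An attribute-based policy can also weigh device state and time of day.
    return (subject["department"] == resource["owner_dept"]
            and context["device_managed"]
            and 8 <= context["hour"] <= 18)

subject = {"department": "finance"}
resource = {"owner_dept": "finance"}
context = {"device_managed": True, "hour": datetime.now().hour}

print("RBAC decision:", rbac_allows({"finance_user"}, "finance_user"))
print("ABAC decision:", abac_allows(subject, resource, context))  # depends on the hour
```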
Standard Bearer

We hear it all the time from identity vendors: “Standards-based identity instills confidence in customers,” but the vendors cannot seem to agree on a standard. OpenID vs. SAML vs. OAuth, oh my! Customers do indeed want standards-based identity, but they fall asleep when this debate starts. There are dozens of identity standards in the CSA Guidance, but which one is right for you? They all suffer from the same issue: they are all filled with too many options. As a result interoperability is a nightmare, especially for SAML. Getting any two SAML implementations to talk to each other demands engineering time from both product teams. IAM in general, and SAML specifically, beautifully illustrate Tanenbaum’s quote: “The nice thing about standards is that you have so many to choose from.” Most customers we speak with don’t really care which standard is adopted—they just want the industry to pick one and be done with it. Until then they will focus on something more productive, like firewall rules and password resets. They are waiting for it to be over so they can push a button to interoperate—you do have an easy button, right?

Good Dog, Have a Biscuit

We don’t like to admit it, but in terms of mobile payments and mobile identity, the U.S. is a laggard. Many countries we consider ‘backwards’ were using mobile payments as their principal means to move money long before Apple Pay was announced. But these solutions tend to be carrier-specific; U.S. adoption was slowed by turf wars between banks, carriers, and mobile device vendors. Secure elements or HCE? Generic wallets or carrier payment infrastructure? Tokens or credit cards? Who owns the encryption keys? Do we need biometrics, and if so which are acceptable? Each player has a security vision which depends on, and only supports, its own business model. Other than a shared desire to discontinue the practice of sending credit card numbers to merchants over SSL, there has been little agreement. For several years now the FIDO Alliance has been working on an open and interoperable set of standards to promote mobile security. This standard does not just establish a level playing field for identity and security vendors—it defines a user experience to make mobile identity and payments easier. So the FIDO standard is becoming a thing. It enables vendors to hook into the framework, and provide their solutions as part of the ecosystem. You will notice a huge number of vendors on the show floor touting support for the FIDO standard. Many demos will look pretty similar, because they all follow the same privacy, security, and ease-of-use standards, but all oars are finally pulling in the same direction.


RSA Conference Guide 2015 Deep Dives: Endpoint Security

What you’ll see at the RSAC in terms of endpoint security is really more of the same. Advanced attacks blah, mobile devices blah blah, AV-vendor hatred blah blah blah. Just a lot of blah… But we are still recovering from the advanced attacker hangover, which made painfully clear that existing approaches to preventing malware just don’t work. So a variety of alternatives have emerged to do it better. Check out our Advanced Endpoint and Server Protection paper to learn more about where the technology is going. None of these innovations has really hit the mainstream yet, so it looks like the status quo will prevail again in 2015. But the year of endpoint security disruption is coming—perhaps 2016 will be it…

Whitelisting becomes Mission: POSsible

Since last year’s RSAC many retailers have suffered high-profile breaches. But don’t despair—if your favorite retailer hasn’t yet sent you a disclosure notice, it will arrive with your new credit card just as soon as they discover the breach. And why are retailers so easy to pop? Mostly because many Point-of-Sale (POS) systems use modern operating systems like Embedded Windows XP. These devices are maintained using state-of-the-art configuration and patching infrastructures—except when they aren’t. And they all have modern anti-malware protection, unless they don’t have even ineffective signature-based AV. POS systems have been sitting ducks for years. Quack quack. Clearly this isn’t an effective way to protect devices that capture credit cards and handle money, which happen to run on circa-1998 operating systems. So retailers, and everyone else dealing with kiosks and POS systems, have gotten the whitelisting bug, big time. And this bug doesn’t send customer data to carder exchanges in Eastern Europe. What should you look for at the RSAC? Basically a rep who isn’t taking an order from some other company.
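For readers unfamiliar with the mechanics, here is a minimal sketch of the whitelisting idea on a fixed-function device: hash every binary on a known-good image, then deny execution of anything else. The files here are simulated for illustration; a real enforcement agent hooks process creation in the OS rather than being called manually.

```python
# A minimal sketch of application whitelisting: hash every binary on a
# known-good image, then deny execution of anything else. Files are simulated
# here; a real agent would hook process creation in the OS.
import hashlib
import tempfile

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def make_file(content: bytes) -> str:
    """Stand-in for a binary on disk (illustration only)."""
    f = tempfile.NamedTemporaryFile(delete=False)
    f.write(content)
    f.close()
    return f.name

# "Provisioning": hash the binaries present on the known-good POS image.
good_app = make_file(b"pos register app v1")
ALLOWLIST = {sha256_of(good_app)}

def may_execute(path: str) -> bool:
    """Default deny: only binaries whose hash is on the allowlist may run."""
    ok = sha256_of(path) in ALLOWLIST
    print(("ALLOW" if ok else "BLOCK"), path)
    return ok

may_execute(good_app)                    # known-good binary: allowed
may_execute(make_file(b"ram scraper"))   # anything else: blocked
```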
Calling Dr. Quincy…

We highlighted a concept last year which we call endpoint monitoring: a method for collecting detailed and granular telemetry from endpoints, to facilitate forensic investigation after a device compromise. As it turned out, that actually happened—our big research friends who shall not be named have dubbed this function ETDR (Endpoint Threat Detection and Response). And ETDR is pretty shiny nowadays. As you tour the RSAC floor, pay attention to ease of use. The good news is that some of these ETDR products have been acquired by big companies, so they will have a bunch of demo pods in their huge booths. If you want to check out a startup you might have to wait—you can only fit so much in a 10’ by 10’ booth, and we expect these technologies to garner a lot of interest. And since the RSAC has outlawed booth babes (which we think is awesome), maybe the crowded booths will feature cool and innovative technology rather than spandex and leather. While you are there you might want to poke around a bit, to figure out when your ETDR vendor will add prevention to their arsenal, so you can finally look at alternatives to EPP. Speaking of which…

Don’t look behind the EPP curtain…

The death of endpoint protection suites has been greatly exaggerated. Which continues to piss us off, to be honest. In what other business can you be largely ineffective, cost too much, and slow down the entire system, and still sell a couple billion dollars worth of product annually? The answer is none, but the reason companies still spend the money is compliance. If EPP were a horse we would have shot it a long time ago. So what is going to stop the EPP hegemony? We need something that can protect devices and drive down costs, without killing endpoint performance. It will take a vendor with some cojones. Companies offering innovative solutions tend to be content positioning them as complementary to EPP suites. Then they don’t have to deal with things like signature engines (to keep QSAs who are stuck in 2006 happy) or full disk encryption. Unfortunately cojones will be in short supply at the 2015 RSAC—even in a heavily male-dominated crowd. But at some point someone will muster up the courage to acknowledge the EPP emperor has been streaking through RSAC for 5 years, and finally offer a compelling package that satisfies compliance requirements. Can you do us a favor on the show floor? Maybe drop some hints that you would be happy to divert the $500k you plan to spend renewing EPP this year to something that doesn’t suck instead.

Mobility gets citizenship…

As we stated last year, managing mobile devices is quite the commodity now. The technology keeps flying off the shelves, and MDM vendors continue to pay lip service to security. But last year devices were not really integrated into the organization’s controls and defenses. That has started to change. Thanks to a bunch of acquisitions, most MDM technology is now controlled by big IT shops, so we will start to see the first linkages between managing and protecting mobile devices, and the rest of the infrastructure. Leverage is wonderful, especially now when we have such a severe skills gap in security. Now that mobile devices are full citizens, what does that even mean? It means MDM environments are now expected to send alerts to the SIEM and integrate with the service/operations infrastructure. They need to speak enterprise language and play nice with other enterprise systems. Even though there have been some high-profile mobile app problems (such as providing access to a hotel chain’s customer database), there still isn’t much focus on assessing apps and ensuring security before apps hit an app store. We don’t get it. You might check out folks assessing mobile apps (mostly for privacy issues, rather than mobile malware) and report back to your developers so they can ignore you. Again.

IoT: Not so much

It wouldn’t be an RSAC-G if we didn’t do at least a little click baiting. Mostly just to annoy people who are hoping for all sorts of groundbreaking research on protecting the Internet of Things (IoT). At


RSA Conference Guide 2015 Deep Dives: Network Security

We had a little trouble coming up with a novel and pithy backdrop for what you will see in the Network Security space at RSAC 2015. We wonder if this year we will see the first IoT firewall, because hacking thermostats and refrigerators has made threat models go bonkers. The truth is that most customers are trying to figure out what to do with the new next-generation devices they already bought. We shouldn’t wonder why the new emperor looks a lot like the old emperor, when we dress our new ruler (NGFW) up in clothes (rules) that look so similar to our old-school port- and protocol-based rulesets. But the fact is there will be some shiny stuff at this year’s conference, largely focused on detection. This is a very productive and positive trend—for years we have been calling for a budget shift away from ineffective prevention technologies toward detecting and investigating attacks. We see organizations with mature security programs making this shift, but far too many others continue to buy the marketing hyperbole: “of course you can block it.” Given that no one really knows what ‘it’ is, we have a hard time understanding how we can make real progress in blocking more stuff in the coming year. Which means you need to respond faster and better. Huh, where have we heard that before?

Giving up on Prevention…

Talking to many practitioners over the past year, I felt like I was seeing a capitulation of sorts. There is finally widespread acknowledgement that it is hard to reliably prevent attacks. And we are not just talking about space alien attacks coming from a hacking UFO. It’s hard enough for most organizations to deal with Metasploit. Of course we are not going all Jericho on you, advocating giving up on prevention on the network. Can you hear the sigh of relief from all the QSAs? Especially the ones feeling pressure to push full isolation of protected data (as opposed to segmentation) during assessments. Most of those organizations cannot even manage one network, so let’s have them manage multiple isolated environments. That will work out just great. There will still be a lot of the same old same old—you still need a firewall and IPS to enforce both positive (access control) and negative (attack) policies on your perimeter. You just need to be realistic about what they can block—even shiny NGFW models. Remember that network security devices are not just for blocking attacks. We still believe segmentation is your friend—you will continue to deploy those boxes, both to keep the QSAs happy and to make sure critical data is separated from not-so-critical data. And you will also hear all about malware sandboxes at the RSAC this year. Again. Everyone has a sandbox—just ask them. Except some don’t call them sandboxes. I guess they are discriminating against kids who like sand, in today’s distinctly un-politically-correct world. They might be called malware detonation devices or services. That sounds shinier, no? But if you want to troll the reps on the show floor (and who doesn’t?), get them to debate an on-premise approach versus a cloud-based approach to detonation. It doesn’t really matter which side of the fence they are on, but it’s fun seeing them get all red in the face when you challenge them. Finally, you may hear some lips flapping about data center firewalls. Basically just really fast segmentation devices.
If they try to convince you they can detect attacks on a 40Gbps data center network, and flash their hot-off-the-presses NSS Labs results, ask what happens when they turn on more than 5 rules at a time. If they bother you, say you plan to run SSL on your internal networks and the device needs to inspect all traffic. But make sure an EMT is close by, as that strategy has been known to cause aneurysms in sales reps.

To Focus on Detection…

So if many organizations have given up trying to block all attacks, what the hell are they supposed to do? Spend tons of money on more appliances to detect the attacks they missed at the perimeter, of course. And the security industrial complex keeps chugging along. You will see a lot of focus on network-based threat detection at the show. We ourselves are guilty of fanning the flames a bit with our new research on that topic. The fact is, the technology is moving forward. Analyzing network traffic patterns, profiling and baselining normal communications, and then looking for stuff that’s not normal gives you a much better chance of finding compromised devices on your networks. Before your new product schematics wind up in some nondescript building in Shanghai, Chechnya, Moscow, or Tel Aviv. What’s new is the depth of analysis possible with today’s better analytics. Booth personnel will bandy about terms like “big data” and “machine learning” as if they understand what they even mean. But honestly, baselines aren’t based only on NetFlow records or DNS queries any more—they can now incorporate very granular metadata from network traffic, including identity, content, frequency of communication, and various other attributes that get math folks all hot and bothered. (A toy sketch of this kind of multi-attribute baselining appears at the end of this post.) The real issue is making sure these detection devices can work with your existing gear, and aren’t just a flash in the pan about to be integrated as features in your perimeter security gateway. Okay, we would be pulling your leg if we said any aspect of detection won’t eventually become an integrated feature of other network security gear. That’s just the way it goes. But if you really need to figure out what’s happening on your network, visit these vendors on the floor.

While Consolidating Functions…

What hasn’t changed is that big organizations think they need separate devices for all their key functions. Or has it? Is best of breed (finally) dead? Well, not exactly, but it has more to do with politics than technology. Pretty much all the network security players have technologies that allow authorized traffic and block attacks. Back when category
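As promised above, here is a toy sketch of multi-attribute flow baselining: learn each host’s normal peer set from historical flow records, then flag a sudden fan-out to many new internal hosts, a classic reconnaissance tell. The flow records and threshold are invented for illustration.

```python
# A toy sketch of multi-attribute baselining: learn each host's normal peer
# set from historical flow records, then flag sudden fan-out to many new
# internal hosts (a classic reconnaissance tell). Records and the threshold
# are invented for illustration.
from collections import defaultdict

def build_baseline(flows):
    """flows: iterable of (src, dst, dport) tuples from history."""
    peers = defaultdict(set)
    for src, dst, _dport in flows:
        peers[src].add(dst)
    return peers

history = [("10.0.0.5", "10.0.1.2", 443), ("10.0.0.5", "10.0.1.3", 443)]
peers = build_baseline(history)

def flag_recon(src, new_flows, fanout_limit=10):
    new_peers = {dst for s, dst, _ in new_flows if s == src} - peers[src]
    if len(new_peers) > fanout_limit:
        print(f"ALERT: {src} contacted {len(new_peers)} new internal hosts (possible recon)")

# Today the same host suddenly sweeps an internal /24 on port 445:
today = [("10.0.0.5", f"10.0.2.{i}", 445) for i in range(1, 40)]
flag_recon("10.0.0.5", today)
```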


RSA Conference Guide 2015 Deep Dives: Application Security

Coming Soon to an Application Near You: DevOps

For several years you have been hearing about the wonders of Agile development, and the wondrous things it has done for software development companies. Agile development isn’t a product – it is a process change, a new way for developers to communicate and work together. It’s effective enough to have attracted almost every firm we speak with away from traditional waterfall development. Now there is another major change on the horizon, called DevOps. Like Agile it is mostly a process change. Unlike Agile it is more operationally focused, relying heavily on tools and automation for success. That means not just your developers will be Agile – your IT and security teams will be, too! The reason DevOps is important at the RSA Conference – the reason you will hear a lot about it – is that it offers a very clear and positive effect on security. Perhaps for the first time, we can automate many security requirements – embedding them into the daily development, QA, and operational tasks we already perform. DevOps typically goes hand in hand with continuous integration and continuous deployment. For software development teams this means code changes go from idea to development to live production in hours rather than months. Sure, users are annoyed the customer portal never works the same way twice, but IT can deliver new code faster than sales and marketing wanted it, which is itself something of a miracle. Deployment speed makes a leap in the right direction, but the new pipeline provides an even more important foundation for embedding security automation into processes. It’s still early, but you will see the first security tools which have been reworked for DevOps at this year’s RSA Conference.
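To make the “security embedded in the pipeline” idea concrete, here is a minimal sketch of a CI gate that fails the build when a pinned dependency matches a known-vulnerable version. The file format, package names, and vulnerability list are assumptions for illustration, not a real scanner’s data or API.

```python
# A minimal sketch of a CI security gate: fail the build if a pinned dependency
# matches a known-vulnerable version. The requirements lines, package names,
# and vulnerability list are assumptions for illustration, not a real
# scanner's data or API.
import sys

KNOWN_BAD = {("examplelib", "1.2.0"): "CVE-XXXX-NNNNN (placeholder ID)"}

def check_requirements(lines):
    """Return human-readable findings for any pinned, known-bad versions."""
    findings = []
    for line in lines:
        if "==" not in line:
            continue
        name, version = line.strip().split("==", 1)
        if (name, version) in KNOWN_BAD:
            findings.append(f"{name}=={version}: {KNOWN_BAD[(name, version)]}")
    return findings

# In a real pipeline these lines would come from requirements.txt.
reqs = ["examplelib==1.2.0", "otherlib==4.1.3"]
problems = check_requirements(reqs)
if problems:
    print("Build blocked by security gate:")
    for p in problems:
        print(" -", p)
    sys.exit(1)   # a non-zero exit code fails the CI job
```

The design point is the non-zero exit code: anything the pipeline can run and fail on becomes an enforceable security requirement instead of a checklist item.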
I Can Hardly Contain Myself

Containers. They’re cool. They’re hot. They… wait, what are they exactly? The new developer buzzword is Docker – the name of both the company and the product – which provides a tidy container for applications and all the associated stuff an application needs to do its job. The beauty of this approach comes from hiding much of the complexity around configuration, supporting libraries, OS support, and the like – all nicely abstracted away from users within the container. In the same way we use abstract concepts like ‘compute’ and ‘storage’ as simple quantities with cloud service providers, a Docker container is an abstract run-anywhere unit of ‘application’. Plug it in wherever you want and run it. Most of the promise of virtualization, without most of the overhead or cost. Sure, some old-school developers think it’s the same “write once, crash anywhere” concept Java did so well with 20 years ago, and of course security pros fear containers as the 21st-century Trojan Horse. But containers do offer some security advantages: they wrap an accepted version of software up with secure configuration settings, and narrowly define how to interact with the container – all of which reduces the dreaded application “threat surface”. You are even likely to find a couple vendors who now deploy a version of their security appliance as a Docker container for virtualized or cloud environments.

All Your Code-base Belong to Us

As cloud services continue to advance, outsourced security services are getting better, faster, and cheaper than your existing on-premise solutions. Last year we saw this at the RSA Conference with anti-malware and security analytics. This year we will see it again with application development. We have already seen general adoption of the cloud for quality assurance testing; now we see services which validate open source bundles, API-driven patching, cloud-based source code scanning, and more dynamic application scanning services. For many, the idea of letting anyone outside your company look at your code – much less upload it to a multi-tenant cloud server – is insane. But lower costs have a way of changing opinions, and the automated, API-driven cloud model fits very well with the direction development teams are pulling.


RSA Conference Guide 2015 Deep Dives: Data Security

Data security is the toughest coverage area to write up this year. It reminds us of those bad apocalypse films, where everyone runs around building DIY tanks and improvising explosives to “save the children,” before driving off to battle the undead hordes—leaving the kids with a couple spoons, some dirt, and a can of corned beef hash. We have long argued for information-centric security—protecting data needs to be an equal or higher priority than defending the infrastructure itself. Thanks to a succession of major breaches, and a country or two treating our corporate intellectual property like a Metallica song during Napster’s heyday, CEOs and Directors now get it: data security matters. It not only matters—it permeates everything we do across the practice of security (except for DDoS). That also means data security appears in every section of this year’s RSAC Guide. But it doesn’t mean anyone has the slightest clue how to stop the hemorrhaging.

Anyone Have a Bigger Hammer?

From secret-stealing APTs to credit-card-munching cybercrime syndicates, our most immediate response is… more network and endpoint security. That’s right—the biggest trends in data security are network and endpoint security. Better firewalls, sandboxes, endpoint whitelisting, and all the other stuff in those two buckets. When a company gets breached, the first step (after hiring an incident response firm to quote in the press release, saying this was a “sophisticated attack”) is to double down on new anti-malware and analytics. It makes sense. That’s how the bad guys most frequently get in. But it also misses the point. Years ago we wrote up something called the “Data Breach Triangle.” A breach requires three things: an exploit (a way in), something to steal (data), and an egress (a way out). Take away any side of that triangle, and there is no breach. But stopping the exploit is probably the hardest, most expensive side to crack—especially because we have spent the last thirty years working on it… unsuccessfully. The vast majority of data security you’ll see at this conference, from presentations to the show floor, will be more of the same stuff we have always seen, but newer and shinier. As if throwing more money at the same failed solutions will really solve the problem. Look—you need network and endpoint security, but doubling down doesn’t seem to be changing the odds. Perhaps a little diversification is in order.

The Cloud Ate My Babies

Data security is still one of the top two concerns we run into when working with clients on cloud projects—the other is compliance. Vendors are listening, so you will see no shortage of banners and barkers offering to protect your data in the cloud. Which is weird, because if you pick a decent cloud provider, the odds are your data is far safer with them than in your self-managed data center. Why? Economics. Cloud providers know they can easily lose vast numbers of customers if they are breached. The startups aren’t always there, but the established providers really don’t mess around—they devote far more budget and effort to protecting customer data than nearly any enterprise we have worked with. Really, how many of you require dual authorization to access any data? Exclusively through a monitored portal, with all activity completely audited and two-factor authentication enforced? That’s table stakes for these guys. Before investing in extra data security for the cloud, ask yourself what you are protecting it from.
If the data is regulated you may need extra assurance and logging for compliance. Maybe you aren’t using a major provider. But for most data, in most situations, we bet you don’t need anything too extreme. If a cloud data protection solution mostly protects you from an administrator at your provider, you might want to just give them a fake number.

BYOD NABD

One area trending down is concern over data loss from portable devices. It is hard to justify spending money here when we can find almost no cases of material losses or public disclosures from someone using a properly secured phone or tablet. Especially on iOS, which is so secure the FBI is begging Congress to force Apple to add a back door (we won’t make a joke here—we don’t want to get our editor fired). You will still see it on the show floor, and maybe in a few sessions (probably panels) with a lot of FUD, but we mostly see this being wrapped up into Mobile Device Management and Cloud Security Gateways, and handled by the providers themselves. It’s still on the list—just not a priority.

Encrypt, Tokenize, or Die (well, look for another job)

Many organizations are beginning to realize they don’t need to encrypt every piece of data in data centers and at cloud providers, but there are still a couple massive categories where you’d better encrypt or you can kiss your job goodbye. Payment data, some PII, and some medical data demand belt and suspenders. What’s fascinating is that we see encryption of this data being pushed up the stack into applications. Whether in the cloud or on-premise, there is increasing recognition that merely encrypting some hard drives won’t cut it. Organizations are increasingly encrypting or tokenizing at the point of collection. Tokenization is generally preferred for existing apps, and encryption for new ones. Unless you are looking at payment networks, which use both. You might actually see this more in sessions than on the show floor. While there are some new encryption and tokenization vendors, it is mostly the same names we have been working with for nearly 10 years. Because encryption is hard. Don’t get hung up on different tokenization methods; the security and performance of the token vault itself matter more (a toy sketch of the vault concept follows this post). Walk in with a list of your programming languages and architectural requirements, because each of these products has very different levels of support for integrating with your projects. The lack of a good SDK in the language you need, or a REST API, can set you back months.

Cloud Encryption Gets Funky

Want to
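Since the token vault gets less attention than it deserves, here is the promised toy sketch of vault-based tokenization: the token is a random surrogate with no mathematical relationship to the original value, and the mapping exists only inside the vault. Real products add access control, auditing, and format preservation; the class below is illustrative only.

```python
# A toy sketch of vault-based tokenization: the token is a random surrogate
# with no mathematical relationship to the original value, and the mapping
# exists only inside the vault. Real products add access control, auditing,
# and format preservation; this class is illustrative only.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_value = {}   # the only place the real value exists

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._token_to_value[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # In practice this path is tightly restricted and fully audited.
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")   # tokenize at the point of collection
print("Downstream systems store only:", token)
print("Vault lookup for authorized use:", vault.detokenize(token))
```

This is why the vault’s security and performance matter more than the token format: every detokenization passes through it, and it holds the entire mapping.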


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.