Securosis

Research

RSA Conference Guide 2014 Key Theme: APT0

It’s that time of year. The security industry is gearing up for the annual pilgrimage to San Francisco for the RSA Conference. For the fifth year, your pals at Securosis are putting together a conference guide to give you some perspective on what to look for and how to make the most of your RSA experience. We will start with a few key themes for the week, and then go into deep dives on all our coverage areas. The full guide will be available for download next Wednesday, and we will post an extended Firestarter video next Friday discussing the Guide. Without further ado, here is our first key theme.

APT0

Last year the big news at the RSA Conference was Mandiant’s research report outing APT1 and providing a new level of depth on advanced attacks. It seemed like every vendor at the show had something to say about APT1, but the entire conference was flowing in Mandiant’s wake. They should have called the report “How to increase your value by a couple hundred million in 12 short months”, but that’s another story for another day.

In 2014 Edward Snowden put on his Kevin Mandia costume and identified the clear predecessor to the APT1 group. That’s right: the NSA is APT0. Evidently the NSA was monitoring and hacking things back when the APT1 hackers were in grade school. We expect most vendors will be selling spotlights and promises to cut through the fog of the NSA disclosures. But getting caught up in FUD misses the point: Snowden just proved what we have always known. It is much harder to build things than to break them.

Our position on APT0 isn’t much different than on APT1. You cannot win against a nation-state. Not in the long term, anyway. Rather than trying to figure out how much public trust in security tools has eroded, we recommend you focus on what matters: how to protect information in your shop. Are you sure an admin (like Snowden) can’t access everything and exfiltrate gigabytes of critical data undetected? If not, you have some work to do.
Keep everything in context at the show. Never forget that the security marketing machine is driven by high-profile breaches as a catalyst for folks who don’t know what they are doing to install the latest widget selling the false hope of protection. And the RSA Conference is the biggest security marketing event of the year. So Snowden impersonators will be the booth babes of 2014.


Quick Wins with TISM

After making the case for threat intelligence (TI), and combining it with some ideas about how security monitoring (SM) is evolving – based both on customer needs and technology evolution – there is clear value in integrating TI into your SM efforts. But all that stuff is still conceptual. How can you actually apply this integrated process to shorten the window between compromise and detection? How can you get a quick win for the integration of TI and SM to build some momentum for your efforts? Finally, how do you ensure you can turn that quick win into sustainable leverage, producing increased accuracy and better prioritization of alerts from the SM platform?

Let’s say you work for a big retailer with thousands of stores. You do tens of millions of transactions a month, and have credit card data for tens of millions of customers. Your organization is a high-profile target, so you have spent a bunch on security controls. Part of being a large Tier 1 merchant, at least from a PCI-DSS standpoint, is that the assessors are there pretty much every quarter. You can play the compensating control fandango to a point (and you do), but senior management understands the need to avoid becoming the latest object lesson on data breaches. So you get a bunch of resources and spend a bunch of money, with the clear responsibility to make sure private data remains private.

But this is also the real world, and your organization is a big company. They have technology assets all over the place, and employees come and go, especially around the holidays. They all have access to the corporate network, and no matter how much time you spend educating those folks they will make mistakes. This long preamble is just to illustrate that you get it. Your odds of keeping attackers out range between nil and less than nil. So security monitoring will be a key aspect of your plan to detect attackers.
The good news is that you already aggregate a bunch of log data, mostly because you need to (thanks, PCI!). You can build on this foundation and use TI to start looking for attack patterns and other suspicious activity that others have seen, to give you early warning of imminent attacks.

Low Hanging Fruit

With any new technology project you want to show value quickly and then parlay it into sustainable advantage. So let’s focus on obvious stuff that can yield the quick win you need. There are a couple areas to look at, but the path of least resistance tends to be finding devices that are already compromised and remediating them quickly. A couple fairly reliable TI sources can yield this kind of information quickly, as detailed earlier in this series. Once you identify the suspicious device, as discussed in The TI + SM Process, you need to collect more detailed data from it. Optimally you get deep endpoint (or server) telemetry including all file activity, registry and other configuration values, and a forensic capture of the device. To provide a full view of what’s going on you also want to capture the network traffic to and from it. Armed with that kind of information you can search for specific malware indicators and other clear manifestations of attack.

Baselines

At this point you have likely found some devices with issues, and acted decisively to remediate the issues and contain the damage. Once the actively compromised stuff is dealt with you can get a little more strategic about what to look for. Since you have been collecting data for a while (thanks again, PCI!), you can now build what should be a reasonable baseline of normal activity for these devices. Of course you will remove the data from compromised devices, and you will then be able to set alerts on activity that is not normal. That’s Security Monitoring 201 – not really novel.
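The baseline idea above can be sketched in a few lines. This is a minimal illustration, not a product recipe – the per-device metric (hourly outbound bytes) and the device names are hypothetical:

```python
from statistics import mean, stdev

def build_baseline(history):
    """Compute per-device mean and standard deviation of a metric
    (e.g., outbound bytes per hour) from historical samples."""
    return {dev: (mean(vals), stdev(vals)) for dev, vals in history.items()}

def is_anomalous(baseline, device, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    away from the device's historical mean."""
    mu, sigma = baseline[device]
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical hourly outbound-byte counts for one device
history = {"web-01": [1200, 1100, 1300, 1250, 1150]}
baseline = build_baseline(history)
print(is_anomalous(baseline, "web-01", 9800))  # far outside normal range
print(is_anomalous(baseline, "web-01", 1150))  # within normal range
```

Real monitoring platforms baseline many metrics at once and revisit the baseline constantly, but the core alerting logic is this simple comparison.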
In this scenario you can accrue a lot of extra value by integrating TI into the process, by analyzing activity around devices that are no longer acting normally. You don’t have the smoking gun of seeing a device participating in a botnet, or sending traffic to known bad sites, but it isn’t acting normally so it warrants attention. Of course a lot of current malware isn’t easy to find, but you can leverage TI to look for emerging attacks.

Let’s make this a little more tangible by going back to our example of the very large retailer. As with most big companies, you have a bunch of externally facing devices that serve up a variety of things to customers. Not all of them have access to mission critical data (unless you screw up your network segmentation), so they may not get much scrutiny or monitoring focus. But you can still track traffic in and out of them to see if or when they start acting strangely. If you see an externally facing web server start sending traffic to a bunch of other devices within its network segment, that is probably suspicious. Normally, these servers only send traffic across the internal network to the application server farm that provides the data for their applications. Communicating with other internal hosts is not normal, so you start pulling some additional telemetry from the devices and capturing their traffic.

What integrating TI enables you to do with that now-suspicious device is to search for indicators and other behavior patterns you weren’t looking for. Any security monitoring platform is limited to looking for things you tell it to look for. With TI integrated, you could identify traffic heading to an emerging botnet. Maybe you will be able to find new files and/or folders associated with a little-known malware kit. Since you haven’t seen this stuff before, you don’t know to look for it. But your TI provider is much more likely to see it, and they can tip off your system about what to look for.
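To make the TI-matching step concrete, here is a toy sketch of checking device telemetry against indicator sets from a feed. The indicator values and telemetry fields are invented for illustration:

```python
# Hypothetical indicator sets from a TI feed, matched against
# telemetry pulled from a suspicious device.
TI_BAD_IPS = {"203.0.113.45", "198.51.100.7"}       # emerging botnet C&C nodes
TI_BAD_PATHS = {"/tmp/.x0/loader", "C:\\Users\\Public\\svch0st.exe"}

def match_indicators(connections, file_paths):
    """Return which TI indicators appear in the device's telemetry."""
    hits = {
        "network": sorted(TI_BAD_IPS & set(connections)),
        "files": sorted(TI_BAD_PATHS & set(file_paths)),
    }
    # Keep only indicator categories that actually matched
    return {kind: matches for kind, matches in hits.items() if matches}

telemetry_conns = ["10.0.0.5", "203.0.113.45"]
telemetry_files = ["/usr/bin/python3", "/tmp/.x0/loader"]
print(match_indicators(telemetry_conns, telemetry_files))
```

The point is the direction of the search: the feed tells you what to look for, and the suspicious device's telemetry is what you search.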
Without TI, when you identify a suspicious device, you are basically back to shooting in the dark. You have a device


Security’s Future: Implications for Cloud Providers

This is the fifth post in a series on the future of information security, which will be the basis for a white paper. You can leave feedback here as a blog comment, or even submit edits directly over at GitHub, where we are running the entire editing process in public. This is the initial draft, and I expect to trim the content by about 20%. The entire outline is available. See the first post, second post, third post, and fourth post.

Implications for Cloud and Infrastructure Providers

Security is (becoming) a top-three priority for cloud and infrastructure providers of all types. For providers with enterprise customers and those which handle regulated data, security is likely the first priority. As important as it is to offer compelling and innovative services to customers, a major security failure has the potential to wipe out clients’ ability to trust you – even before legal liabilities. If you handle information with value on behalf of your customers, you are, for nearly all intents and purposes, a form of bank.

Trust Is a Feature

Enterprises can’t transition to the cloud without trust. Their stakeholders and regulators simply won’t support it. Consumers may, to a point, but only the largest and most popular properties can withstand the loss of trust induced by a major breach. There are five corollaries:

Customers need a baseline of security features to migrate to the cloud. This varies by the type of service, but features such as federated identity, data security, and internal access controls are table stakes.

Cloud providers need a baseline of inherent security to withstand attacks, as well as customer-accessible security features to enable clients to implement their security strategies. You are a far bigger target than any single customer, and will experience advanced attacks on a regular basis.
Centralizing resources alters the economics of attacks, inducing bad guys to incur higher costs for the higher rewards of access to all a cloud provider’s customers at once.

Users own their data. Even if it isn’t in a contract or SLA, if you affect their data in a way they don’t expect, that breaks trust just as surely as a breach.

Multitenancy isolation failures are a material risk for you and your customers. If a customer’s data is accidentally exposed to another customer, that is, again, a breach of security and trust. People have been hunting multitenancy breaks in online services for years, and criminals sign up for services just to hunt for more.

Trust applies to your entire cloud supply chain. Many cloud providers also rely on other providers. If you own the customer trust relationship, you are responsible for any failures in the digital supply chain.

It isn’t enough to simply be secure – you also need to build trust and enable your customers’ security strategies.

Building Security in

The following features and principles allow customers to align their security needs with cloud services, and are likely to become competitive differentiators over time:

Support APIs for security functions. Cloud platforms and infrastructure shouldn’t merely expose APIs for cloud features, but also for security functions such as identity management, access control, network security, and whatever else falls under customer control. This enables security management and integration. Don’t require customers to log into your web portal to manage security – although you also need to expose all those functions in your user interface.

Provide logs and activity feeds. Extensive logging and auditing are vital for security – especially for monitoring the cloud management plane. Expose as much data, as close to real time, as possible. Transparency is a powerful security enabler provided by centralization of services and data.
Feeds should be easily consumable in standard formats such as JSON.

Simplify federated identity management. Federation allows organizations to extend their existing identity and access management to the cloud while retaining control. Supporting federation for dozens or hundreds of external providers is daunting, with entire products available to address that issue. Make it as easy as possible for your customers to use federation, and stick to popular standards that integrate with existing enterprise directories. Also support the full lifecycle of identity management, from creation and propagation to changing roles and retirement.

Extend security to endpoints. We have focused on the cloud, but mobility is marching right alongside, and is just as disruptive. Endpoint access to services and data – including apps, APIs, and web interfaces – should support all security features equally across platforms. Clearly document security differences across platforms, such as the different data exposure risks on an iOS device vs. an Android device vs. a laptop.

Encrypt by default. If you hold customer data, encrypt it. Even if you don’t think encryption adds much security, it empowers trust and supports compliance. Then allow customers who want it to control their own keys. This is technically and operationally complex, but it becomes a competitive differentiator, and can eliminate many data security concerns and smooth cloud adoption.

Maintain security table stakes. Different types of services handling different types of workflows and data tend to share a security baseline. Fall below it and customers will be drawn to the competition. For example, IaaS providers must include basic network security at a per-server level. SaaS providers need to support different user roles for access management. These change over time, so watch your competition and listen to customer requests.

Document security.
Provide extensive documentation for both your internal security controls and the security features customers can use. Have them externally audited and assessed. This allows customers to know where the security lines are drawn, where they need to implement their own security controls, and how. Pay particular attention to documenting the administrator controls that restrict your staff’s ability to see customer data, and audit when they do.

These are by no means all the security features and capabilities cloud providers should consider, but they strongly align with the way we see enterprise security evolving.

Conclusion

Once, many years ago, I had the good fortune to enjoy a few beers with futurist and science fiction author Bruce Sterling. That night he told me that his job as a futurist is to try to
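Returning to the activity-feed recommendation above, here is a rough sketch of consuming a JSON feed and flagging sensitive management-plane actions. The feed format, field names, and action names are all hypothetical, not any specific provider’s API:

```python
import json

# A hypothetical management-plane activity feed, as a cloud provider
# might expose it in JSON (all fields are illustrative).
raw = '''[
  {"time": "2014-02-05T14:02:11Z", "actor": "admin@example.com",
   "action": "key.rotate", "resource": "kms/key-7", "source_ip": "198.51.100.9"},
  {"time": "2014-02-05T14:05:40Z", "actor": "svc-backup",
   "action": "object.read", "resource": "bucket/payroll", "source_ip": "203.0.113.20"}
]'''

SENSITIVE_ACTIONS = {"key.rotate", "acl.change"}

def flag_sensitive(feed_json):
    """Parse the feed and return entries performing sensitive actions."""
    return [e for e in json.loads(feed_json) if e["action"] in SENSITIVE_ACTIONS]

for entry in flag_sensitive(raw):
    print(entry["time"], entry["actor"], entry["action"])
```

A customer who can consume a feed like this can wire cloud management-plane activity straight into their existing monitoring pipeline, which is exactly the transparency benefit described above.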


Incite 2/5/2014: Super Dud

I’m sure long-time Incite readers know I am a huge football fan. I have infected the rest of my family, and we have an annual Super Bowl party with 90+ people to celebrate the end of each football season. I have laughed (when Baltimore almost blew a 20 point lead last year), cried (when the NY Giants won in 2011), and always managed to have a good time. Even after I stopped eating chicken wings cold turkey (no pun intended), I still figure out a way to pollute my body with pizza, chips, and Guinness. Of course, lots of Guinness. It’s not like I need to drive home or anything.

This year I was very excited for the game. The sentimental favorite, Peyton Manning, was looking to solidify his legacy. The upstart Seahawks with the coach who builds his players up rather than tearing them down. The second-year QB who everyone said was too short. The refugee wide receiver from the Pats, with an opportunity to make up for the drop that gave the Giants the ring a few years ago. So many story lines. Such a seemingly evenly matched game. #1 offense vs. #1 defense. Let’s get it on!

I was really looking forward to hanging on the edge of my seat as the game came down to the final moments, like the fantastic games of the last few years. And then the first snap of the game flew over Peyton’s head. Safety for the Seahawks. 2-0 after 12 seconds. It went downhill from there. Way downhill.

The wives and kids usually take off at halftime because it’s a school night. But many of the hubbies stick around to watch the game, drink some brew, and mop up whatever desserts were left by the vultures of the next generation. But not this year. The place cleared out during halftime, and I’m pretty sure it wasn’t in protest at the Chili Peppers parading around with no shirts. The game was terrible. Those sticking around for the second half seemed to figure Peyton would make a run. It took 12 seconds to dispel that myth, as Percy Harvin took the second half kick-off to the house. It was over.
I mean really over. But it’s the last football game of the year, so I watched until the end. Maybe Richard Sherman would do something to make the game memorable. But that wasn’t to be, either. He was nothing but gracious in the interviews. WTF?

Overall it was a forgettable Super Bowl. The party was great. My stomach and liver hated me the next day, as is always the case. And we had to deal with Rich being cranky because his adopted Broncos got smoked. But it’s not all bad. Now comes the craziness leading up to the draft, free agency, and soon enough training camp. It makes me happy that although football is gone, it’s not for long. –Mike

Photo credit: “Mountain Dew flavoured Lip Balm and Milk Duds!!!” originally uploaded by Jamie Moore

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
The Future of Information Security
What it means (Part 3)
Six Trends Changing the Face of Security
A Disruptive Collision
Introduction

Leveraging Threat Intelligence in Security Monitoring
The Threat Intelligence + Security Monitoring Process
Revisiting Security Monitoring
Benefiting from the Misfortune of Others

Reducing Attack Surface with Application Control
Use Cases and Selection Criteria
The Double Edged Sword

Advanced Endpoint and Server Protection
Assessment
Introduction

Newly Published Papers
Eliminating Surprises with Security Assurance and Testing
What CISOs Need to Know about Cloud Computing
Defending Against Application Denial of Service
Security Awareness Training Evolution
Firewall Management Essentials
Continuous Security Monitoring
API Gateways
Threat Intelligence for Ecosystem Risk Management
Dealing with Database Denial of Service
Identity and Access Management for Cloud Services

Incite 4 U

Scumbag Pen Testers: Check out the Chief Monkey’s dispatch detailing pen testing chicanery. These shysters cut and pasted from another report and used the findings as a means to try to extort additional consulting and services from the client. Oh, man. The Chief has some good tips about how to make sure you aren’t suckered by these kinds of scumbags. I know a bunch of this stuff should be pretty obvious, but clearly an experienced and good CISO got taken by these folks. And make sure you pay the minimum amount up front, and the rest based on results. – MR

Scumbags develop apps too: We seem to be on a scumbag theme today, so this is a great story from Barracuda’s SignNow business about how they found a black hat app developer trying to confuse the market and piggyback on SignNow’s brand and capabilities. Basically: copy an app, release a crappy version of it, confuse buyers by ripping off the competitor’s positioning and copy, and then profit. SignNow sent them a cease and desist letter (gotta love those lawyers) and the bad guys did change the name of the app.
But who knows how much money they made in the meantime. Sounds a lot like a tale as old as time… – MR

He was asking for it: As predicted, and with total consistency, the PCI Security Standards Council has once again blamed the victim, defended the PCI standard, and assured the public that nothing is wrong here. In an article at bankinfosecurity.com, Bob Russo of the SSC says: “As the most recent industry forensic reports indicate, the majority of the breaches happening are a result of some kind of breakdown in security basics – poor implementation, poor maintenance of controls. And the PCI standards [already] cover these security controls”. Well, it’s all good, right? Except nobody is capable of meeting the standard consistently, and all these breaches are against PCI Certified organizations. But nothing wrong with the standard – it’s the victim’s fault. You


TISM: The Threat Intelligence + Security Monitoring Process

As we discussed in Revisiting Security Monitoring, there has been significant change on the security monitoring (SM) side, including the need to analyze far more data sources at much higher scale than before. One of the emerging data sources is threat intelligence (TI), as detailed in Benefiting from the Misfortune of Others. Now we need to put these two concepts together, to detail the process of integrating threat intelligence into your security monitoring process. This integration can yield far better and more actionable alerts from your security monitoring platform, because the alerts are based on what is actually happening in the wild.

Developing Threat Intelligence

Before you can leverage TI in SM, you need to gather and aggregate the intelligence in a way that can be cleanly integrated into the SM platform. We have already mentioned four different TI sources, so let’s go through them and how to gather the information.

Compromised Devices: When you talk about actionable information, a clear indication of a compromised device is the most valuable intelligence – a proverbial smoking gun. There are a bunch of ways to conclude that a device is compromised. The first is by monitoring network traffic and looking for clear indicators of command and control traffic originating from the device, such as the frequency and content of DNS requests that might show a domain generation algorithm (DGA) being used to connect to botnet controllers. Monitoring traffic from the device can also show files or other sensitive data leaving it, indicating exfiltration or (via traffic dynamics) a remote access trojan. Another approach, which does not require on-premises monitoring, involves penetrating the major bot networks to monitor botnet traffic, in order to identify member devices – another smoking gun.
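As a rough illustration of the DGA point above, a crude lexical check flags long, high-entropy domain labels. Real detectors combine many more signals (NXDOMAIN rates, query timing, trained lexical models); this only sketches the idea, and the thresholds are arbitrary:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in a string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_dga(domain, entropy_threshold=3.5, min_len=12):
    """Crude DGA heuristic: a long, high-entropy leftmost label.
    Illustrative only - real detection needs far more signals."""
    label = domain.split(".")[0]
    return len(label) >= min_len and shannon_entropy(label) > entropy_threshold

print(looks_dga("securosis.com"))           # ordinary name
print(looks_dga("x7f3kq9zp2lm8vw4.info"))   # DGA-looking label
```

A monitoring platform would apply a check like this across the DNS request stream, then weigh hits together with request frequency and destination reputation before alerting.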
Malware Indicators: As we described in Malware Analysis Quant, you can build a lab and do both static and dynamic analysis of malware samples to identify specific indicators of how the malware compromises devices. This is obviously not for the faint of heart; thorough and useful analysis requires significant investment, resources, and expertise.

Reputation: IP reputation data (usually delivered as a list of known bad IP addresses) can trigger alerts, and may even be used to block outbound traffic headed for bad networks. You can also alert and monitor on the reputations of other resources – including URLs, files, domains, and even specific devices. Of course, reputation scoring requires visibility into a large amount of traffic – a significant chunk of the Internet – to observe useful patterns in emerging attacks.

Given the demands of gathering sufficient information to analyze, and the challenge of detecting and codifying appropriate patterns, most organizations look to a commercial provider to develop and deliver this threat intelligence as a feed that can be directly integrated into security monitoring platforms. This lets internal security folks spend their time figuring out the context of the TI to make alerts and reports more actionable. Internal security folks also need to validate TI on an ongoing basis, because it ages quickly. For example, C&C nodes typically stay active for hours rather than days, so TI must be similarly fresh to be valuable.

Evolving the Monitoring Process

Now armed with a variety of threat intelligence sources, you need to take a critical look at your security monitoring process to figure out how it needs to change to accommodate these new data sources. First let’s turn back the clock to revisit the early days of SIEM. A traditional SIEM product is driven by a defined ruleset to trigger alerts, but that requires you to know what to look for – before it arrives.
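To illustrate the freshness point above – C&C indicators going stale in hours – a TI consumer might expire feed entries aggressively rather than alerting on old data. This is a toy sketch with invented indicator data; the 24-hour window is an arbitrary example:

```python
from datetime import datetime, timedelta

# Hypothetical reputation feed: bad IP -> time the indicator was published.
# C&C nodes go stale in hours, so expire entries aggressively.
FEED = {
    "203.0.113.45": datetime(2014, 2, 5, 9, 0),
    "198.51.100.7": datetime(2014, 2, 3, 9, 0),   # days old, likely stale
}

def active_indicators(feed, now, max_age=timedelta(hours=24)):
    """Keep only indicators fresh enough to be worth acting on."""
    return {ip for ip, seen in feed.items() if now - seen <= max_age}

now = datetime(2014, 2, 5, 15, 0)
fresh = active_indicators(FEED, now)
print(fresh)  # only the recently published indicator survives
```

The right expiry window is itself a judgment call per indicator type – file hashes age far more slowly than C&C IPs – which is part of the ongoing validation work described above.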
Advanced attacks cannot really be profiled ahead of time, so you cannot afford to count on knowing what to look for. Moving forward, you need to think differently about how to monitor. We continue to recommend identifying normal patterns on your network with a baseline, and then looking for anomalous deviations. To supplement baselines, watch for emerging indicators identified by TI.

But don’t minimize the amount of work required to keep everything current. Baselines are constantly changing, and your definition of ‘normal’ needs ongoing revision. Threat intelligence is a dynamic data source by definition. So you need to look for new indicators and network traffic patterns in near real time, for any hope of keeping up with hourly changes to C&C nodes and malware distribution sites. Significant automation is required to ensure your monitoring environment keeps pace with attackers, and successfully leverages available resources to detect attacks.

The New Security Monitoring Process Model

At this point it is time to revisit the security monitoring process model developed for our Network Security Operations Quant research. By adding a process for gathering threat intelligence, and integrating TI into the monitoring process, you can more effectively handle the rapidly changing attack surface and improve your monitoring results.

Gather Threat Intelligence

The new addition to the process model is gathering threat intelligence. As described above, there are a number of different sources you can (and should) integrate into the monitoring environment. Here are brief descriptions of the steps:

Profile Adversary: As we covered in the CISO’s Guide to Advanced Attackers, it is critical to understand who is most likely to be attacking you, which enables you to develop a profile of their tactics and methods.
Gather Samples: The next step in developing threat intelligence is to gather a ton of data that can be analyzed to define the specific indicators that comprise the TI feed (IP addresses, malware indicators, device changes, executables, etc.).

Analyze Data and Distill Threat Intelligence: Once the data is aggregated, you can mine the repository to identify suspicious activity and distill it down into information pertinent to detecting the attack. This involves ongoing validation and testing of the TI to ensure it remains accurate and timely.

Aggregate Security Data

The steps involved in aggregating security data are largely unchanged in the updated model. You still need to enumerate which devices to monitor in your environment, scope the kinds of data you will get from them, and define collection policies and correlation rules. Then you can move on to the active step of


TISM: Revisiting Security Monitoring

In our first post on Leveraging Threat Intelligence in Security Monitoring (TISM), Benefiting from the Misfortune of Others, we discussed threat intelligence as a key information source for shortening the window between compromise and detection. Now we need to look at the security monitoring side – basically, how monitoring processes need to adapt in order to leverage threat intelligence. We will start with the general monitoring process first documented in our Network Security Operations Quant research. This is a good starting point – it covers all the gory details involved in monitoring things. Of course its focus is firewalls and IPS devices, but expanding it to include the other key devices which require monitoring isn’t a huge deal.

Network Security Monitoring

Plan

In this phase we define the depth and breadth of our monitoring activities. These are not one-time tasks, but processes to revisit every quarter, as well as after incidents that trigger policy review.

Enumerate: Find all the security, network, and server devices which are relevant to the security of the environment.

Scope: Decide which devices are within scope for monitoring. This involves identifying the asset owner; profiling the device to understand data, compliance, and policy requirements; and assessing the feasibility of collecting data from it.

Develop Policies: Determine the depth and breadth of the monitoring process. This consists of two parts: organizational policies (which devices will be monitored and why) and device & alerting policies (which data will be collected from each device – potentially any network, security, computing, application, or data capture/forensics device).

Policies

For device types in scope, device and alerting policies are developed to detect potential incidents which require investigation and validation. Defining these policies involves a QA process to test the effectiveness of alerts.
A tuning process must be built into alerting policy definitions – over time alert policies need to evolve, as the targets you defend change along with adversaries’ tactics. Finally, monitoring is part of a larger security operations process, so policies are required for workflow and incident response. They define how monitoring information is leveraged by other operational teams, and how potential incidents are identified, validated, and investigated.

Monitor

In this phase monitoring policies are put to use, gathering data and analyzing it to identify areas for validation and investigation. All collected data is stored for compliance, trending, and reporting as well.

Collect: Collect alerts and log records based on the policies defined under Plan. This can be performed within a single-element manager or abstracted into a broader Security Information and Event Management (SIEM) system for multiple devices and device types.

Store: Collected data must be stored for future access, for both compliance and forensics.

Analyze: The collected data is analyzed to identify potential incidents based on the alerting policies defined in the Plan phase. This may involve numerous techniques, including simple rule matching (availability, usage, attack traffic policy violations, time-based rules, etc.) and/or multi-factor correlation based on multiple device types.

Action

When an alert fires in the Analyze step, this phase kicks in to investigate and determine whether further action is necessary.

Validate/Investigate: If and when an alert is generated, it must be investigated to validate the attack. Is it a false positive? Is it a real issue that requires further action? If the latter, move on to the Action/Escalate step. If this was not a ‘good’ alert, do policies need to be tuned?

Action/Escalate: Take action to remediate the issue. This may involve a hand-off or escalation to Operations. After a few alert validations it is time to determine whether policies must be changed or tuned.
This must be a recurring feedback loop rather than a one-time activity – networks and attacks are both dynamic, and require ongoing diligence to ensure monitoring and alerting policies remain relevant and sufficient.

What Has Changed

Security monitoring has undergone significant change over the past few years. We have detailed many of these changes in our Security Management 2.5 series, but we will highlight a few of the more significant aspects. The first is having to analyze much more data, from many more sources; we will go into detail on that later in this post. Next, the kind of analysis performed on the collected data is different. Setting up rules for a security monitoring environment was traditionally a static process – you would build a threat model and then define rules to look for that kind of attack. This approach requires you to know what to look for. For reasonably static attacks this approach can work. Nowadays, planning around static attacks will get you killed. Tactics change frequently and malware changes daily. Sure, there are always patterns of activity that indicate a likely attack, but attackers have gotten proficient at evading traditional SIEMs. Security practitioners need to adapt detection techniques accordingly.

So you need to rely much more on detecting activity patterns, and looking for variations from normal patterns to trigger alerts and investigation. But how can you do that kind of analysis on what could be dozens of disparate data sources? Big data, of course. Kidding aside, that is actually the answer, and it is not overstating to say that big data technologies will fundamentally change how security monitoring is done – over time.

Broadening Data Sources

In Security Management 2.5: Platform Evolution, we explained that to keep pace with advanced attackers, security monitoring platforms must do more with more data. Having more data opens up very interesting possibilities.
You can integrate data from identity stores to trace behavior back to users. You can pull information from applications to look for application misuse, or gaming of legitimate application functionality such as search and shopping carts. You can pull telemetry from server and endpoint devices to search for specific indicators of compromise – which might represent a smoking gun pointing to a successful attack. We have always advocated collecting more data, and monitoring platforms are beginning to develop capabilities to take advantage of additional data for analytics. As we mentioned, security monitoring platforms are increasingly leveraging advanced data stores, supporting very different (and more advanced) analytics to find patterns across many different data sources.
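A minimal sketch of that pattern-deviation approach – baselining normal activity and alerting on significant departures – might look like the following. The traffic numbers and the three-sigma threshold are illustrative assumptions, not taken from any product:

```python
import statistics

# Hypothetical baseline: daily outbound MB for one host over two weeks.
baseline = [120, 130, 125, 118, 122, 128, 131, 119, 127, 124, 121, 126, 129, 123]

def is_anomalous(observed, history, sigmas=3.0):
    """Flag an observation more than `sigmas` standard deviations
    from the historical mean."""
    mean = statistics.mean(history)
    spread = statistics.pstdev(history)
    return abs(observed - mean) > sigmas * spread

normal_day = is_anomalous(126, baseline)  # False -- within the usual range
weird_day = is_anomalous(900, baseline)   # True -- trigger an alert to investigate
```

Real platforms use far richer models across many data sources, but the principle is the same: the alert fires on deviation from observed behavior, not on a predefined attack signature.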


Incite 1/29/2014: Southern Snowpocalypse

I grew up in the northeast. My memories of snow weren't really good. I didn't ski, so all I knew about snow was that I had to shovel it, and that it was hard to drive in. It is not inherently hard to drive in snow, but too many folks have no idea what they are doing, which makes it hard.

To be clear, this situation is on me. I had an opportunity to go home earlier today. But I wanted my coffee and the comfort of working in a familiar Starbucks, rather than my familiar basement office. Not my brightest decision. I figured most folks would clear out early, so it would be fine later in the day. Wrong. Wrong. Wrong.

Evidently there are an infinite number of people in the northern Atlanta suburbs trying to get home. And they are all on the road at the same time. A few of them have rear wheel drive cars, which get stuck on the mildest of inclines. No one can seem to get anywhere. I depend on the Waze app for navigation. Its crowdsourced traffic info has been invaluable. Not today. It has routed me in a circle, and 90 minutes later I am basically where I started. Although I can't blame Waze – you can't really pinpoint where a car gets stuck and causes gridlock until someone passes by.

In case it wasn't clear, no one is going anywhere. So I wait. I read my email. I caught up on my twitter feed. I checked Facebook, where I saw that most of my friends in ATL were similarly stuck in traffic. It's awesome. My kids have already gone out and played in the snow. I hope the boss took pictures. I missed it. Oh well. Nothing I can do now. Except smile. And breathe. And smile again. At some point I will get home. I will be grateful. Oh yeah, and next time I will stay home when it threatens to snow. Duh.

–Mike

UPDATE: It took me about 4 1/2 hours to get home. Yes, to travel 6 miles. I could have walked home faster. But it was 20 degrees, so that wouldn't really have worked well either. Some kids in XX1's middle school didn't get home until 10 PM. It was a total nightmare.
My family and friends are safe, and that's all that matters. Now get these kids out of my hair. I have work to do…

Photo credit: This is an actual picture of sitting in traffic yesterday. What you see was my view for about an hour inching along. And I don't normally play on the phone when I'm driving, but at that point I wasn't really driving…

Heavy Research

We're back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • The Future of Information Security: Introduction
  • Leveraging Threat Intelligence in Security Monitoring: Benefiting from the Misfortune of Others
  • Reducing Attack Surface with Application Control: Use Cases and Selection Criteria; The Double Edged Sword
  • Advanced Endpoint and Server Protection: Assessment; Introduction

Newly Published Papers

  • Eliminating Surprises with Security Assurance and Testing
  • What CISOs Need to Know about Cloud Computing
  • Defending Against Application Denial of Service
  • Security Awareness Training Evolution
  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services

Incite 4 U

CISOs don't focus on technology, not for long anyway: Seems like this roundtable that Dan Raywood covered in CISOs have "too much focus on technology" is about 5 years behind the times. I spend a bunch of time with CISOs, and for the most part they aren't consumed by technology – more likely they are just looking for products to make the hackers go away. They have been focused on staffing and communicating the value of their security program. Yes, they still worry about malware and mobile devices and this cloud thing. But that doesn't consume them anymore.
And any CISO who is consumed by technology and believes any set of controls can make hackers go away should have a current resume – s/he will need it. – MR

You don't want to know: Sri Karnam writes about the 8 things your boss wants you to know about 'Big Data Security' on the HP blog – to which I respond 'Not!' The three things your boss wants to know, in a security context, are: 1) What sensitive data do we have in there? 2) What is being done to secure it? 3) Is that good enough? The key missing ingredient from Sri's post is that your boss wants this information off the record. Bosses know not to go looking for trouble, and just want to know how to respond when their boss asks. If you formally tell them what's going on, they have knowledge, and can no longer rely on plausible deniability to blame you when something blows up. Sure, that's an ethical copout, but it's also a career-saver. – AL

Pure vs. applied research: Interesting post on Andrew Hay's blog about why security vendors need a research group. It seems every security vendor already has a research group (even if it's a guy paying someone to do a survey), so he's preaching to the choir a bit. But I like his breakdown of pure vs. applied research, where he posits vendors should be doing 70% of their research in areas that directly address customer problems. I couldn't agree more. If you're talking about a huge IT company, they can afford to have Ph.D.s running around doing science projects. But folks who have to keep the lights on each quarter should be focused on doing research to help their customers solve problems. Because most customers can't think about pure research while they are trying to survive each day. –


Leveraging Threat Intelligence in Security Monitoring: Benefiting from the Misfortune of Others

Threat intelligence (TI) is hot because it promises to close the gap a bit between attackers and defenders. So we have done considerable research on TI over the past year. We started by talking about the Early Warning System, a monitoring concept that leverages threat intelligence feeds to look for emerging attacks. Then we dove into the kinds of TI you can extract from network traffic, the ability to identify malicious IPs and senders by gathering TI through email, and finally a view of the external world through EcoSystem TI. As you see there are many different types of threat intelligence feeds, and many ways to apply the technology – both to increase the effectiveness of alerting, and to implement preemptive workarounds based on likely attacks. That is why we call threat intelligence benefiting from the misfortune of others. By understanding attack patterns and other nuggets of information gleaned from studying attacks on other organizations, you can get ahead of the threat. Okay – you cannot actually get ahead of the threat without a time machine. The threat is already out there, but hopefully it hasn’t been used against you yet. As the networks promote their summer reruns, “If you haven’t seen it, it’s new to you!” Shortening the Window We believe one of the most compelling uses for threat intelligence is to help detect attacks earlier in the attack cycle. By looking for attack patterns identified via threat intelligence in your security monitoring/analytics functions, you can shorten the window between compromise and detection. So we are happy to start a new series called Leveraging Threat Intelligence in Security Monitoring. We will go into depth on how to update your process, in order to integrate your existing malware analysis/threat intelligence gathering function with your security monitoring team’s work. 
We will be using parts of our Network Security Operations Quant and Malware Analysis Quant process maps to document a new Security Monitoring Process Model leveraging threat intelligence. We would also like to thank Norse Corp for agreeing to potentially license this content at the end of the process. We build all our public research using our Totally Transparent Research model, so all the research will be posted to the blog first to give everyone an opportunity to provide feedback and comment. But first things first. We need to set the stage by revisiting the kinds of threat intelligence we have highlighted in our research. This will provide the context you need to understand the kinds of TI feeds you can integrate into your security monitoring environment.

Threat Intelligence Sources

You can get effective threat intelligence from a number of different sources. We can chunk them into major categories to look at for security monitoring:

  • Compromised Devices
  • Malware Indicators
  • Reputation
  • Command and Control Networks

Compromised Devices

The first category of TI is the proverbial smoking gun. Something may look compromised, but until it actually starts acting compromised you may never know. Services are emerging to look for indications on the Internet of devices which either act like bots or communicate with C&C networks. These services are no-touch – you don't need to install anything on your own network to get a verdict on devices within your network. How does it work? The intelligence providers penetrate botnets and monitor traffic on C&C networks. Using this information they build lists of (compromised) devices participating in botnets. Of course these services might detect your own internal honeypots or other malware analysis. So you will want to make sure you have some means of determining which devices should show up on their lists, and which shouldn't. But being able to identify compromised devices is extremely useful for prioritizing remediation.
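Consuming such a feed can be as simple as intersecting it with your own address inventory, then filtering out the decoys you expect to appear. A minimal sketch, with entirely hypothetical IPs and inventory:

```python
# Hypothetical feed of botnet-participating IPs from a TI provider,
# cross-referenced against our own address inventory.
feed_ips = {"203.0.113.7", "198.51.100.22", "192.0.2.99"}

our_assets = {"203.0.113.7", "203.0.113.10", "198.51.100.22"}
known_honeypots = {"198.51.100.22"}  # decoys expected to appear in botnet feeds

def prioritize(feed, assets, honeypots):
    """Devices we own that appear compromised, minus intentional decoys."""
    return sorted((feed & assets) - honeypots)

to_remediate = prioritize(feed_ips, our_assets, known_honeypots)  # ['203.0.113.7']
```

The honeypot exclusion matters: without it, your own malware analysis infrastructure shows up as a false alarm every time the feed refreshes.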
Malware Indicators

Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does. This enables you to define both technical and behavioral indicators to seek out within your environment, as Malware Analysis Quant described in gory detail. Why is this important? The key strategy of classical AV – file blacklisting – is no longer effective, so indicators enable you to detect malware by what it does. A number of companies offer information on specific malware. You can upload a hash of a malware file – if the recipient has seen it already they match the hash and return their analysis; otherwise you upload the whole file for analysis. The services run malware samples through proprietary sandbox environments and other analysis engines to figure out what they do, build detailed profiles, and provide comprehensive reports which include specific behaviors and indicators. You can search your environment for those indicators to pinpoint possibly compromised devices. You can also draw conclusions from the kinds of indicators you find. Have those tactics been tied to specific adversaries? Do you see these kinds of activities during reconnaissance, exploitation, or exfiltration? Your analysis can enrich these indicators with additional context for better decisions about the best next step.

Reputation

Since its emergence as a primary data source in the battle against spam, reputation data seems to have become a component of every security control. The most common reputation data is based on IP addresses, and provides a dynamic list of known bad and/or suspicious addresses. This has a variety of uses – learning that a partner's IP address has been compromised, for instance, should set off alarms, especially if the partner has a direct connection to your network. Traffic to known malware distribution sites, phishing sites, command and control nodes, spam relays, and other sites with bad reputations should be investigated.
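As a small sketch of how reputation data might be consulted during monitoring – the feed contents, categories, scores, and threshold below are all hypothetical, and a real feed would be refreshed continuously by the provider:

```python
# Hypothetical reputation feed: IP -> (category, badness score 0-100).
reputation = {
    "203.0.113.50": ("malware_distribution", 95),
    "198.51.100.80": ("spam_relay", 60),
}

def check(ip, min_score=80):
    """Return the bad-reputation category if the score crosses the bar,
    else None."""
    category, score = reputation.get(ip, ("unknown", 0))
    return category if score >= min_score else None

hit = check("203.0.113.50")  # 'malware_distribution' -- investigate this traffic
miss = check("192.0.2.1")    # None -- no known bad reputation
```

Scoring thresholds let you tune how aggressive the alerting is, in keeping with the tuning feedback loop described earlier in this series.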
Besides IP addresses, pretty much everything within your environment can (and should) have a reputation. Devices, URLs, domains, and files, for starters. If you have traffic going to a known bad site, weird traffic coming from a vulnerable contractor-owned device, or even a known bad file showing up when a salesperson connects to the corporate network, you have something to investigate. If something in your environment develops a bad reputation – perhaps as a spam relay or DoS attacker – you need to know ASAP, hopefully before your entire network gets blacklisted.

C&C Traffic Patterns

One specialized type


The SIXTH Annual Disaster Recovery Breakfast (with 100% less boycott)

Holy crap, time flies! Especially when you mark years by making the annual pilgrimage to San Francisco for the RSA Conference. Once again we are hosting our RSA Conference Disaster Recovery Breakfast. It has been six frickin’ years! That’s hard to believe but reinforces that we are not spring chickens anymore. We are grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the glitzy show floor and club scene that dominates the first couple days of the conference. By Thursday you will probably be a disaster like us and ready to kick back, have some conversations at a normal decibel level, and grab a nice breakfast. And with the continued support of MSLGROUP and Kulesa Faul, we are happy to provide an oasis in a morass of hyperbole, booth babes, and tchotchke hunters. As always, the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted recovery items to ease your day (non-prescription only). Yes, the bar will be open because Mike doesn’t like to drink alone. Remember what the DR Breakfast is all about. No marketing, no spin, just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans we are pretty confident you will enjoy the DRB as much as we do. See you there. To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.


Incite 1/22/2014: The Catalyst

I was on the phone last week with Jen Minella, preparing for a podcast on our Neuro-Hacking talk at this year’s RSA Conference, when she asked what my story was. We had never really discussed how we each came to start mindfulness practices. So we shared our stories, and then I realized that given everything else I share on the Incite, I should tell it here as well.

Simply put, I was angry and needed to change. Back in 2006 I decided I wanted to live past 50, so I started taking better care of myself physically. But being more physically fit is only half the equation. I needed to find a way to deal with the stress in my life. I had 3 young children, was starting an independent research boutique, and my wife needed me to help around the house. In hindsight I call that period my Atlas Phase. I took the weight of the world on my shoulders, and many days it was hard to bear. My responsibilities were crushing. So my anger frequently got the best of me.

I went for an introductory session with a life coach midway through 2007. After a short discussion she asked a poignant question. She wondered if my kids were scared of me. That one question forced me to look in the mirror and realize who I really was. I had to acknowledge they were scared at times. That was the catalyst I needed. I wasn’t going to be a lunatic father. I needed to change. The coach suggested meditation as a way to start becoming more aware of my feelings, and to even out the peaks and valleys of my emotions.

A few weeks later I went to visit my Dad. He had been fighting a pretty serious illness using unconventional tactics for a few years at that point. I mentioned meditation to him and he jumped out of his chair and disappeared for a few minutes. He came back with 8 Minute Meditation, and then described how meditation was a key part of his plan to get healthy. He told me to try it. It was only 8 minutes. And it was the beginning of a life-long journey. These practices have had a profound impact on my life.
6 years later it’s pretty rare for me to get angry. I am human and do get annoyed and frustrated. But it doesn’t turn into true anger. Or I guess I don’t let it become anger. When I do get angry it’s very unsettling, but I’m very aware of it now and it doesn’t last long, which I know my wife and kids appreciate. I do too.

Everyone has a different story. Everyone has a different approach to dealing with things. There is no right or wrong. I’ll continue to describe my approach and detail the little victories and the small setbacks. Mostly because this is a weekly journal I use to leave myself breadcrumbs on my journey, so I remember where I have been and how far I have come. And maybe some of you appreciate it as well.

–Mike

Photo credit: “Scared Pandas” originally uploaded by Brian Bennett

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Reducing Attack Surface with Application Control: Use Cases and Selection Criteria; The Double Edged Sword
  • Security Management 2.5: You Buy a New SIEM Yet?: Negotiation; Selection Process; The Decision Process; Evaluating the Incumbent; Revisiting Requirements; Platform Evolution; Changing Needs; Introduction
  • Advanced Endpoint and Server Protection: Assessment; Introduction

Newly Published Papers

  • Eliminating Surprises with Security Assurance and Testing
  • What CISOs Need to Know about Cloud Computing
  • Defending Against Application Denial of Service
  • Security Awareness Training Evolution
  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services

Incite 4 U

SGO: Standard Government Obscurity: The Target hack was pretty bad, and it seems clear it may only be the tip of the iceberg.
Late last week the government released a report with more details of the attack so companies could protect themselves. Er, sort of. The report by iSIGHT Partners was only released to select retailers. As usual, the government isn’t talking much, so iSIGHT went and released the report on their own. A CNN article states, “The U.S. Department of Homeland Security did not make the government’s report public and provided little on its contents. iSIGHT Partners provided CNNMoney a copy of its findings.” Typical. If I were a retailer I would keep reading Brian Krebs to learn what’s going on. The feds are focused on catching the bad guys – you are on your own to stop them until the cuffs go on. – RM

Unrealistic expectations are on YOU! Good post on the Tripwire blog about dealing with unrealistic security expectations. Especially because it seems very close to the approach I have advocated via the Pragmatic CSO for years. I like going after a quick win and making sure to prioritize activities. But my point with the title is that if senior management has unrealistic expectations, it’s because your communications strategies are not effective. You can blame them all you want for being unreasonable, but if they have been in the loop as you built the program, enlisted support, and started executing on initiatives, nothing should be a surprise to them. – MR

Other people’s stuff: The recent Threatpost article ‘Starbucks App Stores User Information, Passwords in Clear Text’ is a bit misleading, as they don’t mention that the leaky bit of code is actually in the included Crashlytics utility. The real lesson here is not about potential harm from passwords in log files, which is a real problem, with


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.