TI+IR/M: Quick Wins

The best way to understand how threat intelligence impacts your incident response/management process is to actually run through an incident scenario, with commentary to illustrate the concepts. For simplicity's sake we assume you are familiar with our recommended model for an incident response organization and the responsibilities of the tier 1, 2, and 3 response levels. You can get a refresher back in our Incident Response Fundamentals series. For brevity we will use an extremely simple, high-level example of how the three response tiers typically evaluate, escalate, and manage incidents. If you are dealing with an advanced adversary things will be neither simple nor high-level, but this provides an overview of how things come together.

The Trigger

Intellectual property theft is a common mission for advanced attackers, so that will be our scenario. As we described in our Threat Intelligence in Security Monitoring paper, you can configure your monitoring system to look for suspicious IP ranges from your adversary analysis. But let's not put the cart before the horse. Knowing you have valuable IP (intellectual property), you can infer that a well-funded adversary (perhaps a nation-state or a competitor) has a great interest in that information. So you configure your monitoring process to look for connections to networks where those adversaries commonly hang out. You get this information from a threat intelligence service and integrate it automatically into your monitoring environment, so you are consistently looking for network traffic that indicates a bad scene.

Let's say your network monitoring tool fires an alert for an outbound request on a high port to an IP range identified as suspicious via threat intelligence. The analyst needs to validate the origin of the packet, so he looks and sees the source IP is in Engineering. The tier 1 analyst passes this information along to a tier 2 responder. Important intellectual property may be involved and he suspects malicious activity, so he also phones the on-call handler to confirm the potential seriousness of the incident and provide a heads-up. Tier 2 takes over and the tier 1 analyst returns to other duties.

The outbound connection is the first indication that something may be funky. An outbound request very well might indicate an exfiltration attempt. Of course it might not, but you need to assume the worst until proven otherwise. Tracing it back to a network that has access to sensitive data means it is definitely something to investigate more closely. The key skill at tier 1 is knowing when to get help. Confirming the alert and pinpointing the device provide the basis for the hand-off to tier 2.

Triage

Now the tier 2 analyst is running point on the investigation. Here is the sequence of steps this individual will take:

  • The tier 2 analyst opens an investigation using the formal case process, because intellectual property is involved and the agreed-upon response management process requires proper chain of custody when IP is involved.
  • Next the analyst begins a full analysis of network communications from the system in question. The system is no longer actively leaking data, but she blocks all traffic to the suspicious external IP address on the perimeter firewall by submitting a high-priority firewall management request. After that change is made she verifies that traffic is in fact blocked. The analyst does run the risk of alerting the adversary, but stopping a potential IP leak is more important than possibly tipping off an adversary.
  • She starts to capture traffic to/from the targeted device, so a record of activity is maintained. The good news is that all the devices within Engineering already run endpoint forensics, so there will be a detailed record of device activity.
  • The analyst then sets an alert for any other network traffic to the address range in question, to identify other potentially compromised devices within the organization.
  • At this point it is time to call or visit the user to see whether this was legitimate (though possibly misguided) activity. The user denies knowing anything about the attack or the networks in question. Through that discussion she also learns that this specific user doesn't have legitimate access to sensitive intellectual property, even though they work in Engineering. Normally this would be good news, but it might indicate privilege escalation, or that the device is a staging area before exfiltration – both bad signs.
  • The Endpoint Protection Platform (EPP) logs for that system don't indicate any known malware on the device, and this analyst doesn't have access to endpoint forensics, so she cannot dig deeper into the device. She has tapped out her ability to investigate, so she notifies her tier 3 manager of the incident.
  • While processing the hand-off she figures she might as well check out the network traffic she started capturing at the first attack indication. The analyst notices outbound requests to a similar destination from one other system on the same subnet, so she informs incident response leadership that they may be investigating a serious compromise.
  • By mining some logs in the SIEM she finds that the system in question logged into a sensitive file server it doesn't normally access, and transferred/copied entire directories. It will be a long night.

As we have mentioned, tier 2 tends to focus on network forensics and fairly straightforward log analysis, because they are usually the quickest ways to pinpoint attack proliferation and gauge severity. The first step is to contain the issue, which entails blocking traffic to the external IP to temporarily eliminate any data leakage. Remember you might not actually know the extent of the compromise, but that shouldn't stop you from taking decisive action to contain the damage as quickly as possible – per the guidance laid down when you designed the incident management process. Tier 3 is notified at this point – not necessarily to take action, but so they are aware there might be a more serious issue. Proactive communication streamlines escalation. Next the tier 2 analyst needs to assess the extent of
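To make the trigger above concrete, here is a minimal Python sketch (not from the original post) of matching outbound connection records against threat intelligence IP ranges to raise a tier 1 alert. The feed format, field names, and sample addresses are illustrative assumptions rather than any particular vendor's schema.

    # Minimal sketch: flag outbound flows whose destination falls in a
    # threat-intelligence-supplied suspicious range. Feed and flow formats
    # below are hypothetical.
    import ipaddress

    ti_ranges = [  # hypothetical TI feed entries
        {"cidr": "203.0.113.0/24", "adversary": "group-a"},
        {"cidr": "198.51.100.0/24", "adversary": "group-b"},
    ]
    networks = [(ipaddress.ip_network(r["cidr"]), r["adversary"]) for r in ti_ranges]

    flows = [  # hypothetical outbound flow records from network monitoring
        {"src": "10.1.20.15", "dst": "203.0.113.47", "dst_port": 49152},
        {"src": "10.1.20.16", "dst": "192.0.2.10", "dst_port": 443},
    ]

    def check_flow(flow):
        """Return an alert dict if the destination is in a suspicious range."""
        dst = ipaddress.ip_address(flow["dst"])
        for net, adversary in networks:
            if dst in net:
                return {"severity": "escalate", "adversary": adversary, **flow}
        return None

    for alert in filter(None, map(check_flow, flows)):
        print(alert)  # in practice this feeds the SIEM or ticketing queue

In a real deployment the feed is refreshed automatically and the matching is done by your monitoring platform; the point is simply that the lookup which triggered this scenario boils down to a set membership test.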


Cloud File Storage and Collaboration: Core Security Features

This is part 3 of our Security Pro's Guide to Cloud File Storage and Collaboration (file sync and share). The full paper is available on GitHub as we write it. See also part 1 and part 2.

Identity and Access Management

Managing users and access is the most important feature area after the security baseline – the entire security and governance model relies on it. These are the key elements to look for:

  • Service and federated IDM: The cloud service needs to implement an internal identity model to allow sharing with external parties, without requiring those individuals or organizations to register with your internal identity provider. The service also must support federated identity so you can use your internal directory and don't need to manually register all your users with the service. SAML is the preferred standard. Both models should support API access, which is key to integrating the service with your applications as back-end storage.
  • Authorization and access controls: Once you establish and integrate identity, the service should support a robust and granular permissions model. The basics include user and group access at the directory, subdirectory, and file levels. The model should integrate internal, external, and anonymous users. Permissions should include read, write/edit, download, and view (web viewing but not downloading of files). Additional permissions manage who can share files (internally and externally), alter permissions, comment, or delete files.

External Users: An external authenticated user is one who registers with the cloud provider but isn't part of your organization. This is important for collaborative group shares, such as deal and project rooms. Most services also support public external shares, but these are open to the world. That is why providers need to support both their own platform user model and federated identity to integrate with your existing internal directory.

  • Device control: Cloud storage services are very frequently used to support mobile users on a variety of devices. Device control allows management of which devices (computers and mobile devices) are authorized for which users, to ensure only authorized devices have access.
  • Two-factor authentication (2FA): Account credential compromise is a major concern, so some providers can require a second authentication factor to access their services. Today this is typically a text message with a one-time password sent to a registered mobile phone. The second factor is generally only required to access the service from a 'new' (unregistered) device or computer.
  • Centralized management: Administrators can manage all permissions and sharing through the service's web interface. For enterprise deployments this includes enterprise-wide policies, such as restricting external sharing completely and auto-expiring all shared links after a configurable interval. Administrators should also be able to identify all shared links without having to crawl through the directory structure.

Sharing permissions and policies are a key differentiator between enterprise-class and consumer services. For enterprises central control and management of shares is essential. So is the ability to manage who can share content externally, with what permissions, and with which categories of users (e.g., restricted to registered users vs. via an open file link). You might, for example, only allow employees to share with authenticated users on an enterprise-wide basis. Or only allow certain user roles to share files externally, and even then only with in-browser viewing, with links set to expire in 30 days. Each organization has its own tolerances for sharing and file permissions. Granular controls allow you to align your use of the service with existing policies. They can also be a security benefit, providing centralized control over all storage – unlike the traditional model, where you need to manage dozens or even thousands of different systems, each with its own authentication methods, authorization models, and permissions.

Audit and Transparency

One of the most powerful security features of cloud storage services is a complete audit log of all user and device activity. Enterprise-class services track all activity: which users touch which files from which devices. Features to look for include:

  • Completeness of the audit log: It should include user, device, accessed file, what activity was performed (download/view/edit, with before and after versions if appropriate), and additional metadata such as location.
  • Log duration: How much data does the audit log contain? Is it eternal, or does it expire in 90 days?
  • Log management and visibility: How do you access the log? Is the user interface navigable and centralized, or do you need to hunt around and click individual files? Can you filter and report by user, file, and device?
  • Integration and export: Logs should be externally consumable in a standard format to integrate with existing log management and SIEM tools. Administrators should also be able to export activity reports and raw logs.

These features don't cover everything offered by these services, but they are the core security capabilities enterprise and business users should start with.
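To illustrate the integration and export point above, here is a rough Python sketch of consuming an exported audit log and filtering it by user, file, and device. The JSON-lines layout and field names are hypothetical; real providers each have their own schema.

    # Consume a hypothetical JSON-lines audit export: one event per line with
    # timestamp, user, device, file, and action fields (assumed names).
    import json
    from collections import Counter

    raw_lines = [
        '{"ts": "2014-07-20T14:02:11Z", "user": "alice@example.com", "device": "laptop-42", "file": "/deals/q3.xlsx", "action": "download"}',
        '{"ts": "2014-07-20T14:05:37Z", "user": "bob@example.com", "device": "phone-7", "file": "/deals/q3.xlsx", "action": "view"}',
        '{"ts": "2014-07-20T15:11:02Z", "user": "alice@example.com", "device": "laptop-42", "file": "/hr/salaries.csv", "action": "download"}',
    ]
    events = [json.loads(line) for line in raw_lines]

    # Filter by user, then summarize which files were downloaded from which devices
    alice = [e for e in events if e["user"] == "alice@example.com"]
    downloads = Counter((e["file"], e["device"]) for e in alice if e["action"] == "download")
    for (path, device), count in downloads.items():
        print(f"{path} downloaded {count}x from {device}")

The filtering itself is trivial; the export matters because it puts these events in your SIEM alongside everything else, rather than leaving them stranded in the provider's console.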


Cloud File Storage and Collaboration: Overview and Baseline Security

This is part 2 of our Security Pro's Guide to Cloud File Storage and Collaboration (file sync and share). The full paper is available on GitHub as we write it. See also Part 1.

Understanding Cloud File Storage and Collaboration Services

Cloud File Storage and Collaboration (often called Sync and Share) is one of the first things people think of when they hear the term 'cloud', and one of the most popular product categories. It tends to be one of the first areas IT departments struggle to manage, because many users and business units want the functionality and use it personally, and there is a wide variety of free and inexpensive options. As you might expect, since we can't even standardize on a single category name, we also see a wide range of different features and functions across the various services. We will start by detailing the core features with security implications, then the core security features themselves, and finally more advanced security features we see cropping up in some providers. This isn't merely a feature list – we cover each feature's security implications, what to look for, and how you might want to integrate it (if available) into your security program.

Overview and Core Features

When these services first appeared, the term Cloud Sync and Share did a good job of encapsulating their capabilities. You could save a file locally, it would sync and upload to a cloud service, and you could expose a share link so someone else on the Internet could download the file. The tools had various mobile agents for different devices, and essentially all of them had some level of versioning so you could recover deleted files or previous versions.

Cloud or not? Cloud services popularized sync and share, but there are also non-cloud alternatives which rely on hosting within your own environment – connecting over a VPN or the public Internet. There is considerable overlap between these very different models, but this paper focuses on cloud options. They are where we hear the most concern about security, and cloud services are dominant in this market – particularly as organizations move farther into the cloud and prioritize mobility.

Most providers now offer much more than core sync and share. Here are the core features which tend to define these services:

  • Storage: The cloud provider stores files. This typically includes multiple versions and retention of deleted files. The retention period, recovery method, and mechanism for reverting to a previous version all vary greatly. Enterprises need to understand how much is stored, what users can access/recover, and how this affects security. For example, make sure you understand version and deletion recovery so sensitive files you 'removed' don't turn up later.
  • Sync: A local user directory (or server directory) synchronizes changes with the cloud provider. Edit a file locally, and it silently syncs up to the server. Update it on one device and it propagates to the rest. The cloud provider handles version conflicts (which can leave version orphans in the user folders). Typically users access alternate versions and recover deleted files through the web interface, which sometimes also manages collisions.
  • Share: Users can share files through a variety of mechanisms, including sharing directly with another user of the service (inside or outside the organization), which allows the recipient to sync the file or folder like their own content. Shared items can be web only; sharing can be open (public), restricted to registered users, or require a one-off password. This is often handled at the file or folder level, allowing capabilities such as project rooms to support collaboration across organizations without allowing direct access to any participant's private data. We will cover security implications of sharing throughout this report, especially how to manage and secure sharing.
  • View: Many services now include in-browser viewers for different file types. Aside from convenience and ensuring users can see files regardless of whether they have Office installed, this can also function as a security control, instead of allowing users to download files locally.
  • Collaborate: Expanding on simple viewers (and the reason Sync and Share isn't entirely descriptive any more), some platforms allow users to mark up, comment on, or even edit collaborative documents directly in a web interface. This also ties into the project/share rooms we mention above.
  • Web and Mobile Support: The platform syncs locally with multiple operating systems using local agents (okay, Windows, Mac, and at least iOS), provides a browser-based user interface for access from anywhere, and offers native apps for multiple mobile platforms.
  • APIs: Most cloud services expose APIs for direct integration into other applications. This is how, for example, Apple is adding a number of providers at the file system layer in the next versions of OS X and iOS. On the other hand, you could potentially link into APIs directly to pull security data or manage security settings.

These core features cover the basics offered by most enterprise-class cloud file storage and collaboration services. Most of the core security features we are about to cover are designed to directly manage and secure these capabilities. And since "Cloud File Storage and Collaboration Service" is a bit of a mouthful, for the rest of this paper we will simply refer to them as cloud storage providers.

Core Security Features

Core security features are those most commonly seen in enterprise-class cloud storage providers. That doesn't mean every provider supports them, but to evaluate the security of a service this is where you should start. Keep in mind that different providers offer different levels of support for these features; it is important to dig into the documentation and understand how well each feature matches your requirements. Don't assume any marketing material is accurate.

Security Baseline

Few things matter more than starting with a provider that offers strong baseline security. The last thing you want to do is trust your sensitive files to a company that doesn't consider security among their top couple of priorities. Key areas to look at include:

  • Datacenter security:
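Picking up the APIs feature above: a security team can often pull sharing data directly rather than waiting for a console report. The Python sketch below calls a hypothetical cloud storage REST API (via the requests library) to enumerate shared links and flag open or never-expiring shares. The endpoint, field names, and policy checks are assumptions for illustration; consult your provider's actual API documentation.

    # Illustrative only: enumerate shares from a hypothetical REST API and
    # flag links that are public or never expire. Endpoint and fields are assumed.
    import requests

    API_BASE = "https://api.example-storage.com/v1"   # hypothetical endpoint
    TOKEN = "REDACTED"                                 # service API token

    def list_shares():
        resp = requests.get(
            f"{API_BASE}/shares",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("shares", [])

    def risky_shares(shares):
        """Yield shares that violate example policy: public access or no expiration."""
        for share in shares:
            if share.get("access") == "public" or share.get("expires") is None:
                yield share

    if __name__ == "__main__":
        for share in risky_shares(list_shares()):
            print(share.get("url"), share.get("owner"), share.get("access"))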


Incite 7/23/2014: Mystic Rhythms

One of the things I most enjoy when the kids are at camp is being able to follow my natural rhythms. During the school year things are pretty structured. Get up at 5, do my meditation, get the kids ready for school, do some yoga/exercise, clean up, and get to work. When I'm on the road things are built around the business day, when I'm running around from meeting to meeting. But during the summer, when I'm not traveling, I can be a little less structured and it's really nice. I still get up pretty early, but if I want to watch an episode of Game of Thrones at 10am I will. If I want to do some journaling at 3pm, I will. If I feel like starting the Incite at 9pm I'll do that too. I tend to be pretty productive first thing in the morning, and then later in the day. Not sure why, but that's my rhythm. I have always tried to schedule my work calls in the early afternoon when possible, when I have a bit less energy, and needing to be on during the call carries me through. I do a lot of my writing pretty late at night. At least I have been lately. That's when inspiration hits, and I know better than to mess with things when it's flowing.

Of course when the kids come home, rhythms be damned. Seems the school board doesn't give a rat's ass about my rhythms. Nor does the dance company or the lax team. The kids need to be there when they need to be there. So I adapt, and I'm probably not as efficient as I could be. But it's okay. I can still nod off at 11am or catch a matinee at noon if I feel like it. Just don't tell The Boss, Rich, or Adrian – they think I'm always diligently working. That can be our little secret…

–Mike

Photo credit: "Mystic Rhythms signage" originally uploaded by Julie Dennehy

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • July 22 – Hacker Summer Camp
  • July 14 – China and Career Advancement
  • June 30 – G Who Shall Not Be Named
  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon
  • May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
  • May 5 – There Is No SecDevOps
  • April 28 – The Verizon DBIR
  • April 14 – Three for Five
  • March 24 – The End of Full Disclosure

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • The Security Pro's Guide to Cloud File Storage and Collaboration: Introduction
  • Leveraging Threat Intelligence in Incident Response/Management: The (New) Incident Response & Management Process Model; Threat Intelligence + Data Collection = Responding Better; Really Responding Faster; Introduction
  • Endpoint Security Management Buyer's Guide (Update): Mobile Endpoint Security Management
  • Trends in Data Centric Security: Deployment Models; Tools; Introduction; Use Cases
  • Understanding Role-based Access Control: Advanced Concepts; Introduction
  • NoSQL Security 2.0: Understanding NoSQL Platforms; Introduction

Newly Published Papers

  • Open Source Development and Application Security Analysis
  • Advanced Endpoint and Server Protection
  • Defending Against Network-based DDoS Attacks
  • Reducing Attack Surface with Application Control
  • Leveraging Threat Intelligence in Security Monitoring
  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing

Incite 4 U

  • No executive access, what? Something doesn't compute about this Ponemon survey claiming 31% of organizations surveyed never speak to their senior team about security. And 40% in the UK? I don't believe it. Maybe those respondents had one pint too many. Any regulated organization needs to communicate about security. Any company looking to acquire cyber liability insurance needs to communicate about security. Any friggin' company with anything to steal needs to communicate about security. Now, is that communication effective? Probably not. Should it happen more often? Absolutely. But I don't buy "not at all" – that sounds like hogwash. But it makes for good click-thru numbers, and I shouldn't forget vendors need to feed the pageview beast. – MR
  • And they're off! Starbucks is launching a general purpose payment app, so you can not only buy coffee, but use the app for other retailers as well. Sure, it seems odd to use a Starbucks app to buy something like airline tickets, but the race to own the customer shopping experience is heating up! Currently it's Visa by a nose – they continue to push support for their mobile wallet and aggressively engage merchants to support single-button checkout in Europe. Just to pat myself on the back a bit, a year ago I said that Visa was gunning to be an Identity Provider, and that is essentially what this is. Merchant app? Merchant wallet? Payment provider wallet? Don't like any of those options? How about one embedded into your phone? For years telcos have been working with phone manufacturers to embed a 'secure element' to manage secure communications, VPN, and secure payment linked directly to your cell account. Fortunately that cat herding exercise is going nowhere fast – would you choose AT&T as your bank? What could go wrong with that? And don't forget about new payment approaches either. Host Card Emulation (e.g., a virtual secure element) running


Firestarter: Hacker Summer Camp

In the latest Firestarter, Rich, Mike, and Adrian discuss the latest controversial research to hit the news from HOPE and Black Hat. We start with a presentation by Jonathan Zdziarski on data recoverable using forensics on iOS. While technically accurate, we think the intent he ascribes to Apple reflects a deeply flawed analysis. We then discuss a talk removed from Black Hat on de-anonymizing Tor. In this case it seems the researchers didn't really understand the legal environment around them. Both cases are examples of great research gone a little awry. And Rich talks about a snowball fight with a herd of elk. These things happen. The audio-only version is up too.


TI+IR/M: The New Incident (Response) & Management Process Model

Now that we have the inputs (both internal and external) to our incident response/management process, we are ready to go operational. So let's map out the IR/M process in detail to show where threat intelligence and other security data allow you to respond faster and more effectively.

Trigger and Escalate

The incident management process starts with a trigger that kicks off the response, and the basic information you gather varies based on what triggered the alert. You may get alerts from all over the place, including any of your monitoring systems and the help desk. Nobody has a shortage of alerts – the problem is finding the critical alerts and taking immediate action. Not all alerts require a full incident response – much of what you already deal with on a day-to-day basis is handled by existing security processes. Incident response/management is about those situations that fall outside the range of your normal background noise.

Where do you draw the line? That depends entirely on your organization. In a small business, a single system infected with malware might require a response because all devices have access to critical information. But a larger company might handle the same infection within standard operational processes. Regardless of where the line is drawn, communication is critical. All parties must be clear on which situations require a full incident investigation and which do not, before you can decide whether to pull the trigger or not.

For any incident you need a few key pieces of information early to guide next steps. These include:

  • What triggered the alert? If someone was involved or reported it, who are they?
  • What is the reported nature of the incident?
  • What is the reported scope of the incident? This is basically the number and nature of systems/data/people involved. Are any critical assets involved?
  • When did the incident occur, and is it ongoing?
  • Are there any known precipitating events for the incident? Is there a clear cause?

Gather what you can from this list to provide an initial picture of what's going on. When the initial responder judges an incident to be more serious it's time to escalate. You should have guidelines for escalation, such as:

  • Involvement of designated critical data or systems.
  • Malware infecting a certain number of systems.
  • Sensitive data detected leaving the organization.
  • Unusual traffic/behavior that could indicate an external compromise.

Once you escalate it is time to assign an appropriate resource, request additional resources if needed, and begin the response with triage. A sketch of how these guidelines can be encoded appears after this section.

Response Triage

Before you can do anything, you need to define accountabilities among the team. That means specifying the incident handler – the responsible party until a new responsible party is designated. You also need to line up resources to help, based on answers to the questions above, to make sure you have the right expertise and context to work through the incident. We have more detail on staffing the response in our Incident Response Fundamentals series.

The next step is to narrow down the scope of data you need to analyze. As discussed in the last post, you spend considerable time collecting events and logs, as well as network and endpoint forensics. This is a tremendous amount of data, so narrowing down the scope of what you investigate is critical. You might filter on the segments attacked, or the logs of the application in question. Perhaps you will take forensics from endpoints at a certain office if you believe the incident was contained. This is all to make the data mining process manageable. With all this shiny big data technology, do you need to actually move the data? Of course not, but you will need flexible filters so you can see only items relevant to this incident in your forensic search results. Time is of the essence in any response, so you cannot afford to get bogged down with meaningless and irrelevant results as you work through collected data.

Analyze

Once you have filters in place you will want to start analyzing the data to answer several questions:

  • Who is attacking you?
  • What tactics are they using?
  • What is the extent of the potential damage?

You may have an initial idea based on the alert that triggered the response, but now you need to prove that hypothesis. This is where threat intelligence plays a huge role in accelerating your response. Based on the indicators you found, a TI service can help identify a potentially responsible party – or at least a handful of them. Every adversary has their preferred tactics, and whether through adversary analysis (discussed in Really Responding Faster) or actual indicators, you want to leverage external information to understand the attacker and their tactics. It is a bit like having a crystal ball, allowing you to focus your efforts on what the attacker likely did, and where.

Then you need to size up or scope out the damage. This comes down to the responder's initial impressions as they roll up to the scene. The goal here is to take the initial information provided and expand on it as quickly as possible to determine the true extent of the incident. To determine scope you will want to start digging into the data to establish the systems, networks, and data involved. You won't be able to pinpoint every single affected device at this point – the goal is to get a handle on how big a problem you might be facing, and generate some ideas on how to best mitigate it.

Finally, based on the incident handler's initial assessment, you need to decide whether this requires a formal investigation due to potential law enforcement impact. If so you will need to start thinking about chain of custody for the evidence, so you can prove the data was not tampered with, and about tracking the incident in a case management system. Some organizations treat every incident this way, and that's fine. But not all organizations have the resources or capabilities for that, in
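One way to keep escalation consistent is to encode guidelines like those above as explicit rules that the tier 1 queue can check the same way every time. The Python sketch below is a simplified illustration; the asset names, thresholds, and alert fields are assumptions, not recommendations.

    # Encode escalation guidelines as explicit, testable rules (illustrative values).
    CRITICAL_ASSETS = {"fileserver-eng-01", "domain-controller-01"}
    MALWARE_COUNT_THRESHOLD = 5  # example value for "a certain number of systems"

    def should_escalate(alert):
        """Return (True, reason) if the alert meets any escalation guideline."""
        if set(alert.get("assets", [])) & CRITICAL_ASSETS:
            return True, "designated critical data or systems involved"
        if alert.get("infected_count", 0) >= MALWARE_COUNT_THRESHOLD:
            return True, "malware infections above threshold"
        if alert.get("sensitive_data_egress"):
            return True, "sensitive data detected leaving the organization"
        if alert.get("anomalous_external_traffic"):
            return True, "unusual traffic indicating possible external compromise"
        return False, "handle within standard operational processes"

    example = {"assets": ["fileserver-eng-01"], "infected_count": 1,
               "sensitive_data_egress": False, "anomalous_external_traffic": True}
    print(should_escalate(example))  # -> (True, "designated critical data or systems involved")

The value here is less in the code than in forcing the organization to write its escalation criteria down unambiguously, so tier 1 doesn't have to guess.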


TI+IR/M: Threat Intelligence + Data Collection = Responding Better

Our last post defined what is needed to Really Respond Faster, so now let's peel back the next layer of the onion and delve into collecting data that will be useful for investigation, both internally and externally. This starts with gathering threat intelligence to cover the external side. It also involves a systematic effort to gather forensic information from networks and endpoints, while leveraging existing security information sources including events, logs, and configurations.

External View: Integrating Threat Intel

In the last post we described the kinds of threat intelligence at your disposal and how they can assist your response. But that doesn't explain how you can gather this information, or where to put it so it's useful when you are knee-deep in response. First let's discuss the aggregation point. In Early Warning System we described a platform to aggregate threat intelligence, and those concepts are still relevant to what you need the platform to do. You need the platform to aggregate third-party intelligence feeds, and to be able to scan your environment for indicators to find potentially compromised devices. To meet these goals a few major capabilities stand out:

  • Open: The first job of any platform is to facilitate and accelerate investigation, so you need the ability to aggregate threat intelligence and other security data quickly, easily, and flexibly. Intelligence feeds are typically just data (often XML), and increasingly distributed in industry-standard formats such as STIX – making integration relatively straightforward.
  • Scalable: You will collect a lot of data during investigation, so scalability is essential. Keep in mind the difference between data scalability (the amount of stuff you can store) and computational scalability (your ability to analyze and search the collected data).
  • Flexible search: Investigations still involve quite a bit of art, rather than being pure formal science. As tools improve and integrated threat intelligence helps narrow down targets for investigation, you will be less reliant on forensic 'artists'. But you will always be mining collected data and searching for attack indications, regardless of the capabilities of the person with their hands on the keyboard. So your investigation platform must make it easy to search all your data sources, and then identify assets at risk based on what you found.

The key to making this entire process run is automation. Yes, we at Securosis talk about automation a whole lot these days, and there is a reason for that. Things are happening too quickly for you to do much of anything manually, especially in the heat of an investigation. You need the ability to pull threat intelligence in a machine-readable format, and then pump it into an analysis platform without human intervention. Simple, right? So let's dig into the threat intelligence sources to provide perspective on how to integrate that data into your platform.

  • Compromised devices: The most actionable intelligence you can get is still a clear indication of compromised devices. This provides an excellent place to begin your investigation and manage your response. There are many ways you might conclude a device is compromised. The first is by seeing clear indicators of command and control traffic in the device's network traffic, such as DNS requests whose frequency and content indicate a domain generation algorithm (DGA) for finding botnet controllers. Monitoring traffic from the device can also show files or other sensitive data being transmitted, indicating exfiltration or (via network traffic analysis) a remote access trojan.
  • Malware indicators: As described in our Malware Analysis Quant research, you can build a lab and perform both static and dynamic analysis of malware samples to identify specific indicators of how the malware compromises devices. This is not for the faint of heart – thorough and useful analysis requires significant investment, resources, and expertise. The good news is that numerous commercial services now offer those indicators in a format you can use to easily search through collected security data.
  • Adversary networks: Using IP reputation data broken down into groups of adversaries can help you determine the extent of compromise. If during your initial investigation you find malware typically associated with Adversary A, you can then look for traffic going to networks associated with that adversary. Effective and efficient response requires focus, so knowing which of your compromised devices may have been compromised in a single attack helps you isolate and dig deeper into that attack.

Given the demands of gathering sufficient information to analyze, and the challenge of detecting and codifying appropriate patterns and indicators of compromise, most organizations look for a commercial provider to develop and provide this threat intelligence. It is typically packaged as a feed for direct integration into incident response/monitoring platforms. Wrapping it all together, we have the process map below. The map encompasses profiling the adversary as discussed in the last post, collecting intelligence, analyzing threats, and then integrating threat intelligence into the incident response process.

Internal View: Collecting Forensics

The other side of the coin is making sure you have sufficient information about what's happening in your environment. We have researched selecting and deploying SIEM and Log Management extensively, and that information tends to be the low-hanging fruit for populating your internal security data repository. To aid investigation you should monitor the following sources (preferably continuously):

  • Perimeter networks and devices: The bad guys tend to be out there, meaning they need to cross your perimeter to achieve their mission. So look for issues on devices between them and their targets.
  • Identity: Who is as important as what, so analyze access to specific resources – especially within a privileged user context.
  • Servers: We are big fans of anomaly detection, configuration assessment, and whitelisting on critical servers such as domain controllers and app servers, to alert you to funky stuff to investigate at the server level.
  • Databases: Likewise, correlating database anomalies against other types of traffic (such as reconnaissance and network exfiltration) can indicate a breach in progress. Better to know that before your credit card brand notifies you.
  • File integrity: Most attacks change key system files, so by
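As a toy example of one signal mentioned above – DNS requests whose content suggests a domain generation algorithm – the Python sketch below flags query names that look machine-generated using a crude length-plus-entropy heuristic. Real DGA detection combines many more signals (query frequency, NXDOMAIN rates, threat intelligence feeds); the threshold here is an arbitrary illustration.

    # Toy DGA heuristic: long, high-entropy left-most labels look machine-generated.
    import math
    from collections import Counter

    def shannon_entropy(s):
        counts = Counter(s)
        total = len(s)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_generated(domain, entropy_threshold=3.5, min_length=12):
        label = domain.split(".")[0]  # examine the left-most label only
        return len(label) >= min_length and shannon_entropy(label) >= entropy_threshold

    dns_queries = ["www.securosis.com", "xkq7f3zt9wham2pd.info", "mail.example.org"]
    for q in dns_queries:
        if looks_generated(q):
            print("possible DGA domain - investigate the requesting host:", q)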


Leading Security ‘People’

In the July 2 Incite I highlighted Dave Elfering's discussion of the need to sell as part of your security program. Going through my Instapaper links I came across Dave's post again, and I wanted to dig a bit deeper. Here is what I wrote in my snippet:

Everyone sells. No matter what you do you are selling. In the CISO context you are selling your program and your leadership. As Dave says, "To truly lead and be effective people have to be sold on you; on what and who you are." Truth. If your team (both upstream / senior management and downstream / security team) isn't sold on you, you can't deliver news they need to hear. And you'll be delivering that news a lot – you are in security, right?

That post just keeps getting better because it discusses the reality of leading. You need to know yourself. You need to be yourself. More wisdom: "Credentials and mad technical skills are great, but they're not who you are. Titles are great, but they're not who you are. Who you are is what you truly have to sell and the leader who instead relies on Machiavellian methods to self-serving ends is an empty suit." If you can't be authentic you can't lead. Well said, Dave.

Let's dig a little deeper into the leadership angle here, because that's not something most security folks have been trained to do. Here is another chunk of Dave's post:

As a leader you are guaranteed to be put into a continuous onslaught of events and situations, the circumstances of which are often beyond your control. What you do control is how you deal with them. This will be decided by who you are. People who rely on intimidation through authority or the manipulation of personality ethic may be effective up to a point, but in the melee of events those alone aren't sufficient.

Leading is a personal endeavor, which reflects who you are. If you are an intimidator, don't be surprised when your team consists of folks who (for whatever reason) accept being intimidated. But at some point fear and manipulation run out of gas. There is a time and a place for almost everything. There are situations where someone must take the organization on their back and carry it forward by whatever means necessary. That situation might be neither kind nor graceful. But it is also not sustainable. At some point your team needs to believe in its mission. They need to believe in their strategy for getting there. And they need to understand how they will improve and grow personally by participating. They need to want to be there, and to put forth the effort. Especially in security, given the sheer number of opportunities security folks have to choose from.

Security is a hard path. You need to be tough to handle the lack of external validation, and the fact that security is not something you can ever win or finish. But that doesn't mean you (as a leader) have to be hard all the time. At the end of the day we are all people, and we need to be treated that way.

Photo credit: "LEAD" originally uploaded by Leo Reynolds


Friday Summary: July 18, 2014, Rip Van Winkle edition

I have been talking about data centric security all week, so you might figure that's what I will talk about in this week's summary. Wrong. That's because I'm having a Rip Van Winkle moment. I just got a snapshot of where we have been through the last six years, and I now see pretty clearly where we are going. It is because I have not done much coding over the last six years; now that I am playing around again I realize not just that everything has changed, but also why. It's not just that every single tool I was comfortable with – code management, testing, IDE, bug tracking, etc. – has been chucked into the dustbin, it's that most assumptions about how to work have been tossed on their ears. Server uptime used to be the measure of reliability – I now regularly kill servers to ensure reliability. I used to worry that Java was too slow, so I would code C – now I use JRuby to speed things up. I used to slow down code releases so QA could complete test sweeps – now I speed up the dev cycle so testing can happen faster. I used to configure servers and applications after I launched them – now I do it beforehand. Developers should never push code to production; now developers push code to production as frequently as possible. Patching destabilizes production code; now we patch as fast as possible. We'll fix it after we ship; now infrastructure and efficiency take precedence over features and functions. Task cards over detailed design specs; design for success gave way to "fail faster" and constant refactoring.

My friends are gone, my dog's dead, and much of what I knew is no longer correct. Rip Van Winkle. It's like that. Step away for a couple years and all your points of reference have changed – but it's a wonderful thing! Every process control assumption has been trampled on – for good reason: those assumptions proved wrong. Things you relied on are totally irrelevant because they have been replaced by something better. Moore's Law predicts that compute power effectively doubles every two years while costs remain static. I think development is moving even faster. Ten years ago some firms I worked with released code once a year – now it's 20 times a day. I know nothing all over again … and that's great! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian and Mort talk Big Data with George V Hulme.
  • Mort quoted in Communicating at the speed of DevOps.

Favorite Securosis Posts

  • Mike Rothman: The Security Pro's Guide to Cloud File Storage and Collaboration: Introduction. I'm looking forward to this series from Rich because there is a lot of noise and lots of competitors in the cloud-based storage game. Lots of hyperbole too, in terms of what an enterprise needs.
  • Adrian Lane: Firestarter: China and Career Advancement. Lots of people are looking to get into security and lots are looking to hire. But HR is an impediment, so both sides need to think up creative ways to find talent.

Other Securosis Posts

  • Trends in Data Centric Security: Deployment Models.
  • The Security Pro's Guide to Cloud File Storage and Collaboration: Introduction.
  • Incite 7/16/2014: Surprises.
  • Are CISOs finally 'real' executives?
  • Firestarter: China and Career Advancement.
  • Leveraging TI in Incident Response/Management: Really Responding Faster.
  • It's Just a Matter of Time.
  • Listen to Rich Talk, Win a … Ducati?
  • Summary: Boulder.

Favorite Outside Posts

  • Mike Rothman: Is better possible? Another great one by Godin. "If you accept the results you've gotten before, if you hold on to them tightly, then you never have to face the fear of the void, of losing what you've got, of trading in your success for your failure." Yes, we all can get better. You just have to accept the fear that you'll fail.
  • Gunnar: Apple and IBM Team Up to Push iOS in the Enterprise. My mobile security talk two years back was "From the iPhone in your pocket to the Mainframe" – now the best in class front ends meet the best in class back ends. Or what I call iBM. The IBM and Apple match was a bright strategy by Ginni Rometty and Tim Cook, but it might have been drafted by David Ricardo, who formalized comparative advantage – a trade where both sides gain.
  • Adrian Lane: Server Lifetime as SDLC Metric. And people say cloud is not that different … but isn't it funny how many strongly held IT beliefs are exactly reversed in cloud services.
  • David Mortman: Oracle's Data Redaction is Broken.

Research Reports and Presentations

  • Analysis of the 2014 Open Source Development and Application Security Survey.
  • Defending Against Network-based Distributed Denial of Service Attacks.
  • Reducing Attack Surface with Application Control.
  • Leveraging Threat Intelligence in Security Monitoring.
  • The Future of Security: The Trends and Technologies Transforming Security.
  • Security Analytics with Big Data.
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7.
  • Eliminate Surprises with Security Assurance and Testing.
  • What CISOs Need to Know about Cloud Computing.

Top News and Posts

  • Oracle fixes 113 security vulnerabilities, 20 just in Java.
  • Google's Project Zero.
  • Specially Crafted Packet DoS Attacks, Here We Go Again.
  • SCOTUS's new Rummaging Doctrine.

Blog Comment of the Week

This week's best comment goes to Jeff, in response to Leveraging TI in Incident Response/Management.

Sorry if this goes a little bit off topic, but I believe this relates back to responding faster (and the continuous security monitoring that Securosis has championed), but would like to get your thoughts on the best place/recommended infrastructure designs to terminate, decrypt, and inspect SSL traffic to/from a network so all relevant security tools – IPS/IDS, WAFs, proxies, security gateways, etc. – can inspect the traffic to ensure a complete picture of what's entering/leaving the network to allow for quick/faster responses to threats. Thx, Jeff


Trends in Data Centric Security: Deployment Models

So far we have talked about the need for data centric security, what that means, and which tools fit the model. Now it is time to paint a more specific picture of how to implement and deploy data centric security, so here are some concrete examples of how the tools are deployed to support a data centric model.

Gateways

A gateway is typically an appliance that sits in-line with traffic and applies security as data passes. Data packets are inspected near line speed, and sensitive data is replaced or obfuscated before packets are passed on. Gateways are commonly used by enterprises before data is moved off-premise, such as up to the cloud or to another third-party service provider. The gateway sits inside the corporate firewall, at the 'edge' of the infrastructure, discovering and filtering out sensitive data. For example some firms encrypt data before it is moved into cloud storage for backups. Others filter web-based transactions inline, replacing credit card data with tokens without disrupting the web server or commerce applications. Gateways offer high-performance substitution for data in motion, but they must be able to parse the data stream to encrypt, tokenize, or mask sensitive data.

Another gateway deployment model puts appliances in front of "big data" (NoSQL databases such as Hadoop), replacing data before insertion into the cluster. But support for high "input velocity" is a key advantage of big data platforms, so to avoid crippling performance at the security bottleneck, gateways must be able to perform data replacement while keeping up with the big data platform's ingestion rate. It is not uncommon to see a cluster of appliances feeding a single NoSQL repository, or even hundreds of cloud servers spun up on demand, to mask or tokenize data. These services must secure data very quickly, so they don't provide deep analysis. Gateways may even need to be told the location of sensitive data within the stream to support substitution.

Hub and Spoke

ETL (Extract, Transform, and Load) has been around almost as long as relational databases. It describes a process for extracting data from one database, masking it to remove sensitive data, then loading the desensitized data into another database. Over the last several years we have seen a huge resurgence of ETL, as firms look to populate test databases with non-sensitive data that still provides a reliable testbed for quality assurance efforts. A masking or tokenization 'hub' orchestrates data movement and implements security. Modeled on test data management systems, modern systems alter health care data and PII (Personally Identifiable Information) to support use in multiple locations with inconsistent or inadequate security. The hub-and-spoke model is typically used to create multiple data sets, rather than securing streams of data; to align with the hub-and-spoke model, encryption and tokenization are the most common methods of protection. Encryption enables trusted users to decrypt the data as needed, and masking supports analytics without providing the real (sensitive) data.

The graphic above shows ETL in its most basic form, but the old platforms have evolved into much more sophisticated data management systems. They can now discover data stored in files and databases, morph together multiple sources to create new data sets, apply different masks for different audiences, and relocate the results – as files, as JSON streams, or even inserted into a data repository. It is a form of data orchestration, moving information automatically according to policy. Plummeting compute and storage costs have made it feasible to produce and propagate multiple data sets to various audiences.

Reverse Proxy

As with the gateways described above, in the reverse-proxy model an appliance – whether virtual or physical – is inserted inline into the data flow. But reverse proxies are used specifically between users and a database. Offering much more than simple positional substitution, proxies can alter what they return to users based on the recipient and the specifics of their request. They work by intercepting and masking query results on the fly, transparently substituting masked results for the user. For example if a user queries too many credit card numbers, or if a query originates from an unapproved location, the returned data might be redacted. The proxy effectively provides intelligent, dynamic masking. The proxy may be an application running on the database or an appliance deployed inline between users and data, forcing all communications through the proxy. The huge advantage of proxies is that they enable data protection without needing to alter the database – they avoid additional programming and quality assurance validation processes. This model is appropriate for PII/PHI data, when data can be managed from a central location but external users may need access. Some firms have implemented tokenization this way, but masking and redaction are more common. The principal use case is to protect data dynamically, based on user identity and the request itself.

Other Options

Many of you have used data centric security before, and use it today, so it is worth mentioning two security platforms in wide use today which don't quite fit our use cases: Data Loss Prevention (DLP) and Digital Rights Management (DRM) are forms of DCS which have each been in use for over a decade. Data Loss Prevention systems are designed to detect sensitive data and ensure data usage complies with security policy – on the network, on the desktop, and in storage repositories. Digital Rights Management embeds ownership and usage rules into the data, with security policy (primarily read and write access) enforced by the applications that use the data. DLP protects at the infrastructure layer, and DRM at the application layer. Both use encryption to protect data. Both allow users to view and edit data depending on security policies. DLP can be effectively deployed in existing IT environments, helping organizations gain control over data that is already in use. DRM typically needs to be built into applications, with security controls (e.g., encryption and ownership rights) applied to data as it is created. These platforms are designed to expose data (making it available
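To make the substitution a gateway performs on data in motion concrete, here is a deliberately simplified Python sketch that replaces credit card numbers in a record with consistent tokens preserving length and the last four digits. Production tokenization relies on a hardened token vault or format-preserving encryption, not an in-memory dictionary; this only shows the shape of the operation.

    # Simplified token substitution for PANs in a data stream (illustrative only).
    import re
    import secrets

    PAN_RE = re.compile(r"\b\d{15,16}\b")
    _vault = {}  # maps real PAN -> token; a real system persists this in a secure vault

    def tokenize_pan(pan):
        # Consistent token: random digits plus the original last four
        if pan not in _vault:
            random_part = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
            _vault[pan] = random_part + pan[-4:]
        return _vault[pan]

    def scrub(record):
        """Replace every PAN-looking number before the record leaves the trusted zone."""
        return PAN_RE.sub(lambda m: tokenize_pan(m.group(0)), record)

    print(scrub("order=1234, card=4111111111111111, amount=25.00"))

Because the token preserves format, downstream systems (the NoSQL cluster, the test database, the analytics job) keep working without modification, which is the point of both the gateway and hub-and-spoke models described above.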


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.