
Government Pipe Dreams

General Keith Alexander heads the U.S. Cyber Command and is the Director of the NSA. In prepared testimony today he said the government should set up a secure zone for itself and critical infrastructure, walled off from the rest of the Internet. “You could come up with what I would call a secure zone, a protected zone, that you want government and critical infrastructure to work in that part,” Alexander said. “At some point it’s going to be on the table. The question is how are we going to do it.” Alexander said setting up such a network would be technically straightforward, but difficult to sell to the businesses involved. Explaining the measure to the public would also be a challenge, he added.

I don’t think explaining it to the public would be too tough, but practically speaking this one is a non-starter. Even if you build it, it will only be marginally more secure than the current Internet. Here’s why:

The U.S. government currently runs its own private networks for managing classified information. For information of a certain classification, the networks and systems involved are completely segregated from the Internet. No playing Farmville on a SIPRNet-connected system. Extending this to the private sector is essentially a non-starter, at least without heavy regulation and a ton of cash.

Most of our critical infrastructure, such as power generation/transmission and financial services, also used to run on its own private networks. But – often against the advice of us security folks – due to various business pressures they’ve connected these to Internet-facing systems and created a heck of a mess. When you are allowed to check your email on the same system you use to control electricity, it’s hard not to get hacked. When you put Internet-facing web applications on top of back-end financial servers, it’s hard to keep the bad guys from stealing your cash.

Backing out of our current situation could probably only happen with onerous legislation and government funding. And even then, training the workforces of those organizations not to screw it up and reconnect everything to the Internet again would probably be an even tougher job. Gotta check that Facebook and email at work.

If they pull it off, more power to them. From a security perspective isolating the network could reduce some of our risk, but I can’t really imagine the disaster we’d have to experience before we could align public and private interests behind such a monumental change.


Incite 9/22/2010: The Place That Time Forgot

I don’t give a crap about my hair. Yeah, it’s gray. But I have it, so I guess that’s something. It grows fast and looks the same, no matter what I do to it. I went through a period maybe 10 years ago where I got my hair styled, but besides ending up a bit lighter in the wallet (both from a $45 cut and all the product they pushed on me), there wasn’t much impact. I did get to listen to some cool music and see good looking stylists wearing skimpy outfits with lots of tattoos and piercings. But at the end of the day, my hair looked the same. And the Boss seems to still like me regardless of what my hair looks like, though I found cutting it too short doesn’t go over very well.

So when I moved down to the ATL, a friend recommended I check out an old time barber shop in downtown Alpharetta. I went in and thought I had stepped into a time machine. Seems the only change to the place over the past 30 years was a new boom box to blast country music. They probably got it 15 years ago. Aside from that, it’s like time forgot this place. They give Double Bubble to the kids. The chairs are probably as old as I am. And the two barbers, Richard and Sonny, come in every day and do their job. It’s actually cool to see. The shop is open 6am-6pm Monday through Friday and 6am-2pm on Saturday. Each of them travels at least 30 minutes a day to get to the shop. They both have farms out in the country.

So that’s what these guys do. They cut hair, for the young and for the old. For the infirm, and it seems, for everyone else. They greet you with a nice hello, and also remind you to “Come back soon” when you leave. Sometimes we talk about the weather. Sometimes we talk about what projects they have going on at the farm. Sometimes we don’t talk at all. Which is fine by me, since it’s hard to hear with a clipper buzzing in my ear. When they are done trimming my mane to 3/4” on top and 1/2” on the sides, they bust out the hot shaving cream and straight razor to shave my neck. It’s a great experience.

And these guys seem happy. They aren’t striving for more. They aren’t multi-tasking. They don’t write a blog or constantly check their Twitter feed. They don’t even have a mailing list. They cut hair. If you come back, that’s great. If not, oh well. I’d love to take my boy there, but it wouldn’t go over too well. The shop we take him to has video games and movies to occupy the ADD kids for the 10 minutes they take to get their haircuts. No video games, no haircut. Such is my reality.

Sure the economy goes up and then it goes down. But everyone needs a haircut every couple weeks. Anyhow, I figure these guys will end up OK. I think Richard owns the building and the land where the shop is. It’s in the middle of old town Alpharetta, and I’m sure the developers have been chasing him for years to sell out so they can build another strip mall. So at some point, when they decide they are done cutting hair, he’ll be able to buy a new tractor (actually, probably a hundred of them) and spend all day at the farm. I hope that isn’t anytime soon. I enjoy my visits to the place that time forgot. Even the country music blaring from the old boom box…

– Mike.

Photo credits: “Rand Barber Shop II” originally uploaded by sandman

Recent Securosis Posts

Yeah, we are back to full productivity and then some. Over the next few weeks, we’ll be separating the posts relating to our research projects from the main feed. We’ll do a lot of cross-linking, so you’ll know what we are working on and be able to follow the projects interesting to you, but we think over 20 technically deep posts is probably a bit much for a week. It’s a lot for me, and following all this stuff is my job. We also want to send thanks to IT Knowledge Exchange, who listed our little blog here as one of their 10 Favorite Information Security Blogs. We’re in some pretty good company, except that Amrit guy. Does he even still have a blog?

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  • New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution
  • FireStarter: It’s Time to Talk about APT
  • Friday Summary: September 17, 2010
  • White Paper Released: Data Encryption 101 for PCI
  • DLP Selection Process: Infrastructure Integration Requirements; Protection Requirements; Defining the Content
  • Monitoring up the Stack: Threats; Introduction
  • Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 1; Advanced Features, Part 2; To UTM or Not to UTM?; Selection Process

NSO Quant Posts

  • Manage Metrics – Signature Management
  • Manage Metrics – Document Policies & Rules
  • Manage Metrics – Define/Update Policies & Rules
  • Manage Metrics – Policy Review
  • Monitor Metrics – Validate and Escalate
  • Monitor Metrics – Analyze
  • Monitor Metrics – Collect and Store

LiquidMatrix Security Briefing

  • September 20
  • September 21

Incite 4 U

  • What’s my risk again? – Interesting comments from Intel’s CISO at the recent Forrester security conference regarding risk. Or more to the point, the misrepresentation of risk either towards the positive or negative. I figured he’d be pushing some ePO-based risk dashboard or something, but it wasn’t that at all. He talked about psychology and economics, and it sure sounded like he was channeling Rich, at least from the coverage. Our pal Alex Hutton loves to pontificate about the need to objectively quantify risk and we’ve certainly had our discussions (yes, I’m being kind) about how effectively you can model risk. But the point is not necessarily to get a number, but


Monitoring up the Stack: File Integrity Monitoring

We kick off our discussion of additional monitoring technologies with a high-level overview of file integrity monitoring. As the name implies, file integrity monitoring detects changes to files – whether text, configuration data, programs, code libraries, critical system files, or even Windows registries. Files are a common medium for delivering viruses and malware, and detecting changes to key files can provide an indication of machine compromise.

File integrity monitoring works by analyzing changes to individual files. Any time a file is changed, added, or deleted, it’s compared against a set of policies that govern file use, as well as signatures that indicate intrusion. Policies can be as simple as a list of operations on a specific file that are not allowed, or can include more specific comparisons of the contents and the user who made the change. When a policy is violated an alert is generated.

Changes are detected by examining file attributes: specifically name, date of creation, time last modified, ownership, byte count, a hash to detect tampering, permissions, and type. Most file integrity monitors can also ‘diff’ the contents of the file, comparing before and after contents to identify exactly what changed (for text-based files, anyway). All these comparisons are against a stored reference set of attributes – a baseline – that designates what state the file should be in; optionally the file contents themselves are stored as part of the baseline, and policies specify what to do when a change is detected. File integrity monitoring can be periodic – at intervals from minutes to every few days. Some solutions offer real-time threat detection that performs the inspection as the files are accessed. The monitoring can be performed remotely – accessing the system with user credentials and instructing the operating system to periodically collect relevant information – or via an agent installed on the target system, which performs the data collection locally and returns data upstream to the monitoring server.

As you can imagine, even a small company changes files a lot, so there is a lot to look at. And there are lots of files on lots of machines – as in tens of thousands. Vendors of integrity monitoring products provide a basic list of critical files and policies, but you need to configure the monitoring service to protect the rest of your environment. Keep in mind that some attacks are not fully defined by a policy, and verification/investigation of suspicious activity must be performed manually. Administrators need to balance performance against coverage, and policy precision against adaptability. Specify too many policies and track too many files, and the monitoring software consumes tremendous resources. File modification policies designed for maximum coverage generate many ‘false-positive’ alerts that must be manually reviewed. Rules must balance between catching specific attacks and detecting broader classes of threats.

These challenges are mitigated in several ways. First, monitoring is limited to just those files that contain sensitive information or are critical to the operation of the system or application. Second, the policies have different criticality, so that changes to key infrastructure or matches against known attack signatures get the highest priority. The vendor supplies rules for known threats and to cover compliance mandates such as PCI-DSS. Suspicious events that indicate an attack or policy violation are the next priority.
Finally, permitted changes to critical files are logged for manual review at a lower priority, to help reduce the administrative burden.

File integrity monitoring has been around since the mid-90s, and has proven very effective for detection of malware and system compromise. Changes to Windows registry files and open source libraries are common hacks, and very difficult to detect manually. While file monitoring does not help with many of the web and browser attacks that use injection or alter programs in memory, it does detect many types of persistent threats, and is therefore a very logical extension of existing monitoring infrastructure.
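To make the baseline-and-compare mechanics concrete, here is a minimal sketch in Python. The watch list, baseline location, and alerting are illustrative assumptions only; commercial products add policy engines, attack signatures, central reporting, and tamper protection for the baseline itself.

```python
import hashlib
import json
import os

BASELINE_PATH = "baseline.json"  # hypothetical location for the stored reference set

def snapshot(path):
    """Collect the attributes a monitor typically compares: size, timestamps,
    ownership, permissions, and a content hash to detect tampering."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "size": st.st_size,
        "mtime": st.st_mtime,
        "uid": st.st_uid,
        "mode": st.st_mode,
        "sha256": digest,
    }

def build_baseline(paths):
    """Record the 'known good' state of each watched file."""
    baseline = {p: snapshot(p) for p in paths}
    with open(BASELINE_PATH, "w") as f:
        json.dump(baseline, f, indent=2)

def check(paths):
    """Compare current state against the baseline and report differences."""
    with open(BASELINE_PATH) as f:
        baseline = json.load(f)
    for p in paths:
        if p not in baseline:
            print(f"ALERT: {p} is new (not in baseline)")
        elif not os.path.exists(p):
            print(f"ALERT: {p} was deleted")
        else:
            changed = [k for k, v in snapshot(p).items() if baseline[p][k] != v]
            if changed:
                print(f"ALERT: {p} changed: {', '.join(changed)}")

if __name__ == "__main__":
    watch_list = ["/etc/passwd", "/etc/hosts"]  # illustrative watch list only
    if os.path.exists(BASELINE_PATH):
        check(watch_list)           # periodic run: compare against the baseline
    else:
        build_baseline(watch_list)  # first run: establish the baseline
```

Even this toy version shows where the operational pain comes from: every file you add to the watch list adds comparisons, and every legitimate change (a patch, a config edit) shows up as an alert someone has to review.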


NSO Quant: Manage Process Metrics, Part 1

We realized last week that we may have hit the saturation point for activity on the blog. Right now we have three ongoing blog series and NSO Quant. All our series post a few times a week, and Quant can add up to 10 posts on its own. It’s too much for us to keep up with, so I can’t even imagine someone who actually has to do something with their days. So we have moved the Quant posts out of the main blog feed. Every other day, I’ll do a quick post linking to any activity we’ve had in the project, which is rapidly coming to a close.

On Monday we posted the first 3 metrics posts for the Manage process. This is the part where we define the policies and rules to run our firewalls and IDS/IPS devices. Again, this project is driven by feedback from the community. We appreciate your participation and hope you’ll check out the metrics posts and tell us whether we are on target. So here are the first three posts:

  • NSO Quant: Manage Metrics – Policy Review
  • NSO Quant: Manage Metrics – Define/Update Policies and Rules
  • NSO Quant: Manage Metrics – Document Policies and Rules

Over the rest of the day, we’ll hit metrics for the signature management processes (for IDS/IPS), and then move into the operational phases of managing network security devices.


New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution

Around the beginning of the year Adrian and I released our big database encryption paper: Understanding and Selecting a Database Encryption or Tokenization Solution. We realized pretty quickly there was no way we could do justice to tokenization in that paper, so we are now excited to release Understanding and Selecting a Tokenization Solution.

In this paper we dig in and cover all the major ins and outs of tokenization: how it works, why you might want to use it, architectural and integration options, and key selection criteria. We also include descriptions of three major use cases… with pretty architectural diagrams. This was a fun project – the more we dug in, the more we learned about the inner workings of these systems and how they affect customers. We were shocked that such a seemingly simple technology requires all sorts of design tradeoffs, and at the different approaches taken by each vendor.

In support of this paper we are also giving a webcast with the sponsor/licensee, RSA. The webcast is September 28th at 1pm ET, and you can register for it. The content was developed independently of sponsorship, using our Totally Transparent Research process. You can download the PDF directly here, and the paper is also available (without registration) at RSA. Since they were so nice as to help feed my kid without mucking with the content, please pay them a visit to learn more about their offerings.
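For a feel of the core mechanism before you read the paper, here is a minimal sketch of vault-based tokenization with random, format-preserving tokens. This is an illustration under our own assumptions – not any particular vendor’s design. Real products encrypt the vault, enforce strict access control on detokenization, and handle collisions and scale very differently.

```python
import secrets

class TokenVault:
    """Maps tokens to card numbers (PANs); the token itself reveals nothing.
    Real vaults encrypt this mapping and tightly restrict detokenization."""

    def __init__(self):
        self._token_to_pan = {}
        self._pan_to_token = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]  # consistent token per PAN
        while True:
            # Random digits for all but the last four, preserving the length
            # and format so existing applications keep working unmodified.
            body = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
            token = body + pan[-4:]
            if token != pan and token not in self._token_to_pan:
                break
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]  # a tightly restricted operation

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)  # looks like a PAN, but all digits except the last four are random
assert vault.detokenize(token) == "4111111111111111"
```

The design point worth noticing: a random token has no mathematical relationship to the original number, so the vault mapping is the only way back – which is what makes tokens far safer to scatter around application databases than encrypted values with a recoverable key.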


FireStarter: It’s Time to Talk about APT

There’s a lot of hype in the press (and vendor pitches) about APT – the Advanced Persistent Threat. Very little of it is informed, and many parties within the security industry are quickly trying to co-opt the term in order to advance various personal and corporate agendas. In the process they’ve bent, manipulated, and largely tarnished what had been a specific description of a class of attacker. I’ve generally tried to limit how much I talk about it – mostly restricting myself to the occasional Summary/Incite comment, this post when APT first hit the hype stage, and a short post with some high-level controls. I self-censor because I recognize that the information I have on APT all comes either second-hand, or from sources who are severely restricted in what they can share with me. Why? Because I don’t have a security clearance.

There are groups, primarily within the government and its contractors, with extensive knowledge of APT methods and activities. A lot of it is within the DoD, but also with some law enforcement agencies. These guys seem to know exactly what’s going on, including many of the businesses within private industry being attacked, the technical exploit details, what information is being stolen, and how it’s exfiltrated from organizations. All of which seems to be classified.

I’ve had two calls over the last couple weeks that illustrate this. In the first, a large organization was asking me for advice on some data protection technologies. Within about 2 minutes I said, “If you are responding to APT we need to move the conversation in X direction.” Which is exactly where we went, and without going into details, they had essentially been told they were compromised and received a list, from “law enforcement”, of what they needed to protect. The second conversation was with someone involved in APT analysis informing me of a new technique that technically wasn’t classified… yet. Needless to say the information wasn’t being shared outside the classified community (not even with the product vendors involved), and even the bit shared with me was extremely generic.

So we have a situation where many of the targets of these attacks (private enterprises) are not provided detailed information by those with the most knowledge of the attack actors, techniques, and incidents. This is an untenable situation – the fundamental failure to share information increases the risk to every organization without sufficient clearances to work directly with classified material. I’ve been told that in some cases larger organizations do get a little information pertinent to them, but the majority of activity is still classified, and therefore not accessible to the organizations that need it. While it’s reasonable to keep details of specific attacks against targets quiet, we need much more public discussion of the attack techniques and possible defenses. Where’s all the “public/private” partnership goodwill we always hear about in political speeches and watered-down policy and strategy documents?

From what I can tell there are only two well-informed sources saying anything about APT – Mandiant (who investigates and responds to many incidents, and I believe still has clearances), and Richard Bejtlich (who, you will notice, tends to mostly restrict himself to comments on others’ posts, probably due to his own corporate/government restrictions). This secrecy isn’t good for the industry, and, in the end, it isn’t good for the government.
It doesn’t allow the targets (many of you) to make informed risk decisions, because you don’t have the full picture of what’s really happening.

I have some ideas on how those in the know can better share information with those who need to know, but for this FireStarter I’d like to get your opinions. Keep in mind that we should try to focus on practical suggestions that account for the nuances of the defense/intelligence culture, and be realistic about their restrictions. As much as I’d like the feds to go all New School and make breach details and APT techniques public, I suspect something more moderate – perhaps covering generic attack methods and potential defenses – is more viable.

But make no mistake – as much hype as there is around APT, there are real attacks occurring daily, against targets I’ve been told “would surprise you”. And as much as I wish I knew more, the truth is that those of you working for potential targets need the information, not just some blowhard analysts.

UPDATE: Richard Bejtlich also highly recommends Mike Cloppert as a good source on this topic.


Monitoring up the Stack: Threats

In our introductory post we discussed how customers are looking to derive additional value from their SIEM and log management investments by looking at additional data types to climb the stack. Part of the dissatisfaction we hear from customers is the challenge of turning collected data into actionable information for operational efficiency and compliance requirements. This challenge is compounded by attackers’ clear focus on application-oriented attacks. For the most part, our detection only pays attention to the network and servers, while the attackers are flying above that. It’s kind of like repeatedly missing the bad guys because they are flying at 45,000 feet, but you cannot get above 20,000 feet. You aren’t looking where the attacks are actually happening, which obviously presents problems. At its core SIEM can fly at 45,000 feet and monitor application components looking for attacks, but it will take work to get there. And given the evolution of the attack space, we don’t believe keeping monitoring focused on infrastructure is an option, even over the middle term.

What kind of application threats are we talking about? It’s not brain surgery, and you’ve seen all of these examples before, but they warrant another mention because we continue to miss opportunities to detect these attacks. For example:

  • Email: You click a link in a ‘joke-of-the-day’ email your spouse forwarded, which installs malware on your system and then tries to infect every machine on your corporate network. A number of devices get compromised and become latent zombies waiting to blast your network and others.
  • Databases: Your database vendor offers a new data replication feature to address failover requirements for your financial applications, but it’s installed with public credentials. Any hacker can now replicate your database, without logging in, just by issuing a database command. Total awesomeness!
  • Web Browsers: Your marketing team launches a new campaign, but the third party content provider site was hacked. As your customers visit your site, they are unknowingly attacked using cross-site request forgery and then download malware. The customers’ credentials and browsing history leak to Eastern Europe, and fraudulent transactions get submitted from customer machines without their knowledge. Yes, that’s a happy day for your customers – and also for you, since you cannot just blame the third party content provider. It’s your problem.
  • Web Applications: Your web application development team, in a hurry to launch a new feature on time, fails to validate some incoming parameters. Hackers exploit the database through a common SQL injection vulnerability to add new administrative users, copy sensitive data, and alter database configuration – all through normal SQL queries. As simple as this attack is, a typical SIEM won’t catch it, because all the requests look normal and are authorized (see the sketch after this list). It’s an application failure that causes a security failure.
  • Ad-hoc applications: The video game your kid installed on your laptop has a keystroke logger that records your activity and periodically sends an encrypted copy to the hackers who bought the exploit. They replay your last session, logging into your corporate VPN remotely to extract files and data under your credentials. So it’s fun when the corporate investigators show up in your office to ask why you sent the formula for your company’s most important product to China.
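To illustrate why the SQL injection example slips past network-focused tools – and what application-layer inspection adds – here is a minimal sketch of inspecting the SQL statements an application issues. The patterns are illustrative assumptions only; real database activity monitoring uses full SQL parsing and behavioral baselines rather than a handful of regexes.

```python
import re

# Illustrative patterns only; production tools parse SQL and baseline behavior.
SUSPICIOUS_PATTERNS = [
    r"('|\")\s*or\s+1\s*=\s*1",  # classic tautology ('' OR 1=1)
    r";\s*drop\s+table",          # piggybacked destructive statement
    r"union\s+select",            # data extraction via UNION
]

def inspect(statement: str) -> list:
    """Return the suspicious patterns found in a single SQL statement."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, statement, re.IGNORECASE)]

# Pretend these statements came from the database's query log.
query_log = [
    "SELECT * FROM orders WHERE id = 42",
    "SELECT * FROM users WHERE name = '' OR 1=1 --",
    "SELECT name FROM products UNION SELECT card_number FROM payments",
]

for stmt in query_log:
    hits = inspect(stmt)
    if hits:
        print(f"ALERT: suspicious statement: {stmt!r} (matched {len(hits)} pattern(s))")
```

Notice that a firewall or IPS never sees these statements at all – they ride inside an authorized application connection, which is exactly why monitoring has to move up the stack to catch them.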
The power of distributed multi-app systems to deliver services quickly and inexpensively cannot be denied, which means we security folks will not be able to stop the trend – no matter what the risk. But we have both a capability and a responsibility to ensure these services are delivered as securely as possible, and to watch for bad behavior. Many of the events we discussed are not logged by traditional network security tools, and to casual inspection the transactions look legitimate. Logic flaws, architectural flaws, and misused privileges look like normal operation to a router or an IPS. Browser exploits and SQL injection are difficult to detect without understanding the application functionality. More problematic is that damage from these exploits occurs quickly, requiring a shift from after-the-fact forensic analysis to real-time monitoring to give you a chance to interrupt the attack. Yes, we’re really reiterating that application threats are likely to get “under the radar” and past network-level tools. Customers complain that the SIEM techniques they have are too slow to keep up with remote multi-stage attacks, code substitution, etc.; ill-suited to stopping SQL injection, rogue applications, data leakage, etc.; or simply ineffective against cross-site scripting, hijacked privileges, etc. We keep hearing that current tools have no chance against these new attacks.

We believe the answer involves broader monitoring capabilities at the application layer, and related technologies. But reality dictates that the tools and techniques used for application monitoring do not always fit SIEM architectures. Unfortunately this means some of the existing technologies you may have – and more importantly the way you’ve deployed them – may not fit into this new reality. We believe all organizations need to continue broadening how they monitor their IT resources, and incorporate technologies designed to look at the application layer, providing detection of application attacks in near real time. But to be clear, adoption is still very early and the tools are largely immature. The following is an overview of the technologies designed to monitor at the application layer, and these are what we will focus on in this series:

  • File Integrity Monitoring: Real-time verification of applications, libraries, and patches on a given platform. It’s designed to detect replacement of files and executables, code injection, and the introduction of new and unapproved applications.
  • Identity Monitoring: Designed to identify users and user activity across multiple applications, or when using generic group or service accounts. Employs a combination of location, credential, activity, and data comparisons to ‘de-anonymize’ user identity.
  • Database Monitoring: Designed to detect abnormal operation, statements, or user behavior, including both end users and database administrators. Monitoring systems review database activity for SQL injection, code injection, escalation of privilege, data theft, account hijacking, and misuse.
  • Application Monitoring: Protects applications, web applications, and web-based clients from man-in-the-middle attacks, cross site scripting (XSS), cross site request forgery (CSRF), SQL


Understanding and Selecting an Enterprise Firewall: Selection Process

Now that we’ve been through the drivers for evolved, application-aware firewalls, and a lot of the technology enabling them, how does the selection process need to evolve to keep pace? As with most of our research at Securosis, we favor mapping out a very detailed process, and leaving you to decide which steps make sense in your situation. So we don’t expect every organization to go through every step in this process – figure out which are appropriate for your organization and use those. To be clear, buying an enterprise firewall usually involves calling up your reseller and getting the paperwork for the renewal. But given that these firewalls imply new application policies and perhaps a different deployment architecture, some work must be done during the selection process to get things right.

Define Needs

The key here is to understand which applications you want to control, and how much you want to consider collapsing functionality (IDS/IPS, web filtering, UTM) into the enterprise firewall. A few steps to consider here are:

  • Create an oversight committee: We hate the term ‘committee’ too, but the reality is that an application-aware firewall will impact activities across several groups. Clearly this is not just about the security team, but also the network team and the application teams – at minimum, you will need to profile their applications. So it’s best to get someone from each of these teams (to whatever degree they exist in your organization) on the committee. Ensure they understand your objectives for the new enterprise firewall, and make sure it’s clear how their operations will change.
  • Define the applications to control: Which applications do you need to control? You may not actually know this until you install one of these devices and see what visibility they provide into applications traversing the firewall. We’ll discuss phasing in your deployment, but you need to understand what degree of granularity you need from a blocking standpoint, as that will drive some aspects of selection.
  • Determine management requirements: The deployment scenario will drive these. Do you need the console to manage the policies? To generate reports? For dashboards? The degree to which you need management help (if you have a third party tool, the answer should be: not much) will define a set of management requirements.
  • Product versus managed service: Do you plan to use a managed service for either managing or monitoring the enterprise firewall? Have you selected a provider? The provider might define your short list before you even start.

By the end of this phase you should have identified key stakeholders, convened a selection team, prioritized the applications to control, and determined management requirements.

Formalize Requirements

This phase can be performed by a smaller team working under the mandate of the selection committee. Here the generic needs determined in phase 1 are translated into specific technical features, and any additional requirements are considered. You can always refine these requirements as you proceed through the selection process and get a better feel for how the products work (and how effective and flexible they are at blocking applications). At the conclusion of this stage you will develop a formal RFI (Request For Information) to release to vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.
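To make “degree of granularity” concrete as you translate application control needs into formal requirements, here is a minimal sketch of how application-aware rules differ from port-based ones. The application names, functions, and policy shape are our own illustrative assumptions, not any vendor’s rule syntax.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    application: str  # what the traffic was classified as ("*" = any)
    function: str     # sub-function within the application ("*" = any)
    action: str       # "allow" or "block"

# A port-based firewall sees all of this as TCP/443. Application awareness
# lets a rule distinguish browsing from file transfer within one application.
POLICY = [
    Rule("webmail",        "read",        "allow"),
    Rule("webmail",        "attachment",  "block"),
    Rule("social-network", "browse",      "allow"),
    Rule("social-network", "file-upload", "block"),
]

def evaluate(application: str, function: str) -> str:
    """Return the action for the first matching rule; default-deny otherwise."""
    for rule in POLICY:
        if rule.application in (application, "*") and rule.function in (function, "*"):
            return rule.action
    return "block"

print(evaluate("social-network", "browse"))       # allow
print(evaluate("social-network", "file-upload"))  # block
print(evaluate("unknown-app", "anything"))        # block (default)
```

The point of the sketch: a port-based rule could only allow or block all of TCP/443, while application awareness lets a requirement say “allow browsing but block file uploads” for the same application – which is exactly the kind of distinction your RFP needs to capture.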
Evaluate Products

Increasingly we see firewall vendors talking about application awareness, new architectures, and very similar feature sets. The following steps should minimize your risk and help you feel confident in your final decision:

  • Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading enterprise firewall vendors directly. In reality virtually all the firewall players sell through the security channel, so it’s likely you will end up going through a VAR.
  • Define the short list: Before bringing anyone in, match any materials from the vendor or other sources to your RFI and draft RFP. Your goal is to build a short list of 3 products which can satisfy most of your needs. You should also use outside research sources and product comparisons. Understand that you’ll likely need to compromise at some point in the process, as it’s unlikely any vendor can meet every requirement.
  • Dog and Pony Show: Instead of generic presentations and demonstrations, ask the vendors to walk you through how they protect the specific applications you are worried about. This is critical, because the vendors are very good at showing cool eye candy and presenting a long list of generic supported applications. Don’t expect a full response to your draft RFP – these meetings are to help you better understand how each vendor can solve your specific use cases, and to finalize your requirements.
  • Finalize and issue your RFP: At this point you should completely understand your specific requirements, so issue a final formal RFP.
  • Assess RFP responses and start proof of concept (PoC): Review the RFP results and drop anyone who doesn’t meet your hard requirements. Then bring in any remaining products for in-house testing. Given that it’s not advisable to pop holes in your perimeter while learning how to manage these devices, we suggest a layered approach:
    • Test ingress: First test your ingress connection by installing the new firewall in front of the existing perimeter gateway. Migrate your policies over, let the box run for a little while, and see what it’s blocking and what it’s not.
    • Test egress: Then move the firewall to the other side of the perimeter gateway, so it’s in position to do egress filtering on all your traffic. We suggest you monitor the traffic for a while to understand what is happening, and then define egress filtering policies.

Understand that you need to devote resources to each PoC, and testing ingress separately from egress adds time to the process. But it’s not feasible to leave the perimeter unprotected while you figure out what works, so this approach gives you that protection along with the ability to run the devices in pseudo-production mode.

Selection and Deployment

  • Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and


Understanding and Selecting an Enterprise Firewall: to UTM or Not to UTM?

Given how much time we’ve spent discussing application awareness, and how these new capabilities pretty much stomp all over existing security products like IDS/IPS and web filters, does that mean standalone network security devices go away? Should you just quietly accept that unified threat management (UTM) is the way to go, because the enterprise firewall provides multiple functions? Not exactly.

First let’s talk about the rise of UTM, even in the enterprise. The drive towards UTM started with smaller businesses, where using a single device for firewall, IDS/IPS, anti-spam, web filtering, gateway AV, and other functions reduced complexity and cost – and thus made a lot of sense. But over time, as device performance increased, it became feasible even for enterprises to consolidate functions into a single device. This doesn’t mean many enterprises tried this, but they had the option.

So why hasn’t the large enterprise embraced UTM? It comes down to predictable factors we see impacting enterprise technology adoption in general:

  • Branding: UTM was perceived as an SMB technology, so many enterprise snobs didn’t want anything to do with it. Why pay $2,500 for a box when you can pay $50,000 to make a statement about being one of the big boys? Of course, notwithstanding the category name, every vendor brought a multi-function security gateway to market. They realized ‘UTM’ could be a liability, so they use different names for people who don’t want to use the same gear as the great unwashed.
  • Performance Perception: Again, given the SMB heritage of UTM, enterprise network security players could easily paint UTM as low-performance, and customers believed them. To be clear, the UTM-centric vendors didn’t help here, pushing their boxes into use cases where they couldn’t be successful, demonstrating they weren’t always suitable. If you try to do high-speed firewall, IDS/IPS, and anti-spam with thousands of rules, all in the same box, it’s not going to work well. Hell, even standalone devices use load balancing techniques to manage high volumes, but the perception among enterprise customers was that UTM couldn’t scale. And we all know that perception is reality.
  • Single Point of Failure: If the box goes down you are owned, right? Well, yes – or completely dead in the water – you might get to choose which. Many enterprises remain unwilling to put all their eggs in one basket, even with high availability configurations and the like. As fans of layered security we don’t blame folks for thinking this way, but understand that you can deploy a set of multi-function gateways to address the issue. But when you are looking for excuses not to do something, you can always find at least one.
  • Specialization: The complexity of large enterprise environments demands lots of resources, and these resources tend to be specialized in the operation of one specific device. So you’ll have a firewall jockey, an IDS/IPS guru, and an anti-spam queen. If you have all those capabilities in a single box, what does that do for the job security of all three? To be clear, every UTM device supports role-based management so administrators can control only the functions in their area, but it’s easier for security folks to justify their existence if they have a dedicated box/function to manage. Yes, this boils down to politics, but we all know political machinations have killed more than a handful of emerging technologies.
  • Pricing: There is no reason you can’t get a multi-function security device and use it as a standalone device. You can get a UTM and run it like a firewall. Really. But to date, the enterprise pricing of these UTM devices has made that unattractive for most organizations. Again, a clear case of vendors not helping themselves. So we’d like to see more of a smorgasbord pricing model, where you buy just the modules you need. Yes, some of the vendors (especially ones selling software on commodity hardware) are there. But their inclination is to nickel and dime the customer, charging too much for each module, so enterprises start to lose the idea that multi-function devices will actually save money.

Ultimately these factors will not stop the multi-function security device juggernaut from continuing to collapse more functions into the perimeter gateway. Vendors changed the branding to avoid calling it UTM – even though it is. The devices have increased performance with new chips and updated architectures. And even the political stuff works out over time, due to economic pressure to increase operational efficiency. So the conclusion we draw is that consolidation of network security functions is inevitable, even in the large enterprise.

But we aren’t religious about UTM vs. standalone devices. All we care about is seeing the right set of security controls implemented in the most effective way to protect critical information. We don’t expect standalone IDS/IPS devices to go away any time soon. And much of the content filtering (email and web) is moving to cloud-based services, which we believe is a very positive trend. These new capabilities of the enterprise firewall give us more flexibility.

That’s right, we still believe (strongly) in defense in depth. So having an IDS/IPS sitting behind an application-aware firewall isn’t a bad thing. Attacks change every day, and sometimes it’s best to look for a specific issue. Let’s use a battle analogy – if we have a sniper (in the form of IDS/IPS) sitting behind the moat (firewall) looking for a certain individual (the new attack), there is nothing wrong with that. If you want to provision some perimeter security in the cloud and have a cleaner stream of traffic hitting your network, that’s all good. If you want to maintain separate devices at HQ and larger regional locations, while integrating functions in small offices and branches, or maybe even running network security in a virtual machine, you can.

And that’s really the point. For a long time, we security folks have been building security architectures based on what the devices could do, not what’s appropriate (or necessary) to protect information assets. Having the ability to provision the security you need where you need


Friday Summary: September 17, 2010

Reality has a funny way of intruding into the best laid plans. Some of you might have noticed I haven’t been writing that much for the past couple weeks, and have been pretty much ignoring Twitter and the rest of the social media world. It seems my wife had a baby, and since this isn’t my personal blog anymore I was able to take some time off and focus on the family. Needless to say, my “paternity leave” didn’t last nearly as long as I planned, thanks to the work piling up. And it explains why this may be showing up in your inbox on Saturday, for those of you getting the email version.

Which brings me to my next point, one we could use a little feedback on. If you look at the blog this week we hit about 20 posts… many of them in-depth research to support our various projects. I’m starting to wonder if we are overwhelming people a little. As the blogging community has declined we spend less time on informal commentary and inter-blog discussions, and more time just banging out research. As a ‘research’ company, it isn’t like we won’t publish the harder stuff, but I want to make sure we aren’t losing people in the process – like that boring professor everyone really respects, but who has to slam a book on the desk at the end of class to let everyone know they can go.

Finally, this week it was cool to ship out the iPad for the winning participant in the 2010 Data Security Survey. When I contacted him he asked, “Is this some phishing trick?”, but I managed to still get his mailing address and phone number after a few emails. Which is cool, because now I have a new bank account with better credit, and it looks like his is good enough for the mortgage application. (But seriously, he wanted one & didn’t have one, and it was really nice to send it to someone who appreciated it.)

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • SIEM In The Spotlight with ArcSight Acquisition.
  • HP to buy ArcSight for $1.5 billion.

Favorite Securosis Posts

  • Adrian Lane: NSO Quant: Monitor Metrics – Analyze. Definitely correlate. Ten minutes to Wapner.
  • Mike Rothman: Monitoring up the Stack: Introduction. We’re starting another research project, pushing forward on our Monitor Everything philosophy. Keep an eye on this one – it’s going to be great.
  • Rich: HP Sets its Arcsights on Security. Mike’s analysis of the HP/ArcSight deal, which tells you whether and why this matters.

Other Securosis Posts

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls.
  • Incite 9/15/2010: Up, down, up, down, Repeat.
  • FireStarter: Automating Secure Software Development.
  • Understanding and Selecting an Enterprise Firewall: Deployment Considerations; Management; Advanced Features, Part 1; Advanced Features, Part 2; To UTM or Not to UTM?
  • DLP Selection Process: Step 1; Defining the Content; Protection Requirements; Infrastructure Integration Requirements.

Favorite Outside Posts

  • Pepper: DRG SSH Username and Password Authentication Tag Clouds. Nice rendering of human nature (you can call it laziness or stupidity, as you prefer).
  • Adrian Lane: Gift Card FAIL. Gift cards seem designed to be scammed. Does the bank ever lose, or only merchants? Something to think about.
  • Mike Rothman: Evil WiFi – Captive Portal Edition. Ax0n provides very detailed instructions on building your own Evil WiFi kit. For research purposes, of course…
  • David Mortman: Security Planning – who watches the watchers? It’s almost but not quite Banksy.
  • Rich: Want to know if your app (especially Adobe Reader) is using unsafe functions? Errata has an app for that.

Project Quant Posts

  • NSO Quant: Monitor Metrics – Validate and Escalate.
  • NSO Quant: Monitor Metrics – Analyze.
  • NSO Quant: Monitor Metrics – Collect and Store.
  • NSO Quant: Monitor Metrics – Define Policies.
  • NSO Quant: Monitor Metrics – Enumerate and Scope.

Research Reports and Presentations

  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.

Top News and Posts

  • Flash Flaw Puts Android at Risk.
  • Web Hacking Incident Database updated.
  • HDCP Encryption Supposedly Hacked. It’s not like you can’t reverse engineer the set top box, but the details on this will be interesting.
  • Another Adobe Flash zero day under attack.
  • Old-school worm making the rounds. How nostalgic.
  • Martin: What skillz should a geek kid learn?

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Troy, in response to Tokenization Will Become the Dominant Payment Transaction Architecture:

Interesting discussion. As I read the article I was also interested in the ways in which a token could be used as a ‘proxy’ for the PAN in such a system – the necessity of having the actual card number for the initial purchase seems to assuage most of that concern. Another aspect of this method that I have not seen mentioned here: if the Tokens in fact conform to the format of true PANs, won’t a DLP scan for content recognition typically ‘discover’ the Tokens as potential PANs? How would the implementing organization reliably prove the distinction, or would they simply rest on the assumption that as a matter of design any data lying around that looks like a credit card number must be a Token? I’m not sure that would cut the mustard with a PCI auditor. Seems like this could be a bit of a sticky wicket still?

Troy – in this case you would use database fingerprinting/exact data matching to only look for credit card numbers in your database, or to exclude the tokens. Great question!
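For readers unfamiliar with the technique in that answer, here is a minimal sketch of exact data matching, under our own illustrative assumptions: the DLP scanner holds fingerprints (hashes) of the real card numbers rather than the numbers themselves, so anything PAN-shaped that doesn’t match the set – such as a format-preserving token – can be treated differently.

```python
import hashlib
import re

def fingerprint(pan: str) -> str:
    """Hash a card number so the scanner never stores the number itself."""
    return hashlib.sha256(pan.encode()).hexdigest()

# Fingerprints exported from the protected database of real card numbers.
REAL_PAN_FINGERPRINTS = {fingerprint("4111111111111111")}

CANDIDATE = re.compile(r"\b\d{13,16}\b")  # anything shaped like a card number

def scan(text: str):
    for match in CANDIDATE.findall(text):
        if fingerprint(match) in REAL_PAN_FINGERPRINTS:
            print(f"ALERT: real card number found (ending {match[-4:]})")
        else:
            print(f"OK: PAN-shaped value ending {match[-4:]} not in data set (token?)")

scan("order 1: 4111111111111111, order 2: 9384756102831111")
```

A plain regex policy would flag both values; matching candidates against the fingerprint set is what lets the scanner distinguish real card data from format-preserving tokens, which is the distinction Troy’s hypothetical PCI auditor would want demonstrated.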


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.