Upcoming Webinar: Selecting SIEM

Tuesday, September 21st, at 11am PST / 2pm EST, I will be presenting a webinar: “Keys to Selecting SIEM and Log Management”, hosted by NitroSecurity. I’ll cover the basics of SIEM, including data collection and deployment, then dig into use cases, enrichment, data management, forensics, and advanced features. You can sign up for the webinar here.

SIEM and Log Management platforms have been around for a while, so I am not going to spend much time on background, but instead steer more towards current trends and issues. If I gloss over any areas you are especially interested in, we will have 15 minutes for Q&A. You can send questions ahead of time to info ‘at’ securosis dot com, and I will try to address them within the slides. Or you can submit a question through the WebEx chat facility during the presentation, and the host will work it into the discussion.


Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 2

After digging into application awareness features in Part 1, let’s talk about non-application capabilities. These new functions are really about dealing with today’s attacks. Historically, managing ports and protocols has sufficed to keep the bad guys outside the perimeter, but with today’s bumper crop of zombies and bots, the old ways don’t cut it any more.

Bot Detection

As law enforcement got much better at tracking attackers, the bad guys adapted by hiding behind armies of compromised machines. Better known as zombies or bots, these devices (nominally controlled by consumers) send spam, do reconnaissance, and launch other attacks. Due to their sophisticated command and control structures, it’s very difficult to map out these bot networks, and attacks can be launched from anywhere at any time. So how do we deal with this new kind of attacker on the enterprise firewall?

Reputation: Reputation analysis was originally created to help fight spam, and is rapidly being adopted in the broader network security context. We know some proportion of the devices out there are doing bad things, and we know many of those IP addresses. Yes, they are likely compromised devices (as opposed to machines owned by bad folks specifically for nefarious purposes), but regardless, they are doing bad things. You can check a reputation service in real time and either block or take other actions on traffic originating from recognized bad actors. This is primarily a blacklist, though some companies track ‘good’ IPs as well, which allows them to take a cautious stance on devices not known to be either good or bad.

Traffic Analysis: Another technique we are seeing on firewalls is the addition of traffic analysis. Network behavioral analysis didn’t really make it as a standalone capability, but tracking network flows across the firewall (with origin, destination, and protocol information) allows you to build a baseline of acceptable traffic patterns and highlight abnormal activity. You can also set alerts on specific traffic patterns associated with command and control (bot) communications, and so use such a firewall as an IDS/IPS.

Are these two capabilities critical right now? Given the prevalence of other mechanisms to detect these attacks – such as flow analysis through SIEM and pattern matching via network IDS – this is a nice-to-have capability. But we expect a lot of these capabilities to centralize on application aware firewalls, positioning these devices as the perimeter security gateway. As such, we expect these capabilities to become more prevalent over the next 2 years, and in the process make the bot detection specialists acquisition targets.

Content Inspection

It’s funny, but lots of vendors are using the term ‘DLP’ to describe how they analyze content within the firewall. I know Rich loves that, and to be clear, firewall vendors are not performing Data Leak Prevention. Not the way we define it, anyway. At best, it’s content analysis a bit more sophisticated than regular expression scanning. There are no capabilities to protect data at rest or in use, and their algorithms for deep content analysis are immature when they exist at all. So we are pretty skeptical about the level of real content inspection you can get from a firewall. If you are just looking to make sure social security numbers or account IDs don’t leave the perimeter through email or web traffic, a sophisticated firewall can do that. But don’t expect to protect your intellectual property with sophisticated analysis algorithms.
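To make the traffic analysis capability described above more concrete, here is a minimal sketch of flow baselining and anomaly flagging. It assumes flow records of the form (source, destination, protocol, bytes); the record layout, thresholds, and addresses are illustrative only, not any vendor's implementation.

```python
from collections import defaultdict
from statistics import mean, pstdev

def build_baseline(history):
    """Aggregate historical flow records into per-(src, dst, protocol) byte statistics."""
    samples = defaultdict(list)
    for src, dst, proto, nbytes in history:
        samples[(src, dst, proto)].append(nbytes)
    return {key: (mean(vals), pstdev(vals)) for key, vals in samples.items()}

def flag_anomalies(baseline, current_flows, sigma=3.0):
    """Flag flows that are new, or whose volume deviates strongly from the baseline."""
    alerts = []
    for src, dst, proto, nbytes in current_flows:
        key = (src, dst, proto)
        if key not in baseline:
            alerts.append((key, nbytes, "previously unseen flow"))
            continue
        avg, dev = baseline[key]
        if dev and abs(nbytes - avg) > sigma * dev:
            alerts.append((key, nbytes, "volume far outside baseline"))
    return alerts

# Example: learn from last week's flows, then check today's.
history = [("10.1.1.5", "192.0.2.9", "tcp/443", n) for n in (120, 150, 130, 140)]
today = [("10.1.1.5", "192.0.2.9", "tcp/443", 9000),    # huge volume spike
         ("10.1.1.5", "198.51.100.7", "tcp/6667", 50)]  # never-before-seen destination
print(flag_anomalies(build_baseline(history), today))
```

In practice the firewall would do this against NetFlow-style records at far higher volume, with a library of known command and control patterns sitting alongside the statistical baseline.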
When firewall vendors start bragging about ‘DLP’, you have our permission to beat them like dogs. That said, clearly there are opportunities for better integration between real DLP solutions and the enterprise firewall, which can provide an additional layer of enforcement. We also expect to see maturation of the inspection algorithms available on firewalls, which could supplant the current DLP network gateways – particularly in smaller locations where multiple devices can be problematic.

Vulnerability Integration

One of the more interesting integrations we see is the ability for a web application scanner or service to find an issue and set a blocking rule directly on the web application firewall. This is not a long-term fix, but it does buy time to investigate a potential application flaw, and provides breathing room to choose the most appropriate remediation approach. Some vendors refer to this as virtual patching. Whatever it’s called, we think it’s interesting, so we expect the same kind of capability to show up on general purpose enterprise firewalls. You’d expect the vulnerability scanning vendors to lead the way on this integration, but regardless, it will make for an interesting capability of the application aware firewall. Especially if you broaden your thinking beyond general network/system scanners. A database scan would likely yield some interesting holes which could be addressed with an application blocking rule at the firewall, no? There are numerous intriguing possibilities, and of course there is always a risk of over-automating (SkyNet, anyone?), but the additional capabilities are likely worth the complexity risk.

Next we’ll address the question we’ve been dancing around throughout the series: is there a difference between an application aware firewall and a UTM (unified threat management) device? Stay tuned…
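As a rough illustration of the virtual patching workflow, here is a hedged sketch of translating a scanner finding into a temporary blocking rule. The REST endpoint, token, rule fields, and finding format are all hypothetical; real firewall management APIs differ by vendor.

```python
import requests  # assumes the firewall exposes some REST management API

FIREWALL_API = "https://firewall.example.com/api/rules"   # hypothetical endpoint
API_TOKEN = "changeme"                                     # hypothetical credential

def virtual_patch(finding):
    """Translate a scanner finding into a temporary blocking rule.

    `finding` is a hypothetical dict such as:
      {"app": "crm", "url_pattern": "/export?id=*", "issue": "SQL injection"}
    """
    rule = {
        "action": "block",
        "application": finding["app"],
        "match": {"url": finding["url_pattern"]},
        "comment": "virtual patch: %s (pending code fix)" % finding["issue"],
        "expires_in_days": 30,   # force periodic review so the rule doesn't live forever
    }
    resp = requests.post(FIREWALL_API, json=rule,
                         headers={"Authorization": "Bearer " + API_TOKEN},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()
```

The expiration field is the important design choice: a virtual patch should force a review rather than quietly become permanent.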


DLP Selection: Infrastructure Integration Requirements

In our last post we detailed content protection requirements, so now it’s time to close out our discussion of technical requirements with infrastructure integration. To work properly, all DLP tools need some degree of integration with your existing infrastructure. The most common integration points are:

  • Directory servers: to determine users and build user, role, and business unit policies. At minimum, you need to know who to investigate when you receive an alert.
  • DHCP servers: so you can correlate IP addresses with users. You don’t need this if all you are looking at is email or endpoints, but for any other network monitoring it’s critical.
  • SMTP gateway: this can be as simple as adding your DLP tool as another hop in the MTA chain, but could also be more involved.
  • Perimeter router/firewall: for passive network monitoring you need someplace to position the DLP sensor – typically a SPAN or mirror port, as we discussed earlier.
  • Web gateway: this will probably integrate with your DLP system if you want to filter web traffic with DLP policies. If you want to monitor SSL traffic (you do!), you’ll need to integrate with something capable of serving as a reverse proxy (man in the middle).
  • Storage platforms: to install client software that integrates with your storage repositories, rather than relying purely on remote network/file share scanning.
  • Endpoint platforms: these must be compatible with the endpoint DLP agent. You may also want to use an existing software distribution tool to deploy it.

I don’t mean to make this sound overly complex – many DLP deployments only integrate with a few of these infrastructure components, or the functionality is included within the DLP product. Integration might be as simple as dropping a DLP server on a SPAN port, pointing it at your directory server, and adding it into the email MTA chain. But for developing requirements, it’s better to over-plan than to miss a crucial piece that blocks expansion later.

Finally, if you plan on deploying any database or document based policies, fill out the storage section of the table. Even if you don’t plan to scan your storage repositories, you’ll be using them to build partial document matching and database fingerprinting policies.
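To show why the DHCP integration matters, here is a minimal sketch of correlating the IP address in a DLP alert back to a user via lease history. The lease records, names, and times are invented for illustration; a real deployment would pull this from DHCP or directory logs.

```python
import bisect
from datetime import datetime

# Hypothetical lease log entries: (lease_start, ip_address, username)
LEASES = [
    (datetime(2010, 9, 15, 8, 0), "10.1.2.30", "asmith"),
    (datetime(2010, 9, 15, 9, 30), "10.1.2.30", "bjones"),   # IP re-assigned later
]

def user_for_ip(ip, event_time, leases=LEASES):
    """Return the user who held an IP address at the time of a DLP alert."""
    candidates = sorted((start, user) for start, addr, user in leases if addr == ip)
    starts = [start for start, _ in candidates]
    idx = bisect.bisect_right(starts, event_time) - 1
    return candidates[idx][1] if idx >= 0 else None

# Example: attribute a network DLP alert to a person, not just an address.
print(user_for_ip("10.1.2.30", datetime(2010, 9, 15, 10, 15)))  # -> 'bjones'
```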


Incite 9/15/2010: Up, down, up, down, Repeat

It was an eventful weekend at chez Rothman. The twins (XX2 and XY) had a birthday, which meant the in-laws were in town, and for the first time we had separate parties for the kids. That meant one party on Saturday night and another Sunday afternoon. We had a ton of work to do to get the house ready to entertain a bunch of rambunctious 7 year olds. But that’s not all – we also had a soccer game and tryouts for the holiday dance performance on Saturday. And that wasn’t it. It was the first weekend of the NFL season. I’ve been waiting intently since February for football to start again, and I had to balance all this activity with my strong desire to sit on my ass and watch football. As I mentioned last week, I’m trying to be present and enjoy what I’m doing now – so this weekend was a good challenge.

I’m happy to say the weekend was great. Friday and Saturday were intense. Lots of running around and the associated stress, but it all went without a hitch. Well, almost. Any time you get a bunch of girls together (regardless of how old they are), drama cannot be far off. So we had a bit, but nothing unmanageable. The girls had a great time and that’s what’s important. We are gluttons for punishment, so we had 4 girls sleep over. So I had to get donuts in the AM and then deliver the kids to Sunday school. Then I could take a breath, grab a workout, and finally sit on my ass and watch the first half of the early NFL games. When it was time for the party to start, I set the DVR to record the rest of the game, resisted the temptation to check the scores, and had a good time with the boys.

When everyone left, I kicked back and settled in to watch the games. I was flying high. Then the Falcons lost in OT. Crash. Huge bummer. Kind of balanced out by the Giants winning. So I had a win and a loss. I could deal. Then the late games started. I picked San Francisco in my knock-out pool, which means if I get a game wrong, I’m out. Of course, Seattle kicked the crap out of SFO and I’m out in week 1. Kind of like being the first one voted off the island in Survivor. Why bother? I should have just set the Jackson on fire, which would have been more satisfying.

I didn’t have time to sulk because we went out to dinner with the entire family. I got past the losses and was able to enjoy dinner. Then we got back and watched the 8pm game with my in-laws, who are big Redskins fans. Dallas ended up losing, so that was a little cherry on top.

As I look back on the day, I realize it’s really a microcosm of life. You are up. You are down. You are up again and then you are down again. Whatever you feel, it will soon pass. As long as I’m not down for too long, it’s all good. It helps me appreciate when things are good. And I’ll keep riding the waves of life and trying my damnedest to enjoy the ups. And the downs.

– Mike.

Photo credits: “Up is more dirty than down” originally uploaded by James Cridland

Recent Securosis Posts

As you can tell, we’ve been pretty busy over the past week, and Rich is just getting ramped back up. Yes, we have a number of ongoing research projects and another starting later this week. We know keeping up with everything is like drinking from a fire hose, and we always appreciate the feedback and comments on our research.
  • HP Sets Its ArcSights on Security
  • FireStarter: Automating Secure Software Development
  • Friday Summary: September 10, 2010
  • White Paper Released: Data Encryption 101 for PCI
  • DLP Selection Process, Step 1
  • Understanding and Selecting an Enterprise Firewall: Management; Deployment Considerations; Technical Architecture, Part 2; Technical Architecture, Part 1
  • NSO Quant: Monitor Metrics – Collect and Store; Define Policies; Enumerate and Scope
  • LiquidMatrix Security Briefing: September 13; September 9; September 8

Incite 4 U

Here you have… a time machine – The big news last week was the Here You Have worm, which compromised large organizations such as NASA, Comcast, and Disney. It was a good old-fashioned mass mailing virus. Wow! Haven’t seen one of those in many years. Hopefully your company didn’t get hammered, but it does remind us that what’s old inevitably comes back again. It also goes to show that users will shoot themselves in the foot, every time. So what do we do? Get back to basics, folks. Endpoint security, check. Security awareness training, check. Maybe it’s time to think about more draconian lockdown of PCs (with something like application whitelisting). If you didn’t get nailed consider yourself lucky, but don’t get complacent. Given the success of Here You Have, it’s just a matter of time before we get back to the future with more old school attacks. – MR

Cyber-Something – A couple of the CISOs at the OWASP conference ducked out because their networks had been compromised by a worm. The “Here You Have” worm was being reported, and it infected more than half the desktops at one firm; in another case it just crashed the mail server. But this whole situation ticks me off. Besides wanting to smack the person who came up with the term “Cyber-Jihad” – as I suspect this is nothing more than an international script-kiddie – I don’t like that we have moved focus off the important issue. After reviewing the McAfee blog, it seems that propagation is purely due to people clicking on email links that download malware. So WTF? Why is the focus on ‘Cyber-Jihad’? Rather than “Ooh, look at the Cyber-monkey!” how about “How the heck did the email scanner not catch this?” Why wasn’t the reputation of


The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls

Over the summer we initiated what turned out to be a pretty darn big data security survey. Our primary goal was to assess what data security controls people find most effective, and to get a better understanding of how they are using the controls, what’s driving adoption, and a bit on what kinds of incidents they are experiencing. The response was overwhelming – we had over 1,100 people participate from across the IT spectrum. The responses were almost evenly split between security and regular IT folks, which helps reduce some of the response bias.

I try to be self-critical, and there were definitely some mistakes in how we designed the survey (although the design process was open to the public and available for review before we launched, so I do get to blame all of you a bit too, for letting me screw up). But despite those flaws I think we still obtained some great data – especially on what controls people consider effective (and not), and how you are using them.

Due to an error on my part we can’t release the full report here at Securosis for 30 days, but it is available from our sponsor, Imperva, who is also re-posting the survey so those of you who haven’t taken it yet can run through the questions and compare yourselves to the rest of the responses. We will also be releasing the full (anonymized) raw data so you can perform your own analysis. Everything is free under a Creative Commons license. I apologize for not being able to release the report immediately as usual – it was a mistake on my part and won’t happen again.

Key Findings

  • We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
  • On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
  • Most responding organizations still rely heavily on “traditional” security controls such as system hardening, email filtering, access management, and network segregation to protect data.
  • When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence. The same controls rated slightly lower for reducing incident severity (when incidents occur), and still lower for reducing compliance costs.
  • 88% of survey participants must meet at least 1 regulatory or contractual compliance requirement, with many needing to comply with multiple regulations. Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
  • 46% of participants reported about the same number of security incidents in the most recent 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting a relative increase.
  • Organizations are most likely to deploy USB/portable media encryption and device control or data loss prevention in the next 12 months.
  • Email filtering is the single most commonly used control, and the one cited as least effective.

Our overall conclusion is that even accounting for potential response bias, data security has transitioned past early adopters and significantly penetrated the early mainstream of the security industry.
Top Rated Controls (Perceived Effectiveness)

  • The 5 top rated controls for reducing the number of incidents are network data loss prevention, full drive encryption, web application firewalls, server/endpoint hardening, and endpoint data loss prevention.
  • The 5 top rated controls for reducing incident severity are network data loss prevention, full drive encryption, endpoint data loss prevention, email filtering, and USB/portable media encryption and device control. (Web application firewalls nearly tied, and almost made the top 5.)
  • The 5 top rated controls for reducing compliance costs are network data loss prevention, endpoint data loss prevention, storage data loss prevention, full drive encryption, and USB/portable media encryption and device control. These were very closely followed by network segregation and access management.

We’ll be posting more findings throughout the week, and please visit Imperva to get your own copy of the full analysis.


Monitoring up the Stack: Introduction

The question that came up over and over again during our SIEM research project was: “How do I derive more value from my SIEM installation?” As we discussed throughout that report, plenty of data gets collected, but extracting actionable information remains a challenge. In part this is due to the “drinking from the fire hose” effect, where the speed and volume of incoming data make it difficult to process effectively. Additionally, data needs to be pieced together with sufficient reference points from multiple event sources before analysis. But we found a major limiting factor was also the network-centric perspective on data collection and analysis. We were looking at traffic, rather than transactions. We were looking at packet density, not services. We were looking at IP addresses instead of user identity. We didn’t have the context to draw conclusions.

We continue pushing our research agenda forward in the areas of application and user monitoring, as this has practical value in performing more advanced analysis. So we will dig into these topics and trends in our new series “Monitoring up the Stack: Reacting Faster to Emerging Attacks”.

Compliance and operations management are important drivers for investment in SIEM, Log Management, and other complementary monitoring technologies. SIEM has the capacity to provide continuous monitoring, but most deployments are just not set up to provide timely threat response to application attacks. To support more advanced policies and controls, we need to peel back the veil of network-oriented analysis to look at applications and business transactions. In some cases, this just means a new way of looking at existing data. But that would be too easy, wouldn’t it? To monitor up the stack effectively we need to look at changes in architecture, policy management, data collection, and analysis. Business process analytics and fraud detection require different policies, some additional data, and additional analysis techniques beyond what is commonly found in SIEM. If we want to make sense of business use of IT systems, we need to move up the stack, into the application layer.

What’s different about monitoring at the application layer? Application awareness and context. To highlight why network and security event monitoring is inherently limited for some use cases, consider that devices and operating systems sit outside business processes. In some cases they lack the information needed to perform analysis, but more commonly the policies and analysis engines are just not set up to detect fraud, spoofing, repudiation, and injection attacks. From the application perspective, network identity and user identity are extremely different. Analysis performed in the context of the application provides contextual data unavailable from network and device data alone. It also provides an understanding of transactions, which is much more useful and informative than pure events. Finally, the challenges of deploying a solution for real-time analysis of events are almost the opposite of those for efficient management and correlation. Evolving threats target data and application functions, and we need that perspective to understand and keep up with threats. Ultimately we want to provide business analysis and operations management support when parsing event streams, which are the areas SIEM platforms struggle with. And for compliance we want to implement controls and verify both their effectiveness and appropriateness.
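As a simple illustration of what application context adds, here is a hedged sketch of enriching a network-level event with user identity and transaction data. The directory and application session lookups are stand-ins for real integrations, and every name and value is invented.

```python
# Hypothetical lookups standing in for directory and application integrations.
DIRECTORY = {"10.1.2.30": "bjones"}                        # IP -> authenticated user
APP_SESSIONS = {"bjones": {"app": "payments", "transaction": "wire-transfer", "amount": 250000}}

def enrich(event):
    """Add user identity and transaction context to a network-level event."""
    user = DIRECTORY.get(event.get("src_ip"))
    enriched = dict(event, user=user)
    if user in APP_SESSIONS:
        enriched["context"] = APP_SESSIONS[user]
    return enriched

raw = {"src_ip": "10.1.2.30", "dst_port": 443, "bytes": 18000000}
print(enrich(raw))
# The same traffic now reads as "bjones moved ~18MB during a large wire transfer"
# rather than "some IP address sent a lot of HTTPS traffic".
```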
To accomplish this we must employ additional tactics: baselining behavior, more advanced forms of data analysis, better policy management, and – perhaps most importantly – a better understanding of user identity and authorization. Sure, for security and network forensics, SIEM does a good job of piecing together related events across a network. Both methods detect attacks, and both help with forensic analysis. But monitoring up the stack is far better for detecting misuse and more subtle forms of data theft. And depending upon how it’s deployed in your environment, it can block activity as well as report problems.

In our next post we’ll dig into the threats that drive monitoring, and how application monitoring is geared for certain attack vectors.


DLP Selection Process: Protection Requirements

Now that you’ve figured out what information you want to protect, it’s time to figure out how to protect it. In this step we’ll figure out your high-level monitoring and enforcement requirements.

Determine Monitoring/Alerting Requirements

Start by figuring out where you want to monitor your information: which network channels, storage platforms, and endpoint functions. Your high-level options are:

  • Network: email, webmail, HTTP/FTP, HTTPS, IM/messaging, and generic TCP/IP.
  • Storage: file shares, document management systems, and databases.
  • Endpoint: local storage, portable storage, network communications, cut/paste, print/fax, screenshots, and application control.

You might have some additional requirements, but these are the most common ones we encounter.

Determine Enforcement Requirements

As we’ve discussed in other posts, most DLP tools include various enforcement actions, which tend to vary by channel/platform. The most basic enforcement option is “Block” – the activity is stopped when a policy violation is detected. For example, an email will be filtered, a file won’t be transferred to a USB drive, or an HTTP request will fail. But most products also include other options, such as:

  • Encrypt: Encrypt the file or email before allowing it to be sent/stored.
  • Quarantine: Move the email or file into a quarantine queue for approval.
  • Shadow: Allow a file to be moved to USB storage, but send a protected copy to the DLP server for later analysis.
  • Justify: Warn the user that this action may violate policy, and require them to enter a business justification to store with the incident alert on the DLP server.
  • Change rights: Add or change Digital Rights Management on the file.
  • Change permissions: Alter the file permissions.

Map Content Analysis Techniques to Monitoring/Protection Requirements

DLP products vary in which policies they can enforce on which locations, channels, and platforms. Most often we see limitations on the types or size of policies that can be enforced on an endpoint, which change as the endpoint moves off or onto the corporate network, because some policies require communication with the central DLP server. For the final step in this part of the process, list your content analysis requirements for each monitoring/protection requirement you just defined. These tables directly translate to the RFP requirements that are at the core of most DLP projects: what you want to protect, where you need to protect it, and how.
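One way to capture these decisions is a simple requirements matrix that maps each monitoring point to the enforcement actions and content analysis techniques it must support; it translates almost directly into RFP line items. The sketch below is illustrative only; the channels, actions, and techniques shown are examples, not a complete or recommended set.

```python
# A hypothetical requirements matrix: each monitoring point maps to the
# enforcement actions and content analysis techniques it must support.
REQUIREMENTS = {
    ("network", "email"): {
        "enforce": ["block", "encrypt", "quarantine"],
        "analysis": ["pattern matching", "partial document matching"],
    },
    ("endpoint", "portable storage"): {
        "enforce": ["block", "shadow", "justify"],
        "analysis": ["pattern matching"],   # endpoints often support smaller policies
    },
    ("storage", "file shares"): {
        "enforce": ["quarantine", "change permissions"],
        "analysis": ["database fingerprinting", "partial document matching"],
    },
}

def gaps(vendor_support, requirements=REQUIREMENTS):
    """Compare a vendor's claimed coverage against the requirements matrix."""
    missing = []
    for target, needs in requirements.items():
        offered = vendor_support.get(target, {"enforce": [], "analysis": []})
        for kind in ("enforce", "analysis"):
            for item in needs[kind]:
                if item not in offered.get(kind, []):
                    missing.append((target, kind, item))
    return missing

# Example: a hypothetical vendor's claimed coverage, checked against the matrix.
vendor = {("network", "email"): {"enforce": ["block", "quarantine"],
                                 "analysis": ["pattern matching"]}}
print(gaps(vendor))
```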


DLP Selection Process: Defining the Content

In our last post we kicked off the DLP selection process by putting the team together. Once you have them in place, it’s time to figure out which information you want to protect. This is extremely important, as it defines which content analysis techniques you require, which are at the core of DLP functionality. This multistep process starts with figuring out your data priorities and ends with your content analysis requirements.

Stack rank your data protection priorities

The first step is to list out which major categories of data/content/information you want to protect. While it’s important to be specific enough for planning purposes, it’s okay to stay fairly high-level. Definitions such as “PCI data”, “engineering plans”, and “customer lists” are good. Overly general categories like “corporate sensitive data” and “classified material” are insufficient – too generic, and they cannot be mapped to specific data types. This list must be prioritized; one good way of developing the ranking is to pull the business unit representatives together and force them to sort and agree to the priorities, rather than having someone who isn’t directly responsible (such as IT or security) determine the ranking.

Define the data type

For each category of content listed in the first step, define the data type, so you can map it to your content analysis requirements:

  • Structured or patterned data is content like credit card numbers, Social Security Numbers, and account numbers, which follows a defined pattern we can test against.
  • Known text is unstructured content, typically found in documents, where we know the source and want to protect that specific information. Examples are engineering plans, source code, corporate financials, and customer lists.
  • Images and binaries are non-text files such as music, video, photos, and compiled application code.
  • Conceptual text is information that doesn’t come from an authoritative source like a document repository but may contain certain keywords, phrases, or language patterns. This is pretty broad, but some examples are insider trading, job seeking, and sexual harassment.

Match data types to required content analysis techniques

Using the flowchart below, determine required content analysis techniques based on data types and other environmental factors, such as the existence of authoritative sources. This chart doesn’t account for every possibility but is a good starting point and should define the high-level requirements for a majority of situations.

Determine additional requirements

Depending on the content analysis technique there may be additional requirements, such as support for specific database platforms and document management systems. If you are considering database fingerprinting, also determine whether you can work against live data in a production system, or will rely on data extracts (database dumps to reduce performance overhead on the production system).

Define rollout phases

While we haven’t yet defined formal project phases, you should have an idea early on whether a data protection requirement is immediate or something you can roll out later in the project. One reason for including this is that many DLP projects are initiated based on some sort of breach or compliance deficiency relating to only a single data type. This could lead to selecting a product based only on that requirement, which might entail problematic limitations down the road as you expand your deployment to protect other kinds of content.
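To make the structured/patterned data type concrete, here is a minimal sketch of pattern-based detection: loose regular expressions find candidates, and a Luhn checksum cuts credit card false positives. The patterns are deliberately simplified examples, not production DLP policies, and the test values are well-known published examples.

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # loose candidate match
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_ok(number):
    """Luhn checksum, used to cut false positives on credit card candidates."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_structured_data(text):
    """Return (type, match) pairs for patterned data found in a chunk of text."""
    hits = [("ssn", m.group()) for m in SSN_RE.finditer(text)]
    hits += [("credit_card", m.group()) for m in CARD_RE.finditer(text)
             if luhn_ok(m.group())]
    return hits

print(find_structured_data("Card 4111 1111 1111 1111, SSN 078-05-1120"))
```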


Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 1

Since our main contention in the Understanding and Selecting an Enterprise Firewall series is the movement toward application aware firewalls, it makes sense to dig a bit deeper into the technology that will make this happen and the major uses for these capabilities. With an understanding of what to look for, you should be in a better position to judge whether a vendor’s application awareness capabilities will match your requirements.

Application Visibility

In the first of our application awareness posts, we talked about visibility as one of the key use cases for application aware firewalls. What exactly does that mean? We’ll break this up into the following buckets:

Eye Candy: Most security folks don’t care about fancy charts and graphs, but senior management loves them. What CFO doesn’t turn to jello at the first sign of a colorful pie chart? The ability to see application usage and traffic, and who is consuming bandwidth over a long period of time, provides huge value in understanding normal behavior on your network. Look for granularity and flexibility in these application-oriented visuals. Top 10 lists are a given, but be sure you can slice the data the way you need – or at least export to a tool that can. Having the data is nice; being able to use it is better.

Alerting: The trending capabilities of application traffic analysis allow you to set alerts to fire when abnormal behavior appears. Given the infinite attack surface we must protect, any help pinpointing and prioritizing investigative resources increases efficiency. Be sure to have sufficient knobs and dials to set appropriate alerts. You’d like to be able to alert on applications, user/group behavior in specific applications, and possibly even payload in the packets (through regular expression type analysis), and any combination thereof. Obviously the more flexibility you have in setting application alerts and tightening thresholds, the better you’ll be able to cut the noise. This sounds very similar to managing an IDS, but we’ll get to that later. Also make sure setting lots of application rules won’t kill performance. Dropped packets are a lousy trade-off for application alerts.

One challenge of using a traditional firewall is the interface. Unless the user experience has been rebuilt around an application context (what folks are doing), it still feels like everything is ports and protocols (how they are doing it). Clearly the further you can abstract network behavior to application behavior, the more applicable (and understandable) your rules will be.

Application Blocking

Visibility is the first step, but you also want to be able to block certain applications, users, and content activities. We told you this was very similar to the IPS concept – the difference is in how detection works. The IDS/IPS uses a negative security model (matching patterns to identify bad stuff) to fire rules, while application aware firewalls use a positive security model – they determine what application traffic is authorized, and block everything else.

Extending this IPS discussion a bit, we see most organizations using blocking on only a small minority of the rules/signatures on the box, usually less than 10%. This is for obvious reasons (primarily because blocking legitimate traffic is frowned upon), and gets back to a fundamental tenet of IPS which also applies to application aware firewalls: just because you can block, doesn’t mean you should.
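For illustration, here is a minimal sketch of what a positive security model looks like in policy terms: traffic identified as an approved application for an authorized group passes, and everything else is blocked by default. The application names and groups are hypothetical, and application identification itself is assumed to happen upstream in the firewall.

```python
# Hypothetical policy: approved applications and the groups allowed to use them.
ALLOWED = {
    "salesforce": {"groups": {"sales", "marketing"}},
    "webex":      {"groups": {"all"}},
    "sharepoint": {"groups": {"engineering", "finance"}},
}

def decide(app, user_groups):
    """Allow only known applications for authorized groups; default deny everything else."""
    policy = ALLOWED.get(app)
    if policy is None:
        return "block"                      # unknown application: default deny
    if "all" in policy["groups"] or policy["groups"] & user_groups:
        return "allow"
    return "block"                          # known app, but this user isn't authorized

print(decide("salesforce", {"sales"}))      # allow
print(decide("bittorrent", {"sales"}))      # block: not an approved application
```

The default-deny branch is exactly where the operational caution below comes in: an unclassified or newly deployed application stays blocked until someone updates the policy.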
Of course, a positive security model means you are defining what is acceptable and blocking everything else, but be careful here. Most security organizations aren’t in the loop on everything that is happening (we know – quite a shocker), so you may inadvertently stymie a new or updated application because the firewall doesn’t allow it. To be clear, from a security standpoint that’s a great thing. You want to be able to vet each application before it goes live, but politically that might not work out. You’ll need to gauge your own ability to get away with this.

Aside from the IPS analogy, there is also a very clear whitelisting analogy to blocking application traffic. One of the issues with application whitelisting on endpoints is the challenge of getting applications classified correctly and providing a clear workflow mechanism to deal with exceptions. The same issues apply to application blocking. First you need to ensure the application profiles are accurate and up-to-date. Second, you need a process to allow traffic to be accepted, balancing the need to protect infrastructure and information against responsiveness to business needs. Yeah, this is non-trivial, which is why blocking is done on only a fraction of application traffic.

Overlap with Existing Web Security

Think about the increasing functionality of your operating system or your office suite. Basically, the big behemoth squashed a whole bunch of third party utilities that added value, by bundling such capabilities into each new release. The same thing is happening here. If you look at the typical capabilities of your web filter, there isn’t a lot that can’t be done by an application aware firewall. Visibility? Check. Employee control/management? Check. URL blocking, heuristics, script analysis, AV? Check, check, check, check. The standalone web filter is an endangered species – which, given the complexity of the perimeter, isn’t a bad thing. Simplifying is good. Moreover, a lot of folks are doing web filtering in the cloud now, so the movement away from on-premises web filters was under way anyway. Of course, no entrenched device gets replaced overnight, but the long slide towards standalone web filter oblivion has begun. As you look at application aware firewalls, you may be able to displace an existing device (or eliminate a maintenance renewal) to justify the cost of the new gear. Clearly going after the web filtering budget makes sense, and the more expense-neutral you can make any purchase, the better.

What about web application firewalls? To date, these categories have been separate, with less clear overlap. The WAF’s abilities to profile and learn application behavior – in terms of parameter validation, session management, flow analysis, etc. – aren’t available on application aware firewalls. For now. But let’s be clear, it’s not a


HP Sets Its ArcSights on Security

When there’s smoke, there’s usually fire. I’ve been pretty vocal over the past two weeks, stating that users need to forget what they are hearing about various rumored acquisitions, or how these deals will impact them, and focus on doing their jobs. They can’t worry about what deal may or may not happen until it’s announced. Well, this morning HP announced the acquisition of ArcSight, after some more detailed speculation appeared over the weekend. So is it time to worry yet?

Deal Rationale

HP is acquiring ArcSight for about $1.5 billion, which is a significant premium over where ARST was trading before the speculation started. Turns out it’s about 8 times sales, which is a large multiple. Keep in mind that HP is a $120 billion revenue company, so spending a billion here and a billion there to drive growth is really a rounding error. What HP needs to do is buy established technology they can drive through their global channels, and ARST clearly fits that bill. ARST has a large number of global enterprise customers who have spent millions of dollars and years making ARST’s SIEM platform work for them. Maybe not as well as they’d like, but it’s not something they can move away from any time soon. Throw in the double-digit growth characteristic of security and the accelerating cyber-security opportunity of ARST’s dominant position within government operations, and there is a lot of leverage for HP. Clearly HP is looking for growth drivers.

Additionally, ARST requires a lot of services to drive implementation and expansion within the customer base. HP has lots of services folks they need to keep busy (EDS, anyone?), so there is further leverage. On the analyst call (on which, strangely enough, no one from ArcSight was present), the HP folks specifically mentioned how they plan to add value to customers from the intersection of software, services, and hardware. Right. This is all about owning everything and increasing their share of wallet. This is further explained by the 4 aspects of HP’s security strategy: Software Security (Fortify’s code scanning technology), Visibility (ArcSight comes in here), Understanding (risk assessment? but this is hogwash), and Operations (TippingPoint and their IT Ops portfolio). This feels like a strategy built around the assets (as opposed to the strategy driving the product line), but clearly HP is committed to security, and that’s good to see.

This feels a lot like HP’s Opsware deal a few years ago. ArcSight fits a gap in the IT management stack, and HP wrote a billion-dollar check to fill it. To be clear, HP still has gaps in their security strategy (perimeter and endpoint security) and will likely write more checks. Those deals will be considerably bigger and require considerably fewer services, which isn’t as attractive to HP, but in order to field a full security offering they need technology in all the major areas. Finally, this continues to validate our long term vision that security isn’t a market; it will be part of the larger IT stack. Clearly security management will be integrated with regular IT management, initially from a visibility standpoint, and gradually from an operations standpoint as well. Again, not within the next two years, but over a 5-7 year time frame. The big IT vendors need to provide security capabilities, and the only way they are going to get them is to buy.

User Impact

End user customers tend to make large (read: millions of dollars), multi-year investments in their SIEM/Log Management platforms.
Those platforms are hard to rip out once implemented, so the technology tends to be quite sticky. The entire industry has been hearing about how unhappy customers are with SIEM players like ARST and RSA, but year after year customers spend more money with these companies to expand the use cases supported by the technology. There will be corporate integration challenges, and with these big deals product innovation tends to grind to a halt while those issues are addressed. We don’t expect anything different with HP/ARST.

Inertia is a reality here. Customers have spent years and millions on ARST, so it’s hard to see a lot of them moving en masse elsewhere in the near term. Obviously if HP doesn’t integrate well, they’ll gradually see customers go elsewhere. If necessary, customers will fortify their ARST deployments with other technologies in the short term, and migrate when it’s feasible down the road. Regardless of the vendor propaganda you’ll hear about this customer swap-out or that one, it takes years for a big IT company to snuff out the life of an acquired technology. Not that both HP and IBM haven’t done that, but this simply isn’t a short-term issue.

Should customers who are considering ArcSight look elsewhere? It really depends on what problem they are trying to solve. If it’s something well within ARST’s current capabilities (SIEM and Log Management), then there is little risk. If anything, having access to HP’s services arm will help in the intermediate term. If your use case requires ARST to build new capabilities or is based on product futures, you can likely forget it. Unless you want to pay HP’s services arm to build it for you.

One of the hallmarks of the Pragmatic CSO approach is to view security within a business context. As we see traditional IT ops and security ops come together over time, this becomes increasingly important. Security is critical to everything IT, but security is not standalone and must be considered within the context of the full IT stack, which helps to automate business processes. The fact that many of security’s big vendors now live within huge IT behemoths is telling. Ignore the portents at your own peril.

Market Impact

We’ve been seeing a bifurcation of the SIEM/Log Management market over the past year. The strong are getting stronger and the not-so-strong are struggling. This will continue. The thing so striking about the EMC/RSA deal a couple years ago was the ability of EMC’s


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.