Securosis Research

HP Sets Its ArcSights on Security

When there’s smoke, there’s usually fire. I’ve been pretty vocal over the past two weeks, telling users to forget what they are hearing about various rumored acquisitions, or how those deals might impact them, and focus on doing their jobs. They can’t worry about a deal that may or may not happen until it’s announced. Well, this morning HP announced the acquisition of ArcSight, after some more detailed speculation appeared over the weekend. So is it time to worry yet?

Deal Rationale

HP is acquiring ArcSight for about $1.5 billion, a significant premium over where ARST was trading before the speculation started. It works out to about 8 times sales, which is a large multiple. Keep in mind that HP is a $120 billion revenue company, so spending a billion here and a billion there to drive growth is really a rounding error. What HP needs to do is buy established technology it can drive through its global channels, and ARST clearly fits that bill. ARST has a large number of global enterprise customers who have spent millions of dollars and years making ARST’s SIEM platform work for them. Maybe not as well as they’d like, but it’s not something they can move away from any time soon. Throw in the double-digit growth characteristic of security, and the accelerating cyber-security opportunity driven by ARST’s dominant position within government operations, and there is a lot of leverage for HP. Clearly HP is looking for growth drivers.

Additionally, ARST requires a lot of services to drive implementation and expansion within the customer base. HP has lots of services folks they need to keep busy (EDS, anyone?), so there is further leverage. On the analyst call (on which, strangely enough, no one from ArcSight was present), the HP folks specifically mentioned how they plan to add value to customers at the intersection of software, services, and hardware. Right. This is all about owning everything and increasing their share of wallet. This is further explained by the four aspects of HP’s security strategy: Software Security (Fortify’s code scanning technology), Visibility (ArcSight comes in here), Understanding (risk assessment, but this is hogwash), and Operations (TippingPoint and their IT Ops portfolio). This feels like a strategy built around the assets (as opposed to the strategy driving the product line), but clearly HP is committed to security, and that’s good to see.

This feels a lot like HP’s Opsware deal a few years ago. ArcSight fits a gap in the IT management stack, and HP wrote a billion-dollar check to fill it. To be clear, HP still has gaps in its security strategy (perimeter and endpoint security) and will likely write more checks. Those deals will be considerably bigger and require considerably fewer services, which isn’t as attractive to HP, but in order to field a full security offering they need technology in all the major areas.

Finally, this continues to validate our long-term vision that security isn’t a market; it will be part of the larger IT stack. Clearly security management will be integrated with regular IT management, initially from a visibility standpoint, and gradually from an operations standpoint as well. Again, not within the next two years, but over a 5-7 year time frame. The big IT vendors need to provide security capabilities, and the only way they are going to get them is to buy.

User Impact

End user customers tend to make large (read: millions of dollars), multi-year investments in their SIEM/Log Management platforms.
Those platforms are hard to rip out once implemented, so the technology tends to be quite sticky. The entire industry has been hearing about how unhappy customers are with SIEM players like ARST and RSA, yet year after year customers spend more money with these companies to expand the use cases supported by the technology. There will be corporate integration challenges, and with big deals like this, product innovation tends to grind to a halt while those issues are addressed. We don’t expect anything different with HP/ARST.

Inertia is a reality here. Customers have spent years and millions on ARST, so it’s hard to see many of them moving en masse elsewhere in the near term. Obviously if HP doesn’t integrate well, they’ll gradually see customers go elsewhere. If necessary, customers will fortify their ARST deployments with other technologies in the short term, and migrate when it’s feasible down the road. Regardless of the vendor propaganda you’ll hear about this customer swap-out or that one, it takes years for a big IT company to snuff out the life of an acquired technology. Both HP and IBM have done exactly that before, but it simply isn’t a short-term issue.

Should customers who are considering ArcSight look elsewhere? It really depends on what problem they are trying to solve. If it’s something well within ARST’s current capabilities (SIEM and Log Management), there is little risk. If anything, having access to HP’s services arm will help in the intermediate term. If your use case requires ARST to build new capabilities or is based on product futures, you can likely forget it. Unless you want to pay HP’s services arm to build it for you.

One of the hallmarks of the Pragmatic CSO approach is to view security within a business context. As traditional IT ops and security ops come together over time, this becomes increasingly important. Security is critical to everything IT, but it is not a standalone function and must be considered within the context of the full IT stack, which exists to automate business processes. The fact that many of security’s big vendors now live within huge IT behemoths is telling. Ignore the portents at your own peril.

Market Impact

We’ve been seeing a bifurcation of the SIEM/Log Management market over the past year. The strong are getting stronger and the not-so-strong are struggling. This will continue. The thing so striking about the EMC/RSA deal a couple years ago was the ability of EMC’s


Understanding and Selecting an Enterprise Firewall: Management

The next step in our journey to understand and select an enterprise firewall has everything to do with management. During procurement it’s very easy to focus on shiny objects and blinking lights. By that we mean getting enamored with speeds, feeds, and features – to the exclusion of what you do with the device once it’s deployed. Without focusing on management during procurement, you may miss a key requirement – or even worse, sign yourself up to a virtual lifetime of inefficiency and wasted time struggling to manage the secure perimeter.

To be clear, most of the base management capabilities of firewall devices are subpar. In fact, a cottage industry of firewall management tools has emerged to address the gaps in these built-in capabilities. Unfortunately that doesn’t surprise us, because vendors tend to focus on managing their devices rather than on the process of protecting the perimeter. There is a huge difference, and if you have more than 15-20 firewalls to worry about, you need to be very sensitive to how the rule base is built, distributed, and maintained.

What to Manage?

Let’s start by making a list of the things you tend to need to manage. It’s pretty straightforward and includes (but isn’t limited to): ports, protocols, users, applications, network access, network segmentation, and VPN access. You need to understand whether the rules will apply at all times or only at certain times, and whether they apply to all users or just certain groups of users. You’ll also need to think about what behaviors are acceptable within specific applications – especially web-based apps. We talk about building these rule sets in detail in our Network Security Operations Quant research.

Once we have lists of things to be managed, and some general acceptance of what the rules need to be (yes, that involves gaining consensus among business users, tech colleagues, legal, and lots of other folks there to make you miserable), you can configure the rule base and distribute it to the boxes. Another key question is where you will manage the policy – or really at how many levels. You’ll likely have some corporate-wide policies driven from HQ which can’t be messed with by local admins. You can also opt for some level of regional administration, so part of the rule base reflects corporate policy but local administrators can add rules to deal with local issues.

Given the sheer number of options available to manage an enterprise firewall environment, don’t forget to consider:

  • Role-based access control: Make sure you get different classes of administrators. Some can manage the enterprise policy, others can just manage their local devices. You also need to pay attention to separation of duties, driven by the firewall change management workflow. Keep in mind the need to have some level of privileged user monitoring in place to keep everyone honest (and to pass those pesky audits) and to provide an audit trail for any changes.
  • Multi-domain administration: As the perimeter gets more complicated, we see a lot of focus on technologies that allow somewhat separate rule bases to be implemented on the firewalls. This doesn’t just provision for different administrators needing access to different functions on the devices, but supports different policies running on specific devices. Large enterprises with multiple operating units tend to have this issue, as each operation may have unique requirements which require different policy.
    Ultimately corporate headquarters bears responsibility for the integrity of the entire perimeter, so you’ll need a management environment that can effectively map to the way your business operates.
  • Virtual firewalls: Since everything eventually gets virtualized, why not the firewall? We aren’t talking about running the firewall in a virtual machine (we discussed that in the technical architecture post), but instead about having multiple virtual firewalls running on the same device. Depending on network segmentation and load balancing requirements, it may make sense to deploy totally separate rule sets within the same device. This is an emerging requirement but worth investigating, because supporting virtual firewalls isn’t easy with traditional hardware architectures. It may not be a firm requirement now, but could crop up in the future.

Checking the Policy

Those with experience managing firewalls know all about the pain of a faulty rule. To avoid that pain and learn from our mistakes, it’s critical to be able to test rules before they go live. That means the management tools must be able to tell you how a new rule or rule change impacts the rest of the rule base. For example, if you insert a rule at one point in the tree, does it obviate rules in other places? First and foremost, you want to ensure that any change doesn’t violate your policies or create a gaping hole in the perimeter. That is job #1.

Also important is rule efficiency. Most organizations have firewall rule bases resembling old closets: lots of stuff in there, and no one is quite sure why it’s kept or which rules still apply. So the ability to check rule hits (how many times each rule was triggered) helps ensure all your rules remain relevant. It’s also helpful to have a utility to optimize the rule base. Since rules tend to be checked sequentially for each incoming packet, put the most frequently used rules early for maximum efficiency, so your expensive devices can work smarter rather than harder and provide some scalability headroom. (There’s a simple sketch of these checks at the end of this post.)

But blind devotion to a policy tool is dangerous too. Remember, these tools simulate the policies and the impact of new rules and updates. Don’t mistake simulation for reality – we strongly recommend confirming changes with actual tests. Maybe not every change, but periodically pen testing your own perimeter will make sure you didn’t miss anything, and minimize surprises. And we know you don’t like surprises.

Reporting

As interesting as managing the rule base is, at some point you’ll need to prove that you are doing the right thing. That means a set of reports substantiating the controls in place. You’ll want to be able to schedule
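To make the rule-checking discussion concrete, here is a minimal sketch of first-match evaluation, hit counting, shadowed-rule detection, and hit-based reordering. It assumes a deliberately simplified rule model (plain labels instead of real address objects, zones, and schedules), and the class and function names are ours for illustration only – not any vendor’s API.

```python
# Minimal, hypothetical sketch of a first-match firewall rule base with
# hit counting, shadowed-rule detection, and hit-based reordering.
# Real firewall policies match on far richer criteria than this.
from dataclasses import dataclass

ANY = "*"

@dataclass
class Rule:
    name: str
    src: str = ANY        # source network/zone, simplified to a label
    dst: str = ANY        # destination network/zone
    port: str = ANY       # service/port, e.g. "25"
    action: str = "deny"  # "allow" or "deny"
    hits: int = 0

    def covers(self, src, dst, port):
        """True if this rule matches the given traffic attributes."""
        return all(r in (ANY, v) for r, v in
                   ((self.src, src), (self.dst, dst), (self.port, port)))

    def shadows(self, other):
        """True if this rule always matches before 'other' could,
        making 'other' unreachable when it is placed later."""
        return all(r in (ANY, o) for r, o in
                   ((self.src, other.src), (self.dst, other.dst),
                    (self.port, other.port)))

class RuleBase:
    def __init__(self, rules):
        self.rules = list(rules)

    def evaluate(self, src, dst, port):
        """First-match evaluation: walk the rules in order, counting hits."""
        for rule in self.rules:
            if rule.covers(src, dst, port):
                rule.hits += 1
                return rule.action
        return "deny"  # implicit default deny at the end of the chain

    def shadowed_rules(self):
        """Names of rules that can never fire because an earlier rule
        always matches first -- the 'old closet' cleanup candidates."""
        return [later.name
                for i, earlier in enumerate(self.rules)
                for later in self.rules[i + 1:]
                if earlier.shadows(later)]

    def reorder_by_hits(self):
        """Move the busiest rules to the front. Only safe when overlapping
        rules share the same action -- check for shadowing/conflicts first."""
        self.rules.sort(key=lambda r: r.hits, reverse=True)

# The broad deny shadows the later, more specific allow:
rb = RuleBase([
    Rule("block-all-smtp", port="25", action="deny"),
    Rule("allow-hq-smtp", src="hq", port="25", action="allow"),  # unreachable
    Rule("allow-web", port="443", action="allow"),
])
print(rb.shadowed_rules())              # ['allow-hq-smtp']
print(rb.evaluate("hq", "dmz", "443"))  # 'allow'
```

Even a toy model like this shows why a shadow/conflict check has to run before any hit-based reordering; real policy tools perform the same analysis against much richer match criteria, and none of it replaces testing the live perimeter.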


DLP Selection Process, Step 1

As I mentioned previously, I’m working on an update to Understanding and Selecting a DLP Solution. While much of the paper still stands, one area I’m adding a bunch of content to is the selection process. I decided to buff it up with more details, and also put together a selection worksheet to help people figure out their requirements. This isn’t an RFP, but a checklist to help you figure out major requirements – which you will use to build your RFP – and manage the selection process. The first step, and this post, are fairly short and simple:

Define the Selection Team

Identify business units that need to be involved and create a selection committee. We tend to include two kinds of business units in the DLP selection process: content owners with sensitive data to protect, and content protectors with responsibility for enforcing controls over the data. Content owners include business units that hold and use the data. Content protectors tend to include departments like Human Resources, IT Security, Corporate Legal, Compliance, and Risk Management. Once you identify the major stakeholders you’ll want to bring them together for the next few steps.

This list covers a superset of the people who tend to be involved with selection (BU stands for “Business Unit”). Depending on the size of your organization you may need more or fewer, and in most cases the primary selection work will be done by 2-3 IT and IT security staff, but we suggest you include this larger list in the initial requirements generation process. The members of this team will also help obtain sample data/content for content analysis testing, and provide feedback on user interfaces and workflow if they will eventually be users of the product.
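As a purely illustrative way to capture the output of this step, here is a minimal sketch of what a selection-team worksheet entry might look like. The field names and example stakeholders are our own assumptions for the sketch, not the worksheet from the paper.

```python
# Hypothetical sketch of selection-team worksheet entries for DLP selection.
# Field names and example stakeholders are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Stakeholder:
    name: str                        # person or role filling the seat
    business_unit: str               # the "BU" column
    kind: str                        # "content owner" or "content protector"
    selection_duties: List[str] = field(default_factory=list)

selection_team = [
    Stakeholder("Finance BU lead", "Finance", "content owner",
                ["identify sensitive data", "supply sample content for testing"]),
    Stakeholder("IT Security analyst", "IT Security", "content protector",
                ["draft requirements", "run content analysis tests"]),
    Stakeholder("Compliance officer", "Compliance", "content protector",
                ["map regulatory obligations to requirements"]),
    Stakeholder("HR representative", "Human Resources", "content protector",
                ["review employee monitoring and workflow impact"]),
]

# The two groups discussed above, useful when gathering requirements:
owners = [s for s in selection_team if s.kind == "content owner"]
protectors = [s for s in selection_team if s.kind == "content protector"]
```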


FireStarter: Automating Secure Software Development

I just got back from the OWASP AppSec 2010 conference in Irvine, California. As you might imagine, it was all about web application security. We security practitioners and coders generally agree that we need to “bake security in” to the development process. Rather than tacking security onto a product like a band-aid after the fact, we actually attempt to deliver code that is secure from the get-go. We are still figuring out how to do this effectively and efficiently, but it seems to me a very good idea.

One of the OWASP keynote presentations was at odds with the basic premise held by most of the participants. The idea presented was (I am paraphrasing) that coders suck at secure code development. Further, they will continue to suck at it, in perpetuity. So let’s take security out of the application developers’ hands entirely and build it in with compilers and pre-compilers that take care of bad code automatically. That way they can continue to be ignorant, and we’ll fix it for them!

Oddly, I agree with two of the basic premises: coders for the most part suck today at coding securely, and a couple common web application exploits can be addressed with this technique. Technology, including real and conceptual implementations, can deal with a wide variety of spoofing and injection attacks. Other than that, I think this idea is completely crazy.

Coders are mostly ignorant of security today, but that’s changing. Some vendors are looking to productize secure coding automation tactics because there are practical applications that are effective. But these are limited to correcting simple coding errors, and they work because machines can easily recognize patterns humans tend to overlook. Thinking we can automate software security into a product through certifications and format-checking programs is not just science fiction, it’s fantasy. I’ll give you one guess who I’ll bet hasn’t written much code in her career. Oh crap, did I give it away?

On the other hand, I have built code that was perfect. Until it was hacked. Yeah, the code was exactly to specification, and performed flawlessly. In fact it performed too flawlessly, and was subject to a timing attack that leaked enough information that the output was guessed. No compiler in the world would have picked up this subtle issue, but an attacker watching the behavior of an application will spot it quickly. And they did. My bad. (There’s a simple illustration of this kind of flaw at the end of this post.)

I am all for automating as much security as we can into the development process, especially as a check on developer activities. Nothing wrong with that – we do it today. But to think that we can automate security and remove it from the hands of developers is naive to the point of being surreal. Timing attacks, logic attacks, and architectural flaws do not show up to a compiler or any form of pre/post automated checks. There has been substantial research on how to validate state machine behavior to detect business transaction fraud, but there has never been a practical application: it’s more work to establish the rules than to simply have someone manually verify the process. It doesn’t work, and it won’t work. People are crafty. Ingenious. Devious. They don’t play by the rules. Compilers and processors do.

That’s certainly my opinion. I’m sure some entrepreneur just slit his/her wrists. Oh, well. Okay, smart guy/gal, tell me why I’m wrong. Especially if you are trying to build a company around this.
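To make the timing-attack point concrete, here is a minimal sketch (not the code from the incident above, which isn’t public) of why this class of flaw is invisible to a compiler: both functions below meet the same specification and return the right answer, but the first one can leak, through its response time, how much of a secret an attacker has guessed correctly.

```python
# Minimal sketch of a timing side channel: both checks are functionally
# "correct", which is exactly why compilers and pre-compilers can't flag
# the first one as a security flaw.
import hmac

SECRET_TOKEN = b"s3cr3t-api-token"  # illustrative secret, not from the post

def leaky_check(candidate: bytes) -> bool:
    # Equality comparison can bail out at the first mismatched byte, so in
    # principle response time grows with the number of leading bytes the
    # attacker has guessed correctly -- enough, over many requests, to
    # recover the secret piece by piece.
    return candidate == SECRET_TOKEN

def constant_time_check(candidate: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the mismatch
    # occurs, closing the timing channel.
    return hmac.compare_digest(candidate, SECRET_TOKEN)
```

A compiler or format checker sees two functions returning the right boolean; only someone reasoning about the code’s behavior over time – or an attacker measuring it – sees the difference.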


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.