Outsourced Email Security

In the last post on Email Security, I commented on how easy it was to add outsourced email security services onto your existing email security deployment. Adding an extra layer of anti-spam filtering on top of what you have not only increases the effectiveness of filtering, but also reduces the processing load on your existing hardware. But email security service vendors have been adding outbound email, data, and web security offerings to their portfolios on top of their existing offerings, and these services solve different problems and offer different value propositions.

Most companies I speak with state that 95-97% of the email that hits their servers is spam. A large percentage contains viruses, spyware, and inappropriate content. The switch is cost effective and ‘painless’ in terms of administration and maintenance, and the large service providers tend to have very current and effective solutions. But it is worth noting that the problem you are solving is not protecting sensitive corporate information, but rather keeping garbage out of your system. If you don’t see spam and your computers have not been infected, you have been successful.

From the customer’s perspective, outbound email security offers many of the same advantages as inbound. As most companies have a very positive experience with inbound service, adopting an outbound email security service is a natural extension of the advantages you enjoy today. It takes very little work to route your outbound email to a third party provider. These providers offer a canned set of security policies out of the box so you can be up and running in minutes, in conjunction with well designed web interfaces for customizing and tuning email (or even web security) policies. But the problem set being addressed is very different: intellectual property leakage, use of private customer information, inappropriate content, violation of corporate policies, and even botnet detection.
These problems are more complex and require policy and system verification. Just because you outsourced the operation does not mean you removed the responsibility for audit and security verification of the system itself. What do I mean by that, specifically? If all of your corporate correspondence is being routed through a third party provider, you need to make sure that they are secure, and that their policies are in line with yours. Remember, the information you are sending out is all of your corporate email, your policies for enforcement, and possibly all of your web browsing history.

The service providers offer add-on email retention services for ‘compliance’, and since some of the data is stored for their own backup and recovery processes, your data will be stored for some period of time. How is privacy maintained? Who has access to the data? Is there verification of integrity? When and how is the data disposed of?

What the vendor will be selling you is the filtering service, the administrative interface, and the storage. What you need to ask for is their security policy, their data retention & data destruction policies, and audit reports for changes in permissions, data access, and alterations to your data. The vendor will provide you a report on what was filtered and blocked according to policy; in addition you need reports on the operational controls around the system. If these services are being marketed to you as ‘must-have’ for compliance, then the vendor must be able to provide their own policies and an audit trail of their service. The vendor will need to provide some degree of transparency, both about their methods and processes in general, and about the specifics of who or what has access to your data. I know a lot of this sounds incredibly obvious, but I have yet to run across a company that has requested this information from its outbound email security provider.
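Verifying that your mail is actually flowing through the provider is one of the few checks you can automate yourself. A minimal sketch: inspect a delivered message's Received header chain and confirm the provider's relays appear in it. The provider domain and message below are hypothetical placeholders, not any real vendor's infrastructure.

```python
# Sketch: confirm a delivered message traversed your email security
# provider by scanning its Received header chain.
# PROVIDER_DOMAIN and the sample message are hypothetical.
from email import message_from_string

PROVIDER_DOMAIN = "relay.example-provider.com"  # placeholder vendor relay

RAW_MESSAGE = """\
Received: from mx1.relay.example-provider.com (mx1.relay.example-provider.com [192.0.2.10])
\tby inbound.recipient.com; Mon, 6 Oct 2008 10:00:00 -0700
Received: from mail.yourcompany.com ([198.51.100.5])
\tby mx1.relay.example-provider.com; Mon, 6 Oct 2008 09:59:58 -0700
From: alice@yourcompany.com
To: bob@recipient.com
Subject: Quarterly numbers

Body text here.
"""

def routed_through_provider(raw: str, provider_domain: str) -> bool:
    """Return True if any Received hop mentions the provider's relays."""
    msg = message_from_string(raw)
    return any(provider_domain in hop for hop in msg.get_all("Received", []))

print(routed_through_provider(RAW_MESSAGE, PROVIDER_DOMAIN))  # True
```

Received headers can be forged by earlier hops, so this is a sanity check on routing, not proof of the vendor's controls; the audit reports discussed above still have to come from the vendor.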


Policies vs. Plans vs. Procedures vs. Standards

I was catching up with Rob Newby’s blog and this post on dealing with security policies vs. standards/processes caught my eye. Although policies form the foundation for our security programs (at least they should), I find that more often than not they are completely misused by many of my clients. While I’ve noticed definite improvement over the past few years, I still often walk into organizations and see big 3 inch binders full of their security policies. Rob does a great job of breaking these out, but I’d like to take it a step further. I’m going to dig into some nitty-gritty details, but feel free to skip to the end where I tell you why none of this parsing of language matters much.

Here’s how I like to divide up the world of security governance documentation:

Policies are high-level strategic governance with executive sponsorship. Policies should be short and to the point, since those who sign off on them don’t need to know the technical details. An example might be, “we shall monitor all database activity based on the sensitivity of the data and legal or contractual requirements”. Keep in mind that since policies should be signed off on by senior management, you want to keep them generic enough that you don’t have to go back to the CEO/CIO/CFO/COO every time you want to change a firewall configuration or AV product.

The next layer down is the high-level tactical documentation- plans and standards. The security plan is how you intend to achieve the policy, but it’s still not at the level of specific steps. Keeping with our policy above, the plan would specify the contractual requirements, basic data classification, which activity will be monitored, and so on. While plans define how security will do things, standards define how everyone else has to do things. Below that is your specific implementation documentation- processes, guidelines, and procedures.
Here’s where you get into the nitty-gritty of actual implementation and step by step guides. A process is a repeatable series of steps to achieve an objective, while procedures are the specific things you do at each of those steps. Keeping with our example above, the process would define how monitoring occurs (e.g., a third party DAM tool), and the procedure is which bits to flip within the tool.

Yeah, I think that’s a whole lot of paper and a huge time sink myself. Here’s a slightly more pragmatic, and somewhat repetitive, way of looking at things: Policies are still high level strategic governance with executive sponsorship; that never changes. Keep them short and sweet, since that makes them easier to get approved, and you want to have to change them as little as possible. I don’t really care what you call the layers below that, but you should have a security plan for implementing your policies. Plans are managed at the CISO or security director level (whoever is in charge) and change more frequently. You don’t want to have to go to the CEO to change your plans. At this layer you also have your standards- which, if you think about it, are the next layer of governance. CEOs sign off on policies, and CISOs sign off on standards.

Below that is where you detail how the heck you’ll accomplish all this governance. You document processes, list out procedures, and issue guidelines and configuration standards. This stuff will change all the time, and shouldn’t necessarily need the CISO to sign off on it unless it breaks with the layer above. The simpler the better, but if you don’t write this stuff down in an organized way you’ll eventually pay the price. By breaking it down into these three main layers, you can more easily change both the minutiae and the big picture as you adapt to changing conditions.
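The three layers above can be summarized as a tiny data structure- purely illustrative, with hypothetical labels, but it makes the sign-off boundaries concrete: change a policy and you need the executives; change a procedure and you don't.

```python
# Illustrative only: the three-layer governance breakdown from the post
# as a small lookup structure. Layer names and owners are as described
# in the text; the code itself is a hypothetical sketch.
from dataclasses import dataclass, field

@dataclass
class GovernanceLayer:
    name: str
    signed_off_by: str               # who approves changes at this layer
    change_frequency: str            # how often content should change
    documents: list = field(default_factory=list)

layers = [
    GovernanceLayer("Strategic governance", "CEO/CIO/CFO/COO",
                    "rarely", ["policies"]),
    GovernanceLayer("High-level tactical", "CISO / security director",
                    "periodically", ["plans", "standards"]),
    GovernanceLayer("Implementation", "operational teams",
                    "all the time",
                    ["processes", "procedures", "guidelines",
                     "configuration standards"]),
]

def approver(document: str) -> str:
    """Look up who must sign off on changes to a given document type."""
    for layer in layers:
        if document in layer.documents:
            return layer.signed_off_by
    raise KeyError(document)

print(approver("policies"))    # CEO/CIO/CFO/COO
print(approver("procedures"))  # operational teams
```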


Clickjacking Details, Analysis, and Advice

Looks like the cat is out of the bag. Someone managed to figure out the details of clickjacking and released a proof of concept against Flash. With the information out in public, Jeremiah and Robert are free to discuss it. I highly recommend you read Robert’s post, and I won’t try to replicate the content. Rather, I’d like to add a little analysis. As I’ll spell out later, this is a serious browser flaw (phishers will have a field day), but in the big picture of risk it’s only moderate.

Clickjacking allows someone to place an invisible link/button below your mouse as you browse a regular page. You think you’re clicking on a regular link, but really you are clicking someplace the attacker controls that’s hidden from you. Why is this important? Because it allows the attacker to force you to interact with something, without your knowledge, on a page other than the one you’ve been looking at. For example, they can hide a Flash application that follows your mouse around, and when you go to click a link it starts recording audio off your microphone. We have protections in browsers to prevent someone from automatically initiating certain actions, and many websites rely on you manually pressing buttons for actions like transferring large sums of money out of your bank account.

There are two sides from which to look at this exploitation- user and website owner. As a user, if you visit a malicious site (either a bad guy’s site, or a regular site that’s been hit with cross site scripting), the attacker can force you to take a very large range of actions. Any time you click something, the attacker can redirect that click to the destination of their choice, in the context of you as a user. That’s the important part here- it’s like cross site request forgery (really, an enhancement of it) that not only gets you to click, but to execute actions as yourself.
That’s why they can get you to approve Flash applications you might not normally allow, or to perform actions on other sites in the background. As with CSRF, if you are logged in someplace, the attacker can now do whatever the heck they want as long as they know the XY coordinates of what they want you to click.

As a website owner, clickjacking destroys yet more browser trust. When designing web applications (which used to be my job) we often rely on site elements that require manual mouse clicks to submit forms and such. As Robert (RSnake) explains in his post, with clickjacking an attacker can circumvent nonces (a random code added to every form so the website knows you clicked submit from that page, and didn’t just try to submit the form without visiting the page- a common attack technique).

Clickjacking can be used to do a lot of different things- launching Flash or CSRF are only the tip of the iceberg. It relies heavily on iFrames, which are so pervasive we can’t just rip them out. Sure, I turn them off in my browser, but the economics prevent us from doing that on a wide scale (especially since all the advertisers- e.g., Google/Yahoo/MS- will likely fight it). Clickjacking is very difficult to eliminate, although we can reduce its risk under certain circumstances. Because it doesn’t even rely on JavaScript and works with CSS/DHTML, it will take a lot of time, effort, and thought to eliminate. The fixes generally break other things.

After spending some time talking with Robert about it, I’d rate clickjacking as a serious web browser issue (it isn’t quite a traditional vulnerability), but only a moderate risk overall. It will be especially useful for phishers who draw unsuspecting users to their sites, or when they XSS a trusted site (which seems to be happening WAY too often).

Here’s how to reduce your risk as a user:

  • Use Firefox/NoScript and check the setting to restrict iFrames.
  • Don’t stay logged in to sensitive sites if you are browsing around (e.g., your bank, Amazon, etc.). Use something like 1Password or RoboForm to make your life easier when you have to enter passwords.
  • Use different browsers for different things, as I wrote about here. At a minimum, dedicate one browser just for your bank.

As a website operator, you can also reduce risks:

  • Use iFrame busting code as much as possible (yes, that’s a tall order).
  • For major transactions, require user interaction other than a click. For example, my bank always requires a PIN no matter what. An attacker may control my click, but can’t force that PIN entry.
  • Mangle/generate URLs. If the URL varies per transaction, the attacker won’t necessarily be able to force a click on that page.

Robert lays it out: From an attacker’s perspective the most important thing is that a) they know where to click and b) they know the URL of the page they want you to click, in the case of cross domain access. So if either one of these two requirements aren’t met, the attack falls down. Frame busting code is the best defense if you run web-servers, if it works (and in our tests it doesn’t always work).

I should note some people have mentioned security=restricted as a way to break frame busting code, and that is true, although it also fails to send cookies, which might break any significant attacks against most sites that check credentials.

Robert and Jeremiah have been very clear that this is bad, but not world-ending. They never meant for it to get so hyped, but Adobe’s last-minute request not to release caught them off guard. I spent some time talking with Robert about this in private and kept feeling like I was falling down the rabbit hole- every time I tried to think of an easy fix, there was another problem or potential consequence, in large part because we rely on the same mechanisms as clickjacking for normal website usability.
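The "mangle/generate URLs" mitigation could be sketched roughly as follows: derive a fresh, HMAC-signed token per transaction so the form's URL differs every time, leaving the attacker with no stable URL to load into a hidden frame. All names here (the `/transfer/` path, the secret) are hypothetical, not any particular framework's API.

```python
# Sketch of per-transaction URL mangling against clickjacking:
# the URL changes every time, so an attacker cannot pre-frame it.
# Path and secret are hypothetical placeholders.
import hashlib
import hmac
import secrets

SERVER_SECRET = b"keep-this-out-of-source-control"  # placeholder secret

def transaction_url(user_id: str) -> str:
    """Mint a one-off URL for a sensitive action, bound to the user."""
    nonce = secrets.token_hex(8)                    # fresh per transaction
    sig = hmac.new(SERVER_SECRET, f"{user_id}:{nonce}".encode(),
                   hashlib.sha256).hexdigest()[:16]
    return f"/transfer/{nonce}/{sig}"

def verify(user_id: str, url: str) -> bool:
    """Accept the click only if the signature matches this user's token."""
    try:
        _, _, nonce, sig = url.split("/")
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{user_id}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)

url = transaction_url("alice")
print(verify("alice", url))    # True: the user the token was minted for
print(verify("mallory", url))  # False: token is bound to another user
```

As the post notes, this only raises the bar- an attacker who can already read the victim's page (e.g., via XSS) can recover the URL- so it belongs alongside, not instead of, out-of-band confirmation like a PIN.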


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.