3 Envelopes

I really enjoyed Thom Langford’s recent post Three Envelopes, One CISO, on the old parable about preparing three envelopes to defer blame for bad things – until you cannot shift it, when you take the bullet. In the CISO’s case it is likely to be a breach. So first blame your predecessor, though I have found that only works for about 6 months. If you get that long a honeymoon, by the time you have been in the seat for 6 months it is your problem. For the second breach, blame your team. Of course this is limiting – you need them to work for you – but it’s a question of survival at this point, right? When the third breach comes around, you prepare 3 new envelopes, because you are done. Though most folks only get one breach now – especially if they bungle the response.

But that’s not Thom’s point, nor is it mine. He brings the discussion back around to the recent Sony breach. Everyone seems to want to draw and quarter a CISO for all sorts of ills. It may be well deserved, but the rush to judgement doesn’t really help anything, does it? Especially now that it seems to have been a highly sophisticated attack, which Mandiant called ‘unprecedented’. So did the CISOs do themselves any favors? Probably not. But as Thom says:

We seem to want to chop down the CISO as soon as something goes wrong, rather than seeing it in the context of the business overall. Let’s wait and see what actually happened before declaring his Career Is So Over, and also appreciate that security breaches are not always the result of poor information security, but often simply a risk taken by the business that didn’t pay off.

And with that I open the second envelope Rich gave me when I started at Securosis…

Photo credit: “tiny envelope set: radioactive flora” originally uploaded by Angela


Monitoring the Hybrid Cloud: Migration Planning

We will wrap up this series with a migration path to monitoring the hybrid cloud. Whether you choose to monitor the cloud services you consume, or go all the way and create your own SOC in the cloud, these steps will get you there. Let’s dive in.

Phase 1: Deploy Collectors

The first phase is to collect and aggregate the data. You need to decide how to deploy event collectors – including agents, ‘edge’ proxies, and reverse proxies – to gather information from cloud resources. Your goal is to gather events as quickly and easily as possible, so start with what you know. That basically means leveraging the capabilities of your current security solution(s) to get these new events into the existing system. The complexity is not in understanding these new data sources – flow data and syslog output are well understood. The challenge comes in adapting collection methods designed for on-premises services to a cloud model.

If an agent or collector works with your cloud provider’s environment, either to consume cloud vendor logs or those created by your own cloud-based servers, you are in luck. If not, you will likely find yourself rerouting traffic to and/or from the cloud into a network proxy to capture events. Depending on the type of cloud service (such as SaaS or IaaS) you will have various means to access event data (such as logs and API connectivity), as outlined in our solution architectures post. We suggest collecting data directly from the cloud provider whenever possible, because much of that data is unavailable from instances or applications running inside the cloud. Monitoring agents can be deployed in IaaS or private cloud environments, where you control the full stack. But in other cloud models, particularly PaaS and SaaS, agents are generally not viable. There you need to rely on proxies, which can collect data from all types of cloud deployments, provided you can route traffic through their data-gathering choke points.
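Leveraging an existing SIEM usually comes down to normalizing whatever the cloud vendor emits – often JSON – into something the current collectors already parse, such as syslog. Here is a minimal Python sketch; the JSON field names (eventTime, eventName, sourceIPAddress) are hypothetical stand-ins for your provider’s actual schema:

```python
import json
from datetime import datetime, timezone

def to_syslog_line(raw_event, hostname="cloud-collector"):
    """Normalize a cloud provider's JSON audit event into an RFC 5424-style
    syslog line an existing SIEM collector can ingest. The input field names
    are assumptions -- adapt them to your provider's schema."""
    event = json.loads(raw_event)
    ts = event.get("eventTime", datetime.now(timezone.utc).isoformat())
    pri = 134  # facility local0 (16) * 8 + severity informational (6)
    return (f"<{pri}>1 {ts} {hostname} cloud-audit - - - "
            f"action={event.get('eventName', 'unknown')} "
            f"src={event.get('sourceIPAddress', '-')}")

sample = ('{"eventTime": "2014-12-01T10:00:00Z", '
          '"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.7"}')
print(to_syslog_line(sample))
```

A shim like this sits in the collector tier, so the downstream SIEM never needs to know the events originated in the cloud.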
It is decidedly suboptimal to insert choke points in your cloud network, but it may be necessary. Finally, you might instead be able to use remote API calls from an on-premises collector to pull events directly from your cloud provider. Not all cloud providers offer this access, and if they do you will likely need to code something yourself from their API documentation. Once you understand what is available you can figure out whether your source provides sufficiently granular data. Each cloud provider/vendor API, and each event log, offers a slightly different set of events in a slightly different format. Be prepared to go back to the future – you may need to build a collector based on sample data from your provider, because not all cloud vendors/providers offer logs in syslog or a similarly convenient format. Also look for feed filter options to screen out events you are not interested in – cloud services are excellent at flooding systems with (irrelevant) data.

Our monitoring philosophy hasn’t changed: collect as much data as possible. Get everything the cloud vendor provides as the basis for security monitoring, then fill in the deficiencies with agents, proxy filters, and cloud monitoring services as needed. This is a very new capability, so you will likely need to build API interface layers to your cloud service providers.

Finally, keep in mind that using proxies and/or forcing cloud traffic through appliances at the ‘edge’ of your cloud is likely to require re-architecting both on-premises and cloud networks to funnel traffic in and out of your collection point. This also requires that disconnected devices (phones/tablets and laptops not on the corporate network) be configured to send traffic through the choke points / gateways, and that cloud services be configured to reject any direct access which bypasses these portals. If an inspection point can be bypassed it cannot effectively monitor security.
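As a concrete illustration of that kind of feed filtering, here is a small Python sketch. The event fields and action names are assumptions for illustration, not any particular provider’s schema:

```python
def filter_feed(events, keep_actions, drop_noisy_sources=frozenset()):
    """Screen a noisy cloud event feed down to what the SOC cares about,
    before it hits the SIEM. Field names ('action', 'source') are
    hypothetical -- adapt them to your provider's event schema."""
    return [e for e in events
            if e.get("action") in keep_actions
            and e.get("source") not in drop_noisy_sources]

feed = [
    {"action": "ConsoleLogin", "source": "203.0.113.7"},
    {"action": "DescribeInstances", "source": "10.0.0.4"},  # routine polling noise
    {"action": "DeleteBucket", "source": "203.0.113.7"},
]
print(filter_feed(feed, keep_actions={"ConsoleLogin", "DeleteBucket"}))
```

Filtering this early – at the collector rather than in the SIEM – keeps the irrelevant bulk of cloud telemetry from consuming storage and processing capacity downstream.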
Now that you have figured out your strategy and deployed basic collectors, it is time to integrate these new data sources into the monitoring environment.

Phase 2: Integrate and Monitor Cloud-based Resources

To integrate these cloud-based event sources into the monitoring solution you need to decide which deployment model best fits your needs. If you already have an on-premises SOC platform and supporting infrastructure it may make sense to simply feed the events into your existing SIEM, malware detection, or other monitoring systems. But a few considerations might change your decision.

  • Capacity: Ensure the existing system can handle your anticipated event volume. SaaS and PaaS environments can be noisy, so expect a significant uptick in event volume, and account for the additional storage and processing overhead.
  • Push vs. Pull: Log Management and SIEM systems can collect events as remote systems and agents push events to them. The collector grabs the events, possibly performs some preprocessing, and forwards the stream to the main aggregation point. But what if you cannot run a remote agent to push the data to you? Most cloud events must be pulled from the cloud service via an active API request. While pull requests are secured over HTTPS/TLS or even VPN connections, this doesn’t happen magically – a program or script must initiate the transfer, and it must supply credentials or identity tokens to the cloud service. You need to know whether your current system can initiate the pull request, and whether it can securely manage the remote API credentials necessary to collect data.
  • Data Retention: Cloud services require network access, so you need to plan for when your connection is down – especially given the frequency of DoS attacks and network service outages. Make sure you understand the impact if you cannot collect remote events for a time.
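The pull model can be sketched as a simple drain loop against a paginated API. In this hedged Python example, `fetch_page` is a hypothetical stand-in for your provider’s authenticated API call (credentials and transport security are the caller’s problem here):

```python
def pull_all_events(fetch_page):
    """Drain a paginated pull-style audit API. `fetch_page(token)` stands in
    for a provider API call; it returns (events, next_token), where
    next_token is None on the last page."""
    events, token = [], None
    while True:
        page, token = fetch_page(token)
        events.extend(page)
        if token is None:
            return events

# Stub simulating a three-page API response, purely for illustration.
pages = {None: (["login", "put"], "t1"),
         "t1": (["delete"], "t2"),
         "t2": (["login"], None)}
print(pull_all_events(pages.get))  # ['login', 'put', 'delete', 'login']
```

The point of the stub is the shape of the loop: something on your side must run it on a schedule, hold the API credentials, and cope with pagination – capabilities a push-only SIEM collector may lack.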
If the connection goes down, how long can relevant security data be retained or buffered? You don’t want to lose that data. The good news is that many PaaS and IaaS platforms provide easy mechanisms to archive event feeds to long-term storage, to avoid event data loss, but
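One way to limit collector-side data loss during an outage is a bounded local buffer, sized for the longest disconnection you expect to ride out. A rough Python sketch – the capacity figure is purely illustrative:

```python
from collections import deque

class EventBuffer:
    """Bounded local buffer so events survive a collector outage.
    Oldest events are dropped first once capacity is reached, so size
    max_events for your longest expected outage (value is illustrative)."""
    def __init__(self, max_events=10000):
        self._q = deque(maxlen=max_events)

    def add(self, event):
        self._q.append(event)

    def drain(self):
        """Flush buffered events once the connection returns."""
        out = list(self._q)
        self._q.clear()
        return out

buf = EventBuffer(max_events=3)
for e in range(5):      # connection down: 5 events arrive, capacity is 3
    buf.add(e)
print(buf.drain())      # the two oldest events were dropped
```

A drop-oldest policy is one defensible choice; depending on which events matter most, you might prefer drop-newest or spilling to local disk instead.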


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote before it appears in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be unbiased and valuable to the user community, supplementing industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.