DDoS Attack Overblown

Sam Biddle at Gizmodo says:

This guy, Prince said, could back up CloudFlare’s claims. This really was Web Dresden, or something. After an inquiry, I was ready to face vindication. Instead, I received this note from a spokesperson for NTT, one of the backbone operators of the Internet: “Hey Sam, nice to hear from you. I’m afraid that we don’t have anything we can share that substantiates global effects. I’m sure you read the same 300gbps figure that I did, and while that’s a massive amount of bandwidth to a single enterprise or service provider, data on global capacities from sources like TeleGeography show lit capacities in the tbps range in most all regions of the world. I side with you questioning if it shook the global internet. Chris”

Bad for SpamHaus, but unnoticeable to everyone else.


Estimating Breach Impact

Russell Thomas and a bunch of his friends recently posted a research paper called How Bad Is It? – A Branching Activity Model to Estimate the Impact of Information Security Breaches, which attempts to provide a structure for estimating the impact of a breach. This work is necessary – we have no benchmarks, or even consensus, about what breached organizations should even be counting. This is an academic research paper, and to be honest I am not a big fan of academic papers. I have pretty bad TL;DR syndrome. But I did check out the introduction, and noted some interesting tidbits:

Empirical research on breach losses often use ad hoc taxonomies for “quantified” and “non-quantified” costs as part of surveys or interviews of subject matter experts. There is no theoretical basis for these taxonomies, which limits their generality and research significance. Finally, several consulting firms publish survey-based studies. Most notable is the “Cost of a Data Breach” reports by Ponemon Institute (Ponemon Institute 2012). These survey results are widely publicized and widely quoted, even in policy discussions, but they have no foundation in theoretical or empirical academic research, and they have very serious methodological flaws (Thomas 2011a). In summary, without some reliable and robust breach impact estimation methods, quantified information security will continue to be a “weak hypothesis” (Verendel 2009).

This is true. It warms my cockles (can I say that out loud?) that these guys are calling out survey monkeys like Ponemon, because the industry seems set on using those numbers to justify what we do. But I have to say I’m a little disappointed by Russell’s attempt to jump on the indicators of compromise bandwagon in his New School blog post on the paper. He unnecessarily concocts a meaningless description of this breach impact estimation model by mentioning Indicators of Impact. Huh? Total non-sequitur, though I do understand the desire to capitalize on the popularity and momentum of the phrase Indicators of XXX.

But let’s call this what it is: an attempt to build an academically rigorous model to estimate the cost of a breach, based upon factors that can be reasonably estimated and quantified. It would be nice to see this kind of stuff added to GRC platforms and the like, to enable us to track and estimate these costs. Ultimately I believe that as we mature as a profession we will need this kind of research to help define a common vernacular for estimating loss.

Photo credit: “Impact Hoodie Design for 2006” originally uploaded by Will Foster


Defending Cloud Data: How IaaS Storage Works

Infrastructure as a Service storage can be insanely complex when you include operational and performance requirements. First you need to create a resource pool, which might itself be a pool of virtualized and abstracted storage, and then you need to tie it all together with orchestration to support the dynamic requirements of the cloud – such as moving running virtual machines between servers, instantly snapshotting multi-terabyte virtual drives, and other insanity.

For security we don’t need to know all the ins and outs of cloud storage, but we do need to understand the high-level architecture and how it affects our security controls. And keep in mind that the implementations of our generic architecture vary widely between different public and private cloud platforms. Public clouds are roughly equivalent to provider-class private clouds, except that they are designed to support multiple external tenants. We will focus on private cloud storage, with the understanding that public clouds are about the same except that customers have less control.

IaaS Storage Overview

Here’s a diagram to work through:

At the lowest level is physical storage. This can be nearly anything that satisfies the cloud’s performance and storage requirements. It might be commodity hard drives in commodity rack servers. It could be high-performance SSD drives in high-end specialized datacenter servers. But really it could be nearly any storage appliance/system you can think of.

Some physical storage is generally pooled by a virtual storage controller, like a SAN. This is extremely common in production clouds but isn’t limited to traditional SAN. Basically, as long as you can connect it to the cloud storage manager, you can use it. You could even dedicate certain LUNs from a larger shared SAN to the cloud, while using other LUNs for non-cloud applications. If you aren’t a storage person, just remember there might be some sort of controller/server above the hard drives, outside your cloud servers, that needs to be secured.

That’s the base storage. On top of that we then build out:

Object Storage

Object storage controllers (also called managers) connect to assigned physical or virtual storage and manage orchestration and connectivity. Above this level they communicate using APIs. Some deployments include object storage connectivity software running on distributed commodity servers to tie the servers’ hard drives into the storage pool.

Object storage controllers create virtual containers (also called buckets) which are assigned to cloud users. A container is a pool of storage in which you can place objects (files). Every container stores each bit in multiple locations. This is called data dispersion, and we will talk more about it in a moment.

Object storage is something of a cross between a database and a file share. You move files into and out of it; but instead of being managed by a file system you manage it with APIs, at an abstracted layer above whatever file systems actually store the data. Object storage is accessed via APIs (almost always RESTful HTTP APIs) rather than classic network file protocols, which offers tremendous flexibility for integration into different applications and services. Object storage includes logic below the user-accessible layer for features such as quotas, access control, and redundancy management.
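To make “manage it with APIs” concrete, here is a minimal sketch (not from the original post) using Python’s boto3 library speaking the S3 API against a generic object store. The endpoint URL, bucket name, credentials, and file names are hypothetical placeholders; any S3- or Swift-compatible object store follows the same put/get pattern over HTTP.

```python
import boto3

# Minimal sketch: storing and retrieving an object through a RESTful object storage API.
# boto3's S3 client is used purely as an illustration; the endpoint, bucket, and
# credentials below are hypothetical placeholders for your own object storage controller.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-cloud.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "tenant-backups"  # a "container" (bucket) assigned to this cloud user

# Put an object (file) into the container -- an HTTP PUT under the hood, no file system semantics
with open("q1-report.pdf", "rb") as f:
    s3.put_object(Bucket=bucket, Key="reports/q1-report.pdf", Body=f)

# Retrieve it later -- an HTTP GET returning the object's bytes
obj = s3.get_object(Bucket=bucket, Key="reports/q1-report.pdf")
data = obj["Body"].read()
```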
Volume Storage

Volume storage controllers (also called managers) connect to assigned physical (or virtual) storage and manage orchestration and connectivity. Above this level they communicate using APIs. The volume controller creates volumes on request and assigns them to specific cloud instances. To use traditional virtualization language, it creates a virtual hard drive and connects it to a virtual machine. Data dispersion is often used to provide redundancy and robustness.

A volume is essentially a persistent virtual hard drive. It can be of any size supported by the cloud platform and underlying resources, and a volume assigned to a virtual machine exists until it is destroyed (note that tearing down an instance often automatically returns the associated volume storage to the free storage pool).

Physical servers run hypervisors and cloud connectivity software to tie them into the compute resource pool. This is where instances (virtual machines) run. These servers typically have local hard drives which can be assigned to the volume controller to expand the storage pool, or used locally for non-persistent storage. We call this ‘ephemeral’ storage, and it’s great for swap files and other higher-performance operations that don’t require the resiliency of a full storage volume. If your cloud uses this model, the cloud management software places swap on these local drives. When you move or shut down your instance this data is always lost, although it might be recoverable until overwritten.

We like to discuss volumes as if they were virtual hard drives, but they are a bit more complex. Volumes may be distributed, with data dispersed across multiple physical drives. There are also implications, which we will consider later, for how volumes interact with object storage and features such as snapshots and live migrations.

How object and volume storage interact

Most clouds include both object and volume storage, even if object storage isn’t available directly to users. Here are the key examples:

A snapshot is a near-instant backup of a volume that is moved into object storage. The underlying technology varies widely and is too complex for my feeble analyst brain, but a snapshot effectively copies a complete set of the storage blocks in your volume into a file stored in an object container which has been assigned to snapshots. Since every block in your volume is likely stored in multiple physical locations, typically 3 or more times, taking a snapshot tells the volume controller to copy a complete set of blocks over to object storage. The operation can take a while, but it looks instantaneous because the snapshot accurately reflects the state of the volume at that point in time, while the volume is still fully usable – running on another set of blocks while the snapshot is moved over (this is a major oversimplification of something that makes my head hurt).

Images are pre-defined storage volumes in object storage, which contain operating systems or other virtual hard drives used to launch instances.
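To ground the volume lifecycle described above, here is a minimal sketch (again, not from the original post) using AWS’s EC2/EBS API via boto3 as a stand-in for any IaaS volume controller. The availability zone, size, instance ID, and device name are hypothetical placeholders; OpenStack and other platforms expose equivalent create/attach/snapshot calls.

```python
import boto3

# Minimal sketch of the volume lifecycle: create a persistent volume, attach it to an
# instance, and snapshot it. AWS EC2/EBS is used purely as an illustration of a generic
# IaaS volume controller API; the zone, size, and instance ID are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Ask the volume controller for a new persistent virtual hard drive
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")
vol_id = volume["VolumeId"]

# Wait until the blocks are provisioned and the volume is usable
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])

# 2. Attach it to a running instance -- connecting the virtual hard drive to the virtual machine
ec2.attach_volume(VolumeId=vol_id, InstanceId="i-0123456789abcdef0", Device="/dev/sdf")

# 3. Snapshot it -- the provider copies the volume's blocks into object storage behind the scenes
snapshot = ec2.create_snapshot(VolumeId=vol_id, Description="Point-in-time backup of the data volume")
print(snapshot["SnapshotId"], snapshot["State"])  # the copy completes asynchronously
```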


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.