DevOpsed to Death

Alan Shimmel asks whether we have beaten “What is DevOps” to death yet. Alan illustrates his point with the more-than-beaten-to-death, we-wish-it-would-go-away-right-now Chuck Norris meme. Those of us who have talked about DevOps for a while are certainly tiring of explaining why it is more than automation. But Alan’s question is legitimate, and I have to say the answer is “No!” We are in the top of the second inning of a game that will play out for years.

I know no amount of coffee will stifle a yawn when practitioners are confronted with yet another DevOps definition. People who are past simple automated builds and moving down the path to continuous integration do not need to be told what DevOps is. What they need is help putting it into practice and doing it better. But DevOps is still a small portion of the IT and development community, and the rest of the folks out there may need to hear it a dozen more times before its importance sinks in. There are very good definitions, but they do not always resonate with developers. Try getting a definition to stick with people who believe they’ll be force choked to death by a Sith Lord before code auto-deploys in an afternoon – not an easy task.

To put this into context with other development trends, compare it to Agile. Within the last year I have had half a dozen inquiries on how to start with Agile development. Yes, I have lost count of how many years ago Agile and Scrum were born. Worse, during the RSA conference this year I discussed failed Agile deployments with a score of firms. Most fell flat on their faces because they missed one or two of the most basic requirements of what it means to be Agile. If you think you can run a development cycle from a 200-page specification document and still be Agile, you’re a failure waiting to happen. They failed on the basics, not the hard stuff.

From a security perspective, I have been talking about Database Activity Monitoring and its principal use cases for the last decade. Still, every few months I get asked “How does DAM work?” And don’t even bother asking Rich about DLP – he gets questions every week. We have repetitive strain injuries from slapping our foreheads in disbelief at the same basic questions, but firms still need help with mature technologies like encryption, firewalls, DAM, DLP, and endpoint security. DevOps is still “cutting edge” for Operations at large, and people will be asking how DevOps works for a very long time to come.


Why I design for one cloud at a time

Putting all your eggs in one basket is always a little disconcerting, and anyone who works with risk is wary of reducing options. So I am never surprised when clients ask about alternative cloud providers and try to design cloud-agnostic applications. Personally I take a different view. Designing cloud-agnostic applications is like building an entirely self-sufficient home because you don’t want to be locked into the local utilities, weather conditions, or environment. Sure, you could try, but the tradeoffs would be immense – especially cost. The key for any such project is to understand the risk of lock-in, and then select appropriate techniques to minimize that risk while still getting the most benefit from the platform you are using.

The only way to really get the cost savings and performance advantages of the cloud is to design specifically for the cloud you are working on. For example, use your provider’s load balancers and auto scale groups rather than designing your own. (Don’t worry, I’ll get to containers in a second.) If you are building or bringing all your own software to the cloud platform, at a certain point, why move to the cloud at all? Practically speaking, you will likely reduce your agility, resiliency, and economic benefits. I am talking in generic terms, but I have designed and reviewed some of these deployments, so this isn’t just analyst handwaving.

One common scenario is data transfer for batch analysis. The cloud-agnostic way is to set up a file server at your cloud provider, SFTP the data in, and then send it off to analysis servers. The file server becomes a major weak point (if it goes down, so does everything), and it likely uses the cloud provider’s most expensive storage (volumes). And all the analysis servers probably need to be running all the time (the file server certainly does), racking up further charges.

The cloud-native approach is to transfer the data directly to object storage (e.g., Amazon S3), which is typically the cheapest storage option and highly resilient. Amazon even has an option to transfer that data into its ridiculously cheap Glacier long-term storage when you are done. Then you can use a tool like Lambda to launch analysis servers (using spot instance pricing, which can shave off another 40% or more) and link everything together with a cloud message queue, where you only pay when you actually pump data through. Everything spins up when data appears and shuts down when it’s finished; you can load as many simultaneous jobs as you want but still pay nearly nothing when you have no active jobs. (A rough sketch of this pattern appears at the end of this post.)

That’s only one example. But I get it – sometimes you really do need to plan for at least some degree of portability. Here’s my personal approach. I tend to go all-in on native cloud features (these days almost always on AWS). I design apps using everything Amazon offers, including SQS, SNS, KMS, Aurora, DynamoDB, etc. However… my core application logic is nearly always self-contained, and I make sure I understand the dependency points. Take my data processing example: the actual processing logic is cloud-agnostic. Only the file transfer and event-driven mechanisms aren’t. Worst case, I could transfer to another service. Yes, there would be overhead, but no more than designing for and running on multiple providers. Even if I used native data analysis services, I’d make sure to document my logic and code well enough to redo it someplace else if needed. But what about containers?
In some cases they really can help with portability, but even with containers you will likely still lock into some of your cloud provider’s proprietary features. For example, it’s just about suicidal to run your database inside containers. Containers need to run on top of something anyway, and certain capabilities simply work better in your provider’s services than in a container. Be smart in your design. Know your lock-in points. Have plans to move if you need to. Microservices (or mini-services) are a great design pattern for knowing your dependency points. But in the end, if you aren’t using nearly every little tweak your cloud provider offers, you are probably spending more, more prone to breakage, and slower than the competitors who do. I can’t move my house, but as long as I hit a certain square footage, my furniture fits just fine.
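To make the event-driven pattern above a bit more concrete, here is a minimal sketch of the glue code, assuming an AWS setup like the one described: an upload to S3 triggers a Lambda function, which drops a job message onto an SQS queue for the analysis workers to pick up. The bucket, queue URL, and message format are hypothetical placeholders for illustration, not details from any real deployment.

    # Minimal sketch (illustrative only): S3 ObjectCreated event -> Lambda -> SQS job queue.
    # The queue URL below is a hypothetical placeholder.
    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/analysis-jobs"  # placeholder

    def handler(event, context):
        """Triggered by an S3 upload; enqueue one analysis job per new object."""
        records = event.get("Records", [])
        for record in records:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            sqs.send_message(
                QueueUrl=QUEUE_URL,
                MessageBody=json.dumps({"bucket": bucket, "key": key}),
            )
        # Analysis workers (e.g., spot instances polling the queue) only run while jobs exist.
        return {"queued": len(records)}

The workers simply poll the queue and shut themselves down when it empties, which is what keeps the idle cost near zero.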


Incite 11/4/2015: The Taper

As I mentioned, I’m running a half marathon for Team in Training to defeat blood cancers. I’ve raised a bunch of money and still appreciate any donations you can make. I’m very grateful to have made it through my training in one piece (mostly), and ready to go. The race is this coming Saturday, and the final two weeks of training are referred to as the taper, when you recover from months of training and get ready to race. This will be my third half, so by this point in the process I’m pretty familiar with how I feel, which is largely impatient. Starting about a month out, I don’t want to run any more because my body starts to break down a bit after 250+ miles of training. I’m ready to rest when the taper starts – I need to heal and make sure I’m ready to run the real deal. I want to get the race over with and then move on with my life. Training can be a bit consuming, and I look forward to sleeping in on a Sunday morning instead of a 10-12 mile training run. It’s not like I’m going to stop running, but I want to be a bit more balanced. I’m going to start cycling (my holiday gift to myself will be a bike) and get back to my 3x weekly yoga practice to switch things up a bit.

The taper is actually a pretty good metaphor for navigating life transitions. Transitions are happening all the time. Sometimes it’s a new job, starting a new hobby, learning something new, relocating, or anything really that shakes up the status quo. Some people have very disruptive transitions, which not only shake their foundations but also unsettle everything around them. To live you need to figure out how to move through these transitions – we are all constantly changing and evolving, and every decade or so you emerge a different person, whether you like it or not. Even if you don’t want to change, the world around you is changing, and it forces you to adapt. But if you can be aware enough to sense a transition happening, you can taper and make things more graceful – for everyone.

So what does that even mean? When you are ready for a change, you likely want to get on with it. But another approach is to slow down, rest a bit, take a pause, and prepare everyone around you for what’s next. I’ve mentioned the concept of slowing down to speed up before, and that’s what I’m talking about. When running a race, you need to slow down in the two weeks prior to make sure you have the energy to do your best on race day. In life, you need to slow down before a key transition and make sure you and those impacted are sufficiently prepared. That requires patience, and that’s a challenge for me and most of the people I know. You don’t want to wait for everyone around you to be ready. You want to get on with it and move forward, whatever that means to you. Depending on the nature of the transition, your taper could be a few weeks, or it could be a lot longer. Just remember that unless you are a total hermit, transitions reverberate with those around you. It can be a scary time for everyone else, because they are not in control of your transitions, but are along for the ride. So try to taper as you get ready to move forward. I try to keep in mind that it’s not a race, even when it’s a race.

–Mike

Photo credit: “graff la rochelle mur aytre 7” originally uploaded by thierry llansades

Thanks to everyone who contributed to my Team in Training run to battle blood cancers. We’ve raised almost $6,000 so far, which is incredible. I am overwhelmed with gratitude.
You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • Nov 3 – Get Your Marshmallows
  • Oct 19 – re:Invent Yourself (or else)
  • Aug 12 – Karma
  • July 13 – Living with the OPM Hack
  • May 26 – We Don’t Know Sh–. You Don’t Know Sh–
  • May 4 – RSAC wrap-up. Same as it ever was.
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It’s Not My Fault!
  • January 26 – 2015 Trends
  • January 15 – Toddler
  • December 18 – Predicting the Past
  • November 25 – Numbness

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Building Security into DevOps
  • The Role of Security in DevOps
  • Tools and Testing in Detail
  • Security Integration Points
  • The Emergence of DevOps
  • Introduction

Building a Threat Intelligence Program
  • Using TI
  • Gathering TI
  • Introduction

Network Security Gateway Evolution
  • Introduction

Recently Published Papers
  • Pragmatic Security for Cloud and Hybrid Networks
  • EMV Migration and the Changing Payments Landscape
  • Applied Threat Intelligence
  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • Securing Enterprise Applications
  • Secure Agile Development
  • The Future of Security

Incite 4 U

  • Getting started in InfoSec: Great post/resource here from Lesley Carhart about how to get started in information security. Right up at the top the key


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input are factored into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free, without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.