Summary: Distract and Deceive

Today I was sitting in my office, window open, enjoying the cold front that finally shoved the summer heat out of Phoenix. I had an ice pack on my leg because my Achilles tendon has been a little twitchy as I head into the last 8 weeks of marathon training. My wife was going through the mail, walked in, and dropped a nice little form letter from the United States Office of Personnel Management onto my desk.

It's no secret I'm still an active disaster responder on a federal team. And, as previously mentioned, my data was lost in the OPM hack. But my previous notification covered the breach of the employment information database. This notification is for the loss of all security investigation records. Which is cool, because I don't even have a security clearance.

What was on there? Aside from my SSN, every address I've lived at (once going back to childhood, but I think the most recent form only went back 7 years), most of my jobs, all my relatives, and (I think) my wife's SSN. I'm not sure about that last one because I can't remember exactly what year I most recently filled out the form, but I'm pretty sure it was after we were married.

Here's the fun part. OPM just offered me 3 years of identity theft protection. Three. Years. Which I can only assume means my SSN will expire in that time and I'll be safe afterwards. And it must mean China wasn't responsible, because they would use my data for espionage, not identity theft. Right? RIGHT?!?

It's just another example of the old distract-and-deceive strategy to placate. No one involved in intelligence or security thinks for half a second that three years of ID theft protection is meaningful when an SSN is lost – never mind when it (and all my personal info) is lost to a foreign intelligence adversary. But it sounds good in the press, and distracts the many millions of federal workers who don't work in security and don't understand the implications. People who trust the government, their employer.

This isn't limited to the OPM hack – it's really a shitty playbook for the security industry overall. Been hacked? Call it "advanced and persistent", then announce you hired a top-tier incident response firm. It doesn't matter that you used default admin passwords – it's all about looking like you take security seriously, when you don't. Well, didn't. Really. Look at all the breach announcements from the past couple of years. Cut and paste.

And then there are our security tools. Various point technologies, each designed to stop one particular type of attack during a particular time window. Some of them really work. But we don't acknowledge that security is really about stopping adversaries (Gunnar Peterson constantly hammers on this), so when the window for a particular technology closes, the vendors get thrown into a spin cycle – because, let's be honest, their entire livelihood is on the line. Distract. Deceive. Lather. Rinse. Repeat.

Admitting failure is hard. Addressing root causes is hard. Realizing something you built is no longer as valuable as it once was is even harder. Hell, we here at Securosis once spent two years and a couple hundred thousand dollars building something we had to walk away from because the market shifted. That was cash out of my personal pocket – I get it. This isn't a security industry problem, it's basic human behavior. I don't have an issue with someone covering their ass, but when you deceive and distract to protect yourself, and put others at greater risk? Not cool.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich wrote an article at TidBITS on the million dollar iOS exploit.
  • Rich was quoted at Wired on using AI to stop malware.

Favorite Securosis Posts

  • Rich: Incite 11/4/2015 – The Taper. I'm training for my first marathon right now. Well, training for the second time, because I got stomach flu the week of my planned first and had to miss it. My entire life right now is focused on starting my taper on December 6th.

Other Securosis Posts

  • CSA Guidance V4 content on GitHub.
  • DevOps'ed To Death.
  • Why I design for one cloud at a time.
  • Million Dollar iOS Exploit? Maybe.
  • Get Your Marshmallows.
  • Summary: Edumacation.

Favorite Outside Posts

  • Rich: Fast, flexible and free, Linux is taking over the online world. But there is growing unease about security weaknesses. A big WaPo piece on the security state of Linux? I sh*t you not. This is an important article that highlights some of the fundamental tensions at the heart of information security.
  • Adrian: How Carders Use eBay as Virtual ATM. A very clever way to launder money through PayPal. This shouldn't work – the various merchants should match the Zip code of the recipient to the Zip code associated with the credit card. Gas stations and automated kiosks ask for Zip codes for this reason. But I guess some merchants aren't checking.

Research Reports and Presentations

  • Pragmatic Security for Cloud and Hybrid Networks.
  • EMV Migration and the Changing Payments Landscape.
  • Network-based Threat Detection.
  • Applied Threat Intelligence.
  • Endpoint Defense: Essential Practices.
  • Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
  • Security and Privacy on the Encrypted Network.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.

Top News and Posts

  • If the UK government collects browsing data, one day it will be public. How long until you need to register to use the Internet, like they do in cybercafes in China? What an astoundingly bad idea.
  • Pentagon Farmed Out Its Coding to Russia. That's cool. Maybe they outsourced my identity protection to Russia because China supposedly hacked it, and we can let those two fight it out.
  • Chinese Mobile Ad Library Backdoored to Spy on iOS Devices.
  • Xen Patches 'Worst'-Ever Virtual Machine Escape Vulnerability. I wonder which cloud providers this affects?
  • Mozilla Embraces Private Browsing with Tracking Protection in Firefox 42. Safari? Chrome? Not-IE-but-can't-remember-the-name?

Blog Comment of the Week

This week's best comment


CSA Guidance V4 Content on GitHub

A while back we announced that the Cloud Security Alliance contracted us to write the next version of the CSA Guidance. This is actually a community project, not us off writing by ourselves in a corner. The plan is to:

  • Collect feedback on version 3.0 (complete).
  • Post outlines for the updated domains and collect public feedback.
  • Post first drafts for the updated domains and collect more feedback.
  • Post near-final drafts for the last feedback, then complete the final versions.

I'm happy to say the content is now going up on the project site at GitHub. The first draft of the architecture section is up, as is the outline for Domain 5 (data governance). Things will start moving along more quickly from here. The best way to use GitHub at this point is to submit Issues rather than Pull Requests. We can use Issues like comments. Pull Requests are actual edits we would need to merge, and they will be difficult to handle at scale, especially if we don't get consensus on a suggested change. I will periodically update things here on the blog, but you can watch all the real-time editing and content creation on GitHub.
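For anyone who wants to script their feedback, here is a minimal sketch using the GitHub REST API Issues endpoint. The repository path and token are hypothetical placeholders, not the project's confirmed details – check the project site for the real repo before using anything like this.

```python
import requests

# Hypothetical placeholders -- substitute the real CSA Guidance repo
# path and your own GitHub personal access token.
REPO = "example-org/csa-guidance-v4"
TOKEN = "<your-github-token>"

resp = requests.post(
    f"https://api.github.com/repos/{REPO}/issues",
    headers={"Authorization": f"token {TOKEN}"},
    json={
        "title": "Domain 5 outline: clarify data governance scope",
        "body": "Feedback submitted as an Issue (a comment), not a Pull Request.",
    },
)
resp.raise_for_status()
print("Issue created:", resp.json()["html_url"])
```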


DevOpsed to Death

Alan Shimel asks: have we beaten "What is DevOps" to death yet? Alan illustrates his point with the more-than-beaten-to-death, we-wish-it-would-go-away-right-now Chuck Norris meme. Those of us who have talked about DevOps for a while are certainly beginning to tire of explaining why it is more than automation. But Alan's question is legit, and I have to say the answer is "No!" We are in the top of the second inning of a game that will play out for years.

I know no amount of coffee will stifle a yawn when practitioners are confronted with yet another DevOps definition. People who are past simple automated builds and moving down the path to continuous integration do not need to be told what DevOps is. What they need is help practicing how to do it better. But DevOps is still a small portion of the IT and development community, and the rest of the folks out there may need to hear it a dozen more times before its importance sinks in. There are very good definitions, but they do not always resonate with developers. Try getting a definition to stick with people who believe they'll be force choked to death by a Sith Lord before code auto-deploys in an afternoon – not an easy task.

To put this into context with other development trends, compare it to Agile. Within the last year I have had half a dozen inquiries on how to start with Agile development. Yes, I have lost count of how many years ago Agile and Scrum were born. Worse, during the RSA conference this year, I discussed failed Agile deployments with a score of firms. Most fell flat on their faces because they missed one or two of the most basic requirements of what it means to be Agile. If you think you can run a development cycle from a 200-page specification document and still be Agile, you're a failure waiting to happen. They failed on the basics, not the hard stuff.

From a security perspective, I have been talking about Database Activity Monitoring and its principal use cases for the last decade. Still, every few months I get asked "How does DAM work?" And don't even bother asking Rich about DLP – he gets questions every week. We have repetitive strain injuries from slapping our foreheads in disbelief at the same basic questions, but firms still need help with mature technologies like encryption, firewalls, DAM, DLP, and endpoint security. DevOps is still "cutting edge" for Operations at large, and people will be asking how DevOps works for a very long time to come.


Why I design for one cloud at a time

Putting all your eggs in one basket is always a little disconcerting. Anyone who works with risk is wary of reducing options. So I am never surprised when clients ask about alternative cloud providers and try to design cloud-agnostic applications.

Personally I take a different view. Designing cloud-agnostic applications is like building an entirely self-sufficient home because you don't want to be locked into the local utilities, weather conditions, or environment. Sure, you could try, but the tradeoffs would be immense. Especially cost. The key for any such project is to understand the risk of lock-in, and then select appropriate techniques to minimize that risk while still getting the most benefit from the platform you are using.

The only way to really get the cost savings and performance advantages of the cloud is to design specifically for the cloud you are working on. For example, use their load balancers and auto scale groups rather than designing your own. (Don't worry, I'll get to containers in a second.) If you are building or bringing all your own software to the cloud platform, at a certain point, why move to the cloud at all? Practically speaking, you will likely reduce your agility, resiliency, and economic benefits. I am talking in generic terms, but I have designed and reviewed some of these deployments, so this isn't just analyst handwaving.

For example, one common scenario is data transfer for batch analysis. The cloud-agnostic way is to set up a file server at your cloud provider, SFTP the data in, and then send it off to analysis servers. The file server becomes a major weak point (if it goes down, so does everything), and it likely uses the cloud provider's most expensive storage (volumes). And all the analysis servers probably need to be running all the time (the file server certainly does), also racking up charges.

The cloud-native approach is to transfer the data directly to object storage (e.g., Amazon S3), which is typically the cheapest storage option and highly resilient. Amazon even has an option to shift the data into its ridiculously cheap Glacier long-term storage when you are done. Then you can use a tool like Lambda to launch analysis servers (using spot instance pricing, which can shave off another 40% or more) and link everything together with a cloud message queue, where you only pay when you actually pump data through. Everything spins up when data appears and shuts down when it's finished; you can load as many simultaneous jobs as you want, but still pay nearly nothing when you have no active jobs. (There's a small sketch of this event-driven pattern at the end of this post.)

That's only one example. But I get it – sometimes you really do need to plan for at least some degree of portability. Here's my personal approach. I tend to go all-in on native cloud features (these days almost always on AWS). I design apps using everything Amazon offers, including SQS, SNS, KMS, Aurora, DynamoDB, etc. However... my core application logic is nearly always self-contained, and I make sure I understand the dependency points. Take my data processing example: the actual processing logic is cloud-agnostic. Only the file transfer and event-driven mechanisms aren't. Worst case, I could transfer to another service. Yes, there would be overhead, but no more than designing for, and running on, multiple providers. Even if I used native data analysis services, I'd just make sure I document my logic and code well enough to rebuild them someplace else if needed.

But what about containers? In some cases they really can help with portability, but even when using containers you will likely still lock into some of your cloud provider's proprietary features. For example, it's just about suicidal to run your database inside containers. Containers need to run on top of something anyway, and certain capabilities simply work better in your provider than in a container.

Be smart in your design. Know your lock-in points. Have plans to move if you need to. Micro or mini services is a great design pattern for knowing your dependency points. But in the end, if you aren't using nearly every little tweak your cloud provider offers, you are probably spending more, more prone to breakage, and slower than the competition who does. I can't move my house, but as long as I hit a certain square footage, my furniture fits just fine.
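To make that event-driven pattern concrete, here is a minimal sketch of the glue code: a Lambda handler that fires on S3 object-creation events and drops one job per uploaded file onto an SQS queue for the analysis workers. The queue URL and message format are hypothetical placeholders, not a prescription.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical placeholder -- substitute your own queue URL.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/analysis-jobs"

def handler(event, context):
    """Triggered by S3 ObjectCreated events. Enqueues one analysis job
    per uploaded object, so workers only run when data actually arrives."""
    records = event.get("Records", [])
    for record in records:
        job = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(job))
    return {"queued": len(records)}
```

The point is the shape of the design: nothing polls, nothing sits idle, and queue depth can drive how many spot workers you spin up.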


Incite 11/4/2015: The Taper

As I mentioned, I'm running a half marathon for Team in Training to defeat blood cancers. I've raised a bunch of money and still appreciate any donations you can make. I'm very grateful to have made it through my training in one piece (mostly), and ready to go. The race is this coming Saturday, and the final two weeks of training are referred to as the taper, when you recover from months of training and get ready to race.

This will be my third half, so by this point in the process I'm pretty familiar with how I feel, which is largely impatient. Starting about a month out, I don't want to run any more, because my body starts to break down a bit after 250+ miles of training. I'm ready to rest when the taper starts – I need to heal and make sure I'm ready to run the real deal. I want to get the race over with and then move on with my life. Training can be a bit consuming, and I look forward to sleeping in on a Sunday morning instead of a 10-12 mile training run. It's not like I'm going to stop running, but I want to be a bit more balanced. I'm going to start cycling (my holiday gift to myself will be a bike) and get back to my 3x weekly yoga practice to switch things up a bit.

The taper is actually a pretty good metaphor for navigating life transitions. Transitions are happening all the time. Sometimes it's a new job, starting a new hobby, learning something new, relocating, or anything that shakes up the status quo. Some people have very disruptive transitions, which not only shake their own foundations but unsettle everything around them. To live you need to figure out how to move through these transitions – we are all constantly changing and evolving, and every decade or so you emerge a different person, whether you like it or not. Even if you don't want to change, the world around you is changing, and forces you to adapt. But if you can be aware enough to sense a transition happening, you can taper, and make things more graceful – for everyone.

So what does that even mean? When you are ready for a change, you likely want to get on with it. But another approach is to slow down, rest a bit, take a pause, and prepare everyone around you for what's next. I've mentioned the concept of slowing down to speed up before, and that's what I'm talking about. When running a race, you need to slow down in the two weeks prior, to make sure you have the energy to do your best on race day. In life, you need to slow down before a key transition, to make sure you and those impacted are sufficiently prepared. That requires patience, which is a challenge for me and most of the people I know. You don't want to wait for everyone around you to be ready. You want to get on with it and move forward, whatever that means to you.

Depending on the nature of the transition, your taper could be a few weeks, or it could be a lot longer. Just remember that unless you are a total hermit, transitions reverberate with those around you. It can be a scary time for everyone else, because they are not in control of your transitions, but are along for the ride. So try to taper as you get ready to move forward. I try to keep in mind that it's not a race, even when it's a race.

–Mike

Photo credit: "graff la rochelle mur aytre 7" originally uploaded by thierry llansades

Thanks to everyone who contributed to my Team in Training run to battle blood cancers. We've raised almost $6,000 so far, which is incredible. I am overwhelmed with gratitude.
You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • Nov 3 – Get Your Marshmallows
  • Oct 19 – re:Invent Yourself (or else)
  • Aug 12 – Karma
  • July 13 – Living with the OPM Hack
  • May 26 – We Don't Know Sh–. You Don't Know Sh–
  • May 4 – RSAC wrap-up. Same as it ever was.
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It's Not My Fault!
  • January 26 – 2015 Trends
  • January 15 – Toddler
  • December 18 – Predicting the Past
  • November 25 – Numbness

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Building Security into DevOps

  • The Role of Security in DevOps
  • Tools and Testing in Detail
  • Security Integration Points
  • The Emergence of DevOps
  • Introduction

Building a Threat Intelligence Program

  • Using TI
  • Gathering TI
  • Introduction

Network Security Gateway Evolution

  • Introduction

Recently Published Papers

  • Pragmatic Security for Cloud and Hybrid Networks
  • EMV Migration and the Changing Payments Landscape
  • Applied Threat Intelligence
  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • Securing Enterprise Applications
  • Secure Agile Development
  • The Future of Security

Incite 4 U

  • Getting started in InfoSec: Great post/resource here from Lesley Carhart about how to get started in information security. Right up at the top the key


Million Dollar iOS Exploit? Maybe.

I wrote an article over at TidBITS today on the news that Zerodium paid $1M for an iOS exploit. There are a few dynamics working in favor of us normal iOS users. While those who purchased the bug will have incentives to use it before Apple patches it, the odds are they will still restrict themselves to higher-value targets. The more something like this is used, the greater the chance of discovery. That also means there are reasonable odds that Apple can get their hands on the exploit – possibly through a partner company, or by focusing their own internal security research efforts. And the same warped dynamics that allow a company like Zerodium to exist also pressure it to exercise a little caution. Selling to a criminal organization that profits via widespread crime is far noisier than selling quietly to government agencies out to use it for spying.

In large part this is merely a big publicity stunt. Zerodium is a new company, and this is one way to recruit both clients and researchers. There is no bigger target than iOS, and even if they lose money on this particular deal, they have certainly placed themselves on the map. To be honest, part of me wonders whether they really found an exploit in the first place. In their favor: if they claim the exploit and don't have it, odds are they lose all credibility with their target market. On the other hand, they announced the winner right at the expiration of the contest. Or maybe no one sold them the bug, and they found it themselves (this is former Vupen people we are talking about) – then they don't have to pay a winner, can still sell the bug, and can attract future exploit developers with the promise of massive payouts. But really, I know nothing, and am just having fun speculating. Oh, what a tangled web we weave.


Get Your Marshmallows

Last week we learned that not only did Symantec mess up managing their root SSL certificates, but they also botched their audit so badly that Google may remove their certificates from Chrome and other products. This is just one example in a long history of security companies failing to practice what they preach. From poor code development practices to weak internal controls, the only new thing in this instance is the combination of getting caught, potential consequences, and a lack of wiggle room. Watch or listen:


Summary: Edumacation

For those who skip the intro, the biggest security news this week was the passage of CISA, Oracle's… interesting… security claims, more discussion of encryption weirdness from the NSA, and security research getting a DMCA exemption. All these stories are linked below.

Yesterday I hopped in the car, drove over to the kids' school, and participated in the time-honored tradition of the parent-teacher conference. I'm still new to this entire "kids in school" thing, with one in first grade and another in kindergarten. Before our kids ever started school, I assumed the education system would fail to prepare them for their technological future. That's an acceptance of demographic realities, not any particular criticism. Look around at your non-IT friends and ask how many of them really understand technology and its fundamental underpinnings. Why should teachers be any different? As large a role as technology plays in every aspect of business and daily life, our society still hasn't crossed the threshold where a majority of the population knows the fundamentals, beyond surface consumption. That is changing, and will continue to change, but it is a multigenerational shift. And even then, I don't think everyone will (or needs to) understand the full depths of technology like many of us do, but there are entire categories of fundamentals which society will eventually fully integrate – just as we do now with reading, writing, and basic science.

Back to the parent-teacher conference. During the meeting one teacher handed us a paper with 'recommended' iPad apps, because they now assume most students have access to an iPad or iPhone. When she handed it over she said, "Here's what our teachers recommend instead of 'Minecraft'." What?!? This was a full-stop moment for me. Minecraft is one of the single best screen-based tools to teach kids logical thinking and creativity. And yet the school system is actively discouraging Minecraft. Which is a particularly mixed message, because I think Minecraft is integrated into other STEM activities (they are in a STEM school), but I need to check.

The apps on the list aren't terrible. Some are quite good. The vast majority are reading and math focused, but there are also a few science and social studies/atlas style apps and games, and everything is grade-appropriate. There are even some creativity apps, like video makers. On the upside, I think providing a list like this is an exceptionally good idea. Not every parent spends all day reading and writing about technology. On the other hand, nearly all the apps are, well, traditional. There's only one coding app on the list. Most of the apps are consumption focused, rather than creation focused.

I'm not worried about my kids. They have been immersed in technology since before birth, with an emphasis on building and creating (and sure, they also consume a ton). They also have two parents who work(ed) in IT, and a ridiculously geeky dad who builds Halloween decorations with microcontrollers. As for everyone else? Teachers will catch up. Parents will catch up. Probably not for most of my kids' peers, but certainly by the time they have children themselves. It takes time for such massive change, and it's already better than what I saw my 20-year-old niece experience when she ran through the same school district.

I still can't help but think of some major missed opportunities. For example, I was… volunteered… to help teach Junior Achievement in the school.
It's a well-structured program to introduce kids to the underpinnings of a capitalist society. From participating in HacKid, it looks like there is huge potential to develop a similar program for technology. Some schools, especially in places like Silicon Valley, already have active parents bringing real-world experience into classrooms. It sure would be nice to have something like this on a national scale – beyond 'events' like the annual Hour of Code week. And while we're at it, we should probably have a program so kids can teach their parents online safety. Because I'm pretty sure most of them intuitively understand it better than most parents I meet.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • David Mortman is giving a talk on DevOps security next week.
  • Dave Lewis over at CSO Online on Groundhog Day security. He also mentions LEGOs and Munich.
  • Mike claims some video of him talking about some security thing will be on the web sometime soon. All I know is it will be at Dark Reading, and he isn't always great on the details.

Other Securosis Posts

  • The Economics of Cloud Security.
  • Hybrid Clouds: An Ugly Reality.
  • How I got a CISSP and ended up nominated for the Board of Directors.
  • Chewie, We're Home.

Favorite Outside Posts

  • Adrian Lane: OMG, the machines are breeding! Mankind is doomed! DOOMED!!! Robert Graham offered a little tidbit on how his Tesla's WiFi behaves while trolling the security press. And we're glad he got a new car.
  • David Mortman: Josh Corman and John Willis on containers and supply chains at the DevOps Enterprise Summit.
  • Rich: Telecom companies track everything about you, and sell it. This is why I care so much about privacy. Even the NSA has to go through some nominal process before they can stick a location tracker and packet sniffer in your friggin' pocket.
  • Mike: Porn websites are the top mobile infection vector, 2015 report shows. With porn, mobile, and infection in the title, how could this not be my favorite link this week?

Research Reports and Presentations

  • Pragmatic Security for Cloud and Hybrid Networks.
  • EMV Migration and the Changing Payments Landscape.
  • Network-based Threat Detection.
  • Applied Threat Intelligence.
  • Endpoint Defense: Essential Practices.
  • Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
  • Security and Privacy on the Encrypted Network.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.

Top News and Posts

  • CISA Passes Senate Without Addressing Privacy Concerns.
  • New DMCA Exemption is a Positive Step for Security Researchers. Two regulatory stories this week. One good (this one), and one bad (CISA).
  • The NSA is abandoning elliptic curve crypto. That is a


The Economics of Cloud Security

I have talked a lot about this, but I don't think I've ever posted it here on the blog. I am consistently amused by people who fear moving to the cloud (and by people who take random potshots at the cloud) because they are worried about a lack of security. The reality is that cloud providers have a massive financial incentive to be more secure than you. To provide you a rock-solid foundation to build on – and, as always, you are free to screw up whatever you want from there. Why? Because if they suffer a major security failure, it will lose them business, and could become an existential (asteroid-vs.-dinosaur type) event.

Look at it this way: in your own organization, who bears the cost of a security breach? It is almost never the business unit responsible for the breach – it is almost always paid for out of some central budget. So other priorities nearly always take precedence over security, forcing security teams to block and tackle as best they can. Even the organization itself (depending a bit on the nature of the business) almost never places IT security above priorities such as responding to competitors, meeting product cycle requirements, and so on.

At a public cloud provider, security is typically one of the top 3 obstacles to winning customers and growing the business. If they can't prove security, they cannot win customers. If they can't maintain security, they most certainly can't keep customers. Providers have a strong and direct financial motivation to place security at the top of their priorities.

I am not naive enough to think this plays out evenly across the cloud market. I see the most direct correlation with IaaS, largely because those providers are fighting primarily for the enterprise market, where security and compliance are deeper requirements. PaaS is the same way at major IaaS vendors (an incredibly common combination); beyond that, prioritization drops off based on: Is it a developer-centric tool, or a larger platform? Does it target smaller or larger shops? SaaS is pretty much the Wild West. Major vendors who push hard for enterprise business are typically stronger, but I see plenty of smaller, under-resourced SaaS providers where the economics haven't caught up yet. For example, Dropbox had a string of public failures, but eventually prioritized security in response – and then grew, targeting the business market. Box and Microsoft Azure targeted business from the start, and largely avoided Dropbox's missteps, because their customers and economics required them to be hardened up front.

Once you understand these economics, they can help you evaluate providers. Are they big and aimed at enterprises? Do they have a laundry list of certifications and audit/assessment results? Or are they selling more of a point tool, less mature, still trying to grab market share, and targeting developers or smaller organizations? You cannot quantify this beyond a list of certifications, but it can most certainly feed your Spidey sense.


Hybrid Clouds: An Ugly Reality

In my recent paper on cloud network security I came down pretty hard on hybrid networks. I have been saying similar things in many presentations, including my most recent RSA session – enough that I got a request for clarification. Here is some additional detail I will add to the paper; feedback or criticism is appreciated.

Hybrid deployments often play an essential, yet complex, role in an organization's transition to cloud computing. On the one hand, they allow an organization to extend its existing resources directly into the cloud, with fully compatible network addressing and routing. They allow the cloud to access internal assets directly, and internal assets to access cloud assets, without reconfiguring everything from scratch. But that also means hybrid deployments bridge risks across environments. Internal problems can extend to the cloud provider, and compromise of something on the cloud side extends to the data center. It's a situation ripe for error, especially in organizations which already struggle with network compartmentalization. You are also bridging two completely different environments – one software defined, the other still managed with boxes and wires.

That's why we recommend avoiding hybrid deployments where possible, to retain the single greatest security advantage of cloud computing: compartmentalization. Modern cloud deployments typically use multiple cloud provider accounts for a single project. If anything goes wrong, you can blow away the entire account and start over. Control failures in any account are isolated to that account, and attacks at the network and management levels are also isolated. That is typically impossible to replicate in a hybrid deployment.

All that said, nearly every large enterprise we work with still needs some hybrid deployments. There are too many existing internal resources and requirements to drop ship them all to a cloud provider – applications, assets, and services designed for traditional infrastructure, which would all need to be completely re-architected to operate correctly, with acceptable resilience, in the cloud. Yes, someday hybrid clouds will be rare, and for any new project we highly recommend designing to work in an isolated, dedicated set of cloud accounts. But until we all finish this massive 20-year project of moving nearly everything into the public cloud, hybrid is a practical reality.

Thinking about the associated risks – bridged networks and reduced compartmentalization – focuses your security requirements. You need to understand those connections, and the network security controls across them. They are two different systems using a common vocabulary, with important implementation differences. Management planes for non-network functions won't integrate (traditional environments don't have one). Host, application, and data security are specific to the assets involved and where they are hosted; risks extend whenever they are connected, regardless of deployment model. A hybrid cloud doesn't change SQL injection detection or file integrity monitoring – you implement them as needed in each environment. The definition of hybrid is connection and extension via networking, so the focus is understanding those connections, how the security rules are set up on each side, and how to make the security of two totally different environments work together. (A small sketch of one such control point follows.)
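On the cloud side, for instance, the bridge is usually just another route plus security group rules. Below is a minimal boto3 sketch of locking one cloud tier down to the on-premise address range; the security group ID, CIDR, and port are hypothetical placeholders, and your provider, ports, and topology will differ.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical placeholders -- substitute your own security group ID and
# the CIDR block your data center advertises over the VPN/Direct Connect.
SG_ID = "sg-0123456789abcdef0"
ONPREM_CIDR = "10.10.0.0/16"

# Allow only the on-premise range to reach this tier, and only on the
# single port the bridged application actually needs.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": ONPREM_CIDR}],
    }],
)
```

The matching rule still has to exist on the data center firewall, configured with whatever tooling lives there – which is exactly the "two different systems using a common vocabulary" problem described above.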


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.