Friday Summary: March 29, 2013

Our last nine months of research into identity and access management have yielded quite a few surprises – for me at least. Many of these new perspectives I have shared piecemeal in various blogs, and others not. But it occurred to me today, as we start getting feedback from the dozen or so IAM practitioners we have asked to critique our Cloud IAM research, that some key themes have been lost in the overall complexity of the content. I want to highlight a few points that really hit home with me, and which I think are critical for security professionals in general to understand.

• BYOD. MDM. MAM. That's all BS. Mobile security is fundamentally an identity problem. Once you appreciate that a smartphone is essentially a multi-tenant smart card, you start to get a very different idea of what mobile security will ultimately look like.
• How very little IAM and security people – and their respective cultures – overlap. At the Cloud Identity Summit last year, the security side was me, Gunnar, and I think one other person. The other side was 400 other IAM folks who had never been to RSA before. This year at the RSA Conference was the first time I saw so many dedicated identity folks. Sure, RSA, CA, Oracle, and IBM have had offerings for years, but IAM is not front and center. These camps are going to merge … I smell a Venn diagram coming.
• Identity is as glamorous as a sidewalk. Security has hackers, stolen bank accounts, ATM skimmers, crypto, scary foreign nationals, Lulz, APT, cyberwar, and stuff that makes it into movies. Identity has … give me a minute … thumbprint scanners? Anyone? Next time security complains about not having a "seat at the management table", just be thankful you have C-level representation. I'm not aware of a C-level identity figure or an Identity VP at any (consumer) firm.
• Looking back at directory services models to distribute identity and provide central management … what crap. Any good software architect, even in the mid-90s, should have seen this as a myopic model for services. It's not that LDAP isn't a beautifully simplistic design – it's the inflexible monolithic deployment model. And yet we glued on appendages to get SSO working, until cloud and mobile finally crushed it. We should be thankful for this.
• Federation with mobile is disruptive. IT folks complain about the blurring of lines between personal and corporate data on smartphones. Now consider provisioning for customers as well as employees. In the same pool. Across web, mobile, and in-house systems. Yeah, it's like that.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Database Security Restart. Adrian's DR post.
• Follow The Dumb Security Money. Mike's DR post.
• Who has responsibility for cloud security? Mike appears in a NetworkWorld roundtable, and doesn't say anything (too) stupid. Imagine that!
• Adrian's DR paper: Security Implications Of Big Data.

Favorite Securosis Posts
• Adrian Lane: Developers and Buying Decisions. Yeah, it's my post, spurred by Matt Asay's piece on how cost structures are changing tech sales. I should have split it into two posts, to fully discuss how Oracle is acting like IBM in the early 90s, and then the influence of developers on product sales.
• Mike Rothman: Developers and Buying Decisions. Adrian predicts that developers may be more involved in app security buying decisions. What could possibly go wrong with that?
• Rich: Developers and Buying Decisions. Fail to understand the dynamics and economics around you, and you… er… fail.
• David Mortman: Defending Cloud Data: IaaS Encryption.
• Gal Shpantzer: Who's Responsible for Cloud Security?

Other Securosis Posts
• DDoS Attack Overblown.
• Estimating Breach Impact.
• Superior Security Economics.
• Incite 3/27/2013: Office Space.
• Server Side JavaScript Injection on MongoDB.
• How Cloud Computing (Sometimes) Changes Disclosure.
• Identifying vs. Understanding Your Adversaries.
• Apple Disables Account Resets in Response to Flaw.
• Friday Summary: March 22, 2013, Rogue IT Edition.

Favorite Outside Posts
• Rich: What, no Angry Birds? Brian Katz nails it – security gets the blame for poor management decisions. I remember the time I was deploying some healthcare software in a clinic and they asked me to block one employee from playing EverQuest. I politely declined.
• Gal Shpantzer: Congress Bulls Into China's Shop.
• David Mortman: Top 3 Proxy Issues That No One Ever Told You.
• Mike Rothman: You Won't Believe How Adorable This Kitty Is! Click for More! Security is about to jump the shark. When social engineering becomes Wall Street Journal fodder we are on the precipice of Armageddon. It doesn't hurt that some of our buddies are mentioned in the article, either…
• Adrian Lane: Checklist To Prepare Yourself In Advance of a DDoS Attack. A really sweet checklist for DDoS preparedness.
• Dave Lewis: ICS Vulnerabilities Surface as Monitoring Systems Integrate with Digital Backends. Don't know if it's real, but it is funny!

Project Quant Posts
• Email-based Threat Intelligence: To Catch a Phish.
• Network-based Threat Intelligence: Searching for the Smoking Gun.
• Understanding and Selecting a Key Management Solution.
• Building an Early Warning System.
• Implementing and Managing Patch and Configuration Management.
• Defending Against Denial of Service (DoS) Attacks.
• Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
• Tokenization vs. Encryption: Options for Compliance.

Top News and Posts
• Spamhaus DDoS Attacks.
• Evernote: So useful, even malware loves it. Evernote as botnet C&C.
• Google glasses. Just friggin' funny!
• Your WiFi-enabled camera might be spying on you.
• "Browser Crashers" Hit Japanese Users.
• Victim of $440K wire fraud can't blame bank for loss, judge rules. This is going to be a hot topic for the next several years.
• FBI Pursuing Real-Time Gmail Spying Powers as "Top Priority" for 2013.
• Amazing Plaintext Password Blunder.
• Chaos Communication Camps. Or should that be Kamps?
• "Lucky Thirteen" Attack.
• MI5 undercover spies: People are falsely claiming to be us. This has occurred a few times before.
• GCHQ attempts to downplay amazing plaintext password blunder.
• Slow Android Phone Patching Prompts Vulnerability Report.
• Lawyer hopeful of success with secure boot complaint.
• Cyberbunker's Sven Kamphuis says he is victim of conspiracy over Spamhaus attack.
• One in six Amazon S3 storage buckets are ripe for data-plundering.
• That Internet War Apocalypse Is a


1 in 6 Amazon Web Services Users Can’t Read

Rapid7 reported this week on finding a ton of sensitive information in Amazon S3. They scanned public buckets (Amazon S3 containers) by enumerating names, and concluded that 1 in 6 had sensitive information in them. People cried, "Amazon should do something about this!!"

S3 buckets are private by default. You have to make them public. Deliberately. If you leave a bucket public, you eventually get an email like this (I have a public bucket we use for CCSK training):

Dear Amazon S3 Customer,

We've noticed that your Amazon S3 account has a bucket with permissions that allow an anonymous requester to perform READ operations, enumerating the contents of the bucket. Bucket READ access is sometimes referred to as "list" access. Amazon S3 buckets are private by default. These S3 buckets grant anonymous list access: REDACTED

Periodically we send security notifications to all of our customers with buckets allowing anonymous list access. We typically recommend against anonymous list access. We know there are good reasons why you may choose to allow anonymous list access. This can simplify development against S3. However, some tools and scripts have emerged which scan services like Amazon S3 and enumerate objects in publicly listable buckets. With anonymous list access enabled, anyone (including users of these tools) may obtain a complete list of your bucket content. As a result of calls against your bucket, you may see unintended charges in your account. We've included specific steps to remove anonymous list access as well as further information about bucket access considerations.

Use the following steps to immediately remove anonymous access to your bucket:

1. Go to the Amazon S3 console at https://console.aws.amazon.com/s3/home.
2. Right-click on the bucket and click Properties.
3. In the Properties pane, click the Permissions tab. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted.
4. Select the row that grants permission to everyone. "Everyone" refers to the Amazon S3 All Users group.
5. Uncheck all the permissions granted to everyone (or click x to delete the row). This removes all permissions granted to the public.
6. Click Save to save the ACL.

Learn more about protecting your bucket by reading the AWS article on Amazon S3 Bucket Public Access Considerations at http://aws.amazon.com/articles/5050. This article includes alternative options if you need methods for unauthenticated end users to read and write content, as well as detailed information on configuring bucket access for website hosting if you are hosting your site on Amazon S3. It also describes how you can use Bucket Policies if you would like to specify more granular access control on your bucket. Bucket Policies enable you to add or deny permissions across all or a subset of objects within a bucket. You can use wildcarding to define sets of objects within a bucket against which policy is applied, more specifically control the allowed operations, and even control access based on request properties.

For further information on managing permissions on Amazon S3, please visit the Amazon S3 Developer Guide at http://docs.amazonwebservices.com/AmazonS3/latest/dev/Welcome.html for topics on Using ACLs and Using Bucket Policies. Finally, we encourage you to monitor use of your buckets by setting up Server Access Logging. This is described in our Developer Guide under Setting Up Server Access Logging.

Sincerely,
The Amazon S3 Team

My scientific conclusion?
1 in 6 Amazon S3 users can't read, or don't care. Seriously, what do you want Amazon to do? Drive to your house if you mess up and then ignore their warnings?
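
For the scripting-inclined, here is a minimal sketch of auditing a bucket for exactly the anonymous "list" grants Amazon warns about, assuming Python with the boto3 AWS SDK and a hypothetical bucket name. Treat it as an illustration rather than a hardened tool.

```python
import boto3

# URI that identifies the anonymous "AllUsers" group in S3 ACL grants.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def strip_public_grants(bucket_name, dry_run=True):
    """Report (and optionally remove) ACL grants to the anonymous AllUsers group."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)

    public = [g for g in acl["Grants"]
              if g["Grantee"].get("Type") == "Group"
              and g["Grantee"].get("URI") == ALL_USERS]

    if not public:
        print(f"{bucket_name}: no anonymous grants")
        return

    for grant in public:
        print(f"{bucket_name}: anonymous {grant['Permission']} access")

    if not dry_run:
        # Rewrite the ACL with every AllUsers grant removed.
        kept = [g for g in acl["Grants"] if g not in public]
        s3.put_bucket_acl(Bucket=bucket_name,
                          AccessControlPolicy={"Owner": acl["Owner"],
                                               "Grants": kept})

# Hypothetical bucket name:
strip_public_grants("my-ccsk-training-bucket", dry_run=True)
```

Run it as-is to report; flip dry_run to False and it rewrites the ACL with the AllUsers grants removed.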


DDoS Attack Overblown

Sam Biddle at Gizmodo says:

This guy, Prince said, could back up CloudFlare's claims. This really was Web Dresden, or something. After an inquiry, I was ready to face vindication. Instead, I received this note from a spokesperson for NTT, one of the backbone operators of the Internet: "Hey Sam, nice to hear from you. I'm afraid that we don't have anything we can share that substantiates global effects. I'm sure you read the same 300gbps figure that I did, and while that's a massive amount of bandwidth to a single enterprise or service provider, data on global capacities from sources like TeleGeography show lit capacities in the tbps range in most all regions of the world. I side with you questioning if it shook the global internet. Chris"

Bad for Spamhaus, but unnoticeable to everyone else.


Estimating Breach Impact

Russell Thomas and a bunch of his friends recently posted a research paper called How Bad Is It? – A Branching Activity Model to Estimate the Impact of Information Security Breaches, which attempts to provide a structure for estimating the impact of a breach. This work is necessary – we have no benchmarks, or even consensus, about what breached organizations should even be counting. This is an academic research paper, and to be honest I am not a big fan of academic papers. I have pretty bad TL;DR syndrome. But I did check out the introduction, and noted some interesting tidbits:

Empirical research on breach losses often use ad hoc taxonomies for "quantified" and "non-quantified" costs as part of surveys or interviews of subject matter experts. There is no theoretical basis for these taxonomies, which limits their generality and research significance. Finally, several consulting firms publish survey-based studies. Most notable is the "Cost of a Data Breach" reports by Ponemon Institute (Ponemon Institute 2012). These survey results are widely publicized and widely quoted, even in policy discussions, but they have no foundation in theoretical or empirical academic research, and they have very serious methodological flaws (Thomas 2011a). In summary, without some reliable and robust breach impact estimation methods, quantified information security will continue to be a "weak hypothesis" (Verendel 2009).

This is true. It warms my cockles (can I say that out loud?) that these guys are calling out survey monkeys like Ponemon, because the industry seems set on using those numbers to justify what we do. But I have to say I'm a little disappointed by Russell's attempt to jump on the indicators of compromise bandwagon in his New School blog post on the paper. He unnecessarily concocts a meaningless description of this breach impact estimation model by mentioning Indicators of Impact. Huh? Total non-sequitur, though I do understand the desire to capitalize on the popularity and momentum of the phrase Indicators of XXX.

But let's call this what it is: an attempt to build an academically rigorous model to estimate the cost of a breach, based upon factors that can be reasonably estimated and quantified. It would be nice to see this kind of stuff added to GRC platforms and the like, to enable us to track and estimate these costs. Ultimately I believe that as we mature as a profession we will need this kind of research to help define a common vernacular for estimating loss.

Photo credit: "Impact Hoodie Design for 2006" originally uploaded by Will Foster


Defending Cloud Data: How IaaS Storage Works

Infrastructure as a Service storage can be insanely complex when you include operational and performance requirements. First you need to create a resource pool, which might itself be a pool of virtualized and abstracted storage, and then you need to tie it all together with orchestration to support the dynamic requirements of the cloud – such as moving running virtual machines between servers, instantly snapshotting multi-terabyte virtual drives, and other insanity. For security we don't need to know all the ins and outs of cloud storage, but we do need to understand the high-level architecture and how it affects our security controls. And keep in mind that the implementations of our generic architecture vary widely between different public and private cloud platforms. Public clouds are roughly equivalent to provider-class private clouds, except that they are designed to support multiple external tenants. We will focus on private cloud storage, with the understanding that public clouds are about the same except that customers have less control.

IaaS Storage Overview

Here's a diagram to work through:

At the lowest level is physical storage. This can be nearly anything that satisfies the cloud's performance and storage requirements. It might be commodity hard drives in commodity rack servers. It could be high-performance SSD drives in high-end specialized datacenter servers. But really it could be nearly any storage appliance/system you can think of.

Some physical storage is generally pooled by a virtual storage controller, like a SAN. This is extremely common in production clouds but isn't limited to traditional SAN. Basically, as long as you can connect it to the cloud storage manager, you can use it. You could even dedicate certain LUNs from a larger shared SAN to cloud, while using other LUNs for non-cloud applications. If you aren't a storage person, just remember there might be some sort of controller/server above the hard drives, outside your cloud servers, that needs to be secured.

That's the base storage. On top of that we then build out:

Object Storage

Object storage controllers (also called managers) connect to assigned physical or virtual storage and manage orchestration and connectivity. Above this level they communicate using APIs. Some deployments include object storage connectivity software running on distributed commodity servers to tie the servers' hard drives into the storage pool.

Object storage controllers create virtual containers (also called buckets) which are assigned to cloud users. A container is a pool of storage in which you can place objects (files). Every container stores each bit in multiple locations. This is called data dispersion, and we will talk more about it in a moment.

Object storage is something of a cross between a database and a file share. You move files into and out of it; but instead of being managed by a file system you manage it with APIs, at an abstracted layer above whatever file systems actually store the data. Object storage is accessed via APIs (almost always RESTful HTTP APIs) rather than classic network file protocols, which offers tremendous flexibility for integration into different applications and services. Object storage includes logic below the user-accessible layer for features such as quotas, access control, and redundancy management.

Volume Storage

Volume storage controllers (also called managers) connect to assigned physical (or virtual) storage and manage orchestration and connectivity.
Above this level they communicate using APIs. The volume controller creates volumes on request and assigns them to specific cloud instances. To use traditional virtualization language, it creates a virtual hard drive and connects it to a virtual machine. Data dispersion is often used to provide redundancy and robustness.

A volume is essentially a persistent virtual hard drive. It can be of any size supported by the cloud platform and underlying resources, and a volume assigned to a virtual machine exists until it is destroyed (note that tearing down an instance often automatically returns the associated volume storage to the free storage pool).

Physical servers run hypervisors and cloud connectivity software to tie them into the compute resource pool. This is where instances (virtual machines) run. These servers typically have local hard drives which can be assigned to the volume controller to expand the storage pool, or used locally for non-persistent storage. We call this 'ephemeral' storage, and it's great for swap files and other higher-performance operations that don't require the resiliency of a full storage volume. If your cloud uses this model, the cloud management software places swap on these local drives. When you move or shut down your instance this data is always lost, although it might be recoverable until overwritten.

We like to discuss volumes as if they were virtual hard drives, but they are a bit more complex. Volumes may be distributed, with data dispersed across multiple physical drives. There are also implications, which we will consider later, for how volumes fit into the context of your cloud, and how they interact with object storage and things like snapshots and live migrations.

How object and volume storage interact

Most clouds include both object and volume storage, even if object storage isn't available directly to users. Here are the key examples:

A snapshot is a near-instant backup of a volume that is moved into object storage. The underlying technology varies widely and is too complex for my feeble analyst brain, but a snapshot effectively copies a complete set of the storage blocks in your volume into a file stored in an object container which has been assigned to snapshots. Since every block in your volume is likely stored in multiple physical locations, typically 3 or more times, taking a snapshot tells the volume controller to copy a complete set of blocks over to object storage. The operation can take a while but it looks instantaneous, because the snapshot accurately reflects the state of the volume at that point in time while the volume is still fully usable – running on another set of blocks while the snapshot is moved over (this is a major oversimplification of something that makes my head hurt).

Images are pre-defined storage volumes in object storage, which contain operating systems or other virtual hard drives used to launch instances. An
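
To make the snapshot mechanics a little more concrete, here is a minimal sketch of requesting a volume snapshot and waiting for it to finish, assuming Python, the boto3 AWS SDK, and a hypothetical volume ID. OpenStack and other cloud platforms expose equivalent calls through their own APIs.

```python
import boto3

ec2 = boto3.client("ec2")

# Request a point-in-time snapshot of a volume. The call returns almost
# immediately; the block copy into object storage continues in the background.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # hypothetical volume ID
    Description="pre-change backup of app data volume",
)

# The volume stays fully usable while the snapshot completes asynchronously.
waiter = ec2.get_waiter("snapshot_completed")
waiter.wait(SnapshotIds=[snapshot["SnapshotId"]])
print(f"Snapshot {snapshot['SnapshotId']} complete")
```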


Defending Cloud Data: IaaS Encryption

Infrastructure as a Service (IaaS) is often thought of as merely a more efficient (outsourced) version of our traditional infrastructure. On the surface you still manage things that look like simple virtualized networks, computers, and storage. You 'boot' computers (launch instances), assign IP addresses, and connect (virtual) hard drives. But while the presentation of IaaS resembles traditional infrastructure, the reality underneath is anything but business as usual.

For both public and private clouds, the architecture of the physical infrastructure that comprises the cloud, as well as the connectivity and abstraction components used to provide it, dramatically alters how we need to manage our security. It isn't that the cloud is more or less secure than traditional infrastructure, but it is very different.

Protecting data in the cloud is a top priority for most organizations as they adopt cloud computing. In some cases this is due to moving onto a public cloud, with the standard concerns any time you allow someone else to access or hold your data. But private clouds come with the same risk changes, even if they don't trigger the same gut reaction as outsourcing.

This series will dig into protecting data stored in and used with Infrastructure as a Service. There are a few options, but we will show why in the end the answer almost always comes down to encryption … with some twists.

What Is IaaS Storage?

Infrastructure as a Service includes two primary storage models:

• Object storage is a file repository. This is higher-latency storage with lower performance requirements, which stores individual files ('objects'). Examples include Amazon S3 and Rackspace Cloud Files for public clouds, and OpenStack Swift for private clouds. Object storage is accessed using an API, rather than a network file share, which opens up a wealth of new uses – but you can layer a file browsing interface on top of the API.
• Volume storage is effectively a virtual hard drive. These higher-performing volumes attach to virtual machines and are used just like a physical hard drive or array. Examples include VMware VMFS, Amazon EBS, Rackspace RAID, and OpenStack Cinder.

To (over)simplify: object storage replaces file servers, and volume storage is a substitute for hard drives. In both cases you take a storage pool – which could be anything from a SAN to hard drives on individual servers – and add abstraction and management layers. There are other kinds of cloud storage, such as cloud databases, but they fall under either Platform as a Service (PaaS) or Software as a Service (SaaS). For this IaaS series we will stick to object and volume storage.

Due to the design of Infrastructure as a Service, data storage is very different than keeping it in 'regular' file repositories and databases. There are substantial advantages such as resilience, elasticity, and flexibility; as well as new risks in areas such as management, transparency, segregation, and isolation.

How IaaS Is Different

We will cover details in the next post, but at a high level:

In private cloud infrastructure our data is co-mingled extensively, and the physical locations of data are rarely as transparent as before. You can't point to a single server and say, "there are my credit card numbers" any more. Often you can set things up that way, but at the cost of all the normal benefits of cloud computing. Any given piece of data may be located in multiple physical systems or even storage types.
Part of the file might be on a server, some of it in a SAN, and the rest in a NAS, but it all looks like it's in a single place. Your sensitive customer data might be on the same hard drive that, through layers of abstraction, also supports an unsecured development system. Plan incorrectly and your entire infrastructure can land in your PCI assessment scope – all mixed together at a physical level. To top it off, your infrastructure is now managed by a web-based API that, if not properly secured, could allow someone on the other side of the planet unfettered access to your (virtual) data center. We are huge proponents of cloud computing, but we are also security guys. It is our job to help you identify and mitigate risks, and we'll let infrastructure experts tell you why you should use IaaS in the first place.

Public cloud infrastructure brings the same risks with additional complications, because you no longer control 'your' infrastructure, your data might be mingled with anyone else's on the Internet, and you lose most or all visibility into who (at your provider) can access your data.

Whether private or public, you need to adjust security controls to manage the full abstraction of resources. You cannot rely on knowing where network cables plug into boxes anymore. Here are a few examples of how life changes:

• In private clouds, any virtual system that connects to any physical system holding credit card account numbers is within the scope of a PCI assessment. So if you run an application that collects credit cards in the same cloud as one that holds unsecured internal business systems, both are within assessment scope. Unless you take precautions we will talk about later.
• In public clouds an administrator at your cloud provider could access your virtual hard drives. This would violate all sorts of policies and contracts, but it is still technically quite possible.
• In most IaaS clouds a single command or API call can make an instant copy (snapshot) of an entire virtual hard drive, and then move it around your environment or make it public on the Internet.
• If your data is on the same hard drive as a criminal organization using the same cloud provider, and 'their' hardware is seized as part of an investigation, your data may be exposed. Yes, this has happened.

It comes down to less visibility below the abstraction layer, and data from multiple tenants mixed on the same physical infrastructure. This is all manageable – it's just different. Most of what we want to do, from a security standpoint, is use encryption and other techniques to either restore
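
As a preview of where this series is headed, here is a minimal sketch of the simplest variant: encrypting an object on the client before it ever reaches object storage. It assumes Python, the boto3 AWS SDK, the cryptography library's Fernet recipe, and hypothetical bucket and object names; key management, the genuinely hard part, is waved away here.

```python
import boto3
from cryptography.fernet import Fernet

# In practice the key would come from a key management service,
# not be generated inline like this.
key = Fernet.generate_key()
cipher = Fernet(key)

s3 = boto3.client("s3")

def put_encrypted(bucket, name, plaintext: bytes):
    """Encrypt locally, so only ciphertext ever lands in the object store."""
    s3.put_object(Bucket=bucket, Key=name, Body=cipher.encrypt(plaintext))

def get_decrypted(bucket, name) -> bytes:
    """Fetch the ciphertext and decrypt it locally with the same key."""
    obj = s3.get_object(Bucket=bucket, Key=name)
    return cipher.decrypt(obj["Body"].read())

# Hypothetical bucket and object names:
put_encrypted("example-data-bucket", "customers.csv", b"name,card\n...")
print(get_decrypted("example-data-bucket", "customers.csv"))
```

The point of the sketch is the placement of the crypto, not the library: whoever holds the key controls the data, no matter where the bits physically land below the abstraction layer.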


Superior Security Economics

MailChimp is offering a 10% discount to customers who enable 2-factor authentication. Impressive. Time to finish migrating our lists over to MailChimp (we only use them for the Friday Summary right now). We need to reward efforts like this.


Incite 3/27/2013: Office Space

A lot of folks ask me how I work from home. My answer is simple: I don't. I have a home office, but I do the bulk of my work from a variety of coffee shops in my local area. So I give a few minutes' thought at night to where I want to work the following day. Sometimes I have a craving for a Willy's Burrito Bowl, which means I drive 20 minutes to one of their coffee shops in Sandy Springs. Other times I just have to have the salad bar's chocolate mousse at Jason's Deli, which means there are three different places I could work that day. Lunch drives office location. For me, anyway. Sometimes I don't have the foggiest idea what I want to eat for lunch, so I get into the car and drive. Sooner or later I end up where I'm supposed to be, and then I get to work.

Assuming I can get a seat in the coffee shop, that is. Evidently I'm not the only guy who works like a nomad. Sometimes it's a packed house and I need to move on to Plan B. There is always another coffee shop to carpet bag. I try not to go to the same coffee shops on the same days or to have any kind of predictable pattern. I usually shrug that off with the excuse that my randomized office location strategy is for operational security. You know, when they come to get me I want to make them work for it. But really it's because I don't want to overstay my welcome. I pay $2.50 a day for office space and all the coffee I can drink, because the places I hang out provide free refills. By showing up at a place no more than once a week, I can rationalize that I'm not taking advantage of their hospitality. And yes, analysts have the most highly-functioning rationalization engines of all known species.

I also like to see other people. Notice I said see – not talk to. Big difference. I guess I have a little "I Am Legend" fear of being the only person left on Earth, so seeing other folks in the coffee shop allays that fear. Sometimes I see someone I know, and they miss the social cues of me having my earbuds in and not making eye contact. I engage in a short chat because I'm not a total douche. Not always, anyway. As long as it's not a long chat it's okay, because I have to get back to my Twitter timeline and whatever drivel I need to write that day.

The other reality of my office space is that I'm far more productive when I'm out of the house. And evidently I'm not alone. It seems the ambient noise of a coffee shop can boost productivity, unlike the silence of sitting in my home office. There is even a new web site that provides a soundtrack that sounds like a coffee shop to stir your creativity. Maybe that works for some night owls who like to work the graveyard shift, when coffee shops are closed. For me, I'll head out and find a real coffee shop. With real people for me not to talk to.

Speaking of which, must be time for that refill…

–Mike

Photo credits: Busy Coffeeshop originally uploaded by Kevin Harbor

Upcoming Cloud Security Training

Interested in Cloud Security? Are you in EMEA (or have a ton of frequent flyer miles)? Mike will be teaching the CCSK Training class in Reading, UK, April 8-10. Sign up now.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
• Understanding Identity Management for Cloud Services: Buyers Guide; Architecture and Design; Integration

Newly Published Papers
• Email-based Threat Intelligence: To Catch a Phish
• Network-based Threat Intelligence: Searching for the Smoking Gun
• Understanding and Selecting a Key Management Solution
• Building an Early Warning System
• Implementing and Managing Patch and Configuration Management
• Defending Against Denial of Service Attacks

Incite 4 U

• Follow the money to DDoS mitigation: Marcus Carey brings up a couple of good questions regarding the screwed-up process of defending against volume-based DDoS. You basically contract with a service provider to take the massive traffic hit. But he correctly observes that's somewhat stupid, because everyone else upstream needs to accept and transmit the bogus traffic aimed at you. Wouldn't it be smarter for the closest service provider (the first mile) to block clear DDoS attacks? It would be. But it won't happen, mostly because there is no way to compensate the first-mile provider for blocking the attack. It would also require advanced signaling to identify attack nodes and tell the upstream provider to block the traffic. To be clear, some consumer ISPs do block devices streaming attack traffic, but that's because it's screwing up their network. Not because they care about the target. As always, follow the money to see whether something will happen or not. In this case, the answer is 'not'. – MR
• Smash 'em up old school: Our FNG (Gal Shpantzer) and I were talking about the recent malware attacks in South Korea the other day. Unlike most attacks we see these days, these didn't target data (at least, on the surface), but instead left a trail of destruction. If you think about it, most of our security defenses over the past 10 years were oriented toward preventing data breaches. Before that it was all about stopping massive proliferation of malware and worms. So we have covered destructive attacks and then targeted attacks, but not necessarily both. I don't expect this to be a big trend – the financial and political economics, meaning the risk of mutually assured destruction, self-limit the number of possible targeted destruction attacks – but I expect to hear more about this in the next couple of years. It is a very tough


Who’s Responsible for Cloud Security? (NetworkWorld Roundtable)

I recently participated in a roundtable for NetworkWorld, tackling the question of who is responsible for cloud security. First of all, the picture is hilarious, especially because it shows my head photoshopped onto some dude with a tie. Like I'd wear a tie. But some of the discussion was interesting. As with any roundtable, you get a great deal of puffery and folks trying to make themselves sound smart by talking nonsense. Here are a couple of good quotes from yours truly, who has never been known to talk nonsense.

NW: Let's start with a basic question. When companies are building hybrid clouds, who is responsible for what when it comes to security? What are the pain points as companies strive to address this?

ROTHMAN: A lot of folks think having stuff in the cloud is the same as having it on-premises except you don't see the data center. They think, "I've got remote data centers and that's fine. I'm able to manage my stuff and get the data I need." But at some point these folks are in for a rude awakening in terms of what the true impact of not having control over layer four and down is going to mean in terms of lack of visibility.

NW: As Sutherland mentioned earlier, a lot of this has to be baked into the contract terms. Are there best practices that address how?

ROTHMAN: A lot has to do with how much leverage you have with the provider. With the top two or three public cloud providers, there's not going to be a lot of negotiation. Unless you have a whole mess of agencies coming along with you, as in [Kingsberry's] case, you're just a number to these guys. When you deal with smaller, hungrier cloud providers, and this applies to SaaS as well, then you'll have the ability to negotiate some of these contract variables.

NW: How about the maturity of the cloud security tools themselves? Are they where they need to be?

ROTHMAN: You'll walk around the RSA Conference and everybody will say their tools don't need to change, everything works great, and life is wonderful. And then after you're done smoking the RSA hookah you get back to reality and see a lot of fundamental differences in how you manage when you don't have visibility.

Yes, I actually said RSA hookah and they printed it. Win! Check out the entire roundtable – they have some decent stuff in there.

Photo credit: "THE BLAME GAME" originally uploaded by Lou Gold


Developers and Buying Decisions

Matt Asay wrote a very thought-provoking piece on Oracle's Big Miss: The End Of The Enterprise Era. While his post does not deal with security directly, it does highlight a couple of important trends that affect both what customers are buying and who is making the decisions.

Oracle's miss suggests that the legacy vendors may struggle to adapt to the world of open-source software and Software as a Service (SaaS) and, in particular, the subscription revenue models that drive both.

No. Oracle's miss is not a failure to embrace open source, and it's not a failure to embrace SaaS; it's that they have not embraced and flat-out owned PaaS. Oracle limiting itself to just software would be a failure. A Platform as a Service model would give them the capability of owning all of the data center, while still offering lower cost to customers. And they have the capability to address the compliance and governance issues that slow enterprise adoption of cloud services. That's the opposite of the 'cloud in a box' model being sold.

Service fees and burdensome cost structures are driving customers to look for cheaper alternatives. This is not news, as Postgres and MySQL, before the dawn of Big Data, were already making significant market gains for test/dev/non-critical applications. It takes years for these manifestations to fully hit home, but I agree with Mr. Asay that this is what is happening. But it's Big Data – and perhaps because Mr. Asay works for a Big Data firm he felt he could not come out and say it – that shows us commodity computing and virtually free analytics tools provide a very attractive alternative. One which does not require millions in up-front investment. Don't think the irony of this is lost on Google. I believe this so strongly that I divested myself of all Oracle stock – a position I'd held for almost 20 years – because they are missing too many opportunities.

But while I find all of that interesting, as it mirrors the cloud and big data adoption trends I've been seeing, it's a sideline to what I think is most interesting in the article. Redmonk analyst Stephen O'Grady argues:

With the rise of open source…developers could for the first time assemble an infrastructure from the same pieces that industry titans like Google used to build their businesses – only at no cost, without seeking permission from anyone. For the first time, developers could route around traditional procurement with ease. With usage thus effectively decoupled from commercial licensing, patterns of technology adoption began to shift…. Open source is increasingly the default mode of software development….In new market categories, open source is the rule, proprietary software the exception.

I'm seeing buying decisions coming from development with increasing regularity. In part it's because developers are selecting agile and open source web technologies for application development. In part it's that they have stopped relying upon relational concepts to support applications – to tie back to the Oracle issue. But more importantly it's the way products and services fit within the framework of how they want them to work; both in the sense that they have to meld with their application architecture, and because they don't put up with sales-cycle B.S. for enterprise products. They select what's easy to get access to: freemium models or cloud services that you can sample for a few weeks just by supplying a credit card. No sales droid hassles, no contracts to send to legal, no waiting for purchasing cycles.
This is not an open-source vs. commercial argument – it's an ease of use/integration/availability argument. What developers want right now vs. lots of stuff they don't want, with lots of cost and hassle: when you're trying to ship code, which do you choose?

As it pertains to security, development teams play an increasing role in product selection. Development has become the catalyst when deciding between source code analysis tools and DAST. They choose RESTful APIs over SOAP, which completely alters the application security model. And on more than a few occasions I've seen WAF relegated to being a 'compliance box' simply because it could not be effectively and efficiently integrated into the development-operations (dev-ops) process.

Traditionally there has been very little overlap between security, identity, and development cultures. But those boundaries thaw when a simple API set can link cloud and on-prem systems, manage clients and employees, and accommodate mobile and desktop. Look at how many key management systems are fully based upon identity, and how identity and security meld on mobile platforms. Open source may increasingly be the default model for adoption, but not because it lacks licensing issues; it's because of ease of availability (fewer hassles) and architectural synergy more than straight cost.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.