Securosis

Research

When to Use Amazon S3 Server Side Encryption

This week Amazon announced that S3 now supports server side encryption. You can encrypt S3 items through either the API or the web management console, and you can require encryption for S3 buckets. A few details:

  • Amazon manages the keys. This is fully transparent AES-256 encryption; you only manage the access controls.
  • Encryption is at the object level, not the bucket level.
  • You can set a policy to require that all uploads into a bucket be encrypted.
  • You can manage it via the API or the AWS Management Console.

It’s interesting, but from a security perspective it only protects you from one thing – hard drives lost or stolen from Amazon. Going back to my Three Laws of Data Encryption, you would use this if you are worried about lost/stolen drives or if someone says you have to encrypt. It doesn’t protect against hacking attacks or anything like that. Client-side encryption is more important for improving security. This isn’t really much of a security play, but it’s a big assurance/compliance play.

Since I like bullet lists and clear advice, you should use S3 server side encryption if:

  • You are required to encrypt data at rest, and said requirement does not also require you to segregate keys from Amazon.
  • You want to market that you are encrypting the data, but still don’t have a requirement to lock out Amazon.

That’s about it. If you are worried about drive loss/theft it’s probably due to a compliance or disclosure requirement, so I recommend client-side encryption instead, for its greater security benefit. This is a checkbox. Sometimes you need them, but if security is that important you have other options which should be higher priority.
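
For illustration, here is a minimal sketch of what this looks like from code, using the current AWS SDK for Python (boto3), which postdates this post; the bucket and key names are hypothetical, and the deny-unencrypted bucket policy is the standard pattern for requiring encrypted uploads.

```python
import json
import boto3

s3 = boto3.client("s3")

# Upload an object with S3-managed server-side encryption (SSE-S3).
with open("q3-report.pdf", "rb") as f:
    s3.put_object(
        Bucket="example-bucket",        # hypothetical bucket name
        Key="reports/q3-report.pdf",
        Body=f,
        ServerSideEncryption="AES256",  # Amazon manages the AES-256 keys
    )

# Verify the object was stored encrypted.
head = s3.head_object(Bucket="example-bucket", Key="reports/q3-report.pdf")
print(head.get("ServerSideEncryption"))  # expect: AES256

# Require encryption for all uploads to the bucket by denying any PutObject
# request that does not specify server-side encryption.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```

Note that Amazon stays in control of the keys either way – exactly the limitation discussed above.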

Force Attacker Perfection

I will fully admit that I sometimes find myself parroting standard industry tropes. For example, I can’t recall how many times I’ve said in presentations and interviews: The defender needs to be perfect all the time. The attacker only needs to succeed once.

And yes, it’s totally true. But we spend so much time harping on it that we forget how we can turn that same dynamic to our advantage. If all the attacker cares about is getting in once, that’s true. If we only focus on stopping that first attack, it’s still true. But what if we shift our goal to detection and containment? Then we open up some opportunities.

As defenders, the more barriers and monitors we put in place, the more we demand perfection from attackers. Look at all those great heist movies like Ocean’s 11 – the thieves have to pass all sorts of hurdles on the way in, while inside, and on the way out to get away with the loot. We can do the same thing with compartmentalization and extensive alert-based monitoring. More monitored internal barriers are more things an attacker needs to slip past to win. Technically it’s defense in depth, but we all know that term has turned into an excuse to buy more useless crap, mostly on the perimeter, rather than a push to increase internal barriers.

I am not saying it’s easy. Especially since you need alert-based monitors so you aren’t looking at everything by hand. And let’s be honest – although a SIEM is supposed to fill this role (at least the alerting part), almost no one can get SIEM to work that way without spending more than they wasted on their 7-year ERP project. But I’m an analyst, so I get to spout general philosophical stuff from time to time in hopes of inspiring new ideas. (Or annoying you with my mendacity.)

Stop wishing for new black boxes. Just drop more barriers, with more monitoring, creating more places for attackers to trip up.

Comment on the Next Version of the Cloud Security Alliance Guidance

Two years ago I edited the Cloud Security Alliance’s Guidance (v2.1) with a couple of other folks, and it nearly ended me. Pulling together a consensus among such a diverse group of global contributors, each running with very few constraints, led to… certain quality issues.

The CSA learned their lesson, and Version 3.0 is under much better control. Aside from a lot more consistency and dedicated editors (our own Chris Pepper edited v2.1), the process is much better organized. Many groups have finished their initial work (including mine: Data Security) and the documents are up for public review. You can see the drafts and submit comments.

I highly encourage you to get involved if you are interested in cloud security at all. This Guidance will probably live for 2-3 years, and it is already used extensively by end users and vendors to help guide their projects.

I could also use some specific review in my domain (Information Management and Data Security):

  • What do you think of the new lifecycle?
  • Did we capture the right controls?
  • Is the technology depth where it needs to be?
  • Did we balance the practical with the strategic?

If you don’t want to go through the full track-changes thing, feel free to email me directly or comment here. Thanks!

Building an SSL Early Warning System

Most security professionals have long understood at least some of the risks of the current ‘web’ or ‘chain’ of trust model for SSL security. To quickly recap for those of you who aren’t hip-deep in this day to day:

  • Your browser knows to trust a digital certificate because it’s signed by a root certificate, generated by a certificate authority, which was included in your browser or operating system.
  • You are trusting that your browser manufacturer properly vetted the organizations which own the roots and sign downstream certificates, and that none of them will issue ‘bad’ certificates. This is not a safe assumption. A new Mac trusts about 175 root certificates, and Apple hasn’t audited any of them.
  • The root certificates are also used to sign certain intermediary certificates, which can then be used to sign other downstream certificates. It’s a chain of trust. You trust the roots, along with every certificate they tell you to trust – both directly and indirectly.
  • There is nothing to stop any trusted (root) certificate authority from issuing a certificate for any domain it chooses. It all comes down to their business practices.
  • To detect a rogue certificate authority, someone who receives a bogus certificate must notice that it somehow differs from the real certificate.
  • If a certificate isn’t signed by a trusted root or intermediary, all browsers warn the user, but they also provide an option to accept the suspicious certificate anyway. That’s because many people issue their own certificates to save money – particularly for internal and private systems.

There is a great deal more to SSL security, but this is the core of the problem: we cannot personally evaluate every SSL cert we encounter, so we must trust a core set of root providers to identify (sign) legitimate certs. But the system isn’t centralized, so there are hundreds of root authorities and intermediaries, each with its own business practices and security policies.

More than once, we have seen certs fraudulently issued for major brands such as Google and Microsoft, and now we see attackers targeting the certificate authorities themselves. We’ve seen two roots hacked this year – Comodo and DigiNotar – and both times the hackers issued themselves fraudulent certs that your browser would accept as valid. There are mechanisms to revoke these things, but none of them work well – which is why after major hacks the browser manufacturers, such as Microsoft, Mozilla, and Apple, have to issue software updates. Research in this area has been extensive, with a variety of exploits demonstrated at recent Black Hat/DefCon conferences. I highly recommend you read the EFF’s just-published summary of the DigiNotar issue.

It’s a mess. One that’s very hard to fix, because:

  • Add-on models, such as Moxie Marlinspike’s Convergence add-on and the Perspectives project, are a definite improvement, but only help those educated enough to use them (for the record, I think they are both awesome).
  • The EFF’s SSL Observatory project helps identify the practices of the certificate authorities, but doesn’t attempt to identify breaches or misuse of certificates in real time.
  • DNSSEC with DANE could be a big help, but is still nascent and requires fundamental infrastructure changes.
  • Google’s DNS pinning in Chrome is excellent for those using that browser (I don’t – it leaks too much back to Google). I do think this could be a foundation for what I suggest below, but right now it only protects individual users accessing particular sites – for now, only Google.
  • The Google Certificate Catalog is another great endeavor that’s still self-limiting – but again, I think it’s a big piece of what we need.
  • The CA market is big business. There is a very large amount of money involved in keeping the system running (I won’t say working) as it currently does.
  • The browser manufacturers (at least the 3 main ones, and maybe Google) would all have to agree to any changes to the core model, which is very deeply embedded into how we use the Internet today.
  • The costs of change would not fall only on evil businesses and browser developers, but would be shared among everyone who uses digital certs today – pretty much every website with users.
  • We don’t even have a way to measure how bad the problem is. DigiNotar knew they had been hacked and had issued bad certs for at least a month before telling anyone, and reports claim that these certs were used to sniff traffic in Iran. How many other evil certs are out there? We only notice them when they are presented to someone knowledgeable and paranoid enough to notice, who then reports it.

Dan Kaminsky’s post shows just a small slice of how complex this all is. To summarize: we don’t yet have consensus on an alternate system, there are many strong motivations to keep the current system despite its flaws, and we don’t know how bad the problem is – how many bogus certs have been created, by how many attackers, or how often they are used in real attacks. Imagine how much more confusing this would all be if the DigiNotar hacker had signed certificates in the names of many other certificate authorities.

Internally, long before the current hacks, our former intern proposed this as a research area. The consensus was, “Yes, it’s a problem and we are &^(%) if a CA issues bad certs”. The problem was that neither he nor we had a solution to propose. But I have an idea that could help us scope out the problem. I call it a ‘transitional’ proposal because it doesn’t solve the problem, but could help identify the issues and raise awareness. Call it an “Early Warning System for SSL” (I’d call it “IDS for SSL”, but you all would burn my house down). The canary in the SSL mine.

Conceptually we could build a browser feature or plugin that serves as a sensor. It would need a local list of known certificate signatures for
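
While the post above is conceptual, here is a minimal sketch in Python of the kind of check such a sensor might perform; the hostname and fingerprints are placeholders, and a real sensor would need a trustworthy way to seed and update its local catalog.

```python
import hashlib
import socket
import ssl

# Hypothetical local catalog of known-good certificate fingerprints
# (SHA-256 of the DER-encoded certificate), seeded from prior observations.
KNOWN_FINGERPRINTS = {
    "www.example.com": {"a1b2c3..."},  # placeholder digest
}

def check_certificate(host: str, port: int = 443) -> None:
    """Compare the certificate a server presents against the local list."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    if fingerprint not in KNOWN_FINGERPRINTS.get(host, set()):
        # The chain validated, yet this is not the certificate we expected –
        # exactly the signal a rogue or compromised CA would produce.
        print(f"ALERT: unexpected certificate for {host}: {fingerprint}")
    else:
        print(f"{host}: certificate matches a known fingerprint")
```

The interesting part is what happens with the alerts: aggregated centrally, they could provide exactly the measurement of the problem that the post says we currently lack.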

Friday Summary: September 9, 2011

I suppose that, all things considered, I’m a pretty nice guy. I tip well, stop my car so people can cross the street, and always put my laptop bag under the seat in front of me, instead of taking up valuable overhead luggage space. While I have had plenty of jobs that required the use of physical force over the years, I always made sure to keep my professional detachment and use the minimum amount necessary. (Okay, that’s to keep my ass out of jail as much as anything else, but still…).

And animals? I’m a total sucker for them. I don’t mean in an inappropriate way, but I think they are just so darn cute. We even donate a bunch to local shelters and the Phoenix Zoo. Heck, all our cats are basically rescues… one of which randomly showed up in a relative’s yard during a BBQ, severely injured, and which we nursed back to health and kept.

Which is why my current murderous rampage against the birds crapping on our patio is completely out of character. We like birds. We even used to fill a bird feeder in the yard. Then all our trees grew out, and it seems we have the best shade in the neighborhood. On any given day, once the temperature tops 100 or so, our back patio is covered with dozens of birds doing nothing more than standing in the shade and crapping.

And you know what birds eat, don’t you? Berries. Lots and lots of berries. Think they digest it all? Think again. Our patio is stained so badly we will never be able to get it clean. How do I know? I paid someone to power spray and hand scrub it with the kinds of chemicals banned from Fukushima – all to no avail. Not even with the special stuff I smuggled across the border from Mexico. They’ve even hit my grill. The bastards.

I’ve tried all sorts of things to keep them away, but I suspect I’ll need to build out something using an Arduino and a chainsaw by next summer. This year is a loss – 2 weeks after the big cleaning, even with me spraying it down every few days, our patio is unusable.

I haven’t killed them yet. To be honest I don’t think that will work – more likely it would just land me on the local news. But I do grill a lot more chicken and turkey out there. Oh yeah, smell the sweet smell of superior birds roasting in agony.

Hey… did you hear some dudes named DigiNotar got hacked? On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s DR article on DAM.
  • Adrian quoted on dangers to law enforcement from the recent hack. My Spanish is good, no?
  • Adrian’s DR article on Fraud Detection and DAM.

Favorite Securosis Posts

  • Adrian Lane: Security Management 2.0: Vendor Evaluation. Mike’s pushing the envelope here, but this is the only way to figure out how the product really works.
  • Mike Rothman & David Mortman: Data Security Lifecycle 2.0. With this cloud stuff, our underlying computing foundation is changing. This post assembles a lot of the latest and greatest about how to protect the data.

Other Securosis Posts

  • Speaking at OWASP: September 22 and 23.
  • Incite 9/7/2011: Decisions, Decisions.
  • Security Management 2.0: Vendor Evaluation – Culling the Short List.
  • The New Path of Least Resistance.
  • Making Bets.

Favorite Outside Posts

  • Gunnar: Do we know how to make software?
  • David Mortman: Quick Blip: Hoff In The Cube at VMworld 2011: On VMware Security.
  • Mike Rothman: The Good, Bad, and Ugly of Technical Acquisitions. Not sure what Amrit is doing now, besides writing great summaries of what happens when Big Company X buys small start-up Y.
  • Adrian Lane: Don’t Hate The ‘Playas’ – Hate The Game. My fav this week is Mike’s Dark Reading post – it gets to the heart of the issue.
  • Pepper: Protecting a Laptop from Simple and Sophisticated Attacks. Mike clearly thought hard about risks, and took some very unusual steps to protect against them as well as he could manage.
  • Rich: OS X won’t let you properly remove bad DigiNotar certificates. I know I need to write this up, but being sick has gotten in the way. Apple really needs to address this – for PR reasons as much as for user security.

Research Reports and Presentations

  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.

Top News and Posts

  • Copyright Troll Righthaven Goes on Life Support. Die, troll, die!
  • Star Wars Fans Get Pwned.
  • Fraudulent Google credential found in the wild.
  • Evidence of Infected SCADA Systems Washes Up in Support Forums.
  • VMware: The Console Blog: VMware Acquires PacketMotion.
  • Don Norman: Google doesn’t get people, it sells them.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Russ, in response to Incite 9/7/2011: Decisions, Decisions.

Re Please Stop! Dear Adrian, While I believe one of the useful roles Securosis can play in the industry is to help turn down the hype on over-blown issues, in this particular case I’m not sure I agree with your conclusion. I spent a career in aviation safety, and found that what the average line pilot was talking about every day had nowhere near the amount of aviation safety content we as aviation safety advocates thought to be adequate (an example would be the extraneous cockpit conversation prior to the Colgan Air Flight 3407 crash in Buffalo). Could it be that the fact APTs is not brought up in your daily conversations with firms could be an indication of how far we have to go in creating a

Data Security Lifecycle 2.0

We reference this content a lot, so I decided to compile it all into a single post. This is the original content, including internal links, and has not been re-edited.

Introduction

Four years ago I wrote the initial Data Security Lifecycle and a series of posts covering the constituent technologies. In 2009 I updated it to better fit cloud computing, and it was incorporated into the Cloud Security Alliance Guidance, but I have never been happy with that work. It was rushed and didn’t address cloud specifics nearly sufficiently.

Adrian and I just spent a bunch of time updating the cycle, and it is now a much better representation of the real world. Keep in mind that this is a high-level model to help guide your decisions, but we think this time around we were able to identify places where it can more specifically guide your data security endeavors. (As a side note, you might notice I use “data security” and “information-centric security” interchangeably. I think infocentric is more accurate, but data security is more recognized, so that’s what I tend to use.)

If you are familiar with the previous model you will immediately notice that this one is much more complex. We hope it’s also much more useful. The old model really only listed controls for data in different phases of the lifecycle – and didn’t account for location, ownership, access methods, and other factors. This update should better reflect the more complex environments and use cases we tend to see these days.

Due to its complexity, we need to break the new Lifecycle into a series of posts. In this first post we will revisit the basic lifecycle, and in the next post we will add locations and access.

The lifecycle includes six phases, from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed).

  • Create: This is probably better named Create/Update, because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
  • Store: Storing is the act of committing digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
  • Use: Data is viewed, processed, or otherwise used in some sort of activity.
  • Share: Data is exchanged between users, customers, and partners.
  • Archive: Data leaves active use and enters long-term storage.
  • Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).

These high-level activities describe the major phases of a datum’s life, and in a future post we will cover security controls for each phase. But before we discuss controls we need to incorporate two additional aspects: locations and access devices.

Locations and Access

In our last post we reviewed the Data Security Lifecycle, but other than some minor wording changes (and a prettier graphic, thanks to PowerPoint SmartArt) it was the same as our four-year-old original version. But as we mentioned, quite a bit has changed since then, exemplified by the emergence and adoption of cloud computing and increased mobility. Although the Lifecycle itself still applies to basic, traditional infrastructure, we will focus on these more complex use cases, which better reflect what most of you are dealing with on a day-to-day basis.
Locations

One gap in the original Lifecycle was that it failed to adequately address movement of data between repositories, environments, and organizations. A large amount of enterprise data now transitions between a variety of storage locations, applications, and operating environments. Even data created in a locked-down application may find itself backed up someplace else, replicated to alternative standby environments, or exported for processing by other applications. And all of this can happen at any phase of the Lifecycle.

We can illustrate this by thinking of the Lifecycle not as a single, linear operation, but as a series of smaller lifecycles running in different operating environments. At nearly any phase data can move into, out of, and between these environments – the key for data security is identifying these movements and applying the right controls at the right security boundaries.

As with cloud deployment models, these locations may be internal, external, public, private, hybrid, and so on. Some may be cloud providers, others traditional outsourcers, or perhaps multiple locations within a single data center. For data security, at this point there are four things to understand:

  • Where are the potential locations for my data?
  • What are the lifecycles and controls in each of those locations?
  • Where in each lifecycle can data move between locations?
  • How does data move between locations (via what channel)?

Access

Now that we know where our data lives and how it moves, we need to know who is accessing it and how. There are two factors here:

  • Who accesses the data?
  • How can they access it (device & channel)?

Data today is accessed from all sorts of different devices. The days of employees only accessing data through restrictive applications on locked-down desktops are quickly coming to an end (with a few exceptions). These devices have different security characteristics and may use different applications, especially with applications we’ve moved to SaaS providers – who often build custom applications for mobile devices, which offer different functionality than PCs.

Later in the model we will deal with who, but the diagram below shows how complex this can be – with a variety of data locations (and application environments), each with its own data lifecycle, all accessed by a variety of devices in different locations. Some data lives entirely within a single location, while other data moves in and out of various locations… and sometimes directly between external providers.

This completes our “topographic map” of the Lifecycle. In our next post we will dig into mapping data flow and controls. In the next few posts we will finish covering background material, and
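
To make the topographic map concrete, here is a minimal sketch of the model as a data structure, in Python; all names are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    CREATE = "create"
    STORE = "store"
    USE = "use"
    SHARE = "share"
    ARCHIVE = "archive"
    DESTROY = "destroy"

@dataclass
class Location:
    """An operating environment running its own smaller lifecycle."""
    name: str                 # e.g. "internal-datacenter", "public-iaas"
    phases: set = field(default_factory=lambda: set(Phase))
    access_devices: set = field(default_factory=set)

@dataclass
class Movement:
    """Data crossing a security boundary – where controls belong."""
    source: Location
    destination: Location
    phase: Phase              # lifecycle phase at which the move occurs
    channel: str              # e.g. "replication", "backup", "export"

# Example: data created internally, replicated out to a cloud provider.
internal = Location("internal-datacenter", access_devices={"managed-desktop"})
cloud = Location("public-iaas", access_devices={"managed-desktop", "mobile"})
flows = [Movement(internal, cloud, Phase.STORE, "replication")]

for f in flows:
    print(f"{f.source.name} -> {f.destination.name} "
          f"during {f.phase.value} via {f.channel}")
```

Enumerating locations, devices, and movements this way is one direct way to answer the four location questions and two access questions above.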

Detecting and Preventing Data Migrations to the Cloud

One of the most common modern problems facing organizations is managing data migrating to the cloud. The very self-service nature that makes cloud computing so appealing also makes unapproved data transfers and leakage possible. Any employee with a credit card can subscribe to a cloud service and launch instances, deliver or consume applications, and store data on the public Internet. Many organizations report that individuals or business units have moved (often sensitive) data to cloud services without approval from, or even notification to, IT or security.

Aside from traditional data security controls such as access controls and encryption, there are two other steps that help manage unapproved data moving to cloud services:

  • Monitor for large internal data migrations with Database Activity Monitoring (DAM) and File Activity Monitoring (FAM).
  • Monitor for data moving to the cloud with URL filters and Data Loss Prevention (DLP).

Internal Data Migrations

Before data can move to the cloud it needs to be pulled from its existing repository. Database Activity Monitoring can detect when an administrator or other user pulls a large data set or replicates a database. File Activity Monitoring provides similar protection for file repositories such as file shares. These tools can provide early warning of large data movements. Even if the data never leaves your internal environment, this is the kind of activity that shouldn’t occur without approval. These tools can also be deployed within the cloud (public and/or private, depending on architecture), so they can help with inter-cloud migrations as well.

Movement to the Cloud

While DAM and FAM indicate internal movement of data, a combination of URL filtering (web content security gateways) and Data Loss Prevention (DLP) can detect data moving from the enterprise into the cloud.

URL filtering allows you to monitor (and prevent) users connecting to cloud services. The administrative interfaces for these services typically use different addresses than the consumer side, so you can distinguish between someone accessing an admin console to spin up a new cloud-based application and a user accessing an application already hosted with the provider. Look for a tool that offers a list of cloud services and keeps it up to date, as opposed to one where you need to create a custom category and manage the destination addresses yourself. Also look for a tool that distinguishes between different users and groups, so you can allow access for different employee populations.

For more granularity, use Data Loss Prevention. DLP tools look at the actual data/content being transmitted, not just the destination. They can generate alerts (or block) based on the classification of the data. For example, you might allow corporate private data to go to an approved cloud service, but block the same content from migrating to an unapproved service. As with URL filtering, look for a tool that is aware of the destination address and comes with pre-built categories. Since all DLP tools are aware of users and groups, that should come by default.

This combination isn’t perfect, and there are plenty of scenarios where it might miss activity, but that is a whole lot better than completely ignoring the problem. Unless someone is deliberately trying to circumvent security, these steps should capture most unapproved data migrations.
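
As a toy illustration of how the URL filtering and DLP checks combine, here is a sketch in Python; the service lists and content classifier are hypothetical stand-ins for the categories and analysis engines a real gateway or DLP product maintains for you.

```python
# Hypothetical catalogs that a commercial gateway would keep up to date.
KNOWN_CLOUD_SERVICES = {"files.approved-saas.example",
                        "uploads.random-cloud.example"}
APPROVED_CLOUD_SERVICES = {"files.approved-saas.example"}

def classify(content: bytes) -> str:
    """Stand-in for real DLP content analysis (patterns, fingerprints)."""
    return "corporate-private" if b"CONFIDENTIAL" in content else "public"

def check_upload(destination_host: str, content: bytes) -> str:
    """Combine URL filtering (destination) with DLP (content)."""
    if destination_host not in KNOWN_CLOUD_SERVICES:
        return "allow"   # not a tracked cloud service; URL filter passes it
    if destination_host not in APPROVED_CLOUD_SERVICES:
        return "block"   # a cloud service, but not an approved one
    if classify(content) == "corporate-private":
        return "alert"   # approved destination, sensitive content: log/review
    return "allow"

print(check_upload("uploads.random-cloud.example", b"CONFIDENTIAL roadmap"))  # block
print(check_upload("files.approved-saas.example", b"CONFIDENTIAL roadmap"))   # alert
```

As the post notes, a determined insider can evade this, but it catches the casual, unapproved migrations that make up most of the problem.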

Friday Summary (Not Too Morbid Edition): August 26, 2011

Last Thursday I thought I was dying. Not a joke. Not an exaggeration. As in “approaching room temperature”. I was just outside D.C. having breakfast with Mike before going to teach the CCSK instructors class. In the middle of a sentence I felt… something. Starting from my chest I felt a rush to my head. An incredibly intense feeling on the edge of losing consciousness. Literally out of nowhere, while sitting. I paused, told Mike I felt dizzy, and then the second wave hit. I said, “I think I’m going down”, told him to call 9-1-1, and had what we in the medical profession call “a feeling of impending doom”. I thought I was having either an AMI (acute myocardial infarction – a heart attack, not the cloud thing) or a stroke. I’ve been through a lot over the years and nothing, nothing, has ever hit me like that. The next thoughts in my head were what I know my last thoughts on this planet will be. I never want to experience them again.

Seconds after this hit I checked my pulse, since the feeling was like what many patients with uncontrolled, rapid heart rates have described. But mine was only up slightly. It tapered off enough that I didn’t think I was going to crash right then and there. Fortunately Mike is a bit… inexperienced… and instead of calling 9-1-1 with his cell phone he got up to tell the restaurant. I stopped him, the feeling relented a bit more, and I asked if there was a hospital close by (Mike lived in that area for 15 years). There was one down the road and he took me there. (Never do that. Call the ambulance – we medical folks are freaking idiots.)

I spent the next 29 hours in the hospital being tested and monitored. Other than a slightly elevated heart rate, everything was normal. CT scan of the head, EKG, blood work to rule out a pulmonary embolus (common traveling thing), echocardiogram, chest x-ray, and more. I ate what I was told was a grilled cheese sandwich. Assuming that was true, I’m certain it was microwaved and the toast marks airbrushed. Once they knew I wasn’t going to die they let me loose and I flew home (a day late).

I won’t lie – I was pretty shaken up. Worse than when I fell 30 feet rock climbing and punctured my lung. Worse than skiing through avalanche terrain, or the time my doctor called to ask “are you close to the hospital” after a wicked infection. Especially with my rescue and extreme sports background I’ve been in a lot of life-risking situations, but I never before thought “this is it”.

Tuesday I went to the doctor, and after a detailed history and reviewing the reports she thinks it was an esophageal spasm. The nerves in your thorax aren’t always very discriminating. They are like old Ethernet cables, prone to interference and crosstalk. A spasm in the wrong spot will trigger something that is essentially indistinguishable from a heart attack (to your brain). I’ve been having some reflux lately from all the road food, so it makes sense. There are more tests on the way, but it seems you all are stuck with me for much, much longer.

All that testing was like the best physical ever, and I’m in killer good shape. But I am going to chill a bit for the next few weeks, which was in the works anyway. False positives suck. Now I know why you all hate IDS.

Update: I was talking with our pediatrician and he went through the same thing once. He asked, “Can I ask you a personal question?” “Sure,” I replied. “So what was running through your head when it happened?” I said, “I can’t believe I won’t be there for my girls”. “Oh good,” he went. “I’ve never talked to anyone else who went through it, but I was trying to figure out if I had enough life insurance for my family”. And a coworker of my wife’s mentioned she had the same thing, and called her kids to say goodbye. To be honest, now I don’t feel so bad. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted on dangers to law enforcement from the recent hack. My Spanish is good, no?
  • Adrian’s DR article on Fraud Detection and DAM.
  • Rich, Zach, and Martin on the Network Security Podcast.

Favorite Securosis Posts

  • Adrian Lane: Cloud Security Q&A from the Field.
  • Mike Rothman: Spotting That DAM(n) Fake. Grumpy Adrian is a wonder to behold. And he is definitely grumpy in this post.
  • David Mortman: Spotting That DAM(n) Fake.
  • Rich: Beware Anti-Malware Snake Oil.

Other Securosis Posts

  • Security Management 2.0: Revisiting Requirements.
  • Fact-based Network Security: Outcomes and Operational Data.
  • Incite 8/24/2011: Living Binary.
  • Security Management 2.0: Platform Evolution.

Favorite Outside Posts

  • Adrian Lane: Visa Kills PCI Assessments and Wants Your Processor to Support EMV. This is the carrot I mentioned, which Visa is offering to encourage adoption. As Branden points out, most merchants take more than Visa, but I expect MC to follow suit.
  • Mike Rothman: National Archives Secret Question Fail. H/T to the guys at 37signals for pointing out this security FAIL.
  • David Mortman: Soft switching might not scale, but we need it.
  • Rich: Wim Remes petitioning to get on the ISC2 ballot. Although I burned someone’s certificate on stage at DefCon, the organization could do some good if they changed direction. (No, I don’t have a CISSP… as a DefCon goon I’m not sure how to answer that whole “Do you associate with hackers?” question.)

Research Reports and Presentations

  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.

Top News and Posts

  • Chinese Military

Cloud Security Q&A from the Field: Questions and Answers from the DC CCSK Class

One of the great things about running around teaching classes is all the feedback and questions we get from people actively working on all sorts of different initiatives. With the CCSK (cloud security) class, we find that a ton of people are grappling with these issues in active projects, in various stages of deep planning. We don’t want to lose this info, so we will blog some of the more interesting questions and answers we get in the field. I’ll skip general impressions and trends today to focus on some specific questions people in last week’s class in Washington, DC, were grappling with.

We currently use XXX Database Activity Monitoring appliance – is there any way to keep using it in Amazon EC2?

This is a tough one, because it depends completely on your vendor. With the exception of Oracle (last time I checked – this might have changed), all the major Database Activity Monitoring vendors support server agents as well as inline or passive appliances. Adrian covered most of the major issues between the two in his Database Activity Monitoring: Software vs. Appliance paper. The main question for cloud (especially public cloud) deployments is whether the agent will work in a virtual machine/instance. Most agents use special kernel hooks that need to be validated as compatible with your provider’s virtual machine hypervisor. In other words: yes, you can do it, but I can’t promise it will work with your current DAM product and cloud provider. If your cloud service supports multiple network interfaces per instance, you can also consider deploying a virtual DAM appliance to monitor traffic that way, but I’d be careful with this approach and don’t generally recommend it. Finally, there are more options for internal/private cloud, where you can even route the traffic back to a dedicated appliance if necessary – but watch performance if you do.

How can we monitor users connecting to cloud services over SSL?

This is an easy problem to solve – you just need a web gateway with SSL decoding capabilities. In practice, this means the gateway essentially performs a man-in-the-middle attack against your users. To make it work, you install the gateway appliance’s certificate as a trusted root on all your endpoints. This doesn’t work for remote users who aren’t going through your gateway. This is a fairly standard approach for both web content security and Data Loss Prevention, but those of you just using URL filtering may not be familiar with it.

Can I use identity management to keep users out of my cloud services if they aren’t on the corporate network?

Absolutely. If you use federated identity (probably SAML), you can configure things so users can only log into the cloud service if they are logged into your network. For example, you can configure Active Directory to use SAML extensions, then require SAML-based authentication for your cloud service. The SAML token/assertion will only be issued when the user logs into the local network, so they can’t ever log in from another location. You can screw up this configuration by allowing persistent assertions (I’m sure Gunnar will correct my probably-wrong IAM vernacular). This approach will also work for VPN access (don’t forget to disable split tunnels if you want to monitor activity).

What’s the CSA STAR project?

STAR (Security, Trust & Assurance Registry) is a Cloud Security Alliance program where cloud providers perform and submit self-assessments of their security practices.

How can we encrypt big data sets without changing our applications?

This isn’t a cloud-specific problem, but it does come up a lot in the encryption section. First, I suggest you check out our paper on encryption: Understanding and Selecting a Database Encryption or Tokenization Solution. The best cloud option is usually volume encryption for IaaS. You may also be able to use some other form of transparent encryption, depending on the specifics of your database and application. Some proxy-based in-the-cloud encryption solutions are also starting to appear.

That’s it from this class… we had a ton of other questions, but these stood out. As we teach more we’ll keep posting, and I should get input from other instructors as they start teaching their own classes.
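
To show what “volume encryption for IaaS” can look like in practice, here is a rough sketch using Linux dm-crypt/LUKS on an attached block volume, driven from Python; the device path, mapping name, and mount point are hypothetical, and a real deployment would feed keys from a key management system rather than an interactive passphrase.

```python
import subprocess

DEVICE = "/dev/xvdf"          # hypothetical attached cloud block volume
MAPPED_NAME = "secure_data"   # name for the decrypted device mapping

def run(cmd):
    """Run a command, echoing it first; raises if the command fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One-time setup: initialize LUKS on the raw volume (prompts for a passphrase).
run(["cryptsetup", "luksFormat", DEVICE])

# On each attach/boot: unlock the volume, then mount the decrypted mapping.
run(["cryptsetup", "luksOpen", DEVICE, MAPPED_NAME])
run(["mkfs.ext4", f"/dev/mapper/{MAPPED_NAME}"])   # first time only
run(["mount", f"/dev/mapper/{MAPPED_NAME}", "/var/lib/dbdata"])
```

The database or application sees an ordinary block device at the mount point and needs no changes, which is the appeal of transparent volume encryption.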

Proxies and the Cloud (Public and Private)

Recently I had a conversation with a security vendor offering a proxy-based solution for a particular problem (yes, I’m being deliberately obscure). Their technology is interesting, but fundamental changes in how we consume IT resources challenge the very idea that a proxy can effectively address this problem.

The two most disruptive trends in information technology today are mobility and the cloud. With mobility we gain (and demand) anywhere access as the norm, redistributing access across varied devices. At the same time, cloud computing redefines both the data center and the architectures within data centers. Even a private internal cloud dramatically changes the delivery of IT resources. So both delivery and consumption models change simultaneously and dramatically – both distributing and consolidating resources.

What does this have to do with proxies? Generally they have been a great solution to a tough problem. It’s a royal pain to distribute security controls across all endpoints, for both performance and management reasons. For example, there is no DLP or URL filtering solution on the market that can fully enforce the same sorts of rules on an endpoint as on a server. Fortunately for us, our traditional IT architectures naturally created chokepoints. Even mobile users needed them to pipe back into the core for normal business/access reasons – quite aside from security.

But we’ve all seen this eroding over time. That erosion now reminds me of those massive calving glaciers that sank the Titanic – not the slow movers that created all those lovely fjords. From the networking issues inherent to private cloud, to users accessing SaaS resources directly without going through an enterprise gateway, the proxy model is facing challenges. In some cloud deployments you can’t use proxies at all.

There are many things I still like proxies for, but here are some rough rules I use to figure out when they make sense:

  • If you have a bunch of access devices in a bunch of locations, you either need to switch to an agent or reroute everything to the proxy (not always easy to do).
  • Proxies don’t need to be in your core network – they can be in the cloud (like our VPN server, which we use for browsing on public WiFi). This means putting more trust in your cloud provider, depending on what you are doing.
  • Proxies in private cloud and virtualization (e.g., encryption or network traffic analysis) need to account for (potentially) mobile virtual machines within the environment. This requires carefully architecting both physical and virtual networks, and considering how to define provisioning rules for the cloud.
  • With a private cloud, unless you move to agents, you’ll need to build inline virtual proxies, bounce traffic out of the cloud, or find a hypervisor-level proxy (not many today – more coming).
  • Performance varies.

But the reality is that the more we adopt cloud, the fewer fixed checkpoints we’ll have, and the more we will have to evolve our definition of ‘proxy’ away from its current meaning.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.