Table Stakes

This morning I published a column over at Dark Reading that kicked off some cool comments on Twitter. Since, you know, no one leaves blog comments anymore. The article is the upshot of various frustrations that have been annoying me lately. To be honest, I could have summarized the entire thing as “grow the f* up”. I’m just as tired of the “security is failing” garbage as I am of ridonkulous fake ROI models, our obsession with threats as the only important metric, and the inability of far too many security folks to recognize operational realities. Since I’m trying to be better about linking to major articles, here’s an excerpt:

There’s been a lot of hand-wringing in the security community lately. Complaints about compliance, vendors and the industry, or the general short-sightedness of those we work for, who define our programs based on the media and audit results. Now we whine about developers ignoring us, executives mandating support for iPads we can’t control (while we still run the patently unsecurable Windows XP), executives who don’t always agree with our priorities, or bad guys coming after us personally. We’re despondent over endless audit and assessment cycles, FUD, checklists, and half-baked products sold for fully-baked prices, with sales guys targeting our bosses to circumvent our veto.

My response? Get over it. These are the table stakes, folks, and if you aren’t up for the game, here’s a dollar for the slot machines.


FAM: Market Drivers, Business Justifications, and Use Cases

Now that we have defined File Activity Monitoring, it’s time to talk about why people are buying it, how it’s being used, and why you might want it.

Market Drivers

As I mentioned earlier, the first time I saw FAM was when I dropped the acronym into the Data Security Lifecycle. Although some people were tossing the general idea around, there wasn’t a single product on the market. A few vendors were considering introducing something, but conversations with users showed there clearly wasn’t market demand. This has changed dramatically over the past two years, due to a combination of indirect compliance needs, headline-driven security concerns, and gaps in existing security tools. Although the FAM market is completely nascent, interest is slowly growing as organizations look for a better handle on their unstructured file repositories. We see three main market drivers:

  • As an offshoot of compliance. Few regulations require continuous monitoring of user access to files, but quite a few require some level of audit of access control, particularly for sensitive files. As you’ll see later, most FAM tools also include entitlement assessment, and they monitor and clearly report on activity. We see some organizations consider FAM initially to help generate compliance reports, and later activate additional capabilities to improve security.
  • Security concerns. The combination of APT-style attacks against sensitive data repositories, and headline-grabbing cases like WikiLeaks, is driving clear interest in gaining control over file repositories.
  • To increase visibility. Although few FAM deployments start with the goal of providing visibility into file usage, once a deployment starts it’s not uncommon to use it to gain a better understanding of how files are used within the organization, even when this doesn’t serve a compliance or security need.

FAM, like its cousin Database Activity Monitoring, typically starts as a smaller project to protect a highly sensitive repository, then expands coverage as it proves its value. Since it isn’t generally required directly for compliance, we don’t expect the market to explode, but rather to grow steadily.

Business Justifications

If we turn the market drivers around, four key business justifications emerge for deploying FAM:

  • To meet a compliance obligation or reduce compliance costs. For example, to generate reports on who has access to sensitive information, or who accessed regulated files over a particular time period.
  • To reduce the risk of major data breaches. While FAM can’t protect every file in the enterprise, it provides significant protection for the major file repositories whose compromise turns a self-contained data breach into an unmitigated disaster. You’ll still lose files, but not necessarily the entire vault.
  • To reduce file management costs. Even if you use document management systems, few tools provide as much insight into file usage as FAM. By tying usage, entitlements, and user/group activity to repositories and individual files, FAM enables robust analysis to support other document management initiatives such as consolidation.
  • To support content discovery. Surprisingly, many content discovery tools (mostly Data Loss Prevention), and manual processes, struggle to identify file owners. FAM can use a combination of entitlement analysis and activity monitoring to help determine who owns each file.
Example Use Cases

By now you likely have a good idea how FAM can be used, but here are a few direct use cases:

  • Company A deployed FAM to protect sensitive engineering documents from external attacks and insider abuse. They monitor the shared engineering file share, generate a security alert if more than 5 documents are accessed in less than 5 minutes, and block copying of the entire directory (see the sketch at the end of this post).
  • A pharmaceutical company uses FAM to meet compliance requirements for drug studies. The tool generates a quarterly report of all access to study files, and generates security alerts when IT administrators access them.
  • Company C recently performed a large content discovery project to locate all regulated Personally Identifiable Information, but struggled to determine file owners. Their goal is to reduce sensitive data proliferation, but simple file permissions rarely indicate the owner, which they need to know before removing or consolidating data. With FAM they monitor the discovered files to determine the most common accessors – who are often the file owners.
  • Company D has had problems with sales executives sucking down proprietary customer information before taking jobs with competitors. They use FAM to generate alerts based on both high-volume access and authorized users accessing older files they’ve never touched before.

As you can see, the combination of tying users to activity, with the ability to generate alerts (or block) based on flexible use policies, makes FAM interesting. Imagine being able to kick off a security investigation based on a large amount of file access, or low-and-slow access by a service or administrative account.

File Activity Monitoring vs. Data Loss Prevention

The relationship between FAM and DLP is interesting. The two technologies are extremely complementary – so much so that in one case (as of this writing) FAM is a feature of a DLP product – but they achieve slightly different goals. The core value of DLP is its content analysis capability: the ability to dig into a file and understand the content inside. FAM, on the other hand, doesn’t necessarily need to know the contents of a file or repository to provide value. Certain access patterns themselves often indicate a security problem, and knowing the exact file contents isn’t always needed for compliance initiatives such as access auditing. FAM and DLP work extremely well together, but each provides plenty of value on its own.
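
To make the Company A policy concrete, here is a minimal sketch of that kind of sliding-window threshold rule. This is purely illustrative – the class, names, and thresholds are hypothetical, and real FAM products implement such policies natively rather than in user code:

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical sliding-window rule: alert when one user touches more than
# THRESHOLD files on a monitored share within WINDOW.
WINDOW = timedelta(minutes=5)
THRESHOLD = 5

class BulkAccessRule:
    def __init__(self):
        self.accesses = {}  # user -> deque of access timestamps

    def record_access(self, user, path, when=None):
        when = when or datetime.utcnow()
        window = self.accesses.setdefault(user, deque())
        window.append(when)
        # Expire accesses that have fallen out of the time window.
        while window and when - window[0] > WINDOW:
            window.popleft()
        if len(window) > THRESHOLD:
            return f"ALERT: {user} accessed more than {THRESHOLD} files in {WINDOW} (latest: {path})"
        return None

rule = BulkAccessRule()
alert = None
for i in range(6):
    alert = rule.record_access("jsmith", f"/engineering/plans/doc{i}.vsd")
print(alert)  # fires on the sixth access inside the five-minute window
```

A real product would feed the same event stream into the blocking action described above; the point here is simply how little state a useful activity policy requires.
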


Introduction to File Activity Monitoring

A new approach to an old problem

One of the more pernicious problems in information security is catching when someone performs an action they are authorized to perform, but does so in a potentially harmful way. For example, in most business environments it’s important to allow users broad access to sensitive information, but this exposes us to all sorts of data loss/leakage scenarios. We want to know when a sales executive crosses the line from accessing customer information as part of their job to siphoning it for a competitor.

In recent years we have adopted tools like Data Loss Prevention to help detect leaks of defined information, and Database Activity Monitoring to expose deep database activity and potentially detect unusual access. But despite these developments, one major blind spot remains: monitoring and protecting enterprise file repositories. Existing system and file logs rarely offer the level of detail needed to truly track activity, generally don’t correlate across multiple repository types, don’t tie users to roles/groups, and don’t support policy-based alerts. Even existing log management and Security Information and Event Management tools can’t provide this level of information.

Four years ago, when I initially developed the Data Security Lifecycle, I suggested a technology called File Activity Monitoring. At the time I saw it as similar to Database Activity Monitoring, in that it would give us the same insight into file usage that DAM provides for database access. Although the technology didn’t yet exist, it seemed like a very logical extension of DLP and DAM. Over the past two years the first FAM products have entered the market, and although market demand is nascent, numerous calls with a variety of organizations show that interest and awareness are growing. FAM addresses a problem many organizations are now starting to tackle, and the time is right to dig into the technology and learn what it provides, how it works, and what features to look for.

Imagine having a tool that detects when an administrator suddenly copies the entire directory containing the latest engineering plans, or when a user with rights to a file outside their business unit accesses it for the first time in 3 years. Or imagine being able to hand an auditor a list of all access, by user, to patient record files. Those are merely a few of the potential uses for FAM.

Defining FAM

We define FAM as: Products that monitor and record all activity within designated file repositories at the user level, and generate alerts on policy violations.

This leads to the key defining characteristics:

  • Products are able to monitor a variety of file repositories, including at minimum standard network file shares (SMB/CIFS). They may additionally support document management systems and other network file systems.
  • Products are able to collect all activity, including file opens, transfers, saves, deletions, and additions.
  • Activity can be recorded and centralized across multiple repositories with a single FAM installation (although multiple products may be required, depending on network topology).
  • Recorded activity is correlated to users through directory integration, and the product should understand file entitlements and user/group/role relationships (a sample record is sketched below).
  • Alerts can be generated based on policy violations, such as an unusual volume of activity by user or file/directory.
  • Reports can be generated on activity for compliance and other needs.
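
To make the correlation characteristic concrete, here is a sketch of what a single user-correlated activity record might contain. The schema and field names are hypothetical – every product defines its own format – but the key point from the definition stands: a raw file operation is tied back to a directory user and their groups and entitlements, not just an IP address or machine account.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical FAM activity record (illustrative only).
@dataclass
class FileActivityEvent:
    timestamp: datetime
    repository: str                  # e.g., an SMB/CIFS share or document management system
    path: str                        # file or directory acted upon
    operation: str                   # open, transfer, save, delete, add, ...
    user: str                        # resolved through directory integration (e.g., AD/LDAP)
    groups: list = field(default_factory=list)  # group/role memberships at event time
    entitlement: str = ""            # the permission that allowed the operation

event = FileActivityEvent(
    timestamp=datetime(2011, 3, 1, 9, 30),
    repository=r"\\fileserver\engineering",
    path=r"plans\widget-v2.vsd",
    operation="open",
    user="CORP\\jsmith",
    groups=["Engineering", "Widget-Project"],
    entitlement="read",
)
```
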
You might think much of this should be possible with DLP, but unlike DLP, File Activity Monitoring doesn’t require content analysis (although FAM may be part of, or integrated with, a DLP solution). FAM expands the data security arsenal by allowing us to understand how users interact with files, and to identify issues even when we don’t know their contents. DLP, DAM, and FAM are all highly complementary. Through the rest of this series we will dig more into the use cases, technology, and selection criteria. Note – the rest of the posts in the series will appear in our Complete Feed.


What No One Is Saying about That Big HIPAA Fine

By now you have probably seen that the U.S. Department of Health and Human Services (HHS) fined Cignet Healthcare a whopping $4.3M for, and I believe this is a legal term, being total egotistical assholes. (Because “willful neglect” just doesn’t have a good ring to it.) This is all over the security newsfeeds, despite having nothing to do with security. It’s so egregious that if any vendor puts this number in their sales presentation, I suggest you simply stand up and walk out of the room. Don’t even bother to say anything – it’s better to leave them wondering.

Where do I come up with this? The fine was levied because Cignet pretty much told HHS and a federal court to f* off when asked for materials to investigate some HIPAA complaints. To quote the ThreatPost article:

Following patient complaints, repeated efforts by HHS to inquire about the missing health records were ignored by Cignet, as was a subpoena granted to HHS’s Office of Civil Rights ordering Cignet to produce the records or defend itself in any way. When the health care provider was ordered by a court to respond to the requests, it disgorged not just the patient records in question, but 59 boxes of original medical records to the U.S. Department of Justice, which included the records of 11 individuals listed in the Office of Civil Rights Subpoena, 30 other individuals who had complained about not receiving their medical records from Cignet, as well as records for 4,500 other individuals whose information was not requested by OCR.

No IT. No security breach. No mention of security issues whatsoever. Just big boxes of paper and a bad attitude.


Friday Summary: February 25, 2011

In the relatively short period of time I have been on this planet, three time periods really stand out to me as watershed moments in computing technology. The first was the dawn of the personal computing era, which conveniently overlapped with the golden age of video arcades. For me it started the day my elementary school teacher introduced us to a Commodore PET, ran through the first Mac, and tapered off in the late 80s when home computers stopped being an anomaly. I don’t think the excitement I felt was merely the result of being an enthusiastic young male. ASCII porn didn’t really cut it, even for a 14-year-old geek.

Next was the dot-com era, around the time I should have graduated college if I hadn’t dragged out my undergrad a solid 8 years. In my memory it started when I signed up with my first dial-up ISP and played with Gopher and newsgroups – through the emergence of Mosaic, Netscape, and my first web sites (ugly) – and faded with the dot-com crash and crappy TV studio websites (which still, mostly, suck). Personally I went from paramedic, to PC tech, to sysadmin, to network admin, to developer in those short years. (Fast learner, I guess.)

The third era? Right now. It started with the dual emergence of the iPhone and Amazon Web Services, and it’s years away from ending. For me the bellwether moments were my first Intel-based MacBook Pro running Parallels (I converted the official Gartner image into a VM to run it there), followed by the iPhone, with a little Dropbox mixed in. The overlapping models of mobility and cloud computing are creating one of the most exciting times to be in technology I can remember. With lower barriers to entry in terms of costs and hardware, and near-ubiquitous accessibility (even accounting for AT&T wireless), I’m more psyched today than when I built my first little company to make doinky web apps and do a little security consulting. I seriously wish I were out there doing startups, but it’s not quite the time for a career change. When I can spin up 5 different servers, on 5 different operating systems, in 5 minutes for under $5? From my iPad? That kicks so much more ass than making a crappy embossed background for my old ‘professional’ looking site.

As for security? Oh my god, is this a freaking awesome time to do what we do. The threats matter, the assets are important, and the opportunities are nearly endless. I realize a lot of people are depressed about the whole industry game and compliance cycle, but that’s a small penalty to pay for the excitement and meaning of our work. You don’t get a seat at the table unless the stakes are high. Life is good.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Video of Rich on MSNBC. He apologizes for the eyebrow thing.
  • Mort cited talking about cloud security at BSides.
  • Rich quoted at SearchSecurity on cloud.
  • RSA Podcast on Agile development, Security Fail.
  • Protegrity calls Securosis one of their favorite blogs.
  • Data is Safe – Until It’s Not. Apparently Adrian telling the retail sector they suck at security has legs. And fortunately for us, WhiteHat Security published data to back up his claim.
  • Clearing The Air On DAM. Adrian’s Dark Reading post.

Favorite Securosis Posts

  • Rich: FireStarter: The New Cold War. There seems to be lots of naivete out there. Guess what – they hack us, we hire people to hack them. The world goes on.
  • Mike Rothman & Adrian Lane: What You Really Need to Know about Oracle Database Firewall. Rich calls out marketing buffoonery. FTW.
Other Securosis Posts

  • React Faster and Better: Respond, Investigate, and Recover.
  • Could This Be WikiLeaks for the Criminal Computer Underground?
  • What I Learned at RSAC.
  • Incite 2/23/2011: Giving up.
  • RSA: the Only Difference Between a Rut and a Grave Is the Depth.
  • RSA: We Now Go Live to Our Reporters on the Scene.
  • How to Encrypt Block Storage in the Cloud with SecureCloud.
  • RSA 2011: A Few Pointers.
  • The Securosis Guide to RSA 2011: The Full Monty.

Favorite Outside Posts

  • Rich: Gunnar follows the Heartland cash. I haven’t seen anyone else track the financials of a company involved in a major breach so closely. Before we start talking “dollars per record lost”, we need more of this kind of work.
  • Mike Rothman: The obsession with next. Given that next is all we saw at RSA, this was a timely post on the 37signals blog.
  • Adrian Lane: Russian Cops Crash Pill Pusher Party. Oddly, no arrests have been reported, but a great story.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.

Top News and Posts

  • Zeus malware integrating SMS for hacking out-of-band authentication.
  • More on the HBGary hack.
  • Lion Watch. With new FileVault. When to implement that is an open question.
  • SSDs resistant to erasure.
  • Updated SAFEcode Development Practices.
  • Oracle Releases Database Firewall.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Shrdlu, in response to What I Learned at RSAC.

Nice piece, Adrian – and it was good to meet you too. The general sentiment I heard from vendors I talked to was that the overall mood was better at RSA this year and there were more end-users (as opposed to vendors and partners selling to one another). I can’t form an opinion, as this was my first RSA, but I’ve been to a lot of other conferences and I really didn’t see much difference between this one and other “commercial” ones. That being said, I did see some interesting stuff going on, and I think it’s our job to seek it out and


Could This Be WikiLeaks for the Criminal Computer Underground?

When Brian Krebs sent me a link to his latest article on illegal pharmacy networks, my only response was: Holy friggin’ awesomesauce!!! Brian got his hands on 9GB of financial records for what is likely the world’s biggest online spammer/illegal pharmacy network:

In total, these promoters would help Glavmed sell in excess of 1.5 million orders from more than 800,000 consumers who purchased knockoff prescription drugs between May 2007 and June 2010. All told, Glavmed generated revenues of at least $150 million.

Brian told me this is merely the first of a lengthy series he is putting together as he digs through the data and performs additional research. This is true investigative reporting, folks.

Here’s why I think this could be a watershed moment in computer crime: while this may only be the books for one big criminal pharmacy, it shows all the linkages to other corners of global criminal networks. Spammers, black hat hackers, SEO, money launderers… it’s probably all in there, especially once Brian correlates it with his other sources. He did answer one little question I’ve always had: do they actually send people the little blue pills? Yep. And Brian has the shipping records to prove it.


React Faster and Better: Respond, Investigate, and Recover

After you have validated and filtered the initial alert, then escalated to contain and respond to the incident, you may need to escalate further for specialized response, investigation, and (hopefully) recovery. This progression to the next layer of escalation varies more among the organizations we have talked with than the earlier tiers – due to large differences in available resources, skill sets, and organizational priorities – but as with the rest of this series, the essential roles are fairly consistent.

Tier 3: Respond, Investigate, and Recover

Tier 3 is where incident response management, specialized resources, and the heavy hitters reside. In some cases escalation may be little more than a notification that something is going on. In others it might be a request for a specialist, such as a malware analyst for endpoint forensics. This is also the level where most in-depth investigation is likely to occur – including root cause analysis and management of recovery operations. Finally, this level might include all-hands-on-deck response to a massive incident with material loss potential. Despite the variation in when Tier 3 begins, the following structure aligns at a high level with the common processes we see:

  • Escalate response: Some incidents, while not requiring the involvement of higher management, may need specialized resources that aren’t normally involved in a Tier 2 response. For example, if an employee is suspected of leaking data you may need a forensic examiner to look at their laptop. Other incidents require the direct involvement of incident response management and top-tier response professionals. We have listed this as a single step, but it is really a self-contained response cycle of constantly evaluating needs and pulling in the right people – all the way up to executive management if necessary.
  • Investigate: You always investigate to some degree during an incident, but depending on its nature there may be far more investigation after initial containment and remediation. As with most steps in Tier 3, the lines aren’t necessarily black and white. For certain kinds of incidents – particularly advanced attacks – investigation and response (and even containment) are carried out in lockstep. For example, if you detect customized malware, you will need to perform concurrent malware analysis, system forensic analysis, and network forensic analysis.
  • Determine root cause: Before you can close an incident you need to know why it happened and how to prevent it from happening again. Was it a business process failure? Human error? Technical flaw? You don’t always need this level of detail to remediate and get operations back up and running on a temporary basis, but you do need it to fully recover – and more importantly, to ensure it doesn’t happen again. At least not via the same attack vector.
  • Recover: Remediation gets you back up and running in the short term, but in recovery you finish closing the holes and restore normal operations. The bulk of recovery operations are typically handled by non-security IT operations teams, but at least partially under the direction of the security team. Permanent fixes are applied, remaining holes permanently closed, and any restored data examined to ensure you aren’t re-introducing the very problems that allowed the incident in the first place.
  • (Optional) Prosecute or discipline: Depending on the nature of the incident you may need to involve law enforcement and carry a case through to prosecution, or at least discipline or fire an employee.
Since nothing involving lawyers (except billing) ever moves quickly, this can extend years beyond the official end of an incident.

Tier 3 is where the buck stops. There are no other internal resources to help if an incident exceeds its capabilities; in that case outside contractors or specialists need to be brought in, and they are then (effectively) added to your Tier 3 resources.

The Team

We described Tier 1 as dispatchers and Tier 2 as firefighters. Sticking with that analogy, Tier 3 is composed of chiefs, arson investigators, and rescue specialists. These are the folks with the strongest skills and the most training in your response organization.

  • Primary responsibilities: Ultimate incident management. Tier 3 handles incidents that require senior incident management and/or specialized skills. These senior individuals manage incidents, use their extensive skills for complex analysis and investigation, and coordinate multiple business units and teams. They also coordinate, train, and manage lower-level resources.
  • Incidents they manage: Anything Tier 2 can’t handle. These are typically large or complex incidents, or more constrained incidents that might involve material losses or extensive investigation. A good rule of thumb: if you need to inform senior or executive management, or involve law enforcement and/or human resources, it’s likely a Tier 3 incident. This tier also includes specialists such as forensics investigators, malware analysts, and others who focus on a specific domain as opposed to general incident response.
  • When they escalate: When an incident exceeds the combined response capabilities of the organization. In other words, if you need outside help, or if something is so bad (e.g., a major public breach) that executive management becomes directly involved.

The Tools

These responders and managers have a combination of broad and deep skills. They manage large incidents with multiple factors and perform the deep investigations needed to support full recovery and root cause analysis. They tend to use a wide variety of specialized tools, including some they write themselves. It’s impossible to list out all the options, but here are the main categories:

  • Network (full packet capture) forensics: You’ve probably noticed this category appearing at every level. While the focus in the other response tiers is more on alerting and visualization, at this level you are more likely to dig deep into the packets to fully understand what’s going on, for both immediate response and later investigation. If you don’t capture it you can’t analyze it, and full packet capture is essential for the advanced incident response that is the focus here. Once data is gone you can’t get it back – thus our incessant focus on capturing as much as you can, when you can.
  • Endpoint


What You *Really* Need to Know about Oracle Database Firewall

Nothing amuses me more than some nice vendor-on-vendor smackdown action. Well, plenty of things amuse me more, especially Big Bang Theory and cats on YouTube, but the vendor thing is still moderately high on my list. So I quite enjoyed this Dark Reading article on the release of the Oracle Database Firewall. But perhaps a little outside perspective will help. Here are the important bits:

  • As mentioned in the article, this is the first Secerno product release since the acquisition. Despite what Oracle calls it, this is a Database Activity Monitoring product at its core – just one with more of a security focus than audit/compliance, and based on network monitoring (it lacks local activity monitoring, which is why it’s weaker for compliance). Many other DAM products can block, and Secerno can monitor. I have always thought it was an interesting product.
  • Most DAM products include network monitoring as an option. The real difference with Secerno is that they focused far more on the security side of the market, even though historically that segment is much smaller than the audit/monitoring/compliance side. So Oracle has more focus on blocking, and less on capturing and storing all activity.
  • It is not a substitute for Database Activity Monitoring products, nor is it “better” as Oracle claims. It is a form of DAM, but – as competitors mentioned in the article – you still need multiple local monitoring techniques to handle direct access. Network monitoring alone isn’t enough. I’m sure Oracle Services will be more than happy to connect Secerno and Oracle Audit Vault to do this for you.
  • Secerno basically whitelists queries (automatically) and can block unexpected activity (roughly sketched below). This appears to be pretty effective for database attacks, although I haven’t talked to any pen testers who have gone up against it. (They also blacklist, but the whitelist is the main secret sauce.)
  • Secerno had the F5 partnership before the Oracle acquisition. It allowed you to set WAF rules based on something detected in the database (e.g., block a signature or host IP). I’m not sure whether they have expanded this post-acquisition. Imperva is the only other vendor I know of to integrate DAM/WAF.
  • Oracle generally believes that if you don’t use their products you are either a certified idiot or criminally negligent. Neither is true, and while this is a good product I still recommend you look at all the major competitors to see what fits you best. Ignore the marketing claims.
  • Odds are your DBA will buy this when you aren’t looking, as part of some bundle deal. If you think you need DAM for security, compliance, or both, start an assessment process or talk to the DBAs before you get a call one day to start handling incidents.

In other words: a good product with advantages and disadvantages, just like anything else. More security than compliance, but like many DAM tools it offers some of both. Ignore the hype, figure out your needs, and evaluate to figure out which tool fits best. You aren’t a bad person if you don’t buy Oracle, no matter what your sales rep tells your CIO. And seriously – watch out for the deal bundling. If you haven’t learned anything from us about database security by now, hopefully you at least realize that DBAs and security don’t always talk as much as they should (the same goes for Guardium/IBM). If you need to be involved in any database security, start talking to the DBAs now, before it’s too late. BTW, not to toot our own horns, but we sorta nailed it in our original take on the acquisition.
Next we will see their WAF messaging. And we have some details of how Secerno works.
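
For the curious, here is a rough illustration of automatic query whitelisting. To be clear, this is not Secerno’s actual algorithm – that is proprietary and far more sophisticated – it merely sketches the general idea: reduce each statement to a structural fingerprint, learn the set of fingerprints during a baseline period, then alert on or block anything novel.

```python
import re

def fingerprint(sql: str) -> str:
    """Reduce a SQL statement to a structural shape by stripping literals."""
    s = sql.lower().strip()
    s = re.sub(r"'[^']*'", "?", s)   # string literals -> ?
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals -> ?
    return re.sub(r"\s+", " ", s)    # collapse whitespace

class QueryWhitelist:
    def __init__(self):
        self.allowed = set()
        self.learning = True         # start in baseline/learning mode

    def observe(self, sql: str) -> bool:
        fp = fingerprint(sql)
        if self.learning:
            self.allowed.add(fp)
            return True
        return fp in self.allowed    # False means block (or alert)

wl = QueryWhitelist()
wl.observe("SELECT name FROM customers WHERE id = 42")        # learned shape
wl.learning = False
print(wl.observe("SELECT name FROM customers WHERE id = 99")) # True: same shape
print(wl.observe("SELECT name FROM customers WHERE id = 1 OR 1=1"))  # False: novel shape
```

The appeal of this model for security (as opposed to compliance) should be clear: injected SQL rarely matches the structural shape of an application’s legitimate queries.
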


How to Encrypt Block Storage in the Cloud with SecureCloud

This is a bit of a different post for me. One exercise in the CCSK Enhanced class we are developing for the Cloud Security Alliance is to encrypt a block storage (EBS) volume attached to an AWS instance. There are a few different ways to do this, but we decided on Trend Micro’s SecureCloud service for a few reasons. First, setting it up is something we can handle within the time constraints of the class; the equivalent process with TrueCrypt or another native encryption service within our AWS instance would take more time than we have, considering the CCSK Enhanced class is only one day and covers a ton of material. Second, it supports my preferred architecture for encryption: the key server is separate from the encryption engine, which is separate from the data volume. This is actually pretty complex to set up using free/open source tools. Finally, they offer a free 60-day trial.

The downside is that I don’t like using a vendor-specific solution in a class, since it could be construed as endorsement. So please keep in mind that a) there are other options, and b) the fact that we use the tool for the class doesn’t mean it is the best solution for you. Ideally we will rotate tools as the class develops. For example, Porticor is a new company focusing on cloud encryption, and Vormetric is coming out with cloud-focused encryption. I think one of the other “V” companies is also bringing a cloud encryption product out this week. That said, SecureCloud does exactly what we need for this exercise – especially since it’s SaaS-based, which makes setting it up in the classroom much easier.

Here’s how it works:

  • The SaaS service manages keys and users.
  • A local proxy AMI is instantiated in the same availability zone as your main instances and EBS volumes.
  • Agents for Windows Server 2008 or CentOS implement the encryption operations.
  • When you attach a volume, the agent requests a key from the proxy, which communicates with the SaaS server. Once you approve the operation, the key is sent back to the proxy, and then to the agent, for local decryption. The keys are never stored in your availability zone – they are only used at transaction time.
  • You can choose to allow key delivery manually or automatically, based on a variety of policies. This gives you per-instance control over multiple instances of the same image connecting to an encrypted volume. Someone can’t pull your image out of S3, run it, and gain access to the EBS volume, because the key is never stored with the AMI.

This is my preferred encryption model to teach – especially for enterprise apps – because it separates key management from the encryption operations (sketched below). It is the same basic model most well-designed applications use for encrypting data, albeit normally at the data/database level rather than by volume. I’ve only tested the most basic features of the service, and it works well. But there are a bunch of UI nits, and the documentation is atrocious – it was much harder to get this up and running the first time than I expected.

Now for the meat. I’m posting this guide mostly for our students, so they can cut and paste command lines instead of having to do everything manually. So this is very specific to our class; but for the rest of you, once you run through the process you should be able to easily adjust it for your own requirements.
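
For students who want to see the pattern in miniature before touching the real service, here is a hedged sketch of the key-brokering model described above. None of this is Trend’s actual API – the classes and calls are invented for illustration, and the proxy is collapsed into the key service – but it shows why the key never has to live with the image or the volume:

```python
import os

class KeyService:
    """Stands in for the SaaS key manager (reached via the proxy AMI)."""
    def __init__(self):
        self.keys = {}               # volume_id -> key material
        self.approved = set()        # (volume_id, instance_id) pairs

    def register_volume(self, volume_id):
        self.keys[volume_id] = os.urandom(32)      # 256-bit volume key

    def approve(self, volume_id, instance_id):     # manual or policy-based approval
        self.approved.add((volume_id, instance_id))

    def release_key(self, volume_id, instance_id):
        if (volume_id, instance_id) not in self.approved:
            raise PermissionError("key release denied by policy")
        return self.keys[volume_id]

class Agent:
    """Stands in for the in-instance encryption agent."""
    def __init__(self, instance_id, service):
        self.instance_id = instance_id
        self.service = service

    def attach_volume(self, volume_id):
        key = self.service.release_key(volume_id, self.instance_id)
        # The key is used transiently in memory to unlock the volume, and is
        # never written to local storage or baked into the AMI.
        return len(key) == 32

service = KeyService()
service.register_volume("vol-123")
service.approve("vol-123", "i-abc")                # you approve this instance
assert Agent("i-abc", service).attach_volume("vol-123")
try:
    Agent("i-evil", service).attach_volume("vol-123")   # unapproved instance
except PermissionError:
    print("a copied AMI without approval gets no key")
```
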
Hopefully this will help fill the documentation gaps a bit… but you should still read Trend’s documentation, because I don’t explain why I have you do all these steps. This also covers 2 of the class exercises, because I placed some of the requirements we need later for encryption into the first, more basic, exercise.

CCSK Enhanced Hands-on Exercises

Preparation (Windows only)

If you are a Windows user you must download an SSH client and update your key file to work with it:

  • Download and run http://www.chiark.greenend.org.uk/~sgtatham/putty/latest/x86/putty-0.60-installer.exe.
  • Go to Start > Program Files > PuTTY > PuTTYgen.
  • Click File, select *.*, and point it to your _name_.PEM key file.
  • Click OK, then Save Key, somewhere you will remember it.
  • Download and install Firefox from http://mozilla.org.

Create your first cloud server

In this exercise we will launch our first AMI (Amazon Machine Image) instance and apply basic security controls.

Steps:

  • Download and install ElasticFox: http://aws.amazon.com/developertools/609?_encoding=UTF8&jiveRedirect=1.
  • Log into the AWS EC2 Console: https://console.aws.amazon.com/ec2/home. Go to Account, then Security Credentials. Note your Access Keys. Direct link: https://aws-portal.amazon.com/gp/aws/developer/account/index.html.
  • Click X.509 Certificates. Click Create a new Certificate. Download both the private key and certificate files, and save them where you will remember them.
  • In Firefox, go to Tools > ElasticFox. Click Credentials, then enter your Access Key ID and Secret Access Key. Then click Add. You are now logged into your account.
  • If you do not have your key pair (not the certificate key we just created, but the AWS key you created when you set up your account initially) on your current system, you will need to create a new key pair and save a copy locally. To do this, click KeyPairs, then click the green button to create a new pair. Save the file where you will remember it. If you lose this key file, you will no longer be able to access the associated AMIs.
  • Click Images. Set your Region to us-east-1. Paste “ami-8ef607e7” into the Search box. You want the CentOS image.
  • Click the green power button to launch the image. In the New Instance(s) Tag field enter CCSK_Test1. Choose the Default security group and availability zone us-east-1. Click Launch.
  • ElasticFox will switch to the Instances tab, and your instance will show as Pending. Right-click and select Connect to Instance. You will be asked to open the Private Key File you saved when you set


RSA Guide 2011: Virtualization and Cloud

2010 was a fascinating year for cloud computing and virtualization. VMware locked down the VMsafe program, spurring acquisition of the smaller vendors in the program with access to the special APIs. Cloud computing security moved from hype to hyper-hype, at the same time some seriously interesting security tools hit the market. Despite all the confusion, there was a heck of a lot of progress and growing clarity. And not all of it came from the keyboard of Chris Hoff.

What We Expect to See

For virtualization and cloud security, there are four areas to focus on:

  • Innovation cloudination: For the second time in this guide I find myself actually excited by new security tech (don’t tell my mom). While you’ll see a ton of garbage on the show floor, there are a few companies (big and small) with innovative products designed to help secure cloud computing – everything from managing your machine keys to encrypting IaaS or SaaS data. These aren’t merely virtual appliance versions of existing hardware/software, but ground-up, cloud-specific security tools. The ones I’m most interested in are around data security, auditing, and identity management.
  • Looking SaaSy: Technically speaking, not all Software as a Service counts as cloud computing, but don’t tell the marketing departments. This is another area that’s more than mere hype – nearly every vendor I’ve talked with (and worked with) is looking at leveraging cloud computing in some way. Not merely because it’s sexy, but because SaaS can help reduce management overhead for security in a bunch of ways. And since all of you already pay subscription and maintenance licenses anyway, pure greed isn’t the motivator. These offerings work best for small and medium businesses, and reduce the amount of equipment you need to maintain on site. They may also help with distributed organizations. SaaS isn’t always the answer, and you really need to dig into the architecture, but I’ve been pleasantly surprised at how well some of these services can work.
  • VMsafe cracking: VMware locked down its VMsafe program, which allowed security vendors direct access to certain hypervisor functions via API. The program is dead, except that the APIs are maintained for existing members. This was probably driven by VMware wanting to control most of the security action, and everyone else is forced onto the less effective vShield Zones system. What does this mean? Anyone with VMsafe access has a leg up on the competition, which spurred some acquisitions. Everyone else is a bit handcuffed in comparison, so when looking at your private cloud security (on VMware), focus on the fundamental architecture (especially around networking).
  • Virtual appliances everywhere: You know all those security vendors that promoted their amazing performance due to purpose-built hardware? Yeah, now they all offer the same performance in virtual (software) appliances. Don’t ask the booth reps too much about that, though, or they might pull a Russell Crowe on you. On the upside, many security tools do make sense as virtual appliances – especially those with lower performance requirements (like management servers), and for the mid-market.

We guarantee your data center, application, and storage teams are looking hard at, or already using, cloud and virtualization, so this is one area you’ll want to pay attention to despite the hype. And that’s it for today.
Tomorrow we will wrap up with Security Management and Compliance, as well as a list of all the places you can come heckle me and the rest of the Securosis team. And yes, Mike will be up all night assembling this drivel into a single document to be posted on Friday. Later…


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.