Securosis

Research

When Closed Is Good

I don’t really know how to take this article on Eugene Kaspersky’s interview at InfoSec. The iPhone will be niche in 5 years because it’s closed? We should have databases of smartphone users? I’m really hoping some of it is just translation and context issues, which is quite possible. And I’m glad he didn’t say the iPhone is less secure because it’s closed, which is a common trope from a few folks in the AV world.

I believe that closed systems can actually be better for security, when designed properly. Otherwise why are we all obsessed with FIPS-140 tamper resistance? Perhaps it’s because ‘closed’ has multiple meanings – and we need to differentiate between three of them for security:

Closed as in locked down. The platform uses controls to restrict what can run on it.

Closed as in proprietary. In other words, not Open Source.

Closed as in super secret. Code/hardware/etc. is hidden and/or obfuscated.

The common argument for proprietary or hidden being bad is that you can’t see what’s inside to evaluate it (or fix it). I do think this is true for things like crypto algorithms, but not for complex applications. A little obfuscation can even help security, and to be honest your odds of crawling the code and finding problems are pretty low – especially since dynamic analysis and fuzzing are so effective at finding holes. There is a ton of testing you can do without access to the source code.

But the kind of closed I think is important to security is the locked platform. Done properly, this reduces attackers’ ability to run arbitrary commands or code, and thus improves security. This assumes the vendor is responsive when cracks are discovered.

So back to the iPhone. It suffers far fewer real-world security incidents than Android because it’s closed. It’s not perfect, but how many apps has Apple had to pull? Compared to Google? If Google can even pull them (there are other marketplaces, remember)?
And hardware controls make it pretty darn hard to perform deep exploitation (so some really smart researchers tell me). In an interview last week I suggested that Apple should do the same thing with the App Store on Macs, but make it optional there. Opt in, and the system will only let you install App Store apps. Us geeks can opt out and continue to do what we want. I suspect this would go a heck of a long way toward protecting nontechnical users, especially from things like phishing attacks. Anyway, just some random thoughts. And keep them in context – I’m not saying closed is always better, but that it can be.


How to Encrypt IaaS Volumes

Encrypting IaaS storage is a hot topic, but it’s time to drop the esoterica and provide some technical details. I will use a lot of terminology from last week’s post on IaaS storage options, so you should probably read that one first if you haven’t already.

Within the cloud you have all the same data storage options as in traditional infrastructure – from the media layer all the way up to the application. To keep this post from turning into a white paper, we will limit ourselves to volume storage, such as Amazon Elastic Block Storage (EBS), OpenStack volumes, and Rackspace RAID volumes. We’ll cover object storage and database/application options in future posts.

Before we delve into the technology we should cover the risk/use cases. Volume encryption is very interesting, because it highlights some key differences between cloud and traditional infrastructure. In your non-cloud environment the only way for someone to steal an entire drive is to walk in and yank it from the rack, or plug in a second drive, make a byte-level copy, and walk out with that. I’m simplifying a bit, but for the most part they would need some type of physical access to get the entire drive.

In the cloud it’s very different. Anyone with sufficient rights in your management plane can snapshot a volume and move it around. It only takes 2-3 command lines to snapshot a drive off to object storage, make it public, and then load it up in a hostile environment. So IaaS encryption:

Protects volumes from snapshot cloning/exposure.

Protects volumes from being explored by the cloud provider (and private cloud admins).

Protects volumes from being exposed by physical loss of drives (more for compliance than as a real-world security issue).

Personally I worry much more about management plane/snapshot abuse than a malicious cloud admin. Now let’s delve into the technology.
The key to evaluating data at rest encryption is to look at the locations of the three main components:

The data (what you are encrypting).

The encryption engine (the code/hardware that encrypts).

The key manager.

For example, our entire Understanding and Selecting a Database Encryption or Tokenization Solution paper is about figuring out where to put these bits to satisfy your requirements.

IaaS volume encryption is very similar to media encryption in physical infrastructure. It’s a coarse control designed to encrypt entire ‘drives’, which in our case are virtual instead of physical. Whenever you mount a cloud volume to an instance it appears as a drive, which actually makes our lives easier. This protects against admin abuse, because the only way to see the data is to go through a running instance. It protects against snapshot abuse, because cloning only gets encrypted data. Today there are three main models:

Instance-managed encryption: The encryption engine runs within the instance, and the key is stored on the volume but protected by a passphrase or public/private keypair. We use this model in the CCSK cloud security training – the volume is encrypted with the standard Linux dm-crypt (managed by the cryptsetup utility), with the key protected by a passphrase (hashed with SHA-256) on the volume. This is great for portability – you can detach and move the volume anywhere you need, or even snapshot it, and it can only be opened with the passphrase. The passphrase should only be in volatile memory in your instance, which isn’t recorded during a snapshot. The downside is that if you want to automatically mount volumes (say, as you spin up additional instances, or if you need to reboot) you must either embed the passphrase/key in the instance (bad) or rely on a manual process (which can be automated with cloud-init, but that’s another big risk). You also can’t really build in integrity checking (which we will discuss in a moment).
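The instance-managed model can be sketched in a few lines. This is purely illustrative – it is not dm-crypt, and the toy XOR key wrap stands in for a real scheme like AES key wrap – but it shows the data flow: a random volume key is wrapped with a key derived from the passphrase, only the wrapped blob lands on the volume, and the passphrase stays in volatile memory. All function names here are made up for the sketch.

```python
import hashlib
import os

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    # Derive a key-encrypting key from the passphrase (PBKDF2-HMAC-SHA256).
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def wrap(volume_key: bytes, kek: bytes) -> bytes:
    # Toy key wrap: XOR against a SHA-256 digest of the KEK. Real systems
    # use AES key wrap (RFC 3394); this only illustrates the data flow.
    stream = hashlib.sha256(kek).digest()
    return bytes(a ^ b for a, b in zip(volume_key, stream))

unwrap = wrap  # XOR wrapping is its own inverse

# What lands on the volume header: salt + wrapped key. The raw volume key
# and the passphrase live only in volatile memory inside the instance.
salt = os.urandom(16)
volume_key = os.urandom(32)
header = salt + wrap(volume_key, derive_kek(b"long passphrase here", salt))

# Remount later (or after moving/snapshotting the volume): re-derive the
# KEK from the passphrase and unwrap the volume key.
recovered = unwrap(header[16:], derive_kek(b"long passphrase here", header[:16]))
assert recovered == volume_key
```

Snapshotting the volume only captures `header` (and ciphertext), which is useless without the passphrase – which is exactly why this model resists the management-plane abuse described earlier.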
This method isn’t perfect but is well suited to many use cases. I don’t know of any commercial options, but this is free in many operating systems.

Externally managed encryption: The encryption engine runs in the instance, but the keys are managed externally and issued to the instance on request. This is more suitable for enterprise deployments because it scales far better and provides better security. One great advantage is that if your key manager is cloud aware, you can run additional integrity checks via the API and get quite granular in your policies for issuing keys. For example, you can automate key issuance if the instance was launched from a certain account, has an approved instance ID, or meets other criteria. Or you can add a manual check into the process, where the instance requests the key and a security admin has to approve it, providing excellent separation of duties. The key manager can run in any of 3 locations: as dedicated hardware/server, as an instance, or as a service. The dedicated hardware or server needs to be connected to your cloud and is used only in private/hybrid clouds – its appeal is higher security or convenient extension of an existing key management deployment. Vormetric, SafeNet, and (I believe) Voltage offer this. Running in an instance is more convenient, and likely secure enough if you don’t need FIPS-140 certified hardware and trust the hypervisor it’s running on. No one offers this yet, but it should be on the market later this year. Lastly, you can have a service manage your keys, like Trend SecureCloud.

Proxy encryption: In this model you connect the volume to a special instance or appliance/software, and then connect your instance to the encryption instance. The proxy handles all crypto operations, and may keep keys either onboard or in an external manager. This model is similar to the way many backup encryption tools work. The advantage is that the encryption engine runs (hopefully) in a more secure environment.
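The policy-gated key issuance described for externally managed encryption might look like the sketch below. The criteria (launching account, approved instance ID, optional manual sign-off) come straight from the post; everything else – class names, the request shape – is invented for illustration and matches no real vendor API.

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyRequest:
    account_id: str
    instance_id: str
    admin_approved: bool = False  # the manual separation-of-duties gate

class KeyManager:
    def __init__(self, trusted_accounts, approved_instances, require_approval=False):
        self.trusted_accounts = set(trusted_accounts)
        self.approved_instances = set(approved_instances)
        self.require_approval = require_approval
        self._keys = {}  # instance_id -> volume key

    def issue_key(self, req: KeyRequest) -> Optional[bytes]:
        # Policy checks from the post: launching account and instance ID.
        if req.account_id not in self.trusted_accounts:
            return None
        if req.instance_id not in self.approved_instances:
            return None
        # Optional manual gate: a security admin signs off before release.
        if self.require_approval and not req.admin_approved:
            return None
        return self._keys.setdefault(req.instance_id, os.urandom(32))

km = KeyManager({"acct-100"}, {"i-0abc"}, require_approval=True)
assert km.issue_key(KeyRequest("acct-100", "i-0abc")) is None   # awaiting approval
key = km.issue_key(KeyRequest("acct-100", "i-0abc", admin_approved=True))
assert key is not None
```

The point of the design is that the instance never stores the key: it must satisfy the policy (and possibly a human) on every mount, so a cloned snapshot or rogue instance gets nothing.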
Porticor is an option here. This should give you a good overview of the different options. One I didn’t mention, since I don’t know of any commercial or freeware options, is hypervisor-managed encryption. Technically you could have


File Activity Monitoring Webinar This Wednesday

Ever hear of File Activity Monitoring? You know, that cool new data security tech I published a white paper on? This Wednesday at 11 PT I will be giving a webinar on FAM (sponsored by Imperva – a guy’s gotta eat). I’ll cover the basics of the technology, why it’s useful, and some deployment scenarios/use cases. I do think this is something most of you are going to be looking at over the next few years (even if you don’t buy it), so you might as well get started early 🙂 If you’re interested, you can register now.


IaaS Storage 101

I started writing a post on IaaS encryption options and quickly realized I should probably precede it with one outlining the IaaS storage options. One slightly confusing bit is that IaaS storage really falls into two categories: storage as a service, where the storage itself is the product; and storage for IaaS compute instances, where the storage is tied to running virtual machines. IaaS storage options include:

Raw storage: As far as I can tell, this is only available for private clouds, and not on every platform. For certain high-speed operations it allows you to map a virtual volume to dedicated raw media. This skips abstraction layers for increased performance, but you lose many of the advantages of cloud storage. It’s rarely used, and may only be available on VMware.

Volume storage: The easiest way to think of volume storage is as a virtual hard drive for your instances. There are a few different architectures, but volumes are typically a clump of assigned blocks (often stored redundantly in the back end). When you create a volume the volume controller assigns the blocks, distributes them onto the physical storage infrastructure, and presents them as a raw volume. You then need to attach the volume to an instance, install partitions and file systems on it, and manage it like a drive. Although it presents as a single drive to your instance, volume storage is more like RAID – each block is replicated in multiple locations on different physical drives. Amazon EBS and Rackspace RAID volumes are examples.

Object storage: Object storage is sometimes referred to as file storage. Rather than a virtual hard drive, object storage is more like a file share. Object storage performs more slowly, but is more efficient. The back end can be structured in different ways – most often a database/file system hybrid, with a bunch of processes to keep track of where everything is stored, plus replication, cleanup, and other housekeeping functions.
Amazon S3, Rackspace Cloud Files, and OpenStack Swift are examples.

For our purposes, we will consider cloud databases part of PaaS. So when we talk about IaaS storage, we are mostly talking volumes and objects. Volumes are like hard drives, and object storage is effectively a file share with a nifty API.

An additional piece is important for running IaaS instances: image management. Images (such as Virtual Machine Images and Amazon Machine Images) can be stored in a variety of ways, but most often in object storage because it’s cheaper and more efficient. Layered on top is an image manager such as OpenStack Glance, which tracks the images and ties them into the compute management plane. When you create an IaaS instance you pick an image, which the image manager then pulls from object storage and streams to the hypervisor/system that will host the instance. But the image manager doesn’t need to use object storage. Glance, for example, can use pretty much anything – including local file storage, which is particularly handy in test environments.

Lastly, we can’t forget about snapshots. Snapshotting an instance essentially makes a block-level copy of the volume it’s running on or attached to. Snapshot creation is just about instantaneous, but snapshots need not be kept as volumes – a snapshot may be sent off to more-efficient object storage instead. If you want to turn a snapshot back into a volume you send a request, storage is assigned, and the image streams back into volume storage from object storage; you can then attach it to instances. You’ll notice some nice interplays between object and volume storage to keep things as efficient as possible. It’s one of the cool things about cloud computing.

Hopefully this gives you a better idea of how the back end works. In a future post I will talk about volume encryption and the relationship between volume and object storage.
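The volume → snapshot → object store → volume round trip described above can be modeled in a few lines. This is a toy in-memory model to show the flow – not any real cloud API – with made-up function names:

```python
import uuid

volumes = {}       # volume_id -> mutable list of blocks (volume storage)
object_store = {}  # snapshot_id -> immutable block copy (object storage)

def create_volume(blocks):
    # The volume controller assigns blocks and presents them as a raw volume.
    vid = "vol-" + uuid.uuid4().hex[:8]
    volumes[vid] = list(blocks)
    return vid

def snapshot(volume_id):
    # Block-level copy of the volume, parked in cheaper object storage.
    sid = "snap-" + uuid.uuid4().hex[:8]
    object_store[sid] = tuple(volumes[volume_id])
    return sid

def restore(snapshot_id):
    # Assign fresh volume storage and stream the snapshot back into it.
    return create_volume(object_store[snapshot_id])

vid = create_volume([b"block0", b"block1"])
sid = snapshot(vid)
volumes[vid][0] = b"changed"   # later writes don't affect the snapshot
new_vid = restore(sid)
assert volumes[new_vid] == [b"block0", b"block1"]
```

The snapshot is an immutable copy in object storage, which is exactly why (per the encryption posts above) anyone who can reach the management plane can clone your data without touching a physical drive.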


Friday Summary (OS/2 Edition): June 24, 2011

There’s something I need to admit. I’m not proud of it, but it’s time to get it off my chest and stop hiding, no matter how embarrassing it is. You see, it happened way back in 1994. I was working as a paramedic at the time, so a lot of my decisions were affected by sleep deprivation. Oh heck – I’ll just say it. One day I walked into a store, pulled out my checkbook, and bought a copy of OS/2 Warp. To top it off I then installed it on the only (dreadfully underpowered) laptop I could afford at the time.

I can’t really explain my decision. I think it was that geek hubris most of us pass through at some point in our young lives. I fell for the allure of a technically superior technology, completely ignoring the importance of the application ecosystem around it. I tried to pretend that more efficient memory management and true multitasking could make up for little things like being limited to about 1.5 models of IBM printers. It wouldn’t be the last time I underestimated the power of ecosystem vs. technology. I’m also the guy who militantly avoided iPods in favor of generic MP3 players. I was thinking features, not design. Until I finally broke down and bought my first iPod, that is. The damn thing just worked, and it looked really nice in the process, even though it lacked external storage.

After Dropbox’s colossal screwup I started looking at alternatives again. I didn’t need to look very hard, because people emailed and tweeted some options pretty quickly. A few look very interesting, and they are all dramatically more secure. The problem is that none of them look as polished or simple – never mind as stable. I’m not talking about giving up security for simplicity – Dropbox could easily keep their current simplicity and still encrypt on the client. I mean that Dropbox nailed the consumer cloud storage problem early and effectively, quickly building up an ecosystem around it. It’s this ecosystem that provides the corporate-level stability all the alternatives lack.
These alternatives do have a chance to make it if they learn the lessons of Dropbox and Apple, and pay as much attention to design, simplicity, and ecosystem as they do to raw technology. But none of them seem quite that mature yet, so I will mostly watch and play rather than dump what I’m doing and switch over completely. Which is too bad, because I’m starting to regret paying for Dropbox after their latest error. If they address it directly, it won’t be a long-term problem at all. If they don’t, I’ll have to eat my own dog food and move to an alternative provider that meets my minimum security requirements, even though they are at greater risk of failing. Which also forces me to always have contingency options so I don’t lose my data. Sigh. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted on RSA at The Street.
Rich at Newsweek on Mac Defender.
Rich on iPad security at Macworld. (Yes, I’m a major media whore this week.)
Our Dropbox story hit BoingBoing.
Adrian over at Network Computing on GreenSQL.

Favorite Securosis Posts

Adrian Lane: How to Encrypt Your Dropbox Files, at Least until Dropbox Wakes the F* up. Great product, but they need to fix both server and client side security architectures.
David Mortman: Tokenization vs. Encryption: Payment Data Security.
Rich: My older Securing Cloud Data with Virtual Private Storage post.

Other Securosis Posts

7 Myths, Expanded.
IaaS Storage 101.
Is Your Email Address Worth More Than Your Credit Card Number?
New White Paper: Security Benchmarking: Going Beyond Metrics.

Favorite Outside Posts

Adrian Lane: Creating Public AMIs Securely for EC2. This is difficult to do correctly.
David Mortman: Security Expert, Gunnar Peterson, on Leveraging Enterprise Credentials to connect with Cloud applications.
Rich: Why Sony is no surprise. A true must-read. Simplicity doesn’t scale.
Chris Pepper: Fired IT manager hacks into CEO’s presentation, replaces it with porn.
I’m more amused than the fired manager or the CEO.

Research Reports and Presentations

Security Benchmarking: Going Beyond Metrics.
Understanding and Selecting a File Activity Monitoring Solution.
Database Activity Monitoring: Software vs. Appliance.
React Faster and Better: New Approaches for Advanced Incident Response.
Measuring and Optimizing Database Security Operations (DBQuant).
Network Security in the Age of Any Computing.
The Securosis 2010 Data Security Survey.
Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts

Dropbox Left User Accounts Unlocked for 4 Hours Sunday. Feeling like a sooper-genius for encrypting my stuff Saturday.
Antichat Forum Hacker Breach. Shocker – they used weak passwords.
Teen Alleged Member of LulzSec.
Interesting graphic on data breaches.
Toward Trusted Infrastructure for the Cloud Era.
Pentagon Gets Cyberwar Guidelines.
New views into the 2011 DBIR.
Mozilla retires Firefox 4 from security support.
Northrop Grumman constantly under attack by cyber-gangs.
Analysis: LulzSec trackers say authorities are closing in.
WordPress.com hacked.
Amazon’s cloud is full of holes.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Mark, in response to Is Your Email Address Worth More Than Your Credit Card Number?.

Spot on Rich. NIST already defines Email address as PII under 800-122. It seems everyone’s turning a blind eye to the contextual aspect today – conveniently. http://csrc.nist.gov/publications/nistpubs/800-122/sp800-122.pdf “One of the most widely used terms to describe personal information is PII.
Examples of PII range from an individual’s name or email address to an individual’s financial and medical records or criminal history.” In my opinion, what’s often worse is that an email address is also now a primary index to social networking sites (Facebook, LinkedIn, etc.), which immediately presents more gold to mine for a spearphishing attack to present an APT payload – even if the attacker doesn’t have complete access, it’s all too easy these days to build a personal profile from one data


Is Your Email Address Worth More Than Your Credit Card Number?

It used to be that we didn’t care too much if someone stole a pile of email addresses. At worst we’d end up on yet another spam list, and these days most folks have pretty decent spam filters. Sure, it’s annoying, but it’s pretty low on the scale of security risks. But I’m starting to think that email addresses – depending on context – are now worth far more to certain attackers than credit card numbers.

As annoying as credit card fraud is, it’s generally a manageable problem. For us as consumers it’s mostly a nuisance, because we are protected from financial loss. It’s a bigger problem for merchants and banks, but fraud detection systems and law enforcement together manage to keep losses to an acceptable level – otherwise we would see Chip and PIN or other technologies, as opposed to PCI, as the security focus. In terms of economics, we have seen bad guys shift to lower-level persistent fraud rather than big breaches. They’re stealing a lot, but the big lesson from the Verizon Data Breach Investigations Report is that they are stealing smaller batches, and are much more likely to get caught than in the past.

Your email, on the other hand, may be far more valuable. Not necessarily to random online street criminals (although it’s still valuable to them, too), but to more sophisticated attackers. At least if they get your email address with ‘interesting’ context.

Let’s look at the main method of attacks these days. From APT to botnets, we see one consistent trend – reliance on phishing to get past user defenses and gain a beachhead on the target. Get the user to click a link or open a file, and you own their system. “Spear phishing” (highly targeted phishing) has been identified as the primary attack technique currently being used by the APT – they will shift once it stops working so well. Now think about last week’s breach of Sega, or back to the Epsilon breach. In these cases emails, first names, and context were obtained.
Not just an email, but an email with a real name and a site you registered to receive email from. We like to hammer users on how stupid they are for clicking any link in a storm, but what are the odds of even the most seasoned security professionals defending themselves from every single one of these attacks with, in effect, detailed dossiers on the targets? When you get a correctly formatted email with your name from a site you registered with, there’s a reasonable chance you will click – and they can easily afford to send more phishing messages than real mail (spam has been as high as 90% of email on the Internet, and these are much better at looking legitimate and getting past spam filters). Don’t play coy and claim you’ll check the From: address every time – these all come from services you don’t know personally, and often from a third party domain as part of the service.

Considering everything an attacker can do with those resources, I suspect email addresses + context might be the new bad guy hotness. Hit every TiVo subscriber with a personally addressed phishing message, perhaps modeled on the last email blast TiVo actually sent out? Gold.


How to Encrypt Your Dropbox Files, at Least until Dropbox Wakes the F* up

With the news that Dropbox managed to leave every single user account wide open for four hours, it’s time to review encryption options. We are fans of Dropbox here at Securosis. We haven’t found any other tools that so effectively enable us to access our data on all our systems. I personally use two primary computers, plus an iPad and iPhone, and with my travel I really need seamless synchronization of all that content. I always knew the Dropbox folks could access my data (easy to figure out with a cursory check of their web interface code in the browser), so we have always made sure to encrypt sensitive stuff. Our really sensitive content is on a secure internal server, and Dropbox is primarily for working documents and projects – none of which are highly sensitive. That said, I’m having serious doubts about continued use of the service. It’s one thing for their staff to potentially access my data. It’s another to reveal fundamental security flaws that could expose my data to the world. It’s unacceptable, and the only way they can regain user trust is to make architectural changes and allow users to encrypt their content at the client, even if it means sacrificing some server capabilities. I wrote about some options they could implement a while ago, and if they encrypt file contents while leaving metadata unencrypted (at least as a user option), they could even keep a lot of the current web interface functionality, such as restoring deleted files. That said, here are a couple easy ways to encrypt your data until Dropbox wakes up, or someone else comes out with a secure and well-engineered alternative service. (Update: Someone suggested Spideroak as a secure alternative… time to research.) Warning!! Sharing encrypted files is a risk. It is far easier to corrupt data, especially using encrypted containers as described below. Make darn sure you only have the container/directory open on a single system at a time. 
Also, you cannot access files using these encryption tools from iOS or Android.

Encrypted .dmg (Mac only): All Macs support encrypted disk images that mount just like an external drive when you open them and supply your password. To create one, open Disk Utility and click New Image. Save the encrypted image to Dropbox, set a maximum size, and select AES-256 encryption. The only other option to change is to use “sparse bundle disk image” as the Image Format. This breaks your encrypted ‘disk’ into a series of smaller files, which means Dropbox only has to sync the changes rather than copying the whole image on every single modification. This is the method I use – to access my files I double-click the image and enter the password, which mounts it like an external drive. When I’m done I eject it in the Finder.

TrueCrypt (Mac/Windows/Linux): TrueCrypt is a great encryption tool supported on all major platforms. First, download TrueCrypt. Run TrueCrypt and select Create Volume, then “create an encrypted file container”. Follow the wizard with the defaults, placing your file in Dropbox and selecting the FAT file system if you want access to it from different operating systems. If you know what you’re doing, you can use key files instead of passwords, but either is secure enough for our purposes.

Those are my top two recommendations. Although a variety of third-party encryption tools are available, even TrueCrypt is easy enough for an average user. Additionally, some products (particularly security products such as 1Password) properly encrypt anything they store in Dropbox by default. Again, be careful. Don’t ever open these containers on two systems at the same time. You might be okay, or you might lose everything. And (especially for TrueCrypt) you might want to use a few smaller containers to reduce the data sync overhead.
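The sync-overhead point is easy to see with a toy model. Delta sync compares chunks of a file and re-uploads only the ones that changed; a monolithic encrypted container defeats this, because ciphertext changes ripple forward from any edit. The sketch below is purely illustrative – it fakes container encryption with a hash-chained XOR keystream rather than real AES, and the chunk size is arbitrary – but the chunk arithmetic is the point:

```python
import hashlib
import os

CHUNK = 1024  # sync granularity a Dropbox-like client might use

def encrypt_chained(data: bytes, key: bytes) -> bytes:
    # Stand-in for container encryption: each 32-byte block is XORed with
    # a keystream derived from the previous ciphertext block, so an edit
    # ripples through everything after it (like CBC-mode chaining).
    out = bytearray()
    prev = hashlib.sha256(key).digest()
    for i in range(0, len(data), 32):
        ks = hashlib.sha256(prev + key).digest()
        block = bytes(a ^ b for a, b in zip(data[i:i+32], ks))
        out += block
        prev = block
    return bytes(out)

def changed_chunks(a: bytes, b: bytes) -> int:
    # How many chunks a delta-sync client would have to re-upload.
    return sum(1 for i in range(0, len(a), CHUNK) if a[i:i+CHUNK] != b[i:i+CHUNK])

original = os.urandom(16 * CHUNK)
edited = bytes([original[0] ^ 0xFF]) + original[1:]  # one-byte edit up front

# Unencrypted file: one dirty chunk to re-sync.
plain_delta = changed_chunks(original, edited)

# Whole-container encryption: the edit dirties every following chunk.
enc_delta = changed_chunks(encrypt_chained(original, b"k"),
                           encrypt_chained(edited, b"k"))
assert plain_delta == 1
assert enc_delta > plain_delta  # in practice, essentially the whole container
```

This is why the sparse bundle format helps: by splitting the encrypted disk into many small band files, a change only dirties the bands it touches, and the sync client ships just those.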
Dropbox attempts to only synchronize deltas, but encryption can break this, meaning even a small change may require a recopy of the entire container to or from every Dropbox client. And Dropbox may only detect changes when you close the encrypted container, which flushes all changes to the file. I really love how well Dropbox works, but this latest fumble shows the service can’t be trusted with anything sensitive. If their response to this exposure is to improve processes instead of hardening the technology, that will demonstrate a fundamental misunderstanding of the security needs of customers. The alarm went off – let’s see if they hit the snooze button.


Stop Asking for Crap You Don’t Need and Won’t Use

I recently had a conversation with a vendor about a particular feature in their product:

Me: “So you just added XYZ to the product?”
Them: “Yep.”
Me: “You know that no one uses it.”
Them: “Yep.”
Me: “But it’s on all the RFPs, isn’t it?”
Them: “Yep.”

I hear this scenario time and time again. Users ask for features they will never really use in RFPs, simply because they saw them on a competitor’s marketing brochure, or because “it sounds like it could be cool.” The vendors are then forced to either build it in, or just have their sales folks lie about it (it isn’t like you’ll notice). And then users complain about how bloated the products are.

This is a vicious, abusive loop of a relationship. It usually starts when one VERY LARGE client asks for something (which they may or may not use), or a VERY LARGE potential partner asks for some interoperability. It never works right because no one really tests it outside the lab, and almost no one uses it anyway. But it’s on every damn RFP, so all the other vendors sigh in frustration and mock up their own versions. My favorite is DLP/DRM integration. Sure, I’m a firm believer that someday it will be extremely useful. But right now? A bunch of management dudes are throwing it into every RFP, probably after reading something from Jericho, and I’m not sure I know of a single production deployment.

Tired of bloat in your products? Ask for what you need, and then buy it. Stop building RFPs with cut and paste. Don’t order the 7-course meal when you only want PB&J. A nice, fulfilling, yummy PB&J that gets the job done. (No, this doesn’t excuse vendors when the important stuff doesn’t work, but seriously… if you’re going to bitch about bloat, stop demanding it!)


More Control Doesn’t Equal More Secure

Last week, while teaching the CCSK (cloud security) class, the discussion reached a point I often find myself at these days. We were discussing the risks of cloud computing, and one of the students listed “less control” as a security risk. To be honest, this weaves itself through not only the Guidance but most risk analyses I have seen. And it’s not limited to cloud discussions. One of the places I hear it most often is in reference to mobile computing – especially iOS devices.

For example, while hosting an event at RSA earlier this year we had a security pro with over 10 years’ experience state that they don’t let iPads/iPhones in, but they still use Windows XP. When I asked why they allow a patently out of date and insecure OS, while blocking one of the most secure devices on the market, his response was “we know Windows XP and can control it”. Which, to me, is like saying you are satisfied to pick exactly which window the burglar will come and leave through.

More knowledge or control doesn’t necessarily translate into better security. In fact, uncertainty can be a powerful motivator to implement security controls you might otherwise neglect due to a misplaced sense of certainty. We all know you are far less likely to crash in a plane than to die in a car accident. Or that your children are far more at risk of drowning or (again) car accidents than of being abducted by a stranger. But we feel in control when driving a car, so we feel safer even though that’s flat-out wrong.

You can’t control everything. Not your own systems or employees, no matter where they are located. Design for uncertainty, and you can better adapt to new opportunities or threats, at (I suspect, but can’t prove) the same costs. Not that you shouldn’t maintain some degree of control, but don’t assume control means security.


A Different Take on the Defense Contractor/RSA Breach Miasma

I have been debating writing anything on the spate of publicly reported defense contractor breaches. It’s always risky to talk about breaches when you don’t have any direct knowledge of what’s going on. And, to be honest, unless your job is reporting the news it smells a bit like chasing a hearse. But I have been reading the stories, and even talking to some reporters (to give them background info – not pretending I have direct knowledge). The more I read, and the more I research, the more I think the generally accepted take on the story is a little off.

The storyline appears to be that RSA was breached, seed tokens for SecurID were likely lost, and those were successfully used to attack three major defense contractors. Also, the generic term “hackers” is used instead of directly naming any particular attacker. I read the situation somewhat differently:

I do believe RSA was breached and seeds lost, which could allow that attacker to compromise SecurID if they also know the customer, serial number of the token, PIN, username, and time sync of the server. Hard, but not impossible. This is based on the information RSA has released to their customers (the public pieces – again, I don’t have access to NDA info).

In the initial release RSA stated this was an APT attack. Some people believe that simply means the attacker was sophisticated, but the stricter definition refers to one particular country. I believe Art Coviello was using the strict definition of APT, as that’s the definition used by the defense and intelligence industries which constitute a large part of RSA’s customer base.

By all reports, SecurIDs were involved in the defense contractor attacks, but Lockheed in particular stated the attack wasn’t successful and no information was lost.
If we tie this back to RSA’s advice to customers (update PINs, monitor SecurID logs for specific activity, and watch for phishing) it is entirely reasonable to surmise that Lockheed detected the attack and stopped it before it got far, or even anywhere at all. Several pieces need to come together to compromise SecurID, even if you have the customer seeds. The reports of remote access being cut off seem accurate, and are consistent with detecting an attack and shutting down that vector. I’d do the same thing – if I saw a concerted attack against my remote access by a sophisticated attacker I would immediately shut it down until I could eliminate that as a possible entry point. Only the party which breached RSA could initiate these attacks. Countries aren’t in the habits of sharing that kind of intel with random hackers, criminals, or even allies. These breach disclosures have a political component, especially in combination with Google revealing that they stopped additional attacks emanating from China. These cyberattacks are a complex geopolitical issue we have discussed before. The US administration just released an international strategy for cybersecurity. I don’t think these breaches would have been public 3 years ago, and we can’t ignore the political side when reading the reports. Billions – many billions – are in play. In summary: I do believe SecurID is involved, I don’t think the attacks were successful, and it’s only prudent to yank remote access and swap out tokens. Politics are also heavily in play and the US government is deeply involved, which affects everything we are hearing, from everybody. If you are an RSA customer you need to ask yourself whether you are a target for international espionage. All SecurID customers should change out PINs, inform employees to never give out information about their tokens, and start looking hard at logs. If you think you’re on the target list, look harder. And call your RSA rep. 
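To see why a stolen seed alone isn’t enough, it helps to look at how time-based tokens work in general. SecurID’s actual algorithm is proprietary, so the sketch below uses the open TOTP scheme (RFC 6238) as an analogy; the `verify` flow, PIN handling, and all names here are illustrative assumptions, not RSA’s implementation. The point is that the code depends on the seed plus a synchronized clock, and the server still demands the matching PIN and username before the code means anything:

```python
# Illustrative sketch only: RSA SecurID's real algorithm is proprietary.
# This uses the open TOTP scheme (RFC 6238) to show why a stolen seed
# alone is insufficient -- the attacker also needs a clock roughly in
# sync with the server, plus the out-of-band username and PIN.
import hashlib
import hmac
import struct
import time

def totp(seed: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret seed."""
    counter = struct.pack(">Q", timestamp // step)    # current time window
    digest = hmac.new(seed, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(seed: bytes, pin: str, submitted_pin: str, submitted_code: str) -> bool:
    """Hypothetical server-side check: the token code is rejected
    without the matching PIN, even if the seed was compromised."""
    now = int(time.time())
    # Tolerate one 30-second step of clock drift in either direction.
    valid_codes = {totp(seed, now + drift * 30) for drift in (-1, 0, 1)}
    return submitted_pin == pin and submitted_code in valid_codes
```

With the RFC 6238 test seed (`12345678901234567890` as ASCII) and timestamp 59, `totp` yields `287082`, matching the published test vectors. Even with that seed in hand, an attacker who supplies the wrong PIN, or whose clock is outside the drift window, fails the check – which is consistent with RSA telling customers to change PINs and watch their logs.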
But the macro point to me is whether we just crossed a line. As I wrote a couple months ago, I believe security is a self-correcting system. We are never too secure because that’s more friction than people will accept. But we are never too insecure (for long at least) because society stops functioning. If we look at these incidents in the context of the recent Mac Defender hype, financial attacks, and Anonymous/Lulz events, it’s time to ask whether the pain is exceeding our thresholds. I don’t know the answer, and I don’t think any of us can fully predict either the timing or what happens next. But I can promise you that it doesn’t translate directly into increased security budgets and freedom for us security folks to do whatever we want. Life is never so simple.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.