IaaS Storage 101

I started writing up a post on IaaS encryption options and quickly realized I should precede it with a post outlining the IaaS storage options themselves. One slightly confusing bit is that IaaS storage falls into two categories: storage as a service, where the storage itself is the product, and storage for IaaS compute instances, where the storage is tied to running virtual machines. IaaS storage options include:

Raw storage: As far as I can tell, this is only available for private clouds, and not on every platform. For certain high-speed operations it allows you to map a virtual volume to dedicated raw media. This skips abstraction layers for increased performance, but you lose many of the advantages of cloud storage. It’s rarely used, and may only be available on VMware.

Volume storage: The easiest way to think of volume storage is as a virtual hard drive for your instances. There are a few different architectures, but volumes are typically a clump of assigned blocks (often stored redundantly in the back end). When you create a volume, the volume controller assigns the blocks, distributes them onto the physical storage infrastructure, and presents them as a raw volume. You then attach the volume to an instance, install partitions and file systems on it, and manage it like a drive. Although it presents as a single drive to your instance, volume storage is more like RAID – each block is replicated in multiple locations on different physical drives. Amazon EBS and Rackspace RAID volumes are examples.

Object storage: Object storage is sometimes referred to as file storage. Rather than a virtual hard drive, object storage is more like a file share. It performs more slowly, but is more efficient. The back end can be structured in different ways – most often a database/file system hybrid, with a bunch of processes to track where everything is stored and handle replication, cleanup, and other housekeeping functions. Amazon S3, Rackspace Cloud Files, and OpenStack Swift are examples.

For our purposes we will consider cloud databases part of PaaS, so when we talk about IaaS storage we are mostly talking about volumes and objects. Volumes are like hard drives, and object storage is effectively a file share with a nifty API.

An additional piece is important for running IaaS instances: image management. Images (such as Virtual Machine Images and Amazon Machine Images) can be stored in a variety of ways, but most often in object storage because it’s cheaper and more efficient. Layered on top is an image manager such as OpenStack Glance, which tracks the images and ties them into the compute management plane. When you create an IaaS instance you pick an image, which the image manager pulls from object storage and streams to the hypervisor/system that will host the instance. But the image manager doesn’t need to use object storage – Glance, for example, can use pretty much anything, including local file storage, which is particularly handy in test environments.

Lastly, we can’t forget about snapshots. Snapshotting an instance essentially makes a block-level copy of the volume it’s running on or attached to. Snapshot creation is nearly instantaneous, but snapshots need not be kept as volumes – they may be sent off to more efficient object storage instead. If you want to turn a snapshot back into a volume, you send a request, storage is assigned, and the image streams back from object storage into volume storage; you can then attach it to instances.
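To make the volume/object/snapshot distinction concrete, here is a minimal sketch using the modern AWS SDK for Python (boto3, which postdates this post). The instance ID, bucket name, and availability zone are placeholder assumptions, not anything from the original article:

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Volume storage: create a block device and attach it to an instance, where it
# shows up as a raw drive you still need to partition and format yourself.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=8, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",  # placeholder instance ID
                  Device="/dev/sdf")

# Object storage: no drives or file systems -- just put and get objects by key.
s3.put_object(Bucket="example-bucket", Key="docs/report.txt", Body=b"working draft")

# Snapshot: a near-instant block-level copy of the volume, which the platform
# keeps in cheaper object storage rather than as another volume.
snap = ec2.create_snapshot(VolumeId=vol["VolumeId"], Description="point-in-time copy")

# Turning a snapshot back into an attachable volume streams it out of object
# storage and back into volume storage.
restored = ec2.create_volume(AvailabilityZone="us-east-1a",
                             SnapshotId=snap["SnapshotId"])
```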
You’ll notice some nice interplays between object and volume storage to keep things as efficient as possible. It’s one of the cool things about cloud computing. Hopefully this gives you a better idea of how the back end works. In a future post I will talk about volume encryption and the relationship between volume and object storage.


7 Myths, Expanded

I really enjoyed the 7 Myths of Entrepreneurship on Tim Ferriss’ site. The examples are from software development, but they apply to most small tech firms. Having been through 6 startups of my own, I pretty much agree with everything said. More to the point, these ‘myths’ are the common pitfalls I witnessed over and over again. That said, I think there is more to be gained here, and some important points were left on the cutting room floor. Specifically:

Code Ninjas: If you have been in development long enough, you have run into a code ninja. I have seen a single person architect, write, and maintain a full-featured OS ultimately installed on a quarter-million machines. My friend Tony tells an awe-inspiring story of a ninja rewriting the core of a UNIX kernel in a week – after 115 other engineers had failed for a year. I don’t think Java could have happened without Gosling. But you don’t have to hire ninjas to succeed, and many excellent teams lack one. People get caught up in striving for greatness, and think a ninja is their key to it. Sure, it’s better to have one than not. But the real trick is to find a ninja who’s not a prima donna – they have the capacity to belittle, pout, and demotivate as easily as they can produce, teach, and inspire. Software development is not a lone-wolf exercise, so if you’re not sure whether a prospective ninja can coexist with the rest of the team, play it safe.

Running Hot: It’s not just that running hot burns developers out – it’s a sign of mismanagement. Management pushing too hard means unrealistic expectations, or a willingness to push developers to the breaking point (typically using pride as motivation), or both. “Instilling a sense of urgency” is usually a BS way of saying “work harder”. Don’t get me wrong – sometimes you need to push. I have seen engineering-oriented companies be very lackadaisical about delivering product; the Ask version of Ingres was a prime example. But running hot means burnout, lower quality, and turnover. My tactic was to get developers to invest the extra hours in reading about their profession on the train ride home. Technical books, magazines, web groups, conferences, and classes educate. More importantly, learning tends to inspire. It’s hard to be creative when you can’t sleep and are stressed out, and inspiration doesn’t come from slogging through 40 task cards without a break.

Deadlines: The single biggest friction point, and one of the hardest management tasks, is managing to deadlines. It also shows the greatest disconnect between sales and development teams. Builders view deadlines as arbitrary – in their cycle, the code is done when it is done. Sales needs something – anything – to sell, and in their cycle predictable delivery is everything. Yanking features at the deadline pisses off sales and prospects regardless, and getting them back into the queue is a nightmare. Agile can help. Better and stronger product management helps. Vetting sales requests helps. Promising less helps. Ultimately there is no right answer, but the friction can be mitigated.

Hiring: HR is the single greatest detractor from hiring the right people. There, I said it. HR tends to enact hiring standards that weed out the best candidates before they are even interviewed. Hiring managers get the same stale set of resumes because those are what made it through the HR weeding process. And HR only goes by a) a misinterpretation of what you told them to look for, and b) what their peers are doing.
To avoid the resultant poop-colander effect – where only correctly shaped poop gets through – many companies adopted ‘quirky’ hiring practices. And these tricks work – you get a different set of poop candidates. Not better, just different. Two years later you contract with a headhunter – who simply does a better job than HR of understanding your requirements, idiosyncrasies, and biases – and they find you candidates you can accept. Because they are paid very well to understand what you want. Managers: you want better candidates, so do your own screening!


Friday Summary (OS/2 Edition): June 24, 2011

There’s something I need to admit. I’m not proud of it, but it’s time to get it off my chest and stop hiding, no matter how embarrassing it is. You see, it happened way back in 1994. I was working as a paramedic at the time, so a lot of my decisions were affected by sleep deprivation. Oh heck – I’ll just say it. One day I walked into a store, pulled out my checkbook, and bought a copy of OS/2 Warp. To top it off, I then installed it on the only (dreadfully underpowered) laptop I could afford at the time.

I can’t really explain my decision. I think it was that geek hubris most of us pass through at some point in our young lives. I fell for the allure of a technically superior technology, completely ignoring the importance of the application ecosystem around it. I tried to pretend that more efficient memory management and true multitasking could make up for little things like being limited to about 1.5 models of IBM printers. It wouldn’t be the last time I underestimated the power of ecosystem vs. technology. I’m also the guy who militantly avoided iPods in favor of generic MP3 players. I was thinking features, not design. Until I finally broke down and bought my first iPod, that is. The damn thing just worked, and it looked really nice in the process, even though it lacked external storage.

After Dropbox’s colossal screwup I started looking at alternatives again. I didn’t need to look very hard, because people emailed and tweeted some options pretty quickly. A few look very interesting, and they are all dramatically more secure. The problem is that none of them look as polished or simple – never mind as stable. I’m not talking about giving up security for simplicity – Dropbox could easily keep their current simplicity and still encrypt on the client. I mean that Dropbox nailed the consumer cloud storage problem early and effectively, quickly building up an ecosystem around it. It’s this ecosystem that provides the corporate-level stability all the alternatives lack. These alternatives have a chance to make it if they learn the lessons of Dropbox and Apple, and pay as much attention to design, simplicity, and ecosystem as they do to raw technology. But none of them seem quite that mature yet, so I will mostly watch and play rather than dump what I’m doing and switch over completely.

Which is too bad, because I’m starting to regret paying for Dropbox after their latest error. If they address it directly, it won’t be a long-term problem at all. If they don’t, I’ll have to eat my own dog food and move to an alternative provider that meets my minimum security requirements, even though they are at greater risk of failing. Which also forces me to always have contingency options so I don’t lose my data. Sigh.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted on RSA at The Street.
Rich at Newsweek on Mac Defender.
Rich on iPad security at Macworld. (Yes, I’m a major media whore this week.)
Our Dropbox story hit BoingBoing.
Adrian over at Network Computing on GreenSQL.

Favorite Securosis Posts

Adrian Lane: How to Encrypt Your Dropbox Files, at Least until Dropbox Wakes the F* up. Great product, but they need to fix both server and client side security architectures.
David Mortman: Tokenization vs. Encryption: Payment Data Security.
Rich: My older Securing Cloud Data with Virtual Private Storage post.

Other Securosis Posts

7 Myths, Expanded.
IaaS Storage 101.
Is Your Email Address Worth More Than Your Credit Card Number?
New White Paper: Security Benchmarking: Going Beyond Metrics.

Favorite Outside Posts

Adrian Lane: Creating Public AMIs Securely for EC2. This is difficult to do correctly.
David Mortman: Security Expert, Gunnar Peterson, on Leveraging Enterprise Credentials to connect with Cloud applications.
Rich: Why Sony is no surprise. A true must-read. Simplicity doesn’t scale.
Chris Pepper: Fired IT manager hacks into CEO’s presentation, replaces it with porn. I’m more amused than the fired manager or the CEO.

Research Reports and Presentations

Security Benchmarking: Going Beyond Metrics.
Understanding and Selecting a File Activity Monitoring Solution.
Database Activity Monitoring: Software vs. Appliance.
React Faster and Better: New Approaches for Advanced Incident Response.
Measuring and Optimizing Database Security Operations (DBQuant).
Network Security in the Age of Any Computing.
The Securosis 2010 Data Security Survey.
Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts

Dropbox Left User Accounts Unlocked for 4 Hours Sunday. Feeling like a sooper-genius for encrypting my stuff Saturday.
Antichat Forum Hacker Breach. Shocker – they used weak passwords.
Teen Alleged Member of LulzSec.
Interesting graphic on data breaches.
Toward Trusted Infrastructure for the Cloud Era.
Pentagon Gets Cyberwar Guidelines.
New views into the 2011 DBIR.
Mozilla retires Firefox 4 from security support.
Northrop Grumman constantly under attack by cyber-gangs.
Analysis: LulzSec trackers say authorities are closing in.
WordPress.com hacked.
Amazon’s cloud is full of holes.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Mark, in response to Is Your Email Address Worth More Than Your Credit Card Number?.

Spot on Rich. NIST already defines Email address as PII under 800-122. It seems everyone’s turning a blind eye to the contextual aspect today – conveniently. http://csrc.nist.gov/publications/nistpubs/800-122/sp800-122.pdf “One of the most widely used terms to describe personal information is PII. Examples of PII range from an individual’s name or email address to an individual’s financial and medical records or criminal history.” In my opinion, what’s often worse is that an email address is also now a primary index to social networking sites (facebook, LinkedIn etc) which immediately presents more gold to mine for a spearphishing attack to present a APT payload – even if the attacker doesn’t have complete access, its all too easy these days to build a personal profile from one data


Is Your Email Address Worth More Than Your Credit Card Number?

It used to be that we didn’t care too much if someone stole a pile of email addresses. At worst we’d end up on yet another spam list, and these days most folks have pretty decent spam filters. Sure, it’s annoying, but it was pretty low on the scale of security risks. But I’m starting to think that email addresses – depending on context – are now worth far more to certain attackers than credit card numbers.

As annoying as credit card fraud is, it’s generally a manageable problem. For us as consumers it’s mostly a nuisance, because we are protected from financial loss. It’s a bigger problem for merchants and banks, but fraud detection systems and law enforcement together manage to keep losses to an acceptable level – otherwise we would see Chip and PIN or other technologies, rather than PCI, as the security focus. In terms of economics, we have seen bad guys shift to lower-level persistent fraud rather than big breaches. They’re stealing a lot, but the big lesson from the Verizon Data Breach Investigations Report is that they are stealing smaller batches, and are much more likely to get caught than in the past.

Your email, on the other hand, may be far more valuable. Not necessarily to random online street criminals (although it’s still valuable to them too), but to more sophisticated attackers – at least if they get your email address with ‘interesting’ context. Look at the main method of attacks these days. From APT to botnets, we see one consistent trend: reliance on phishing to get past user defenses and gain a beachhead on the target. Get the user to click a link or open a file, and you own their system. “Spear phishing” (highly targeted phishing) has been identified as the primary attack technique currently used by the APT – they will shift once it stops working so well.

Now think about last week’s breach of Sega, or back to the Epsilon breach. In these cases emails, first names, and context were obtained. Not just an email, but an email with a real name and a site you registered to receive email from. We like to hammer users on how stupid they are for clicking any link in a storm, but what are the odds of even the most seasoned security professionals defending themselves from every single one of these attacks when the attackers hold, in effect, detailed dossiers on their targets? When you get a correctly formatted email with your name from a site you registered with, there’s a reasonable chance you will click – and attackers can easily afford to send more phishing messages than real mail (spam has been as high as 90% of email on the Internet, and these messages are much better at looking legitimate and getting past spam filters). Don’t play coy and claim you’ll check the From: address every time – these all come from services you don’t know personally, and often from a third party domain as part of the service.

Considering everything an attacker can do with those resources, I suspect email addresses + context might be the new bad guy hotness. Hit every TiVo subscriber with a personally addressed phishing message, perhaps modeled on the last email blast TiVo actually sent out? Gold.


Tokenization vs. Encryption: Payment Data Security

Continuing our series on tokenization for compliance, it’s time to look at how tokens are used to secure payment data. I will focus on how tokenization is employed for credit card security and helps with compliance, because this model is driving adoption today. As defined in the introduction, tokenization is the process of replacing sensitive information with tokens. The tokens are ‘random’ values that resemble the sensitive data they replace, but lack intrinsic value. In payment data security, tokenization is used to replace sensitive payment data such as bank account numbers, but its recent surge in popularity has been specifically about replacing credit card data.

The vast majority of current tokenization projects are squarely intended to reduce the cost of achieving PCI compliance. Removing credit cards from all or part of your environment sounds like a good security measure, and it is. After all, thieves can’t steal what’s not there. But that’s not actually why tokenization has become popular for credit card replacement. Tokenization is popular because it saves money.

Large merchants must undergo extensive examinations of their IT security and processes to verify compliance with the Payment Card Industry Data Security Standard (PCI-DSS). Every system that transmits or stores credit card data is subject to review. Small and mid-sized merchants must go through all the same steps as large merchants except the compliance audit, where they are on the honor system. The list of DSS requirements is lengthy – a substantial investment of time and money is required to create policies, secure systems, and generate the reports PCI assessors need. While the Council’s prescribed security controls are conceptually simple, in practice they demand a security review of the entire IT infrastructure.

Over the last couple of decades firms have used credit card numbers to identify and reference customers, transactions, payments, and chargebacks. As the standard reference key, credit card numbers were stored in billing, order management, shipping, customer care, business intelligence, and even fraud detection systems. They were used to cross-reference data from third parties in order to gather intelligence on consumer buying trends. Large retail organizations typically stored credit card data in every critical business processing system. When firms began suffering data breaches they started to encrypt databases and archives, and implemented central key management systems to control access to payment data. But faulty encryption deployments, SQL injection attacks, and credential hijacking continued to expose credit cards to fraud. The Payment Card Industry quickly stepped in to require a standardized set of security measures from everyone who processes and stores credit card data.

The problem is that it is incredibly expensive to audit network, platform, application, user, and data security across all these systems – and then document usage and security policies to demonstrate compliance with PCI-DSS. If credit card data is replaced with tokens, almost half of the security checks no longer apply. For example, the requirement to encrypt databases or archives goes away along with the credit card numbers. Key management systems shrink, as they no longer need to manage keys across the entire organization. You don’t need to mask report data, rewrite applications, or reset user authorizations to restrict access. Tokenization drastically reduces the complexity and scope of auditing and securing operations.
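To ground the definition, here is a minimal sketch of the in-house token vault concept – my illustration, not any vendor’s implementation. It swaps a card number for a random token that preserves length and the last four digits, keeping the single copy of the real number in a lookup table. A real vault would encrypt the stored numbers and tightly control access:

```python
import secrets
import sqlite3

# Toy in-house token vault: holds the only copy of each real card number (PAN).
vault = sqlite3.connect(":memory:")
vault.execute("CREATE TABLE vault (token TEXT PRIMARY KEY, pan TEXT UNIQUE)")

def tokenize(pan: str) -> str:
    """Swap a card number for a random token with no intrinsic value."""
    row = vault.execute("SELECT token FROM vault WHERE pan = ?", (pan,)).fetchone()
    if row:  # one token per card, so downstream systems can keep using it as a key
        return row[0]
    # The token resembles the PAN: same length, last four digits preserved so
    # receipts and customer service lookups still work. A production system would
    # also guarantee uniqueness and avoid emitting strings that pass a Luhn check.
    token = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4)) + pan[-4:]
    vault.execute("INSERT INTO vault (token, pan) VALUES (?, ?)", (token, pan))
    return token

def detokenize(token: str) -> str:
    """Only the payment path, under strict access control, maps tokens back."""
    return vault.execute("SELECT pan FROM vault WHERE token = ?", (token,)).fetchone()[0]

token = tokenize("4111111111111111")
# Billing, BI, and fraud systems store and join on the token, never the real PAN,
# which is what pulls them out of PCI audit scope.
```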
That doesn’t mean you don’t need to maintain a secure network, but the requirements are greatly reduced. Even for smaller merchants who can self-assess, tokenization reduces the workload. You must still secure your systems – primarily to ensure token and payment services are not open to attack – but the burden is dramatically lightened.

Tokens can be created and managed in-house, or by third party service providers. Both models support web commerce and point-of-sale environments, and integrate easily with existing systems. With an in-house token platform, you own and operate the token system, including the token database. The token server is integrated with back-end transaction systems and swaps tokens in during transactions. You still keep credit card data, but only a single copy of each card, in the secure token database. This type of system is most common with very large merchants who need to keep the original card data and want to keep transaction fees to a minimum. Third-party token services – such as those provided directly by payment processors – return a token to signify a successful payment, and the merchant retains only the token rather than the credit card. The payment processor stores the card data along with the issued token for recurring payments and dispute resolution. Small and mid-sized merchants with no need to retain credit card numbers lean towards this model – they sacrifice some control and pay higher transaction fees in exchange for convenience, reduced liability, and lower compliance costs.

Deployment of token systems can still be tricky, because you need to substitute tokens for existing payment data. Updates must be synchronized across multiple systems so keys and data maintain relational integrity. Token vendors, both in-house platforms and third party service providers, offer tools and services to perform the conversion. If you have credit card data scattered throughout your company, plan on paying a bit more for the conversion. But tokenization is mostly a drop-in replacement for encryption of credit card data. It requires very little in the way of changes to your systems, processes, or applications. While encryption can provide very strong security, customers and auditors prefer tokenization because it’s simpler to implement, simpler to manage, and easier to audit.

Today, tokenization of payment data is driving the market. But there are many other uses for data tokenization, particularly in health care and for other Personally Identifiable Information (PII). In the mid-term I expect to see tokenization increasingly applied to databases containing PII, which is the topic of our next post.


How to Encrypt Your Dropbox Files, at Least until Dropbox Wakes the F* up

With the news that Dropbox managed to leave every single user account wide open for four hours, it’s time to review encryption options. We are fans of Dropbox here at Securosis. We haven’t found any other tools that so effectively enable us to access our data on all our systems. I personally use two primary computers, plus an iPad and iPhone, and with my travel I really need seamless synchronization of all that content. I always knew the Dropbox folks could access my data (easy to figure out with a cursory check of their web interface code in the browser), so we have always made sure to encrypt sensitive stuff. Our really sensitive content is on a secure internal server, and Dropbox is primarily for working documents and projects – none of which are highly sensitive.

That said, I’m having serious doubts about continued use of the service. It’s one thing for their staff to potentially access my data. It’s another to reveal fundamental security flaws that could expose my data to the world. It’s unacceptable, and the only way they can regain user trust is to make architectural changes and allow users to encrypt their content at the client, even if it means sacrificing some server capabilities. I wrote about some options they could implement a while ago, and if they encrypt file contents while leaving metadata unencrypted (at least as a user option), they could even keep much of the current web interface functionality, such as restoring deleted files.

So here are a couple easy ways to encrypt your data until Dropbox wakes up, or someone else comes out with a secure and well-engineered alternative service. (Update: Someone suggested SpiderOak as a secure alternative… time to research.)

Warning!! Sharing encrypted files is a risk. It is far easier to corrupt data, especially using encrypted containers as described below. Make darn sure you only have the container/directory open on a single system at a time. Also, you cannot access files using these encryption tools from iOS or Android.

Encrypted .dmg (Mac only): All Macs support encrypted disk images that mount just like an external drive when you open them and supply your password. To create one, open Disk Utility and click New Image. Save the encrypted image to Dropbox, set a maximum size, and select AES-256 encryption. The only other option to change is the Image Format: use “sparse bundle disk image”. This breaks your encrypted ‘disk’ into a series of smaller files, which means Dropbox only has to sync the changes rather than copying the whole image on every modification. This is the method I use – to access my files I double-click the image and enter the password, which mounts it like an external drive. When I’m done I eject it in the Finder.

TrueCrypt (Mac/Windows/Linux): TrueCrypt is a great encryption tool supported on all major platforms. First, download TrueCrypt. Run TrueCrypt and select Create Volume, then “create an encrypted file container”. Follow the wizard with the defaults, placing your file in Dropbox and selecting the FAT file system if you want access from different operating systems. If you know what you’re doing you can use key files instead of passwords, but either is secure enough for our purposes.

Those are my top two recommendations. Although a variety of third-party encryption tools are available, even TrueCrypt is easy enough for an average user. Additionally, some products (particularly security products such as 1Password) properly encrypt anything they store in Dropbox by default.
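For those who prefer to script the Disk Utility steps above, they map to Apple’s hdiutil command line tool. This is a minimal sketch (wrapped in Python to keep the examples in one language); the path, size, and volume name are my assumptions to adjust, and hdiutil prompts for the passphrase interactively:

```python
import subprocess
from pathlib import Path

# Create an AES-256 encrypted sparse bundle inside the Dropbox folder. Sparse
# bundles store the 'disk' as many small band files, so Dropbox only re-syncs
# the bands that change instead of the whole image.
image = Path.home() / "Dropbox" / "secure.sparsebundle"  # assumed location
subprocess.run([
    "hdiutil", "create",
    "-encryption", "AES-256",
    "-type", "SPARSEBUNDLE",
    "-fs", "HFS+",             # the standard Mac file system of the era
    "-size", "1g",             # assumed maximum size
    "-volname", "SecureDocs",  # assumed volume name
    str(image),
], check=True)

# Mount it like an external drive; eject via the Finder (or 'hdiutil detach').
subprocess.run(["hdiutil", "attach", str(image)], check=True)
```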
Again, be careful. Don’t ever open these containers on two systems at the same time. You might be okay, or you might lose everything. And (especially for TrueCrypt) you might want to use a few smaller containers to reduce the data sync overhead. Dropbox attempts to synchronize only deltas, but encryption can break this, meaning even a small change may require recopying the entire container to or from every Dropbox client. And Dropbox may only detect changes when you close the encrypted container, which flushes all changes to the file.

I really love how well Dropbox works, but this latest fumble shows the service can’t be trusted with anything sensitive. If their response to this exposure is to improve processes instead of hardening the technology, it will demonstrate a fundamental misunderstanding of their customers’ security needs. The alarm went off – let’s see if they hit the snooze button.


Friday Summary: June 17, 2011

Where would you invest? The Reuters article about Silicon Valley VCs betting on new technologies to protect computer networks got me thinking about where I would invest in computer security. This is a very tough question, because where I would invest as a CIO is different than where I would invest as a venture capitalist. I can see security bets that address most CIOs’ need to spend money, and quite different technologies that address noisy threats, which could make investors money. As Gunnar pointed out in Unfrozen Caveman Attacker (my favorite post this week), firewalls, anti-virus, and anti-malware are SSDD – but clearly people are buying plenty of them.

As long as we are playing with Monopoly money, as a CIO facing today’s threats I would invest in the following areas (regardless of business type):

Endpoint encryption – the easiest-to-use products I could find – to protect USB sticks, laptops, mobile and cloud data.
As little as possible in ‘content’ security for email and web, to slow down spam, phishing, and malware.
Browser security to thwart drive-by attacks.
Application layer monitoring, both for specific applications like web apps and databases, and generic application controls and monitoring for approved applications.
And (probably) file integrity monitoring tools.
A logging service.
Identity, access, and authorization management systems – the basis for determining which users are allowed access and what they can do.

From there it’s all about effective deployment of these technologies, with small shifts in focus to fit specific business requirements. Note that I am ignoring compliance considerations, just thinking about data and system security.

But as a VC, I would invest in what I think will sell. And I can sell lots of things:

“Next Generation Firewalls”.
Cloud and virtual security products – whatever that may be.
WAF.
Anti-Virus, in response to the pervasive fear of system takeover – despite its lack of effectiveness for detection or removal.
Anti-malware – with the escalating number of attacks in the news, this is another easy sell.
Anything under the label “Mobile Security”.
Finally, anything compliance related: technologies that help people quickly achieve compliance with some aspect of PCI or HITECH, or some portion of a requirement.

Quick sales growth is about addressing visible customer pain points – real or perceived. It’s not about selling snake oil – it’s about quick wins and whatever customers demand.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted on Chinese hacking.
Rich discusses Cloud Security.
Rich on LulzSec at BoingBoing.

Favorite Securosis Posts

Adrian Lane: Truth and (Dis)Information.
Mike Rothman: Secure Passwords Sans Sales Pitch. The antidote for brute force is: a password manager.

Other Securosis Posts

The Hazards of Generic Communications.
Stop Asking for Crap You Don’t Need and Won’t Use.
Incite 6/15/2011: Shortcut to Hypocrisy.
More Control Doesn’t Equal More Secure.
Balancing the Short & Long Term.

Favorite Outside Posts

Adrian Lane: Unfrozen Caveman Attacker. Moog like SQL injection! SQL injection WORK!
Mike Rothman: Asymmetry of People’s Time in Security Incidents. Lenny points out why it’s hard to be a security professional. We have more to cover and have to expend exponentially more resources than the bad guys. And this asymmetry goes way beyond incident response.

Project Quant Posts

DB Quant: Index.
NSO Quant: Index of Posts.
NSO Quant: Health Metrics – Device Health.
NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
NSO Quant: Manage Metrics – Deploy and Audit/Validate.

Research Reports and Presentations

Understanding and Selecting a File Activity Monitoring Solution.
Database Activity Monitoring: Software vs. Appliance.
React Faster and Better: New Approaches for Advanced Incident Response.
Measuring and Optimizing Database Security Operations (DBQuant).

Top News and Posts

Use of Exploit Kits on the Rise. Why? Because they work. And because you can create hacks quickly. Sound like a good productivity app?
Big Blue at 100.
Citi Credit Card Hack Bigger Than Originally Disclosed. Apparently the vulnerability was simple URL substitution – you know, randomly editing the credit card number or user ID. Shocking if true!
Adobe’s Quarterly Patch Update.
34 Security Flaws Patched (Microsoft).
New PCI Guidance around Virtualization (PDF). Rich and Adrian will post analysis of this next week.
EU Wants to Criminalize Hacking Tools. D’oh!
Lulz DDoS on CIA.gov.
Beaker vMotioned.
Projector Passwords? Valid point about security prohibiting you from doing your job, and more evidence that Sony is focused on the wrong threats and shooting itself in the foot as a result.
More Malicious Android Apps.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to kurk wismer, in response to FireStarter: Trust and (Dis)Information.

you’re not nuts. telling your opponent how you intend to attack them, thereby giving them an opportunity to deploy countermeasures, would be a great way to cause your strategy to fail. even in the unlikely event that the authorities believe they’ve already gotten all the information they need out of these informants, there are always new actors entering the arena that the informants could have been useful against if their existence hadn’t been given away. the only way this makes sense for an intelligent actor is if the claim about informants is psyops, as you suggest. unfortunately, i don’t think we can assume the authorities are that intelligent. it would certainly be nice if they were, but high-level stupidity is not unheard of.


New White Paper: Security Benchmarking: Going Beyond Metrics

Ever since I wrote the Pragmatic CSO a lifetime ago (okay, 4 years, but it feels like a lifetime), I have been evangelizing about better quantification of security programs. Quantification is valuable even without context, but the two are much more useful together. So I have been pushing hard for finding a set of similar companies to compare your metrics against, to provide that needed context. Alas, with the number of fires we have to fight every day, most security folks just don’t make the time to embrace metrics. This paper focuses on why you should. We consider security metrics at a high level to lay the foundation, then spend most of the paper explaining what benchmarking offers your security program and how to do it. A brief excerpt from the Executive Summary explains it well:

A key aspect of maturing our security programs must be the collection of security metrics and their use to improve operational processes. Even those with broad security metrics programs still have trouble communicating the relative effectiveness of their efforts – largely because they have no point of comparison. Thus when talking about the success/failure of any security program, without an objective reference point senior management has no idea whether your results are good. Or bad. Enter the Security Benchmark, which involves comparing your security metrics to a peer group of similar companies. If you can get a fairly broad set of consistent data (both quantitative and qualitative), then compare your numbers to that dataset, you can get a feel for relative performance. Obviously this is very sensitive data, so due care must be exercised when sharing it, but the ability to transcend the current and arbitrary identification of problem areas as ‘red’ (bad), ‘yellow’ (not so bad), or ‘green’ (a bit better) enables us to finally have some clarity on the effectiveness of our security programs. Additionally, the metrics and benchmark data can be harnessed internally to provide objectives and illuminate trends to improve key security operations.

Those of you who embrace quantification gain an objective method for making decisions about your security program. No more black magic, voodoo, or hypnosis to get your budget approved, okay? The paper has a landing page, or you can download the paper directly: Security Benchmarking: Going Beyond Metrics (PDF). While you are enjoying the paper, please send a thank you to nCircle for licensing it.
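To make “comparing your metrics to a peer group” concrete, here is a toy sketch of the comparison a benchmark performs – the metric and peer values are invented for illustration, not from the paper:

```python
def percentile_rank(own: float, peers: list[float]) -> float:
    """Percentage of peers whose value falls below ours."""
    return 100.0 * sum(1 for p in peers if p < own) / len(peers)

# Hypothetical metric: mean days to patch critical vulnerabilities,
# for us and for a peer group of similar companies.
peer_days_to_patch = [12.0, 19.5, 23.0, 31.0, 14.5, 27.0, 40.0, 22.5]
our_days_to_patch = 18.0

rank = percentile_rank(our_days_to_patch, peer_days_to_patch)
# Lower is better for this metric: if only 25% of peers patch faster than
# we do, we are outperforming the other 75% -- an objective reference point,
# instead of an arbitrary red/yellow/green label.
print(f"We patch faster than {100 - rank:.0f}% of peers")
```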


The Hazards of Generic Communications

Rich, Adrian, and I are pretty lucky. We are bombarded by data coming at us from every direction: what’s working, what’s not, who’s attacking whom, what new widgets are out there – and that’s just the tip of the iceberg. For an information junkie like me, it’s a sort of nirvana. But absorbing all this information without being able to relay it to the folks who need it defeats the purpose. Success in an analyst role comes down to talking to folks at the level, and in the language, they need to digest and use whatever you are telling them. I would expand the scope of that idea: being able to communicate is a critical success factor for any role.

As I mentioned in my recent Dark Reading post (The Truth Will Set You Free), as an industry we aren’t very good at communicating, and this is a big problem as security gets a higher profile. Far too many folks make generic statements about threats and controls, assuming their own perspectives work for everyone. Lonervamp points this out in the cold, harsh light of reality by dismantling a recent McAfee blog post in “smb security advice: don’t read this article”. McAfee’s post allegedly targeted an SMB audience with Five Simple Steps SMBs Can Take to Avoid a Disastrous Data Breach, but its language and guidance were more appropriate to an enterprise reader. LV did a great job discussing why each of their 5 steps was really ridiculous, given SMBs’ general lack of sophistication. Yes, that is another generalization, but it is generally correct in my experience. I’ll cut McAfee some slack because this came from their risk/compliance group – and they’re not really selling anything an SMB would buy. But that’s just one of about a zillion examples of how we screw up communications.

That example is vendor-to-customer communication, but security folks talking to their organizations (at both high and low levels), and consultants talking to customers, suffer from the same tone-deaf approach of figuring a single message works for everyone. It doesn’t. I should know – I have screwed this up countless times in pretty much every role I’ve ever had. So at least I have a few ideas about how to do it better. I’m particularly sensitive to this because we are starting to spend many more cycles on Securosis’ SMB-targeted offering. It literally requires us to shut down the enterprise part of our brains (which the three of us have honed for years) and think like an SMB.

At the end of the day a little reminder can make a world of difference: it’s about understanding your audience. Really? Yes, it’s that simple. But still very difficult in practice. That’s why it’s important to sprinkle in industry vernacular when you talk to a certain industry group. Why you need to focus on business-centric issues and outcomes when you speak to senior management. And why you need to keep things simple when you address a group of small business people. Again, if you are in an SMB, or you are a senior manager, or you work in a certain industry: please don’t take offense. I’m not saying you can’t understand generic language. My point is that you shouldn’t have to. Anyone communicating with you should respect you and your time enough to make their information relevant to you, and to consider their presentation rather than merely repeating what they say to everyone else.


Stop Asking for Crap You Don’t Need and Won’t Use

I recently had a conversation with a vendor about a particular feature in their product:

Me: “So you just added XYZ to the product?”
Them: “Yep.”
Me: “You know that no one uses it.”
Them: “Yep.”
Me: “But it’s on all the RFPs, isn’t it?”
Them: “Yep.”

I hear this scenario time and time again. Users ask for features they will never really use in RFPs, simply because they saw them on a competitor’s marketing brochure, or because “it sounds like it could be cool.” The vendors are then forced to either build them in, or just have their sales folks lie about it (it isn’t like you’ll notice). And then users complain about how bloated the products are.

This is a vicious, abusive loop of a relationship. It usually starts when one VERY LARGE client asks for something (which they may or may not use), or a VERY LARGE potential partner asks for some interoperability. It never works right, because no one really tests it outside the lab, and almost no one uses it anyway. But it’s on every damn RFP, so all the other vendors sigh in frustration and mock up their own versions. My favorite is DLP/DRM integration. Sure, I’m a firm believer that someday it will be extremely useful. But right now? A bunch of management dudes are throwing it into every RFP, probably after reading something from Jericho, and I’m not sure I know of a single production deployment.

Tired of bloat in your products? Ask for what you need, and then buy it. Stop building RFPs with cut and paste. Don’t order the 7-course meal when you only want PB&J. A nice, fulfilling, yummy PB&J that gets the job done. (No, this doesn’t excuse vendors when the important stuff doesn’t work, but seriously… if you’re going to bitch about bloat, stop demanding it!)

