Threatpost: What Have We Learned in 2012

2012: What Have We Learned:

The biggest shift in 2012 was the emergence of state-sponsored malware and targeted attacks as major factors. The idea of governments developing and deploying highly sophisticated malware is far from new. Such attacks have been going on for years, but they’ve mainly stayed out of the limelight. Security researchers and intelligence analysts have seen many of these attacks, targeting both enterprises and government agencies, but they were almost never discussed openly and were not something that showed up on the front page of a national newspaper.

A good read by Dennis Fisher, but I have a slightly different take. State-level cyberattacks definitely “broke out” in 2012, but I think the bigger lesson is that pretty much every organization finally realized that signature-based AV isn’t very helpful. Some of this is related to what China has been up to, but not all of it. Over the past year I haven’t talked to a single large organization, or many medium-sized ones, that isn’t struggling with malware. I couldn’t say that on December 31, 2011.


The New York Times on Antivirus

Outmaneuvered at Their Own Game, Antivirus Makers Struggle to Adapt:

The antivirus industry has a dirty little secret: its products are often not very good at stopping viruses. Consumers and businesses spend billions of dollars every year on antivirus software. But these programs rarely, if ever, block freshly minted computer viruses, experts say, because the virus creators move too quickly.

Everyone in security knows this, but it is a bigger deal when the mainstream press starts covering it. That said, too much of the article is a love song to various vendors who still have plenty to prove.


The CloudSec Chicken or the DevOps Egg?

I am on a plane headed home after a couple days of business development meetings in Northern California, and I am starting to notice a bit of a chasm in the cloud security world.

Companies, for good reason, tend to be wary of investing in new products and features before they smell customer demand (the dream-build-pray contingent exempted). The winners of the game invest just enough ahead of the curve that they don’t lose out too badly to a competitor, but they don’t pay too much for shiny toys on the shelf. Or they wait and buy a startup.

This is having an interesting inhibiting effect on the security industry, particularly in terms of the cloud. Security companies tend to sell to security buyers. Security buyers, as a group, are not overly informed on the nitty-gritty details of cloud security and operations, so demand from traditional security buying centers is somewhat limited.

Dev and Ops, however, are deep in the muck of cloud. They are the ones tasked with building and maintaining this infrastructure. They buy from different vendors and have different priorities, but are often still tasked with meeting security policy requirements (if they exist). They have the knowledge and tools, and in many cases (such as identity, access, and entitlement management) the implementation ball is in their court.

The result is that Dev and Ops are the ones spending on cloud management tools, many of which include security features. Security vendors aren’t necessarily seeing these deals, and thus the demand. Their sales forces are also poorly aligned to talk to the right buying centers, in the right language, which inhibits opportunities. Because they don’t see the opportunity, they don’t have the motivation to build solutions. It’s better to cloudwash, spin the marketing brochures, and wait.

My concern is that we will see more security functionality pushed into the DevOps tool sets. Not that I care who is selling and buying as long as the job gets done, but my suspicion is that this is inhibiting at least some of the security development we need as cloud adoption continues and we start moving into more advanced deployment scenarios. There are certainly some successes out there, but especially on the public cloud and the programmatic/software defined security side, advancement is lacking (specifically more API support for security automation and orchestration).

There are reasonable odds that both security teams and security vendors will fall behind, and there are some things DevOps simply will not do, which may result in a cloud security gap – as we have seen in application security and other fast-moving areas that broke ‘traditional’ models. It will probably also mean missed opportunities for some security vendors, especially as infrastructure vendors eat their lunch.

This isn’t an easy problem for the vendors to solve – they need to tap into the right buying centers and align sales forces before they will see enough demand – and their tools will need to offer more than pure security to appeal to the DevOps problem space. The problem is easier for security pros: educate yourself on cloud, understand the technical nuances and differences from traditional infrastructure and operating models, and get engaged with DevOps beyond setting policies that break with operational realities.

Or maybe I’m just bored on an airplane, and spent too much time driving rental cars the past few days.
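As a rough illustration of the kind of programmatic, software-defined security that tends to live with DevOps rather than with traditional security buying centers, here is a minimal sketch of "security as code". It assumes AWS and the boto3 SDK, neither of which is named in the post; it is an example of the category, not a recommendation:

```python
# Hypothetical example: a small "security as code" check of the sort DevOps
# teams script today, using the AWS boto3 SDK (an assumption for illustration).
import boto3

def find_world_open_ingress(region="us-east-1"):
    """Flag security group rules that allow inbound traffic from 0.0.0.0/0."""
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append({
                        "group_id": group["GroupId"],
                        "port": rule.get("FromPort", "all"),
                    })
    return findings

if __name__ == "__main__":
    for finding in find_world_open_ingress():
        print(f"World-open ingress: {finding['group_id']} port {finding['port']}")
```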


Friday Summary: December 13, 2012—You, Me, and Twitter

I have an on-again/off-again, love/hate relationship with Twitter. Those of you who follow me might have noticed I suddenly went from barely posting to fully re-engaging with the community. Sometimes I find myself getting fed up with the navel gazing of the echo chamber, as we seem to rehash the same issues over and over again, looking for grammatical and logical gotchas in 140 characters. Twitter lacks context and nuance, and so all too easily degrades into little more than a political talk show. When I’m in a bad mood, or am drowning at work, it’s one of the first things to go.

But Twitter also plays a powerful, positive role in my life. It connects me to people in a unique manner unlike any other social media. As someone who works at home alone, Twitter is my water cooler, serving up personal and professional interactions across organizational and geographic boundaries. It isn’t a substitute for human proximity, but it satisfies part of that need while providing a stunning scope and scale. Twitter, for me, isn’t a substitute for physical socialization, but is instead an enhancer that extends and augments our reach. When a plane disgorges me in some foreign city, any city, it is Twitter that guarantees I can find someone to have a beer or coffee with. It’s probably good that it wasn’t invented until I was a little older, a little more responsible, and a lot married.

As a researcher it is also one of the most powerful tools in the arsenal. Need a contact at a company? Done. Have a question on some obscure aspect of security or coding? Done. Need to find some references using a product? Done. It’s a real-time asynchronous peer network – which is why it is so much better for this stuff than LinkedIn or Facebook.

But as a professional, and technically an executive (albeit on a very small scale), Twitter challenges me to decide where to draw the line between personal and professional. Twitter today is as much, or more, a media tool as a social network. It is an essential outlet for our digital personas, and plays a critical role in shaping public perceptions. This is as true for a small-scale security analyst as for the Hollywood elite or the Pope. What we tweet defines what people think of us, like it or not.

For myself I made the decision a long time ago that Twitter should reflect who I am. I decided on honesty instead of a crafted facade. This is a much bigger professional risk than you might think. I regularly post items that could offend customers, prospects, or anyone listening. It also reveals more about me than I am sometimes comfortable with in public. For example, I know my tweet stream is monitored by PR and AR handlers from companies of all sizes. They now know my propensity for foul language, the trials and tribulations of my family life, my favorite beers, health and workout histories, travel schedules, and more. I don’t put my entire life up there, but that’s a lot more than I want in an analyst database (yes, they exist). One day Twitter will help me fill a cancelled meeting on a business development trip, and the next it will draw legal threats or lose me a deal.

Tweets also have a tendency to reflect what’s on my mind at a point in time, but completely out of context. Take this morning for example: I tweeted out my frustration at the part of the industry and community that spends inordinate time knocking others down in furtherance of its own egos and agendas. But I failed to capture the nuance of my thought, and the tweet unfortunately referred to the entire industry. That wasn’t my intention, and I tried to clarify, but additional context is a poor substitute for initial clarity.

My choice was to be honest or crafted. Either Twitter reflects who I am, or I create a digital persona not necessarily aligned with my real self. I decided I would rather reveal too much about who I am than play politician and rely on a ‘managed’ image. Twitter is never exactly who I am, but neither is any form of writing or public interaction.

This explains my relationship with Twitter. It reflects who I am, and when I’m down and out I see (and use) Twitter as an extension of my frustration. When I’m on top, Twitter is a source of inspiration and connection. It really isn’t any different than physical social interaction. As an introvert, when I’m in a bad mood, the last thing I want is to sit in a crowded room listening to random discussions. When I’m flying high (metaphorically – I’m not into that stuff despite any legalization) I have no problem engaging in spirited debate on even the most inane subjects, without concern for the consequences. For me, Twitter is an extension of the real world, bringing the same benefits and consequences.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post on Big Data Security Recommendations.
  • Rich quoted on DLP at TechTarget.

Favorite Securosis Posts

  • Mike Rothman (and David Mortman): The CloudSec Chicken or the DevOps Egg. I had a very similar conversation regarding the impact of SDN on network security this week. It’s hard to balance being ahead of the market and showing ‘thought leadership’ against building something the market won’t like. Most of the network security players are waiting for VMware to define the interfaces and interactions before they commit to much of anything.
  • Adrian Lane: Can we effectively monitor big data? Yes, it’s my post, but I think DAM needs to be re-engineered to accommodate big data.
  • Rich: Building an Early Warning System: Deploying the EWS. Mike is taking a very cool approach with this series.

Other Securosis Posts

  • Selecting an Enterprise Key Manager.
  • Incite 12/12/2012: Love the Grind.
  • Building an Early Warning System: Determining


Selecting an Enterprise Key Manager

Now that you have a better understanding of major key manager features and options, we can spend some time outlining the selection process. This largely comes down to understanding your current technical and business requirements (including any pesky compliance requirements) and trying to plan ahead for future needs. Yes, this is all incredibly obvious, but with so many different options out there – from HSMs with key management to cloud key management services – it’s too easy to get distracted by the bells and whistles and miss some core requirements.

Determine current project requirements

Nobody buys a key manager for fun. It isn’t like you find yourself with some spare budget and go, “Hey, I think I want a key manager”. There is always a current project driving requirements, so that’s the best place to start. We will talk about potential future requirements, but never let them interfere with your immediate needs. We have seen projects get diverted or derailed as the selection team tries to plan for every contingency, then buys a product that barely meets current needs, which quickly causes major technical frustration.

  • List existing systems, applications, and services involved: The best place to start is to pull together a list of the different systems, application stacks, and services that need key management. This could be as simple as “Oracle databases” or as complex as a list of different databases, programming languages used in applications, directory servers, backup systems, and other off-the-shelf application stacks.
  • Determine required platform support: With your list of systems, applications, and services in hand, determine exactly what platform support you need. This includes which programming languages need API support, any packaged applications needing support (including versions), and even potentially the operating systems and hardware involved. It should also include the encryption algorithms to support. Be as specific as possible, because you will send this list to prospective vendors to ensure their product can support your platforms.
  • Map out draft architectures: Not only is it important to know which technical platforms need support, you also need an idea of how you plan to connect them to the key manager. For example, there is a big difference between connecting a single database to a key manager in the same data center and supporting a few hundred cloud instances connected back to a key manager in a hybrid cloud scenario. If you plan to deploy one or more key managers in an enterprise deployment, with multiple different platforms, in different locations within your environment, you will want to be sure you can get all the appropriate bits connected – and you might need features such as multiple network interfaces.
  • Calculate performance requirements: It can be difficult to accurately calculate performance needs before you start testing, but do your best. The two key pieces to estimate are the number of key operations per second and your network requirements (both speed and number of concurrent network connections/sockets). If you plan to use a key manager that also performs cryptographic operations, include those performance requirements. (A rough back-of-the-envelope sketch appears at the end of this post.)
  • List any additional encryption support required: We have focused almost exclusively on key management, but as we mentioned, some key managers also support a variety of other crypto operations. If you plan to use the tool for encryption, decryption, signing, certificate management, or other functions, make a list and include any detailed requirements like the ones above (platform support, performance, etc.).
  • Determine compliance, administration, reporting, and certification requirements: This is the laundry list of any compliance requirements (such as which operations require auditing), administrative and systems management features (backup, administrator separation of duties, etc.), reporting needs (such as pre-built PCI reports), and certifications (nearly always FIPS 140). We have detailed these throughout this series, which you can use as a guide when you build your checklist.
  • List additional technical requirements: By now you will have a list of your core requirements, but there are likely additional technical features on your required or optional list.

And last but not least, spend some time planning for the future. Check with other business units to see if they have key management needs or are already using a key manager. Talk to your developers to see if they might need encryption and key management for an upcoming project. Review any existing key management, especially in application stacks, that might benefit from a more robust solution. You don’t want to list out every potential scenario to the point where no product can possibly meet all your needs, but it is useful to take a step back before you put together an RFP to see if you can take a more strategic approach.

Write your draft RFP

Try to align your RFP to the requirements collected above. There is often a tendency to ask for every feature you have ever seen in product demos, but this frequently results in bad RFP responses that make it even harder to match vendor responses against your priorities. (That’s a polite way of saying that an expansive cookie-cutter RFP request is likely to result in a cookie-cutter response rather than one tailored to your needs.) Enough of the soapbox – let’s get into testing.

Testing

Integrating a key manager into your operations will either be incredibly easy or incredibly tricky, depending on everything from network topography to applications involved to overall architecture. For any project of serious scale it is absolutely essential to test compatibility and performance before you buy.

Integration Testing

The single most crucial piece to test is whether the key manager will actually work with whatever application or systems you need to connect it with. Ideally you will test this in your own environment before you buy, but this isn’t always possible. Not all vendors can provide test systems or licenses to all potential customers, and in many situations you will need services support or even custom programming to integrate the key manager. Here are some suggestions and expectations for how to test and minimize your risk: If you are deploying the
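To make the “calculate performance requirements” step above concrete, here is a minimal back-of-the-envelope sketch. All workload names and numbers are hypothetical placeholders, not figures from the original post; substitute your own measurements:

```python
# Hypothetical sizing sketch: estimate peak key operations per second and
# concurrent connections a key manager would need to handle. All figures
# below are illustrative assumptions.

workloads = [
    # (name, number of clients, key ops per client per hour at peak)
    ("database TDE key retrieval", 12, 60),
    ("backup system key requests", 4, 120),
    ("cloud app instances", 300, 30),
]

peak_ops_per_sec = sum(
    clients * ops_per_hour for _, clients, ops_per_hour in workloads
) / 3600.0

# Assume each client holds one persistent connection, plus 25% headroom.
concurrent_connections = int(sum(clients for _, clients, _ in workloads) * 1.25)

print(f"Estimated peak key ops/sec: {peak_ops_per_sec:.1f}")
print(f"Estimated concurrent connections: {concurrent_connections}")
```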


Enterprise Key Manager: Management Features

It’s one thing to collect, secure, and track a wide range of keys; doing so in a useful, manageable manner is what demonstrates the differences between key management products. Managing disparate keys from distributed applications and systems, for multiple business units, technical silos, and IT management teams, is more than a little complicated. It involves careful segregation and management of keys; multiple administrative roles; the ability to organize and group keys, users, systems, and administrators; appropriate reporting; and an effective user interface to tie it all together.

Role management and separation of duties

If you are managing more than a single set of keys for a single application or system you need a robust role-based access control (RBAC) system – not only for client access, but for the administrators managing the system. It needs to support ironclad separation of duties, and multiple levels of access and administration.

An enterprise key manager should support multiple roles, especially multiple administrative roles. Regular users never directly access the key manager, but system and application admins, auditors, and security personnel may all need some level of access at different points of the key management lifecycle. For instance:

  • A super-admin role for administration of the key manager itself, with no access to the actual keys.
  • Limited administrator roles that allow access to subsets of administrative functions such as backup and restore, creating new key groups, and so on.
  • An audit and reporting role for viewing reports and audit logs. This may be further subsetted to allow access only to certain audit logs (e.g., a specific application).
  • System/application manager roles for individual system and application administrators who need to generate and manage keys for their respective responsibilities.
  • Sub-application manager roles which only have access to a subset of the rights of a system or application manager (e.g., create new keys only, but not view keys).
  • System/application roles for the actual technical components that need access to keys.

Any of these roles may need access to a subset of functionality, and be restricted to groups or individual key sets. For example, a database security administrator for a particular system gains full access to create and manage keys only for the databases associated with that system, but cannot manage audit logs, and cannot create or access keys for any other applications or systems. Ideally you can build an entitlement matrix where you take a particular role, then assign it to a specific user and group of keys – such as assigning the “application manager” role to “user bob” for the group “CRM keys”. (A small sketch of such a matrix follows at the end of this post.)

Split administrative rights

There almost always comes a time when administrators need deeper access to perform highly sensitive functions or even directly access keys. Restoring from backup, replication, rotating keys, revoking keys, and accessing keys directly are functions with major security implications which you may not want to trust to a single administrator. Most key managers allow you to require multiple administrators to approve these functions, to limit the ability of any one administrator to compromise security. This is especially important when working with the master keys for the key manager itself, which are needed for tasks including replication and restoration from backup. Functions which involve the master keys are often handled through a split key. Key splitting provides each administrator with a portion of a key, all or some of which are required. This is often called “m of n”, since you need m sub-keys out of a total of n in existence to perform an operation (e.g., 3 of 5 admin keys). These keys or certificates can be stored on a smart card or similar security device for better security.

Key grouping and segregation

Role management covers users and their access to the system, while key groups and segregation manage the objects (keys) themselves. No one assigns roles to individual keys – you assign keys to groups, and then parcel out rights from there (as we described in some examples above). Assigning keys and collections of keys to groups allows you to group keys not only by system or application (such as a single database server), but for entire collections or even business units (such as all the databases in accounting). These groups are then segregated from each other, and rights are assigned per group. Ideally groups are hierarchical, so you can group all application keys, then subset application keys by application group, and then by individual application.

Auditing and reporting

In our compliance-driven security society, it isn’t enough to merely audit activity. You need fine-grained auditing that is then accessible through customized reports for different compliance and security needs. Types of activity to audit include:

  • All access to keys.
  • All administrative functions on the key manager.
  • All key operations – including generating or rotating keys.

A key manager is about as security-sensitive as it gets, so everything that happens to it should be auditable. That doesn’t mean you will want to track every time a key is sent to an authorized application, but you should have the ability when you need it. Some tools support

Reporting

Raw audit logs aren’t overly useful on a day-to-day basis, but a good reporting infrastructure helps keep the auditors off your back while highlighting potential security issues. Key managers may include a variety of pre-set reports and support creation of custom reports. For example, you could generate a report of all administrator access (as opposed to application access) to a particular key group, or one covering all administrative activity in the system. Reports might be run on a preset schedule, emailing summaries of activity to the appropriate stakeholders on a regular basis.

User interface

In the early days of key management everything was handled through command line interfaces. Most current systems implement graphical user interfaces (often browser based) to improve usability. There are massive differences in look and feel across products, and a GUI that fits the workflow of your staff can save a great
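As an illustration of the entitlement-matrix idea described above (role, user, key group), here is a minimal sketch. The role names, permissions, and check logic are simplified assumptions for illustration, not a description of any particular product:

```python
# Minimal sketch of a role/user/key-group entitlement matrix.
# Role names and permissions are illustrative assumptions only.

ROLE_PERMISSIONS = {
    "super-admin":         {"manage_system"},                         # no key access
    "application-manager": {"create_key", "rotate_key", "read_key"},
    "sub-app-manager":     {"create_key"},                            # create only, no read
    "auditor":             {"view_audit_logs"},
}

# Entitlements: (user, role, key group) assignments, e.g. the post's example of
# the "application manager" role for "user bob" on the "CRM keys" group.
ENTITLEMENTS = [
    ("bob",   "application-manager", "CRM keys"),
    ("alice", "auditor",             "CRM keys"),
    ("carol", "sub-app-manager",     "accounting databases"),
]

def is_allowed(user: str, action: str, key_group: str) -> bool:
    """Return True if any of the user's roles on this key group permits the action."""
    return any(
        action in ROLE_PERMISSIONS.get(role, set())
        for (u, role, group) in ENTITLEMENTS
        if u == user and group == key_group
    )

print(is_allowed("bob", "rotate_key", "CRM keys"))              # True
print(is_allowed("carol", "read_key", "accounting databases"))  # False: create only
```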


Enterprise Key Managers: Technical Features, Part 2

Our last post covered two of the main technical features of an enterprise key manager: deployment and client access options. Today we will finish up with the rest of the technical features – including physical security, standards support, and a discussion of Hardware Security Modules (HSMs).

Key Generation, Encryption, and Cryptographic Functions

Due to their history, some key managers also offer cryptographic functions, such as:

  • Key generation
  • Encryption and decryption
  • Key rotation
  • Digital signing

Key generation and rotation options are fairly common because they are important parts of the key management lifecycle; encryption and decryption are less common. If you are considering key managers that also perform cryptographic functions, you need to consider additional requirements, such as:

  • How are keys generated and seeded?
  • What kinds of keys and cryptographic functions are supported? (Take a look at the standards section a bit later.)
  • Performance: how many cryptographic operations of different types can be performed per second?

But key generation isn’t necessarily required – assuming you only plan to use the tool to manage existing keys – perhaps in combination with separate HSMs.

Physical Security and Hardening

Key managers deployed as hardware appliances tend to include extensive physical security, hardening, and tamper resistance. Many of these are designed to meet government and financial industry standards. The products come in sealed enclosures designed to detect attempts to open or modify them. They only include the external ports needed for core functions, without (for example) USB ports that could be used to insert malware. Most include one or more smart card ports to insert physical keys for certain administrative functions. For example, they could require two or three administrator keys to allow access to more secure parts of the system (and yes, this means physically walking up to the key manager and inserting cards, even if the rest of administration is through a remote interface). All of these features combine to ensure that the key manager isn’t tampered with, and that data is still secure even if the manager is physically stolen.

But physical hardening isn’t always necessary – or we wouldn’t have software and virtual machine options. Those options are still very secure, and the choice all comes down to the deployment scenarios you need to support. The software and virtual appliances also include extensive security features – just nothing tied to the hardware enclosure or other specialized hardware. Any appliance claiming physical security should meet the FIPS 140-2 standard specified by the United States National Institute of Standards and Technology (NIST), or the regional equivalent, which includes requirements for both the software and hardware security of encryption tools.

Encryption Standards and Platform Support

As the saying goes, the wonderful thing about standards is there are so many to choose from. This is especially true in the world of encryption, which lives and breathes based on a wide array of standards. An enterprise key manager needs to handle keys from every major encryption algorithm, plus all the communications and exchange standards (and proprietary methods) needed to actually manage keys outside the system or service where they are stored. As a database, technically storing the keys for different standards is easy. On the other hand, supporting all the various ways of managing keys externally, for both open and proprietary products, is far more complex. And when you add in requirements to generate, rotate, or change keys, life gets even harder. Here are some of the feature options for standards and platform support:

  • Support for storing keys for all major cryptographic standards.
  • Support for key communications standards and platforms to exchange keys, which may include a mix of proprietary implementations (e.g., a specific database platform) and open standards (e.g., the evolving Key Management Interoperability Protocol (KMIP)).
  • Support for generating keys for common cryptographic standards.
  • Support for rotating keys in common applications.

It all comes down to having a key manager that supports the kinds of keys you need, on the types of systems that use them.

System Maintenance and Deployment Features

As enterprise tools, key managers need to support a basic set of core maintenance features and configuration options.

Backup and Restore

Losing an encryption key is worse than losing the data. When you lose the key, you effectively lose access to every version of that data that has ever been protected. And because we try to avoid unencrypted copies of encrypted data, you are likely to lose every version of the data, forever. Enterprise key managers need to handle backups and restores in an extremely secure manner. Usually this means encrypting the entire key database (including all access control and authorization rules) in the backup. Additionally, backups are usually encrypted with multiple or split keys which require more than one administrator to access or restore. Various products use different implementation strategies to handle secure incremental backups, so you can back up the system regularly without destroying system performance. (A small sketch of a split-key backup scheme follows at the end of this post.)

High Availability and Load Balancing

Some key managers might only be deployed in a limited fashion, but generally these tools need to be available all the time, every time, sometimes to large volumes of traffic. Enterprise key managers should support both high availability and load balancing options to ensure they can meet demand. Another important high-availability option is key replication: the process of synchronizing keys across multiple key managers, sometimes in geographically separated data centers. Replication is always tricky and needs to scale effectively to avoid either loss of a key, or conflicts in case of a breakdown during rekeying or new key issuance.

Hierarchical Deployments

There are many situations in which you might use multiple key managers to handle keys for different application stacks or business-unit silos. Hierarchical deployment support enables you to create a “manager of managers” to enforce consistent policies across these individual-system boundaries and throughout distributed environments. For example, you might use multiple key managers in multiple data centers to generate new keys, but have those managers report back to a central master manager for auditing and reporting.

Tokenization

Tokenization is an
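To illustrate the split-key backup idea mentioned above, here is a minimal sketch using Python’s cryptography library and a simple two-of-two XOR split of the backup key. This is an assumption about how such a scheme could work, not how any specific key manager implements it; real products generally use m-of-n secret sharing (e.g., Shamir’s scheme) rather than a two-of-two split:

```python
# Minimal sketch: encrypt a key-database backup so that restoring it requires
# both administrators' key shares. Two-of-two XOR split for illustration only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_backup(key_database: bytes):
    backup_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(backup_key).encrypt(nonce, key_database, None)

    # Split the backup key: share_a is random, share_b = key XOR share_a.
    share_a = os.urandom(len(backup_key))
    share_b = bytes(a ^ b for a, b in zip(backup_key, share_a))
    return ciphertext, nonce, share_a, share_b  # give each admin one share

def restore_backup(ciphertext: bytes, nonce: bytes, share_a: bytes, share_b: bytes) -> bytes:
    backup_key = bytes(a ^ b for a, b in zip(share_a, share_b))
    return AESGCM(backup_key).decrypt(nonce, ciphertext, None)

# Usage: both administrator shares are required to recover the key database.
blob, nonce, admin1_share, admin2_share = make_backup(b'{"keys": "...serialized..."}')
assert restore_backup(blob, nonce, admin1_share, admin2_share) == b'{"keys": "...serialized..."}'
```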


Enterprise Key Manager Features: Deployment and Client Access Options

Key Manager Technical Features

Due to the different paths and use cases for encryption tools, key management solutions have likewise developed along varied paths, reflecting their respective origins. Many evolved from Hardware Security Modules (HSMs), some were built from the ground up, and others are offshoots of key managers developed for a single purpose, such as full disk or email encryption. Most key managers include a common set of base features, but there are broad differences in implementation, support for deployment scenarios, and additional features. The next few posts focus on technical features, followed by some on management features (such as user interface), before we conclude with the selection process.

Deployment options

There are three deployment options for enterprise key managers:

  • Hardware appliance
  • Software
  • Virtual appliance

Let’s spend a moment on the differences between these approaches.

Hardware Appliance

The first key managers were almost all appliances – most frequently offshoots of Hardware Security Modules (HSMs). HSMs are dedicated hardware tools for the management and implementation of multiple cryptographic operations, and are in wide use (especially in financial services), so key management was a natural evolution. Hardware appliances have two main advantages:

  • Specialized processors improve security and speed up cryptographic operations.
  • Physical hardening provides tamper resistance.

Some non-HSM-based key managers also started as hardware appliances, especially due to customer demand for physical hardening. These advantages are still important for many use cases, but within the past five to ten years the market segment of users without hardening requirements has expanded and matured. Key management itself doesn’t necessarily require encryption acceleration or hardware chains of trust. Physical hardening is still important, but not mandatory in many use cases.

Software

Enterprise key managers can also be deployed as software applications on your own hardware. This provides more flexibility in deployment options when you don’t need additional physical security or encryption acceleration. Running the software on commodity hardware may also be cheaper. Aside from cost savings, key management deployed as software can offer more flexibility – such as multiple back-end database options, or the ability to upgrade hardware without having to replace the entire server. Of course software running on commodity server hardware is less locked down than a secure hardware appliance, but – especially running on a dedicated, properly configured server – it is more than sufficiently secure for many use cases.

Virtual Appliance

A virtual appliance is a pre-built virtual machine. It offers some of the deployment advantages of both hardware appliances and software. Virtual appliances are pre-configured, so there is no need to install software components yourself. Their bundled operating systems are generally extremely locked down and tuned to support the key manager. Deployment is similar to a hardware appliance – you don’t need to build or secure a server yourself – but as a virtual machine you can deploy it as flexibly as software (assuming you have a suitable virtualization infrastructure). This is a great option for distributed or cloud environments with an adequate virtual infrastructure.

That’s a taste of the various advantages and disadvantages, and we will come back to this choice again for the selection process.

Client access options

Whatever deployment model you choose, you need some way of getting the keys where they need to be, when they need to be there, for cryptographic operations. Remember, for this report we are always talking about using an external key manager, which means a key exchange is always required. Clients (whatever needs the key) usually need support for the following core functions of a complete key management lifecycle:

  • Key generation
  • Key exchange (gaining access to the key)
  • Additional key lifecycle functions, such as expiring or rotating a key

Depending on what you are doing, you will allow or disallow these functions under different circumstances. For example, you might allow key exchange for a particular application, but not allow it any other management functions (such as generation and rotation). Access is managed in one of three ways, and many tools support more than one:

  • Software agent: A dedicated agent handles the client’s side of the key functions. These are generally designed for specific use cases – such as supporting native full disk encryption, specific backup software, various database platforms, and so on. Some agents may also perform cryptographic functions or provide additional hardening, such as wiping the key from memory after each use.
  • Application Programming Interfaces: Many key managers are used to handle keys from custom applications. An API allows you to access key functions directly from application code. Keep in mind that APIs are not all created equal – they vary widely in platform support, programming languages supported, the simplicity or complexity of the API calls, and the functions accessible via the API. (A hypothetical API call is sketched at the end of this post.)
  • Protocol & standards support: The key manager may support a combination of proprietary and open protocols. Various encryption tools support their own protocols for key management, and like a software agent, the key manager may include support – even if it is from a different vendor. Open protocols and standards are also emerging but not yet in wide use, and may be supported.

That’s it for today. The next post will dig into the rest of the core technical functions, including a look at the role of HSMs.
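As an illustration of the API-based client access described above, here is a minimal sketch of what retrieving a key over a REST-style API might look like. The endpoint URL, request fields, response fields, and token are hypothetical placeholders; every key manager exposes its own API (or a standard such as KMIP):

```python
# Hypothetical sketch of a client fetching a data-encryption key from an
# external key manager over a REST-style API. The URL, fields, and token
# below are placeholders, not a real product's API.
import base64
import requests

KEY_MANAGER_URL = "https://keymanager.example.internal/api/v1/keys/retrieve"  # hypothetical
API_TOKEN = "replace-with-client-credential"                                  # placeholder

def fetch_key(key_group: str, key_name: str) -> bytes:
    """Request a key by group and name; return the raw key bytes."""
    response = requests.post(
        KEY_MANAGER_URL,
        json={"key_group": key_group, "key_name": key_name},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        verify=True,   # validate the key manager's TLS certificate
        timeout=5,
    )
    response.raise_for_status()
    return base64.b64decode(response.json()["key_material"])

# Example: an application pulling the key it needs for a database field.
# key = fetch_key("CRM keys", "customer-ssn-encryption-key")
```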


Friday Summary: November 16, 2012

A few weeks ago I was out in California, transferring large sums of my personal financial worth to a large rodent. This was the third time in about as many years I engaged in this activity – spending a chunk of my young children’s college fund on churros, overpriced hotel rooms, and tickets for the privilege of walking in large crowds to stand in endless lines. As a skeptical sort of fellow, I couldn’t help but ask myself why the entire experience makes me So. Darn. Happy. Every. Single. Time.

When you have been working in security for a while you tend to become highly attuned to the onslaught of constant manipulation so endemic to our society. The constant branding, marketing lies, and subtle (and not-so-subtle) abuse of psychological cues to separate you from every penny you can borrow on non-existent assets – at least that’s how it works here in Arizona. When I walk into a Disney park I know they fill the front with overpriced balloons, time the parades and events to distribute the crowd, and conveniently offer a small token of every small experience, all shippable to your home for a minor fee. Even with that knowledge, I honestly don’t give a crap and surrender myself to the experience.

This begs the question: why don’t I get as angry with Disney as I do with the FUD from security vendors? It certainly isn’t due to the smiles of my children – I have been enjoying these parks since before I even conceived (get it?) of having kids. And it isn’t just Disney – I also tend to disable the skepticnator for Jimmy Buffett, New Zealand, and a few (very few) other aspects of life.

The answer comes down to one word: value. Those balloons? We bought one once… and the damn thing didn’t lose a mole of helium molecules over the 5 days we had it before giving it away to some incoming kid while departing our hotel. I think her parents hate us now. As expensive as Disney is, the parks (and much of the rest of the organization) fully deliver value for dollar. You might not agree, but that isn’t my problem. The parks are the best maintained in the business. The attention to detail goes beyond nearly anything you see anywhere else. For example, at Disneyland they update the Haunted Mansion with a whole Nightmare Before Christmas theme. They don’t merely add some external decorations and window dressing – they literally replace the animatronics inside the ride between Halloween and Christmas. It’s an entirely different experience. Hop on Netflix and compare the animation from nearly any other kids channel to the Disney stuff – there is a very visible quality difference. If you have a kid of the right age, there is no shortage of free games on the website. Download the Watch Disney app for your iDevice and they not only rotate the free shows, but they often fill it with some of the latest episodes and the holiday ones kids go nuts for. I am not saying they get everything right, but overall you get what you pay for, even if it costs more than some of the competition. And I fully understand that it’s a cash extraction machine.

Buffett is the same way: I have never been to a bad concert, and even if his branded beer and tequila are crap, I get a lot of enjoyment value for each dollar I pay. Even after I sober up. It seems not many companies offer this sort of value. For example, I quite like my Ford, but it is crystal clear that dealerships ‘optimize’ by charging more, doing less, and insisting that I am getting my money’s worth despite any contradictory evidence.

How many technology vendors offer this sort of value? I think both Apple and Amazon are good examples on different ends of the cost spectrum, but what percentage of security companies hit that mark? To be honest, it’s something I worry about for Securosis all the time – value is something I believe in, and when you’re inside the machine it’s often hard to know if you are providing what you think.

With another kid on the way the odds are low we’ll be getting back to Disney, or Buffett, any time soon. I suppose that’s good for the budget, but to be honest I look forward to the day the little one is big enough to be scared by a six foot rat in person.

On to the Summary: Once again our writing volume is a little low due to extensive travel and end-of-year projects…

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mr. Mortman on cloud security at VentureBeat.
  • Adrian gets a nod on big data security.

Favorite Securosis Posts

  • Adrian Lane & David Mortman: Incite 11/7/2012: And the winner is… Math.
  • Mike Rothman: Defending Against DoS Attacks [New Paper] and Index of Posts. Yes, it’s a paper I wrote, and that makes me a homer. But given the increasing prevalence of DoS attacks, it’s something you should get ahead of by reading the paper.

Other Securosis Posts

  • Implementing and Managing Patch and Configuration Management: Leveraging the Platform.
  • Implementing and Managing Patch and Configuration Management: Configuration Management Operations.
  • Implementing and Managing Patch and Configuration Management: Patch Management Operations.
  • Implementing and Managing Patch and Configuration Management: Defining Policies.
  • Building an Early Warning System: Internal Data Collection and Baselining.
  • Building an Early Warning System: The Early Warning Process.
  • Incite 11/14/2012: 24 Hours.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL [New Paper].

Favorite Outside Posts (a few extras because we missed last week)

  • Rich: Where is Information Security’s Nate Silver?
  • David Mortman: Maker of Airport Body Scanners Suspected of Falsifying Software Tests.
  • Dave Lewis: Are you scared yet? Why cloud security keeps these 7 execs up at night.
  • Mike Rothman: Superstorm Sandy Lessons: 100% Uptime Isn’t Always Worth It. Another key question is how much are you willing to pay to


New Paper: Pragmatic Key Management for Data Encryption

Hey everyone, I am pleased to finally announce the release of Pragmatic Key Management for Data Encryption. If you didn’t follow the posts that led to this paper, the focus is on key management strategies for data encryption – rather than on certificate management, signing, or other crypto operations. I was able to narrow things down to four key strategies, and I also spend a little time talking about data encryption systems, as opposed to crypto operations (hashing, algorithms, etc.). You can visit the paper’s permanent home, and the direct download is: Pragmatic Key Management for Data Encryption (pdf)


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.