Gunnar Peterson Joins Securosis As a Contributing Analyst

We are ridiculously excited to announce that Gunnar Peterson is the newest member of Securosis, joining us as a Contributing Analyst. For those who don’t remember, our Contributor program is our way of getting to work with extremely awesome people without asking them to quit their day jobs (contributors are full members of the team and covered under our existing contracts/NDAs, but aren’t full time). Gunnar joins David Mortman and officially doubles our Contributing Analyst team.

Gunnar’s primary coverage areas are identity and access management, large enterprise applications, and application development. Plus anything else he wants, because he’s wicked smart. Gunnar can be reached at gpeterson at securosis.com on top of his existing emails/Skype/etc.

And now for the formal bio:

Gunnar Peterson is a Managing Principal at Arctec Group. He is focused on distributed systems security for large mission-critical financial, financial exchange, healthcare, manufacturer, and insurance systems, as well as emerging startups. Mr. Peterson is an internationally recognized software security expert, frequently published, an Associate Editor for IEEE Security & Privacy Journal on Building Security In, a contributor to the SEI and DHS Build Security In portal on software security, a Visiting Scientist at Carnegie Mellon Software Engineering Institute, and an in-demand speaker at security conferences. He maintains a popular information security blog at http://1raindrop.typepad.com.

Friday Summary: August 13, 2010

A couple days ago I was talking with the masters swim coach I’ve started working with (so I will, you know, drown less) and we got to that part of the relationship where I had to tell him what I do for a living. Not that I’ve ever figured out a good answer to that question, but I muddled through. Once he found out I worked in infosec he started ranting, as most people do, about all the various spam and phishing he has to deal with. Aside from wondering why anyone would run those scams (easily answered with some numbers), he started in on how much of a pain in the ass it is to do anything online anymore. The best anecdote was asking his wife why there were problems with their Bank of America account. She gently reminded him that the account is in her name, and the odds were pretty low that B of A would be emailing him instead of her. When he asked what he should do I made sure he was on a Mac (or Windows 7), recommended some antispam filtering, and confirmed that he or his wife check their accounts daily.

I’ve joked in the past that you need the equivalent of a black belt to survive on the Internet today, but I’m starting to think it isn’t a joke. The majority of my non-technical friends and family have been infected, scammed, or suffered fraud at least once. This is just anecdote, which is dangerous to draw assumptions from, but the numbers are clearly higher than people being mugged or having their homes broken into. (Yeah, false analogy – get over it). I think we only tolerate this for three reasons:

  • Individual losses are still generally low – especially since credit card losses to a consumer are so limited (low out of pocket).
  • Having your computer invaded doesn’t feel as intrusive as knowing someone was rummaging through your underwear drawer.
  • A lot of people don’t notice that someone is squatting on their computer… until the losses ring up.

I figure once things really get bad enough we’ll change. And to be honest, people are a heck of a lot more informed these days than five or ten years ago.

On another note, we are excited to welcome Gunnar Peterson as our latest Contributing Analyst! Gunnar’s first post is the IAM entry in our week-long series on security commoditization, and it’s awesome to already have him participating in research meetings.

And on yet another note, it seems my wife is more than a little pregnant. Odds are I’ll be disappearing for a few weeks at some random point between now and the first week of September, so don’t be offended if I’m slow to respond to email.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • The official Defcon Security Jam waffle iron is up for auction! Not only was this used by David Mortman to produce mouth-watering morsels of joy on stage, but Chris Hoff ensured the waffle iron attended the exclusive Ninja Networks party! (Proceeds benefit the EFF.)
  • Adrian on How to Protect Oracle Database Vault at Dark Reading.
  • Rich wrote an article on iOS security over at TidBITS.
  • Rich, Martin, and Zach on the Network Security Podcast.

Favorite Securosis Posts

  • Gunnar: Anton Chuvakin’s in-depth SIEM Use Cases. Written from a hands-on perspective, it covers core SIEM workflows including server user activity monitoring, tracking user actions across systems, firewall monitoring (security + network), malware protection, and web server attack detection. The use cases show the basic flows, and they are made more valuable by Anton’s closing comments, which address how SIEM enables incident response activities.
  • Adrian Lane: FireStarter: Why You Care about Security Commoditization. Maybe no one else liked it, but I did.
  • Mike Rothman: The Yin and Yang of Security Commoditization. Love the concept of “covering” as a metaphor for vendors not solving customer problems, but trying to do just enough to beat the competition. This was a great series.
  • Rich: Gunnar’s post on the lack of commoditization in IAM. A little backstory – I was presenting my commoditization thoughts at our internal research meeting, and Gunnar was the one who pointed out that some markets never seem to reach that point… which inspired this week’s series.

Other Securosis Posts

  • Gunnar Peterson Joins Securosis as a Contributing Analyst.
  • Incite 8/11/2010: No Goal!
  • Tokenization: Use Cases, Part 3.
  • iOS Security: Challenges and Opportunities.
  • Tokenization Topic Roundup.
  • When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV.
  • Commoditization and Feature Parity on the Perimeter.
  • Tokenization: Use Cases, Part 2.

Favorite Outside Posts

  • Adrian Lane: Researchers Hack Your Vehicle (again). Looks like the auto industry will continue making idiotic decisions regarding computers and control systems until they walk head-on into a major hack.
  • Mike Rothman: Fuel Not Powerpoint. From our newest contributing analyst Gunnar. Funny how in some industries a cool PowerPoint is not enough.
  • Pepper: Anatomy Of An Attempted Malware Scam. I’ve never thought much about ‘badvertising’, but I enjoyed this detective story.
  • Rich: National Geographic’s awesome story on DefCon. The reporter really captured the essence of the event.

Project Quant Posts

  • NSO Quant: Manage Firewall Process Revisited.
  • NSO Quant: Manage Firewall – Audit/Validate.
  • NSO Quant: Manage Firewall – Deploy.
  • NSO Quant: Manage Firewall – Test and Approve.
  • NSO Quant: Manage Firewall – Process Change Request.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Critical Updates for Windows, Flash Player.
  • Questions and Answers on the [iPhone] JailbreakMe Vulnerability.
  • Wireshark review.
  • RBS WorldPay ringleader being extradited to the US.
  • Illogical cloud positivism.
  • Google CEO says no anonymity on the web.
  • First clue to crack the Verizon DBIR contest.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes

FireStarter: Why You Care about Security Commoditization

This is the first in a series we will be posting this week on security markets. In the rest of this series we will look at individual markets, and discuss how these forces work, to help with buying decisions.

Catching up with recent news, Check Point has joined the crowd and added application control as a new option on their gateway products. Sound like you’ve heard this one before? That’s because this function was pioneered by Palo Alto, then added by Fortinet and even Websense (on their content gateways). Yet again we see multiple direct and indirect competitors converge on the same set of features.

Feature parity can be problematic, because it significantly complicates a customer’s ability to differentiate between solutions. I take a ton of calls from users who ask, “Should I buy X or Y?” – and I’m considerate enough to mute the phone so they don’t hear me flipping my lucky coin.

During last week’s Securosis research meeting we had an interesting discussion on the relationship between feature parity, commoditization, and organization size. In nearly any market – both security and others – competitors tend to converge on a common feature set rather than run off in different innovative directions. Why? Because that’s what the customers think they need. The first mover with the innovative feature makes such a big deal of it that they manage to convince customers they need the feature (and that first product), so competitors in that market must add the feature to compete.

Sometimes this feature parity results in commoditization – where prices decline in lockstep with the reduced differentiation – but in other cases there’s only minimal impact on price. By which I mean the real price, which isn’t always what’s advertised. What we tend to find is that products targeting small and mid-sized organizations become commoditized (prices and differentiation drop), while those targeting large organizations use feature parity as a sales, upgrade, and customer retention tool.

So why does this matter to the average security professional? Because it affects what products you use and how much you pay for them, and because understanding this phenomenon can make your life a heck of a lot easier.

Commoditization in the Mid-Market

First let’s define organization size – we define ‘mid’ as anything under about 5,000 employees and $1B in annual revenue. If you’re over $1B you’re large, but this is clearly a big bucket; very large tends to be over 50K employees. Mid-sized and smaller organizations tend to have more basic needs. This isn’t an insult – it’s just that the complexity of the environment is constrained by the size. I’ve worked with some seriously screwed up mid-sized organizations, but they still pale in comparison to the complexity of a 100K+ employee multinational.

This (relative) lack of complexity in the mid-market means that when faced with deciding among a number of competing products – unless your situation is especially wacky – you pick the one that costs less, has the easiest management interface (reducing the time you need to spend in the product), or simply strikes your fancy. As a result the mid-market tends to focus on the lowest cost of ownership: base cost + maintenance/support contract + setup cost + time to use. A new feature only matters if it solves a new problem or reduces costs. Settle down, mid-market folks! This isn’t an insult. We know you like to think you are different and special, but you probably aren’t.

Since mid-market customers have the same general needs and desire to save costs, vendors converge on the lowest common denominator feature set and shoot for volume. They may keep one-upping each other with prettier dashboards or new tweaks, but unless those result in filling a major need or reducing cost, they can’t really charge a lot more for them. Will you really pay more for a Coke than a Pepsi? The result is commoditization.

Not that commoditization is bad – vendors make it up in volume and lower support costs. I advise a ton of my vendor clients to stop focusing on the F100 and realize the cash cow once they find the right mid-market product fit. Life’s a lot easier when you don’t have 18-month sales cycles, and don’t have to support each F100 client with its own sales team and 82 support engineers.

Feature Parity in the Large Enterprise Market

This doesn’t really play out the same when playing with the big dogs. Vendors still tend to converge on the same feature sets, but it results in less overt downward price pressure. This is for a couple of reasons:

  • Larger organizations are more locked into products due to higher switching costs.
  • In such complex environments, with complicated sales cycles involving multiple competitors, the odds are higher that one niche feature or function will be critical for success, making effective “feature equivalence” much tougher for competitors.

I tend to see switching costs and inertia as the biggest factor, since these products become highly customized in large environments and it’s hard to change existing workflows. Retraining is a bigger issue, and a number of staff specialize in how the vendor does things. These aren’t impossible to change, but they make it much harder to embrace a new provider.

But vendors add the features for a reason. Actually, three reasons:

  • Guard the henhouse: If a new feature is important enough, it might cause a customer shift (loss), or more likely result in the customer deploying a competitive product in parallel for a while – vendors, of course, are highly motivated to keep the competition away from their golden geese. Competitive deployments, either as evaluations or in small niche roles, substantially raise the risk of losing the customer – especially when the new sales guy offers a killer deal.
  • Force upgrade: The new features won’t run on existing hardware/software, forcing the customers to upgrade to a new version. We have seen a number of infrastructure providers peg new features to the latest codebase or appliance,

When Writing on iOS Security, Stop Asking AV Vendors Whether Apple Should Open the Platform to AV

A long title that almost covers everything I need to write about this article and many others like it. The more locked down a platform, the easier it is to secure. Opening up to antivirus is about 987 steps down the priority list for how Apple could improve the (already pretty good) iOS security. You want email and web filtering for your iPhone? Get them from the cloud…

iOS Security: Challenges and Opportunities

I just posted an article on iOS (iPhone/iPad) security that I’ve been thinking about for a while over at TidBITS. Here are excerpts from the beginning and ending:

One of the most controversial debates in the security world has long been the role of market share. Are Macs safer because there are fewer users, making them less attractive to serious cyber-criminals? Although Mac market share continues to increase slowly, the answer remains elusive. But it’s more likely that we’ll see the answer in our pockets, not on our desktops.

The iPhone is arguably the most popular phone series on the face of the planet. Include the other iOS devices – the iPad and iPod touch – and Apple becomes one of the most powerful mobile device manufacturers, with over 100 million devices sold so far. Since there are vastly more mobile phones in the world than computers, and since that disparity continues to grow, the iOS devices become far more significant in the big security picture than Macs.

…

Security Wins, For Now – In the overall equation of security risks versus advantages, Apple’s iOS devices are in a strong position. The fundamental security of the platform is well designed, even if there is room for improvement. The skill level required to create significant exploits for the platform is much higher than that needed to attack the Mac, even though there is more motivation for the bad guys.

Although there have been some calls to open up the platform to additional security software like antivirus tools (mostly from antivirus vendors), I’d rather see Apple continue to tighten down the screws and rely more on a closed system, faster patching rate, and more sandboxing. Their greatest opportunities for improvement lie with increased awareness, faster response (processes), and greater realization of the potential implications of security exposures. And even if Apple doesn’t get the message now, they certainly will the first time there is a widespread attack.

Tokenization: Use Cases, Part 2

In our last use case we presented an architecture for securely managing credit card numbers in-house. But in response to a mix of breaches and PCI requirements, some payment processors now offer tokenization as a service. Merchants can subscribe in order to avoid any need to store credit cards in their environment – instead the payment processor provides them with tokens as part of the transaction process. It’s an interesting approach, which can almost completely remove the PAN (Primary Account Number) from your environment.

The trade-off is that this closely ties you to your processor, and requires you to use only their approved (and usually provided) hardware and software. You reduce risk by removing credit card data entirely from your organization, at a cost in flexibility and (probably) higher switching costs. Many major processors have built end-to-end solutions using tokenization, encryption, or a combination of the two. For our example we will focus on tokenization within a fairly standard Point of Sale (PoS) terminal architecture, such as we see in many retail environments.

First, a little bit on the merchant architecture, which includes three components:

  • Point of Sale terminals for swiping credit cards.
  • A processing application for managing transactions.
  • A database for storing transaction information.

Traditionally, a customer swipes a credit card at the PoS terminal, which then communicates with an on-premise server, which in turn connects either to a central processing server (for payment authorization or batch clearing) in the merchant’s environment, or directly to the payment processor. Transaction information, including the PAN, is stored on the on-premise and/or central server. PCI-compliant configurations encrypt the PAN data in the local and central databases, as well as all communications.

When tokenization is implemented by the payment processor, the process changes to:

  • The retail customer swipes the credit card at the PoS.
  • The PoS encrypts the PAN with the public key of the payment processor’s tokenization server.
  • The transaction information (including the PAN, other magnetic stripe data, the transaction amount, and the merchant ID) is transmitted to the payment processor (encrypted).
  • The payment processor’s tokenization server decrypts the PAN and generates a token. If this PAN is already in the token database, they can either reuse the existing token (multi-use) or generate a new token specific to this transaction (single-use). Multi-use tokens may be shared amongst different vendors.
  • The token, PAN data, and possibly the merchant ID are stored in the tokenization database.
  • The PAN is used by the payment processor’s transaction systems for authorization and charge submission to the issuing bank.
  • The token is returned to the merchant’s local and/or central payment systems, along with the transaction approval/denial, which is handed off to the PoS terminal.
  • The merchant stores the token with the transaction information in their systems/databases.
  • For the subscribing merchant, future requests for settlement and reconciliation to the payment processor reference the token.

The key here is that the PAN is encrypted at the point of collection, and in a properly implemented system is never again in the merchant’s environment. The merchant never again has the PAN – they simply use the token in any case where the PAN would have been used previously, such as processing refunds. This is a fairly new approach and different providers use different options, but the fundamental architecture is fairly consistent. In our next example we’ll move beyond credit cards and show how to use tokenization to protect other private data within your environment.
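To make the processor side of this flow concrete, here is a minimal Python sketch of a token vault that issues multi-use or single-use tokens and hands only the token back to the merchant. It is purely illustrative: the class and function names are my own, the PAN decryption and real authorization logic are omitted, and it does not describe any actual processor’s implementation.

```python
import secrets


class TokenVault:
    """Toy processor-side token vault (illustrative only)."""

    def __init__(self, multi_use=True):
        self.multi_use = multi_use
        self.pan_to_token = {}   # PAN -> token, used to reuse multi-use tokens
        self.token_to_pan = {}   # token -> PAN, used only by the processor's own systems

    def _new_token(self, pan):
        # Keep the length and last four digits so downstream reports still work;
        # everything else is random, with no mathematical tie to the PAN.
        prefix = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
        return prefix + pan[-4:]

    def tokenize(self, pan):
        if self.multi_use and pan in self.pan_to_token:
            return self.pan_to_token[pan]          # reuse the existing multi-use token
        token = self._new_token(pan)
        while token in self.token_to_pan:          # avoid (unlikely) collisions
            token = self._new_token(pan)
        self.token_to_pan[token] = pan
        if self.multi_use:
            self.pan_to_token[pan] = token
        return token


def handle_transaction(vault, pan, amount, merchant_id):
    """Processor side: authorize against the real PAN, return only the token."""
    approved = amount < 5000                       # placeholder authorization rule
    return {"token": vault.tokenize(pan), "approved": approved, "merchant": merchant_id}


vault = TokenVault(multi_use=True)
print(handle_transaction(vault, "4111111111111111", 42.50, "MERCHANT-001"))
```

A single-use deployment would simply construct the vault with multi_use=False, so every transaction gets a fresh token even for a repeat card.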

GSM Cell Phones to Be Intercepted in Defcon Demonstration

This hit Slashdot today, and I expect the mainstream press to pick it up fairly soon. Chris Paget will be intercepting cell phone communications at Defcon during a live demonstration. I suspect this may be the single most spectacular presentation during all of this year’s Defcon and Black Hat. Yes, people will be cracking SCADA and jackpotting ATMs, but nothing strikes closer to the heart than showing major insecurities with the single most influential piece of technology in society. Globally, I think cell phones are even more important than television.

Chris is taking some major precautions to stay out of jail. He’s working hand in hand with the Electronic Frontier Foundation on the legal side, there will be plenty of warnings on-site, and no information from any calls will be recorded or stored. I suspect he’s setting up a microcell under his control and intercepting communications in a man-in-the-middle attack, but we’ll have to wait until his demo to get all the details.

For years the mobile phone companies have said this kind of interception is impractical or impossible. I guess we’ll all find out this weekend…

Tokenization: Token Servers, Part 2 (Architecture, Integration, and Management)

Our last post covered the core functions of the tokenization server. Today we’ll finish our discussion of token servers by covering the externals: the primary architectural models, how other applications communicate with the server(s), and supporting systems management functions.

Architecture

There are three basic ways to build a token server:

  • Stand-alone token server with a supporting back-end database.
  • Embedded/integrated within another software application.
  • Fully implemented within a database.

Most of the commercial tokenization solutions are stand-alone software applications that connect to a dedicated database for storage, with at least one vendor bundling their offering into an appliance. All the cryptographic processes are handled within the application (outside the database), and the database provides storage and supporting security functions. Token servers use standard Database Management Systems, such as Oracle and SQL Server, but locked down very tightly for security. These may be on the same physical (or virtual) system, on separate systems, or integrated into a load-balanced cluster. In this model (stand-alone server with DB back-end) the token server manages all the database tasks and communications with outside applications. Direct connections to the underlying database are restricted, and cryptographic operations occur within the tokenization server rather than the database.

In an embedded configuration the tokenization software is embedded into the application and supporting database. Rather than introducing a token proxy into the workflow of credit card processing, existing application functions are modified to implement tokens. To users of the system there is very little difference in behavior between embedded token services and a stand-alone token server, but on the back end there are two significant differences. First, this deployment model usually involves some code changes to the host application to support storage and use of the tokens. Second, each token is only useful for one instance of the application. Token server code, key management, and storage of the sensitive data and tokens all occur within the application. The tightly coupled nature of this model makes it very efficient for small organizations, but it does not support sharing tokens across multiple systems, and large distributed organizations may find performance inadequate.

Finally, it’s technically possible to manage tokenization completely within the database without the need for external software. This option relies on stored procedures, native encryption, and carefully designed database security and access controls. Used this way, tokenization is very similar to most data masking technologies. The database automatically parses incoming queries to identify and encrypt sensitive data. The stored procedure creates a random token – usually from a sequence generator within the database – and returns the token as the result of the user query. Finally, all the data is stored in a database row. Separate stored procedures are used to access encrypted data. This model was common before the advent of commercial third-party tokenization tools, but has fallen into disuse due to its lack of advanced security features and failure to leverage external cryptographic libraries and key management services.

There are a few more architectural considerations:

  • External key management and cryptographic operations are typically an option with any of these architectural models. This allows you to use more secure hardware security modules if desired.
  • Large deployments may require synchronization of multiple token servers in different, physically dispersed data centers. This support must be a feature of the token server, and is not available in all products. We will discuss this more when we get to usage and deployment models.
  • Even when using a stand-alone token server, you may also deploy software plug-ins to integrate and manage additional databases that connect to the token server. This doesn’t convert the database into a token server, as we described in our third option above, but supports communications for distributed systems that need access to either the token or the protected data.

Integration

Since tokenization must be integrated with a variety of databases and applications, there are three ways to communicate with the token server:

  • Application API calls: Applications make direct calls to the tokenization server’s procedural interface. While at least one tokenization server requires applications to explicitly access the tokenization functions, this is now a rarity. Because of the complexity of the cryptographic processes and the need for precise use of the tokenization server, vendors now supply software agents, modules, or libraries to support the integration of token services. These reside on the same platform as the calling application. Rather than recoding applications to use the API directly, these supporting modules accept existing communication methods and data formats. This reduces code changes to existing applications, and provides better security – especially for application developers who are not security experts. These modules then format the data for the tokenization API calls and establish secure communications with the tokenization server. This is generally the most secure option, as the code includes any required local cryptographic functions – such as encrypting a new piece of data with the token server’s public key.
  • Proxy Agents: Software agents that intercept database calls (for example, by replacing an ODBC or JDBC component). In this model the process or application that sends sensitive information may be entirely unaware of the token process. It sends data as it normally does, and the proxy agent intercepts the request. The agent replaces sensitive data with a token and then forwards the altered data stream. These reside on the token server or its supporting application server. This model minimizes application changes, as you only need to replace the application/database connection and the new software automatically manages tokenization. But it does create potential bottlenecks and failover issues, as it runs in-line with existing transaction processing systems.
  • Standard database queries: The tokenization server intercepts and interprets the requests. This is potentially the least secure option, especially for ingesting content to be tokenized.

While it sounds complex, there are really only two functions to implement: send new data to be tokenized and retrieve the token; and, when authorized, exchange the token for the protected data. The server itself should handle pretty much everything else (see the sketch of this two-call interface below).

Systems Management

Finally, as with any
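Here is the minimal sketch of that two-call interface promised above. It is a hypothetical illustration: the class and method names are mine, and a real token server would put authentication, auditing, key management, and durable storage behind these calls.

```python
import secrets


class TokenServer:
    """Toy in-memory token server exposing just the two calls described above.
    A sketch only: real products add authentication, auditing, key management,
    and durable storage behind this interface."""

    def __init__(self):
        self._vault = {}                          # token -> protected value

    def tokenize(self, value):
        """Send new data to be tokenized; retrieve the token."""
        token = secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token, authorized=False):
        """When authorized, exchange the token for the protected data."""
        if not authorized:
            raise PermissionError("caller is not authorized to recover the original value")
        return self._vault[token]


server = TokenServer()
tok = server.tokenize("4111111111111111")
print(tok)                                        # safe to hand back to the application
print(server.detokenize(tok, authorized=True))    # restricted, audited call in practice
```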

The Cancer within Evidence Based Research Methodologies

Alex Hutton has a wonderful must-read post on the Verizon security blog on Evidence Based Risk Management. Alex and I (along with others including Andrew Jaquith at Forrester, as well as Adam Shostack and Jeff Jones at Microsoft) are major proponents of improving security research and metrics to better inform the decisions we make on a day-to-day basis. Not just generic background data, but the kinds of numbers that can help answer questions like “Which security controls are most effective under XYZ circumstances?” You might think we already have a lot of that information, but once you dig in, the scarcity of good data is shocking. For example, we have theoretical models on password cracking – but absolutely no validated real-world data on how password lengths, strengths, and forced rotation correlate with the success of actual attacks. There’s a ton of anecdotal information and reports of password cracking times – especially within the penetration testing community – but I have yet to see a single large data set correlating password practices against actual exploits. I call this concept outcomes based security, which I now realize is just one aspect/subset of what Alex defines as Evidence Based Risk Management.

We often compare the practice of security with the practice of medicine. Practitioners of both fields attempt to limit negative outcomes within complex systems where external agents are effectively impossible to completely control or predict. When you get down to it, doctors are biological risk managers. Both fields are also challenged by having to make critical decisions with often incomplete information. Finally, while science is technically the basis of both fields, the pace and scope of scientific information is often insufficient to completely inform decisions.

My career in medicine started in 1990 when I first became certified as an EMT, and continued as I moved on to working as a full-time paramedic. Because of this background, some of my early IT jobs also involved work in the medical field (including one involving Alex’s boss about 10 years ago). Early on I was introduced to the concepts of Evidence Based Medicine that Alex details in his post. The basic concept is that we should collect vast amounts of data on patients, treatments, and outcomes – and use that to feed large epidemiological studies to better inform physicians. We could, for example, see under which circumstances medication X resulted in outcome Y on a wide enough scale to account for variables such as patient age, gender, medical history, other illnesses, other medications, etc.

You would probably be shocked at how little the practice of medicine is informed by hard data. For example, if you ever meet a doctor who promotes holistic medicine, acupuncture, or chiropractic, they are making decisions based on anecdotes rather than scientific evidence – all those treatments have been discredited, with some minor exceptions for limited application of chiropractic… probably not what you used it for.

Alex proposes an evidence-based approach – similar to the one medicine is in the midst of slowly adopting – for security. Thanks to the Verizon Data Breach Investigations Report, Trustwave’s data breach report, and little pockets of other similar information, we are slowly gaining more fundamental data to inform our security decisions. But EBRM faces the same near-crippling challenge as Evidence Based Medicine. In health care the biggest obstacle to EBM is the physicians themselves.

Many rebel against the use of the electronic medical records systems needed to collect the data – sometimes for legitimate reasons like crappy software, and at other times due to a simple desire to retain direct control over information. The reason we have HIPAA isn’t to protect your health care data from a breach, but because the government had to step in and legislate that doctors must release and share your healthcare information – which they often considered their own intellectual property. Not only do many physicians oppose sharing information – at least using the required tools – but they oppose any restrictions on their personal practice of medicine. Some of this is a legitimate concern – such as insurance companies restricting treatments to save money – but in other cases they just don’t want anyone telling them what to do – even optional guidance.

Medical professionals are just as subject to cognitive bias as the rest of us, and as a low-level medical provider myself I know that algorithms and checklists alone are never sufficient in managing patients – a lot of judgment is involved. But it is extremely difficult to balance personal experience and practices with evidence, especially when said evidence seems counterintuitive or conflicts with existing beliefs.

We face these exact same challenges in security:

  • Organizations and individual practitioners often oppose the collection and dissemination of the raw data (even anonymized) needed to learn from experience and advance best practices.
  • Individual practitioners, regulatory and standards bodies, and business constituents need to be willing to adjust or override their personal beliefs in the face of hard evidence, and support evolution in security practices based on hard evidence rather than personal experience.

Right now I consider the lack of data our biggest challenge, which is why we try to participate as much as possible in metrics projects, including our own. It’s also why I have an extremely strong bias towards outcome-based metrics rather than general risk/threat metrics. I’m much more interested in which controls work best under which circumstances, and how to make the implementation of said controls as effective and efficient as possible.

We are at the very beginning of EBRM. Despite all our research on security tools, technologies, vulnerabilities, exploits, and processes, the practice of security cannot progress beyond the equivalent of witch doctors until we collectively unite behind information collection, sharing, and analysis as the primary sources informing our security decisions. Seriously, wouldn’t you really like to know when 90-day password rotation actually reduces risk vs. merely annoying users and wasting time?

FireStarter: an Encrypted Value Is *Not* a Token!

We’ve been writing a lot on tokenization as we build the content for our next white paper, and in Adrian’s response to the PCI Council’s guidance on tokenization. I want to address something that’s really been ticking me off…

In our latest post in the series we described the details of token generation. One of the options, which we had to include since it’s built into many of the products, is encryption of the original value – then using the encrypted value as the token.

Here’s the thing: If you encrypt the value, it’s encryption, not tokenization! Encryption obfuscates the original data, but a token removes it. Conceptually, the major advantages of tokenization are:

  • The token cannot be reversed back to the original value.
  • The token maintains the same structure and data type as the original value.

While format preserving encryption can retain the structure and data type, it’s still reversible back to the original if you have the key and algorithm. Yes, you can add per-organization salt, but this is still encryption. I can see some cases where using a hash might make sense, but only if it’s a format preserving hash.

I worry that marketing is deliberately muddling the terms. Opinions? Otherwise, I declare here and now that if you are using an encrypted value and calling it a ‘token’, that is not tokenization.
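A quick sketch of the difference, under the assumption that a toy example helps (the library and names here are illustrative, not any product’s implementation): an “encrypted value” can be reversed by anyone holding the key, while a true token can only be resolved by asking the token server to look it up in its vault.

```python
import secrets
from cryptography.fernet import Fernet   # pip install cryptography

pan = b"4111111111111111"

# "Token" built by encryption: anyone who obtains the key reverses it directly.
key = Fernet.generate_key()
encrypted_value = Fernet(key).encrypt(pan)
print(Fernet(key).decrypt(encrypted_value))       # b'4111111111111111' comes right back

# True token: random digits with no mathematical relationship to the PAN.
vault = {}                                        # the token server's lookup table
token = "".join(secrets.choice("0123456789") for _ in range(16))
vault[token] = pan
# Without access to the vault there is no computation that recovers the PAN
# from the token, which is the property this post argues defines tokenization.
```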

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.