Incite 8/4/2010: Letters for Everyone

As I mentioned in the Mailbox Vigil, we don't put much stock in snail mail anymore. Though we did get a handful of letters from XX1 (oldest daughter) at sleepaway camp, aside from that it's bills and catalogs. That said, every so often you do get entertained by the mail. A case in point happened when we got back from our summer pilgrimage to the Northern regions this weekend (which is why there was no Incite last week). On arriving home (after a brutal 15-hour car ride, ugh!) we were greeted by a huge box of mail delivered by our trusty postal worker. Given that the Boss was occupied doing about 100 loads of laundry and I had to jump back into work, we let XX1 express her newfound maturity and sort our mail. It was pretty funny. She called out every single piece and got genuinely excited by some of the catalogs. She got a thank you note from a friend, a letter from another, and even a few of her own letters to us from camp (which didn't arrive before we left on holiday). XX2 (her twin) got a thank you note also. But nothing for the boy.

I could tell he was moping a bit and I hoped something would come his way. Finally he heard the magic words: "Sam got a letter." Reminded me of Blue's Clues. It was from someone with an address at the local mall. Hmmm. But he dutifully cracked it open and had me read it to him. It was from someone at LensCrafters reminding him that it's been a year since he's gotten his glasses and he's due for a check-up. He was on the edge of his seat as I read about how many adults have big problems with their eyes and how important it is to get an annual check-up. Guess they didn't realize the Boy is not yet 7, and also that he sees his ophthalmologist every 6 weeks. But that didn't matter – he got a letter. So he's carrying this letter around all day, like he just got a toy from Santa Claus or the Hanukkah fairy. He made me read it to him about 4 times. Now he thinks the salesperson at LensCrafters is his pal. Hopefully he won't want to invite her to his birthday party. Normally I would have just thrown out the direct mail piece, but I'm glad we let XX1 sort the mail. The Boy provided me with an afternoon of laughter, and that was certainly worth whatever it cost to send us the piece. – Mike.

Photo credits: "surprise in the mailbox" originally uploaded by sean dreilinger

Recent Securosis Posts

  • The Cancer within Evidence Based Research Methodologies
  • Friday Summary: July 23, 2010
  • Death, Irrelevance, and a Pig Roast
  • What Do We Learn at Black Hat/DefCon?
  • Tokenization Series: Token Servers
  • Token Servers, Part 2 (Architecture, Integration, and Management)
  • Token Servers, Part 3 (Deployment Models)
  • Various NSO Quant Posts: Monitoring Health Maintenance Subprocesses; Monitor Process Revisited

Incite 4 U

We're AV products. Who would try to hack us? – More great stuff from Krebs. This time he subjected himself to installing (and reinstalling) AV products in his VM to see which of them actually use Windows anti-exploitation technologies (like DEP and ASLR). The answer? Not many, though it's good to see Microsoft eating their own dog food. I like the responses from the AV vendors, starting with F-Secure's "we've been working on performance," which means they are prioritizing not killing your machine over security – go figure. And Panda shows they have ostriches in Spain as well, as they use their own techniques to protect their software. OK, sure. This is indicative of the issues facing secure software.
If the security guys can't even do it right, we don't have much hope for everyone else. Sad. – MR

Mid-market basics – She does not blog very often, but when she does, Jennifer Jabbusch gets it right. We here at Securosis are all about simplifying security for end users, and I thought JJ's recent post on Four Must-Have SMB Security Tools did just that. With all the security pontification about new technologies to supplant firewalls, and how ineffective AV is at detecting bad code, there are a couple tools that are fundamental to data security. As bored as we are talking about them, AV, firewalls, and access controls are the three basics that everyone needs. While I would personally throw in encrypted backups as a must-have, those are the core components. But for many SMB firms, these technologies are the starting point. They are not looking at extrusion prevention, behavioral monitoring, or event correlation – just trying to make sure the front door is locked, both physically and electronically. It's amazing to think, but I run into companies all the time where an 8-year-old copy of Norton AV and a password on the 'server' are the security program. I hope to see more basic posts like this that appeal to the mainstream – and SMB is the mainstream – on Dark Reading and other blogs as well. – AL

Jailbreak with a side of shiv – Are you one of those folks who wants to jailbreak your iPhone to install some free apps on it? Even though it removes some of the most important security controls on the device? Well, have I got a deal for you! Just visit jailbreakme.com and the magical web application will jailbreak your phone right from the browser. Of course any jailbreak is the exploitation of a security vulnerability, and in this case it's a remotely exploitable browser vulnerability, but don't worry – I'm sure no bad guys will use it now that it's public. Who would want to remotely hack the most popular cell phone on the planet? – RM

A pig by a different name – SourceFire recently unveiled Razorback, their latest open source framework. Yeah, that's some kind of hog or something,


What Do We Learn at Black Hat/DefCon?

Actually I learned nothing, because I wasn't there. Total calendar fail on my part, as a family vacation was scheduled during Black Hat week. You know how it goes. The Boss says, "How is the week of July 26 for our week at the beach?" BH is usually in early August, so I didn't think twice. But much as I missed seeing my peeps and tweeps at Black Hat, a week of R&R wasn't all bad. Though I was sort of following the Tweeter and did see the coverage and bloggage of the major sessions. So what did we learn this year?

  • SSL is bad: Our friend RSnake and Josh Sokol showed that SSL ain't all that. Too bad 99% of the laypeople out there see the lock and figure all is good. Actually, 10% of laypeople know what the lock means. The other 89% wonder how the Estonians made off with their life savings.
  • SCADA systems are porous: OK, I'm being kind. SCADA is a steaming pile of security FAIL. But we already knew that. Thanks to Red Tiger, we now know there are close to 40,000 vulnerabilities in SCADA systems, so we have a number. At least these systems aren't running anything important, right?
  • Auto-complete is not your friend: As a Mac guy I never really relied on auto-complete, since I can use TextExpander. But lots of folks do, and Big J got big press when he showed it's bad in Safari, and then proved IE is exposed as well.
  • Facebook spiders: Yes, an enterprising fellow named Ron Bowes realized that most folks have set their Facebook privacy settings, ah, incorrectly. So he was able to download about 100 million names, phone numbers, and email addresses with a Ruby script. Then he had the nerve to put it up on BitTorrent. Information wants to be free, after all. (This wasn't a session at BH, but cool nonetheless.)
  • ATM jackpot: Barnaby Jack showed once again that he can hit the jackpot at will, since war dialing still works (yay WarGames!) and you can get pretty much anything on the Internet (like a key to open many ATM devices). Anyhow, great demo, and I'm sure organized crime is very interested in those attack vectors.
  • I can haz your cell tower: Chris Paget showed how he could spoof a cell tower for $1,500. And we thought the WiFi Evil Twin was bad. This is cool stuff.

I could probably go on for a week, since all the smart kids go to Vegas in the summer to show how smart they are. And to be clear, they are smart. But do you, Mr. or Ms. Security Practitioner, care about these attacks and this research? The answer is yes. And no. First of all, you can see the future at Black Hat. Most of the research is not weaponized, and a good portion of it isn't really feasible to weaponize. An increasing amount is attack-ready, but for the most part you get to see what will be important at some point in the future. Maybe. For that reason, at least paying attention to the research is important. But tactically, what happens in Vegas is unlikely to have any impact on day-to-day operations any time soon. Note that I used the word 'tactical', because most of us spend our days fighting fires and get precious few minutes a day – if any – to think strategically about what we need to do tomorrow. Forget about thinking about how to protect against attacks discussed at Black Hat. That's probably somewhere around 17,502 on the To-Do list. Of course, if your ethical compass is a bit misdirected or your revenues need to be laundered through 5 banks in 3 countries before the funds hit your account, then the future is now and Black Hat is your business plan for the next few years. But that's another story for another day.


Tokenization: Token Servers, Part 3, Deployment Models

We have covered the internals of token servers and talked about architecture and integration of token services. Now we need to look at some of the different deployment models and how they match up to different types of businesses. Protecting medical records in multi-company environments is a very different challenge than processing credit cards for thousands of merchants.

Central Token Server

The most common deployment model we see today is a single token server that sits between application servers and the back-end transaction servers. The token server issues one or more tokens for each instance of sensitive information that it receives. For most applications it becomes a reference library, storing sensitive information within a repository and providing an index back to the real data as needed. The token service is placed in line with existing transaction systems, adding a new substitution step between business applications and back-end data processing. As mentioned in previous posts, this model is excellent for security as it consolidates all the credit card data into a single highly secure server; additionally, it is very simple to deploy as all services reside in a single location. And limiting the number of locations where sensitive data is stored and accessed both improves security and reduces auditing, as there are fewer systems to review.

A central token server works well for small businesses with consolidated operations, but does not scale well for larger distributed organizations. Nor does it provide the reliability and uptime demanded by always-on Internet businesses. For example:

  • Latency: The creation of a new token, lookup of existing customers, and data integrity checks are computationally complex. Most vendors have worked hard to alleviate this problem, but some still have latency issues that make them inappropriate for financial/point of sale usage.
  • Failover: If the central token server breaks down, or is unavailable because of a network outage, all processing of sensitive data (such as orders) stops. Back-end processes that require tokens halt.
  • Geography: Remote offices, especially those in remote geographic locations, suffer from network latency, routing issues, and Internet outages. Remote token lookups are slow, and both business applications and back-end processes suffer disproportionately in the event of disaster or prolonged network outages.

To overcome issues in performance, failover, and network communications, several other deployment variations are available from tokenization vendors.

Distributed Token Servers

With distributed token servers, the token databases are copied and shared among multiple sites. Each site has a copy of the tokens and encrypted data. In this model, each site is a peer of the others, with full functionality. This model solves some of the performance issues with network latency for token lookup, as well as failover concerns. Since each token server is a mirror, if any single token server goes down, the others can share its load. Token generation overhead is mitigated, as multiple servers assist in token generation and distribution of requests balances the load. Distributed servers are costly but appropriate for financial transaction processing. While this model offers the best option for uptime and performance, synchronization between servers requires careful consideration.
Multiple copies mean synchronization issues, carefully timed updates of data between locations, and key management so encrypted credit card numbers can be accessed. Finally, with multiple databases all serving tokens, the number of repositories that must be secured, maintained, and audited increases substantially.

Partitioned Token Servers

In a partitioned deployment, a single token server is designated as 'active', and one or more additional token servers are 'passive' backups. In this model, if the active server crashes or is unavailable, a passive server becomes active until the primary connection can be re-established. The partitioned model improves on the central model by replicating the (single, primary) server configuration. These replicas are normally at the same location as the primary, but they may also be distributed to other locations. This differs from the distributed model in that only one server is active at a time, and the servers are not all peers of one another. Conceptually, partitioned servers support a hybrid model where each server is active and used by a particular subset of endpoints and transaction servers, as well as serving as a backup for other token servers. In this case each token server is assigned a primary responsibility, but can take on secondary roles if another token server goes down. While the option exists, we are unaware of any customers using it today.

The partitioned model solves failover issues: if a token server fails, the passive server takes over. Synchronization is easier with this model, as the passive server need only mirror the active server, and bi-directional synchronization is not required. Token servers leverage the mirroring capabilities built into their back-end relational database engines to provide this capability.

Next we will move on to use cases.
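To make the active/passive failover concrete, here is a rough Python sketch of how a calling application might fall back from the active token server to a passive backup. The server addresses, endpoint path, and field names are hypothetical placeholders rather than any vendor's actual interface, and it assumes the widely used requests library.

```python
# Hypothetical client-side failover for a partitioned (active/passive) token service.
# Addresses, endpoint path, and JSON fields are illustrative only.
import requests

TOKEN_SERVERS = [
    "https://tokens-active.example.internal",    # designated active server
    "https://tokens-passive.example.internal",   # passive backup, promoted on failure
]

def tokenize(card_number: str, timeout: float = 2.0) -> str:
    """Request a token, falling back to the passive server if the active one is unreachable."""
    last_error = None
    for base_url in TOKEN_SERVERS:
        try:
            resp = requests.post(
                f"{base_url}/v1/tokenize",
                json={"value": card_number},
                timeout=timeout,
                verify="/etc/pki/token-ca.pem",                  # trust only the token service CA
                cert=("/etc/pki/app.crt", "/etc/pki/app.key"),   # client certificate for mutual TLS
            )
            resp.raise_for_status()
            return resp.json()["token"]
        except requests.RequestException as exc:
            last_error = exc    # try the next server in the list
    raise RuntimeError(f"No token server available: {last_error}")
```

In a real partitioned deployment the passive server only begins servicing requests once it has been promoted, so the retry loop above simply reflects the client's view of that failover window.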


Tokenization: Series Index

Understanding and Selecting a Tokenization Solution:

  • Introduction
  • Business Justification
  • Token System Basics
  • The Tokens
  • Token Servers, Part 1, Internal Functions
  • Token Servers, Part 2, Architecture and Integration
  • Token Servers, Part 3, Deployment Models
  • Tokenization: Use Cases, Part 1
  • Tokenization: Use Cases, Part 2
  • Tokenization: Use Cases, Part 3
  • Tokenization Topic Roundup
  • Tokenization: Selection Process


GSM Cell Phones to Be Intercepted in Defcon Demonstration

This hit Slashdot today, and I expect the mainstream press to pick it up fairly soon. Chris Paget will be intercepting cell phone communications at Defcon during a live demonstration. I suspect this may be the single most spectacular presentation during all of this year's Defcon and Black Hat. Yes, people will be cracking SCADA and jackpotting ATMs, but nothing strikes closer to the heart than showing major insecurities in the single most influential piece of technology in society. Globally, I think cell phones are even more important than television. Chris is taking some major precautions to stay out of jail. He's working hand in hand with the Electronic Frontier Foundation on the legal side, there will be plenty of warnings on-site, and no information from any calls will be recorded or stored. I suspect he's setting up a microcell under his control and intercepting communications in a man-in-the-middle attack, but we'll have to wait until his demo to get all the details. For years the mobile phone companies have said this kind of interception is impractical or impossible. I guess we'll all find out this weekend…


Friday Summary: July 23, 2010

A couple weeks ago I was sitting on the edge of the hotel bed in Boulder, Colorado, watching the immaculate television. A US-made 30" CRT television in "standard definition". That's cathode ray tube for those who don't remember, and 'standard' is the marketing term for 'low'. This thing was freaking horrible, yet it was perfect. The color was correct. And while the contrast ratio was not great, it was not terrible either. Then it dawned on me that the problem was not the picture, as this is the quality we used to get from televisions. Viewing an old set, operating exactly the same way it always had, I knew the problem was me. High def has so much more information, but the experience of watching the game is the same now as it was then. It hit me just how much our brains were filling in missing information, and we did not mind this sort of performance 10 years ago because it was the best available. We did not really see the names on the backs of football jerseys during those Sunday games, we just thought we did. Heck, we probably did not often make out the numbers either, but somehow we knew who was who. We knew where our favorite players on the field were, and the red streak on the bottom of the screen pounding a blue-colored blob must be number 42. Our brain filled in and sharpened the picture for us.

Rich and I had been discussing experience bias, recency bias, and cognitive dissonance during our trip to Denver. We were talking about our recent survey and how to interpret the numbers without falling into bias traps. It was an interesting discussion of how people detect patterns, but like many of our conversations it devolved into how political and religious convictions can cloud judgement. But not until I was sitting there, watching television in the hotel, did I realize how much our prior experiences and knowledge shape perception, derived value, and interpreted results. Mostly for the good, but unquestionably some bad.

Rich also sent me a link to a Michael Shermer video just after that, in which Shermer discusses patterns and self-deception. You can watch the video and say "sure, I see patterns, and sometimes what I see is not there", but I don't think videos like this demonstrate how pervasive this built-in feature is, and how it applies to every situation we find ourselves in. The television example of this phenomenon was more shocking than some others that have popped into my head since. I have been investing in and listening to high-end audio products such as headphones for years. But I never think about the illusion of a 'soundstage' right in front of me, I just think of it as being there. I know the guitar player is on the right edge of the stage, and the drummer is in the back, slightly to the left. I can clearly hear the singer when she turns her head to look at fellow band members during the song. None of that is really in front of me, but there is something in the bits of the digital facsimile on my hard drive that lets my brain recognize all these things, placing the scene right there in front of me. I guess the hard part is recognizing when and how it alters our perception.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted in "Apple in a bind over its DNS patch".
  • Adrian's Dark Reading post on SIEM ain't DAM.
  • Rich and Martin on Network Security Podcast #206.

Favorite Securosis Posts

  • Rich: Pricing Cyber-Policies. As we used to say at Gartner, all a 'cybersecurity' policy buys you is a seat at the arbitration table.
  • Mike Rothman: The Cancer within Evidence Based Research Methodologies. We all need to share data more frequently and effectively. This is why.
  • Adrian Lane: FireStarter: an Encrypted Value Is Not a Token!. Bummer.

Other Securosis Posts

  • Tokenization: Token Servers.
  • Incite 7/20/2010: Visiting Day.
  • Tokenization: The Tokens.
  • Comments on Visa's Tokenization Best Practices.
  • Friday Summary: July 15, 2010.

Favorite Outside Posts

  • Rich: Successful Evidence-Based Risk Management: The Value of a Great CSIRT. I realize I did an entire blog post based on this, but it really is a must-read by Alex Hutton. We're basically a bunch of blind mice building 2-lego-high walls until we start gathering, and sharing, information on which of our security initiatives really work and when.
  • Mike Rothman: Understanding the advanced persistent threat. Bejtlich's piece on APT in SearchSecurity is a good overview of the term, and how it's gotten fsked by security marketing.
  • Adrian Lane: Security rule No. 1: Assume you're hacked.

Project Quant Posts

  • NSO Quant: Monitor Process Revisited.
  • NSO Quant: Monitoring Health Maintenance Subprocesses.
  • NSO Quant: Validate and Escalate Subprocesses.
  • NSO Quant: Analyze Subprocess.
  • NSO Quant: Collect and Store Subprocesses.
  • NSO Quant: Define Policies Subprocess.
  • NSO Quant: Enumerate and Scope Subprocesses.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.
  • Report: Database Assessment.
  • Database Audit Events.
  • XML Security Overview Presentation.
  • Project Quant Survey Results and Analysis.
  • Project Quant Metrics Model Report.

Top News and Posts

  • Researchers: Authentication crack could affect millions.
  • SCADA System's Hard-Coded Password Circulated Online for Years.
  • Microsoft Launches 'Coordinated' Vulnerability Disclosure Program.
  • GSM Cracking Software Released.
  • How Mass SQL Injection Attacks Became an Epidemic.
  • Harsh Words for Professional Infosec Certification.
  • Google re-ups the disclosure debate. A new policy – 60 days to fix critical bugs or they disclose. I wonder if anyone asked the end users what they want?
  • Adobe Reader enabling protected mode. This is a very major development… if it works. Also curious to see what they do for Macs.
  • Oracle to release 59 critical patches in security update. Is it just me, or do they have more security patches than bug fixes nowadays?
  • Connecticut AG reaches agreement with


Death, Irrelevance, and a Pig Roast

There is nothing like a good old-fashioned mud-slinging battle. As long as you aren't the one covered in mud, that is. I read about the Death of Snort and started laughing. The first thing they teach you in marketing school is that when no one knows who you are, go up to the biggest guy in the room and kick them in the nuts. You'll get your ass kicked, but at least everyone will know who you are. That's exactly what the folks at OISF (who drive the Suricata project) did, and they got Ellen Messmer of NetworkWorld to bite on it. Then she got Marty Roesch to fuel the fire, and the end result is much more airtime than Suricata deserves. Not that it isn't interesting technology, but to say it's going to displace Snort any time soon is crap. To go out with a story about Snort being dead is disingenuous. But given the need to drive page views, the folks at NWW were more than willing to provide airtime. Suricata uses Snort signatures (for the most part) to drive its rule base. They'd better hope it's not dead.

But it brings up a larger issue of when a technology really is dead. In reality, there are few examples of products really dying. If you ended up with some ConSentry gear, then you know the pain of product death. But most products are around ad infinitum, even if they aren't evolving. So those products aren't really dead, they just become irrelevant. Take Cisco MARS as an example. Cisco isn't killing it, it's just not being used as a multi-purpose SIEM, which is how it was positioned for years. Irrelevant in the SIEM discussion, yes. Dead, no. Ultimately, competition is good. Suricata will likely push the Snort team to advance their technology faster than they would in the absence of an alternative. But it's a bit early to load Snort onto the barbie – even if it is the other white meat. Yet it usually gets back to the reality that you can't believe everything you read. Actually, you probably shouldn't believe much of what you read. Except our stuff, of course.

Photo credit: "Roasted pig (large)" originally uploaded by vnoel


Tokenization: Token Servers, Part 2 (Architecture, Integration, and Management)

Our last post covered the core functions of the tokenization server. Today we'll finish our discussion of token servers by covering the externals: the primary architectural models, how other applications communicate with the server(s), and supporting systems management functions.

Architecture

There are three basic ways to build a token server:

  • Stand-alone token server with a supporting back-end database.
  • Embedded/integrated within another software application.
  • Fully implemented within a database.

Most of the commercial tokenization solutions are stand-alone software applications that connect to a dedicated database for storage, with at least one vendor bundling their offering into an appliance. All the cryptographic processes are handled within the application (outside the database), and the database provides storage and supporting security functions. Token servers use standard Database Management Systems, such as Oracle and SQL Server, but locked down very tightly for security. These may be on the same physical (or virtual) system, on separate systems, or integrated into a load-balanced cluster. In this model (stand-alone server with DB back-end) the token server manages all the database tasks and communications with outside applications. Direct connections to the underlying database are restricted, and cryptographic operations occur within the tokenization server rather than the database.

In an embedded configuration the tokenization software is embedded into the application and supporting database. Rather than introducing a token proxy into the workflow of credit card processing, existing application functions are modified to implement tokens. To users of the system there is very little difference in behavior between embedded token services and a stand-alone token server, but on the back end there are two significant differences. First, this deployment model usually involves some code changes to the host application to support storage and use of the tokens. Second, each token is only useful for one instance of the application. Token server code, key management, and storage of the sensitive data and tokens all occur within the application. The tightly coupled nature of this model makes it very efficient for small organizations, but it does not support sharing tokens across multiple systems, and large distributed organizations may find performance inadequate.

Finally, it's technically possible to manage tokenization completely within the database without the need for external software. This option relies on stored procedures, native encryption, and carefully designed database security and access controls. Used this way, tokenization is very similar to most data masking technologies. The database automatically parses incoming queries to identify and encrypt sensitive data. The stored procedure creates a random token – usually from a sequence generator within the database – and returns the token as the result of the user query. Finally, all the data is stored in a database row. Separate stored procedures are used to access encrypted data. This model was common before the advent of commercial third-party tokenization tools, but has fallen into disuse due to its lack of advanced security features and failure to leverage external cryptographic libraries and key management services.

There are a few more architectural considerations:

  • External key management and cryptographic operations are typically an option with any of these architectural models.
    This allows you to use more-secure hardware security modules if desired.
  • Large deployments may require synchronization of multiple token servers in different, physically dispersed data centers. This support must be a feature of the token server, and is not available in all products. We will discuss this more when we get to usage and deployment models.
  • Even when using a stand-alone token server, you may also deploy software plug-ins to integrate and manage additional databases that connect to the token server. This doesn't convert the database into a token server, as we described in our third option above, but supports communications for distributed systems that need access to either the token or the protected data.

Integration

Since tokenization must be integrated with a variety of databases and applications, there are three ways to communicate with the token server:

  • Application API calls: Applications make direct calls to the tokenization server's procedural interface. While at least one tokenization server requires applications to explicitly access the tokenization functions, this is now a rarity. Because of the complexity of the cryptographic processes and the need for precise use of the tokenization server, vendors now supply software agents, modules, or libraries to support the integration of token services. These reside on the same platform as the calling application. Rather than recoding applications to use the API directly, these supporting modules accept existing communication methods and data formats. This reduces code changes to existing applications, and provides better security – especially for application developers who are not security experts. These modules then format the data for the tokenization API calls and establish secure communications with the tokenization server. This is generally the most secure option, as the code includes any required local cryptographic functions – such as encrypting a new piece of data with the token server's public key.
  • Proxy agents: Software agents that intercept database calls (for example, by replacing an ODBC or JDBC component). In this model the process or application that sends sensitive information may be entirely unaware of the token process. It sends data as it normally does, and the proxy agent intercepts the request. The agent replaces sensitive data with a token and then forwards the altered data stream. These agents reside on the token server or its supporting application server. This model minimizes application changes, as you only need to replace the application/database connection and the new software automatically manages tokenization. But it does create potential bottlenecks and failover issues, as it runs in line with existing transaction processing systems.
  • Standard database queries: The tokenization server intercepts and interprets the requests. This is potentially the least secure option, especially for ingesting content to be tokenized.

While it sounds complex, there are really only two functions to implement:

  • Send new data to be tokenized and retrieve the token.
  • When authorized, exchange the token for the protected data.

The server itself should handle pretty much everything else.

Systems Management

Finally, as with any
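As a rough illustration of how small that integration surface is, here is a Python sketch of those two calls wrapped in a client class. The endpoint paths, JSON field names, and certificate locations are hypothetical stand-ins, not any particular product's API, and the sketch assumes the requests library.

```python
# Hypothetical wrapper showing the two core tokenization calls over mutually
# authenticated TLS. Paths and field names are illustrative, not a vendor API.
import requests

class TokenClient:
    def __init__(self, base_url: str, client_cert: str, client_key: str, ca_bundle: str):
        self.base_url = base_url
        self.session = requests.Session()
        self.session.cert = (client_cert, client_key)   # client side of mutual TLS
        self.session.verify = ca_bundle                  # trust only the token service CA

    def tokenize(self, sensitive_value: str) -> str:
        """Send new data to be tokenized and retrieve the token."""
        resp = self.session.post(f"{self.base_url}/v1/tokens",
                                 json={"value": sensitive_value}, timeout=5)
        resp.raise_for_status()
        return resp.json()["token"]

    def detokenize(self, token: str) -> str:
        """When authorized, exchange the token for the protected data."""
        resp = self.session.post(f"{self.base_url}/v1/detokenize",
                                 json={"token": token}, timeout=5)
        resp.raise_for_status()
        return resp.json()["value"]
```

Everything else – authentication policy, encryption, storage, and auditing – stays behind that interface on the token server itself.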


Tokenization: Token Servers

In our previous post we covered token creation, a core feature of token servers. Now we'll discuss the remaining behind-the-scenes features of token servers: securing data, validating users, and returning original content when necessary. Many of these services are completely invisible to end users of token systems, and for day-to-day use you don't need to worry about the details. But how the token server works internally has significant effects on performance, scalability, and security. You need to assess these functions during selection to ensure you don't run into problems down the road. For simplicity we will use credit card numbers as our primary example in this post, but any type of data can be tokenized.

To better understand the functions performed by the token server, let's recap the two basic service requests. The token server accepts sensitive data (e.g., credit card numbers) from authenticated applications and users, responds by returning a new or existing token, and stores the encrypted value when creating new tokens. This comprises 99% of all token server requests. The token server also returns decrypted information to approved applications when presented a token with acceptable authorization credentials.

Authentication

Authentication is core to the security of token servers, which need to authenticate connected applications as well as specific users. To rebuff potential attacks, token servers perform bidirectional authentication of all applications prior to servicing requests. The first step in this process is to set up a mutually authenticated SSL/TLS session, and validate that the connection is started with a trusted certificate from an approved application. Any strong authentication should be sufficient, and some implementations may layer additional requirements on top. The second phase of authentication is to validate the user who issues a request. In some cases this may be a system/application administrator using specific administrative privileges, or it may be one of many service accounts assigned privileges to request tokens or to request a given unencrypted credit card number. The token server provides separation of duties through these user roles – serving requests only from approved users, through allowed applications, from authorized locations. The token server may further restrict transactions – perhaps only allowing a limited subset of database queries.

Data Encryption

Although technically the sensitive data might not be encrypted by the token server in the token database, in practice every implementation we are aware of encrypts the content. That means that prior to being written to disk and stored in the database, the data must be encrypted with an industry-accepted 'strong' encryption cipher. After the token is generated, the token server encrypts the credit card with a specific encryption key used only by that server. The data is then stored in the database, and thus written to disk along with the token, for safekeeping. Every current tokenization server is built on a relational database. These servers logically group tokens, credit cards, and related information in a database row – storing these related items together. At this point, one of two encryption options is applied: either field-level or transparent data encryption. In field-level encryption, just the row (or specific fields within it) is encrypted.
This allows a token server to store data from different applications (e.g., credit cards from a specific merchant) in the same database, using different encryption keys. Some token systems leverage transparent database encryption (TDE), which encrypts the entire database under a single key. In these cases the database performs the encryption on all data prior to being written to disk. Both forms of encryption protect data from indirect exposure such as someone examining disks or backup media, but field-level encryption enables greater granularity, with a potential performance cost. The token server bundles encryption, hashing, and random number generation features – both to create tokens and to encrypt network sessions and stored data. Finally, some implementations use asymmetric encryption to protect the data as it is collected within the application (or on a point of sale device) and sent to the server. The data is encrypted with the server's public key. The connection session will still typically be encrypted with SSL/TLS as well, but to support authentication rather than for any claimed security increase from double encryption. The token server becomes the back-end point of decryption, using the private key to regenerate the plaintext prior to generating the proxy token.

Key Management

Any time you have encryption, you need key management. Key services may be provided directly from the vendor of the token services in a separate application, or by hardware security modules (HSMs), if supported. Either way, keys are kept separate from the encrypted data and algorithms, providing security in case the token server is compromised, as well as helping enforce separation of duties between system administrators. Each token server will have one or more unique keys – not shared by other token servers – to encrypt credit card numbers and other sensitive data. Symmetric keys are used, meaning the same key is used for both encryption and decryption. Communication between the token and key servers is mutually authenticated and encrypted. Tokenization systems also need to manage any asymmetric keys for connected applications and devices. As with any encryption, the key management server/device/functions must support secure key storage, rotation, and backup/restore.

Token Storage

Token storage is one of the more complicated aspects of token servers. How tokens are used to reference sensitive data or previous transactions is a major performance concern. Some applications require additional security precautions around the generation and storage of tokens, so tokens are not stored in a directly referenceable format. Use cases such as financial transactions with either single-use or multi-use tokens can require convoluted storage strategies to balance security of the data against referential performance. Let's dig into some of these issues:

  • Multi-token environments: Some systems provide a single token to reference every instance of a particular piece of sensitive data. So a credit card used at a specific merchant site will be represented by a
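To ground the encrypt-then-store flow described in this post, here is a toy Python sketch of what happens for a new value: generate a random surrogate token, encrypt the original value with the server's key, and store both together in one row. It uses the third-party cryptography package and an in-memory SQLite table purely for illustration; a real token server would pull its key from a key manager or HSM, enforce the authentication described above, and use a hardened relational database rather than this throwaway schema.

```python
# Toy illustration of token generation plus field-level encryption before storage.
# Real token servers use dedicated key management (often HSMs) and hardened RDBMSs.
import secrets
import sqlite3
from cryptography.fernet import Fernet

SERVER_KEY = Fernet.generate_key()   # in practice, loaded from a key manager, never generated ad hoc
cipher = Fernet(SERVER_KEY)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vault (token TEXT PRIMARY KEY, ciphertext BLOB)")

def tokenize(card_number: str) -> str:
    """Create a random token and store it alongside the encrypted card number."""
    token = secrets.token_hex(16)                       # random surrogate with no relationship to the PAN
    ciphertext = cipher.encrypt(card_number.encode())   # field-level encryption of the sensitive value
    db.execute("INSERT INTO vault (token, ciphertext) VALUES (?, ?)", (token, ciphertext))
    return token

def detokenize(token: str) -> str:
    """Exchange a token for the original value (authorization checks omitted in this sketch)."""
    row = db.execute("SELECT ciphertext FROM vault WHERE token = ?", (token,)).fetchone()
    if row is None:
        raise KeyError("unknown token")
    return cipher.decrypt(row[0]).decode()

# Example: t = tokenize("4111111111111111"); assert detokenize(t) == "4111111111111111"
```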


Incite 7/20/2010: Visiting Day

Back when I went to sleepaway camp as a kid I always looked forward to Visiting Day. Mostly for the food, because after a couple weeks of camp food anything my folks brought up was a big improvement. But I admit it was great to see the same families year after year (especially the family that brought enough KFC to feed the entire camp) and to enjoy a day of R&R with your own family before getting back to the serious business of camping. So I was really excited this past weekend when the shoe was on the other foot, and I got to be the parent visiting XX1 at her camp. First off I hadn’t seen the camp, so I had no context when I saw pictures of her doing this or that. But most of all, we were looking forward to seeing our oldest girl. She’s been gone 3 weeks now, and the Boss and I really missed her. I have to say I was very impressed with the camp. There were a ton of activities for pretty much everyone. Back in my day, we’d entertain ourselves with a ketchup cap playing a game called Skully. Now these kids have go-karts, an adventure course, a zipline (from a terrifying looking 50 foot perch), ATVs and dirt bikes, waterskiing, and a bunch of other stuff. In the arts center they had an iMac-based video production and editing rig (yes, XX1 starred in a short video with her group), ceramics (including their own wheels and kiln), digital photography, and tons of other stuff. For boys there was rocketry and woodworking (including tabletop lathes and jigsaws). Made me want to go back to camp. Don’t tell Rich and Adrian if I drop offline for couple weeks, okay? Everything was pretty clean and her bunk was well organized, as you can see from the picture. Just like her room at home…not! Obviously the counselors help out and make sure everything is tidy, but with the daily inspections and work wheel (to assign chores every day), she’s got to do her part of keeping things clean and orderly. Maybe we’ll even be able to keep that momentum when she returns home. Most of all, it was great to see our young girl maturing in front of our eyes. After only 3 weeks away, she is far more confident and sure of herself. It was great to see. Her counselors are from New Zealand and Mexico, so she’s gotten a view of other parts of the world and learned about other cultures, and is now excited to explore what the world has to offer. It’s been a transformative experience for her, and we couldn’t be happier. I really pushed to send her to camp as early as possible because I firmly believe kids have to learn to fend for themselves in the world without the ever-present influence of their folks. The only way to do that is away from home. Camp provides a safe environment for kids to figure out how to get along (in close quarters) with other kids, and to do activities they can’t at home. That was based on my experience, and I’m glad to see it’s happening for my daughter as well. In fact, XX2 will go next year (2 years younger than XX1 is now) and she couldn’t be more excited after visiting. But there’s more! An unforeseen benefit of camp accrues to us. Not just having one less kid to deal with over the summer – which definitely helps. But sending the kids to camp each summer will force us (well, really the Boss) to let go and get comfortable with the reality that at some point our kids will grow, leave the nest, and fly on their own. Many families don’t deal with this transition until college and it’s very disruptive and painful. In another 9 years we’ll be ready, because we are letting our kids fly every summer. 
And from where I sit, that's a great thing. – Mike

Photo credits: "XX1 bunk" originally uploaded by Mike Rothman

Recent Securosis Posts

Wow. Busy week on the blog. Nice.

  • Pricing Cyber-Policies
  • FireStarter: An Encrypted Value is Not a Token!
  • Tokenization: The Tokens
  • Comments on Visa's Tokenization Best Practices
  • Friday Summary: July 15, 2010
  • Tokenization Architecture – The Basics
  • Color-blind Swans and Incident Response
  • Home Business Payment Security
  • Simple Ideas to Start Improving the Economics of Cybersecurity
  • Various NSO Quant Posts on the Monitor Subprocesses: Define Policies; Collect and Store; Analyze; Validate and Escalate

Incite 4 U

We have a failure to communicate! – Chris makes a great point on the How is that Assurance Evidence? blog about the biggest problem we security folks face on a daily basis. It ain't mis-configured devices or other typical user stupidity. It's our fundamental inability to communicate. He's exactly right, and it manifests in the lack of any funds in the credibility bank, which obviously impacts our ability to drive our security agendas. Holding a senior-level security job is no longer about the technology. Not by a long shot. It's about evangelizing the security program and persuading colleagues to think security first and to do the right thing. Bravo, Chris. Always good to get a reminder that all the security kung-fu in the world doesn't mean crap unless the business thinks it's important to protect the data. – MR

Cyber RF – I was reading Steven Bellovin's post on Cyberwar, and the only thing that came to mind was Sun Tzu's quote, "Victorious warriors win first and then go to war, while defeated warriors go to war first and then seek to win." Don't think I am one of those guys behind the 'Cyberwar' bandwagon, or who likes using war metaphors for football – this subject makes me want to gag. Like most posts on this subject, there is an interesting mixture of stuff I agree with, and an equal blend of stuff I totally disagree with. But the reason


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.