Tokenization: Use Cases, Part 2

In our last use case we presented an architecture for securely managing credit card numbers in-house. But in response to a mix of breaches and PCI requirements, some payment processors now offer tokenization as a service. Merchants can subscribe in order to avoid any need to store credit cards in their environment – instead the payment processor provides them with tokens as part of the transaction process. It’s an interesting approach, which can almost completely remove the PAN (Primary Account Number) from your environment. The trade-off is that this closely ties you to your processor, and requires you to use only their approved (and usually provided) hardware and software. You reduce risk by removing credit card data entirely from your organization, at a cost in flexibility and (probably) higher switching costs. Many major processors have built end-to-end solutions using tokenization, encryption, or a combination of the two. For our example we will focus on tokenization within a fairly standard Point of Sale (PoS) terminal architecture, such as we see in many retail environments.

First, a bit on the merchant architecture, which includes three components:

  • Point of Sale terminals for swiping credit cards.
  • A processing application for managing transactions.
  • A database for storing transaction information.

Traditionally, a customer swipes a credit card at the PoS terminal, which then communicates with an on-premise server, which in turn connects either to a central processing server (for payment authorization or batch clearing) in the merchant’s environment, or directly to the payment processor. Transaction information, including the PAN, is stored on the on-premise and/or central server. PCI-compliant configurations encrypt the PAN data in the local and central databases, as well as all communications.

When tokenization is implemented by the payment processor, the process changes to:

  1. The retail customer swipes the credit card at the PoS.
  2. The PoS encrypts the PAN with the public key of the payment processor’s tokenization server.
  3. The transaction information (including the PAN, other magnetic stripe data, the transaction amount, and the merchant ID) is transmitted, encrypted, to the payment processor.
  4. The payment processor’s tokenization server decrypts the PAN and generates a token. If this PAN is already in the token database, they can either reuse the existing token (multi-use) or generate a new token specific to this transaction (single-use). Multi-use tokens may be shared among different vendors.
  5. The token, PAN data, and possibly the merchant ID are stored in the tokenization database.
  6. The PAN is used by the payment processor’s transaction systems for authorization and charge submission to the issuing bank.
  7. The token is returned to the merchant’s local and/or central payment systems, along with the transaction approval/denial, and handed off to the PoS terminal.
  8. The merchant stores the token with the transaction information in their systems/databases.
  9. For the subscribing merchant, future requests for settlement and reconciliation to the payment processor reference the token.

The key here is that the PAN is encrypted at the point of collection, and in a properly implemented system is never again in the merchant’s environment.
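To make step 2 concrete, here is a minimal sketch (assuming Python with the `cryptography` package) of the collection-side encryption: the terminal encrypts the PAN under the tokenization server’s public key, so nothing downstream of the terminal ever sees it in the clear. This is an illustration, not any processor’s actual API; in production the public key ships with the processor’s PoS software, and here we generate a throwaway keypair just so the example runs.

```python
from base64 import b64encode

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the processor's tokenization server key. In a real
# deployment only the public half would be present on the terminal.
_server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
TOKENIZER_PUBLIC_KEY = _server_key.public_key()

def build_transaction(pan: str, amount_cents: int, merchant_id: str) -> dict:
    """Encrypt the PAN and assemble the transaction payload; the PAN
    never leaves this function in the clear."""
    encrypted_pan = TOKENIZER_PUBLIC_KEY.encrypt(
        pan.encode(),
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    return {
        "encrypted_pan": b64encode(encrypted_pan).decode(),
        "amount_cents": amount_cents,
        "merchant_id": merchant_id,
        # Other magnetic stripe data would ride along here, protected
        # by the encrypted channel to the processor.
    }

payload = build_transaction("4111111111111111", 1999, "MERCHANT-0042")
print(payload["encrypted_pan"][:32], "...")  # ciphertext, not the PAN
```

Only the processor’s tokenization server holds the matching private key, which is what makes the decryption in step 4 possible and keeps the raw PAN out of the merchant’s hands.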
The merchant never again has the PAN – they simply use the token in any case where the PAN would have been used previously, such as processing refunds. This is a fairly new approach, and different providers use different options, but the fundamental architecture is fairly consistent. In our next example we’ll move beyond credit cards and show how to use tokenization to protect other private data within your environment.


Friday Summary: August 6th, 2010

I started running when I was 10. I started because my mom was taking a college PE class, so I used to tag along and no one seemed to care. We ran laps three nights a week. I loved doing it, and by twelve I was lapping the field in the 20 minutes allotted. I lived 6 miles from my junior high and high school, so I used to run home. I could have walked, ridden a bike, or taken rides from friends who offered, but I chose to run. I was on the track team and I ran cross country – the latter had us running 10 miles a day before I ran home. And until I discovered weight lifting, and added some 45 lbs of upper body weight, I was pretty fast.

I used to run 6 days a week, every week. Run one evening, next day mid-afternoon, then morning; and repeat the cycle, taking the 7th day off. That way I ran with less than 24 hours rest four days, but it still felt like I got two days off. And I would play all sorts of mental games with myself to keep getting better, and to keep it interesting. Coming off a hill I would see how long I could hold the faster speed on the flat. Running uphill backwards. Going two miles doing that cross-over side step they teach you in martial arts. When I hit a plateau I would take a day and run wind sprints up the steepest local hill I could find. The sandy one. As fast as I could run up, then trot back down, repeating until my legs were too rubbery to feel. Or maybe run speed intervals, trying to get myself in and out of oxygen deprivation several times during the workout. If I was really dragging I would allow myself to go slower, but run with very heavy ‘cross-training’ shoes. That was the worst. I have no idea why, I just wanted to run, and I wanted to push myself.

I used to train with guys who were way faster than me, which was another great way to motivate. We would put obscene amounts of weight on the leg press machine and see how many reps we could do, knee cartilage be damned, to get stronger. We used to jump picnic tables, lengthwise, just to gain explosion. One friend liked to heckle campus security and mall cops just to get them to chase us, because it was fun, but also because being pursued by a guy with a club is highly motivating. But I must admit I did it mainly because there are few things quite as funny as the “oomph-ugghh” sound rent-a-guards make when they hit the fence you just casually hopped over.

For many years after college, while I never really trained to run races or compete at any level, I continued to push myself as much as I could. I liked the way I felt after a run, and I liked the fact that I could eat whatever I wanted … as long as I got a good run in. Over the last couple of years, due to a combination of age and the freakish Arizona summers, all that stopped. Now the battle is just getting out of the house: I play mental games just to get myself out the door to run in 112 degrees. I have one speed, which I affectionately call “granny gear”. I call it that because I go exactly the same speed uphill as I do on the flat: slow. Guys rolling baby strollers pass me. And in some form of karmic revenge I can just picture myself as the mall cop, getting toasted and slamming into the chain link fence because I lack the explosion and leg strength to hop much more than the curb.

But I still love it, as it clears my head and I still feel great afterwards … gasping for air and blotchy red skin notwithstanding. Or at least that is what I am telling myself as I am lacing up my shoes, drinking a whole bunch of water, and looking at the thermometer that reads 112.
Sigh. Time to go … On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian’s Dark Reading post on What You Should Know About Tokenization.
  • Rich’s The Five Things You Need to Know About Social Networking Security, on the Websense blog.
  • Chris’s Beware Bluetooth Keyboards with iOS Devices, starring Mike – belated, as we forgot to include it last time.

Favorite Securosis Posts
  • Rich: NSO Quant: Firewall Management Process Map (UPDATED).
  • Mike Rothman: What Do We Learn at Black Hat/DefCon?
  • Adrian Lane: Incite 8/4/2010: Letters for Everyone.

Other Securosis Posts
  • Tokenization: Use Cases, Part 1.
  • GSM Cell Phones to Be Intercepted in Defcon Demonstration.
  • Tokenization: Series Index.
  • Tokenization: Token Servers, Part 3, Deployment Models.
  • Tokenization: Token Servers, Part 2 (Architecture, Integration, and Management).
  • Death, Irrelevance, and a Pig Roast.

Favorite Outside Posts
  • Mike Rothman: Website Vulnerability Assessments: Good, Fast or Cheap – Pick Two. Great post from Jeremiah on the reality of trade-offs.
  • Adrian Lane: How Microsoft’s Team Approach Improves Security. What is it they say about two drunks holding each other up?
  • David Mortman: Taking Back the DNS. Vixie & ISC plan to build reputation APIs directly into BIND.
  • Rich Mogull: 2010 Data Breach Investigations Report Released. VZ Business continues to raise the bar for data and breach analysis. The 2010 version adds data from the US Secret Service. Cool stuff.
  • Chris Pepper: DefCon Ninja Badges Let Hackers Do Battle. I hope Rich is having fun at DefCon – this sounds pretty good, at least.

Project Quant Posts
  • NSO Quant: Manage Firewall Policy Review Sub-Processes.
  • NSO Quant: Firewall Management Process Map (UPDATED).
  • NSO Quant: Monitor Process Revisited.
  • NSO Quant: Monitoring Health Maintenance Subprocesses.
  • NSO Quant: Validate and Escalate Sub-Processes.
  • NSO Quant: Analyze Sub-Process.
  • NSO Quant: Collect and Store SubProcesses.

Research Reports and Presentations
  • White Paper: Endpoint Security Fundamentals.


Tokenization: Use Cases, Part 1

We have now discussed most of the relevant bits of technology for token server construction and deployment. Armed with that knowledge we can tackle the most important part of the tokenization discussion: use cases. Which model is right for your particular environment? What factors should be considered in the decision? The following three or four use cases cover most of the customer situations we get calls asking for advice on. As PCI compliance is the overwhelming driver for tokenization at this time, our first two use cases will focus on different options for PCI-driven deployments.

Mid-sized Retail Merchant

Our first use case profiles a mid-sized retailer that needs to address PCI compliance requirements. The firm accepts credit cards but sells exclusively on the web, so it does not have to support point of sale terminals. Its focus is meeting PCI compliance requirements, but how best to achieve that goal at reasonable cost is the question. As in many cases, most of the back office systems were designed before credit card storage was regulated, and use the CC# as part of the customer and order identification process. That means that order entry, billing, accounts receivable, customer care, and BI systems all store this number, in addition to the web site’s credit authorization and payment settlement systems.

Credit card information is scattered across many systems, so access control and tight authentication are not enough to address the problem. There are simply too many access points to restrict with any certainty of success, and far too many ways for attackers to compromise one or more systems. Further, some back office systems are accessible by partners for sales promotions and order fulfillment. The security effort will need to embrace almost every back office system, and affect almost every employee. Most of the back office transaction systems have no particular need for credit card numbers – they were simply designed to store and pass the number as a reference value. The handful of systems that employ encryption are transparent, meaning they automatically return decrypted information, and only protect data when stored on disk or tape. Access controls and media encryption are not sufficient to protect the data or meet PCI compliance in this scenario.

While the principal project goal is PCI compliance, as with any business there are strong secondary goals of minimizing total costs, integration challenges, and day-to-day management requirements. Because the obligation is to protect cardholder data and limit the availability of credit cards in clear text, the merchant has a couple of choices: encryption and tokenization. They could implement encryption in each of the application platforms, or they could use a central token server to substitute tokens for PAN data at the time of purchase.

Our recommendation for our theoretical merchant is in-house tokenization. An in-house token server will work with existing applications and provide tokens in lieu of credit card numbers. This will remove PAN data from the servers entirely, with minimal changes to those few platforms that actually use credit cards: accepting them from customers, authorizing charges, clearing, and settlement – everything else will be fine with a non-sensitive token that matches the format of a real credit card number. We recommend a standalone server over one embedded within the applications, as the merchant will need to share tokens across multiple applications.
This makes it easier to segment the users and services authorized to generate tokens from those that actually need real unencrypted credit card numbers. Diagram 1 lays out the architecture. Here’s the structure:

  1. A customer makes a purchase request. If this is a new customer, they send their credit card information over an SSL connection (which should go without saying). For future purchases, only the transaction request need be submitted.
  2. The application server processes the request. If the credit card is new, it uses the tokenization server’s API to send the value and request a new token.
  3. The tokenization server creates the token and stores it with the encrypted credit card number.
  4. The tokenization server returns the token, which is stored in the application database with the rest of the customer information.
  5. The token is then used throughout the merchant’s environment, instead of the real credit card number.
  6. To complete a payment transaction, the application server sends a request to the transaction server.
  7. The transaction server sends the token to the tokenization server, which returns the credit card number.
  8. The transaction information – including the real credit card number – is sent to the payment processor to complete the transaction.

We sketch the two API calls in this flow in code at the end of this post.

While encryption could protect credit card data without tokenization, and could be implemented in such a way as to minimize changes to the UI and database storage of supporting applications, it would require modification of every system that handles credit cards. And a pure encryption solution would require key management services to protect the encryption keys. The deciding factor against encryption here is the cost of retrofitting systems with application layer encryption – especially because several rely on third-party code. The required application changes, changes to operations management and disaster recovery, and the broader key management services required would be far more costly and time-consuming. Recoding applications would be the single largest expenditure, outweighing the investment in encryption or token services.

Sure, the goal is compliance and data security, but ultimately any merchant’s buying decision is heavily affected by cost: for acquisition, maintenance, and management. And for any merchant handling credit cards, as the business grows so does the cost of compliance. Likely the ‘best’ choice will be the one that costs the least money, today and in the long term. In terms of relative security, encryption and tokenization are roughly equivalent. There is no significant cost difference between the two, either for acquisition or operation. But there is a significant difference in the costs of implementation and auditing for compliance.

Next up we’ll look at another customer profile for PCI.
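As promised, here is a minimal sketch of the two integration points in the flow above – the application server requesting a token, and the transaction server redeeming one – assuming a hypothetical in-house token server that exposes a simple REST interface over an authenticated channel. The host, endpoint paths, and field names are illustrative, not any vendor’s actual API:

```python
import requests

TOKEN_SERVER = "https://tokens.internal.example"  # illustrative host

def tokenize(pan: str, session: requests.Session) -> str:
    """Application server side: called once when a new card is accepted.
    The PAN is sent a single time; only the returned token is stored."""
    resp = session.post(f"{TOKEN_SERVER}/v1/tokenize", json={"pan": pan})
    resp.raise_for_status()
    return resp.json()["token"]

def detokenize(token: str, session: requests.Session) -> str:
    """Transaction server side: one of the few services authorized to
    recover the real PAN, for authorization, clearing, and settlement."""
    resp = session.post(f"{TOKEN_SERVER}/v1/detokenize", json={"token": token})
    resp.raise_for_status()
    return resp.json()["pan"]
```

The segmentation discussed above falls out naturally: most systems only ever call tokenize (or nothing at all – they simply store the token), while detokenize is restricted to the transaction server, typically enforced with mutual TLS or per-service credentials on the session.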


Incite 8/4/2010: Letters for Everyone

As I mentioned in the Mailbox Vigil, we don’t put much stock in snail mail anymore. Though we did get a handful of letters from XX1 (oldest daughter) at sleepaway camp, aside from that it’s bills and catalogs. That said, every so often you do get entertained by the mail. A case in point happened when we got back from our summer pilgrimage to the Northern regions this weekend (which is why there was no Incite last week). On arriving home (after a brutal 15 hour car ride, ugh!) we were greeted by a huge box of mail delivered by our trusty postal worker. Given that the Boss was occupied doing about 100 loads of laundry and I had to jump back into work, we let XX1 express her newfound maturity and sort our mail.

It was pretty funny. She called out every single piece and got genuinely excited by some of the catalogs. She got a thank you note from a friend, a letter from another, and even a few of her own letters to us from camp (which didn’t arrive before we left on holiday). XX2 (her twin) got a thank you note also. But nothing for the boy. I could tell he was moping a bit and I hoped something would come his way. Finally he heard the magic words: “Sam got a letter.” Reminded me of Blue’s Clues.

It was from someone with an address at the local mall. Hmmm. But he dutifully cracked it open and had me read it to him. It was from someone at LensCrafters reminding him that it’s been a year since he got his glasses and he’s due for a check-up. He was on the edge of his seat as I read about how many adults have big problems with their eyes and how important it is to get an annual check-up. Guess they didn’t realize the Boy is not yet 7, and also that he sees his Ophthalmologist every 6 weeks. But that didn’t matter – he got a letter. So he’s carrying this letter around all day, like he just got a toy from Santa Claus or the Hanukkah fairy. He made me read it to him about 4 times. Now he thinks the sales person at LensCrafters is his pal. Hopefully he won’t want to invite her to his birthday party.

Normally I would have just thrown out the direct mail piece, but I’m glad we let XX1 sort the mail. The Boy provided me with an afternoon of laughter, and that was certainly worth whatever it cost to send us the piece.

– Mike.

Photo credits: “surprise in the mailbox” originally uploaded by sean dreilinger

Recent Securosis Posts
  • The Cancer within Evidence Based Research Methodologies
  • Friday Summary: July 23, 2010
  • Death, Irrelevance, and a Pig Roast
  • What Do We Learn at Black Hat/DefCon?
  • Tokenization Series: Token Servers
  • Token Servers, Part 2 (Architecture, Integration, and Management)
  • Token Servers, Part 3 (Deployment Models)
  • Various NSO Quant Posts: Monitoring Health Maintenance Subprocesses; Monitor Process Revisited

Incite 4 U

We’re AV products. Who would try to hack us? – More great stuff from Krebs. This time he subjected himself to installing (and reinstalling) AV products in his VM to see which of them actually use Windows anti-exploitation technologies (like DEP and ASLR). The answer? Not many, though it’s good to see Microsoft eating their own dog food. I like the responses from the AV vendors, starting with F-Secure’s “we’ve been working on performance,” which means they are prioritizing not killing your machine over security – go figure. And Panda shows they have ostriches in Spain as well, as they use their own techniques to protect their software. OK, sure. This is indicative of the issues facing secure software.
If the security guys can’t even do it right, we don’t have much hope for everyone else. Sad. – MR

Mid-market basics – She does not blog very often, but when she does, Jennifer Jabbusch gets it right. We here at Securosis are all about simplifying security for end users, and I thought JJ’s recent post on Four Must-Have SMB Security Tools did just that. With all the security pontification about new technologies to supplant firewalls, and how ineffective AV is at detecting bad code, there are a couple tools that are fundamental to data security. As bored as we are talking about them, AV, firewalls, and access controls are the three basics that everyone needs. While I would personally throw in encrypted backups as a must-have, those are the core components. But for many SMB firms, these technologies are the starting point. They are not looking at extrusion prevention, behavioral monitoring, or event correlation – just trying to make sure the front door is locked, both physically and electronically. It’s amazing to think, but I run into companies all the time where an 8-year-old copy of Norton AV and a password on the ‘server’ are the security program. I hope to see more basic posts like this that appeal to the mainstream – and SMB is the mainstream – on Dark Reading and other blogs as well. – AL

Jailbreak with a side of shiv – Are you one of those folks who wants to jailbreak your iPhone to install some free apps on it? Even though it removes some of the most important security controls on the device? Well, have I got a deal for you! Just visit jailbreakme.com and the magical web application will jailbreak your phone right from the browser. Of course any jailbreak is the exploitation of a security vulnerability. And in this case it’s a remotely exploitable browser vulnerability, but don’t worry – I’m sure no bad guys will use it now that it’s public. Who would want to remotely hack the most popular cell phone on the planet? – RM

A pig by a different name – SourceFire recently unveiled Razorback, their latest open source framework. Yeah, that’s some kind of hog or something,


What Do We Learn at Black Hat/DefCon?

Actually, I learned nothing, because I wasn’t there. Total calendar fail on my part, as a family vacation was scheduled during Black Hat week. You know how it goes. The Boss says, “how is the week of July 26 for our week at the beach?” BH is usually in early August, so I didn’t think twice. But much as I missed seeing my peeps and tweeps at Black Hat, a week of R&R wasn’t all bad. Though I was sort of following the Tweeter, and did see the coverage and bloggage of the major sessions. So what did we learn this year?

  • SSL is bad: Our friend RSnake and Josh Sokol showed that SSL ain’t all that. Too bad 99% of the laypeople out there see the lock and figure all is good. Actually, 10% of laypeople know what the lock means. The other 89% wonder how the Estonians made off with their life savings.
  • SCADA systems are porous: OK, I’m being kind. SCADA is a steaming pile of security FAIL. But we already knew that. Thanks to Red Tiger, we now know there are close to 40,000 vulnerabilities in SCADA systems, so we have a number. At least these systems aren’t running anything important, right?
  • Auto-complete is not your friend: As a Mac guy I never really relied on auto-complete, since I can use TextExpander. But lots of folks do, and Big J got big press when he showed it’s bad in Safari, and then proved IE is exposed as well.
  • Facebook spiders: Yes, an enterprising fellow named Ron Bowes realized that most folks have set their Facebook privacy settings, ah, incorrectly. So he was able to download about 100 million names, phone numbers, and email addresses with a Ruby script. Then he had the nerve to put it up on BitTorrent. Information wants to be free, after all. (This wasn’t a session at BH, but cool nonetheless.)
  • ATM jackpot: Barnaby Jack showed once again that he can hit the jackpot at will, since war dialing still works (yay WarGames!), and you can get pretty much anything on the Internet (like a key to open many ATM devices). Anyhow, great demo, and I’m sure organized crime is very interested in those attack vectors.
  • I can haz your cell tower: Chris Paget showed how he could spoof a cell tower for $1,500. And we thought the WiFi Evil Twin was bad. This is cool stuff.

I could probably go on for a week, since all the smart kids go to Vegas in the summer to show how smart they are. And to be clear, they are smart. But do you, Mr. or Ms. Security Practitioner, care about these attacks and this research? The answer is yes. And no. First of all, you can see the future at Black Hat. Most of the research is not weaponized, and a good portion of it isn’t really feasible to weaponize. An increasing amount is attack-ready, but for the most part you get to see what will be important at some point in the future. Maybe. For that reason, at least paying attention to the research is important.

But tactically, what happens in Vegas is unlikely to have any impact on day-to-day operations any time soon. Note that I used the word ‘tactical’, because most of us spend our days fighting fires and get precious few minutes a day – if any – to think strategically about what we need to do tomorrow. Forget about thinking about how to protect against attacks discussed at Black Hat. That’s probably somewhere around 17,502 on the To-Do list. Of course, if your ethical compass is a bit misdirected, or your revenues need to be laundered through 5 banks in 3 countries before the funds hit your account, then the future is now and Black Hat is your business plan for the next few years. But that’s another story for another day.


Tokenization: Token Servers, Part 3, Deployment Models

We have covered the internals of token servers and talked about architecture and integration of token services. Now we need to look at some of the different deployment models and how they match up to different types of businesses. Protecting medical records in multi-company environments is a very different challenge than processing credit cards for thousands of merchants.

Central Token Server

The most common deployment model we see today is a single token server that sits between application servers and the back-end transaction servers. The token server issues one or more tokens for each instance of sensitive information that it receives. For most applications it becomes a reference library, storing sensitive information within a repository and providing an index back to the real data as needed. The token service is placed in line with existing transaction systems, adding a new substitution step between business applications and back-end data processing.

As mentioned in previous posts, this model is excellent for security, as it consolidates all the credit card data into a single highly secure server; additionally, it is very simple to deploy, as all services reside in a single location. And limiting the number of locations where sensitive data is stored and accessed both improves security and reduces auditing, as there are fewer systems to review.

A central token server works well for small businesses with consolidated operations, but does not scale well for larger distributed organizations. Nor does it provide the reliability and uptime demanded by always-on Internet businesses. For example:

  • Latency: The creation of a new token, lookup of existing customers, and data integrity checks are computationally complex. Most vendors have worked hard to alleviate this problem, but some still have latency issues that make them inappropriate for financial/point of sale usage.
  • Failover: If the central token server breaks down, or is unavailable because of a network outage, all processing of sensitive data (such as orders) stops. Back-end processes that require tokens halt.
  • Geography: Remote offices, especially those in remote geographic locations, suffer from network latency, routing issues, and Internet outages. Remote token lookups are slow, and both business applications and back-end processes suffer disproportionately in the event of disaster or prolonged network outages.

To overcome issues in performance, failover, and network communications, several other deployment variations are available from tokenization vendors.

Distributed Token Servers

With distributed token servers, the token databases are copied and shared among multiple sites. Each site has a copy of the tokens and encrypted data. In this model, each site is a peer of the others, with full functionality. This model solves some of the performance issues with network latency for token lookup, as well as failover concerns. Since each token server is a mirror, if any single token server goes down, the others can share its load. Token generation overhead is mitigated, as multiple servers assist in token generation, and distribution of requests balances the load. Distributed servers are costly, but appropriate for financial transaction processing.

While this model offers the best option for uptime and performance, synchronization between servers requires careful consideration.
Multiple copies mean synchronization issues and carefully timed updates of data between locations, along with key management so encrypted credit card numbers can be accessed. Finally, with multiple databases all serving tokens, the number of repositories that must be secured, maintained, and audited increases substantially.

Partitioned Token Servers

In a partitioned deployment, a single token server is designated as ‘active’, and one or more additional token servers are ‘passive’ backups. In this model, if the active server crashes or is unavailable, a passive server becomes active until the primary connection can be re-established. The partitioned model improves on the central model by replicating the (single, primary) server configuration. These replicas are normally at the same location as the primary, but they may also be distributed to other locations. This differs from the distributed model in that only one server is active at a time, and the servers are not all peers of one another.

Conceptually, partitioned servers also support a hybrid model where each server is active and used by a particular subset of endpoints and transaction servers, as well as serving as a backup for other token servers. In this case each token server is assigned a primary responsibility, but can take on secondary roles if another token server goes down. While the option exists, we are unaware of any customers using it today.

The partitioned model solves failover issues: if a token server fails, the passive server takes over (sketched in code at the end of this post). Synchronization is easier with this model, as the passive server need only mirror the active server, and bi-directional synchronization is not required. Token servers leverage the mirroring capabilities built into relational database engines, as part of their back ends, to provide this capability.

Next we will move on to use cases.
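Before we do, here is a minimal sketch of what the partitioned model’s failover looks like from a client’s perspective: try the active server first, and if it is unreachable, promote the next passive replica. The class, hosts, and endpoint are hypothetical illustrations, not any vendor’s client library:

```python
import requests

class PartitionedTokenClient:
    """Tries the active token server first; on failure, fails over to
    the next passive replica and treats it as the new active server."""

    def __init__(self, servers: list[str]):
        self.servers = list(servers)  # index 0 is the active server

    def tokenize(self, pan: str) -> str:
        last_error = None
        for i, server in enumerate(self.servers):
            try:
                resp = requests.post(f"{server}/v1/tokenize",
                                     json={"pan": pan}, timeout=2)
                resp.raise_for_status()
                if i > 0:
                    # A passive replica answered: promote it to active so
                    # later calls don't pay the failover penalty again.
                    self.servers.insert(0, self.servers.pop(i))
                return resp.json()["token"]
            except requests.RequestException as err:
                last_error = err  # fall through to the next replica
        raise RuntimeError("no token server reachable") from last_error

client = PartitionedTokenClient([
    "https://tokens-a.internal.example",  # active
    "https://tokens-b.internal.example",  # passive mirror
])
```

Because only one server is active at a time, the client never has to worry about two servers handing out conflicting tokens – the trade-off, as noted above, is that the passive replicas contribute nothing to throughput.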


Tokenization: Series Index

Understanding and Selecting a Tokenization Solution:
  • Introduction
  • Business Justification
  • Token System Basics
  • The Tokens
  • Token Servers, Part 1, Internal Functions
  • Token Servers, Part 2, Architecture and Integration
  • Token Servers, Part 3, Deployment Models
  • Tokenization: Use Cases, Part 1
  • Tokenization: Use Cases, Part 2
  • Tokenization: Use Cases, Part 3
  • Tokenization Topic Roundup
  • Tokenization: Selection Process


GSM Cell Phones to Be Intercepted in Defcon Demonstration

This hit Slashdot today, and I expect the mainstream press to pick it up fairly soon. Chris Paget will be intercepting cell phone communications at Defcon during a live demonstration. I suspect this may be the single most spectacular presentation during all of this year’s Defcon and Black Hat. Yes, people will be cracking SCADA and jackpotting ATMs, but nothing strikes closer to the heart than showing major insecurities in the single most influential piece of technology in society. Globally, I think cell phones are even more important than television.

Chris is taking some major precautions to stay out of jail. He’s working hand in hand with the Electronic Frontier Foundation on the legal side, there will be plenty of warnings on-site, and no information from any calls will be recorded or stored. I suspect he’s setting up a microcell under his control and intercepting communications in a man-in-the-middle attack, but we’ll have to wait until his demo to get all the details.

For years the mobile phone companies have said this kind of interception is impractical or impossible. I guess we’ll all find out this weekend…


Friday Summary: July 23, 2010

A couple weeks ago I was sitting on the edge of the hotel bed in Boulder, Colorado, watching the immaculate television. A US-made 30” CRT television in “standard definition”. That’s cathode ray tube for those who don’t remember, and ‘standard’ is the marketing term for ‘low’. This thing was freaking horrible, yet it was perfect. The color was correct. And while the contrast ratio was not great, it was not terrible either. Then it dawned on me that the problem was not the picture, as this is the quality we used to get from televisions. Viewing an old set, operating exactly the same way they always did, I knew the problem was me. High def has so much more information, but the experience of watching the game is the same now as it was then.

It hit me just how much our brains were filling in missing information, and we did not mind this sort of performance 10 years ago because it was the best available. We did not really see the names on the backs of football jerseys during those Sunday games, we just thought we did. Heck, we probably did not often make out the numbers either, but somehow we knew who was who. We knew where our favorite players on the field were, and the red streak on the bottom of the screen pounding a blue colored blob must be number 42. Our brains filled in and sharpened the picture for us.

Rich and I had been discussing experience bias, recency bias, and cognitive dissonance during our trip to Denver. We were talking about our recent survey and how to interpret the numbers without falling into bias traps. It was an interesting discussion of how people detect patterns, but like many of our conversations it devolved into how political and religious convictions can cloud judgement. But not until I was sitting there, watching television in the hotel, did I realize how much our prior experiences and knowledge shape perception, derived value, and interpreted results. Mostly for the good, but unquestionably some bad.

Rich also sent me a link to a Michael Shermer video just after that, in which Shermer discusses patterns and self-deception. You can watch the video and say “sure, I see patterns, and sometimes what I see is not there”, but I don’t think videos like this demonstrate how pervasive this built-in feature is, and how it applies to every situation we find ourselves in. The television example of this phenomenon was more shocking than some others that have popped into my head since. I have been investing in and listening to high-end audio products such as headphones for years. But I never think about the illusion of a ‘soundstage’ right in front of me, I just think of it as being there. I know the guitar player is on the right edge of the stage, and the drummer is in the back, slightly to the left. I can clearly hear the singer when she turns her head to look at fellow band members during the song. None of that is really in front of me, but there is something in the bits of the digital facsimile on my hard drive that lets my brain recognize all these things, placing the scene right there in front of me. I guess the hard part is recognizing when and how it alters our perception.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich quoted in “Apple in a bind over its DNS patch”.
  • Adrian’s Dark Reading post on SIEM ain’t DAM.
  • Rich and Martin on Network Security Podcast #206.

Favorite Securosis Posts
  • Rich: Pricing Cyber-Policies. As we used to say at Gartner, all a ‘cybersecurity’ policy buys you is a seat at the arbitration table.
  • Mike Rothman: The Cancer within Evidence Based Research Methodologies. We all need to share data more frequently and effectively. This is why.
  • Adrian Lane: FireStarter: an Encrypted Value Is Not a Token! Bummer.

Other Securosis Posts
  • Tokenization: Token Servers.
  • Incite 7/20/2010: Visiting Day.
  • Tokenization: The Tokens.
  • Comments on Visa’s Tokenization Best Practices.
  • Friday Summary: July 15, 2010.

Favorite Outside Posts
  • Rich: Successful Evidence-Based Risk Management: The Value of a Great CSIRT. I realize I did an entire blog post based on this, but it really is a must-read by Alex Hutton. We’re basically a bunch of blind mice building 2-lego-high walls until we start gathering, and sharing, information on which of our security initiatives really work and when.
  • Mike Rothman: Understanding the advanced persistent threat. Bejtlich’s piece on APT in SearchSecurity is a good overview of the term, and how it’s gotten fsked by security marketing.
  • Adrian Lane: Security rule No. 1: Assume you’re hacked.

Project Quant Posts
  • NSO Quant: Monitor Process Revisited.
  • NSO Quant: Monitoring Health Maintenance Subprocesses.
  • NSO Quant: Validate and Escalate Subprocesses.
  • NSO Quant: Analyze Subprocess.
  • NSO Quant: Collect and Store Subprocesses.
  • NSO Quant: Define Policies Subprocess.
  • NSO Quant: Enumerate and Scope Subprocesses.

Research Reports and Presentations
  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.
  • Report: Database Assessment.
  • Database Audit Events.
  • XML Security Overview Presentation.
  • Project Quant Survey Results and Analysis.
  • Project Quant Metrics Model Report.

Top News and Posts
  • Researchers: Authentication crack could affect millions.
  • SCADA System’s Hard-Coded Password Circulated Online for Years.
  • Microsoft Launches ‘Coordinated’ Vulnerability Disclosure Program.
  • GSM Cracking Software Released.
  • How Mass SQL Injection Attacks Became an Epidemic.
  • Harsh Words for Professional Infosec Certification.
  • Google re-ups the disclosure debate. A new policy – 60 days to fix critical bugs or they disclose. I wonder if anyone asked the end users what they want?
  • Adobe Reader enabling protected mode. This is a very major development… if it works. Also curious to see what they do for Macs.
  • Oracle to release 59 critical patches in security update. Is it just me, or do they have more security patches than bug fixes nowadays?
  • Connecticut AG reaches agreement with


Death, Irrelevance, and a Pig Roast

There is nothing like a good old-fashioned mud-slinging battle. As long as you aren’t the one covered in mud, that is. I read about the Death of Snort and started laughing. The first thing they teach you in marketing school is that when no one knows who you are, go up to the biggest guy in the room and kick him in the nuts. You’ll get your ass kicked, but at least everyone will know who you are. That’s exactly what the folks at OISF (who drive the Suricata project) did, and they got Ellen Messmer of NetworkWorld to bite on it. Then she got Marty Roesch to fuel the fire, and the end result is much more airtime than Suricata deserves. Not that it isn’t interesting technology, but to say it’s going to displace Snort any time soon is crap. To go out with a story about Snort being dead is disingenuous. But given the need to drive page views, the folks at NWW were more than willing to provide airtime. Suricata uses Snort signatures (for the most part) to drive its rule base. They’d better hope it’s not dead.

But it brings up a larger issue of when a technology really is dead. In reality, there are few examples of products really dying. If you ended up with some ConSentry gear, then you know the pain of product death. But most products are around ad infinitum, even if they aren’t evolved. So those products aren’t really dead, they just become irrelevant. Take Cisco MARS as an example. Cisco isn’t killing it, it’s just not being used as a multi-purpose SIEM, which is how it was positioned for years. Irrelevant in the SIEM discussion, yes. Dead, no.

Ultimately, competition is good. Suricata will likely push the Snort team to advance their technology faster than in the absence of an alternative. But it’s a bit early to load Snort onto the barbie – even if it is the other white meat. Yet it usually gets back to the reality that you can’t believe everything you read. Actually, you probably shouldn’t believe much that you read. Except our stuff, of course.

Photo credit: “Roasted pig (large)” originally uploaded by vnoel


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.