
Mobile Identity—WTF?

Identity management on mobile devices: how do we do it? I have been taking a lot of calls on mobile identity issues and solutions over the last three months, and I am just as confused now as when I started looking into this subject. And I think the vendors I have spoken with are reaching in their assessments of the right course of action and of where the market is heading. If you want to implement identity on a mobile device, what do you do?

Option 1, Crawl: Use a mobile browser and capture user names and passwords just like we do on the desktop. But mobile browsers kinda suck. People don’t want to use them, and they suffer many of the same security problems we have had for a decade (see the OWASP Top 10).

Option 2, Toddle: Augment with OAuth tokens (a minimal sketch appears at the end of this post). Is OAuth 2.0 even a standard? And what about the security issues of encryption, digital signatures, and bi-directional verification of trust?

Option 3, Walk: Adopt the ‘App’ model and create an IAM app which handles all the complicated identity stuff on your behalf. How does that app cooperate with other apps? How do we deal with personal and corporate personas? How do we know the user is who they are supposed to be, and not a random person who found your phone?

Option 4, Run: Use special features of the mobile platform, such as voice recognition on phones, or cameras for facial recognition. Will that work when I am on the subway or in Starbucks? Does Joe User want that – enough to pay for it – or will they see such things as privacy violations?

These are the options I am hearing about, and none of them seem fully thought out. And once we get past Toddle, who’s the buyer? Seeking wisdom, I scaled the mountain to discuss the topic with Securosis’s IAM guru, Gunnar Peterson. What I got was: “Mobile Identity? Ooohhh – it’s early days and it’s an unholy mess.” Yes, that pretty much summed it up. Gunnar agreed that this is the current progression, and that identity definitely gets ‘stronger’ with each step outlined, but it also gets much more complicated. Do you think I am over-reacting? Did I miss anything that concerns you? This is a topic we will dive into over the coming weeks, so I would like to hear from the community.
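To make Option 2 a bit more concrete, here is a minimal sketch (in Python) of the OAuth 2.0 authorization code exchange a mobile app performs so the user’s password never needs to live on the device. The identity provider endpoint, client details, and redirect URI are hypothetical placeholders rather than a recommendation of any particular product or service.

```python
# Hypothetical sketch of the OAuth 2.0 authorization code exchange a mobile
# app performs after the user authenticates in a browser window. The endpoint,
# client_id, and redirect_uri are placeholders, not real services.
import requests

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # hypothetical identity provider

def exchange_code_for_tokens(auth_code: str, client_id: str, redirect_uri: str) -> dict:
    """Swap a one-time authorization code for access and refresh tokens."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": client_id,
            "redirect_uri": redirect_uri,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Typical response fields: access_token, refresh_token, expires_in, token_type
    return resp.json()
```

The app then presents the short-lived access token with each API call and uses the refresh token to obtain new ones. That is the main appeal of the ‘Toddle’ step, whatever its other warts: credentials stay with the identity provider, not the handset.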


Friday Summary: January 11, 2013

Tina Slankas presented at the Phoenix ISSA chapter this week on the use of patterns for building security programs – slides can be downloaded here (PDF). The thrust of her idea was to use patterns – think design patterns if you like – for putting together control frameworks to define security efforts. Tina stated she was using the definition of ‘pattern’ in a very broad way, but the essence was reusable constructs for managing different aspects of enterprise security. For example: how identity management will function at a high level, and how it will fit with other systems. For a software developer or architect, patterns are invaluable for object-oriented programming, helping model complex ideas as a collection of simple patterns.

To be honest, I abandoned the idea of secure design patterns for software architecture pretty much when I first got involved with security. I could not articulate security into the patterns, be they behavioral or structural. Maybe that was just my lack of skill at the time, but it felt like the complexities of how to secure code were beyond pattern descriptions. What was compromised was not as interesting as how it was compromised, and it usually turned out to be a process or protocol that got abused. It was the bits flowing between different patterns, or the ones left undefined, that I worried about. Trust relationships. Assumptions. Identity. Avoiding things like replay attacks. Repudiation. The problem space felt process-oriented, not object-oriented.

But in terms of a control or management framework for IT systems, reusable patterns are an interesting idea. They help with consistency across multiple sites and deployments. They offer a layer of abstraction – you don’t care whether the problem is solved by a firewall, a WAF, or DLP, so long as the required controls are in place and meet the requirements. You could represent the entire PCI specification as a set of patterns (see the sketch at the end of this summary). Unless you have a huge infrastructure to manage, I’m not clear how practical this is – but I am interested in the idea of security patterns. I remain skeptical of their value for secure code development, but I see their value for security program management.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Rich’s TidBITS post: Do You Need Mac Antivirus Software in 2013? Gunnar’s Dark Reading post: What Is It You Would Say That You Do Here? Adrian’s Dark Reading Post on DB Threats and Countermeasures.

Securosis Posts
$50K buys how much FDE? Java Sucks. Again. Most Consumers Don’t Need Mac AV. Integration vs. Segregation. DDoS: Distributed, but not evenly. Incite 1/9/2013: Never Lost. Detection vs. Protection and the Game of Words. ENISA BYOD FTW. Pwn Ur Cisco Phone. Understanding Identity Management for Cloud Service: The Solution Space. Prove It to Use It. Bored? Set up your own CA. Internet Explorer 8 0-Day Bypasses Patch.

Favorite Outside Posts
Adrian Lane: Hardening Sprints. What are they? Do you need them? I’m a big fan of the occasional hardening sprint to let each developer fix one thing that bugs them, to pull stuff out of the security bucket list, or to otherwise do quality control. James Arlen: Nather’s Law of Policy Management. Mike Rothman: State sponsored attack: a howto guide. For a change, Rob Graham is lampooning the prevailing wisdom. He’s very good at that.

Project Quant Posts
Malware Analysis Quant: Index of Posts. Malware Analysis Quant: Metrics – Monitor for Reinfection. Malware Analysis Quant: Metrics – Remediate.
Malware Analysis Quant: Metrics – Find Infected Devices. Malware Analysis Quant: Metrics – Define Rules and Search Queries. Malware Analysis Quant: Metrics – The Malware Profile. Malware Analysis Quant: Metrics – Dynamic Analysis.

Research Reports and Presentations
Implementing and Managing Patch and Configuration Management. Defending Against Denial of Service (DoS) Attacks. Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments. Tokenization vs. Encryption: Options for Compliance. Pragmatic Key Management for Data Encryption. The Endpoint Security Management Buyer’s Guide. Pragmatic WAF Management: Giving Web Apps a Fighting Chance. Understanding and Selecting Data Masking Solutions.

Top News and Posts
Adobe fixes Flash Player and Microsoft patches IE 10 to update its built-in version. Under the hood of the cyber attack on the U.S. Banks. Facebook, Yahoo Fix Valuable $ecurity Hole$. Zero-Day Java Exploit Debuts in Crimeware. Does Your Alarm Have a Default Duress Code? How PCI Standards Will Really Die. Enhancing Certificate Security.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Bert Knabe, in response to Prove It to Use It.

You mean you don’t believe it?! It’s from a government official! They never lie!
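Circling back to the security patterns discussion at the top of this summary, here is a minimal sketch, with entirely hypothetical names, of what a product-agnostic pattern might look like when expressed as data. The point is that the pattern specifies required controls and lets you check coverage, without caring whether a firewall, WAF, or DLP ultimately satisfies each one.

```python
# Purely illustrative: a security 'pattern' as a reusable, product-agnostic
# construct. The pattern names required controls; a firewall, WAF, DLP, or
# process can satisfy them. All names below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SecurityPattern:
    name: str
    required_controls: set
    implementations: dict = field(default_factory=dict)  # control -> whatever satisfies it

    def gaps(self) -> set:
        """Controls the pattern requires that nothing currently provides."""
        return self.required_controls - set(self.implementations)

web_ingress = SecurityPattern(
    name="Internet-facing web application",
    required_controls={"input filtering", "transport encryption", "activity logging"},
    implementations={"input filtering": "WAF", "transport encryption": "TLS at the load balancer"},
)

print(web_ingress.gaps())  # prints {'activity logging'}: the control nothing currently provides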


Understanding Identity Management for Cloud Service: The Solution Space

Adrian and Gunnar here: After spending a few weeks getting updates from Identity and Access Management (IAM) service vendors – as well as a couple weeks for winter break – we have gathered the research we need to delve into the meat of our series on Understanding and Selecting Identity Management for Cloud Services. Our introductory post outlined the topics we will cover. This series is intended as a market overview, taking a broad look at issues you need to consider when evaluating cloud-based identity support systems. The intro hinted at the reasons cloud computing models force change in our approaches to access control, but today’s post will flesh out the problems of cloud IAM.

The cloud excels at providing enterprises with apps and data. But what about identity information? Companies face issues trying to retain control of identity management while taking advantage of the cloud. The goal is to unify identity management for internal and external users across both traditional IT and third-party cloud services. It is possible to manage user access to cloud computing resources in-house, but the architecture must take integration complexity and management costs into account. Most organizations – particularly enterprises – find these inconveniences outweigh the benefits. For many of the same reasons companies adopt cloud computing services instead of in-house services – on-demand service, elasticity, broad network access, reduction in capital expenditures, and total cost – they also leverage third-party cloud services to manage identity and access.

Managing identity was a lot simpler when the client-server computing model was the norm, and users were mostly limited to a desktop PC with another set of credentials to access a handful of servers: set up the ACLs, sprinkle on some roles, and voila! But as servers and applications multiplied, the “endpoint” shifted from fixed desktops to remote devices, and servers were integrated with other server domains – never mind ACLs and roles; what realm are we in? – so we used directory services to provide a single identity management repository and help propagate identity across the enterprise. Now we have an explosion of external service providers: financial applications, cloud storage, social media, workflow, CRM, email, collaboration, and web conferencing, to name a few. These ‘extra-enterprise’ services are business critical, but they don’t directly link into traditional directory services.

Cloud computing services turn identity management on its ear. The big shift comes in three main parts: IT no longer owns the servers and applications the organization relies upon, provider capabilities are not fully compatible with existing internal systems, and the ways users consume cloud services have changed radically. In fact an employee may consume corporate cloud services without ever touching in-house IT systems. Just about every enterprise uses Software as a Service (SaaS), and many use Platform and Infrastructure as a Service (PaaS and IaaS, respectively) as well – each with its own approach to Identity and Access Management. Extending traditional corporate identity services outside the corporate environment is not a trivial effort – it requires integration of existing IAM systems with the cloud service provider(s). Most companies rely on dozens of cloud service providers, each with a different set of identity and authorization capabilities, as well as different programmatic and web interfaces.
The time, effort, and cost to develop and maintain links with each service provider can be overwhelming.

Cloud Identity Solutions

Ideally we want to extend the existing in-house identity management capabilities to third-party systems, minimizing the work for IT management while delivering services to end users with minimal disruption. And we would like to maintain control over user access – adding and removing users as needed, and propagating new authorization policies without significant latency. We also want to collect information on access and policy status that helps us meet security and compliance requirements. And rather than build a custom bridge to each and every third-party service, we would like a simple management interface that extends our controls and policies to the various third-party services. Features and benefits common to most cloud identity and access management systems include:

Authentication, Single Sign-on (SSO): One of the core services is the ability to authenticate users based on provided credentials, and then allow each user to access multiple (internal and external) services without having to repeatedly supply credentials to each service. Offering SSO to users is, of course, just about the only time anyone is happy to see the security team show up – make the most of it!

Identity Federation: Federated identity is where identity and authorization settings are collected from multiple identity management systems, enabling different systems to define user capabilities and access. Identity and authorization are a shared responsibility across multiple authoritative sources. Federated identity is a superset of authentication and single sign-on. Federation made headway as a conveyance engine for SSO and Web Services. Its uptake in the cloud has been substantial because its core architecture helps companies navigate one of the thornier cloud issues: retaining in-house control of user accounts while leveraging cloud apps and data.

Granular authorization controls: Access is typically not an ‘all-or-nothing’ proposition – each user is allowed access to a subset of the functions and data stored in the cloud. Authorization maps instruct applications which resources to provide to each user. How much control you have over each user’s access depends both on the capabilities of the cloud service provider and on the capabilities of the IAM system. The larger industry trends – in authorization in general and the cloud specifically – are a focus on finer-grained access control, and on removing access policy from code as much as possible. In a nutshell, roles are necessary but not sufficient for authorization – you need attributes too. You also do not want to spelunk through millions of lines of code to define, review, change, or audit them, so they should be configurable and data driven (see the sketch at the end of this post).

Administration: User administrators generally prefer a single management pane for administering users and managing identity across multiple services. The goal of most cloud IAM systems is to do just that, but they
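To illustrate the ‘roles plus attributes, configurable and data driven’ point above, here is a minimal sketch of an attribute-based authorization check. The rules, attribute names, and resources are all hypothetical; the takeaway is that the access decision is driven by policy data you can review and change without digging through application code.

```python
# Minimal sketch of data-driven, attribute-based authorization: policy lives
# in configuration, not code. Roles gate each rule; attributes refine it.
# All rules, attributes, and resources below are illustrative only.

POLICY = [
    # (required role, condition over user and resource attributes)
    ("analyst", lambda user, res: res["sensitivity"] <= user["clearance"]),
    ("manager", lambda user, res: res["department"] == user["department"]),
]

def is_authorized(user: dict, resource: dict, action: str) -> bool:
    """Allow the action if any policy rule matches the user's role and attributes."""
    if action not in resource.get("allowed_actions", set()):
        return False
    return any(
        role in user["roles"] and condition(user, resource)
        for role, condition in POLICY
    )

alice = {"roles": {"analyst"}, "clearance": 2, "department": "finance"}
report = {"sensitivity": 1, "department": "finance", "allowed_actions": {"read"}}

print(is_authorized(alice, report, "read"))    # True
print(is_authorized(alice, report, "delete"))  # False: 'delete' is not permitted on this resource
```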


Friday Summary: 2012 Year End Wrap

It’s the holiday season, people are leaving for vacation, and most people have things other than security on their minds – including me – so I’ll keep today’s Friday Summary short. It’s time to reflect on the successes – and failures – of the past year. For the most part 2012 has been a good year for the Securosis Team: Rich, Mike, Dave Mortman, Dave Lewis, Jamie Arlen, Gunnar Peterson, and I have all been active with research, conference presentations, webcasts, consulting projects, and engaging the community at large. We did more deep-dive research projects than we have ever done before. And more importantly, despite the huge amount of work, it remains fun. I believe everyone on the Securosis team loves working in the field of IT Security, and despite having seen the ugly underbelly of the profession, it is simply one of the most interesting and challenging fields imaginable. Our candid internal research debates are a genuine treat; and chat discussions are simultaneously illuminating, depressing, funny, and rewarding. I am really lucky to work with such a great team and have so many interesting projects to work on! Sure, I could do without the daily baths in vendor FUD, but I just can’t imagine doing anything else. Well, perhaps I can imagine running off to be a roadie for an AC/DC world tour, but then reality sets in and I come back here. For those of you who are wondering what we will be up to in the New Year, we will publish a full research calendar in the coming weeks. We have several projects starting on endpoint security, web applications, mobile, and cloud IAM. And several of us will be presenting at conferences early next year – notably the Phoenix ISSA meeting January 8 and the Open Group conference in Newport Beach, California on the 28th of January. This is the last Friday Summary of 2012. I would like to thank all of you who read and participate on the blog, chat with us on Twitter, and help with our Totally Transparent Research process – open public debate over content makes the research better. Happy Holidays! –Adrian On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Adrian’s Dark Reading Post on DB Threats and Countermeasures. Rich’s excellent TidBITS post on Apple’s Security Efforts in 2012. Securosis Posts Incite 12/19/2012: Celebration. Friday Summary: December 13, 2012 – You, Me, and Twitter. The CloudSec Chicken or the DevOps Egg? Favorite Outside Posts Mike Rothman: Why Collect Full Content Data? Adrian Lane: On Puppy Farm Vendors, Petco and The Remarkable Analog To Security … Nobody works a theme like Chris. I’m just wondering which part he would play in a remake of “Best In Show”? Project Quant Posts Malware Analysis Quant: Index of Posts. Malware Analysis Quant: Metrics – Monitor for Reinfection. Malware Analysis Quant: Metrics – Remediate. Malware Analysis Quant: Metrics – Find Infected Devices. Malware Analysis Quant: Metrics – Define Rules and Search Queries. Malware Analysis Quant: Metrics – The Malware Profile. Malware Analysis Quant: Metrics – Dynamic Analysis. Research Reports and Presentations Implementing and Managing Patch and Configuration Management. Defending Against Denial of Service (DoS) Attacks. Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments. Tokenization vs. Encryption: Options for Compliance. Pragmatic Key Management for Data Encryption. The Endpoint Security Management Buyer’s Guide. Pragmatic WAF Management: Giving Web Apps a Fighting Chance. 
Understanding and Selecting Data Masking Solutions. Top News and Posts Dell acquires Credant Technologies. Senator introduces bill to regulate data caps. Rebooting Security Engagement at Mozilla. Adobe hasn’t yet fixed Critical Shockwave vulnerability reported in 2010. Point-of-Sale Skimmers: No Charge…Yet. CSRF Protection for Public Functionality. Delta Air Lines publishes privacy policy, but researcher finds a fault. Living with HTTPS. An HSTS discussion – from July apparently, but interesting. Hosting Antagonist automatically fixes vulnerabilities in customers’ websites. Blog Comment of the Week Securosis makes a $25 donation to Hackers for Charity. While there were no comments this holiday week, we have raised a sizable sum, which we will be donating this week. Thanks again for all your comments!


Can we effectively monitor big data?

During the big data research project I found myself thinking about how I would secure a NoSQL database if I were responsible for a cluster. One area I can’t help thinking about is Database Activity Monitoring: how would I implement a solution for a big data cluster? The only currently available solution I am aware of is very limited in what it provides, and I think the situation will stay that way for a long time. The ways to collect data from big data clusters, and to deploy monitoring, are straightforward. But analyzing queries will remain a significant engineering challenge. NoSQL tasks are processed very differently than on relational platforms, and the information at your disposal is significantly less.

First some background: with Database Activity Monitoring, you judge a user’s behavior by looking at the queries they send to the database. There are two basic analysis techniques for relational databases: examine the metadata associated with relational database queries, or examine the structure and content of the queries themselves. The original and most common method is metadata examination – we look at data including user identity, time of day, origin location of the query, and origin application of the query. Just as importantly, we examine which objects are requested – such as column definitions – to see if a user may be requesting sensitive data. We might even look at frequency of queries or quantity of data returned. All these data points can indicate system misuse.

The second method is to examine the query structure and the variables provided by the user. There are specific indicators in the where clause of a relational query that can indicate SQL injection or logic attacks on the database. There are specific patterns, such as “1=1”, designed to confuse the query parser into automatically taking action. There are content ‘fingerprints’, such as social security number formats, which indicate sensitive data. And there are adjustments to the from clause, or even usage of optional query elements, designed to mask attacks from the Database Activity Monitor. But the point is that relational query grammars are known, finite, and fully cataloged. It’s easy for databases and monitors to validate structure, and then by proxy user intent.

With big data tasks – most often MapReduce – it’s not quite so easy. MapReduce is a means of distributing a query across many nodes, and reassembling the results from each node. These tasks look a lot more like code than structured relational queries. But it gets worse: the query model could be text search, an XPath XML parser, or SPARQL, so a monitor would need to parse very different query types. Unfortunately we don’t necessarily know the data storage model of the database, which complicates things. Is it graph data, tuple-store, quasi-relational, or document storage? We get no hints from the selection’s structure or data type, because in a non-relational database that data is not easily accessible. There is no system table to quickly consult for table and column types. Additionally, the rate at which data moves in and out of the cluster makes dynamic content inspection infeasible. We don’t know the database storage structure, and cannot even count on knowing the query model without some inspection and analysis. And – I really hate to say this because the term is so overused and abused – understanding the intention of a MapReduce task is a halting problem: it is at least difficult, and perhaps impossible, to dynamically determine whether it is malicious.
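To ground that contrast, here is a minimal sketch of the metadata and query-pattern checks described above, as they might look for a relational system. This is precisely the sort of inspection that has no clean equivalent for a MapReduce job; all rules, thresholds, and application names are made-up examples, not a production rule set.

```python
# Illustrative sketch of relational DAM-style inspection: metadata checks plus
# simple query-pattern checks. Rules and thresholds are hypothetical examples.
import re
from datetime import datetime, time

TAUTOLOGY = re.compile(r"\b(\w+)\s*=\s*\1\b")          # catches 1=1-style clauses
SSN_FINGERPRINT = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # content 'fingerprint' for SSNs

def flag_query(event: dict) -> list:
    """Return the reasons, if any, a single query event looks suspicious."""
    reasons = []
    # Metadata examination: who, when, from where, and how much
    if not time(6, 0) <= event["timestamp"].time() <= time(22, 0):
        reasons.append("query outside business hours")
    if event["rows_returned"] > 10_000:
        reasons.append("unusually large result set")
    if event["source_app"] not in {"erp", "reporting"}:
        reasons.append("unknown origin application")
    # Query structure and content examination
    if TAUTOLOGY.search(event["query_text"]):
        reasons.append("tautology in where clause (possible SQL injection)")
    if SSN_FINGERPRINT.search(event["query_text"]):
        reasons.append("sensitive data pattern in query literals")
    return reasons

event = {"timestamp": datetime(2013, 1, 5, 23, 40), "rows_returned": 50000,
         "source_app": "unknown", "query_text": "SELECT ssn FROM customers WHERE 1=1"}
print(flag_query(event))  # four reasons flagged for this event
```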
So where does that leave us? I suspect that Database Activity Monitoring for NoSQL databases cannot be as effective as relational database monitoring for a very long time. I expect solutions to work purely by analyzing available metadata for the foreseeable future, and to restrict themselves to cookie-cutter MapReduce/YARN deployments in Hadoop environments. I imagine that query analysis engines will need to learn their target database (deployment, data storage scheme, and query type) and adapt to the platform, which will take vendors several cycles to get right. I expect it to be a very long time before we see truly useful systems – both because of the engineering difficulty and because of the diversity of available platforms. I wish I could say that I have seen innovative new approaches to this problem, and that they are just over the horizon, but I have not. With so many customers using these systems and pumping tons of information into them – much of it sensitive – demand for security will come. And based on what’s available today, I expect offerings to lean heavily toward logging tools and WAF. That’s my opinion.


Friday Summary: November 29, 2012

When I visit the homes of friends who are Formula One fans on race day, I am amazed at how fanatical they are – worse than NFL and college football fans. They have the TV on for pre-race action hours before it starts. And this year’s finale was at least in a friendly time zone – otherwise they would have been up all night. But what really amazes me is not the dedication – it’s how they watch. The big-screen TV is on, but the sound is turned off. The audio portion comes from a live feed from some other service, through their stereo – complete with subwoofer – to make sure they hear their favorite commentator. A laptop is on their lap, browsers fired up so they can look up stats, peruse multiple team and fan sites, check weather conditions, and just heckle friends over IM. An iPad sits next to them with TweetDeck up, watching their friends tweet. If a yellow flag pops up, they are instantly on the cell phone talking to someone about what happened. They are literally surrounded by multiple media platforms, each one assigned the task it is best suited for.

But their interest in tech goes beyond that. Ask them for stats about F1 engine development programs, ‘tyre’ development, or how individual drivers do on certain tracks, and they pour forth data like they get paid to tell you everything they know. They can tell you about the in-car telemetry systems that constantly send tire pressure, gear box temp, G-force analysis, and 100 other data feeds. Ask them a question and you get both a factual list of events and a personal analysis of what these people are doing wrong. It’s a layman’s perspective, but they are on top of every nuance. God forbid they should have to work over the weekend and only have access to a Slingbox and headphones – that’s just freakin’ torture. Those fantasy baseball people look like ignorant sissies next to F1 fans. They may not have Sabermetrics, but they watch car telemetry like they’re in the Matrix. Perhaps it’s because in the US we don’t have many opportunities to attend F1 events that the ultimate experience is at home, but the degree to which fans have leveraged technology to maximize the experience is pretty cool to watch – or rather, to watch them watch the race.

So when I get a call from one of these friends asking “How do I secure my computer?”, or something like “Which antivirus product should I use?” or “Does LifeLock help keep me secure?”, I am shocked. They immerse themselves in all sorts of tech and apps and hardware, but have no clue about the simplest security settings or approaches. So I’m sitting here typing up a “personal home computer security 101” email. And congratulations to Sebastian Vettel for winning his third world championship – that puts him in very select company.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Rich and Martin on Network Security Podcast #297. Adrian’s Big Data Paper … synthesized. David Mortman is presenting at Sec-Zone next week. Adrian’s Dark Reading post: Database Threats and Countermeasures. Mike’s Dark Reading post: A Backhanded Thanks.

Favorite Securosis Posts
Mike Rothman: Building an Early Warning System: External Threat Feeds. You can’t do it all yourself. So you need to rely on others for threat intelligence in some way, shape, or form. Adrian Lane: Incite 11/28/2012: Meet the Masters. I’m starting to think Mike was just being nice when he said he loved my collection of Heineken beer posters.

Other Securosis Posts
New Paper: Implementing and Managing Patch and Configuration Management.
Enterprise Key Managers: Technical Features, Part 2. Enterprise Key Manager Features: Deployment and Client Access Options. Building an Early Warning System: External Threat Feeds. Friday Summary: November 16, 2012. Favorite Outside Posts Dave Lewis: Log All The Things. Mike Rothman: China’s cyber hackers drive US software-maker to brink. Disturbing story about how a well funded attack can almost bring down a small tech business. That said, if this guy’s pretty good business was at risk, why didn’t he bring in experts earlier and move his systems elsewhere to keep business moving forward? Sounds a bit like Captain Ahab. But it does have a sort of happy ending (h/t @taosecurity). Adrian Lane: Expanding the Cloud – Announcing Amazon Redshift, a Petabyte-scale Data Warehouse Service. I’ll write about this in the near future, but the dirt cheap accessibility of massive resources makes many analysis projects feasible, even for small firms. Project Quant Posts Malware Analysis Quant: Index of Posts. Malware Analysis Quant: Metrics – Monitor for Reinfection. Malware Analysis Quant: Metrics – Remediate. Malware Analysis Quant: Metrics – Find Infected Devices. Malware Analysis Quant: Metrics – Define Rules and Search Queries. Malware Analysis Quant: Metrics – The Malware Profile. Malware Analysis Quant: Metrics – Dynamic Analysis. Research Reports and Presentations Implementing and Managing Patch and Configuration Management. Defending Against Denial of Service (DoS) Attacks. Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments. Tokenization vs. Encryption: Options for Compliance. Pragmatic Key Management for Data Encryption. The Endpoint Security Management Buyer’s Guide. Pragmatic WAF Management: Giving Web Apps a Fighting Chance. Understanding and Selecting Data Masking Solutions. Top News and Posts Banking Trojan tries to hide from security researchers. Microsoft is toast, here’s why. Student Suspended for Refusing to Wear a School-Issued RFID Tracker. No truth to the rumor that they later stapled the RFID tag to his forehead. All Banks Should Display A Warning Like This. Rackspace: Why Does Every Visitor To My Cloud Sites Website Have The Same IP Address? HP says its products sold unknowingly to Syria by partner. EU plans to implement mandatory cyber incident reporting. Chevron was a victim of Stuxnet. RSA Releases Advanced Threat Summit Findings (PDF) Blog Comment of the Week Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Sashank Dara, in response to


Securing Big Data: Security Recommendations for Hadoop and NoSQL [New Paper]

We are pleased to announce the release of our white paper on securing big data environments. This research project provides a high-level overview of security challenges for big data environments. We cover the ways big data differs from traditional relational databases, both architecturally and operationally. We look at some of the built-in and third-party security solutions for big data clusters, and how they work with – and against – big data installations. Finally, we make a base set of recommendations for securing big data installations – we recommend several technologies to address specific threats to the data and the big data cluster itself, preferring options which can scale with the cluster. After all, security should support big data clusters, not break or hamper them.

Somewhat to our surprise, a major task for this research project was to actually define big data – no previous research topic caused us so much trouble just pinning down what we were talking about. Big data clusters exhibit a handful of essential characteristics, but there are hundreds of possible functional configurations for creating a big data cluster. A concrete definition is elusive because there is an exception to almost every rule. One euphemism for big data is ‘NoSQL’ – which highlights big data’s freedom from traditional relational constraints, but there are relational big data clusters. In general we are talking about self-organizing clusters built on a distributed file model such as Hadoop, which can handle insertion and analysis of massive amounts of data. Beyond that it gets a bit fuzzy, and the range of potential uses is nearly limitless. So we developed a definition we think you will find helpful.

Finally, I would like to thank our sponsor for this research: Vormetric. Without sponsorship like this we could not bring you quality research free to the public! We hope you find this research – and the definition – helpful in understanding big data and its associated security challenges. Download the research paper: Securing Big Data.


White Paper: Tokenization vs. Encryption

We are relaunching one of our more popular white papers, Tokenization vs. Encryption: Options for Compliance. The paper was originally written to close some gaps in our existing tokenization research coverage and address common user questions. Specifically, how does tokenization differ from encryption, and how can I decide which to use? We believe tokenization is particularly important for several reasons. First, in an evolving regulatory landscape, we need a critical examination of tokenization’s suitability for compliance. There are many possible applications of tokenization, and it’s simpler and easier to use than many other security tools. Second, we wanted to dispel the myth that tokenization is a replacement technology for encryption, when in fact it’s a complementary solution that – in some cases – makes regulatory compliance easier. Finally, not all of the claimed use cases for tokenization are practical at this time. These questions keep popping up, so we feel a relaunch is in order.

This paper discusses the use of tokenization for payment data, personal information, and health records. The paper was written to address questions regarding the business applicability of tokenization, and is therefore far less technical than most of our research papers. The content has been updated slightly to reflect some of the changes in the PCI Council’s stance on PCI and to address some questions which arise when considering tokenization for PHI and PII. I hope you enjoy reading it as much as I enjoyed writing it. A special thanks to Intel and Prime Factors for sponsoring this research! Download: Tokenization vs. Encryption: Options for Compliance, version 2.
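For readers new to the distinction, here is a toy sketch of the core difference: an encrypted value can be recovered by anyone who obtains the key, while a token is a random surrogate whose only link to the original value is a lookup table inside the token vault. Everything here is illustrative (real deployments use vetted ciphers, format-preserving tokens, and hardened vaults), but it shows why a token by itself reveals nothing, whereas ciphertext is only as safe as the key management around it.

```python
# Toy illustration of the difference between encryption and tokenization.
# Not production code: real systems use vetted ciphers (e.g. AES), format-preserving
# tokens, and a hardened, access-controlled token vault.
import secrets

KEY = secrets.token_bytes(16)

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Stand-in for a real cipher: reversible by anyone who holds the key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

TOKEN_VAULT = {}  # token -> original value; only the vault can map back

def tokenize(pan: str) -> str:
    """Random surrogate with no mathematical relationship to the original value."""
    token = secrets.token_hex(8)
    TOKEN_VAULT[token] = pan
    return token

card = "4111111111111111"
ciphertext = toy_encrypt(card.encode(), KEY)   # recoverable wherever the key travels
token = tokenize(card)                         # meaningless outside the vault
print(ciphertext.hex(), token, TOKEN_VAULT[token] == card)
```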


Friday Summary: October 19, 2012

Research. It’s what I do. And long before I started work at Securosis I had a natural inclination toward it. Researching platforms, software toolkits, hardware, whatever. I want to know all the facts, and most of the rumors and anecdotes as well. I research things furiously. I’m obsessive about it. I will spend hour upon hour trying to answer every question I come up with, looking at all aspects of a product. This job lets me really indulge that facet of my personality – it makes the job enjoyable, and is the reason some research projects go a tad longer than I originally expected. And in an odd way it’s one of the reasons I really like the name Securosis – the name Rich chose for the company before I joined. My research habits border a bit on neurosis, so it fits.

This inclination bleeds over into my personal life as well. Detailed analysis, fact finding, understanding how things work, how the pieces fit, what options are available, using products when you can, or imagining how you might use them when you can’t. It’s a wonderful approach when you are making big purchases like a car or a home. The sheer volume of mental analysis spotlights bad decisions and removes emotion from the equation, and it has saved me from several bad decisions in life. But it’s a bit absurd when you’re buying a pair of running shoes. Or a $20 crock pot. In fact it’s a problem. I have found that analysis takes a lot of the passion out of things. I can analyze a pair of headphones or an amplifier to death. Several items I have purchased over the years are really nice – possibly some of the finest of their types. Yet I am so aware of their faults that I have a tough time just enjoying these products. I can’t just plunk my money down and experience a new CD, a new bicycle, or a new office chair. Great when analyzing stocks – not so much at the Apple Store. Does a new pair of hiking boots really need 20 hours of fact finding? I don’t think so. The ability to just relax and enjoy rather than analyze and critique is a learned response – for me. Now that I have finally admitted my neurosis and accepted it, it’s time to hit the ‘Buy’ button and enjoy my purchase, research be damned!

One last item: Anyone else notice the jump in phishing attempts? Blatant, and multiple attempts with the same payloads. I usually get one a week, but got about 20 over the last couple of weeks. Perhaps it’s just that spam filters are not catching the bulk of them, but it looks like volume has jumped dramatically.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Rich on Pragmatic Key Management for Data Encryption.

Favorite Securosis Posts
Adrian Lane: Understanding and Selecting a Key Manager. Focused introduction – excellent post! Mike Rothman: Understanding and Selecting a Key Manager. The more cloudy things become, the more important encryption is going to be. This research is very important for the next few years.

Other Securosis Posts
Incite 10/17/2012: Passion. Defending Against DoS Attacks: the Process. Friday Summary: October 12, 2012.

Favorite Outside Posts
Rich: Hacked terminals capable of causing pacemaker deaths. We knew this was coming and the device manufacturers tried to pretend it wouldn’t happen. Now let the denials start. Dave Lewis: ‘Four horsemen’ posse: This here security town needs a new sheriff. David Mortman: Amazon’s Glacier cloud is made of… TAPE. It’s ‘elastic’, self service, and on demand. Mike Rothman: What an Academic Who Wrote Her Dissertation on Trolls Thinks of Violentacrez.
A week ago, the worst troll on Reddit was outed. This guy portrays himself as a “regular guy.” Nonsense. Trolls are the scum of the earth – web gladiators who are very tough behind the veil of anonymity. Read this article, where a person who did her dissertation on trolls weighs in. Adrian Lane: The Scrap Value of a Hacked PC, Revisited. This graphic works as a quick education on both the types of attacks a user might face, and why users are barraged with attacks.

Project Quant Posts
Malware Analysis Quant: Index of Posts. Malware Analysis Quant: Metrics – Monitor for Reinfection. Malware Analysis Quant: Metrics – Remediate. Malware Analysis Quant: Metrics – Find Infected Devices. Malware Analysis Quant: Metrics – Define Rules and Search Queries.

Research Reports and Presentations
The Endpoint Security Management Buyer’s Guide. Pragmatic WAF Management: Giving Web Apps a Fighting Chance. Understanding and Selecting Data Masking Solutions. Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks. Implementing and Managing a Data Loss Prevention Solution. Defending Data on iOS.

Top News and Posts
General Dynamics Introduces NSA-Certified COTS Computer. The question is, would you or someone you know buy one? Netanyahu: Cyber attacks on Israel increasing. I want a digital Iron Dome too! With lasers and stuff. Wonder if they sell them on Think Geek? State-Sponsored Malware ‘Flame’ Has Smaller, More Devious Cousin, miniFlame. ‘Mass Murder’ malware. The Costs of the Cloud: Double-Check Me on This, Would You? Nitol Botnet Shares Code with Other China-Based DDoS Malware. PayPal’s Security Token Is Not So Secure After All. The token does not protect the user account from an attacker gaming the process, but that’s not really the value of the token to PayPal. Hackers Exploit ‘Zero-Day’ Bugs For 10 Months On Average Before They’re Exposed. Could Hackers Change Our Election Results? Microsoft Security Intel Report (PDF). Beating Automated SQL Injection Attacks. About the same as our WAF management recommendations. CallCentric hit by DDoS. It’s the fashionable thing. Everyone’s doing it! Russian Anti-Virus Firm Plans Secure Operating System to Combat Stuxnet. For control systems? Yeah, good luck with that. Java Patch Plugs 30 Security Holes.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to nobody, as we have


Friday Summary: October 5, 2012

Gunnar Peterson posted a presentation a while back on how being an investor makes him better at security, and conversely how being in security makes him better at investing. It’s a great concept, and my recent research on different investment techniques has made me realize how apt the concept is. Gunnar’s presentation gets a handful of the big ideas right (including the defensive mindset, using data rather than anecdotes to make decisions, and understanding the difference between what is and what should be), but it actually under-serves his concept – there are many other comparisons that make his point. That crossed my mind when reading An Investor’s Guide to Famous Last Words. Black Swan author Nassim Taleb: “People focus on role models; it is more effective to find antimodels – people you don’t want to resemble when you grow up.” The point in the Fool article is to learn from others’ mistakes. With investing, mistakes are often very public, and we share them as examples of what not to do. In security, not so much. Marcus Ranum does a great presentation called the Anatomy of The Security Disaster, pointing out that problem identification is ignored during the pursuit of great ideas, and blame-casting past the point of no return is the norm. I have lived through this sequence of events myself. And I am not arrogant enough to think I always get things right – I know I had to screw things up more than once just to have a decent chance of not screwing up security again in the future. And that’s because I know many things that don’t work, which – theoretically anyway – gives me better odds at success. This is exactly the case with investing, and it took a tech collapse in 2001 to teach me what not to do. We teach famous investment failures, but we don’t share security failures. Nobody wants the shame of the blame in security.

There is another way investing makes me better at security, and it has to do with investment styles, such as meta-trending, day trading, efficient market theory, cyclic investing, hedging, shorting, value investing, and so on. When you employ a specific style you need to collect specific types of data to fuel your model, which in turn helps you make investment choices. You might look at different aspects of a company’s financials, industry trends, market trends, political trends, social trends, cyclic patterns, the management team, or even disasters and social upheaval as information catalysts. Your model defines which data is needed. You quickly realize that mainstream media only caters to certain styles of investing – data for other styles is only a tiny fraction of what the media covers. Under some investment styles all mainstream investment news is misleading BS. The data you don’t want is sprayed at you like a fire hose because those stories interest many people. We hear about simple and sexy investment styles endlessly – boring, safe, and effective investment is ignored.

Security practitioners, do you see where I am going with this? It is very hard to filter out the noise. Worse, when noise is all you hear, it’s really easy to fall into the “crap trap”. Getting good data to base decisions on is hard, but bad data is free and easy. The result is to track outside of your model, your style, and your decision process. You react to the BS and slide toward popular or sexy security models or products – that don’t work. It’s frightfully easy to do when all anyone talks about are red herrings. Back to Gunnar’s quote… Know what you don’t want your security model to be.
This is a great way to sanity check the controls and processes you put into place, to ensure you are not going down the wrong path or worrying about the wrong threats.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Adrian’s Dark Reading post: What’s the threat? Rich’s Dark Reading post: Security Losses Remain Within Range of Acceptable. Adrian’s research paper: Securing Small Databases. Mike’s upcoming webcast: I just got my WAF, Now What?

Favorite Securosis Posts
Mike Rothman: Securing Big Data: Operational Security Issues. This stuff looks a lot like the issues you face on pretty much everything else. But a little different. That’s the point I take away from this post and the series. Yes it’s a bit different, and a lot of the fundamentals and other disciplines used through the years may not map exactly, but they are still useful. Adrian Lane: Incite: Cash is King. How many startups have I been at that hung on the fax machine at the end of every quarter? How many sales teams have I been with where “the Bell” only rang the last three days of a quarter? Good stuff. Rich: I’m picking my Dark Reading post this week. It stirred up a bit of a Twitter debate, and I think I need to write more on this topic because I couldn’t get enough nuance into the initial piece.

Other Securosis Posts
New Series: Understanding and Selecting Identity Management for Cloud Services. Endpoint Security Management Buyer’s Guide Published (with the Index of Posts). Securing Big Data: Operational Security Issues.

Favorite Outside Posts
Mike Rothman: DDoS hitmen for hire. You can get anything as a service nowadays. Even a distributed denial of service (DDoS). I guess this is DDoSaaS, eh? Adrian Lane: Think differently about database hacking. Lazlo Toth and Ferenc Spala’s DerbyCon presentation shows how to grab encryption keys and passwords from OCI clients. A bit long, but a look at hacking Oracle databases without SQL injection. Yes, there are non-SQL injection attacks, in case you forgot. Will we see this in the wild? I don’t know. Rich: Antibiotic Resistant security by Valsmith. What he’s really driving at is an expansion of monoculture and our reliance on signature-based AV, combined with a few other factors. It’s a very worthwhile read. The TL;DR version is that we have created


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.