Securosis

Research

NoSQL Security 2.0 [New Series] *updated*

NoSQL, both the technology and the industry, have taken off. We are past the point where we can call big data a fad, and we recognize that we are staring straight into the face of the next generation of data storage platforms. About two years ago we started the first Securosis research project on big data security, and a lot has changed since then. At that point many people had heard of Hadoop, but could not describe what characteristics made big data different from relational databases – other than storing a lot of data. Now there is no question that NoSQL – as a data management platform – is here to stay; enterprises have jumped into large scale analysis projects with both feet, and people understand the advantages of leveraging analytics for business, operations, and security use cases. But as with all types of databases – and make no mistake, big data systems are databases – high quality data produces better analysis results. That is why, in the majority of cases we have witnessed, a key ingredient is sensitive data. It may be customer data, transactional data, intellectual property, or financial information, but it is a critical ingredient. It is not really a question of whether sensitive data is stored within the cluster – more one of which sensitive data it contains. Given broad adoption, rapidly advancing platforms, and sensitive data, it is time to re-examine how to secure these systems and the data they store. But this paper will be different from the last one. We will offer much more on big data security strategies in addition to tools and technologies. We will spend less time defining big data and more time looking at trends. We will offer more explanation of security building blocks, including data encryption, logging, network encryption, and access controls/identity management in big data ecosystems. We will discuss the types of threats to big data and look at some of the use cases driving security discussions.
And just like last time, we will offer a frank discussion of limitations in platforms and vendor offerings, which leave holes in security or fail to mesh with the inherent performance and scalability of big data. I keep getting one question from enterprise customers and security vendors alike: people repeatedly ask for a short discussion of data-centric security, so this paper provides one. In the last year I have gotten far fewer questions on how to protect a NoSQL cluster, and far more on how to protect data before it is stored into the cluster. This was a surprise, and it is not clear from my conversations why: perhaps users simply don’t trust the big data technology, perhaps they worry about data propagation, perhaps they don’t feel they can meet compliance obligations, or perhaps they are worried about the double whammy of big data atop cloud services – all these explanations are plausible, and they have all come up. But regardless of driver, companies are looking for advice around encryption, and wondering whether tokenization and masking are viable alternatives for their use cases. The nature of the questions tells me that is where the market is looking for guidance, so I will cover both cluster security and data-centric security approaches. Here is our current outline:

Big Data Overview and Trends: This post will provide a refresher on what big data is, how it differs from relational databases, and how companies are leveraging its intrinsic advantages. We will also provide references on how the market has changed and matured over the last 24 months, as this bears on how to approach security.

Big Data Security Challenges: We will discuss why big data is different architecturally and operationally, and how the platform bundles and approaches differ from traditional relational databases. We will discuss which traditional tools, technologies, and security controls are present, and how usage of these tools differs in big data environments.
Big Data Security Approaches: We will outline the approaches companies take when implementing big data security programs, as reference architectures. We will cover walled-garden models, cluster security approaches, data-centric security, and cloud strategies.

Cluster Security: An examination of how to secure a big data cluster. This will be a threat-centric look at how to protect a cluster from attackers, rogue admins, and application programmers.

Data (Centric) Security: We will look at tools and technologies that protect data regardless of where it is stored or moved, for use when you don’t trust the database or its repository.

Application Security: An executive summary of application security controls and approaches.

Big Data in Cloud Environments: Several cloud providers offer big data as part of Platform or Infrastructure as a Service offerings. Intrinsic to these environments are security controls offered by the cloud vendor, providing optional approaches to securing the cluster and meeting compliance requirements.

Operational Considerations: Day-to-day management of a cluster is different than management of relational databases, so the focus of security efforts changes too. This post will examine how daily security tasks change, and how to adjust operational controls and processes to compensate. We will also offer advice on integration with existing security systems such as SIEM and IAM.

As with all our papers, you have a voice in what we cover, so I would like feedback from readers – particularly on whether you want a short section on application layer security as well. It is (tentatively) included in the current outline. Obviously this would be a brief overview – application security itself is a very large topic. That said, I would like input on that and any other areas you feel need addressing.


Friday Summary: March 28, 2014—Cloud Wars

Begun, the cloud war has. We have been talking about cloud computing for a few years now on this blog, but in terms of market maturity it is still early days. We are really entering the equivalent of the second inning of a much longer game – it will not be over for a long time, and things are just now getting really interesting. In case you missed it, the AWS Summit began this week in San Francisco, with Amazon announcing several new services and advances. But the headline of the week was Google’s announced price cuts for their cloud services: Google Compute Engine is seeing a 32 percent reduction in prices across all regions, sizes, and classes. App Engine prices are down 30 percent, and the company is also simplifying its price structure. The price of cloud storage is dropping a whopping 68 percent, to just $0.026/month per gigabyte and $0.02/month per gigabyte for DRA. Even the deepest discount previously available – for those who stored more than 4,500TB of data in Google’s cloud – was not this low. Shortly thereafter Amazon countered with their own price reductions – something we figured they were prepared to do, but did not intend to announce during the event. Amazon has been more focused on methodically delivering new AWS functionality, outpacing all rivals by a wide margin. More importantly, Amazon has systematically removed impediments to enterprise adoption around security and compliance. But while we feel Amazon has a clear lead in the market, Google has been rapidly improving. Our own David Mortman pointed out several more interesting aspects of the Google announcement, lost in the pricing war noise: “The thing isn’t just the lower pricing. It’s the lower pricing with automatic ‘reserve instances’ and the managed VM offering, so you can integrate Google Compute Engine (GCE) and Google App Engine. Add in free git repositories for managing the GCE infrastructure and support for doing that via GitHub – we’re seeing some very interesting features to challenge AWS.”
GOOG is still young at offering this as an external service, but talk about giving notice… Competition is good! This all completely overshadowed Cisco’s plans to pour $1B into an OpenStack-based “Network of Clouds”. None of this is really security news, but doubling down on cloud investments and clearly targeting DevOps teams with new services make it clear where vendors think this market is headed. And Google’s “Nut Shot” shows that the battle is really heating up. On to the Summary, where several of us had more than one favorite external post:

Favorite Securosis Posts

Adrian Lane: Incite 3/26/2014: One Night Stand. All I could think of when I read this was Rev. Horton Heat’s song “Eat Steak”.
Mike Rothman: Firestarter: The End of Full Disclosure.

Other Securosis Posts

Mike’s Upcoming Webcasts.
Friday Summary: March 21, 2014 – IAM Mosaic Edition.

Favorite Outside Posts

Gal Shpantzer: Why Google Flu is a failure: the hubris of big data.
Adrian Lane: Canaries are Great!
David Mortman: Primer on Showing Empathy in the Tech Industry.
Gunnar: Making Sure Your Security Advice and Decisions are Relevant. “Information security professionals often complain that executives ignore their advice. There could be many reasons for this. One explanation might be that you are presenting your concerns or recommendations in the wrong business context. You’re more likely to be heard if you relate the risks to an economic moat relevant to your company.”
Gunnar: Cyberattacks Give Lift to Insurance. The cybersecurity market is growing: “The Target data breach was the equivalent of 10 free Super Bowl ads”.
Mike Rothman: You’ve Probably Been Pouring Guinness Beer The Wrong Way Your Whole Life. As a Guinness lover, this is critical information to share. Absolutely critical.
Mike Rothman: Data suggests Android malware threat greatly overhyped. But that won’t stop most security vendors from continuing to throw indiscriminate Android FUD.
Research Reports and Presentations

Reducing Attack Surface with Application Control.
Leveraging Threat Intelligence in Security Monitoring.
The Future of Security: The Trends and Technologies Transforming Security.
Security Analytics with Big Data.
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7.
Eliminate Surprises with Security Assurance and Testing.
What CISOs Need to Know about Cloud Computing.
Defending Against Application Denial of Service Attacks.
Executive Guide to Pragmatic Network Security Management.

Top News and Posts

Apple and Google’s wage-fixing. Not security, but interesting.
Google Announces Massive Price Drops for Cloud.
Cisco plans $1B investment in global cloud infrastructure.
Microsoft Security Advisory: Microsoft Word Under Siege.
Chicago’s Trustwave sued over Target data breach.

Blog Comment of the Week

This week’s best comment goes to Marco Tietz, in response to Friday Summary: IAM Mosaic Edition. “Thanks Adrian, it looks like you captured the essence of the problem. IAM is very fragmented and getting everything to play together nicely is quite challenging. Heck, just sorting it out corp internal is challenging enough without even going to the Interwebs. This is clearly something we need to get better at, if we are serious about ‘The Cloud’.”


Analysis of Visa’s Proposed Tokenization Spec

Visa, Mastercard, and Europay – together known as EMVCo – published a new specification for Payment Tokenisation this month. Tokenization is a proven security technology, which has been adopted by a couple hundred thousand merchants to reduce PCI audit costs and the security exposure of storing credit card information. That said, there is really no tokenization standard, for payments or otherwise. Even the PCI-DSS standard does not address tokenization, so companies have employed everything from hashed credit card (PAN) values (craptastic!) to very elaborate and highly secure random value tokenization systems. This new specification is intended both to raise the bar on schlock home-grown token solutions and, more importantly, to address fraud with existing and emerging payment systems. I don’t expect many of you to read 85 pages of token system design to determine what it really means, whether there are significant deficiencies, or whether these are the best approaches to solving payment security and fraud issues, so I will summarize here. But I expect this specification to last, so if you build tokenization solutions for a living you had best get familiar with it. For the rest of you, here are some highlights of the proposed specification. As you would expect, the specification requires the token format to be similar to credit card numbers (13-19 digits) and to pass a Luhn check. Unlike financial tokens used today – and at odds with the PCI specification, I might add – these tokens can be used to initiate payments. Tokens are merchant or payment network specific, so they are only relevant within a specific domain. For most use cases the PAN remains private between issuer and customer. The token becomes a payment object shared between merchants, payment processors, the customer, and possibly others within the domain. There is an identity verification process to validate the requestor of a token each time a token is requested.
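To see why the hashed-PAN approach earns that assessment: PANs have a tiny effective keyspace, because the six-digit issuer prefix (BIN) is often known and the final digit is a derivable Luhn check digit, leaving only the middle digits to guess. A hypothetical sketch, with illustrative numbers rather than any real card:

```python
import hashlib

def luhn_check_digit(body):
    # Compute the check digit that makes body + digit pass a Luhn check:
    # double every second digit from the right (starting with the digit
    # next to the check position), subtracting 9 from results over 9.
    total = 0
    for i, d in enumerate(int(c) for c in reversed(body)):
        if i % 2 == 0:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return str((10 - total % 10) % 10)

BIN = "411111"  # known issuer prefix (illustrative)

def pan_for(middle):
    body = BIN + f"{middle:09d}"          # 6 BIN + 9 middle digits
    return body + luhn_check_digit(body)  # + check digit = 16 digits

# An attacker who steals a table of unsalted PAN hashes...
stolen_hash = hashlib.sha256(pan_for(1234).encode()).hexdigest()

# ...recovers the PAN by enumerating the middle digits (demo range only;
# the full space is just 10**9, trivial for modern hardware).
recovered = next(p for p in map(pan_for, range(10**5))
                 if hashlib.sha256(p.encode()).hexdigest() == stolen_hash)
```

Salting and iterating the hash raises the attacker’s cost, but the keyspace stays small – which is exactly why random-value tokens are the stronger design.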
The type of token generated is variable, based upon risk analysis – higher risk factors mean a lower-assurance token! When tokens are used as payment objects, there are “Data Elements” – think of them as metadata describing the token – to buttress security. These include a cryptographic nonce, payment network data, and the token assurance level. Each of these points has ramifications across the entire tokenization ecosystem, so your old tokenization platform is unlikely to meet these requirements. That said, they designed the specification to work within today’s payment systems while addressing near-term emerging security needs. Don’t let the misspelled title fool you – this is a good specification! Unlike the PCI’s “Tokenization Guidance” paper from 2011 – rumored to have been drafted by Visa – this is a really well thought out document. It is clear that whoever wrote this has been thinking about tokenization for payments for a long time, and has done a good job of providing functions to support all the use cases the specification needs to address. There are facilities and features to address PAN privacy, mobile payments, repayments, EMV/smartcard, and even card-not-present web transactions. And it does not address one single audience to the detriment of others – the needs of all the significant stakeholders are addressed in some way. Still, NFC payments seem to be the principal driver; the process and data elements really only gel when considered from that perspective. I expect this standard to stick.
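As a minimal sketch of the format requirement above (my own illustration, not code from the specification): a random 16-digit surrogate can be made format-compatible with systems that validate PANs by choosing its last digit so the value passes a Luhn check.

```python
import random

def luhn_checksum(digits):
    # Standard Luhn: double every second digit from the right, subtract 9
    # from results over 9, and sum; a valid number sums to 0 mod 10.
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10

def make_token(length=16):
    # Random body, then a check digit chosen so the token passes Luhn.
    body = [random.randint(0, 9) for _ in range(length - 1)]
    check = (10 - luhn_checksum(body + [0])) % 10
    return "".join(map(str, body + [check]))

token = make_token()
assert luhn_checksum([int(c) for c in token]) == 0  # passes Luhn
```

A production system would also check collisions against issued tokens and avoid generating values that collide with real BIN ranges; this sketch shows only the format rule.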


Friday Summary: March 21, 2014—IAM Mosaic Edition

Researching and writing about identity and access management over the last three years has made one thing clear: this is a horrifically fragmented market. Lots and lots of vendors assemble a bunch of pieces to form a ‘vision’ of how customers want to extend identity services outside the corporate perimeter – to the cloud, mobile, and whatever else they need. And for every possible thing you might want to do, there are three or more approaches. Very confusing. I have had it in mind for several months to create a diagram that illustrates all the IAM features available out there, along with how they all link together. About a month ago Gunnar Peterson started talking about creating an “identity mosaic” to show how all the pieces fit together. As with many subjects, Gunnar and I were of one mind on this: we need a way to show the entire IAM landscape. I wanted to do something quick to show the basic data flows and demystify which protocols do what. Here is my rough cut at diagramming the current state of the IAM space. But when I sent the rough cut over to Gunnar, he responded with: “Only peril can bring the French together. One can’t impose unity out of the blue on a country that has 265 different kinds of cheese.” – Charles de Gaulle. Something as basic as ‘auth’ isn’t simple at all. Just like the aisles in a high-end cheese shop – with all the confusing labels and mingled aromas, and the sneering cheese agent who cannot contain his disgust that you don’t know Camembert from Shinola – identity products are unfathomable to most people (including IT practitioners). And no one has been able to impose order on the identity market. We have incorrectly predicted several times that recent security events would herd the identity vendor cats in a single unified direction. We were wrong. We continue to swim in a market with a couple hundred features but no unified approach.
Which is another way of saying that it is very hard to present this market to end users and have it make sense. A couple points to make on this diagram: It is a work in progress – critique and suggestions are encouraged. There are many pieces to this puzzle, and I left a couple things out which I probably should not have. LDAP replication? Anyone? Note that I did not include authorization protocols, roles, attributes, or other entitlement approaches! Yes, I know I suck at graphics. Gunnar is working on a mosaic that will be a huge four-dimensional variation on Eve Maler’s identity Venn diagram, but it requires Oculus Rift virtual reality goggles. Actually he will probably have his kids build it as a science project, but I digress. Do let us know what you think. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Mort quoted in Network World.

Favorite Securosis Posts

Mike Rothman: Firestarter: An Irish Wake.
Most of us chose this one: Jennifer Minella Is Now a Securosis Contributing Analyst.

Other Securosis Posts

Incite 3/18/2014: Yo Mama!
Webinar Tomorrow: What Security Pros Need to Know About Cloud.
Defending Against Network Distributed Denial of Service Attacks [New Series].
Reminder: We all live in glass houses.
New Paper: Reducing Attack Surface with Application Control.

Favorite Outside Posts

A Few Lessons From Sherlock Holmes. Great post here about some of the wisdom of Sherlock that can help improve your own thinking.
Gunnar: Project Loon. Cloud? Let’s talk stratosphere and balloons – that’s what happens when you combine the Internet with the Montgolfiers.
Adrian Lane: It’s not my birthday. I was going to pick Weev’s lawyers appear in court by Robert Graham as this week’s Fav, but Rik Ferguson’s post on sites that capture birthday information struck an emotional chord – this has been a peeve of mine for years. I leave the wrong date at every site, and record which is which, so I know what’s what.
Gal Shpantzer: Nun sentenced to three years, men receive five. Please read the story – it is informative, and goes into the judge’s sentencing considerations, based on the histories of the convicted protesters and the requests of the defense and prosecution. One of them had been released in January 2012 after a previous trespass at Y-12…
David Mortman: Trust me: The DevOps Movement fits perfectly with ITSM. Yes, trust him. He’s The Real Gene Kim!

Research Reports and Presentations

Reducing Attack Surface with Application Control.
Leveraging Threat Intelligence in Security Monitoring.
The Future of Security: The Trends and Technologies Transforming Security.
Security Analytics with Big Data.
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7.
Eliminate Surprises with Security Assurance and Testing.
What CISOs Need to Know about Cloud Computing.
Defending Against Application Denial of Service Attacks.
Executive Guide to Pragmatic Network Security Management.

Top News and Posts

110,000 WordPress Databases Exposed.
Whitehat Security’s Aviator browser is coming to Windows.
Missing the (opportunity of) Target.
PWN2OWN Results.
Symantec CEO fired. The official ‘CEO Transition’ Press Release.
This Is Why Apple Enables Bluetooth Every Time You Update iOS.
Threat Advisory: PHP-CGI At Your Command.
IBM says no NSA backdoors in its products.
Google DNS Hijack.
14% of Starbucks transactions are now made with a mobile device. And what the heck is a “Chief Digital Officer”?
New Jersey Boy Climbs to Top of 1 World Trade Center.
Are Nation States Responsible for Evil Traffic Leaving Their Networks?
Full Disclosure shuts down.
NSA Program monitors content of all calls. Country details not provided.


Friday Summary: March 7, 2014

I don’t code much. In fact, over the last 10 years or so I have been actively discouraged from coding, with at least one employer threatening to fire me if I was discovered. I have helped firms architect new products, done code reviews, done some threat modeling, and even written a few small Java utilities to weave together a couple other apps. But there has been very, very little development in the last decade. Now I have a small project I want to do, so I jumped in with both feet, and it feels like I was dumped into the deep end of the pool. I forgot how much bigger a problem space application development is, compared to simple coding. In the last couple of days I have learned the basics of Ruby, Node.js, Chef, and even Cucumber. I have figured out how to bounce between environments with RVM. I have brushed up on some Python and Java. And honestly, it’s not very difficult. Learning languages and tools is a trivial matter. A few hours with a good book or web site, some dev tools, and you’re running. But when you are going to create something more than a utility, everything changes. The real difficulty is all the different layers of questions about the big picture: architecture, deployment, UI, and development methodologies. How do you want to orchestrate activities and functions? How do you want to architect the system? How do you allow for customization? Do I want to do a quick prototype with the intention of rewriting once I have the basic proof of concept, or do I want to stub out the system and then use a test-driven approach? State management? Security? Portability? The list goes on. I had forgotten a lot of these tasks, and those brain cells have not been exercised in a long time. I forgot how much prep work you need to do before you write a line of code. I forgot how easy it is to get sucked into the programming vortex, and totally lose track of time.
I forgot the stacks of coffee-stained notes and hundreds of browser tabs with all the references I am reviewing. I forgot the need to keep libraries of error handling, input validation, and various other methods so I don’t need to recode them over and over. I forgot how much I eat when developing – when my brain is working at capacity I consume twice as much food. And twice as much caffeine. I forgot the awkwardness of an “Aha!” moment when you figure out how to do something, a millisecond before your wife realizes you haven’t heard a word she said for the last ten minutes. It’s all that. And it’s good. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Mort quoted in Network World.
Rich quoted in Building the security bridge to the Millennials.
Adrian quoted on Database Denial of Service.
David Mortman and Adrian Lane will be presenting at Secure360.
Mike and JJ podcast about the Neuro-Hacking talk at RSA.

Favorite Securosis Posts

Adrian Lane: Research Revisited: The Data Breach Triangle. This magical concept from Rich has aged very, very well. I also use it frequently, basically because it’s awesome.
Mike Rothman: Research Revisited: Off Topic: A Little Perspective. Rich brought me back to the beginning of this strange journey, since I largely left the corporate world. 2006 was so long ago, yet it seems like yesterday.

Other Securosis Posts

Incite 3/5/2014: Reentry.
Research Revisited: FireStarter: Agile Development and Security.
Research Revisited: POPE analysis on the new Securosis.
Research Revisited: Apple, Security, and Trust.
Research Revisited: Hammers vs. Homomorphic Encryption.
Research Revisited: Security Snakeoil.
New Paper: The Future of Security: The Trends and Technologies Transforming Security.
Research Revisited: RSA/NetWitness Deal Analysis.
Research Revisited: 2006 Incites.
Research Revisited: The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About.

Favorite Outside Posts

Adrian Lane: Charlie Munger on Governance.
Charlie Munger is a favorite of mine, and about as pragmatic as it gets. Good read from Gunnar’s blog.
Gal Shpantzer: Bloodletting the Arms Race: Using Attacker’s Techniques for Defense. Ryan Barnett, web app security and WAF expert, writes about banking trojans’ functionality and how to use it against attackers.
David Mortman: Use of the term “Intelligence” in the RSA 2014 Expo.
Mike Rothman: How Khan Academy is using design to pave the way for the future of education. I’m fascinated by design – or, more often, by very bad design, which we see a lot of in security. This is a good story of how Khan Academy focuses on simplification to teach more effectively.

Research Reports and Presentations

The Future of Security: The Trends and Technologies Transforming Security.
Security Analytics with Big Data.
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7.
Eliminate Surprises with Security Assurance and Testing.
What CISOs Need to Know about Cloud Computing.
Defending Against Application Denial of Service Attacks.
Executive Guide to Pragmatic Network Security Management.
Security Awareness Training Evolution.
Firewall Management Essentials.

Top News and Posts

Behind iPhone’s Critical Security Bug, a Single Bad ‘Goto’.
We Are All Intelligence Officers Now. A week old – we’re catching up on our reading.
Marcus Ranum at RSA (audio).
Hacking Team’s Foreign Espionage Infrastructure Located in U.S.
The Face Behind Bitcoin.
Uroburos Rootkit.
Fix it tool available to block Internet Explorer attacks leveraging CVE-2014-0322.

Blog Comment of the Week

This week’s best comment goes to Marco Tietz, in response to Research Revisited: FireStarter: Agile Development and Security – you’ll have to watch the video to get it. “@Adrian: good video on Agile vs Security. But why did you have the Flying Spaghetti Monster in there and didn’t even give it credit! 🙂 rAmen”


Research Revisited: FireStarter: Agile Development and Security

I have had many conversations over the last few months with firms about to take their first plunge into Agile development methodologies. Each time, they ask how to map secure software development processes into an Agile framework. So I picked this FireStarter for today’s retrospective on Agile development and security (see the original post with comments). I am a big fan of the Agile project development methodology, especially Agile with Scrum. I love the granularity and focus it requires. I love that at any given point in time you are working on the most important feature or function. I love the derivative value of communication, and the subtle peer pressure, that Scrum meetings produce. I love that if mistakes are made, you do not go far in the wrong direction – resulting in higher productivity and fewer total disasters. I think Agile is the biggest advancement in code development in the last decade because it addresses issues of complexity, scalability, focus, and bureaucratic overhead. But it comes with one huge caveat: Agile hurts secure code development. There, I said it. Someone had to. The Agile process, and even the Scrum leadership model, hamstrings development in terms of building secure products. Security is not a freakin’ task card. Logic flaws are not well-documented, discrete tasks to be assigned. Project managers (and unfortunately most ScrumMasters) learned security by skimming a “For Dummies” book at Barnes & Noble while waiting for lattes, but they are the folks making the choices as to what security should make it into iterations. Just like general IT security, we end up wrapping the Agile process in a security blanket, or bolting on security after the code is complete, because the process itself is not suited to secure development. I know several of you will be saying “Prove it! Show us a study or research evidence that supports your theory.” I can’t. I don’t have meaningful statistical data to back up my claim.
But that does not mean it isn’t true, and there is ample anecdotal evidence to support what I am saying. For example:

The average Sprint duration of two weeks is simply too short for meaningful security testing. Fuzzing and black box testing are infeasible in the context of nightly builds or pre-release sanity checks.

Trust assumptions – between code modules, or in system functions where multiple modules process requests – cannot be fully exercised and tested within the Agile timeline. White box testing can be effective, but security assessments simply don’t fit into neat 4-8 hour windows.

In the same way Agile products deviate from design and architecture specifications, they deviate from systemic analysis of trust and code dependencies. It is a classic forest-for-the-trees problem: the efficiency and focus gained by skipping over big picture details necessarily come at the expense of understanding how the system and data are used as a whole. Agile is great for dividing and conquering what you know, but not so great for dealing with the abstract.

Secure code development is not like fixing bugs, where you have a stack trace to follow. Secure code development is more about coding principles that lead to better security. In the same way Agile cannot help enforce code ‘style’, it does not help with secure coding guidelines. (Secure) style verification is an advantage of pair programming and inherent to code review, but not intrinsic to Agile.

The person on the Scrum team with the least knowledge of security – the Product Manager – prioritizes what gets done. Project managers generally do not track security testing, and they are not incented to get security right. They are incented to get the software over the finish line. If they track bugs on the product backlog, they probably have a task card buried somewhere, but do not understand the threats.
Security personnel are chickens in the project, and do not gate code acceptance the way they traditionally could in waterfall testing; they may also have limited exposure to developers. The fact that major software development organizations are modifying or wrapping Agile with other frameworks to provide security is evidence of the difficulty of applying security practices directly. The forms of testing that fit Agile development are more likely to get done; those that don’t fit are typically skipped (especially at crunch time), or must be scheduled outside the development cycle. It’s not just that the granular focus on tasks makes it harder to verify security at the code and system levels. It’s not just that features are the focus, or that the wrong person is making security decisions. It’s not just that the quick turnaround in code production precludes effective forms of security testing. It’s not just that it’s hard to bucket security into discrete tasks. It is all that and more. We are not about to see a study comparing waterfall with Agile for security benefits – putting together similar development teams to create similar products under two development methodologies, just to prove this point, is infeasible. I have run Agile and waterfall projects of similar natures in parallel, and while Agile had overwhelming advantages in a number of areas, security was not one of them. If you are moving to Agile, great – but you will need to evolve your Agile process to accommodate security. What do you think? How have you successfully integrated secure coding practices with Agile?
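To illustrate the point about which testing ‘fits’: a unit-style negative test for input validation runs in milliseconds and slots into any sprint’s nightly build, unlike a multi-day fuzzing campaign. The validator and test cases below are hypothetical examples, not from any specific project:

```python
import re
import unittest

# Whitelist validation: only word characters, bounded length.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")

def valid_username(name: str) -> bool:
    # fullmatch requires the entire string to fit the whitelist pattern.
    return bool(USERNAME_RE.fullmatch(name))

class NegativeInputTests(unittest.TestCase):
    def test_rejects_malicious_or_malformed_input(self):
        # Injection-style strings, empty, too short, too long: all rejected.
        for bad in ["", "ab", "a" * 33, "bob; DROP TABLE users", "<script>"]:
            self.assertFalse(valid_username(bad))

    def test_accepts_normal_names(self):
        self.assertTrue(valid_username("alice_01"))
```

Run with `python -m unittest` as part of the build – a security check cheap enough that no crunch-time triage will cut it.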


Research Revisited: Hammers vs. Homomorphic Encryption

We are running a retrospective during RSA, because we cannot blog at the show. We each picked a couple posts we like and still consider relevant enough to share. I picked a 2011 post on hammers and homomorphic encryption, because a couple times a year I hear about a new startup which is going to revolutionize security with a new take on homomorphic encryption. Over and over. Perhaps some day we will get there, but for now we have proven technologies that serve the same end. (Original post with comments) Researchers at Microsoft are presenting a prototype in which encrypted data can be used without decrypting it. Called homomorphic encryption, the idea is to keep data in a protected state (encrypted) yet still useful. It may sound like Star Trek technobabble, but this is a real working prototype. The set of operations you can perform on encrypted data is limited to a few things like addition and multiplication, but most analytics systems are limited as well. If this works, it would offer a new way to approach data security for publicly available systems. The research team is looking for a way to reduce encryption operations, as they are computationally expensive – encryption and decryption demand a lot of processing cycles. Performing calculations and updates on large data sets becomes very expensive, as you must decrypt the data set, find the data you are interested in, make your changes, and then re-encrypt the altered items. The ultimate performance impact varies with the storage system and method of encryption, but overhead and latency might typically range from 2x to 10x compared to unencrypted operations. It would be a major advancement if they could dispense with the encryption and decryption operations while still enabling reporting on secured data sets. The promise of homomorphic encryption is predictable alteration without decryption. The possibility of being able to modify data without sacrificing security is compelling.
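The homomorphic property itself is easy to demonstrate. Microsoft’s prototype was a lattice-based scheme, but textbook (unpadded) RSA exhibits the same kind of behavior for multiplication, so it makes a compact illustration:

```python
# Textbook (unpadded) RSA is multiplicatively homomorphic:
# E(a) * E(b) mod n == E(a * b). Tiny illustrative key only --
# real RSA uses padding, which deliberately destroys this property.
p, q = 61, 53
n = p * q                           # modulus: 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

a, b = 7, 6
combined = (encrypt(a) * encrypt(b)) % n   # multiply ciphertexts only
assert decrypt(combined) == a * b          # yet it decrypts to the product
```

Note that whoever multiplied the ciphertexts never saw 7, 6, or the private key – which is both the appeal and, as discussed below, part of the integrity problem.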
Running basic operations on encrypted data might remove the threat of exposing data in the event of a system breach or user carelessness. And given that every company even thinking about cloud adoption is looking at data encryption and key management deployment options, there is plenty of interest in this type of encryption. But like a lot of theoretical lab work, practicality has an ugly way of pouring water on our security dreams. There are three very real problems for homomorphic encryption and computation systems: Data integrity: Homomorphic encryption does not protect data from alteration. If I can add, multiply, or change a data entry without access to the owner’s key, that becomes an avenue for an attacker to corrupt the database. Alteration of pricing tables, user attributes, stock prices, or other information stored in a database is just as damaging as leaking information. An attacker might not know what the original data values were, but that’s not enough to provide security. Data confidentiality: Homomorphic encryption can leak information. If I can add two values together and come up with a consistent value, it’s possible to reverse engineer the values. The beauty of encryption is that when you make a very minor change to the plaintext – the data you are encrypting – you get radically different output. With CBC-mode encryption and a random IV, the same plaintext even produces different encrypted values each time. The question with homomorphic encryption is whether it can be used while still maintaining confidentiality – it might well leak data to determined attackers. Performance: Performance is poor and will likely remain worse than classical encryption. As homomorphic performance improves, so do more common forms of encryption. This is important when considering the cloud as a motivator for this technology, as acknowledged by the researchers.
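The confidentiality concern above boils down to determinism: any scheme that maps equal inputs to equal outputs leaks structure. The sketch below uses a keyed HMAC as a stand-in for any deterministic transformation (it is a one-way function, not encryption, and the key and data are hypothetical), but the leak pattern is the same: an attacker who never recovers a key can still run frequency analysis on the protected column.

```python
# Sketch of the deterministic-output leak: equal plaintexts map to
# equal protected values, so value frequencies survive protection.
import hashlib
import hmac
from collections import Counter

KEY = b"hypothetical-secret-key"  # never visible to the attacker

def protect(value: str) -> str:
    # Deterministic keyed transform -- stands in for any scheme that
    # produces the same output for the same input.
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

salaries = ["50000", "50000", "50000", "90000", "120000"]
protected = [protect(s) for s in salaries]

# Without the key, the attacker still sees which rows share a value
# and which value dominates -- structure leaks out.
counts = Counter(protected)
print(counts.most_common(1)[0][1])  # 3
```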
Many firms are looking to “The Cloud” not just for elastic pay-as-you-go services, but also as a cost-effective tool for handling very large databases. As databases grow, the performance impact grows in a super-linear way – layering on a security tool with poor performance is a non-starter. Not to be a total buzzkill, but I wanted to point out that there are practical alternatives that work today. For example, data masking obfuscates data but allows computational analytics. Masking can be done in such a way as to retain aggregate values while masking individual data elements. Masking – like encryption – can be poorly implemented, enabling the original data to be reverse engineered. But good masking implementations keep data secure, perform well, and facilitate reporting and analytics. Also consider the value of private clouds on public infrastructure. In one of the many possible deployment models, data is locked into a cloud as a black box, and only approved programmatic elements ever touch the data – not users. You import data and run reports, but do not allow direct access to the data. As long as you protect the management and programmatic interfaces, the data remains secure. There is no reason to look for isolinear plasma converters or quantum flux capacitors when a hammer and some duct tape will do.
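One masking technique consistent with the post's point – retain aggregates while breaking individual values – is a column shuffle. This is a minimal sketch under that assumption (the records and field names are invented): permuting a sensitive column severs the link between identity and value, while sums and means used for reporting are unchanged.

```python
# Minimal masking sketch: shuffle a sensitive column so no row's value
# can be attributed to that row, while aggregates remain intact.
import random

records = [
    {"name": "Alice", "salary": 50000},
    {"name": "Bob",   "salary": 90000},
    {"name": "Carol", "salary": 120000},
]

def mask_by_shuffle(rows: list, field: str) -> list:
    values = [r[field] for r in rows]
    random.shuffle(values)  # permute values across rows
    return [{**r, field: v} for r, v in zip(rows, values)]

masked = mask_by_shuffle(records, "salary")

# Aggregates survive masking, so reporting and analytics still work...
total = sum(r["salary"] for r in masked)
print(total)  # 260000
# ...but no individual masked row reliably reveals that person's salary.
```

Production masking tools use stronger techniques (substitution, format-preserving transforms), but the aggregate-preserving property is the same.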


Security Analytics with Big Data Research Paper

I am happy to announce the release of a research paper a long time in the making: Security Analytics with Big Data. This topic generates tons of questions from end users, and we get them from large and mid-sized enterprises alike. The research outline The goals of this research project were threefold: to describe what security analytics with big data is and what it looks like; to discuss how it differs from past tools and platforms; and to cover the main use cases. These topics mirror our early discussions around security analytics. Big data is a very new and very disruptive trend, so how we might use big data to help with security problems was interesting to the community as a whole. Answering questions about how to leverage virtually free NoSQL analytics tools to do a better job of detecting security events is important – both for what is possible and to provide a picture of where the industry is heading. The story behind the research But a funny thing happened during the research – during interviews people invariably wanted to know how it works within their environment. Many people did not want to just start evaluating security analytics options – they were keen to leverage existing investments and build on infrastructure they already own. The backstory is relevant because this ended up becoming three contiguous research projects, and then we massaged the content into this final paper to address the full breadth of questions. When I began this work a year ago I wanted to fully describe the skunkworks projects I was seeing at some small and mid-sized firms. Both security companies and motivated individuals were using multiple NoSQL variants to detect security problems, often either with a new approach or at a price point we had not seen before. Those trends are reflected in this research.
Along the way I spoke with 20 large enterprises, and I kept getting the same request: “We are interested in security analytics, but we want to blend both the data and analysis with existing investments”. Most of the time these firms were referring to SIEM, but occasionally they had data warehouses with other information they wished to reference as well. That is also reflected in the paper. But when I got to this point, things got a bit odd. Once our research papers are completed we see if companies are interested in licensing our research to educate employees, customers, or the larger IT community. The responses I got were, “This is not in line with our position”, “This research does not reflect what we see”, “This research does not differentiate our solution” and “Our SIEM was big data before there was big data”. The broader scope of this research generated a degree of negative feedback, which got me thinking I had totally missed the mark, asked the wrong questions, or simply talked to too few customers. I spent another 6 months going through new interviews with a broader set of questions, and speaking to more data architects, vendors, and would-be customers. Retracing my steps reaffirmed that the research was on target, and I feel this paper captures the market today. Customer interest and inquiries outpace what the vendor community is prepared to offer, and customers are asking for capabilities outside the vendor storylines. So this paper tells a decidedly different story than what you are likely to hear elsewhere. Recommendations First and foremost, this is a research paper to educate end users on what security analytics with big data is, the value it provides, and how to distinguish big data solutions from pretenders. That is its core value.
If you are going to “roll your own” big data security analytics cluster, this research provides a sample of what other firms are doing, architectures they use, and the underlying components they leverage to support their work. It will help you understand what types of data you probably already have at your disposal, and what observations you can derive from it. If you are looking to acquire a big data analytics solution, this research will help you understand potential risks in realizing your investment and help with rollout and integration. You can download a copy on its landing page: Security Analytics with Big Data. We hope you find this information helpful, and as always please ask questions or provide feedback on the blog.


RSA Conference Guide 2014 Deep Dive: Identity and Access Management

One of the biggest trends in security gets no respect at RSA. Maybe because identity folks still look at security folks cross-eyed. But this year things will be a bit different. Here’s why: The Snowden Effect Companies are (finally) dealing with the hazards of privilege – a.k.a. Privileged User Access. Yes, we hate the term “insider threat” – we have good evidence that external risks are the real issue. That said, logic does not always win out – many companies are asking themselves right now, “How can I stop a ‘Snowden Incident’ from happening at my company?” This Snowden Effect is getting traction as a marketing angle, and you will see it on the RSA Conference floor because people are worried about their dirty laundry going public. Aside from the marketing hype, we have been surprised by the zeal with which companies are now pursuing technology to enforce Privileged User Access policies. The privileged user problem is not new, but companies’ willingness to incur cost, complexity, and risk to address it is. Part of this is driven by auditors assigning higher risk to these privileged accounts (On a cynical note, we have to wonder, “What’s the matter, big-name audit firm? All out of easy findings?”). But sometimes the headline news does really scare the bejesus out of companies in that vertical (that’s right, we’re looking at you, retailers). Whatever the reason, companies and external auditors are waking up to privileged users as perhaps the largest catalyst in downside risk scenarios. Attackers go after databases because that’s where the data is (duh). The same goes for privileged accounts – that’s where the access is! But while the risk is almost universally recognized, what to do about it isn’t – aside from “continuous improvement”, because hey, everyone needs to pass their audit. 
One reason the privileged user problem has persisted so long is that the controls often reduce productivity of some of the most valuable users, drive up cost, and generally increase availability risk. Career risk, anyone? But that’s why security folks make the big bucks. High-probability events get the lion’s share of attention, but lower-probability gut-punch events like privileged user misuse have come to the fore. Buckle up! Nobody cares what your name is! Third-party identity services and cloud-based identity are gaining momentum. The need for federation (to manage customer, employee, and partner identities), and two-factor authentication (2FA) to reduce fraud are both powerful motivators. We expected last year’s hack of Mat Honan to start a movement away from passwords in favor of certificates and other better user authentication tools. But what we got instead was risk-based handling of requests on the back end. It is not yet the year of PKI, apparently. Companies are less concerned with logins and more concerned with request context and metadata. Does the user normally log in at this time? From that location? With that app? Is this a request they normally make? Is it for a typical dollar amount? A lot more is being spent on analytics to determine ‘normal’ behavior than on replacing identity infrastructure, and fraud analytics on the back end are leading the way. In fact precious little attention is being paid to identity systems on the front end – even payment processors are discussing third-party identity from Facebook and Twitter for authentication. What could possibly go wrong? As usual cheap, easy, and universally available trump security – for authentication tools, this time. To compensate, effort will need to be focused on risk-based authorization on the back end.
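The back-end, context-driven approach described above can be sketched as a simple risk score over request metadata. This is a hypothetical illustration, not any vendor's product: the profile fields, weights, and threshold are all invented assumptions, and real fraud analytics use statistical models rather than hand-tuned rules.

```python
# Hypothetical risk-based authorization sketch: score each request's
# context (location, device, amount) against the user's normal
# behavior, and step up authentication instead of hardening the login.

USUAL = {"country": "US", "device": "laptop-7", "typical_amount": 200}

def risk_score(request: dict, profile: dict) -> int:
    score = 0
    if request["country"] != profile["country"]:
        score += 40                                  # unusual location
    if request["device"] != profile["device"]:
        score += 30                                  # unrecognized device
    if request["amount"] > 5 * profile["typical_amount"]:
        score += 30                                  # atypical dollar amount
    return score

def decide(request: dict, profile: dict, threshold: int = 50) -> str:
    # Over the threshold: re-authenticate (step up) rather than deny outright.
    return "step-up" if risk_score(request, profile) >= threshold else "allow"

print(decide({"country": "US", "device": "laptop-7", "amount": 150}, USUAL))  # allow
print(decide({"country": "RO", "device": "unknown", "amount": 2500}, USUAL))  # step-up
```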


RSA Conference Guide 2014 Deep Dive: Application Security

With PoS malware, banking trojans, and persistent NSA threats the flavors of the month and getting all the headlines, application security seems to get overshadowed every year at the RSA Conference. Then again, who wants to talk about the hard, boring tasks of fixing the applications that run your business? We have to admit it’s fun to read about who the real hackers are, including selfies of the dorks apparently selling credit card numbers on the black market. Dealing with a code vulnerability backlog? Not so much fun. But very real and important trends are going on in application security, most of which involve “calling in the cavalry” – or more precisely outsourcing to people who know more about this stuff, to jumpstart application security programs. The Application Security Specialists Companies are increasingly calling in outside help to deal with application security, and it is not just the classic dynamic web site assessment and penetration testing. On the show floor you will see several companies offering cloud services for code scanning. You upload your code and associated libraries, and they report back on known vulnerabilities. Conceptually this sounds an awful lot like white-box scanning in the cloud, but there is more to it – the cloud services can do some dynamic testing as well. Some firms leverage these services before they launch public web applications, while others are responding to customer demands to prove and document code security assurance. In some cases the code scanning vendors can help validate third-party libraries – even when source code is not available – to provide confidence and substantiation for platform providers in the security of their foundations. Several small professional services firms are popping up to evaluate code development practices, helping to find bad code, and more importantly getting development teams pointed in the right direction.
Finally, there is a new trend in application vulnerability management – no, we are not talking about tools that scan for platform defects. The new approaches track vulnerabilities in much the same way we track general software defects, but with a focus on specific issues around security. Severity, path to exploit, line of code responsible, and calling modules that rely on defective code, are all areas where tools can help development teams prioritize security vulnerability fixes. Exposing Yourself At the beginning of 2013, several small application security gateway vendors were making names for themselves. Within a matter of months the three biggest were acquired (Mashery by Intel, Vordel by Axway, and Layer 7 by CA). Large firms quickly snapping up little firms often signals the end of a market, but in this case it is just the beginning – to become truly successful these smaller technologies need to be integrated into a broader application infrastructure suite. Time waits for no one, and we will see a couple new vendors on the show floor with similar models. You will also see a bunch of activity around API gateways because they serve as application development accelerators. The gateway provides base security controls, release management, and identity functions in a building block platform, on top of which companies publish internal systems to the world via RESTful APIs. This means an application developer can focus on delivery of a good user experience, rather than worrying extensively about security. Even better, a gateway does not care whether the developer is an employee or a third party. That plays into the trend of using third-party coders to develop mobile apps. Developers are compensated according to the number of users of their apps, and gateways track which app serves any given customer. This simple technology allows crowdsourcing apps, so we expect the phenomenon to grow over the next few years.
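The security-focused defect tracking described at the start of that section can be pictured as a small data structure. This is a sketch under stated assumptions – the fields mirror the attributes named in the text (severity, path to exploit, responsible line of code, calling modules), but the scoring rule and example findings are invented, not any specific tool's model.

```python
# Sketch of security-focused defect tracking: record the attributes the
# post names for each finding, and use them to order the fix backlog.
from dataclasses import dataclass, field

@dataclass
class SecurityDefect:
    title: str
    severity: int                 # e.g. a CVSS-like 0-10 rating
    source_file: str
    line: int                     # line of code responsible
    exploit_path: str             # how an attacker reaches the flaw
    callers: list = field(default_factory=list)  # modules relying on the code

    def priority(self) -> int:
        # Illustrative rule: more calling modules means a wider blast
        # radius, so bump the severity accordingly.
        return self.severity + len(self.callers)

backlog = [
    SecurityDefect("SQL injection", 9, "orders.py", 112, "public API", ["web", "batch"]),
    SecurityDefect("Weak hash",     5, "auth.py",    40, "internal only", ["web"]),
]
backlog.sort(key=SecurityDefect.priority, reverse=True)
print(backlog[0].title)  # SQL injection
```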
Bounty Hunters – Bug Style Several companies, most notably Google and Microsoft, have started very public “security bug bounty” programs and hackathons to incentivize professional third-party vulnerability researchers and hackers to find and report bugs for cash. These programs have worked far better than the companies originally hoped, with dozens of insidious and difficult-to-detect flaws disclosed quickly, before new code goes live. Google alone has paid out more than $1 million in bounties – their program has been so successful that they have announced they will quintuple rewards for bugs on core platforms. These programs tend to attract skilled people who understand the platforms and uncover things development teams were totally unaware of. Additionally, internal developers and security architects learn from attacker approaches. Clearly, as more software publishers engage the public to shake down their applications, we will see everyone jumping on this bandwagon – which will provide an opportunity for small services firms to help software companies set up these programs.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.