New Series: EMV, Tokenization, and the Changing Payment Space

October 1st, 2015, is the deadline for merchants to upgrade “Point of Sale” and “Point of Swipe” terminals to recommended EMV compliant systems. To quote Wikipedia, “EMV (Europay MasterCard Visa), is a technical standard for smart payment cards and for payment terminals and automated teller machines which can accept them.” These new terminals can validate an EMV-specific chip in a customer’s credit card on swipe, or validate a secure element in a mobile device when it is scanned by a terminal. The press is calling this transition “The EMV Liability Shift” because merchants who do not adopt the new standard for payment terminals are being told that they – not banks – will be responsible for fraudulent transactions. There are many possible reasons for this push.

But why should you care? I know some of you don’t care – or at least don’t think you should. Maybe your job does not involve payments, perhaps your company doesn’t have payment terminals, or you could be a merchant who only processes “card not present” transactions. But the reality is that mobile payments and their supporting infrastructure will be a key security battleground in the coming years.

Talking about the EMV shift and payment security is difficult; there is a lot of confusion about what this shift means, what security is really being delivered, and the real benefits for merchants. Some of the confusion stems from the press focusing on value statement marketing by card brands, rather than digging into what these specifications and rollouts really involve. Stated another way, the marketed consumer value seldom matches the business intent driving the effort. So we are kicking off this new research series to cover the EMV shift, its impact on security and operations for merchants, and what merchants need to do beyond the specifications for security and business continuity – as part of the shift and beyond.

Every research paper we write at Securosis has the core goal of helping security practitioners get their jobs done. It’s what we do. And that’s usually a clear task when we are talking about how to deploy DLP, what DAM can and cannot do, or how to get the most out of your SIEM platform. With this series it’s more difficult. First, payment terminals are not security appliances, but transaction processing devices which depend on security to work properly. The irony is that technologies which – from the outside – appear security-focused are often only partially about security. They are marketed as security solutions, but really intended to solve business problems or maintain competitive advantages. Second, the ecosystem is highly complex, with many different companies providing services along the chain, each having access to payment information. Third, we will discuss some security issues you probably haven’t considered – perhaps in the news or on the horizon, but likely not yet fully in your sphere of influence. Finally, many of the most interesting facets of this research, including details we needed to collect so we could write this series, are totally off the record. We will do our best to provide insights into issues merchants and payment service providers are dealing with behind the scenes (without specifically describing the scenarios that raised the issues) to help you make decisions on payment deployment options.
To amass sufficient background for this series we have spoken with merchants (both large and mid-sized), merchant banks, issuing banks, payment terminal manufacturers, payment gateway providers, card manufacturers, payment security specialists, and payment security providers. Each stakeholder has a very different view of the payment world and how they want it to work. We remain focused on helping end users get their (security) jobs done, but some of this research is background to help you understand how the pieces all fit together – and, just as importantly, the business issues driving these changes. Here is our preliminary outline:

The Stated Goals: We will set the stage by explaining what EMV is, and what the card brands are demanding of merchants. We will discuss how EMV and “smart card” technologies have changed the threat landscape in Europe and other parts of the world, and the card brands’ vision for the US. This is the least interesting part of the story, but it is necessary to understand the differences between what is being requested and what is being required – between security benefits and other things marketed as security benefits.

The Landscape: We will sketch out the complicated payment landscape and where the major players fit. We do not expect readers to know the difference between an issuing bank and a merchant bank, so we will briefly explain the major players (merchants, gateways, issuers, acquirers, processors, and affiliates), showing where data, tokens, and other encrypted bits move. We will introduce each party along with its role. Where appropriate we will share public viewpoints on how each player would like access to consumer and payment data for various business functions.

The Great EMV Migration: We will discuss the EMV-mandated requirements in some detail, the security problems they are intended to address, and how merchants should comply. We will examine some of the issues surrounding adoption, along with how deployment choices affect security and liability. We will also assess concerns over Chip & PIN vs. Chip & Signature, and why merchants and consumers should care.

The P2P Encryption Conundrum: We will consider P2P encryption and the theory behind it. We will examine the difference between theory and practice, specifically between acquirer-based encryption solutions and true P2P encryption, and the different issues when the endpoint is the gateway vs. the processor vs. the acquirer. We will explain why P2P is not part of the EMV mandate, show how these models create weak links in the chain – possibly creating liability for merchants – and how this creates opportunities for fraud and grey areas of responsibility.

The Tokens: Tokenization is a reasonably new subject in security circles, but it has demonstrated value for credit card (PAN) data security. With recent mobile payment solutions, we now see new types of tokens used to obfuscate account numbers and other pieces of financial data.
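To ground that last item before the series kicks off, here is a minimal sketch – our own illustration, not any card brand’s specification – of the vault model behind most PAN tokenization schemes: the token preserves format (and often the last four digits) so downstream systems keep working, while the real PAN lives only in a hardened vault.

```python
import secrets

# Hypothetical in-memory vault for illustration only -- a real tokenization
# service keeps this mapping in hardened, access-controlled storage.
_vault = {}

def tokenize(pan: str) -> str:
    """Swap a 16-digit PAN for a random token that keeps the last four
    digits, so receipts and customer-service lookups still work."""
    token = "9" + "".join(secrets.choice("0123456789") for _ in range(11)) + pan[-4:]
    _vault[token] = pan  # only the vault can reverse the mapping
    return token

def detokenize(token: str) -> str:
    """Recover the original PAN -- callable only by the trusted vault owner."""
    return _vault[token]

if __name__ == "__main__":
    t = tokenize("4111111111111111")
    print(t)  # e.g. 9xxxxxxxxxxx1111 -- worthless to a thief without the vault
    assert detokenize(t) == "4111111111111111"
```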


RSAC Guide 2015: P.Compliance.90X

Compliance. It’s a principal driver for security spending, and vendors know it. That’s why each year compliance plays a major role in vendor messaging on the RSA show floor. A plethora of companies claiming to be “the leader in enterprise compliance products” all market the same basic message: “We protect you at all levels with a single, easy-to-use platform” and “Our enterprise-class capabilities ensure complete data security and compliance.” Right.

The single topic that best exemplifies our fitness meme is compliance. Most companies treat compliance as the end goal: you hold meetings, buy software, and generate reports, so you’re over the finish line, right? Not so much. The problem is that compliance is supposed to be like a motivational poster on the break room wall, encouraging you to do better – not the point itself. Buying compliance software is a little like that time you bought a Chuck Norris Total Gym for Christmas. You were psyched for fitness and harbored subconscious dreams it would turn you into a Chuck Norris badass. I mean, c’mon, it’s endorsed by Chuck Friggin’ Norris! But it sat in your bedroom unused, right next to the NordicTrack you bought a few years earlier. By March you hadn’t lost any weight, and come October the only thing it was good for was hanging your laundry on, so your significant other posted it on Craigslist.

The other side of the compliance game is the substitution of certifications and policy development for the real work of reducing risk. PCI-DSS certification suggests you care about security but does not mean you are secure – the same way chugging down 1,000-calorie fruit smoothies makes you look like you care about fitness but won’t get you healthy. Fitness requires a balance of diet and exercise over a long period; compliance requires hard work and consistent management toward the end goal over years. Your compliance requirements may hinge on security, privacy, fraud reduction, or something else entirely, but success demands a huge amount of hard work.

So we chide vendors for their yearly claims that compliance is easy, and that the fastest way to get compliant is to buy their class-leading products. But this year we think it will be a little more difficult for vendors, because there is a new sheriff in town. No, it’s not Chuck Norris, but a new set of buyers. As with every period of disruptive innovation, developers are starting to play a key role in deciding which facilities are appropriate for newer technology stacks. Big Data, Cloud, Mobile, and Analytics are owned by the fitness freaks who build these systems. Think of them as the leaner, meaner P90X fitness crowd, working their asses off and seeing the results of new technologies. They don’t invest in fancy stuff that cannot immediately show its worth: anything that cannot both help productivity and improve reliability isn’t worth their time. Most of the value statements generated by the vendor hype machine look like Olivia Newton-John’s workout gear to this crowd – sorely out of date and totally inappropriate. Still, we look forward to watching these two worlds collide on the show floor.


Friday Summary: April 3, 2015: Getting back in

Running. I started running when I was 9. I used to tag along to exercise class at the local community college with my mom, and they always finished the evening with a couple of laps around the track. High school was track and cross country. College too. When my friends and I started to get really fast, there was the occasional taunting of rent-a-cops, and much hilarity during the chase, usually ending with the pursuers crashing into a fence we had neatly hopped over. Through my work career running was a staple, with fantastic benefits both for staying healthy and for washing away workday stresses. Various injuries and illnesses stopped that over the last few years, but recently I have been back at it. And it was … frigging awful and painful. Unused muscles and tendons screamed at me. But after a few weeks that went away, and then I started to enjoy the runs again. Now I find myself more buoyant during the day – better energy and just moving better. It’s a subtle thing, but being fit just makes you feel better in several ways, all throughout the day.

This has been true for several other activities of late – stuff I love to do, but for various reasons dropped. Target shooting is something I enjoy, but the restart was awful. You forget how critical it is to control your breathing. You forget the benefit of a quality load. You forget how the trigger pull feels and how to time the break. I grew up taking two or three fishing trips a year, but had pretty much stopped fishing for the last 10 years – a lack of time, good local places to go, and people to go with. You forget how much fun you can have sitting around doing basically nothing. And you forget how much skill and patience good fishermen bring to the craft.

In this year of restarts, the activity that surprised me most was coding. Our research has swung more and more into the security aspects of cloud, big data, and DevOps, but I can’t expect to fully understand them without going waist-deep and really using them. Like running, this restart was painful – but this was more like being punched in the mouth. I was terrible. I am good at learning new tools, languages, and environments, so I expected a learning curve. The really bad part is that much of what I used to do is now wrong. My old coding methods – setting up servers to be super-resilient, code re-use, aspects of object-oriented design, and just about everything having to do with old-school relational database design – need to be chucked out the window. I was not only developing slowly, but found myself throwing code out and reworking it to take advantage of new technologies. It would have been faster to learn Hadoop and Dynamo without my relational database background – I needed to start by unlearning decades of training.

But after the painful initial foray, once I got a handle on ways to use these new tools, I began to feel more comfortable. I got productive. I started seeing the potential of the new technologies, and how I should really apply security. Then I got happy! I’ve always been someone who just feels good when I produce something. But over and above that is something about the process of mastering new stuff and, despite taking some lumps, gaining confidence through understanding. Getting back in was painful, but now it feels good, and it is benefiting both my psyche and my research.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • In case you missed it, Dave Lewis, JJ, James Arlen, Rich, Mike, and Adrian posted some of our yearly RSA Conference preview on the RSAC Blog. We will post them and the remaining sections on the Securosis blog next week.
  • Mike on Endpoint Defense.

Favorite Securosis Posts
  • James Arlen: Firestarter: Using RSA. Crushing the rant on a Monday morning.
  • Adrian Lane: Securosis Guide: DevOpsX Games. Really funny post by Rich – despite being a sick puppy, he cranked out his best post of the year.
  • Mike Rothman: Network-based Threat Detection: Overcoming the Limitations of Prevention.

Other Securosis Posts
  • Incite 4/1/2015: Fooling Time.
  • New Paper! Endpoint Defense: Essential Practices.

Favorite Outside Posts
  • Adrian Lane: The PCI Council calls it quits. Very funny. The clarity of the message gave it away!
  • James Arlen: Pin-pointing China’s attack against GitHub. Wouldn’t be the first time an American company has been coerced by a foreign government. Itty Bitty Machines could tell a story or two.
  • Rich: Pin-pointing China’s attack against GitHub. This is a make-or-break moment for our government. If they don’t take action they will prove that China can blatantly attack US companies with impunity. This is historically unprecedented.
  • David Mortman: The ABC of ABC – An Analysis of Attribute-Based Credentials in the Light of Data Protection, Privacy and Identity.
  • Dave Lewis: The failure of the security industry.
  • Mike Rothman: Are you the most thrilling ride at the theme park? I’m not sure how Thom Langford made a drab theme park experience into our security reality, but he did. You should check it out.

Research Reports and Presentations
  • Endpoint Defense: Essential Practices.
  • Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
  • Security and Privacy on the Encrypted Network.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.
  • Secure Agile Development.
  • Trends in Data Centric Security White Paper.
  • Leveraging Threat Intelligence in Incident Response/Management.
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.

Top News and Posts
  • The Attack on GitHub Must Stop
  • Distrusting New CNNIC Certificates
  • Secrecy around police surveillance equipment proves a case’s undoing
  • How the NSA’s Firmware Hacking Works


Cracking the Confusion: Encryption Layers

Picture enterprise applications as a layer cake: applications sit on databases, databases on files, and files are mapped onto storage volumes. You can use encryption at each of these layers in your application stack: within the application, in the database, on files, or on storage volumes. Where in the stack you deploy the encryption engine largely determines both security and performance. Higher up the stack can offer more security, at a cost in complexity and performance. There is a similar tradeoff with encryption engine and key manager deployments: more tightly coupled systems offer less complexity, but also less security and reliability. Building an encryption system requires a balance between security, complexity, and performance. Let’s take a closer look at each layer and its tradeoffs.

Application Encryption

One of the more secure ways to encrypt application data is to collect it in the application, send it to an encryption server or appliance (or use an encryption library embedded in the application), and then store the encrypted data in a separate database. The application has full control over who sees what, and can secure data without depending on the security of the underlying database, file system, or storage volumes. The keys themselves might be on the encryption server, or could even be stored in yet another system. The separate key store increases security, simplifies management of multiple encryption appliances, and helps keep keys safe for data movement – backup, restore, and migration/synchronization to other data centers.

Database Encryption

Relational database management systems (RDBMS) typically have two encryption options: transparent and column. In our layer cake above, columnar encryption occurs as applications insert data into a database, whereas transparent encryption occurs as the database writes data out. Transparent encryption is applied automatically to data before it is stored at the file or disk layer. In this model encryption and key management happen behind the scenes, without the user’s knowledge and without application programming. The database management system handles encryption and decryption operations as data is read or written, ensuring all data is secured, and offering very good performance. When you need finer control over data access, you can encrypt single columns, or tables, within the database. This approach offers the advantage that only authenticated users of encrypted data are able to gain access, but requires changing database or application code to manage encryption operations. With either approach there is less burden on application developers to build a crypto system, but slightly less control over who can access sensitive data. Some third-party tools also offer transparent database encryption by automatically encrypting data as it is stored in files. These tools aren’t part of the database management system itself, so they can work with databases that don’t support TDE directly, and provide greater separation of duties for database administrators.
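To make the application-layer option concrete, here is a minimal sketch using the open source Python cryptography library. Key handling is deliberately oversimplified – in practice the key would come from a separate key manager or HSM, never from code on the same host as the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Simplification for illustration: a real deployment fetches this key
# from a key manager or HSM rather than generating it in process.
key = Fernet.generate_key()
engine = Fernet(key)

def encrypt_field(plaintext: str) -> bytes:
    """Encrypt a sensitive value before handing it to the database driver."""
    return engine.encrypt(plaintext.encode("utf-8"))

def decrypt_field(ciphertext: bytes) -> str:
    """Decrypt on read -- only code holding the key can recover the value."""
    return engine.decrypt(ciphertext).decode("utf-8")

# The database, file system, and storage layers only ever see ciphertext:
stored = encrypt_field("123-45-6789")
print(stored)                  # opaque bytes, safe to store or replicate
print(decrypt_field(stored))   # 123-45-6789
```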
File Encryption

Some applications, such as payment systems and web applications, do not use databases, and instead store sensitive data in files. Encryption is applied transparently as data is written to files. This type of encryption is offered as a third-party add-on to the file system, or embedded within the operating system. Encryption and decryption are transparent to both users and applications. Data is decrypted when a user requests a file, after they have authenticated to the system. If the user does not have permission to read the file, or has not provided proper credentials, they only get encrypted data. File encryption is commonly used to protect “data at rest” in applications that do not include encryption capabilities, including legacy enterprise applications and many big data platforms.

Disk/Volume Encryption

Many off-the-shelf disk drives and Storage Area Network (SAN) arrays include automatic data encryption. Encryption is applied as data is written to disk, and decrypted for authenticated users/apps when requested. Most enterprise-class systems hold encryption keys locally to support encryption operations, but rely on external key management services to manage keys and provide advanced key services such as key rotation. Volume encryption protects data in case drives are physically stolen. Authenticated users and applications are provided unencrypted copies of files and data.

Tradeoffs

In general, the further “up the stack” you deploy encryption, the more secure your data is. The price of that extra security is more difficult integration, usually in the form of application code changes. Ideally we would encrypt all data at the application layer and fully leverage user authentication, authorization, and business context to determine who can see sensitive data. In the real world the code changes required for this level of precise control often present insurmountable engineering challenges, or are cost prohibitive. Surprisingly, transparent encryption often performs faster than application-layer encryption, even with larger data sets. The tradeoff is moving high enough “up the stack” to address relevant threats, while minimizing the pain of integration and management. Later in this series we will walk you through the selection process in detail. Next up in this series: key management options.
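Since key management is up next in the series, a quick preview of the pattern the volume encryption paragraph hints at: local data-encrypting keys (DEKs) wrapped by a key-encrypting key (KEK) held in an external key manager. The KeyManager class below is a stand-in we invented for illustration; in practice it would be a KMIP server, cloud KMS, or HSM.

```python
from cryptography.fernet import Fernet

class KeyManager:
    """Stand-in for an external key manager (KMIP server, KMS, or HSM)."""
    def __init__(self):
        self._kek = Fernet(Fernet.generate_key())  # key-encrypting key
    def wrap(self, dek: bytes) -> bytes:
        return self._kek.encrypt(dek)      # encrypt the DEK for safe storage
    def unwrap(self, wrapped: bytes) -> bytes:
        return self._kek.decrypt(wrapped)  # recover the DEK when authorized

kms = KeyManager()

# The encryption engine generates a local data-encrypting key (DEK), uses it
# for bulk crypto, and stores only the wrapped copy alongside the data.
dek = Fernet.generate_key()
wrapped_dek = kms.wrap(dek)
ciphertext = Fernet(dek).encrypt(b"volume or file contents")
del dek  # the plaintext DEK never persists

# On read: unwrap the DEK via the key manager, then decrypt locally.
plaintext = Fernet(kms.unwrap(wrapped_dek)).decrypt(ciphertext)
assert plaintext == b"volume or file contents"
```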


Friday Summary: February 13, 2015

Welcome to the Friday the 13th edition of the Friday Summary! It has been a while since I wrote the Summary, so there is lots to cover …

My favorite external post this week is a research paper, Mongo Databases At Risk, outlining a very common trend among MongoDB users: not using basic user authentication to secure their databases. Well, that, and putting them on the Internet. On the default port. Does this sound like SQL Server circa 2003 to anyone else? One angle I found important was the number of instances discovered: nearly 40k databases. That is a freakin’ lot! Remember, this is just MongoDB, and just instances running on the Internet at the default port. Yes, it’s one of the top NoSQL platforms, but during our inquiries we spoke with 4 Hadoop users for every MongoDB user, and MongoDB was also behind Cassandra. I don’t know if anyone publishes download or usage numbers for the various platforms, but extrapolating from those numbers, there are a lot of NoSQL databases in use. Someone with more time on their hands might decide to scan the Internet for instances of the other platforms on their default ports (CouchDB 5984, Cassandra 9042, Redis 6379, Riak 8087; Hadoop exposes several, depending on the service). I would love to know what you find.
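Checking your own exposure takes about a dozen lines. A minimal sketch – the ports are the usual documented defaults, and you should only point this at hosts you are authorized to test:

```python
import socket

# Commonly documented default ports; verify against your own deployments.
DEFAULT_PORTS = {
    "MongoDB": 27017,
    "CouchDB": 5984,
    "Cassandra": 9042,
    "Redis": 6379,
    "Riak": 8087,
}

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something answers a TCP connect on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "db.example.com"  # hypothetical host you are authorized to test
for name, port in DEFAULT_PORTS.items():
    if is_listening(host, port):
        print(f"{name} default port {port} is reachable -- check authentication!")
```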
Back to security… I have had conversations with several firms trying to figure out how to monitor NoSQL usage; we know how to apply DAM principles to SQL, but MapReduce and other types of queries are much more difficult to parse. I expect several vendors to introduce basic monitoring for Hadoop in the next year, but it will take time to mature, and even more to cover other platforms. What I haven’t heard discussed is the easier – and no less pressing – issue of configuration and vulnerability assessment. The enterprise distributions provide best practices, but automated scans are scarce – and usually custom. That is a free hint for any security vendors looking for a quick way to help big data customers get secure.

Mobile security consumes much more of my time than it should. I geek out on it, often engaging Gunnar in conversation on everything from the inner workings of secure elements to the apps that make payments happen. And I read everything I can find. This week I ran across Why Banks Will Win the Battle for the Mobile Wallet, by John Gunn – the guy who runs the wonderfully helpful Twitter feed @AuthNews. But on this I think he has missed the point. Banks are not battling to win mobile wallets. In fact those I have spoken with don’t care about wallets. They care about transactions. And moving more transactions from cash to credit means a growing stream of revenue for merchant banks and payment processors, which makes them very happy. Wallets in and of themselves don’t foster adoption – as Google is well aware – and in fact many users don’t really trust wallets at all. What gets people to move from a plastic card or cash, at least in the US, is a combination of convenience and trust. Starbucks leveraged their brand affinity into seven million subscribers for their app, and an impressive 2.1 million transactions per week. Banks benefit directly when more transactions move away from cash, and they are happy to let others own the user experience.

But things get really interesting in overseas markets, which make US adoption of mobile payments look like a payments backwater. Nations without traditional banking or payment infrastructure can now move money in ways they previously could not, so adoption rates have been huge. Leveraging cellular infrastructure makes it faster and safer to move money, with fewer worries about carrying cash. Kenya – not often considered on the cutting edge of technology – had 25 million mobile payment users who moved $26 billion in 2014 via mobile payments and mobile money subscriptions. Sometimes technology really does make the world a better place. The banks don’t care which wallets, apps, technologies, or carriers win – they just want someone to make progress.

In January I normally publish my research calendar for the coming year. But Rich has been hogging the Friday Summary for weeks, so I finally get a chance to talk about what I am seeing and doing.

Tokenization: I am – finally – going to publish some thoughts on the latest trends in tokenization. I want to talk about changes in the technology, adoption on mobile platforms, how the latest PCI specification is changing compliance, and some customer use cases.

Risk-Based Authentication and Authorization: We see many more organizations looking at risk-based approaches to augment the security of web-based applications. Rather than rewrite applications, they use metadata, behavioral information, business context, and… wait for it… big data analytics to better determine the acceptability of a request. And it is often cheaper and easier to bolt this on externally than to change applications. Gunnar and I have wanted to write this paper for a year, and now we finally have the time.

Building a Security Analytics Platform: I have been briefed by many of the security analytics startups, and each is putting together some basic security analysis capabilities, usually built on big data databases. I have, in that same period, also spoken with many large enterprises who decided not to wait for the industry to innovate, and are building their own in-house systems. The last couple even asked me what I thought of certain architectural choices, and which data elements they should use as hash keys! So there is considerable demand for consumer education; I will cover both off-the-shelf and DIY options.

I am still on the fence about some secure code development ideas, so if you have an idea, let’s talk. Even the security vendors I have visited in the last year have pulled me aside to ask about transitioning to Agile, or how to fix a failed transition to Agile. Most want to know what


Monitoring the Hybrid Cloud: Migration Planning

We will wrap up this series with a migration path toward monitoring the hybrid cloud. Whether you choose to monitor the cloud services you consume, or go all the way and create your own SOC in the cloud, these steps will get you there. Let’s dive in.

Phase 1: Deploy Collectors

The first phase is to collect and aggregate the data. You need to decide how to deploy event collectors – including agents, ‘edge’ proxies, and reverse proxies – to gather information from cloud resources. Your goal is to gather events as quickly and easily as possible, so start with what you know. That basically means leveraging the capabilities of your current security solution(s) to get these new events into the existing system. The complexity is not in understanding these new data sources – flow data and syslog output are well understood. The challenge is adapting collection methods designed for on-premise services to a cloud model. If an agent or collector works with your cloud provider’s environment, either to consume cloud vendor logs or those created by your own cloud-based servers, you are in luck. If not, you will likely find yourself rerouting traffic to and/or from the cloud through a network proxy to capture events.

Depending on the type of cloud service (such as SaaS or IaaS) you will have various means to access event data (such as logs and API connectivity), as outlined in our solution architectures post. We suggest collecting data directly from the cloud provider whenever possible, because much of that data is unavailable from instances or applications running inside the cloud. Monitoring agents can be deployed in IaaS or private cloud environments, where you control the full stack. But in other cloud models, particularly PaaS and SaaS, agents are generally not viable. There you need to rely on proxies that can collect data from all types of cloud deployments, provided you can route traffic through their data-gathering choke points. It is decidedly suboptimal to insert choke points into your cloud network, but it may be necessary. Finally, you might instead be able to use remote API calls from an on-premise collector to pull events directly from your cloud provider. Not all cloud providers offer this access, and if they do you will likely need to code something yourself from their API documentation.

Once you understand what is available you can figure out whether your source provides sufficiently granular data. Each cloud provider/vendor API, and each event log, offers a slightly different set of events in a slightly different format. Be prepared to go back to the future – you may need to build a collector based on sample data from your provider, because not all cloud vendors/providers offer logs in syslog or a similarly convenient format. Also look for feed filter options to screen out events you are not interested in – cloud services are excellent at flooding systems with (irrelevant) data.

Our monitoring philosophy hasn’t changed: collect as much data as possible. Get everything the cloud vendor provides as the basis for security monitoring, then fill in the deficiencies with agents, proxy filters, and cloud monitoring services as needed. This is a very new capability, so you will likely need to build API interface layers to your cloud service providers.
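To make “build an API interface layer” concrete, here is a minimal sketch of a pull collector. We use AWS CloudTrail because its API is public and well documented; forward_to_siem is a placeholder for whatever ingest hook your monitoring system provides.

```python
import boto3  # AWS SDK for Python; assumes CloudTrail as the provider event API

def forward_to_siem(raw_json: str):
    """Placeholder ingest hook: filter or reformat here, then ship the
    event to your aggregation point (syslog, queue, SIEM connector)."""
    print(raw_json[:120])

def pull_events():
    """Page through recent CloudTrail management events via the pull API.
    Credentials come from the environment or instance profile, not code."""
    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
    token = None
    while True:
        kwargs = {"MaxResults": 50}
        if token:
            kwargs["NextToken"] = token
        response = cloudtrail.lookup_events(**kwargs)
        for event in response["Events"]:
            forward_to_siem(event["CloudTrailEvent"])  # raw JSON event record
        token = response.get("NextToken")
        if not token:
            break

if __name__ == "__main__":
    pull_events()
```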
Finally, keep in mind that using proxies and/or forcing cloud traffic through appliances at the ‘edge’ of your cloud is likely to require re-architecting both on-premise and cloud networks to funnel traffic in and out of your collection point. This also requires that disconnected devices (phones/tablets and laptops not on the corporate network) be configured to send traffic through the choke points/gateways, and that cloud services be configured to reject any direct access which bypasses these portals. If an inspection point can be bypassed it cannot effectively monitor security. Now that you have figured out your strategy and deployed basic collectors, it is time to integrate these new data sources into the monitoring environment.

Phase 2: Integrate and Monitor Cloud-based Resources

To integrate these cloud-based event sources into the monitoring solution you need to decide which deployment model best fits your needs. If you already have an on-premise SOC platform and supporting infrastructure, it may make sense to simply feed the events into your existing SIEM, malware detection, or other monitoring systems. But a few considerations might change your decision.

Capacity: Ensure the existing system can handle your anticipated event volume. SaaS and PaaS environments can be noisy, so expect a significant uptick in event volume, and account for the additional storage and processing overhead.

Push vs. Pull: Log Management and SIEM systems can collect events as remote systems and agents push them: the collector grabs the events, possibly performs some preprocessing, and forwards the stream to the main aggregation point. But what if you cannot run a remote agent to push the data to you? Most cloud events must be pulled from the cloud service via an active API request. While pull requests can be secured via HTTPS/SSL or even VPN connections, this doesn’t happen magically – a program or script must initiate the transfer, and that program must supply credentials or identity tokens to the cloud service. You need to know whether your current system is capable of initiating the pull request, and whether it can securely manage the remote API service credentials needed to collect data.

Data Retention: Cloud services require network access, so you need to plan for when your connection is down – especially given the frequency of DoS attacks and network service outages. Make sure you understand the impact if you cannot collect remote events for a time. If the connection goes down, how long can relevant security data be retained or buffered? You don’t want to lose that data. The good news is that many PaaS and IaaS platforms provide easy mechanisms to archive event feeds to long-term storage, to avoid event data loss, but


Monitoring the Hybrid Cloud: Technical Considerations

New platforms for hybrid cloud monitoring bring both new capabilities and new challenges. We have already discussed some differences between monitoring the different cloud models, and some of the deployment options available. This post dives into technical considerations for these new hybrid platforms, highlighting potential benefits and issues for data security, privacy, scalability, security analytics, and data governance. As cool as a ‘CloudSOC’ sounds, there are technical nuances which need to be factored into your decision and selection processes. There are also data privacy issues, because some types of information fall under compliance and jurisdictional regimes. Cloud computing and service providers can be an opportunity to control infrastructure costs more effectively, but service model costs are calculated differently than for on-premise systems, so you need to understand the computing and storage characteristics of the SOC platform in detail to understand where you are spending money. Let’s jump into the key areas where you need to focus.

Data Security

As soon as event data is moved out of one cloud – say, Salesforce – into another, you need to consider the sensitivity of the data, which forces a decision on how to handle security. Using SSL or similar technology to secure the data in motion is the easy part – what to do with the data at rest, once it reaches the CloudSOC, is far more challenging. You can get some hints from folks who have already grappled with this question: security monitoring providers. These services either build their own private clouds to accommodate and protect client data, or leverage yet another IaaS or PaaS cloud to provide the infrastructure to store it. Many of you will find the financial and scalability advantages of storing cloud data in a cloud service more attractive than moving all that collected data back to an on-premise system.

Regardless of whether you build your own CloudSOC or use a managed service, a key part of your security strategy will be the Service Level Agreements (SLAs) you establish with your providers. These agreements specify the security controls implemented by the provider, and if something is not specified in that agreement the provider has no obligation to provide it. An SLA is a good starting point, but be wary of unspecified areas – those are where gaps are most likely to emerge. Begin by comparing what the provider does with what you do internally today. We recommend you ask questions and get clear answers on every topic you don’t understand, because once you execute the agreement you have no further leverage to negotiate. And if you are running your own, make sure you carefully plan out your cloud security model to take advantage of what your IaaS provider offers. You may decide some data is too sensitive to be stored in the cloud without obfuscation (encryption) or removal (typically redaction, tokenization, or masking).
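To illustrate the removal option, here is a sketch of the kind of masking/pseudonymization pass a CloudSOC feed might apply before event data is shipped off. The field names and salt handling are our own invention for illustration, not any product’s implementation.

```python
import hashlib
import re

SALT = b"per-deployment secret"  # illustrative; manage it like any other key

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable pseudonym so analytics such as
    counting and correlation still work without exposing the raw PII."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_event(event: dict) -> dict:
    """Scrub an event record before it is shipped to the CloudSOC."""
    cleaned = dict(event)
    if "user" in cleaned:
        cleaned["user"] = pseudonymize(cleaned["user"])
    if "message" in cleaned:
        cleaned["message"] = EMAIL.sub("<email-redacted>", cleaned["message"])
    return cleaned

print(mask_event({"user": "alice@example.eu",
                  "message": "login for alice@example.eu succeeded"}))
```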
Data Privacy and Jurisdiction

Over and above basic data security for logs and event data, some countries have strict laws about how Personally Identifiable Information (PII) may be collected and stored, and some even require that PII not leave its country of origin – even encrypted. If you do business in these countries your team likely already understands the regulations, but for a hybrid SOC deployment you also need to understand the locations of your primary and backup cloud data centers, and their regional laws as well. This can be incredibly confusing – particularly when data protection laws conflict between countries. Once you understand the requirements and where your cloud (including CloudSOC) providers are located, you can determine which security controls you need. Once again data encryption addresses many legal requirements, and data masking and tokenization services can remove sensitive data without breaking your applications or impairing security analytics. The key is to know where the data will be stored, so you can figure out the right mix of controls.

Automation and Scalability

If you have ever used Dropbox, Salesforce, or Google Docs, you know how easy it is to store data in the cloud. When you move beyond SaaS to PaaS and IaaS, you will find it is just as easy to spin up whole clusters of new applications and servers with a few clicks. Security monitoring, deploying collectors, and setting up proxies for traffic filtering all likewise benefit from the cloud’s ease of use and agility. You can automate the deployment of collectors, agents, or other services, and agents can be embedded in the start-up process for new instances or technology stacks. Verification and discovery of services running in your cloud can be performed with a single API call. Automation is a hallmark of the cloud, so you can script pretty much anything you need.

But getting started with basic collection is a long way from getting a CloudSOC into production. As you move to a production environment you will be constructing and refining initialization and configuration scripts to launch services, and defining templates which dictate when collectors or analytics instances are spun up or shut down via the magic of autoscaling. You will be writing custom code to call cloud APIs to collect events, and writing event filters if the API does not offer suitable options. It is basically back to the future, hearkening back to the early days of SIEM, when you spent as much time writing and tuning collectors as analyzing data. Archiving is also something you’ll need to define and implement. The cloud offers very granular control of which data gets moved from short-term to long-term storage, and when. In the long run cloud models offer huge benefits for automation and on-demand scalability, but there are short-term setup and tuning costs to get a CloudSOC working the way you need. A managed CloudSOC service will do much of this for you, at additional cost.

Other Considerations

Management Plane: The management plane for cloud services is a double-edged sword; IT admins now have the power to automate


Monitoring the Hybrid Cloud: Solution Architectures

The good old days: monitoring employees on company-owned PCs, accessing the company data center across corporate networks. You knew where everything was and who was using it. And the company owned it all, so you could pretty much dictate where and how you performed security monitoring. With cloud and mobile? Not so much. To take advantage of cloud computing you will need to embrace new approaches to collecting event data if you hope to continue security monitoring. The sources, and the information they contain, are different. Equally important – although initially more subtle – is how you deploy monitoring services. Deployment architectures are critical to deploying and scaling any Security Operations Center, defining how you manage the security monitoring infrastructure and what event data you can capture. Furthermore, how you deploy the SOC platform affects performance and data management. There are a variety of architectures, intended to meet the use cases outlined in our last post. So now we can focus on alternative ways to deploy collectors in the cloud, and the possibility of using a cloud security gateway as a monitoring point. Then we will look at the basic cloud deployment models for a SOC architected to monitor the hybrid cloud, focusing on how to manage pools of event data coming from distributed environments – both inside and outside the organization.

Data collection strategies

API: Automated, elastic, and self-service are all intrinsic characteristics of cloud computing. Most cloud service providers offer a management dashboard for convenience (and unsophisticated users), but advanced cloud features are typically exposed only via scripts and programs. Application Programming Interfaces (APIs) are the primary interfaces to cloud services; they are essential for configuring a cloud environment, configuring and activating monitoring, and gathering data. These APIs can be called from any program or service, running either on-premise or within a cloud environment. So APIs are the cloud equivalent of platform agents, providing many of the same capabilities in the cloud, where a ‘platform’ becomes a virtualized abstraction and a traditional agent wouldn’t really work. API calls return data in a variety of ways, including the familiar syslog format, JSON files, and various formats specific to different cloud providers. Regardless, aggregating data returned by API calls is a key new source of information for monitoring hybrid clouds (see the normalizer sketch after the next section).

Cloud Gateways: Hybrid cloud monitoring often hinges on a gateway – typically an appliance deployed at the ‘edge’ of the network to collect events. Leveraging the existing infrastructure for data management and SOC interfaces, this approach requires all cloud usage to first authenticate to the cloud gateway, which acts as a choke point; after inspection, traffic is passed on to the appropriate cloud service. The resulting events are then passed to event collection services, comparable to on-premise infrastructure. This enables tight integration with existing security operations and monitoring platforms, and the initial authentication allows all resource requests to be tied to specific user credentials.
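Because each provider’s API returns events in its own shape, most hybrid deployments end up writing small normalizers to feed the existing SOC pipeline. A minimal sketch – the JSON field names are modeled loosely on CloudTrail-style events, and are assumptions to adapt rather than a standard:

```python
import json
from datetime import datetime, timezone

def to_syslog(raw: str, app: str = "cloud-collector") -> str:
    """Normalize a provider's JSON event into a syslog-style line so the
    existing pipeline can consume it unchanged. The field names below are
    assumptions -- map them to whatever your provider actually returns."""
    event = json.loads(raw)
    ts = event.get("eventTime") or datetime.now(timezone.utc).isoformat()
    user = event.get("userIdentity", {}).get("userName", "unknown")
    action = event.get("eventName", "unknown-action")
    source = event.get("sourceIPAddress", "-")
    # <134> = syslog priority: facility local0, severity informational
    return f"<134>{ts} {app} user={user} action={action} src={source}"

sample = ('{"eventTime":"2014-11-05T16:20:00Z","eventName":"CreateUser",'
          '"userIdentity":{"userName":"admin"},"sourceIPAddress":"10.1.2.3"}')
print(to_syslog(sample))
```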
Cloud 2 Cloud: A newer option is to have one cloud service – in this case a monitoring service – act as a proxy to another cloud service, tapping into user requests and parsing out relevant data, metadata, and application calls. Similar to using a managed service for email security, traffic passes through a cloud provider which parses incoming requests before they are forwarded to internal or cloud applications. This model can incorporate mobile devices and events – which otherwise never touch on-premise networks – by passing their traffic through an inspection point before it reaches cloud service providers such as Salesforce and Microsoft Azure. This enables the SOC to provide real-time event analysis and alert on policy violations, with collected events forwarded to the SOC (either on-premise or in the cloud) for storage. In some cases, by proxying traffic these services can also add security – such as checks against on-premise identity stores to ensure employees are still employed before granting access to cloud resources.

App Telemetry: Like cloud providers, mobile carriers, mobile OS providers, and handset manufacturers don’t provide much in the way of logging capabilities. Mobile platforms are intended to be secured from outsiders, and not to leak information between apps. But we are beginning to see mobile apps developed specifically for corporate use, as well as company-specific mobile app containers on devices, which send basic telemetry back to the corporate customer to provide visibility into device activity. Some telemetry feeds include basic data about the device, such as jailbreak detection, while others append user ‘fingerprints’ to authorize requests for remote application access. These capabilities are compiled into individual mobile apps or embedded into app containers which protect corporate apps and data. This capability is very new, and will eventually help detect fraud and misuse on mobile endpoints.

Agents: You are highly unlikely to deploy agents in SaaS or PaaS clouds, but there are cases where agents have an important role to play in hybrid clouds, private clouds, and Infrastructure as a Service (IaaS) clouds – generally when you control the infrastructure. Because network architecture is virtualized in most clouds, agents offer a way to collect events and configuration information when traditional visibility and taps are unavailable. Agents can also call out to cloud APIs to check application deployment.

Supplementary Services: Cloud SOCs often rely on third-party intelligence feeds to correlate hostile acts or actors attacking other customers, helping you identify and block attempts to abuse your systems. These are almost always cloud-based services providing threat intelligence, malware analysis, or policies derived from analysis across a broad range of sites, in order to detect unwanted behavior patterns. This type of threat intelligence supplements hybrid SOCs and helps organizations detect potential attacks faster, but it is not itself a SOC platform. You can refer to our other threat intelligence papers to dig deeper into this topic.

Deployment Strategies

The following are all common ways to deploy event collectors, monitoring systems, and operations centers to support security monitoring:

On-premise: We will forgo


Securing Enterprise Applications [New White Paper]

Securing enterprise applications is hard work. These are complex platforms, with lots of features and interfaces, reliant on database support, and often deployed across multiple machines. They leverage code provided by the vendor, as well as hundreds – if not thousands – of supporting code modules produced specifically for the customer’s needs. This makes every environment a bit different, and acceptable application behavior unique to every company. This is problematic, because during our research we found that most organizations rely on security tools which work on the network fringes, around applications. These tools cannot see inside an application to fully understand its configuration and feature set, nor do they understand application-layer communication. This approach is efficient because a generic tool can see a wide variety of threats, but it misses subtle misuse and most serious misconfigurations.

We decided to discuss some of our findings. But constructing an entire application security program for enterprise applications would require 100 pages of research, and still fail to provide complete coverage. Many firms have had enterprise applications from Oracle and SAP deployed for a decade or more, so we decided to focus on areas where the security problems have changed, or where tools and technology have superseded approaches that were perfectly acceptable just a couple of years ago. This research paper spotlights these problem areas and offers specific suggestions for closing the security gaps. Here is an excerpt:

Supply chain management, customer relationship management, enterprise resource management, business analytics, and financial transaction management are all multi-billion dollar application platforms unto themselves. Every enterprise depends on them to orchestrate core business functions, and spends tens of millions of dollars on software and support. We are beyond explaining why enterprise applications need security to protect these investments – it is well established that insiders and persistent adversaries target these applications. Companies invest heavily in these applications, hardware to run them, and teams to keep them up and running. They perform extensive risk analysis on their business implications and the costs of downtime, and in many cases their security investments are a byproduct of these risk profiles. Application security trends in the 1-2% range of total application investment, so we cannot say large enterprises don’t take security seriously – they spend millions and hire dedicated staff to protect these platforms. That said, their investments are not always optimal – enterprises may bet on solutions with limited effectiveness, without a complete understanding of the available options. It is time for a fresh look.

In this research paper, Building an Enterprise Application Security Program, we take a focused look at the major facets of an enterprise application security program, and make practical suggestions on how to improve the efficiency and effectiveness of your security program. Our goal is to discuss specific security and compliance use cases for large enterprise applications, highlight gaps, and explain some application-specific tools to address these issues. This is not an exhaustive examination of enterprise application security controls, but rather a spotlight on common deficiencies in the core pillars of security controls and products. We would like to thank Onapsis for licensing this research.
They reached out to us on this topic and asked to back this effort, which we are very happy about, because support like this enables us to keep doing what we do. You can download a copy of the research in our research library or download it directly: Securing Enterprise Applications. As always, if you have questions or comments, please drop us a line!


Friday Summary: November 21, 2014

Thus ends the busiest four weeks I have had since joining Securosis. A few conferences – AWS re:Invent was awesome – a few client on-site days, meetings with some end customers, and about a half dozen webcasts have left me gasping for air. We all need a little R&R, and the holidays are approaching, so Firestarters and blog posts will be a bit sporadic. Technically it is still Friday, so here goes today’s (slightly late) Summary.

I am ignorant of a lot of things, and I thought this one was odd enough that I would ask more knowledgeable people in the community for assistance in explaining how this works. The story starts like this: A few months ago the new Lamborghini Huracan was introduced. Being a bit of a car weenie I went to the web site – http://huracan.lamborghini.com – in a Safari browser to see some pictures of the new car. Nice! I wish I could afford one – not that I would drive it much. I would probably just stare at it in the garage. Regardless, I had never been to the Lamborghini web site before. So I was a little surprised the next morning when I opened up a new copy of Firefox, which tried to make a request to http://media.lamborghini.com. WTF? As I started to dig into this, I saw it was a repeating pattern. I visited http://www.theabsolutesound.com, and when I opened my newly installed Aviator browser it tried to connect to http://media.theabsolutesound.com. Again, I had never been to that site in the Aviator browser, but had recently visited it from Firefox. Amazon Web Services, Tech Target, and a dozen or so requests to connect to media.sitename.com or files.sitename.com popped up. But the capper was a few weeks later, when my computer tried to send the same request to media.theabsolutesound.com from an email client! That is malware behavior – likely adware! So is this behavior part of an evercookie Flash/Java exploit through persistent data? I had Java disabled and Flash set to prompt before launch, so I thought a successful cross-browser attack via those persistence methods was unlikely. Of course it is entirely possible that I missed something. Anyway, if you know about this and would care to explain it – or have a link – I would appreciate an education on current techniques for browser/user tracking. I am clearly missing something. As a side note, as I pasted the huracan.lamborghini.com link into my text editor to write this post, an Apple services daemon tried to send a packet to gs-loc.apple.com with that URL in it. Monitor much? If you don’t already run an outbound firewall like Little Snitch, I highly recommend it. It is a great way to learn who sends what where, and to completely block lots of tracking nonsense.

Puppy names. Everybody does it: before you get a new puppy, you discuss puppy names. Some people even buy a book, looking for that perfect cute name to give their snugly little cherub. They fail to understand their mistake until after the puppy is in their home: they named the puppy from the perspective of pre-puppy normal life. Let me save you some trouble and provide some good puppy names, ones more appropriate for the post-puppy honeymoon: “Outside!” – the winner by a landslide. “Drop-It!” “Stinky!” “No, no, no!” “Bad!” “Not again!” “Stop!” “OWW, NO!” “Little bastard” “Come here!” “Droptheshoe!” “AAhhhhrrrr” “F&%#” or the swear word of your choice. Trust me on this – the puppy is going to think one of these is their name anyway, so starting from this list saves you time. My gift to you.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian on Secure Agile Development, via SANS.
  • Adrian on Pragmatic WAF Management.

Securosis Posts
  • Ticker Symbol: HACK.
  • Incite 11/12/2014: Focus.
  • Building an Enterprise Application Security Program: Recommendations.
  • Changing Pricing (for the first time ever).
  • Monitoring the Hybrid Cloud: Emerging SOC Use Cases.

Favorite Outside Posts
  • Mike Rothman: Open Whisper Systems partners with WhatsApp to provide end-to-end encryption. The future will be encrypted. Even WhatsApp! Much to the chagrin of the NSA…
  • Rich: Secure Agile Development. Think like a Developer. Maybe I have been spending too much time coding lately, but I love this concept. Needless to say we have lately been spending a lot of time on this area.
  • Adrian Lane: Experimental Videogame Consoles That Let You Make One Move a Day. In a world of instant gratification, getting back to a slow pace is refreshing and awesome.

Research Reports and Presentations
  • Securing Enterprise Applications.
  • Secure Agile Development.
  • Trends in Data Centric Security White Paper.
  • Leveraging Threat Intelligence in Incident Response/Management.
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • The Security Pro’s Guide to Cloud File Storage and Collaboration.
  • The 2015 Endpoint and Mobile Security Buyer’s Guide.
  • Analysis of the 2014 Open Source Development and Application Security Survey.
  • Defending Against Network-based Distributed Denial of Service Attacks.
  • Reducing Attack Surface with Application Control.

Top News and Posts
  • Microsoft patches critical bug that affects every Windows version since 95
  • Google Removes SSLv3 Fallback Support From Chrome
  • Nasty Security Bug Fixed in Android Lollipop 5.0
  • Amazon Web Services releases key management service
  • U.S. Marshals Using Fake, Airplane-based Cell Towers
  • Facebook’s ‘Privacy Basics’ Is A Privacy Guide You May Actually Want To Read
  • Hiding Executable Javascript in Images That Pass Validation
  • UPnP Devices Used in DDoS Attacks


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.