Defending Against DDoS: Mitigations

Our past two posts discussed network-based Distributed Denial of Service (DDoS) attacks and the tactics used to magnify those attacks to unprecedented scale and volume. Now it’s time to wrap up this series with a discussion of defenses. To understand what you’re up against, let’s take a small excerpt from our Defending Against Denial of Service Attacks paper. First the obvious: you cannot just throw bandwidth at the problem. Your adversaries likely have an unbounded number of bots at their disposal and are getting smarter at using shared virtual servers and cloud instances to magnify the bandwidth at their disposal. So you can’t just hunker down and ride it out. They likely have a bigger cannon than you can handle. You need to figure out how to deal with a massive amount of traffic, and separate good traffic from bad while maintaining availability.

Your first option is to leverage existing network/security products to address the issue. As we discussed in our introduction, that is not a good strategy because those devices aren’t built to withstand the volumes or tactics involved in a DDoS. Next, you could deploy a purpose-built device on your network to block DDoS traffic before it melts your networks. This is certainly an option, but if your inbound network pipes are saturated, an on-premise device cannot help much – applications will still be unavailable. Finally, you can front-end your networks with a service to scrub traffic before it reaches your network. But this approach is no panacea either – it takes time to move traffic to a scrubbing provider, and during that window you are effectively down. So the answer is likely a combination of these tactics, deployed in a complementary fashion to give you the best chance to maintain availability.

Do Nothing

Before we dig into the different alternatives, we need to acknowledge one other choice: doing nothing. The fact is that many organizations only go through an exercise to determine what protections are needed after being hit by a DDoS attack. Given the investment required for any of the alternatives listed above, you have to weigh the cost of downtime against the cost of potentially stopping the attack. This is another security tradeoff. If you are a frequent or high-profile target then doing nothing isn’t an option. If you got hit with a random attack – which happens when attackers are testing new tactics and code – and you have no reason to believe you will be targeted again, you may be able to get away with doing nothing. Of course you could be wrong, in which case you will suffer more downtime. You need to both make sure all the relevant parties are aware of this choice, and manage expectations so they understand the risk you are accepting in case you do get attacked again. We will just say we don’t advocate this do-nothing approach, but we do understand that tough decisions need to be made with scarce resources. Assuming you want to put some defenses in place to mitigate the impact of a DDoS, let’s work through the alternatives.

DDoS Defense Devices

These appliances are purpose-built to deal with DoS attacks, and include both optimized IPS-like rules to prevent floods and other network anomalies, and simple web application firewall capabilities to protect against application layer attacks. Additionally, they feature anti-DoS features such as session scalability and embedded IP reputation capabilities, in order to discard traffic from known bots without full inspection.
To understand the role of IP reputation, let’s recall how email connection management devices enabled anti-spam gateways to scale up to handle spam floods. It is computationally expensive to fully inspect every inbound email, so immediately dumping messages from known bad senders focuses inspection on email that might be legitimate, keeping mail flowing. The same concept applies here. Keep the latency inherent in checking a cloud-based reputation database in mind – you will want the device to aggressively cache bad IPs to avoid a lengthy cloud lookup for every incoming session. For kosher connections which pass the reputation test, these devices additionally enforce limits on inbound connections, govern the rate of application requests, control clients’ request rates, and manage the total number of connections allowed to hit the server or load balancer sitting behind them. Of course these limits must be defined incrementally to avoid shutting down legitimate traffic during peak usage. Speed is the name of the game for DDoS defense devices, so make sure yours have sufficient headroom to handle your network pipe. Over-provision to ensure they can handle bursts and keep up with the increasing bandwidth you are sure to bring in over time.

CDN/Web Protection Services

Another popular option is to front-end web applications with a content delivery network or web protection service. This tactic only protects the web applications you route through the CDN, but it can scale to handle very large DDoS attacks in a cost-effective manner. Though if the attacker targets other addresses or ports on your network, you’re out of luck – they aren’t protected. DNS servers, for instance, aren’t covered. We find CDNs effective for handling network-based DDoS in smaller environments with a small external web presence. There are plenty of other benefits to a CDN, including caching and shielding your external IP addresses. But for stopping DDoS attacks a CDN is a limited answer.

External Scrubbing

The next level up the sophistication (and cost) scale is an external scrubbing center. These services allow you to redirect all your traffic through their network when you are attacked. The switch-over tends to be based on either a proprietary switching protocol (if your perimeter devices or DDoS defense appliances support the carrier’s signaling protocol) or a BGP request. Once the determination has been made to move traffic to the scrubbing center, there will be a delay while the network converges, before you start receiving clean traffic through a tunnel from the scrubbing center. The biggest question with a scrubbing center is when to move the traffic. Do it too soon and your resources stay
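To make the reputation-check-plus-rate-limiting approach described above concrete, here is a minimal sketch of the per-connection logic such a device applies. It is illustrative only – the reputation lookup, cache TTL, and rate thresholds are hypothetical placeholders, not any vendor’s actual implementation.

```python
import time
from collections import defaultdict, deque

BAD_IP_CACHE_TTL = 300      # seconds to cache "known bad" verdicts locally (assumed value)
MAX_REQS_PER_MINUTE = 120   # per-client request ceiling (assumed value)

bad_ip_cache = {}                 # ip -> expiry timestamp for cached "bad" verdicts
request_log = defaultdict(deque)  # ip -> timestamps of recent requests

def reputation_lookup(ip):
    """Placeholder for the (slower) cloud reputation query discussed above."""
    return "unknown"  # substitute a real reputation provider here

def allow_connection(ip):
    now = time.time()
    # 1. Check the local cache first, to avoid a lengthy cloud lookup for every session.
    expiry = bad_ip_cache.get(ip)
    if expiry and expiry > now:
        return False
    # 2. Fall back to the cloud reputation service, and cache bad verdicts aggressively.
    if reputation_lookup(ip) == "bad":
        bad_ip_cache[ip] = now + BAD_IP_CACHE_TTL
        return False
    # 3. For connections that pass the reputation test, enforce per-client rate limits.
    recent = request_log[ip]
    while recent and recent[0] < now - 60:
        recent.popleft()
    if len(recent) >= MAX_REQS_PER_MINUTE:
        return False
    recent.append(now)
    return True
```

Real appliances do this in hardware at line rate, of course; the point is simply the ordering – cheap cached checks first, expensive lookups second, rate limits only for traffic that passes.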


NoSQL Security 2.0 [New Series] *updated*

NoSQL – both the technology and the industry – has taken off. We are past the point where we can call big data a fad, and we recognize that we are staring straight into the face of the next generation of data storage platforms. About two years ago we started the first Securosis research project on big data security, and a lot has changed since then. At that point many people had heard of Hadoop, but could not describe what characteristics made big data different from relational databases – other than storing a lot of data. Now there is no question that NoSQL – as a data management platform – is here to stay; enterprises have jumped into large scale analysis projects with both feet, and people understand the advantages of leveraging analytics for business, operations, and security use cases. But as with all types of databases – and make no mistake, big data systems are databases – high quality data produces better analysis results. Which is why, in the majority of cases we have witnessed, a key ingredient is sensitive data. It may be customer data, transactional data, intellectual property, or financial information, but it is a critical ingredient. It is not really a question of whether sensitive data is stored within the cluster – more one of which sensitive data it contains. Given broad adoption, rapidly advancing platforms, and sensitive data, it is time to re-examine how to secure these systems and the data they store.

But this paper will be different than the last one. We will offer much more on big data security strategies in addition to tools and technologies. We will spend less time defining big data and more looking at trends. We will offer more explanation of security building blocks, including data encryption, logging, network encryption, and access controls/identity management in big data ecosystems. We will discuss the types of threats to big data and look at some of the use cases driving security discussions. And just like last time, we will offer a frank discussion of limitations in platforms and vendor offerings, which leave holes in security or fail to mesh with the inherent performance and scalability of big data.

One question keeps coming up from enterprise customers and security vendors alike: they repeatedly ask for a short discussion of data-centric security, so this paper provides one. Over the last year I have gotten far fewer questions on how to protect a NoSQL cluster, and far more on how to protect data before it is stored into the cluster. This was a surprise, and it is not clear from my conversations whether it is because users simply don’t trust the big data technology, because they worry about data propagation, because they don’t feel they can meet compliance obligations, or because they are worried about the double whammy of big data atop cloud services – all these explanations are plausible, and they have all come up. But regardless of driver, companies are looking for advice around encryption, and wondering whether tokenization and masking are viable alternatives for their use cases. The nature of the questions tells me that is where the market is looking for guidance, so I will cover both cluster security and data-centric security approaches. Here is our current outline:

  • Big Data Overview and Trends: This post will provide a refresher on what big data is, how it differs from relational databases, and how companies are leveraging its intrinsic advantages. We will also provide references on how the market has changed and matured over the last 24 months, as this bears on how to approach security.
  • Big Data Security Challenges: We will discuss why big data is different architecturally and operationally, and how the platform bundles and approaches differ from traditional relational databases. We will discuss what traditional tools, technologies, and security controls are present, and how usage of these tools differs in big data environments.
  • Big Data Security Approaches: We will outline the approaches companies take when implementing big data security programs, as reference architectures. We will outline walled-garden models, cluster security approaches, data-centric security, and cloud strategies.
  • Cluster Security: An examination of how to secure a big data cluster. This will be a threat-centric examination of how to secure a cluster from attackers, rogue admins, and application programmers.
  • Data (Centric) Security: We will look at tools and technologies that protect data regardless of where it is stored or moved, for use when you don’t trust the database or its repository.
  • Application Security: An executive summary of application security controls and approaches.
  • Big Data in Cloud Environments: Several cloud providers offer big data as part of Platform or Infrastructure as a Service offerings. Intrinsic to these environments are security controls offered by the cloud vendor, offering optional approaches to securing the cluster and meeting compliance requirements.
  • Operational Considerations: Day-to-day management of the cluster is different than management of relational databases, so the focus of security efforts changes too. This post will examine how daily security tasks change and how to adjust operational controls and processes to compensate. We will also offer advice on integration with existing security systems such as SIEM and IAM.

As with all our papers, you have a voice in what we cover. So I would like feedback from readers, particularly on whether you want a short section on application-layer security as well. It is (tentatively) included in the current outline. Obviously this would be a brief overview – application security itself is a very large topic. That said, I would like input on that and any other areas you feel need addressing.
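As a small taste of the data-centric approach discussed above – protecting fields before they ever reach the cluster – here is a minimal sketch. The field names, masking rules, and in-memory token vault are hypothetical placeholders for illustration only; a real deployment would use a hardened tokenization service or format-preserving encryption, not this toy mapping.

```python
import secrets

# Hypothetical token vault mapping surrogate tokens back to original values.
# In practice this lives in a separate, tightly controlled system, not in process memory.
token_vault = {}

def tokenize(value):
    """Replace a sensitive value with a random surrogate before it enters the cluster."""
    token = secrets.token_hex(8)
    token_vault[token] = value
    return token

def mask_record(record, sensitive_fields=("ssn", "credit_card", "email")):
    """Return a copy of the record that is safe to store and analyze; sensitive fields are tokenized."""
    safe = dict(record)
    for field in sensitive_fields:
        if field in safe:
            safe[field] = tokenize(safe[field])
    return safe

# Usage: tokenize before insertion, so the NoSQL cluster never sees the raw values.
doc = {"customer_id": 1187, "ssn": "078-05-1120", "purchase_total": 42.17}
print(mask_record(doc))
```

The design point is that analytics on the cluster still work (joins and aggregates on the surrogate values behave the same), while the cluster itself never holds the sensitive originals.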


Booth Babes Be Gone

OK. I have changed my tune. I have always had a laissez-faire attitude toward booth babes. I come from the school of what works. And if booth babes generate leads, of which some statistically result in deals, I’m good. Mr. Market says that if something works, you keep doing it. And when it stops working you move on to the next tactic. Right? Not so much. Chenxi Wang and Zenobia Godschalk posted a thought-provoking piece about why it’s time to grow up. As people and as a business. This quote from Sonatype’s Debbie Rosen sums it up pretty well: this behavior is a “lazy way of marketing” – “this happens when you do not have any creative or otherwise more positive ways of getting attention.” I agree with Debbie. But there are a lot of very bad marketers in technology and security. Getting attention, for these simpletons, is about getting a louder bullhorn. Creativity is hard. Hiring models is easy.

What’s worse is that I have had attractive technical product managers and SEs, who happen to be female, working at my company, and they were routinely asked to bring over a technical person to do the demo. It was just assumed that an attractive female wouldn’t have technical chops. And that’s what is so troubling about continuing to accept this behavior. I have daughters. And I’m teaching my girls they can be anything they want. I would be really happy if they pursued technical careers, and I am confident they will be attractive adults (yes, I’ll own my bias on that). Should they have to put up with this nonsense? I say not. Even better, the post calls for real change. Not bitching about it on Twitter. Writing blog posts and expressing outrage on social media alone won’t work. We need to make this issue a practical, rather than a rhetorical, one. Those of us who are in positions of power, those of us in sales, marketing, and executive positions, need to do something real to effect changes. I still pray at the Temple of Mr. Market. And that means until the tactic stops working, there will be no real change. So if you work for a vendor, make it clear that booth babes make you uncomfortable, and that it’s just wrong. Take a stand within your own company. And if they don’t like it, leave. I will personally do whatever I can to get you a better job if it comes to that. If you work for an end user, don’t get scanned at those booths. And don’t buy products from those companies. Vote with your dollars. That is the only way to effect real, sustainable change. Money talks. We live in an age of equality. It is time to start acting that way. If a company wants to employ a booth babe, at least provide babes of both genders. I’m sure there are a bunch of lightly employed male actors and models in San Francisco who would be happy to hand out cards and put asses in trade show theater seats.


Incite 4/2/2014: Disruption

The times they are a-changin’. Whether you like it or not. Rich has hit the road, and has been having a ton of conversations about his Future of Security content, and I have adapted it a bit to focus on the impact of the cloud and mobility on network security. We tend to get one of three reactions:

Excitement: Some people rush up at the end of the pitch to learn more. They see the potential and need to know how they can prepare and prosper as these trends take root.

Confusion: These folks have a blank stare through most of the presentation. You cannot be sure if they even know where they are. You can be sure they have no idea what we are talking about.

Fear: These folks don’t want to know. They like where they are, and don’t want to know about potential disruptions to the status quo. Some are belligerent in telling us we’re wrong. Others are more passive-aggressive, going back to their office to tell everyone who will listen that we are idiots.

Those categories more or less reflect how folks deal with change in general. There are those who run headlong into the storm, those who have no idea what’s happening to them, and those who cling to the old way of doing things – actively resisting any change to their comfort zone. I’m not judging any of these reactions. How you deal with disruption is your business. But you need to be clear which bucket you fit into. You are fooling yourself and everyone else if you try to be something you aren’t. If you don’t like to be out of your comfort zone, then don’t be. The disruptions we are talking about will be unevenly distributed for years to come. There are still jobs for mainframe programmers, and there will be jobs for firewall jockeys and IPS tuners for a long time. Just make sure the organization where you hang your hat is a technology laggard. Similarly, if you crave change and want to accelerate disruption, you need to be in an environment which embraces that – an organization that takes risks and understands that not everything works out. We have been around long enough to know we are at the forefront of a major shift in the technology landscape. It is the last shift of this magnitude I expect to see during my working career. I am excited. Rich is excited, and so is Adrian. Of course that’s easy for us – due to the nature of our business model we don’t have as much at stake. We are the proverbial chickens, contributing eggs (our research) to the breakfast table. You are the pig, contributing the bacon. It’s your job on the line, not ours.

–Mike

Photo credit: “Expect Disruption” originally uploaded by Brett Davis

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and... hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

March 24 – The End of Full Disclosure
March 19 – An Irish Wake
March 11 – RSA Postmortem
Feb 21 – Happy Hour – RSA 2014
Feb 17 – Payment Madness
Feb 10 – Mass Media Abuse
Feb 03 – Inevitable Doom
Jan 27 – Government Influence
Jan 20 – Target and Antivirus
Jan 13 – Crisis Communications

2014 RSA Conference Guide

In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway.
Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Defending Against Network Distributed Denial of Service Attacks
  • Magnification
  • The Attacks
  • Introduction

Advanced Endpoint and Server Protection
  • Quick Wins
  • Detection/Investigation
  • Prevention
  • Assessment
  • Introduction

Newly Published Papers
  • Reducing Attack Surface with Application Control
  • Leveraging Threat Intelligence in Security Monitoring
  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing
  • What CISOs Need to Know about Cloud Computing

Incite 4 U

The good old days of the security autocrat: At some point I will be old and retired, drinking fruity drinks with umbrellas in them, and reminiscing about the good old days when security leaders could dictate policy and shove it down folks’ throats. Yeah, that lasted a few days, before those leaders were thrown out the windows. The fact is that autocrats can be successful, but usually only right after a breach, when a quick cleanup and attitude adjustment is needed – at any other time that act wears thin quickly. But as Dave Elfering points out, the rest of the time you need someone competent, mindful, diligent, well-spoken, and business savvy. Dare I say it, a Pragmatic CSO. Best of all, Dave points out that folks who will succeed leading security teams need to serve the business, not have fixed best practices in mind which they adhere to rigidly. Flexibility to business needs is the name of the game. – MR

Throwing stones: I couldn’t agree more with Craig Carpenter, who writes in Dark Reading that folks need to Be Careful Beating Up Target. It has become trendy for every vendor providing alerts via a management console to talk about how they address the Target issue: missing alerts. But as Craig explains, the fact is that Target had as much data as they needed. It looks like a process failure at a busy time of year, relying on mostly manual procedures to investigate alerts. This can (and does) happen to almost every company. Don’t fall into the trap of thinking you’re good. If you haven’t had a breach, chalk it up to being lucky. And that’s okay! Thinking that it can’t happen to you is a sure sign of imminent


Breach Counters

The folks at the Economist (with some funding from Booz Allen Hamilton, clearly doing penance for bringing Snow into your Den) have introduced the CyberTab cyber crime cost calculator. And no, this isn’t an April Fool’s joke. The Economist is now chasing breaches and throwing some cyber around. Maybe they will sponsor a drinking game at DEFCON or something. It will calculate the costs of a specific cyber attack – based on your estimates of incident-response and business expenses, and of lost sales and customers – and estimate your return on prevention. Basically they built a pretty simple model (PDF) that gives you guidelines for estimating the cost of an attack. It’s pretty standard stuff, including items such as the cost of lost IP and customer data. They also provide a model to capture the direct costs of investigation and clean-up. You can also try to assess the value of lost business – always a slippery exercise.

You can submit data anonymously, and presumably over time (with some data collection) you should be able to benchmark your losses against other organizations. So you can brag to your buddies over beers that you lost more than they did. The data will also provide fodder for yet another research report to keep the security trade rags busy cranking out summary articles. Kidding aside, I am a big fan of benchmarks, and data on the real costs of attacks can help substantiate all the stuff we security folks have been talking about for years.

Photo credit: “My platform is bigger than yours” originally uploaded by Alberto Garcia
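For a sense of the arithmetic behind this kind of calculator, here is a minimal sketch. The cost categories mirror the ones described above, but the field names and the example figures are hypothetical – the CyberTab model itself is more detailed than this.

```python
def breach_cost(incident_response, business_expenses, lost_sales, lost_customer_value):
    """Total cost of a single attack, summed across the broad categories described above."""
    return incident_response + business_expenses + lost_sales + lost_customer_value

def return_on_prevention(attack_cost, expected_attacks_per_year, annual_prevention_spend):
    """Rough 'return on prevention': avoided losses relative to what prevention costs."""
    avoided = attack_cost * expected_attacks_per_year
    return (avoided - annual_prevention_spend) / annual_prevention_spend

# Hypothetical numbers purely for illustration:
cost = breach_cost(250_000, 100_000, 400_000, 150_000)   # 900,000 total
print(cost, return_on_prevention(cost, 0.5, 300_000))     # 0.5 => prevention "pays back" 50%
```

The benchmarking value comes from everyone estimating the same categories the same way, which is exactly why anonymous submission and aggregation matter.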


Defending Against DDoS: Magnification

As mentioned in our last post, the predominant mechanism of network-based DDoS attacks involves flooding the pipes with standard protocols like SYN, ICMP, DNS, and NTP. But that’s not enough, so attackers now take advantage of weaknesses in the protocols to magnify the impact of their floods by an order of magnitude. This makes each compromised device far more efficient as an attack device, and allows attackers to scale attacks past 400Gbps (as recently reported by CloudFlare). Only a handful of organizations in the world can handle an attack of that magnitude, so DDoS + reflection + amplification is a potent combination.

Fat Packets

Attackers increasingly tune the size of their packets to their desired outcome. For example, simple SYN packets can crush the compute capabilities of network/security devices. Combining small SYNs with larger SYN packets can also saturate the network pipe, so we often see them combined in today’s DDoS attacks.

Reflection + Amplification

The first technique used to magnify a DDoS attack is reflection. This entails sending requests to a large number of devices (think millions), spoofing the origination IP address of a target site. The replies to those millions of requests are reflected back to the target. The UDP-based protocols used in reflection attacks don’t require handshaking to establish new sessions, so they are spoofable. The latest wave of DDoS attacks uses reflected DNS and NTP traffic to dramatically scale the volume of traffic hitting targets. Why those two protocols? Because they provide good leverage for amplifying attacks – DNS and NTP responses are typically much bigger than requests. DNS can provide about 50x amplification because responses are that much larger than requests. And the number of open DNS resolvers which respond to any DNS request from any device makes this an easy and scalable attack. Until the major ISPs get rid of these open resolvers, DNS-based DDoS attacks will continue. NTP has recently become a DDoS protocol of choice because it offers almost 200x magnification. This is thanks to a protocol feature: clients can request a list of the last 600 IP addresses to access a server. To illustrate the magnitude of magnification, the CloudFlare folks reported that the attack used 4,529 NTP servers, running on 1,298 different networks, each sending about 87Mbps to the victim. The resulting traffic totaled about 400Gbps. Even more troubling is that all those requests (to 4,500+ NTP servers) could be sent from one device on one network. Even better, other UDP-based protocols offer even greater levels of amplification. An SNMP response can be 650x the size of a request, which could theoretically be weaponized to create 1Gbps+ DDoS attacks. Awesome.

Stacking Attacks

Of course none of these techniques exists in a vacuum, so sometimes we will see them pounding a target directly, while other times attackers combine reflection and amplification to hammer a target. All the tactics in our Attacks post are in play, and taken to a new level with magnification. The underlying issue is that these attacks are enabled by sloppy network hygiene on the part of Internet service providers, who allow spoofed IP addresses for these protocols and don’t block flood attacks. These issues are largely beyond the control of a typical enterprise target, leaving victims with little option but to respond with a bigger pipe to absorb the attack. We will wrap up tomorrow with a look at the options for mitigating these attacks.
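The leverage is easy to quantify. Here is a minimal sketch of the arithmetic using the figures cited above; the request and response byte sizes are illustrative assumptions rather than measured values.

```python
def amplification_factor(request_bytes, response_bytes):
    """How much bigger the reflected response is than the spoofed request."""
    return response_bytes / request_bytes

def aggregate_attack_gbps(reflectors, mbps_per_reflector):
    """Total traffic hitting the victim when many reflectors each send a modest stream."""
    return reflectors * mbps_per_reflector / 1000.0

# An NTP response of ~44,000 bytes to a ~220-byte request gives roughly 200x leverage
# (sizes assumed for illustration; the post cites ~200x for NTP and ~50x for DNS).
print(amplification_factor(220, 44_000))    # ~200x

# The CloudFlare attack cited above: 4,529 NTP servers at ~87 Mbps each.
print(aggregate_attack_gbps(4_529, 87))     # ~394 Gbps, i.e. the ~400 Gbps reported
```

The key takeaway is that the attacker only has to generate the small request side of that ratio; the reflectors and the amplification factor do the rest.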


Defending Against DDoS: Attacks

As we discussed in our Introduction to Defending Against Network-based Distributed Denial of Service Attacks, DDoS is a blunt force instrument for many adversaries. So organizations need to remain vigilant against these attacks. There is not much elegance in a volumetric attack – adversaries impact network availability by consuming all the bandwidth into a site and/or by knocking down network and security devices, overwhelming their ability to handle the traffic onslaught. Today’s traditional network and security devices (routers, firewalls, IPS, etc.) were not designed to handle these attacks. Nor were network architectures built to easily decipher attack traffic and keep legitimate traffic flowing. So an additional layer of products and services has emerged to protect networks from DDoS attacks. But first things first. Before we dig into ways to deal with these attacks, let’s understand the types of attacks and how attackers assemble resources to blast networks to virtual oblivion.

The Attacks

The first category of DDoS attacks is the straightforward flood. Attackers use tools that send requests using specific protocols or packets (SYN, ICMP, UDP, and NTP are the most popular) but don’t acknowledge the responses. If enough attack computers send requests to a site, its bandwidth can quickly be exhausted. Even if bandwidth is sufficient, on-site network and security devices need to maintain session state while continuing to handle additional (legitimate) inbound session requests. Despite their simplicity, floods continue to be a very effective tactic for overwhelming targets. Increasingly we see the DNS infrastructure targeted by DDoS attacks. This prevents the network from successfully routing traffic from point A to point B, because the map is gone. As with floods, attackers can overwhelm the DNS by blasting it with traffic, especially because DNS infrastructure has not scaled to keep pace with overall Internet traffic growth. DNS has other frailties which make it an easy target for DDoS. Like the shopping cart and search attacks we highlighted for Application DoS, legitimate DNS queries can also overwhelm the DNS service and knock down a site. These attacks target weaknesses in the DNS system, where a single request for resolution can trigger 4-5 additional DNS requests. This leverage can overwhelm domain name servers. We will dig into magnification tactics later in this series. Similarly, attackers may request addresses for hosts that do not exist, causing the targeted servers to waste resources passing on the requests and polluting caches with garbage to further impair performance. Finally, HTTP continues to be a popular target for floods and other application-oriented attacks, taking advantage of inherent protocol weaknesses. We discussed slow HTTP attacks in our discussion of Application Denial of Service, so we won’t rehash the details here, but any remediations for volumetric attacks should alleviate slow HTTP attacks as well.

Assembling the Army

To launch a volumetric attack an adversary needs devices across the Internet to pound the victim with traffic. Where do these devices come from? If you were playing Jeopardy the correct response would be “What is a bot network, Alex?” Consumer devices continue to be compromised and monetized at an increasing rate, driven by increasingly sophisticated malware and the lack of innovation in consumer endpoint protection. These compromised devices generate the bulk of DDoS traffic.
Of course attackers need to be careful – Internet Service Providers are increasingly sensitive to consumer devices streaming huge amounts of traffic at arbitrary sites, and take devices off the network when they find violations of their terms of service. Bot masters use increasingly sophisticated algorithms to control their compromised devices, to protect them from detection and remediation. Another limitation of consumer devices is their limited bandwidth, particularly upstream. Bandwidth continues to grow around the world, but DDoS attackers still hit capacity constraints. DDoS attackers like to work around these limitations of consumer devices by compromising servers to blast targets instead. Given the millions of businesses with vulnerable Internet-facing devices, it tends to be unfortunately trivial for attackers to compromise some. Servers tend to have much higher upstream bandwidth, so they are better at serving up malware, commanding and controlling bot nodes, and launching direct attacks. Attackers are currently moving a step beyond conventional servers, capitalizing on cloud services to change their economics. Cloud servers – particularly Infrastructure as a Service (IaaS) servers – are inherently Internet-facing and often poorly configured. And of course cloud servers have substantial bandwidth. For network attacks, a cloud server is like a conventional server on steroids – DDoS attackers see major gains in both efficiency and leverage. To be fair, the better-established cloud providers take great pains to identify compromised devices and notify customers when they notice something amiss. You can check out Rich’s story for how Amazon proactively notified us of a different kind of issue, but they do watch for traffic patterns that indicate misuse. Unfortunately, by the time misuse is detected by a cloud provider, server owner, or other server host, it may be too late. It doesn’t take long to knock a site offline. And attackers without the resources or desire to assemble and manage botnets can just rent them. Yes, a number of folks offer DDoS as a service (DDoSaaS, for the acronym hounds), so it couldn’t be easier for attackers to harness the resources to knock down a victim. And it’s not expensive: according to McAfee, DDoS services cost from $2 per hour for short attacks, up to $1,000 to take a site down for a month. It is a bit scary to think you could knock down someone’s site for 4 hours for less than a cup of coffee. But when you take a step back and consider the easy availability of compromised devices, servers, and cloud servers, DDoS is a very easy service to add to an attacker’s arsenal. Our next post will discuss tactics for magnifying the impact of a DDoS attack – including reflection and amplification – to make attacks an order of magnitude more effective.
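As a small illustration of what the flood described above looks like from the victim’s side, here is a minimal sketch that counts half-open TCP connections on a Linux host; a sustained spike is one crude signal of a SYN flood eating session state. It assumes the Linux /proc/net/tcp format and an arbitrary threshold, and purpose-built DDoS defenses obviously do far more than this.

```python
# Minimal sketch: count half-open (SYN_RECV) TCP connections on a Linux host.
# State code "03" in /proc/net/tcp corresponds to TCP_SYN_RECV.
ALERT_THRESHOLD = 500   # assumed value; tune for your environment

def count_half_open():
    half_open = 0
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as f:
                next(f)                      # skip the header line
                for line in f:
                    fields = line.split()
                    if len(fields) > 3 and fields[3] == "03":
                        half_open += 1
        except FileNotFoundError:
            continue
    return half_open

if __name__ == "__main__":
    n = count_half_open()
    msg = f"{n} half-open connections"
    if n > ALERT_THRESHOLD:
        msg += " - possible SYN flood"
    print(msg)
```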


Security Sharing

I really like that some organizations are getting more open about sharing information regarding their security successes and failures. Prezi comes clean about getting pwned as part of their bug bounty program. They described the bug, how they learned about it, and how they fixed it. We can all learn from this stuff.

Facebook talked about their red team exercise last year, and now they are talking about how they leverage threat intelligence. They describe their 3-tier architecture to process intel and respond to threats. Of course they have staff to track down issues as they are happening, which is what really makes the process effective. Great alerts with no response don’t really help. You can probably find a retailer to ask about that… I also facilitated a CISO roundtable where a defense sector attendee offered to share his indicators with the group via a private email list. So clearly this sharing thing is gaining some steam, and that is great.

So why now? What has changed that makes sharing information more palatable? Many folks would say it’s the only way to deal with advanced adversaries. Which is true, but I don’t think that’s the primary motivation. It certainly got the ball rolling, and pushed folks to want to share. But it has typically been general counsels and other paper pushers preventing discussion of security issues and sharing of threat information. My hypothesis is that these folks finally realized they have very little to lose by sharing. Companies have to disclose breaches, so that’s public information. Malware samples and the associated indicators of attack provide little to no advantage to the folks holding them close to the vest. By the time anything gets shared, the victim organization has already remediated the issue and put workarounds in place. I think security folks (and their senior management) finally understand that. Or at least are starting to, because you still see folks who will only share on ‘private’ fora or within very controlled groups. Of course there are exceptions. If an organization can monetize the data, either by selling it or using it to hack someone else (yes, that happens from time to time), they aren’t sharing anything. But in general we will see much more sharing moving forward. Which is great. I guess it is true that everything we need to know we learned in kindergarten.

Photo credit: “Sharing” originally uploaded by Toban Black


Friday Summary: March 28, 2014—Cloud Wars

Begun, the cloud war has. We have been talking about cloud computing for a few years now on this blog, but in terms of market maturity it is still early days. We are really only entering the equivalent of the second inning of a much longer game – it won’t be over for a long time, and things are just now getting really interesting. In case you missed it, the AWS Summit began this week in San Francisco, with Amazon announcing several new services and advances. But the headline of the week was Google’s announced price cuts for their cloud services:

Google Compute Engine is seeing a 32 percent reduction in prices across all regions, sizes and classes. App Engine prices are down 30 percent, and the company is also simplifying its price structure. The price of cloud storage is dropping a whopping 68 percent to just $0.026/month per gigabyte and $0.02/month per gigabyte/DRA. At that price, the new pricing is still lower than the original discount available for those who stored more than 4,500TB of data in Google’s cloud.

Shortly thereafter Amazon countered with their own price reductions – something we figured they were prepared to do, but didn’t intend to announce during the event. Amazon has been more focused on methodically delivering new AWS functionality, outpacing all rivals by a wide margin. More importantly, Amazon has systematically removed impediments to enterprise adoption around security and compliance. But while we feel Amazon has a clear lead in the market, Google has been rapidly improving. Our own David Mortman pointed out several more interesting aspects of the Google announcement, lost in the pricing war noise: “The thing isn’t just the lower pricing. It’s the lower pricing with automatic ‘reserve instances’ and the managed VM offering so you can integrate Google Compute Engine (GCE) and Google App Engine. Add in free git repositories for managing the GCE infrastructure and support for doing that via github – we’re seeing some very interesting features to challenge AWS. GOOG is still young at offering this as an external service, but talk about giving notice…” Competition is good! This all completely overshadowed Cisco’s plans to pour $1b into an OpenStack-based “Network of Clouds”. None of this is really security news, but doubling down on cloud investments and clearly targeting DevOps teams with new services make it clear where vendors think this market is headed. And Google’s “Nut Shot” shows that the battle is really heating up. On to the Summary, where several of us had more than one favorite external post:

Favorite Securosis Posts

Adrian Lane: Incite 3/26/2014: One Night Stand. All I could think of when I read this was Rev. Horton Heat’s song “Eat Steak”.
Mike Rothman: Firestarter: The End of Full Disclosure.

Other Securosis Posts

Mike’s Upcoming Webcasts.
Friday Summary: March 21, 2014 – IAM Mosaic Edition.

Favorite Outside Posts

Gal Shpantzer: Why Google Flu is a failure: the hubris of big data.
Adrian Lane: Canaries are Great!
David Mortman: Primer on Showing Empathy in the Tech Industry.
Gunnar: Making Sure Your Security Advice and Decisions are Relevant. “Information security professionals often complain that executives ignore their advice. There could be many reasons for this. One explanation might be that you are presenting your concerns or recommendations in the wrong business context. You’re more likely to be heard if you relate the risks to an economic moat relevant to your company.”
Gunnar: Cyberattacks Give Lift to Insurance. The cybersecurity market is growing: “The Target data breach was the equivalent of 10 free Super Bowl ads”.
Mike Rothman: You’ve Probably Been Pouring Guinness Beer The Wrong Way Your Whole Life. As a Guinness lover, this is critical information to share. Absolutely critical.
Mike Rothman: Data suggests Android malware threat greatly overhyped. But that won’t stop most security vendors from continuing to throw indiscriminate Android FUD.

Research Reports and Presentations

Reducing Attack Surface with Application Control.
Leveraging Threat Intelligence in Security Monitoring.
The Future of Security: The Trends and Technologies Transforming Security.
Security Analytics with Big Data.
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7.
Eliminate Surprises with Security Assurance and Testing.
What CISOs Need to Know about Cloud Computing.
Defending Against Application Denial of Service Attacks.
Executive Guide to Pragmatic Network Security Management.

Top News and Posts

Apple and Google’s wage-fixing. Not security, but interesting.
Google Announces Massive Price Drops for Cloud.
Cisco plans $1B investment in global cloud infrastructure.
Microsoft Security Advisory: Microsoft Word Under Siege.
Chicago’s Trustwave sued over Target data breach.

Blog Comment of the Week

This week’s best comment goes to Marco Tietz, in response to Friday Summary: IAM Mosaic Edition. “Thanks Adrian, it looks like you captured the essence of the problem. IAM is very fragmented and getting everything to play together nicely is quite challenging. Heck, just sorting it out corp internal is challenging enough without even going to the Interwebs. This is clearly something we need to get better at, if we are serious about ‘The Cloud’.”


Analysis of Visa’s Proposed Tokenization Spec

Visa, Mastercard, and Europay – together known as EMVCo – published a new specification for Payment Tokenisation this month. Tokenization is a proven security technology, which has been adopted by a couple hundred thousand merchants to reduce PCI audit costs and the security exposure of storing credit card information. That said, there is really no tokenization standard, for payments or otherwise. Even the PCI-DSS standard does not address tokenization, so companies have employed everything from hashed credit card (PAN) values (craptastic!) to very elaborate and highly secure random value tokenization systems. This new specification is intended both to raise the bar on shlock home-grown token solutions and, more importantly, to address fraud with existing and emerging payment systems. I don’t expect many of you to read 85 pages of token system design to determine what it really means, whether there are significant deficiencies, or whether these are the best approaches to solving payment security and fraud issues, so I will summarize here. But I expect this specification to last, so if you build tokenization solutions for a living you had best get familiar with it. For the rest of you, here are some highlights of the proposed specification.

As you would expect, the specification requires the token format to be similar to credit card numbers (13-19 digits) and to pass a LUHN check. Unlike the financial tokens used today – and at odds with the PCI specification, I might add – these tokens can be used to initiate payments. Tokens are merchant or payment network specific, so they are only relevant within a specific domain. For most use cases the PAN remains private between issuer and customer; the token becomes a payment object shared between merchants, payment processors, the customer, and possibly others within the domain. There is an identity verification process to validate the requestor of a token each time a token is requested. The type of token generated varies based on risk analysis – higher risk factors mean a low-assurance token! When tokens are used as payment objects, there are “Data Elements” – think of them as metadata describing the token – to buttress security. This includes a cryptographic nonce, payment network data, and the token assurance level.

Each of these points has ramifications across the entire tokenization ecosystem, so your old tokenization platform is unlikely to meet these requirements. That said, they designed the specification to work within today’s payment systems while addressing near-term emerging security needs. Don’t let the misspelled title fool you – this is a good specification! Unlike the PCI’s “Tokenization Guidance” paper from 2011 – rumored to have been drafted by VISA – this is a really well thought out document. It is clear whoever wrote this has been thinking about tokenization for payments for a long time, and has done a good job of providing functions to support all the use cases the specification needs to address. There are facilities and features to address PAN privacy, mobile payments, repayments, EMV/smartcard, and even card-not-present web transactions. And it does not address one single audience to the detriment of others – the needs of all the significant stakeholders are addressed in some way. Still, NFC payments seem to be the principal driver – the process and data elements really only gel when considered from that perspective. I expect this standard to stick.
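To illustrate the format requirement – a 13-19 digit value that passes a LUHN check – here is a minimal sketch of generating such a surrogate. This only shows the format math and is not drawn from the specification itself; a real tokenization system also maintains the token-to-PAN vault, domain restrictions, and assurance metadata described above, and must ensure tokens cannot collide with live PANs.

```python
import random

def luhn_sum(digits):
    """LUHN sum over a full number, doubling every second digit from the right."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total

def generate_token(length=16):
    """Random numeric token whose final check digit makes it pass a LUHN check."""
    payload = [random.randint(0, 9) for _ in range(length - 1)]
    check = (10 - luhn_sum(payload + [0]) % 10) % 10
    return "".join(map(str, payload + [check]))

token = generate_token()
print(token, luhn_sum([int(c) for c in token]) % 10 == 0)  # True: token passes LUHN
```

Keeping tokens card-shaped is what lets them flow through existing payment plumbing unchanged, which is exactly why the specification insists on it.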


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.