Wednesday, April 16, 2014

Incite 4/16/2014: Allergies

By Mike Rothman

It was a crummy winter. Cold. Snowy. Whiplash temperature swings. Over the past few weeks, when ATL finally seemed to warm up for spring (and I was actually in town), I rejoiced. One of the advantages of living a bit south is the temperate weather from mid-February to late November.

But there is a downside. The springtime blooming of the flowers and trees is beautiful, and brings the onslaught of pollen. For a couple weeks in the spring, everything is literally green. It makes no difference what color your car is – if it’s outside for a few minutes it’s green. Things you leave outside (like your deck furniture and grill), green. Toys and balls the kids forget to put back in the garage when they are done. Yup, those are green too. And not a nice green, but a fluorescent type green that reminds you breathing will be a challenge for a few weeks.

Love is not a strong enough word when discussing pollen

Every so often we get some rain to wash the pollen away. And the streams and puddles run green. It’s pretty nasty.

Thankfully I don’t have bad allergies, but for those few weeks even I get some sniffles and itchy eyes. But XX2 has allergies, bad. It’s hard for her to function during the pollen season. Her eyes are puffy (and last year swelled almost shut). She can’t really breathe. She’s hemorrhaging mucus; we can’t seem to send her to school with enough Sudafed, eye drops, and tissues to keep her even barely comfortable.

It’s brutal for her. But she’s a trooper. And for the most part she doesn’t play outside (no recess or phys ed, and limited sports activities) until the pollen is mostly gone. Unless she does. Last night, when we were celebrating Passover with a bunch of friends, we lost track of XX2. With 20+ kids at Seder that was easy enough to do. When it was time to leave we found her outside, where she had been playing for close to an hour. Yeah, it rained yesterday and gave her a temporary respite from the pollen. But that lulled her into a false sense of security.

So when she started complaining about her eyes itching a bit and wanted some Benadryl to get to sleep, we didn’t want to hear about it. Yes, it’s hard seeing your child uncomfortable. It’s also brutal to have her wake you up in the middle of the night if she can’t breathe and can’t get back to sleep. But we make it clear to all the kids that they have the leeway to make choices for themselves. With that responsibility, they need to live with the consequences of their choices. Even when those consequences are difficult for all of us.

But this will pass soon enough. The pollen will be gone and XX2 will be back outside playing every day. Which means she’ll need to learn the same lesson during next year’s pollen onslaught. Wash, rinse, repeat. It’s just another day in the parenting life.

–Mike

Photo credit: “I Heart Pollen!” originally uploaded by Brooke Novak


See Mike Speak

Mike will be moderating a webcast this coming Thursday at 2pm ET, discussing how to Combat the Next Generation of Advanced Malware with folks from Critical Assets and WatchGuard. Register here: http://secure.watchguard.com/how-to-survive-an-apt-attack-social.html


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


2014 RSA Conference Guide

In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Understanding Role-based Access Control

NoSQL Security 2.0

Defending Against Network Distributed Denial of Service Attacks

Advanced Endpoint and Server Protection

Newly Published Papers


Incite 4 U

  1. Traitors are the new whistleblowers: A good thought-provoking post by Justine Aitel on how security needs to change and evolve, given some of the architectural and social disruptions impacting technology. She makes a bunch of points about how the cloud and the “compete now/share first/think later” mentality impact risk. It comes back to some tried and true tactics folks have been talking about for years (yes, Pragmatic CSO reference). Things like communications and getting senior folks on board with the risks they are taking – and ignorance is no excuse. She also makes good points about new roles as these changes take root, and that’s where the traitors and whistleblowers in the title come from. Overall her conclusion, “This game is no longer just for us nerds,” rings true. But that’s not new. Security has been the purview of business folks for years. It’s just that now the stakes are higher. – MR

  2. A glimpse of DBSec’s future: From a database design perspective, the way Facebook is customizing databases to meet their performance needs is a fascinating look at what’s possible with modular, open source NoSQL platforms. Facebook’s goals are performance related, but these approaches can also be leveraged for security. For example you can implement tokenization or encryption where FB leveraged compression. And the same way Facebook swapped Corona for Hadoop’s job manager, you could implement identity controls prior to resource grants from the cluster manager. You can install what you want – most anything is possible here! Security can be woven into the platform, without being beholden to platform vendors to design and develop the security model. Granted, most customers want someone else to provide off-the-shelf security solutions, but their modular approach to Hadoop nicely illustrates what is possible. – AL

  3. ‘Marketing’ attacks: The Kalzumeus blog has a really interesting point about how the stickiness of any attack tends to be based on how it is merchandised. Remember Melissa? Or the I Love You virus? Or SQL Slammer? Of course you do – these high-profile attacks got a ton of press coverage and had catchy names. The Heartbleed name and logo were genius. Yes, it is a big issue and worthy of note and remembrance. But will we really remember Kaminsky’s DNS discovery years from now? I probably will because I am a security historian of sorts, but you might not – it doesn’t have a cool name. As an industry we pooh-pooh marketing, but it is integral to many things – at least if you want them to be memorable and to drive action. – MR

  4. Helpful ignorance: The question “Why should passwords be encrypted if they’re stored in a secure database?” makes security professionals go into uncontrollable spasms, but it is a good question! For those new to security, the implicit assumptions underscore areas they don’t understand, and which pieces they need to be educated on. There is no single answer to this question, but “Secured from what?” is a good starting point. Is it secured from malicious DBAs? SQL injection? Direct file examination? The point here is to open a dialog to educate DBAs – and application developers, for that matter – about other types of threats not directly addressed by passwords, user roles, and encrypted backup tapes (there is a short hashing sketch after this list). – AL

  5. You can’t fight city hall: Actually you can, but it probably won’t work out very well. Case in point: Barrett Brown of alleged Anon and Stratfor hack fame. He recently agreed to a sealed plea bargain for being an accessory after the fact to posting the credit card numbers (and other stuff). What he pled to wasn’t even part of the original indictment, and he has already done 2 years in custody. With today’s forensicators and their ability to parse digital trails, it is really hard to get away with hacking – at least over a sustained period of time – and at some point the authorities (or Krebs – whoever gets there first) will find you with a smoking digital gun. So what to do? I know it sounds novel, but try to do the right thing – don’t steal folks’ stuff or be a schmuck. – MR
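
To make item 4 a little more concrete, here is a minimal sketch – an illustration of the usual answer, not anyone’s production code – of storing a salted, slow hash (PBKDF2 here) instead of the password itself, so a stolen table, backup, or file dump doesn’t hand over credentials even if the database was “secure”:

    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    class PasswordVault {
        // store only the salt and the derived key - never the password itself
        static String hash(char[] password) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                             .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(salt) + ":" +
                   Base64.getEncoder().encodeToString(derived);
        }
    }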

–Mike Rothman

Tuesday, April 15, 2014

Understanding Role Based Access Controls - Role Lifecycle

By Adrian Lane

    Role-based access control (RBAC) has earned a place in the access control architectures at many organizations. Companies have many questions about how to effectively use roles, including “How can I integrate role-based systems with my applications? How can I build a process around roles? How can I manage roles on a day-to-day basis? And by the way, how does this work?” It is difficult to distinguish between the different options on the market – they all claim equivalent functionality. Our goal for this post is to provide a simple view of how all the pieces fit together, what you do with them, and how each piece helps provide and/or support role-based access.

    Role Lifecycle in a real-world enterprise

    Roles make access control policy management easier. The concept is simple: perform access control based on a role assigned to one or more users. Users are grouped by job functions so a single role can define access for all users who perform a function – simplifying access control policy development, management, and deployment. The security manager does not need to set permissions for every user, but can simply provide access to necessary functions to a single shared role.

    Like many simple concepts, what is easy to understand can be difficult to achieve in the real world. We begin our discussion of real-world usage of roles and role-based access control (RBAC) by looking at practices and pitfalls for using roles in your company.

    Role definition

    For a basic definition, we will start with roles as a construct for managing the application of security policy across the separation between users and the system’s resources. A role is a way to group similar users. On the resource side, resources are accessed via a set of permissions – such as Create, Read, Update, and Delete – which are assigned to the roles that need them.

    Roles defined

    This simple definition is the way roles are commonly used: as a tool for management convenience. If you have many users and a great many applications – each with many features and functions – it quickly becomes untenable to manage them individually. Roles provide an abstraction layer to ease administration.

    Roles and groups are often lumped together, but there is an important difference. Users are added to Groups – such as the Finance Group – to club them together. Roles go one step further – the association is bi-directional: users are members of roles, which are then associated with permissions. Permissions allow a user, through a role, to take action (such as Create, Read, Update, or Delete) on an application and/or resources.
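
    To make that bi-directional mapping concrete, here is a minimal sketch in plain Java – illustrative names only, not any particular product – of users assigned to roles, and roles carrying CRUD permissions:

        import java.util.*;

        enum Permission { CREATE, READ, UPDATE, DELETE }

        class RoleRegistry {
            // role -> permissions granted to that role
            private final Map<String, EnumSet<Permission>> rolePermissions = new HashMap<>();
            // user -> roles the user is a member of
            private final Map<String, Set<String>> userRoles = new HashMap<>();

            void grant(String role, Permission... perms) {
                rolePermissions.computeIfAbsent(role, r -> EnumSet.noneOf(Permission.class))
                               .addAll(Arrays.asList(perms));
            }

            void assign(String user, String role) {
                userRoles.computeIfAbsent(user, u -> new HashSet<>()).add(role);
            }

            // a user is permitted if any role they hold carries the permission
            boolean isPermitted(String user, Permission perm) {
                for (String role : userRoles.getOrDefault(user, Collections.emptySet())) {
                    if (rolePermissions.getOrDefault(role, EnumSet.noneOf(Permission.class))
                                       .contains(perm)) {
                        return true;
                    }
                }
                return false;
            }
        }

    Granting the Finance role Read access then covers every user assigned to Finance – no per-user permission management required.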

    Enforcing access control policy with roles

    What roles should you create? What are your company’s rules for which users get access to which application features? Most firms start with their security policies, if they are documented. But this is where things get interesting: some firms don’t have documented policies – or at least not at the right level to unambiguously specify technical access control policy. Others have information security policies which are tens or even hundreds of pages long. But as a rule those are not really read by IT practitioners, and sometimes not even by their authors. Information security policies are full of moldy old chestnuts like “principle of least privilege” – which sounds great, but what does it mean in practice? How do you actually use that? Another classic is “Separation of Duties” – which means privileged users should not have unfettered access, so you divide capabilities across several people. Again the concept makes sense, but there is no clear roadmap to take advantage of it.

    One of the main values of RBAC is that it lets you enforce a specific set of policies for a specific set of users. Only a user acting in the role of Department X can access Department X’s resources. In addition, RBAC can enforce a hierarchy of roles. A user with the Department X manager role can add or disable users in the Department X worker bee roles.

    Our recommendation is clear: start simple. It is very effective to start with a small set of roles, perhaps 20-30. Do not feel obliged to create more roles initially — instead ensure that your initial small set of roles is integrated end-to-end, to users on the front end, and to permissions and resources on the back end.

    Roles open up ways to enforce important access control policies – including separation of duties. For example your security policy might state that users in a Finance role cannot also be in an IT role. Role-Based Access Control gives you a way to enforce that policy.
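
    As a rough sketch of how such a rule can be enforced at assignment time – hypothetical role names, assuming the Finance/IT exclusion above – a role-granting routine can simply refuse any assignment that would leave a user holding two mutually exclusive roles:

        import java.util.Set;

        class SeparationOfDuties {
            // pairs of roles a single user may not hold simultaneously (example policy)
            private static final Set<Set<String>> MUTUALLY_EXCLUSIVE =
                    Set.of(Set.of("Finance", "IT"));

            // reject any assignment that would violate the policy
            static void assignRole(Set<String> currentRoles, String newRole) {
                for (Set<String> pair : MUTUALLY_EXCLUSIVE) {
                    if (!pair.contains(newRole)) continue;
                    for (String held : currentRoles) {
                        if (!held.equals(newRole) && pair.contains(held)) {
                            throw new IllegalStateException(
                                    newRole + " conflicts with " + held + " (separation of duties)");
                        }
                    }
                }
                currentRoles.add(newRole);
            }
        }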

    Implementation

    Building on our simple definition, a permission checker could perform this role check:

    Subject currentUser = SecurityService.getSubject();

    if (currentUser.hasRole("CallCenter")) {
        // show the Call Center screen
    } else {
        // access denied
    }
    

    In this simple example an application does not make an access control decision per user, but instead based on the user’s role.

    Most application servers contain some form of RBAC support, and it is often better to rely on server configuration than to hard-code permission checks. For example:

    <web-app>
        <security-role>
            <role-name>CallCenter</role-name>
        </security-role>
        <security-constraint>
            <web-resource-collection>
                <web-resource-name>Call Center pages</web-resource-name>
                <url-pattern>/CCFunctions/*</url-pattern>
            </web-resource-collection>
            <auth-constraint>
                <role-name>CallCenter</role-name>
            </auth-constraint>
        </security-constraint>
    </web-app>
    

    Notice that both the code and configuration examples map the role and its permission set to the resource (screen and URL). This accomplishes a key RBAC concept: the programmer does not need specific knowledge about any user – users are abstracted away, and the code deals only with permissions and roles.

    Making this work in the real world raises the question of integration: Where do you deploy the roles that govern access? Do you do it in code, configuration, or a purpose-built tool?

    Integration

    RBAC systems raise both first-mile and last-mile integration considerations. For the first mile what you do is straightforward: role assignment is tied to user accounts. Each user has one or more assigned roles. Most enterprises use Active Directory, LDAP, and other systems to store and manage users, so role mapping conveniently takes place in collaboration with the user directory.
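
    As a sketch of what that first-mile mapping can look like – assuming an Active Directory-style LDAP directory, with purely illustrative hostnames and DNs rather than any particular product – an application can read a user’s group memberships over JNDI and translate them into roles:

        import java.util.*;
        import javax.naming.Context;
        import javax.naming.NamingEnumeration;
        import javax.naming.NamingException;
        import javax.naming.directory.*;

        public class DirectoryRoleMapper {
            // illustrative group-DN-to-role mapping; real deployments manage this centrally
            private static final Map<String, String> GROUP_TO_ROLE = Map.of(
                    "CN=CallCenter,OU=Groups,DC=example,DC=com", "CallCenter",
                    "CN=Finance,OU=Groups,DC=example,DC=com", "Finance");

            public static Set<String> rolesFor(String username) throws NamingException {
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
                env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389"); // hypothetical host
                // anonymous bind for brevity; real deployments authenticate with a service account
                DirContext ctx = new InitialDirContext(env);
                try {
                    SearchControls controls = new SearchControls();
                    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
                    controls.setReturningAttributes(new String[] { "memberOf" });
                    NamingEnumeration<SearchResult> results = ctx.search(
                            "DC=example,DC=com", "(sAMAccountName={0})",
                            new Object[] { username }, controls);

                    Set<String> roles = new HashSet<>();
                    while (results.hasMore()) {
                        Attribute groups = results.next().getAttributes().get("memberOf");
                        if (groups == null) continue;
                        NamingEnumeration<?> g = groups.getAll();
                        while (g.hasMore()) {
                            String role = GROUP_TO_ROLE.get(g.next().toString());
                            if (role != null) roles.add(role); // directory group becomes a role
                        }
                    }
                    return roles;
                } finally {
                    ctx.close();
                }
            }
        }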

    Role Touchpoints

    The second integration point (the last mile) is defined by an application’s ‘container’. The container is the place where you manage resources: it could be a registry, repository, server configuration, database, or any of various other places. Linking permissions to roles may be performed through configuration management, or in code, or in purpose-built tools such as access management products. The amount of work you have varies by container type, as does who performs it. With some solutions it is as simple as checking a box, while others require coding.

    Using roles in real-world systems

    This introduction has provided a simple illustration of roles. Our simple system shows both the power of roles and their value as a central control point for access control. Taking advantage of roles requires a plan of action, so here are some key considerations to get started:

    • Identify and establish authoritative source(s) for roles: where and how to define and manage the user-to-role mapping
    • Identify and establish authoritative source(s) for permissions: where and how to define and manage resource permissions
    • Link roles to permissions: the RBAC system must have a way to bind roles and permissions. This can be static in an access management system or a directory, or dynamic at runtime
    • Role assignment: Granting roles to users should be integrated into identity provisioning processes
    • Permission assignment: Configuration management should include a step to provision new applications and services with access rights for each interface
    • Make access control decisions in code, configuration, and services
    • Use roles to conduct access reviews: large organizations adopt roles to simplify access review during audit

    Our next post will build on our simple definition of roles, drilling down into role engineering, management, and design issues.

    –Adrian Lane

    Can’t Unsee (and the need for better social media controls)

    By Mike Rothman

    I have to admit the USAirways porno tweet had me cracking up. Business Insider has good coverage (even including the NSFW link, if you are a glutton for, well, whatever). It was funny not because of the picture, but as an illustration of how a huge corporation could have its brand and image impacted by the mistake of one person. Also because it didn’t happen to me. I assure you the executive suite at the company did not think this was funny, at all.

    Need eye bleach NOW

    But it highlights the need for much greater control of social media. With advertising there are multiple layers of approval before anything ever hits the airwaves – and we still have branding fiascos. Social media changes the rules. One person can control a very highly followed account, and that person’s device can be attacked and compromised – giving attackers free rein to behave badly and impact the brand. Or a malicious insider could do the same. Or just plain old human error. It happens all the time, but not like the USAir tweet. That went viral fast, and the damage was done even faster.

    It’s like Pandora’s Box. Once it’s open, you shouldn’t try to put a plane in it. (Sorry, had to…)

    I know you have to move fast with social media. But folks will be lampooning USAirways for years over this. I don’t think their real-time response to the customer outweighs the downside, or that a little check and balance would be a terrible thing – if only to make sure you have multiple eyes on the corporate social media accounts.

    Photo credit: “Cannot Unsee” originally uploaded by Lynn Williams

    –Mike Rothman

    Monday, April 14, 2014

    Responsibly (Heart)Bleeding

    By Mike Rothman

    Yeah, we hit on the Heartbleed vulnerability in this week’s FireStarter, but I wanted to call attention to how Akamai handled the vulnerability. They first came out with an announcement that their networks (and their customers) were safe because their systems were already patched. Big network service providers tend to get an early heads-up when stuff like this happens, so they can get a head start on patching.

    They were also very candid about whether they have proof of compromise:

    Do you have any evidence of a data breach?

    No. And unfortunately, this isn’t “No, we have evidence that there was no breach of data;” rather, “we have no evidence at all.” We doubt many people do – and this leaves data holders in the uncomfortable position of not knowing what, if any, data breaches might have happened. Sites using Akamai were not measurably safer – or less safe – than sites not using Akamai.

    So kudos are due Akamai for explaining the issue in understandable terms, discussing their home-grown way of issuing and dealing with certs, discussing the potential vulnerability window before they started patching, and owning up to the fact that they (like everyone else) have no idea what (if anything) was compromised.

    Then they assured customers they were protected. Unless they weren’t. Over the weekend a researcher pointed out a bug in Akamai’s patch. Ruh Roh. But again, to Akamai’s credit, they came clean. They posted an update explaining the specifics of the buggy patch and why they were still exposed. Then they made it clear that all the certs will be re-issued – just to be sure.

    As a result, we have begun the process of rotating all customer SSL keys/certificates. Some of these certificates will quickly rotate; some require extra validation with the certificate authorities and may take longer.

    It is okay to be wrong. As long as an organization works diligently to make it right, and they keep customers updated and in the loop. Preferably without requiring an NDA to figure out what’s going on…

    –Mike Rothman

    Sunday, April 13, 2014

    Firestarter: Three for Five

    By Rich

    In this week’s Firestarter the team makes up for last week and picks three different stories, each with a time limit. It’s like one of those ESPN shows, but with less content and personality.

    The audio-only version is up too.

    –Rich

    FFIEC’s Rear-View Mirror

    By Mike Rothman

    You have to love compliance mandates, especially when they are anywhere from 18 months to 3 years behind the threat. Recently the FFIEC (the body that regulates financial institutions) published some guidance for financials to defend against DDoS attacks. Hat tip to Techworld.

    Hindsight is right, but the impact is from looking at the beauty in front of you

    It’s not like the guidance is bad. Assessing risk, monitoring inbound traffic, and having a plan to move traffic to a scrubber is all good. And I guess some organizations still don’t know that they should even perform that simple level of diligence. But a statement in the FFIEC guidance sums up rear-view mirror compliance:

    “In the latter half of 2012, an increased number of DDoS attacks were launched against financial institutions by politically motivated groups,” the FFIEC statement says. “These DDoS attacks continued periodically and increased in sophistication and intensity. These attacks caused slow website response times, intermittently prevented customers from accessing institutions’ public websites, and adversely affected back-office operations.”

    Uh, right on time. 18 months later. It’s not that DDoS is going away, but to mandate such obvious stuff at this point is a beautiful illustration of solving yesterday’s problem tomorrow. Which I guess is what most compliance mandates are about.

    Sigh.

    Photo credit: “mtcook” originally uploaded by Jim Howard

    –Mike Rothman

    Wednesday, April 09, 2014

    Understanding Role Based Access Control [New Series]

    By Adrian Lane

    Identity and Access Management (IAM) is a marathon rather than a sprint. Most enterprises begin their IAM journey by strengthening authentication, implementing single-sign on, and enabling automated provisioning. These are excellent starting points for an enterprise IAM foundation, but what happens next? Once users are provisioned, authenticated, and signed on to multiple systems, how are they authorized? Enterprises need to very quickly answer crucial questions: How is access managed for large groups of users? How will you map business roles to technology and applications? How is access reviewed for security and auditing? What level of access granularity is appropriate?

    Many enterprises have gotten over the first hurdle for IAM programs with sufficient initial capabilities in authentication, single sign-on, and provisioning. But focusing on access is only half the challenge; the key to establishing a durable IAM program for the long haul is tying it to an effective authorization strategy. Roles are not just a management concept to make IT management easier; they are also fundamental to defining how work in an enterprise gets done.

    Role based access control (RBAC) has been around for a while and has a proven track record, but key questions remain for enterprise practitioners. How can roles make management easier? Where is the IAM industry going? What pitfalls exist with current role practices? How should an organization get started setting up a role based system? This series will explore these questions in detail.

    Roles are special to IAM. They can answer certain critical access management problems, but they require careful consideration. Their value is easy to see, but there are essential questions to work through to realize that value. These include identifying authoritative sources, managing the business-to-technology mapping, integration with applications, and the art and science of access granularity. The paper will provide context, explore each of these questions in detail, and provide the critical bits enterprises need to choose between role-based access control products:

    • The role lifecycle in a real world enterprise – how to use roles to make management easier: This post will focus on three areas: defining roles and how they work, enforcing access control policies with roles, and using roles in real-world systems. We will also cover identification of sources, integration, and access reviews.
    • Advanced concepts – where is the industry going? This section will talk about role engineering – rolling up your sleeves to get work done. But we will also cover more advanced concepts such as using attributes with roles, dynamic ‘risk-based’ access, scalability, and dealing with legacy systems.
    • Role management: This is the section many of you will be most interested in: how to manage roles. We will examine access control reviews, scaling across the enterprise, metrics, logging, error handling, and handling key audit & compliance chores.
    • Buyer’s guide: As with most of our series, not all vendors and services are equal, so we will offer a buyer’s guide. We will examine the criteria for the major use cases, help you plan and run the evaluation, and decide on a product. We will offer a set of steps to ensure success, and finally, a buyer’s checklist for features and proofs-of-concept.

    Our goal is to address the common questions from enterprises regarding role-based access controls, with a focus on techniques and technologies that address these concerns. The content for this paper will be developed and posted to the Securosis blog, and as always we welcome community feedback on the blog and via Twitter.

    –Adrian Lane

    Monday, April 07, 2014

    Defending Against DDoS: Mitigations

    By Mike Rothman

    Our past two posts discussed network-based Distributed Denial of Service (DDoS) attacks and the tactics used to magnify those attacks to unprecedented scale and volume. Now it’s time to wrap up this series with a discussion of defenses. To understand what you’re up against, let’s take a small excerpt from our Defending Against Denial of Service Attacks paper.

    First the obvious: you cannot just throw bandwidth at the problem. Your adversaries likely have an unbounded number of bots at their disposal and are getting smarter at using shared virtual servers and cloud instances to magnify their firepower. So you can’t just hunker down and ride it out. They likely have a bigger cannon than you can handle. You need to figure out how to deal with a massive amount of traffic, and separate good traffic from bad while maintaining availability.

    Your first option is to leverage existing network/security products to address the issue. As we discussed in our introduction, that is not a good strategy because those devices aren’t built to withstand the volumes or tactics involved in a DDoS. Next, you could deploy a purpose-built device on your network to block DDoS traffic before it melts your networks. This is certainly an option, but if your inbound network pipes are saturated, an on-premise device cannot help much – applications will still be unavailable. Finally, you can front-end your networks with a service to scrub traffic before it reaches your network. But this approach is no panacea either – it takes time to move traffic to a scrubbing provider, and during that window you are effectively down.

    So the answer is likely a combination of these tactics, deployed in a complementary fashion to give you the best chance to maintain availability.

    Do Nothing

    Before we dig into the different alternatives, we need to acknowledge one other choice: doing nothing. The fact is that many organizations have to go through an exercise after being hit by a DDoS attack, to determine what protections are needed. Given the investment required for any of the alternatives listed above, you have to weigh the cost of downtime against the cost of potentially stopping the attack.

    This is another security tradeoff. If you are a frequent or high-profile target then doing nothing isn’t an option. If you got hit with a random attack – which happens when attackers are testing new tactics and code – and you have no reason to believe you will be targeted again, you may be able to get away with doing nothing. Of course you could be wrong, in which case you will suffer more downtime. You need to both make sure all the relevant parties are aware of this choice, and manage expectations so they understand the risk you are accepting in case you do get attacked again.

    We will just say we don’t advocate this do-nothing approach, but we do understand that tough decisions need to be made with scarce resources. Assuming you want to put some defenses in place to mitigate the impact of a DDoS, let’s work through the alternatives.

    DDoS Defense Devices

    These appliances are purpose-built to deal with DoS attacks, and include both optimized IPS-like rules to prevent floods and other network anomalies, and simple web application firewall capabilities to protect against application layer attacks. They also include anti-DoS features such as session scalability and embedded IP reputation, in order to discard traffic from known bots without full inspection.

    To understand the role of IP reputation, let’s recall how email connection management devices enabled anti-spam gateways to scale up to handle spam floods. It is computationally expensive to fully inspect every inbound email, so immediately dumping messages from known bad senders focuses inspection on email that might be legitimate to keep mail flowing. The same concept applies here. Keep the latency inherent in checking a cloud-based reputation database in mind – you will want the device to aggressively cache bad IPs to avoid a lengthy cloud lookup for every incoming session.
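
    As a toy illustration of that caching point – cloudReputationLookup() below is a stand-in for whatever reputation service a given device actually queries, so treat this as a sketch rather than anyone’s implementation – a small LRU cache ensures repeat offenders never trigger a second remote lookup:

        import java.util.LinkedHashMap;
        import java.util.Map;

        class ReputationCache {
            private final Map<String, Boolean> verdicts; // source IP -> true if known bad

            ReputationCache(int maxEntries) {
                // simple LRU: evict the least recently used entry once the cache is full
                this.verdicts = new LinkedHashMap<String, Boolean>(maxEntries, 0.75f, true) {
                    @Override protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                        return size() > maxEntries;
                    }
                };
            }

            boolean isKnownBad(String ip) {
                Boolean cached = verdicts.get(ip);
                if (cached != null) {
                    return cached;                        // no cloud round trip for repeat offenders
                }
                boolean bad = cloudReputationLookup(ip); // slow, remote call - do it once per IP
                verdicts.put(ip, bad);
                return bad;
            }

            // placeholder for the device vendor's cloud reputation service (hypothetical)
            private boolean cloudReputationLookup(String ip) {
                return false;
            }
        }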

    For kosher connections which pass the reputation test, these devices additionally enforce limits on inbound connections, govern the rate of application requests, control clients’ request rates, and manage the number of total connections allowed to hit the server or load balancer sitting behind it. Of course these limits must be defined incrementally to avoid shutting down legitimate traffic during peak usage.
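
    And a similarly minimal sketch of per-client throttling – a fixed-window counter keyed by source IP, with thresholds you would tune incrementally (as noted above) against your own legitimate peak traffic:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicInteger;

        class ClientRateLimiter {
            private final int maxRequestsPerWindow;
            private final long windowMillis;
            private final Map<String, Window> windows = new ConcurrentHashMap<>();

            ClientRateLimiter(int maxRequestsPerWindow, long windowMillis) {
                this.maxRequestsPerWindow = maxRequestsPerWindow;
                this.windowMillis = windowMillis;
            }

            // returns false when a client exceeds its allowance for the current window
            boolean allow(String clientIp) {
                long now = System.currentTimeMillis();
                Window w = windows.compute(clientIp, (ip, current) ->
                        (current == null || now - current.start >= windowMillis)
                                ? new Window(now) : current);
                return w.count.incrementAndGet() <= maxRequestsPerWindow;
            }

            private static final class Window {
                final long start;
                final AtomicInteger count = new AtomicInteger();
                Window(long start) { this.start = start; }
            }
        }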

    Speed is the name of the game for DDoS defense devices, so make sure yours have sufficient headroom to handle your network pipe. Over-provision to ensure they can handle bursts and keep up with the increasing bandwidth you are sure to bring in over time.

    CDN/Web Protection Services

    Another popular option is to front-end web applications with a content delivery network or web protection service. This tactic only protects the web applications you route through the CDN, but it can scale to handle very large DDoS attacks in a cost-effective manner. If the attacker targets other addresses or ports on your network, though, you are out of luck – those aren’t protected. DNS servers, for instance.

    We find CDNs effective for handling network-based DDoS in smaller environments with a small external web presence. There are plenty of other benefits to a CDN, including caching and shielding your external IP addresses. But for stopping DDoS attacks a CDN is a limited answer.

    External Scrubbing

    The next level up the sophistication (and cost) scale is an external scrubbing center. These services allow you to redirect all your traffic through their network when you are attacked. The switch-over tends to be based on either a proprietary switching protocol (if your perimeter devices or DDoS Defense appliances support the carrier’s signaling protocol) or a BGP request. Once the determination has been made to move traffic to the scrubbing center, there will be a delay while the network converges, before you start receiving clean traffic through a tunnel from the scrubbing center.

    The biggest question with a scrubbing center is when to move the traffic. Do it too soon and your resources stay up, but at significant cost. Do it too late and you suffer additional downtime. Finding that balance is a company-specific decision, based on the perceived cost of downtime compared to the cost and value of the service.

    Another blind spot for scrubbing is hit-and-run attacks, when an attacker blasts a site briefly to take it down. Once the victim moves traffic over to a scrubbing center, the attacker stops – they don’t even try to take down the scrubber. But the attack has already achieved its goals: disrupted availability and increased latency.

    These factors have pushed scrubbing centers to advocate an always-on approach, where the customer runs all traffic through the scrubbing center all the time. Obviously there is a cost, but if you are a frequent DDoS target or cannot afford downtime for any reason, it may be worth it.

    All of the above

    As we stated in Defending Against DoS Attacks, the best answer is often all of the above. Your choice of network-based DoS mitigations inevitably involves trade-offs. It is not good to over-generalize, but most organizations are best served by a hybrid approach, involving both an on-premise appliance and a contract with a CDN or anti-DoS service provider to handle more severe volumetric attacks. It is rarely cost-effective to run all traffic through a scrubbing center constantly, and many DoS attacks target the application layer – in which case you need a customer premises device anyway.

    Other Protection Tactics

    Given that many DDoS attacks also target DNS (as described in the Attacks post), you will want to make sure your internal DNS infrastructure is protected by front-ending your DNS servers with a DDoS defense device. You will also want some due diligence on your external DNS provider to ensure they have sufficient protections against DDoS, as they will be targeted along with you, and you could be impacted if they fall over.

    You don’t want to contribute to the problem yourself, so as a matter of course you should make sure your NTP servers aren’t responding to public requests (as described by US-CERT). You will want to remediate compromised devices as quickly as practical for many reasons, not least to ensure they don’t blast others with your resources and bandwidth.

    The Response Process

    A strong underlying process is your best defense against a DDoS attack. Tactics change as attack volumes increase, but if you don’t know what to do when your site goes down, it will be out for a while.

    The good news is that the DoS defense process is quite similar to general incident response. We have already published a ton of research on this topic, so check out both our Incident Response Fundamentals series and our React Faster and Better paper. If your incident handling process isn’t where it needs to be yet, start there.

    Building off your existing IR process, think about what you need to do as a set of activities: before, during, and after an attack:

    • Before: Before an attack, spend time figuring out attack indicators and ensuring you perform sufficient monitoring to provide both adequate warning and enough information to identify the root cause of attacks. You might see increasing bandwidth volumes or a spike in DNS traffic. Perhaps your applications get flaky and fall down, you see server performance issues, or your CDN alerts you to a possible attack. Unfortunately many DDoS attacks come out of nowhere, so you may not know you are under attack until you are down.
    • During: How can you restore service as quickly as possible? By identifying the root cause accurately and remediating effectively. So you need to notify the powers that be, assemble your team, and establish responsibilities and accountability. Then focus on identifying root cause, attack vectors, and adversaries to figure out the best way to get the site back up. Restoring service depends on the mitigations in place, discussed above. Optimally your contracted CDN and/or anti-DoS service provider already has a team working on the problem by this point. In case you don’t have one, you can hope the attack doesn’t last long or your ISP can help you. Good luck.
    • After: Once the attack has been contained focus shifts to restoring normal operations, moving traffic back from the scrubbing center, and perhaps loosening anti-DoS/WAF rules. Keep monitoring for trouble. Try to make sure this doesn’t happen again. This involves asking questions… What worked? What didn’t? Who needs to be added to the team? Who just got in the way? This analysis needs to objectively identify the good, the bad, and the ugly. Dig into the attack as well. What controls would have blunted its impact? Would running all your traffic through a scrubbing provider have helped? Did network redirection work quickly enough? Did you get the right level of support from your service provider? Then update your process as needed and implement new controls if necessary.

    As we wrap up this series on network-based DDoS, let’s revisit a few key points.

    • Today’s DoS attacks encompass network attacks, application attacks, and magnification techniques to confuse defenders and exploit weaknesses in defenses.
    • Organizations need a multi-faceted approach to defend against DDoS, which likely involves both deploying DDoS defense equipment on-site and contracting with a service provider (either a scrubbing center or a content delivery network) to handle excessive traffic.
    • DoS mitigations do not work in isolation – on-premise devices and services are interdependent for adequate protection, and should communicate with each other to ensure an efficient and transparent transition to the scrubbing service when necessary.

    Of course there are trade-offs with DDoS defense, as with everything. Selecting an optimal mix of defensive tactics requires some adversary analysis, an honest and objective assessment of just how much downtime is survivable, and what you are willing to pay to restore service quickly. If a few hours of downtime are survivable defensive tactics can be much different than in situations where no downtime is ever acceptable – which demands more expenditure and much more sophisticated defenses.

    –Mike Rothman

    Friday, April 04, 2014

    NoSQL Security 2.0 [New Series] *updated*

    By Adrian Lane

    NoSQL, both the technology and the industry, have taken off. We are past the point where we can call big data a fad, and we recognize that we are staring straight into the face of the next generation of data storage platforms. About 2 years ago we started the first Securosis research project on big data security, and a lot has changed since then. At that point many people had heard of Hadoop, but could not describe what characteristics made big data different than relational databases – other than storing a lot of data. Now there is no question that NoSQL — as a data management platform — is here to stay; enterprises have jumped into large scale analysis projects with both feet and people understand the advantages of leveraging analytics for business, operations, and security use cases. But as with all types of databases – and make no mistake, big data systems are databases – high quality data produces better analysis results. Which is why in the majority of cases we have witnessed, a key ingredient is sensitive data. It may be customer data, transactional data, intellectual property, or financial information, but it is a critical ingredient. It is not really a question of whether sensitive data is stored within the cluster – more one of which sensitive data it contains. Given broad adoption, rapidly advancing platforms, and sensitive data, it is time to re-examine how to secure these systems and the data they store.

    But this paper will be different than the last one. We will offer much more on big data security strategies in addition to tools and technologies. We will spend less time defining big data and more looking at trends. We will offer more explanation of security building blocks including data encryption, logging, network encryption, and access controls/identity management in big data ecosystems. We will discuss the types of threats to big data and look at some of the use cases driving security discussions. And just like last time, we will offer a frank discussion of limitations in platforms and vendor offerings, which leave holes in security or fail to mesh with the inherent performance and scalability of big data.

    I keep getting one question from enterprise customers and security vendors. People ask repeatedly for a short discussion of data-centric security, so this paper provides one. This is because I have gotten far fewer questions in the last year on how to protect a NoSQL cluster, and far more on how to protect data before it is stored into the cluster. This was a surprise, and it is not clear from my conversations whether it is because users simply don’t trust the big data technology, due to worries about data propagation, because they don’t feel they can meet compliance obligations, or if they are worried about the double whammy of big data atop cloud services – all these explanations are plausible, and they have all come up. But regardless of driver, companies are looking for advice around encryption and wondering if tokenization and masking are viable alternatives for their use cases. The nature of the questions tells me that is where the market is looking for guidance, so I will cover both cluster security and data-centric security approaches.

    Here is our current outline:

    • Big Data Overview and Trends: This post will provide a refresher on what big data is, how it differs from relational databases, and how companies are leveraging its intrinsic advantages. We will also provide references on how the market has changed and matured over the last 24 months, as this bears on how to approach security.
    • Big Data Security Challenges: We will discuss why it is different architecturally and operationally, and also how the platform bundles and approaches differ from traditional relational databases. We will discuss what traditional tools, technologies and security controls are present, and how usage of these tools differs in big data environments.
    • Big Data Security Approaches: We will outline the approaches companies take when implementing big data security programs, as reference architectures. We will outline walled-garden models, cluster security approaches, data-centric security, and cloud strategies.
    • Cluster Security: An examination of how to secure a big data cluster. This will be a threat-centric examination of how to secure a cluster from attackers, rogue admins, and application programmers.
    • Data (Centric) Security: We will look at tools and technologies that protect data regardless of where it is stored or moved, for use when you don’t trust the database or its repository.
    • Application Security: An executive summary of application security controls and approaches.
    • Big data in cloud environments: Several cloud providers offer big data as part of Platform or Infrastructure as a Service offerings. Intrinsic to these environments are security controls offered by the cloud vendor, offering optional approaches to securing the cluster and meeting compliance requirements.
    • Operational Considerations: Day-to-day management of the cluster is different than management of relational databases, so the focus of security efforts changes too. This post will examine how daily security tasks change and how to adjust operational controls and processes to compensate. We will also offer advice on integration with existing security systems such as SIEM and IAM.

    As with all our papers, you have a voice in what we cover. So I would like feedback from readers, particularly on whether you want a short section on application layer security as well. It is (tentatively) included in the current outline. Obviously this would be a brief overview – application security itself is a very large topic. That said, I would like input on that and any other areas you feel need addressing.

    –Adrian Lane

    Thursday, April 03, 2014

    Booth Babes Be Gone

    By Mike Rothman

    OK. I have changed my tune. I have always had a laissez-faire attitude toward booth babes. I come from the school of what works. And if booth babes generate leads, of which some statistically result in deals, I’m good. Mr. Market says that if something works, you keep doing it. And when it stops working you move on to the next tactic. Right?

    Not so much. Chenxi Wang and Zenobia Godschalk posted a thought-provoking piece about why it’s time to grow up. As people and as a business. This quote from Sonatype’s Debbie Rosen sums it up pretty well,

    …this behavior is a “lazy way of marketing”, Debbie Rosen of Sonatype said, “this happens when you do not have any creative or otherwise more positive ways of getting attention.”

    I agree with Debbie. But there are a lot of very bad marketers in technology and security. Getting attention for these simpletons is about getting a louder bullhorn. Creativity is hard. Hiring models is easy.

    Not only is he interesting, he is smart

    What’s worse is that I have had attractive technical product managers and SEs, who happen to be female, working at my company, and they were routinely asked to bring over a technical person to do the demo. It was just assumed that an attractive female wouldn’t have technical chops. And that’s what is so troubling about continuing to accept this behavior.

    I have daughters. And I’m teaching my girls they can be anything they want. I would be really happy if they pursued technical careers, and I am confident they will be attractive adults (yes, I’ll own my bias on that). Should they have to put up with this nonsense? I say not.

    Even better, the post calls for real change. Not bitching about it on Twitter.

    Writing blog posts and expressing outrage on social media alone won’t work. We need to make this issue a practical, rather than a rhetorical one. Those of us who are in positions of power, those of us in sales, marketing, and executive positions, need to do something real to effect changes.

    I still pray at the Temple of Mr. Market. And that means until the tactic doesn’t work, there will be no real change. So if you work for a vendor make it clear that booth babes make you uncomfortable, and it’s just wrong. Take a stand within your own company. And if they don’t like it, leave. I will personally do whatever I can to get you a better job if it comes to that.

    If you work for an end-user don’t get scanned at those booths. And don’t buy products from those companies. Vote with your dollars. That is the only way to effect real sustainable change. Money talks.

    We live in an age of equality. It is time to start acting that way. If a company wants to employ a booth babe, at least provide babes of both genders. I’m sure there are a bunch of lightly employed male actors and models in San Francisco who would be happy to hand out cards and put asses in trade show theater seats.

    –Mike Rothman

    Wednesday, April 02, 2014

    Incite 4/2/2014: Disruption

    By Mike Rothman

    The times they are a-changin’. Whether you like it or not. Rich has hit the road, and has been having a ton of conversations about his Future of Security content, and I have adapted it a bit to focus on the impact of the cloud and mobility on network security. We tend to get one of three reactions:

    1. Excitement: Some people rush up at the end of the pitch to learn more. They see the potential and need to know how they can prepare and prosper as these trends take root.
    2. Confusion: These folks have a blank stare through most of the presentation. You cannot be sure if they even know where they are. You can be sure they have no idea what we are talking about.
    3. Fear: These folks don’t want to know. They like where they are, and don’t want to know about potential disruptions to the status quo. Some are belligerent in telling us we’re wrong. Others are more passive-aggressive, going back to their office to tell everyone who will listen that we are idiots.

    Stop messing with my lawn. I'm happy with it just the way it is.

    Those categories more-or-less reflect how folks deal with change in general. There are those who run headlong into the storm, those who have no idea what’s happening to them, and those who cling to the old way of doing things – actively resisting any change to their comfort zone. I don’t judge any of these reactions. How you deal with disruption is your business.

    But you need to be clear which bucket you fit into. You are fooling yourself and everyone else if you try to be something you aren’t. If you don’t like to be out of your comfort zone, then don’t be. The disruptions we are talking about will be unevenly distributed for years to come. There are still jobs for mainframe programmers, and there will be jobs for firewall jockeys and IPS tuners for a long time. Just make sure the organization where you hang your hat is a technology laggard.

    Similarly, if you crave change and want to accelerate disruption, you need to be in an environment which embraces that – an organization that takes risks and understands not everything works out. We have been around long enough to know we are at the forefront of a major shift in the technology landscape. The last one of this magnitude I expect to see during my working career.

    I am excited. Rich is excited, and so is Adrian. Of course that’s easy for us – due to the nature of our business model we don’t have as much at stake. We are proverbial chickens, contributing eggs (our research) to the breakfast table. You are the pig, contributing the bacon. It’s your job on the line, not ours.

    –Mike

    Photo credit: “Expect Disruption” originally uploaded by Brett Davis


    Securosis Firestarter

    Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


    2014 RSA Conference Guide

    In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.


    Heavy Research

    We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

    Defending Against Network Distributed Denial of Service Attacks

    Advanced Endpoint and Server Protection

    Newly Published Papers


    Incite 4 U

    1. The good old days of the security autocrat: At some point I will be old and retired, drinking fruity drinks with umbrellas in them, and reminiscing about the good old days when security leaders could dictate policy and shove it down folks’ throats. Yeah, that lasted a few days, before those leaders were thrown out the windows. The fact is that autocrats can be successful, but usually only right after a breach when a quick cleanup and attitude adjustment is needed – at any other time that act wears thin quickly. But as Dave Elfering points out, the rest of the time you need someone competent, mindful, diligent, well-spoken and business savvy. Dare I say it, a Pragmatic CSO. Best of all, Dave points out that folks who will succeed leading security teams need to serve the business, not rigidly adhere to a fixed set of best practices. Flexibility to business needs is the name of the game. – MR

    2. Throwing stones: I couldn’t agree more with Craig Carpenter, who writes in Dark Reading that folks need to Be Careful Beating Up Target. It has become trendy for every vendor providing alerts via a management console to talk about how they address the Target issue: missing alerts. But as Craig explains, the fact is that Target had as much data as they needed. It looks like a process failure at a busy time of year, relying on mostly manual procedures to investigate alerts. This can (and does) happen to almost every company. Don’t fall into the trap of thinking you’re good. If you haven’t had a breach, chalk it up to being lucky. And that’s okay! Thinking that it can’t happen to you is a sure sign of imminent doom. And for those vendors trying to trade on Target’s issue, or pointing fingers at FireEye or Symantec or any of the other vendors Target used, there is a special place in breach hell for you. Karma is a bitch, and your stuff will be busted. And I’ll laugh at your expense, along with the rest of the industry. – MR

    3. CC-DNS: We have been highlighting the role of attacking DNS in Distributed Denial of Service (DDoS) attacks, and Dark Reading highlights some other DNS attack vectors. This foundational part of the Internet, designed decades ago, simply wasn’t built to stand up to 400gbps attacks. Go figure. But it is a real problem – it’s not like you can just swap out DNS in one fell swoop across the entire Internet. And technologies meant to protect the infrastructure like DNSSEC, put in place after the Kaminsky attack was made public, can be used to overload the system. Finally, the article raises the issue of DNS tampering for mobile devices – a key employee in a coffee shop (me, for instance) could be routed to a fake server if the coffee shop’s DNS is busted. So many problems and few solutions – like pretty much everything else. – MR

    4. One log, multiple consumers: Stormy highlights the importance of logging in a DevOps context on Shimmy’s new devops.com site (yes, Rich is an advisor). His point is that you will need to pull information from the technology stack and applications to be sure you understand what’s happening as you move to continuous deployment. Though he draws a distinction between DevOps and Security, which for the time being is fine. Over time we expect the security function (except perhaps program management) to be subsumed within true operational processes. In a DevOps world there are no logical breakpoints for inserting security, which means it really will need to be built in. Finally. – MR

    –Mike Rothman

    Tuesday, April 01, 2014

    Breach Counters

    By Mike Rothman

    The folks at the Economist (with some funding from Booz Allen Hamilton, clearly doing penance for bringing Snow into your Den) have introduced the CyberTab cyber crime cost calculator. And no, this isn’t an April Fool’s joke. The Economist is now chasing breaches and throwing some cyber around. Maybe they will sponsor a drinking game at DEFCON or something.

    It will calculate the costs of a specific cyber attack–based on your estimates of incident-response and business expenses and of lost sales and customers–and estimate your return on prevention.

    Basically they built a pretty simple model (PDF) that gives you guidelines for estimating the cost of an attack. It’s pretty standard stuff, including items such as the cost of lost IP and customer data. They also provide a model to capture the direct costs of investigation and clean-up. You also try to assess the value of lost business – always a slippery slope.

    I bet you say that to overcompensate for your little compute

    You can submit data anonymously, and presumably over time (with some data collection), you should be able to benchmark your losses against other organizations. So you can brag to your buddies over beers that you lost more than they did. The data will also provide fodder for yet another research report to keep the security trade rags busy cranking out summary articles.

    Kidding aside, I am a big fan of benchmarks, and data on the real costs of attacks can help substantiate all the stuff we security folks have been talking about for years.

    Photo credit: “My platform is bigger than yours” originally uploaded by Alberto Garcia

    –Mike Rothman

    Monday, March 31, 2014

    Defending Against DDoS: Magnification

    By Mike Rothman

    As mentioned in our last post, the predominant mechanism of network-based DDoS attacks involves flooding the pipes with standard protocols and packet types like SYN, ICMP, DNS, and NTP. But that’s not enough, so attackers now take advantage of weaknesses in the protocols to magnify the impact of their floods by an order of magnitude. This makes each compromised device far more efficient as an attack platform and allows attackers to scale attacks to over 400Gbps (as recently reported by CloudFlare). Only a handful of organizations in the world can handle an attack of that magnitude, so DDoS + reflection + amplification is a potent combination.

    Fat Packets

    Attackers increasingly tune the size of their packets to their desired outcome. For example, small SYN packets can crush the compute capabilities of network/security devices, while larger SYN packets saturate the network pipe, so we often see them combined in today’s DDoS attacks.

    Reflection + Amplification

    The first technique used to magnify a DDoS attack is reflection. This entails sending requests to a large number of devices (think millions), spoofing the origination IP address of a target site. The replies to those millions of requests are reflected back to the target. The UDP-based protocols used in reflection attacks don’t require handshaking to establish new sessions, so they are spoofable.

    The latest wave of DDoS attacks uses reflected DNS and NTP traffic to dramatically scale the volume of traffic hitting targets. Why those two protocols? Because they provide good leverage for amplifying attacks – DNS and NTP responses are typically much bigger than their requests. DNS can provide about 50x amplification because responses are that much larger than requests, and the number of open DNS resolvers that respond to any DNS request from any device makes this an easy and scalable attack. Until the major ISPs get rid of these open resolvers, DNS-based DDoS attacks will continue.

    NTP has recently become a DDoS protocol of choice because it offers almost 200x magnification. This is thanks to a protocol feature: clients can request a list of the last 600 IP addresses that accessed a server. To illustrate the magnitude of magnification, the CloudFlare folks reported that the attack used 4,529 NTP servers, running on 1,298 different networks, each sending about 87Mbps to the victim. The resulting traffic totaled about 400Gbps. Even more troubling is that all those requests (to 4,500+ NTP servers) could be sent from one device on one network.

    Even better, other UDP-based protocols offer even greater levels of amplification. An SNMP response can be 650x the size of a request, which could theoretically be weaponized to create 1Gbps+ DDoS attacks. Awesome.
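
    The arithmetic behind these numbers is straightforward: the attacker pays only for small requests, while the reflectors send much larger responses at the victim. Here is a rough sketch; the attacker bandwidth figure is a made-up assumption, and only the approximate amplification factors and the 4,529-server NTP example come from the discussion above.

        # Amplification factor = response size / request size. The attacker only pays
        # for the small requests; reflectors send the large responses at the victim.
        amplification = {"DNS": 50, "NTP": 200, "SNMP": 650}   # approximate factors cited above
        attacker_request_mbps = 100                            # hypothetical attacker upstream

        for proto, factor in amplification.items():
            victim_gbps = attacker_request_mbps * factor / 1000
            print(f"{proto}: {attacker_request_mbps} Mbps of requests -> ~{victim_gbps:.0f} Gbps at the victim")

        # The reported NTP attack: 4,529 servers, each sending about 87 Mbps at the target.
        servers, per_server_mbps = 4529, 87
        print(f"NTP reflection total: ~{servers * per_server_mbps / 1000:.0f} Gbps")  # ~394 Gbps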

    Stacking Attacks

    Of course none of these techniques exists in a vacuum, so sometimes we will see them pounding a target directly, while other times attackers combine reflection and amplification to hammer a target. All the tactics in our Attacks post are in play, and taken to a new level with magnification.

    The underlying issue is that these attacks are enabled by sloppy network hygiene on the part of Internet service providers, who allow spoofed IP addresses for these protocols and don’t block flood attacks. These issues are largely beyond the control of a typical enterprise target, leaving victims with little option but to respond with a bigger pipe to absorb the attack. We will wrap up tomorrow with a look at the options for mitigating these attacks.

    –Mike Rothman

    Sunday, March 30, 2014

    Defending Against DDoS: Attacks

    By Mike Rothman

    As we discussed in our Introduction to Defending Against Network-based Distributed Denial of Service Attacks, DDoS is a blunt force instrument for many adversaries. So organizations need to remain vigilant against these attacks. There is not much elegance in a volumetric attack – adversaries impact network availability by consuming all the bandwidth into a site and/or by knocking down network and security devices, overwhelming their ability to handle the traffic onslaught.

    Today’s traditional network and security devices (routers, firewalls, IPS, etc.) were not designed to handle these attacks. Nor were network architectures built to easily decipher attack traffic and keep legitimate traffic flowing. So an additional layer of products and services has emerged to protect networks from DDoS attacks. But first things first. Before we dig into ways to deal with these attacks let’s understand the types of attacks and how attackers assemble resources to blast networks to virtual oblivion.

    The Attacks

    The first category of DDoS attacks is the straightforward flood. Attackers use tools that send requests using specific protocols or packets (SYN, ICMP, UDP, and NTP are the most popular) but don’t acknowledge the responses. If enough attack computers send requests to a site, its bandwidth can quickly be exhausted. Even if bandwidth is sufficient, on-site network and security devices need to maintain session state while continuing to handle additional (legitimate) inbound session requests. Despite the simplicity of the tactic, floods continue to be very effective at overwhelming targets.

    Increasingly we see the DNS infrastructure targeted by DDoS attacks. This prevents the network from successfully routing traffic from point A to point B, because the map is gone. As with floods, attackers can overwhelm the DNS by blasting it with traffic, especially because DNS infrastructure has not scaled to keep pace with overall Internet traffic growth.

    DNS has other frailties which make it an easy target for DDoS. Like the shopping cart and search attacks we highlighted for Application DoS, legitimate DNS queries can also overwhelm the DNS service and knock down a site. The attacks target weaknesses in the DNS system, where a single request for resolution can trigger 4-5 additional DNS requests. This leverage can overwhelm domain name servers. We will dig into magnification tactics later in this series. Similarly, attackers may request addresses for hosts that do not exist, causing the targeted servers to waste resources passing on the requests and polluting caches with garbage to further impair performance.
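
    On the defensive side, one simple indicator of that last tactic is a client generating an outsized share of NXDOMAIN (nonexistent domain) responses. Below is a minimal sketch, assuming you can export resolver query logs as (client, response code) pairs; the input format, thresholds, and function name are all hypothetical, not from any particular DNS product.

        # Minimal sketch: flag clients whose queries come back NXDOMAIN suspiciously
        # often, a common symptom of nonexistent-host / random-subdomain DNS attacks.
        from collections import Counter

        def nxdomain_offenders(records, min_queries=100, max_nx_ratio=0.5):
            totals, nxdomains = Counter(), Counter()
            for client_ip, rcode in records:
                totals[client_ip] += 1
                if rcode == "NXDOMAIN":
                    nxdomains[client_ip] += 1
            return [ip for ip, total in totals.items()
                    if total >= min_queries and nxdomains[ip] / total > max_nx_ratio]

        # A client hammering the resolver with nonexistent hosts stands out quickly.
        sample = ([("10.0.0.5", "NXDOMAIN")] * 900 + [("10.0.0.5", "NOERROR")] * 100
                  + [("10.0.0.9", "NOERROR")] * 500)
        print(nxdomain_offenders(sample))   # ['10.0.0.5']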

    Finally, HTTP continues to be a popular target for floods and other application-oriented attacks, taking advantage of the inherent protocol weaknesses. We discussed slow HTTP attacks in our discussion of Application Denial of Service, so we won’t rehash the details here, but any remediations for volumetric attacks should alleviate slow HTTP attacks as well.

    Assembling the Army

    To launch a volumetric attack an adversary needs devices across the Internet to pound the victim with traffic. Where do these devices come from? If you were playing Jeopardy the correct response would be “What is a bot network, Alex?” Consumer devices continue to be compromised and monetized at an increasing rate, driven by increasingly sophisticated malware and the lack of innovation in consumer endpoint protection. These compromised devices generate the bulk of DDoS traffic.

    Of course attackers need to be careful – Internet Service Providers are increasingly sensitive to consumer devices streaming huge amounts of traffic at arbitrary sites, and take devices off the network when they find violations of their terms of service. Bot masters use increasingly sophisticated algorithms to control their compromised devices, to protect them from detection and remediation. Another limitation of consumer devices is their limited bandwidth, particularly upstream. Bandwidth continues to grow around the world, but DDoS attackers still hit capacity constraints.
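
    That upstream constraint is easy to quantify. A rough sketch follows, using purely illustrative bandwidth figures (none of these numbers come from the post):

        # Rough arithmetic on why upstream bandwidth matters to an attacker.
        # All bandwidth figures below are illustrative assumptions.
        target_pipe_gbps = 10            # hypothetical victim link to saturate
        consumer_upstream_mbps = 1       # constrained home uplink
        server_upstream_mbps = 100       # compromised server or cloud instance

        bots_needed = target_pipe_gbps * 1000 / consumer_upstream_mbps
        servers_needed = target_pipe_gbps * 1000 / server_upstream_mbps
        print(f"~{bots_needed:,.0f} consumer bots vs ~{servers_needed:,.0f} servers "
              f"to fill a {target_pipe_gbps} Gbps pipe")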

    DDoS attackers like to work around these limitations of consumer devices by compromising servers to blast targets instead. Given the millions of businesses with vulnerable Internet-facing devices, it is unfortunately trivial for attackers to compromise some. Servers tend to have much higher upstream bandwidth, so they are better at serving up malware, commanding and controlling bot nodes, and launching direct attacks.

    Attackers are currently moving a step beyond conventional servers, capitalizing on cloud services to change their economics. Cloud servers, particularly Infrastructure as a Service (IaaS) instances, are inherently Internet-facing and often poorly configured. And of course cloud servers have substantial bandwidth. For network attacks, a cloud server is like a conventional server on steroids – DDoS attackers see major gains in both efficiency and leverage. To be fair, the better-established cloud providers take great pains to identify compromised devices and notify customers when they notice something amiss. You can check out Rich’s story of how Amazon proactively notified us of a different kind of issue, but they do watch for traffic patterns that indicate misuse. Unfortunately, by the time misuse is detected by a cloud provider, server owner, or other host, it may be too late. It doesn’t take long to knock a site offline.

    And attackers without the resources or desire to assemble and manage botnets can just rent them. Yes, a number of folks offer DDoS as a service (DDoSaaS, for the acronym hounds), so it couldn’t be easier for attackers to harness the resources to knock down a victim. And it’s not expensive: according to McAfee, DDoS services run from $2 per hour for short attacks up to $1,000 to take a site down for a month.

    It is a bit scary to think you could knock down someone’s site for 4 hours for less than a cup of coffee. But when you take a step back and consider the easy availability of compromised devices, servers, and cloud servers, DDoS is a very easy service to add to an attacker’s arsenal.

    Our next post will discuss tactics for magnifying the impact of a DDoS attack – including reflection and amplification – to make attacks an order of magnitude more effective.

    –Mike Rothman

    Friday, March 28, 2014

    Analysis of Visa’s Proposed Tokenization Spec

    By Adrian Lane

    Visa, Mastercard, and Europay – together known as EMVCo – published a new specification for Payment Tokenisation this month. Tokenization is a proven security technology, which has been adopted by a couple hundred thousand merchants to reduce PCI audit costs and the security exposure of storing credit card information. That said, there is really no tokenization standard, for payments or otherwise. Even the PCI-DSS standard does not address tokenization, so companies have employed everything from hashed credit card (PAN) values (craptastic!) to very elaborate and highly secure random value tokenization systems. This new specification is intended both to raise the bar on shlock home-grown token solutions and, more importantly, to address fraud in existing and emerging payment systems.

    I don’t expect many of you to read 85 pages of token system design to determine what it really means, if there are significant deficiencies, or whether these are the best approaches to solving payment security and fraud issues, so I will summarize here. But I expect this specification to last, so if you build tokenization solutions for a living you had best get familiar with it. For the rest of you, here are some highlights of the proposed specification.

    • As you would expect, the specification requires the token format to be similar to credit card numbers (13-19 digits) and to pass a Luhn check (a quick validation sketch follows this list).
    • Unlike financial tokens used today, and at odds with the PCI specification I might add, the tokens can be used to initiate payments.
    • Tokens are merchant or payment network specific, so they are only relevant within a specific domain.
    • For most use cases the PAN remains private between issuer and customer. The token becomes a payment object shared between merchants, payment processors, the customer, and possibly others within the domain.
    • There is an identity verification process to validate the requestor of a token each time a token is requested.
    • The type of token generated is variable based upon risk analysis – higher risk factors mean a low-assurance token!
    • When tokens are used as payment objects, there are “Data Elements” – think of them as metadata describing the token – to buttress security. These include a cryptographic nonce, payment network data, and token assurance level.
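
    To illustrate the first point above, here is a minimal sketch of the kind of format check a token consumer might run: 13-19 digits with a valid Luhn check digit, so tokens route through existing payment plumbing just like PANs. The code and function names are illustrative only, not taken from the EMVCo specification.

        # Illustrative token format check: 13-19 digits that pass the Luhn check.
        def luhn_valid(number: str) -> bool:
            checksum = 0
            for i, digit in enumerate(int(d) for d in reversed(number)):
                if i % 2 == 1:          # double every second digit from the right
                    digit *= 2
                    if digit > 9:
                        digit -= 9
                checksum += digit
            return checksum % 10 == 0

        def looks_like_payment_token(value: str) -> bool:
            return value.isdigit() and 13 <= len(value) <= 19 and luhn_valid(value)

        print(looks_like_payment_token("4111111111111111"))   # True: Luhn-valid test number
        print(looks_like_payment_token("4111111111111112"))   # False: fails the Luhn check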

    Each of these points has ramifications across the entire tokenization ecosystem, so your old tokenization platform is unlikely to meet these requirements. That said, they designed the specification to work within today’s payment systems while addressing near-term emerging security needs.

    Don’t let the misspelled title fool you – this is a good specification! Unlike the PCI’s “Tokenization Guidance” paper from 2011 – rumored to have been drafted by VISA – this is a really well thought out document. It is clear that whoever wrote this has been thinking about tokenization for payments for a long time, and has done a good job of providing functions to support all the use cases the specification needs to address. There are facilities and features to address PAN privacy, mobile payments, repayments, EMV/smartcard, and even card-not-present web transactions. And it does not address one single audience to the detriment of others – the needs of all the significant stakeholders are addressed in some way. Still, NFC payments seem to be the principal driver; the process and data elements really only gel when considered from that perspective. I expect this standard to stick.

    –Adrian Lane