The CISO’s Guide to the Cloud: Real World Examples and Where to Go from Here

This is part five of a series. You can read part one, part two, part three, or part four; or track the project on GitHub.

Real World Examples

Cloud computing covers such a wide range of different technologies that there is no shortage of examples to draw from. Here are a few generic examples from real-world deployments. These get slightly technical, because we want to highlight practical, tactical techniques to prove we aren't just making all this up.

Embedding and Validating a Security Agent Automatically

In a traditional environment we embed security agents by building them into standard images or requiring server administrators to install and register them. Both options are very prone to error and omission, and hard to validate because you often need to rely on manual scanning. Both issues become much easier to manage in cloud computing. To embed the agent:

• The first option is to build the agent into images. Instead of using generic operating system images you build your own, then require users to launch only approved images. In a private cloud you can enforce this with absolute control over what they run. In public clouds it is a bit tougher to enforce, but you can quickly catch exceptions using our validation process.
• The second option, and our favorite, is to inject the agent when instances launch. Some operating systems support initialization scripts which are passed to the launching instance by the cloud controller. Depending again on your cloud platform, you can inject these scripts automatically when autoscaling, via a management portal, or manually at other times. The scripts install and configure software in the instance before it is accessible on the network.

Either way you need an agent that understands how to work within cloud infrastructure and is capable of self-registering to the management server. The agent pulls system information and cloud metadata, then connects with its management server, which pushes configuration policies back to the agent so it can self-configure. This process is entirely automated the first time the agent runs. Configuration may be based on detected services running on the instance, metadata tags applied to the instance (in the cloud management plane), or other characteristics such as where it is on the network. We provide a detailed technical example of agent injection and self-configuration in our Software Defined Security paper.

The process is simple: build the agent into images or inject it into launching instances, then have it connect to a management server to configure itself. The capabilities of these agents vary widely. Some replicate standard endpoint protection, but others handle system configuration, administrative user management, log collection, network security, host hardening, and more.

Validating that all your instances are protected can be quite easy, especially if your tool supports an API:

• Obtain a list of all running instances from the cloud controller. This is a simple API call.
• Obtain a list of all instances with the security agent. This should be an API call to your security management platform, but might require pulling a report if that isn't supported.
• Compare the lists. You cannot hide in the cloud, so you know every single instance. Compare active instances against managed instances, and find the exceptions.

We also show how to do this in the paper linked above.
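
Here is a minimal sketch of that three-step comparison, assuming AWS with the boto3 SDK and a hypothetical security management console that exposes its managed hosts as JSON over HTTPS. The console URL, field names, and authentication are placeholders, not any particular product's API.

```python
import boto3
import requests

def running_instance_ids(region="us-east-1"):
    """Step 1: ask the cloud controller for every running instance."""
    ec2 = boto3.client("ec2", region_name=region)
    ids = set()
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                ids.add(instance["InstanceId"])
    return ids

def managed_instance_ids(console_url, api_key):
    """Step 2: ask the security management platform which instances run the agent."""
    resp = requests.get(console_url,
                        headers={"Authorization": f"Bearer {api_key}"},
                        timeout=30)
    resp.raise_for_status()
    return {host["instance_id"] for host in resp.json()["hosts"]}

# Step 3: compare the lists and report the exceptions.
unprotected = running_instance_ids() - managed_instance_ids(
    "https://agent-console.example.com/api/hosts", "YOUR-API-KEY")
print("Instances with no security agent:", sorted(unprotected))
```

A scheduled job running this comparison every few minutes turns "are all our servers covered?" from a periodic scanning project into a continuous check.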

Controlling SaaS with SAML

Pretty much everyone uses some form of Software as a Service, but controlling access and managing users can be a headache. Unless you link up using federated identity, you need to manage user accounts on the SaaS platform manually. Adding, configuring, and removing users on yet another system – and one that is always Internet accessible – is daunting. Federated identity solves this problem:

• Enable federated identity extensions on your directory server. This is an option for Active Directory and most LDAP servers.
• Contact your cloud provider to obtain their SAML configuration and management requirements. SAML (Security Assertion Markup Language) is a semi-standard way for a relying party to allow access and activities based on approval from an identity provider.
• Configure SAML yourself, or use a third-party tool compatible with your cloud provider(s) which does this for you. If you use several SaaS providers a tool will save a lot of effort.

With SAML, users don't have a username and password with the cloud provider. The only way to log in is to first authenticate to your directory server, which then provides (invisibly to the user) a token that allows access to the cloud provider. Users need to be in the office or on a VPN. If you want to enable remote users without a VPN, you can set up a cloud proxy and issue them a special URL to use instead of the SaaS provider's standard address. This address redirects to your proxy, which then handles connecting back to your directory server for authentication and authorization. This is something you typically buy rather than build.

Why do this? Instead of creating users on the SaaS platform, it enables you to use existing user accounts in your directory server and authorize access using standard roles and groups, just like you do for internal servers. You also get to track logins, disable accounts from a single source (your directory server), and otherwise maintain control. It also means people can't steal a user's password and then access Salesforce from anywhere on the Internet.

Compartmentalizing Cloud Management with IAM

One of the largest new risks in cloud computing is Internet-accessible management of your entire infrastructure. Most cloud administrators use cloud APIs and command line interfaces to manage the infrastructure (or PaaS, and sometimes even SaaS). This means access credentials are stored in environment variables or even the registry. If they use a web interface, that opens up browser-based attacks. Either way, without capability compartmentalization an attacker could take complete control of the infrastructure merely by hacking an administrator's laptop. With a few API calls or a script they could copy or destroy everything in minutes.

All cloud platforms support internal identity and access management to varying degrees –
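
To make the compartmentalization idea concrete, here is a minimal sketch assuming AWS IAM and boto3. The policy name and scope are illustrative only; a real deployment would pair it with separate roles for IAM administration, network changes, and so on.

```python
import json
import boto3

iam = boto3.client("iam")

# Administrators attached to this policy can launch and inspect instances,
# but cannot terminate anything and cannot touch IAM itself.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["ec2:RunInstances", "ec2:StartInstances", "ec2:Describe*"],
         "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["ec2:TerminateInstances", "iam:*"],
         "Resource": "*"},
    ],
}

iam.create_policy(
    PolicyName="cloud-admin-no-terminate-no-iam",   # hypothetical name
    PolicyDocument=json.dumps(policy_document),
    Description="Compartmentalized rights for day-to-day cloud administrators",
)
```

A stolen laptop with these credentials can still do damage, but it cannot wipe out the environment or grant itself new privileges.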


The CISO’s Guide to the Cloud: Adapting Security for Cloud Computing, Part 2

This is part four of a series. You can read part one, part two, or part three; or track the project on GitHub. As a reminder, this is the second half of our section on examples for adapting security to cloud computing. As before this isn't an exhaustive list – just ideas to get you started.

Intelligently Encrypt

There are three reasons to encrypt data in the cloud, in order of their importance:

• Compliance.
• To protect data in backups, snapshots, and other portable copies or extracts.
• To protect data from cloud administrators.

How you encrypt varies greatly, depending on where the data resides and which particular risks most concern you. For example many cloud providers encrypt object file storage or SaaS by default, but they manage the keys. This is often acceptable for compliance but doesn't protect against a management plane breach. We wrote a paper on infrastructure encryption for cloud, from which we extracted some requirements which apply across encryption scenarios:

• If you are encrypting for security (as opposed to a compliance checkbox) you need to manage your own keys (a short sketch appears at the end of this section). If the vendor manages your keys your data may still be exposed in the event of a management plane compromise.
• Separate key management from cloud administration. Sure, we are all into DevOps and flattening management, but this is one situation where security should manage outside the cloud management plane.
• Use key managers that are as agile and elastic as the cloud. Like host security agents, your key manager needs to operate in an environment where servers appear and disappear automatically, and networks are virtual.
• Minimize SaaS encryption. The only way to encrypt data going to a SaaS provider is with a proxy, and encryption breaks the processing of data at the cloud provider. This reduces the utility of the service, so minimize which fields you need to encrypt. Or, better yet, trust your provider.
• Use secure cryptography agents and libraries when embedding encryption in hosts or IaaS and PaaS applications. The defaults for most crypto libraries used by developers are not secure. Either understand how to make them secure or use libraries designed from the ground up for security.

Federate and Automate Identity Management

Managing users and access in the cloud introduces two major headaches:

• Controlling access to external services without having to manage a separate set of users for each.
• Managing access to potentially thousands or tens of thousands of ephemeral virtual machines, some of which may only exist for a few hours.

In the first case, and often the second, federated identity is the way to go:

• For external cloud services, especially SaaS, rely on SAML-based federated identity linked to your existing directory server. If you deal with many services this can become messy to manage and program yourself, so consider one of the identity management proxies or services designed specifically to tackle this problem.
• For access to your actual virtual servers, consider managing users with a dynamic privilege management agent designed for the cloud. Normally you embed SSH keys (or known Windows admin passwords) as part of instance initialization (the cloud controller handles this for you). This is highly problematic for privileged users at scale, and even straight directory server integration is often quite difficult. Specialized agents designed for cloud computing dynamically update users, privileges, and credentials at cloud speeds and scale.
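
Circling back to the key management requirements above, here is a minimal envelope-encryption sketch using the Python cryptography package (Fernet), which has sane defaults. The "key manager" boundary is only simulated with local variables; in practice the master key would live in an HSM or a key management service that security operates outside the cloud management plane.

```python
from cryptography.fernet import Fernet

# Master key: held by security, outside the cloud provider's management plane.
master_key = Fernet.generate_key()
master = Fernet(master_key)

# Data key: generated per object or volume, used in the cloud,
# and stored only in wrapped (encrypted) form alongside the data.
data_key = Fernet.generate_key()
wrapped_data_key = master.encrypt(data_key)

ciphertext = Fernet(data_key).encrypt(b"customer record headed for object storage")

# The cloud side keeps (ciphertext, wrapped_data_key). Reading the data requires
# asking your key manager to unwrap the data key, so a compromise of the
# provider's management plane alone does not expose plaintext.
plaintext = Fernet(master.decrypt(wrapped_data_key)).decrypt(ciphertext)
assert plaintext == b"customer record headed for object storage"
```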

Adapt Network Security

Networks are completely virtualized in cloud computing, although different platforms use different architectures and implementation mechanisms, complicating the situation. Despite that diversity there are consistent traits to focus on. The key issues come down to loss of visibility using normal techniques, and adapting to the dynamic nature of cloud computing.

All public cloud providers disable network sniffing, and that is an option on all private cloud platforms. A bad guy can't hack a box and sniff the entire network, but you also can't implement IDS and other network security like in traditional infrastructure. Even when you can place a physical box on the network hosting the cloud, you will miss traffic between instances on the same physical server, and the network is so dynamic that instances appear and disappear too quickly to be treated like regular servers. You can sometimes use a virtual appliance instead, but unless the tool is designed to cloud specifications, even one that works in a virtual environment will break in a cloud due to performance and functional limitations. While you can embed more host network security in the images your virtual machines are based on, the standard tools typically won't work because they don't know exactly where on the network they will pop up, nor which addresses they need to talk to.

On a positive note, all cloud platforms include basic network security. Set your defaults properly, and every single server effectively comes with its own firewall. We recommend:

• Design a good baseline of Security Groups (the basic firewalls that secure the networking of each instance), and use tags or other mechanisms to automatically apply them based on server characteristics. A Security Group is essentially a firewall around every instance, offering compartmentalization that is extremely difficult to get in a traditional network.
• Use a host firewall, or host firewall management tool, designed for your cloud platform or provider. These connect to the cloud itself to pull metadata and configure themselves more dynamically than standard host firewalls. Also consider pushing more network security, including IDS and logging, into your instances.
• Prefer virtual network security appliances that support cloud APIs and are designed for the cloud platform or provider. For example, instead of forcing you to route all your virtual traffic through it as if you were on a physical network, the tool could distribute its own workload – perhaps even integrating with hypervisors.
• Take advantage of cloud APIs. It is very easy to pull every Security Group rule and then locate every instance. Combined with some additional basic tools you can then automate finding errors and omissions. Many cloud deployments do this today as a matter of course.
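
The last recommendation is easy to automate. Here is a sketch assuming AWS and boto3 that pulls every Security Group rule, then flags groups attached to instances while exposing ports to the entire Internet. The "bad rule" criteria are deliberately simple and illustrative, not a complete audit.

```python
import boto3

ec2 = boto3.client("ec2")

# Map Security Group ID -> instances currently using it.
group_usage = {}
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        for group in instance.get("SecurityGroups", []):
            group_usage.setdefault(group["GroupId"], []).append(instance["InstanceId"])

# Walk every rule in every group and flag 0.0.0.0/0 ingress on in-use groups.
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                            for r in rule.get("IpRanges", []))
        if open_to_world and group["GroupId"] in group_usage:
            print(f"{group['GroupId']} ({group['GroupName']}) allows "
                  f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')} "
                  f"from anywhere; attached to {group_usage[group['GroupId']]}")
```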


Black Hat Cloud Security Training (Beta) in Seattle Next Month

I am teaching another cloud security class for Black Hat. There are two classes, one on December 9-10 and the other December 11-12. This class covers the CCSK certificate requirements and includes a test token to sit the exam (online). We maintain the CCSK courseware, and it is time to try out some updated material. Specifically: We are streamlining the lecture day to reduce cruft and generally clean up the slides. We have even more real-world examples of how to get things done, based on our ongoing research. The labs are being updated for changes at Amazon Web Services. We are bringing more advanced material, as we did at Black Hat Vegas. The advanced material is not part of the core course, and we only get to it after the normal training requirements. It is an extension of the material I wrote about in the Software Defined Security paper. This class also qualifies as a Train the Trainer course, with some additional online training we offer for free after the class proper. If you want to become an instructor and sign up for this class, please email me and let me know ahead of time. Thanks, and hope to see you in Seattle!


You Cannot Outsource Accountability

Given our severe skills gap in security, managed services and other security outsourcing tactics continue to be very interesting to end users. Either that, or non-security senior management gets frustrated by the inability of the internal team to get anything done, so they look at having someone else take a crack. The NSS folks ask in their blog post, To Outsource or Not to Outsource, That is the Question! – but I don't think that's the right question. It's really more about what they can outsource, not whether to outsource at all. Although their first sentence does irk me: Is it a good thing that one of the fastest growing segments in the field of information security revolves around surrendering control of your security to another party? Surrendering control? Really? That kind of attitude will get you killed. If there is one thing I have learned over the years, it came from cleaning up the roadkill left by security folks who bought the hype and believed that a service provider would solve all their problems. But you can't outsource accountability. Then NSS went on to categorize some decision points for selecting a provider. And depending on what you are asking the provider to do, there are various nuances to making that selection. That's fine. But ultimately there must be someone inside the organization responsible for the security program. Really responsible, and empowered to make decisions. That person is responsible for allocating resources to get the job done. That could mean using internal staff, deploying technology, leveraging managed services, or deeper outsourcing. I am not religious about any specific mix, but I am religious about the need for someone internal to make those decisions.


The CISO’s Guide to the Cloud: Adapting Security for Cloud Computing

This is part three of a series. You can read part one or part two, or track the project on GitHub. This part is split into two posts – here is the first half:

Adapting Security for Cloud Computing

If you didn't already, you should now have a decent understanding of how cloud computing differs from traditional infrastructure. Now it's time to switch gears to how to evolve security to address shifting risks. These examples are far from comprehensive, but offer a good start and a sample of how to think differently about cloud security.

General Principles

As we keep emphasizing, taking advantage of the cloud poses new risks, as well as both increasing and decreasing existing risks. The goal is to leverage the security advantages, freeing up resources to cover the gaps. There are a few general principles for approaching the problem that help put you in the proper state of mind:

• You cannot rely on boxes and wires. Quite a bit of classical security relies on knowing the physical locations of systems, as well as the network cables connecting them. Network traffic in cloud computing is virtualized, which completely breaks this model. Network routing and security are instead defined by software rules. There are some advantages here, which are beyond the scope of this paper but which we will detail in future research.
• Security should be as agile and elastic as the cloud itself. Your security tools need to account for the highly dynamic nature of the cloud, where servers might pop up automatically and run for only an hour before disappearing forever.
• Rely more on policy-based automation. Wherever possible design your security to use the same automation as the cloud itself. For example there are techniques to automate (virtual) firewall rules based on tags associated with a server, rather than applying them manually (a brief sketch appears below).
• Understand and adjust for the characteristics of the cloud. Most virtual network adapters in cloud platforms disable network sniffing, so that risk drops off the list. Security Groups are essentially virtual firewalls around each individual instance, meaning you get full internal firewalls and compartmentalization by default. Security tools can be embedded in images or installation scripts to ensure they are always installed, and cloud-aware ones can self-configure. SAML can be used to provide absolute device and user authentication control to external SaaS applications. All these and more are enabled by the cloud, once you understand its characteristics.
• Integrate with DevOps. Not all organizations are using DevOps, but DevOps principles are pervasive in cloud computing. Security teams can integrate with this approach and leverage it themselves for security benefits, such as automating security configuration policy enforcement.

Defining DevOps

DevOps is an IT model that blurs the lines between development and IT operations. Developers play a stronger role in managing their own infrastructure through heavy use of programming and automation. Since cloud enables management of infrastructure using APIs, it is a major enabler of DevOps. While it is incredibly agile and powerful, without proper governance and policies it can also be disastrous, since it condenses many of the usual application development and operations checkpoints.
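
As a sketch of the tag-based automation mentioned in the principles above, here is one way it might look on AWS with boto3, under the assumption (ours, not AWS's) that every instance carries a "role" tag mapping to a pre-built baseline Security Group. The tag name and group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical mapping of role tag value -> baseline Security Group ID.
ROLE_TO_GROUP = {
    "web": "sg-0aaa1111bbbb2222c",
    "db": "sg-0ddd3333eeee4444f",
}

for reservation in ec2.describe_instances(
    Filters=[{"Name": "tag-key", "Values": ["role"]}]
)["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        desired = ROLE_TO_GROUP.get(tags.get("role"))
        current = [g["GroupId"] for g in instance["SecurityGroups"]]
        if desired and desired not in current:
            # Apply the baseline group in addition to whatever is already attached.
            ec2.modify_instance_attribute(
                InstanceId=instance["InstanceId"],
                Groups=current + [desired],
            )
            print(f"Applied {desired} to {instance['InstanceId']} (role={tags['role']})")
```

Run on a schedule or triggered by launch events, this keeps firewall policy tied to what a server is, not to where an administrator remembered to click.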

These principles will get you thinking in cloud terms, but let's look at some specifics.

Control the Management Plane

The management plane comprises the administrative interfaces, web and API, used to manage your cloud. It exists in all types of cloud computing service models: IaaS, PaaS, and SaaS. Someone who compromises a cloud administrator's credentials has the equivalent of unmonitored physical access to your entire data center, with enough spare hard drives, forklifts, and trucks to copy the entire thing and drive away. Or blow the entire thing up. We cannot overstate the importance of hardening the management plane. It literally provides absolute control over your cloud deployment – often including all disaster recovery. We have five recommendations for securing the management plane:

• If you manage a private cloud, ensure you harden the web and API servers, keeping all components up to date and protecting them with the highest levels of web application security. This is no different than protecting any other critical web server.
• Leverage the Identity and Access Management features offered by the management plane. Some providers offer very fine-grained controls. Most also integrate with your existing IAM using federated identity. Give preference to your platform/provider's controls and…
• Compartmentalize with IAM. No administrator should have full rights to all aspects of the cloud. Many providers and platforms support granular controls, including roles and groups, which you can leverage to restrict the damage potential of a compromised developer or workstation. For example, you can have a separate administrator for assigning IAM rights, only allow administrators to manage certain segments of your cloud, and further restrict them from terminating instances.
• Add auditing, logging, and alerting where possible. This is one of the more difficult problems in cloud security because few cloud providers audit administrator activity – such as who launched or stopped a server using the API. For now you will likely need a third-party tool, or to work with particular providers, for the necessary auditing.
• Consider using security or cloud management proxies. These tools and services proxy the connection between a cloud administrator and the public or private cloud management plane. They can apply additional security rules and fill logging and auditing gaps.

Automate Host (Instance) Security

An instance is a virtual machine, which is based on a stored template called an image. When you ask the cloud for a server you specify the image to base it on, which includes an operating system and might bring a complete single-server application stack. The cloud then configures it using scripts which can embed administrator credentials, provide an IP address, attach and format storage, etc. Instances may exist for years or minutes, are configured dynamically, and can be launched nearly anywhere in your infrastructure – public or private. You cannot rely on manually assessing and adjusting their security. This is very different than building a server in a test environment, performing a


Defending Against Application Denial of Service: Building Protections in

As we have discussed throughout this series, many types of attacks can impact the availability of your applications. To reiterate a number of points we made in Defending Against Denial of Service Attacks, your defenses need to be coordinated at multiple levels: at the network layer, in front of your application, within the application stack, and finally within the application itself. We understand this is a significant undertaking, and security folks have been trying for years to get developers on board with building security into applications – with little effect to date. That said, it doesn't mean you shouldn't keep pushing, especially given the relative ease of knocking down an application without proper defenses built in. We have found the best way to get everyone on board is to implement a structured web application security program that looks at each application in its entirety, and can be extended to add protections against denial of service attacks.

Web Application Security Process

Revisiting the process described in Building a Web Application Security Program, web applications need to be protected across the entire lifecycle:

• Secure Development: You start the process by building security into the software development lifecycle (SDLC). This includes training for people who deliver web applications, and improved processes to guide their activity. Security awareness training for developers is managed through education and supportive process modifications, as a precursor to making security a functional application requirement. This phase of the process leverages tools to automate portions of the effort: static analysis to help engineering identify vulnerable code, and dynamic analysis to detect anomalous application behavior.
• Secure Deployment: At the point where an application is code complete, and ready for more rigorous testing and validation, it is time to confirm that it doesn't suffer from serious known security flaws (vulnerabilities) and is configured so it is not subject to any known compromises. This is where you use vulnerability assessments and penetration testing – along with solid operational approaches to configuration analysis, threat discovery, patch levels, and operational consistency checking.
• Secure Operations: The last phase of the process moves from preventative tools and processes to detecting and reacting to events from production applications. Here you deploy technologies and/or services to front-end the application, including web application firewalls and web protection services. Some technologies can protect applications from unwanted uses; others only monitor requests for inappropriate activity.

To look at the specific aspects of what's required to deal with AppDoS attacks, let's walk through each step in the process.

Secure Development

In this phase we are looking to build the protections we have been discussing into the application(s). This involves making sure the application stack in use is insulated against HashDoS attacks, and that no database calls present an opportunity for misuse and excessive queries. The most impactful protections are input validation on form fields to mitigate buffer overflow, code injection, and other attacks that can break application logic. Understand that heavy input validation impacts application performance at scale, especially when under attack with a GET/POST flood or a similar attack. You should prioritize validating fields that require the least computational resources, and check them as early as possible – the sketch below illustrates this ordering.
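
Here is a sketch of that "cheapest checks first" ordering, assuming a Flask handler for a hypothetical search form. Field names, limits, and the expensive lookup are illustrative; the point is that length and format checks run before any database work is done on the request.

```python
import re
from flask import Flask, request, abort

app = Flask(__name__)
EMAIL_RE = re.compile(r"^[^@\s]{1,64}@[^@\s]{1,255}$")

@app.route("/search", methods=["POST"])
def search():
    query = request.form.get("q", "")
    email = request.form.get("email", "")

    # 1. Cheap structural checks first -- reject flood traffic before spending resources.
    if not (1 <= len(query) <= 200):
        abort(400)
    if email and not EMAIL_RE.match(email):
        abort(400)

    # 2. Moderately priced checks next (session validation, per-client counters)...
    # 3. ...and only then the expensive, database-backed work.
    return {"results": run_expensive_search(query)}

def run_expensive_search(query):
    """Placeholder for the real (costly) database query."""
    return []
```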

Extensive validation may exacerbate the flood attack and take down the application sooner, so you need to balance protection against performance when stress-testing the application prior to deployment. Also ensure your application security testing (static and dynamic) checks the application's robustness against denial of service attacks, including shopping cart and pagination attacks.

Secure Deployment

When deploying the application, make sure the stack has protections against the common web server DoS attacks, including Slowloris, Slow HTTP, and Apache Killer. You can check for these vulnerabilities with an application scanner or during a penetration test. Keep in mind that you will likely need some tuning to find the optimal timeout for session termination.

Secure Operations

Once the application goes into production the fun begins – you will be working with live ammunition. You can deploy an anti-DoS appliance or service, or a WAF (either product or service), to rate-limit slow HTTP attacks. This is also where a CDN or web protection service comes into play to absorb high-bandwidth attacks and intelligently cache static content to blunt the impact of random query string attacks. Finally, during the operational phase you will want to monitor the performance and responsiveness of the application, as well as track inbound traffic to detect emerging DoS attacks as early as possible. You developed profiles of normal application behavior earlier – now you can use them to identify attack traffic before you have an outage.

Finding the Right Mix

As we have described, you have a bunch of options to defend your applications against denial of service attacks, so how can you determine the right mix of cloud-based, server-based, and application-based protections? You need to think about each in terms of the effort and agility required to deploy at each level. Building protections into applications doesn't happen overnight – it is likely to require development process changes and a development cycle or three to implement proper security controls against this threat. The application may also require significant re-architecture – especially if the database-driven aspects of the application haven't been optimized. Keep in mind that new attacks and newly discovered vulnerabilities require you to revisit application security on an ongoing basis. Like other security disciplines, you never really finish securing your application.

Somewhat less disruptive is hardening the application stack, including the web server, APIs, and database. This tends to be an operational responsibility, so you will need to collaborate with the ops team to ensure the right protections, patches, and configurations are deployed on the servers. Finally, the quickest path to protection is to front-end your application with an anti-DoS device and/or a cloud-based CDN/website protection service to deal with flood attacks and simple application attacks. As we have mentioned, these defenses are not a panacea – you still need to harden the stack and protect the application as well. But


Friday Summary: November 15, 2013

There is lots I want to talk about this week, so I decided to resort to some three-dot blogging. A few years ago at the security bloggers meet-up, Jeremiah Grossman, Rich Mogull, and Robert Hansen were talking about browser security. After I rudely butted into the conversation they asked me if "the market" would be interested in a secure browser – one that was not compromised to allow marketing and advertising concerns to trump security. I felt no one would pay for it, but that the security community and financial services types would certainly be interested in such a browser. So I was totally jazzed when WhiteHat finally announced Aviator a couple weeks back. And work being what it has been, I finally got a chance to download it today and use it for a few hours. So far I miss nothing from Firefox, Safari, or Chrome. It's fast, navigation is straightforward, it easily imported all my Firefox settings, and preferences are simple – somewhat the opposite of Chrome, IMO. And I like being able to switch users as I switch between different ISPs/locations (i.e., tunnels to different cloud providers). I am not giving up my Fluid browsers dedicated to specific sites, but Fluid has been breaking for unknown reasons on some sites. But the Aviator and Little Snitch combination is pretty powerful for filtering and blocking outbound traffic. I recommend WhiteHat's post on the key differences between Aviator and Chrome. If you are looking for a browser that does not hemorrhage personal information to any and every website, download a copy of Aviator and try it out.

* * *

I also want to comment on the MongoHQ breach a couple weeks back. Typically, it was discovered by one of their tenant clients: Buffer. Now that some of the hype has died away, a couple facets of the breach should be clarified. First, MongoHQ is a Platform-as-a-Service (PaaS) provider, running on top of Amazon AWS and specializing in in-memory Mongo databases. But it is important to note that this is a breach of a small cloud service provider, rather than a database hack, as the press has incorrectly portrayed it. Second, many people assume that access tokens are inherently secure. They are not. Certain types of identity tokens, if stolen, can be used to impersonate you. Third, the real root cause was a customer support application that provided MongoHQ personnel "an 'impersonate' feature that enables MongoHQ employees to access our primary web UI as if they were a logged in customer". Yeah, that is as bad as it sounds, and not a feature you want accessible from just any external location. While the CEO stated "If access tokens were encrypted (which they are now) then this would have been avoided", that's just one way to prevent this issue. Amazon provides pretty good security recommendations, and this sort of attack is not possible if management applications are locked down with good security zone settings and restricted to AWS certificates for administrative access. Again, this is not a "big data hack" – it is a cloud service provider that was sloppy with their deployment.

* * *

It has been a strange year – I am normally "Totally Transparent" about what I am working on, but this year has involved several projects I can't talk about. Now that things have cleared up, I am moving back to a normal research schedule, and I have a heck of a lot to talk about.

I expect that during the next couple weeks I will begin work on:

• Risk-based Authentication: Simple questions like "who are you" and "what can you do" no longer have simple binary answers in this age of mobile computing. The answers are subjective and tinged with shades of gray. Businesses need to make access control decisions based on simple control lists, but simple lists are no longer adequate – they need to consider risk and behavior when making these decisions. Gunnar and I will explore this trend, and talk about the different techniques in use and the value they can realistically provide.
• Securing Big Data 2.0: The market has changed significantly in the 14 months since I last wrote about how to secure big data clusters. I will refresh that research, add sections on identity management, and take a closer look at application layer security – where a number of the known threats and issues persist.
• Two-factor Authentication: It is often discussed as the ultimate in security: a second authentication factor to make doubly sure you are who you claim to be. Many vendors are talking about it, both for and against, because of the hype. Our executive summary will look at usage, threats it can help address, and integration into existing systems.
• Understanding Mobile Identity Management: This will be a big one – a full-on research project in mobile identity management. We will publish a full outline in the coming weeks.
• Security Analytics with Big Data: I will release a series of targeted summaries of how big data works for security analytics, and how to start a security analytics program.

If you have questions on any of these, or if there are other topics you think we should be covering, shoot us an email. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian quoted on Trustwave's acquisition of Application Security.

Favorite Securosis Posts

Mike Rothman: How to Detect Cloudwashing by Your Vendors. – Love how Adrian and Gunnar put a pin in the marketing hyperbole around cloud now. And brace yourself – we will see a lot more over the next year.

Adrian Lane: The CISO's Guide to Cloud: How Cloud is Different for Security. This is good old-fashioned Securosis research. Focused. A bit ahead of the curve. Pragmatic. Enjoying this series.

Other Securosis Posts

• Incite 11/13/2013: Bully.
• New Series: What CISOs Need to Know about Cloud Computing.
• How to Edit Our Research on GitHub.
• Trustwave Acquires Application Security Inc.
• Security Awareness Training Evolution [New Paper].
• Blowing Your


Incite 11/13/2013: Bully

When you really see the underbelly of something, it is rarely pretty. The NFL is no different. Grown men are paid millions of dollars a year to display unbridled aggression, toughness, and competitiveness. That sounds like a pretty Darwinian environment, where the strong prey on the weak. And it is, given what we have seen over the last few weeks as behavior in the Miami Dolphins locker room comes to light. It is counterintuitive to think of a 320-pound offensive lineman being bullied by anyone. You hear about fights on the field and in the locker room as these alpha males all look to establish position within the pride. But how are the bullies in the Dolphins locker room any different than the petty mean girls and boys you had to deal with in high school? They aren't. If you take a step back, a bully is always compensating for some kind of self-perceived inadequacy that forces him or her to act out. Small people (even if they weigh 300+ pounds) make themselves feel bigger by making others feel smaller.

So the first question is whether the behavior is acceptable. I think everyone can agree racial epithets have no place in today's society. But what about the other tactics, such as mind games and intentionally excluding a fellow player from activities? I'm not sure that kind of hazing would normally be a huge deal, but combined with an environment of racial insensitivity, it probably crosses the line as well. What's more surprising is that no one stepped up and said that behavior was no bueno. Bullies prey on folks because folks who aren't directly targeted don't stand up and make clear what is acceptable and what isn't. But that has happened since the beginning of time. No one wants to stand up for what's right, so folks just watch catastrophic events happen.

Maybe this will be a catalyst to change the culture. There is nothing the NFL hates more than bad publicity. So things will change. Every other team in the NFL made statements about how their work environments are not like that. No one wants to be singled out as a bully or a bigot. Not when they have potential endorsement deals riding on their public image. Like most other changes, some old timers will resist. Others will adapt because they need to. And with the real-time nature of today's media, and rampant leaks within every organization, it is hard to see this kind of behavior happening again.

I guess I can't understand why players who call themselves brothers would treat each other so badly. Of course you beat up your little brother(s) when you are 10. But if you are still treating your siblings shabbily as adults, you need some help. Maybe I am getting a bit judgmental, especially considering that I have never worked in an NFL locker room, so I can't even pretend to understand the mindset. But I do know a bit about dealing with people. One of the key tenets of a functional and successful organization is to manage people in an individual fashion. A guy may be 320 pounds, an athletic freak, and capable of serious violence when the ball is snapped, but that doesn't mean he wants to get called names or fight a teammate to prove his worth. I learned the importance of managing people individually early in my career, mostly because it worked. This management philosophy is masterfully explained in First, Break All the Rules, which shows how much it matters to corporate performance to have happy employees who do what they love every day with people they care about. Clearly someone in Miami didn't get the memo.

And you have to wonder what kind of player Jonathan Martin could be if he worked in a place where he didn't feel singled out and persecuted, so he could focus on the task at hand: his blocking assignment for each play. Not whether he was going to get jumped in the parking lot. Maybe he'll even get a chance to find out, but it's hard to see that happening in Miami.

–Mike

Photo credit: "Bully Advance Screening Hosted by First Lady Katie O'Malley" originally uploaded by Maryland GovPics

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

• What CISOs Need to Know about Cloud Computing: Introduction
• Defending Against Application Denial of Service: Attacking the Application Stack; Attacking the Application Server; Introduction

Newly Published Papers

• Security Awareness Training Evolution
• Firewall Management Essentials
• Continuous Security Monitoring
• API Gateways
• Threat Intelligence for Ecosystem Risk Management
• Dealing with Database Denial of Service
• Identity and Access Management for Cloud Services
• The 2014 Endpoint Security Buyer's Guide
• The CISO's Guide to Advanced Attackers

Incite 4 U

What is it that you do? I have to admit that I really did not understand analysts or the entire analyst industry prior to joining Securosis. Analysts were the people on our briefing calendar who were more knowledgeable – and far more arrogant – than the press. But they did not seem to have a clear role, nor was their technical prowess close to what they thought it was. I was assured by our marketing team that they were important, but I could not see how. Now I do, but the explanation needs to be repeated every so often. The aneelism blog has a nice primer on technology analyst 101 for startups. Long story short, some analysts speak with customers as independent advisors, which means two things for small security vendors: we are told things customers will never tell you directly, and we see a breadth of industry issues & trends you won't because you are focused on your own stuff and try to wedge


The CISO’s Guide to the Cloud: How the Cloud Is Different for Security

This is part two of a series. You can read part one here or track the project on GitHub.

How the Cloud Is Different for Security

In the early days of cloud computing, even some very well-respected security professionals claimed it was little more than a different kind of outsourcing, or equivalent to the multitenancy of a mainframe. But the differences run far deeper, and we will show how they require different cloud security controls. We know how to manage the risks of outsourcing or multi-user environments; cloud computing security builds on this foundation and adds new twists. These differences boil down to abstraction and automation, which separate cloud computing from basic virtualization and other well-understood technologies.

Abstraction

Abstraction is the extensive use of multiple virtualization technologies to separate compute, network, storage, information, and application resources from the underlying physical infrastructure. In cloud computing we use this to convert physical infrastructure into a resource pool that is sliced, diced, provisioned, deprovisioned, and configured on demand, using the automation we will talk about next. It really is a bit like The Matrix. Individual servers run little more than a hypervisor with connectivity software to link them into the cloud, and the rest is managed by the cloud controller. Virtual networks overlay the physical network, with dynamic configuration of routing at all levels. Storage hardware is similarly pooled, virtualized, and then managed by the cloud control layers. The entire physical infrastructure, less some dedicated management components, becomes a collection of resource pools. Servers, applications, and everything else runs on top of the virtualized environment.

Abstraction impacts security significantly in four ways:

• Resource pools are managed using standard, web-based (REST) Application Programming Interfaces (APIs). The infrastructure is managed with network-enabled software at a fundamental level.
• Security can lose visibility into the infrastructure. On the network we can't rely on physical routing for traffic inspection or management. We don't necessarily know which hard drives hold which data.
• Everything is virtualized and portable. Entire servers can migrate to new physical systems with a few API calls or a click on a web page.
• We gain greater pervasive visibility into the infrastructure configuration itself. If the cloud controller doesn't know about a server it cannot function. We can map the complete environment with those API calls.

We have focused on Infrastructure as a Service, but the same issues apply to Platform and Software as a Service, except they often offer even less visibility.

Automation

Virtualization has existed for a long time. The real power cloud computing adds is automation. In basic virtualization and virtual data centers we still rely on administrators to manually provision and manage our virtual machines, networks, and storage. Cloud computing turns these tasks over to the cloud controller, which coordinates all these pieces (and more) using orchestration. Users ask for resources via web page or API call, such as a new server with 1 TB of storage on a particular subnet, and the cloud determines how best to provision it from the resource pool; then it handles installation, configuration, and coordinating all the networking, storage, and compute resources to pull everything together into a functional and accessible server. No human administrator required.
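
To show how literal that is, here is a sketch of such a request, assuming AWS and boto3. The AMI ID, subnet, and key pair name are placeholders; the point is that an entire server – compute, storage, and network placement – is a single call to the cloud controller, with no administrator in the loop.

```python
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical hardened image
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc1234",             # the "particular subnet"
    KeyName="ops-keypair",                  # hypothetical key pair
    BlockDeviceMappings=[{                  # attach roughly 1 TB of storage
        "DeviceName": "/dev/xvdb",
        "Ebs": {"VolumeSize": 1024, "VolumeType": "gp3"},
    }],
)
print("Provisioned:", response["Instances"][0]["InstanceId"])
```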

Alternatively, the cloud can monitor demand on a cluster and add or remove fully load-balanced and configured systems based on rules, such as average system utilization over a specified threshold. Need more resources? Add virtual servers. Systems underutilized? Drop them back into the resource pool. In public cloud computing this keeps costs down as you expand and contract based on what you need. In private clouds it frees resources for other projects and requirements, but you still need a shared resource pool to handle overall demand. But you are no longer stuck with under-utilized physical boxes in one corner of your data center and inadequate capacity in another. The same applies to platforms (including databases or application servers) and software; you can expand and contract database storage, software application server capacity, and storage as needed – without additional capital investment.

In the real world it isn't always so clean. Heavy use of public cloud may exceed the costs of owning your own infrastructure. Managing your own private cloud is no small task, and is rife with pitfalls. And abstraction does reduce performance at certain levels, at least for now. But with the right planning, and as the technology continues to evolve, the business advantages are undeniable.

The NIST model of cloud computing is the best framework for understanding the cloud. It consists of five Essential Characteristics, three Service Models (IaaS, PaaS, and SaaS), and four Deployment Models (public, private, hybrid, and community). Our characteristic of abstraction generally maps to resource pooling and broad network access, while automation maps to on-demand self-service, measured service, and rapid elasticity. We aren't proposing a different model, just overlaying the NIST model to better describe things in terms of security.

Thanks to this automation and orchestration of resource pools, clouds are incredibly elastic, dynamic, agile, and resilient. But even more transformative is the capability for applications to manage their own infrastructure, because everything is now programmable. The lines between development and operations blur, offering incredible levels of agility and resilience, which is one of the concepts underpinning the DevOps movement. But of course done improperly it can be disastrous.

Cloud, DevOps, and Security in Practice: Examples

Here are a few examples that highlight the impact of abstraction and automation on security. We will address the security issues later in this paper.

Autoscaling: As mentioned above, many IaaS providers support autoscaling. A monitoring tool watches server load and other variables. When the average load of virtual machines exceeds a configurable threshold, new instances are launched from the same base image with advanced initialization scripts. These scripts can automatically configure all aspects of the server, pulling metadata from the cloud or a configuration management server. Advanced tools can configure entire application stacks. But these servers may only exist for a short period, perhaps never during a vulnerability


Defending Against Application Denial of Service: Abusing Application Logic

We looked at application denial of service in terms of attacking the application server and the application stack, so now let's turn our attention to attacking the application itself. Clearly every application contains weaknesses that can be exploited, especially when the goal is simply to knock the application offline rather than something more complicated, such as stealing credentials or gaining access to the data. That lower bar of taking the application offline means more places to attack.

If we bust out the kill chain to illuminate attack progression, let's first focus on the beginning: reconnaissance. That's where the process starts for application denial of service attacks as well. The attackers need to find the weak points in the application, so they assess it to figure out which pages consume the most resources, the kinds of field-level validation on forms, and the supported attributes on query strings. For instance, if a form field does a ton of field-level validation, or a page needs to make multiple database calls to multiple sites to render, that page would be a good target to blast. Serving dynamic content requires a bunch of database calls to populate the page, and each call consumes resources. The point is to consume as many resources as possible to impact the application's ability to serve legitimate traffic.

Flooding the Application

In our Defending Against Denial of Service Attacks paper, we talked about how network-based attacks flood the pipes. Targeting resource-intensive pages with either GET or POST requests (or both) provides an equivalent application flooding attack, exhausting the server's session and memory capacity. Attackers flood a number of different parts of web applications, including:

• Top-level index page: This one is straightforward and usually has the fewest protections because it's open to everyone. When blasted by tens of thousands of clients simultaneously, the server can become overwhelmed.
• Query string "proxy busting": Attackers can send a request to bypass any proxy or cache, forcing the application to generate and send new information, and eliminating the benefit of a CDN or other cache in front of the application. The impact can be particularly acute when requesting large PDFs or other files repeatedly, consuming excessive bandwidth and server resources.
• Random session cookies/tokens: By establishing thousands of sessions with the application, attackers can overload session tables on the server and impact its ability to serve legitimate traffic.

Flood attacks can be detected rather easily (unlike the slow attacks described in Attacking the Server), providing an opportunity to rate-limit the attack while allowing legitimate traffic through. Of course this approach puts a premium on accuracy, as false positives slow down or discard legitimate traffic, and false negatives allow attacks to consume server resources. To accurately detect application floods you need a detailed baseline of legitimate traffic, tracking details such as URL distribution, request frequency, maximum requests per device, and outbound traffic rates. With this data a legitimate application behavior profile can be developed. You can then compare incoming traffic (usually on a WAF or application DoS device) against the profile to identify bad traffic, and then limit or block it. Another tactic to mitigate application floods is input validation on all form fields, to ensure requests neither overflow application buffers nor misuse application resources.
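
To make the baseline comparison concrete, here is a sketch under stated assumptions: request logs are available as (client, URL, timestamp) tuples, and a per-client requests-per-minute ceiling was learned during normal operation. The thresholds and data structures are illustrative; in production this logic typically lives on a WAF or dedicated anti-DoS device rather than in the application.

```python
from collections import defaultdict

BASELINE_MAX_RPM = 120    # learned ceiling of requests per minute per client
FLAG_MULTIPLIER = 3       # how far past the baseline before we act

def find_flooders(requests, window_start, window_end):
    """Return clients whose request rate in the window far exceeds the baseline."""
    per_client = defaultdict(int)
    for client_ip, url, ts in requests:
        if window_start <= ts < window_end:
            per_client[client_ip] += 1

    minutes = max((window_end - window_start) / 60.0, 1.0)
    return {
        ip: count / minutes
        for ip, count in per_client.items()
        if count / minutes > BASELINE_MAX_RPM * FLAG_MULTIPLIER
    }

# Flagged clients are then rate-limited or blocked upstream,
# while everyone else continues to be served normally.
```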

If you are using a CDN to front-end your application, make sure it can handle random query string attacks and that you are benefiting from the caching service. Given the ability of some attackers to bypass a CDN (assuming you have one), you will want to ensure your input validation ignores random query strings. You can also leverage IP reputation services to identify bot traffic and limit or block it. That requires coordination between the application and network-based defenses, but it is effective for detecting and limiting floods.

Pagination

A pagination attack involves asking the web application to return an unreasonable number of results by expanding the PageSize query parameter to circumvent limits. This can return tens of thousands or even millions of records. Obviously this consumes significant database resources, especially when servicing multiple requests at the same time. These attacks are typically launched against the search page. Another tactic for overwhelming applications is to use a web scraper to capture information from dynamic content areas such as store locators and product catalogs. If the scraper is not throttled it can overwhelm the application by scraping over and over again.

Mitigation for most pagination attacks must be built into the application. For example, regardless of the PageSize parameter, the application should limit the number of records returned. Likewise, you will want to limit the number of search requests the site will process simultaneously. You can also leverage a Content Delivery Network or web protection service to cache static information and limit search activity. Alternatively, embedding complicated JavaScript on the search pages can deter bots.

Gaming the Shopping Cart

Another frequently exploited legitimate function is the shopping cart. An attacker might put a few items in a cart and then abandon it for a few hours. At some point they come back and refresh the cart, causing the session to be maintained and the database to reload the cart. If the attacker has put tens of thousands of products into the cart, this consumes significant resources. Shopping cart mitigations include limiting the number of items that can be added to a cart and periodically clearing out carts with too many items. You will also want to periodically terminate sufficiently old carts to reclaim session space and flush abandoned carts.

Combination Platter

Attackers are smart. They have figured out that they can combine many of these attacks with devastating results. For instance an attacker could launch a volume-based network attack on a site. Then start a GET flood on legitimate pages, limited to avoid looking like a network attack. Follow up with a slow HTTP attack so any traffic that does make it through consumes application resources. Finally they might attack the shopping cart or store locator, which looks like legitimate activity.
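
Returning to the pagination and shopping cart mitigations above, here is a small framework-agnostic sketch of building those limits into the application. The parameter names (PageSize) and caps are placeholders for whatever the application actually uses.

```python
MAX_PAGE_SIZE = 50       # hard ceiling regardless of what the client requests
MAX_CART_ITEMS = 100     # ceiling for the shopping cart mitigation

def clamp_page_size(params):
    """Ignore absurd PageSize values instead of passing them to the database."""
    try:
        requested = int(params.get("PageSize", 20))
    except (TypeError, ValueError):
        requested = 20
    return max(1, min(requested, MAX_PAGE_SIZE))

def add_to_cart(cart_items, product_id):
    """Refuse to grow a cart past the cap; oversized carts get flagged for cleanup."""
    if len(cart_items) >= MAX_CART_ITEMS:
        raise ValueError("cart limit reached")
    cart_items.append(product_id)

# The query layer then only ever sees bounded requests, for example:
#   db.search(query, limit=clamp_page_size(request.args), offset=...)
```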


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.