Incite 4/6/2016—Hindsight

When things don’t go quite as you hoped, it’s human nature to look backwards and question your decisions. If you had done something different, maybe the outcome would be better. If you hadn’t done the other thing, maybe you’d be in a different spot. We all do it. Some more than others. It’s almost impossible not to wonder what would have been. But you have to be careful playing Monday Morning QB. If you wallow in a decision that didn’t go well, you end up stuck in a house of pain. You probably don’t have a time machine, so whatever happened is already done. All you have left is a learning opportunity to avoid making the same mistakes again. That is a key concept, and I work to learn from every situation. I want to have an idea of what I would do if I found myself in a similar situation again down the line. Sometimes this post-mortem is painful – especially when the decision you made or action you took was idiotic in hindsight. And I’ve certainly done my share of idiotic things through the years.

The key to leveraging hindsight is not to get caught up in it. Learn from the situation and move on. Try not to beat yourself up over and over again about what happened. This is easy to say and very hard to do. So here is how I make sure I don’t get stuck after something doesn’t exactly meet my expectations.

Be Objective: You may be responsible for what happened. If you are, own it. Don’t point fingers. Understand exactly what happened, and how your actions contributed to the eventual outcome. Also understand that some things were going to end badly regardless of what you did, so accept that as well.

Speculate on what could be different: Next take some time to think about how different actions could have produced different outcomes. You can’t be absolutely sure that a different action would have worked out better, but you can certainly come up with a couple scenarios and determine what you want to do if you are in that situation again. It’s like a game where you can choose different paths.

Understand you’ll be wrong: Even if you evaluate 10 different options for a scenario, next time around there will be something you can’t anticipate. You are dealing with speculation, and that’s always dicey.

Don’t judge yourself: At this point you have done what you can do. You owned your part in however the situation ended up. You figured out what you’ll do differently next time. It’s over, so let it go and move forward. You learned what you needed, and that’s all you can ask for.

That’s really the point. Fixating on what’s already happened closes off future potential. If you are always looking behind you, you can neither appreciate nor take advantage of what’s ahead. This was a hard lesson for me. I did the same stuff for years, and was confused because nothing changed. It took me a long time to figure out what needed to change, which of course turned out to be me. But it wasn’t wasted time. I’m grateful for all my experiences, especially the challenges. I’ve had plenty of opportunities to learn, and will continue to screw things up and learn more. I know myself much better now, and understand that I need to keep moving forward. So that’s what I do. Every single day.

–Mike

Photo credit: “Hindsight” from The.Rohit

Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business. We’ve published this year’s Securosis Guide to the RSA Conference.
It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

• Mar 16 – The Rugged vs. SecDevOps Smackdown
• Feb 17 – RSA Conference – The Good, Bad and Ugly
• Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
• Nov 16 – The Blame Game
• Nov 3 – Get Your Marshmallows
• Oct 19 – re:Invent Yourself (or else)
• Aug 12 – Karma
• July 13 – Living with the OPM Hack
• May 26 – We Don’t Know Sh–. You Don’t Know Sh–
• May 4 – RSAC wrap-up. Same as it ever was.
• March 31 – Using RSA
• March 16 – Cyber Cash Cow
• March 2 – Cyber vs. Terror (yeah, we went there)
• February 16 – Cyber!!!
• February 9 – It’s Not My Fault!

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

• Resilient Cloud Network Architectures: Design Patterns; Fundamentals
• Shadow Devices: The Exponentially Expanding Attack Surface
• Building a Vendor IT Risk Management Program: Program Structure; Understanding Vendor IT Risk
• SIEM Kung Fu: Getting Started and Sustaining Value; Advanced Use Cases; Fundamentals

Recently Published Papers

• Securing Hadoop
• Threat Detection Evolution
• Building Security into DevOps
• Pragmatic Security for Cloud and Hybrid Networks
• EMV Migration and the Changing Payments Landscape
• Applied Threat Intelligence
• Endpoint Defense: Essential Practices
• Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
• Monitoring the Hybrid Cloud
• Best Practices for AWS Security
• The Future of Security

Incite 4 U

Still no free lunch, even if it’s fake: Troy Hunt’s post is awesome, digging into how slimy free websites gather


Maximizing Value From Your WAF [New Series]

Web Application Firewalls (WAFs) have been in production use for well over a decade, maturing from point solutions primarily blocking SQL injection into mature application security tools. In most mature security product categories, such as anti-virus, there hasn’t been much to talk about, aside from complaining that not much has changed over the last decade. WAFs are different: they have continued to evolve in response to new threats, new deployment models, and a more demanding clientele’s need to solve more complicated problems. From SQL injection to cross-site scripting (XSS), from PCI compliance to DDoS protection, and from cross-site request forgeries (CSRF) to 0-day protection, WAFs have continued to add capabilities to address emerging use cases. But WAF’s greatest evolution has taken place in areas undergoing heavy disruption, notably cloud computing and threat analytics.

WAFs are back at the top of our research agenda because users continue to struggle with managing WAF platforms as threats continue to evolve. The first challenge is that attacks targeting the application layer require more than simple analysis of individual HTTP requests – they demand systemic analysis of the entire web application session. Detection of typical modern attack vectors, including automated bots, content scraping, fraud, and other types of misuse, requires more information and deeper analysis. Second, as the larger IT industry flails to find security talent to manage WAFs, customers struggle to keep existing devices up and running; they have no choice but to emphasize ease of set-up and management during product selection.

So we are updating our WAF research. This brief series will discuss the continuing need for Web Application Firewall technologies, and address the ongoing struggles of organizations to run WAFs. We will also focus on decreasing the time to value for WAF, by updating our recommendations for standing up a WAF for the first time, discussing what it takes to get a basic set of policies up and running, and covering the new capabilities and challenges customers face.

WAF’s Continued Popularity

The reason WAF emerged in the first place, and still one of the most common reasons customers use it, is that no other product really provides protection at the application layer. Cross-site scripting, request forgeries, SQL injection, and many other attacks which specifically target application stacks tend to go undetected. Intrusion Detection Systems (IDS) and general-purpose network firewalls are poorly suited to protecting the application layer, and remain largely ineffective for that use case. To detect application misuse and fraud, a security solution must understand the dialogue between application and end user. WAFs were designed for this need: they understand application protocols so they can identify applications under attack. For most organizations, WAF is still the only way to get some measure of protection for applications.

For many years sales of WAFs were driven by compliance, specifically a mandate from the Payment Card Industry’s Data Security Standard (PCI-DSS). The standard gave firms the option to either build security into their applications (very hard), or protect them with WAF (easier). The validation requirements for WAF deployments are far less rigorous than for secure code development, so most companies opted for WAF. Shocking! You basically plug one in and get a certification. WAF has long been the fastest and most cost-effective way to satisfy Requirement 6 of the PCI-DSS standard, but it turns out there is long-term value as well. Users now realize that leveraging a WAF is both faster and cheaper than fixing bug-ridden legacy applications. The need has morphed from “get compliant fast!” to “secure legacy apps for less!”
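To ground the policy discussion before digging into limitations, here is a toy sketch of the negative security model early WAFs relied on: match each request parameter against known attack signatures. This is purely illustrative – the patterns and request handling here are minimal stand-ins, and real WAF engines also normalize encodings, track sessions, and maintain whitelists of approved requests.

    # Toy negative-security-model check: flag request parameters matching
    # known SQL injection / XSS signatures. Illustrative only -- real WAF
    # rule sets are vastly more complete.
    import re
    from urllib.parse import parse_qs

    SIGNATURES = [
        re.compile(r"(?i)\bunion\b.+\bselect\b"),  # classic SQLi probe
        re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),     # tautology-based SQLi
        re.compile(r"(?i)<\s*script"),             # reflected XSS attempt
    ]

    def inspect_query_string(query_string):
        """Return (param, value, pattern) tuples for any signature hits."""
        hits = []
        for param, values in parse_qs(query_string).items():
            for value in values:
                for pattern in SIGNATURES:
                    if pattern.search(value):
                        hits.append((param, value, pattern.pattern))
        return hits

    print(inspect_query_string("id=1+UNION+SELECT+password+FROM+users"))  # hit
    print(inspect_query_string("id=42"))                                  # clean

The catch, as the next section explains, is that rules like these only have value if someone keeps them current – for both new attacks and new application behavior.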
WAF Limitations

The value of WAF is muted by difficulties in deployment and ongoing platform management. A tool cannot provide sustainable value if it cannot be effectively deployed and managed. The last thing organizations need is yet another piece of software sitting on a shelf – or even worse, an out-of-date WAF providing a false sense of security. Our research highlighted the following issues, which contribute to insecure WAF implementations and allow penetration testers and attackers to evade WAFs and target applications directly.

Ineffective Policies: Most firms complain about maintaining WAF policies. Some complaints are about policies falling behind new application features; others concern policies which fail to keep pace with emerging threats. Equally troubling is a lack of information on which policies are effective, so security professionals are flying blind. Better metrics and analytics are needed to tell users what’s working and how to improve.

Breaking Apps: Security policies – the rules that determine what a WAF blocks and what passes through to applications – can and do sometimes block legitimate traffic. Web application developers are incentivized to push new code as often as possible. Code changes and new functionality often violate existing policies, so unless someone updates the whitelist of approved application requests for every application change, a WAF will block legitimate requests. Predictably, this pisses off customers and operational folks alike. Firms trying to “ratchet up” security by tightening policies may also break applications, or generate too many false positives for the SOC to handle, leading to legitimate attacks going ignored and unaddressed in a flood of irrelevant alerts.

Skills Gap: As we all know, application security is non-trivial. The skills to understand spoofing, fraud, non-repudiation, denial of service attacks, and application misuse within the context of an application are rarely all found in any one individual. But they are all needed to be an effective WAF administrator. Many firms – especially those in retail – complain that “we are not in the business of security” – they want to outsource WAF management to someone with the necessary skills. Still others find their WAF in purgatory after their administrator is offered more money, leaving behind no one who understands the policies. But outsourcing is no panacea – even a third-party service provider needs the configuration to be somewhat stable and burned-in before they can accept managerial responsibility. Without in-house talent for configuration, you are hiring professional services teams to get up and running, and then scrambling to find budget for this unplanned expense.

Cloud Deployments: Your on-premise applications are covered


Incite 3/30/2016: Rational People Disagree

It’s definitely a presidential election year here in the US. My Twitter and Facebook feeds are overwhelmed with links about what this politician said and who that one offended. We get to learn how a 70-year-old politician got arrested in his 20s and why that matters now. You also get to understand that there are a lot of different perspectives, many of which make absolutely no sense to you. Confirmation bias kicks into high gear, because when you see something you don’t agree with, you instinctively ignore it, or come up with a million reasons why it’s dead wrong. I know mine does. Some of my friends frequently share news about their chosen candidates, and even more link to critical information about the enemy. I’m not sure whether they do this to make themselves feel good, to commiserate with people who think just like them, or in an effort to influence folks who don’t. I have to say this can be remarkably irritating, because nothing any of these people posts is going to sway my fundamental beliefs.

That got me thinking about one of my rules for dealing with people. I don’t talk about religion or politics. Unless I’m asked. And depending on the person, I might not engage even if asked. Simply because nothing I say is going to change someone’s story regarding either of those two third rails of friendship. I will admit to scratching my head at some of the stuff people I know post to social media. I wonder if they really believe that stuff, or are just trolling everyone. But at the end of the day, everyone is entitled to their opinion, and it’s not my place to tell them their opinion is idiotic. Even if it is.

I try very hard not to judge people based on their stories and beliefs. They have different experiences and priorities than I do, and that results in different viewpoints. But not judging gets pretty hard between March and November every 4 years. At least 4 or 5 times a day I’m tempted to click the unfollow link when something particularly offensive (to me) shows up in my feed. But I don’t hit the button to actually unfollow anyone. I use the fact that I was triggered as an opportunity to pause and reflect on why that specific headline, post, link, or opinion bothers me so much. Most of the time it’s just exhaustion. If I see one more thing about a huge fence or bringing manufacturing jobs back to the US, I’m going to scream. I get that these are real issues which warrant discussion. But in a world with a 24/7 media cycle, the discussion never ends.

I’m not close-minded, although it may seem that way. I’m certainly open to listening to other candidates’ views, mostly to understand the other side of the discussion and continually refine and confirm my own positions. But I have some fundamental beliefs that will not change. And no, I’m not going to share them here (that third rail again!). I know that rational people can disagree, and that doesn’t mean I don’t respect them, or that I don’t want to work together or hang out and drink beer. It just means I don’t want to talk about religion or politics.

–Mike

Photo credit: “Laugh-Out-Loud Cats #2204” from Ape Lad

Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business. We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).
The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

• Mar 16 – The Rugged vs. SecDevOps Smackdown
• Feb 17 – RSA Conference – The Good, Bad and Ugly
• Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
• Nov 16 – The Blame Game
• Nov 3 – Get Your Marshmallows
• Oct 19 – re:Invent Yourself (or else)
• Aug 12 – Karma
• July 13 – Living with the OPM Hack
• May 26 – We Don’t Know Sh–. You Don’t Know Sh–
• May 4 – RSAC wrap-up. Same as it ever was.
• March 31 – Using RSA
• March 16 – Cyber Cash Cow
• March 2 – Cyber vs. Terror (yeah, we went there)
• February 16 – Cyber!!!
• February 9 – It’s Not My Fault!

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

• Resilient Cloud Network Architectures: Design Patterns; Fundamentals
• Shadow Devices: The Exponentially Expanding Attack Surface
• Building a Vendor IT Risk Management Program: Program Structure; Understanding Vendor IT Risk
• Securing Hadoop: Architectural Security Issues; Architecture and Composition; Security Recommendations for NoSQL platforms
• SIEM Kung Fu: Getting Started and Sustaining Value; Advanced Use Cases; Fundamentals

Recently Published Papers

• Threat Detection Evolution
• Building Security into DevOps
• Pragmatic Security for Cloud and Hybrid Networks
• EMV Migration and the Changing Payments Landscape
• Applied Threat Intelligence
• Endpoint Defense: Essential Practices
• Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
• Security and Privacy on the Encrypted Network
• Monitoring the Hybrid Cloud
• Best Practices for AWS Security
• The Future of Security

Incite 4 U

That depends on your definition of consolidation: Stiennon busts out his trusty spreadsheet of security companies and concludes that the IT security industry is not consolidating. He has numbers. Numbers! That prove there is a


Resilient Cloud Network Architectures: Design Patterns

We introduced resilient cloud networks in this series’ first post. We define them as networks using cloud-specific features to provide both stronger security and higher availability for your applications. This post will dig into two different design patterns, and show how cloud networking enables higher resilience.

Network Segregation by Default

Before we dive into design patterns, let’s make sure we are all clear on using network segmentation to improve your security posture, as discussed in our first post. We know segmentation isn’t novel, but it is still difficult in a traditional data center. Infrastructure running different applications gets intermingled, just to efficiently use existing hardware. Even in a totally virtualized data center, segmentation requires significant overhead and management to keep all applications logically isolated – which makes it rare. What is the downside of not segmenting properly? It offers adversaries a clear path to your most important stuff. They can compromise one application and then move deeper into your environment, accessing resources not associated with the application stack they first compromised. So if they bust one application, there is a high likelihood they’ll end up with free rein over everything in the data center.

The cloud is different. Each server in a cloud environment is associated with a security group, which defines with very fine granularity which other devices it can communicate with, and over what protocols. This effectively enables you to contain an adversary’s ability to move within your environment, even after compromising a server or application. This concept is often called limiting blast radius: if one part of your cloud environment goes boom, the rest of your infrastructure is unaffected. This is a key concept in cloud network architecture, highlighted in the design patterns below.

PaaS Air Gap

To demonstrate a more secure cloud network architecture, consider an Internet-facing application with both web server and application server tiers. Due to the nature of the application, communications between the two layers run through message queues and notifications, so the web servers don’t need to communicate directly with the app servers. The application server tier connects to the database (a Platform as a Service offering from the cloud provider). The application server tier also communicates with a traditional data center to access other internal corporate data outside the cloud environment. An application must be architected from the get-go to support this design. You aren’t going to redeploy your 20-year-old legacy general ledger application to this design. But if you are architecting a new application, or can rearchitect existing applications, and want total isolation between environments, this is one way to do it. Let’s describe the design.

Network Security Groups

The key security control typically used in this architecture is a Network Security Group, allowing access to the app servers only from the web servers, and only over the specific port and protocol required. This isolation limits blast radius. To be clear, the NSG is applied individually to each instance – not to subnets. This avoids a flat network, where all instances within a subnet have unrestricted access to all subnet peers.
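As a concrete illustration of that control, here is roughly what such an ingress rule looks like on AWS using boto3 (other providers expose equivalent controls). The group IDs and port are hypothetical placeholders, not a prescription:

    # Sketch: allow the app tier to accept traffic only from the web tier's
    # security group, on a single port. Group IDs here are hypothetical.
    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-apptier0001",  # app server security group (placeholder)
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            # Reference the web tier group rather than a CIDR block, so the
            # rule follows instances as auto-scaling adds and removes them.
            "UserIdGroupPairs": [{"GroupId": "sg-webtier0001"}],
        }],
    )

With no other ingress rules, nothing else – not even instances in the same subnet – can reach the app servers. The air gap design described next goes a step further, and doesn’t open even this path.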
PaaS Services

In this application you wouldn’t open access from the web server NSG to the app server NSG, because the architecture doesn’t require direct communication between the tiers. Instead the cloud provider offers a message queue platform and notification service, which provide asynchronous communication between the web and application tiers. So even if the web servers are compromised, the app servers are not accessible. Further isolation is offered by a PaaS database, also offered by the cloud service provider. You can restrict requests to the PaaS DB to specific Network Security Groups. This ensures only the right instances can request information from the database service, and all requests are authorized.

Connection to the Data Center

The application may require data from the data center, so the app servers have access to the needed data through a VPN. You route all traffic to the data center through this inspection and control point. Typically it’s better not to route cloud traffic through inspection bottlenecks, but in this design pattern it’s not a big deal, because the traffic needs to pass over a specific egress connection to the data center anyway, so you might as well inspect there too. You ensure ingress traffic over that connection can only reach the app server security group. This ensures that an adversary who compromises your data center cannot access your whole cloud network by bouncing through that connection.

Advantages of This Design

Isolation between Web and App Servers: By putting the auto-scaling groups in separate Network Security Groups, you restrict their access to everything else.

No Direct Connection: In this design pattern you can block direct traffic to the application servers from anywhere but the VPN. Intra-application traffic is asynchronous, via the message queue and notification service, so isolation is complete.

PaaS Services: This architecture uses cloud provider services with strong built-in security and resilience. Cloud providers understand that security and availability are core to their business.

What’s next for this kind of architecture? To advance it you could deploy mirrors of the application in different zones within a region, to limit the blast radius if one device is compromised, and to provide additional resiliency in case of a zone failure. Additionally, if you use immutable servers within each auto-scale group, you can update/patch/reconfigure instances automatically by changing the master image and having auto-scaling replace the old instances with new ones. This limits configuration drift and adversary persistence.

Multi-Region Web Site

This architecture was designed to deploy a website in multiple regions, with availability as close to 100% as possible. This design is far more expensive than even running in multiple zones within a single region, because you need to pay for network traffic between regions (intra-region traffic is free); but if uptime is essential to your business, this architecture improves resiliency. This is an externally facing application, so you run traffic through a cloud WAF to get rid of obvious attack traffic. Inbound sessions can


Securing Hadoop: Security Recommendations for Hadoop [New Paper]

We are pleased to release our updated white paper on big data security: Securing Hadoop: Security Recommendations for Hadoop Environments. Just about everything has changed in the four years since we published the original. Hadoop has solidified its position as the dominant big data platform by constantly advancing in function and scale. While the ability to customize a Hadoop cluster to suit diverse needs has been its main driver, the security advances make Hadoop viable for enterprises. Whether embedded directly into Hadoop or deployed as add-on modules, services like identity, encryption, log analysis, key management, cluster validation, and fine-grained authorization are all available.

Our goal for this research paper is first to introduce these technologies to IT and security teams, and then to help them assemble these technologies into a coherent security strategy. The paper provides a high-level overview of security challenges for big data environments. From there we discuss security technologies available for the Hadoop ecosystem, and then sketch out a set of recommendations to secure big data clusters. Our recommendations map threats and compliance requirements directly to supporting technologies, to facilitate your selection process. We outline how these tactical responses work within the security architectures firms employ, tailoring their approaches to the tools and technical talent on hand.

Finally, we would like to thank Hortonworks and Vormetric for licensing this research. Without firms who appreciate our work enough to license our content, we could not bring you quality research for free! We hope you find this research helpful in understanding big data and its associated security challenges. You can download a free copy of the white paper from our research library, or grab a copy directly: Securing Hadoop: Security Recommendations for Hadoop Environments (PDF).


Resilient Cloud Network Architectures: Fundamentals

As much as we like to believe we have evolved as a species, people continue to be scared of things they don’t understand. Yes, many organizations have embraced the cloud whole hog and are rushing headlong into the cloud age. But it’s a big world, and millions of others remain paralyzed – not really understanding cloud computing, and taking the general approach that it can’t be secure because, well, it just can’t. Or it’s too new. Or for some other unfounded and incorrect reason. Kind of like when folks insisted that the Earth was the center of the universe.

This blog series builds on our recent Pragmatic Security for Cloud and Hybrid Networks paper, focusing on cloud-native network architectures that provide security and availability in ways you cannot accomplish in a traditional data center. This evolution will take place over the next decade, and organizations will need to support hybrid networks for some time. But for those ready, willing, and able to step forward into the future today, the cloud is waiting to break the traditional rules of how technology has been developed, deployed, scaled, and managed. We have been aggressive in proselytizing our belief that the move towards the cloud is the single biggest disruption in technology for the next few decades. Yes, even bigger than the move from mainframes to client/server (we’re old – we know). So our Resilient Cloud Network Architectures series will provide the basics of cloud network security, with a few design patterns to illustrate.

We would like to thank Resilient Systems for provisionally agreeing to license the content in this paper. As always, we’ll build the content using our Totally Transparent Research methodology, meaning we will post everything to the blog first and allow you (our readers) to poke holes in it. Once it has been sufficiently prodded, we will publish a paper for your reference.

Defining Resilient

If we bust out the old dictionary to define resilient, we get:

• able to become strong, healthy, or successful again after something bad happens
• able to return to an original shape after being pulled, stretched, pressed, bent, etc.

In the context of computing, you want to deploy technology that can not only become strong again, but resist attack in the first place. Recoverability is also key: if something bad happens you want to restore service quickly, if it causes an outage at all. For network architecture we always fall back on the cloud computing credo: design for failure. A resilient network architecture both makes it harder to compromise an application and minimizes downtime in case of an issue. Key aspects of cloud computing which provide security and availability include:

• Network Isolation: Using the inherent ability of the cloud to restrict connections (via software firewalls, which are called security groups and described below), you can build a network architecture that fully isolates the different tiers of an application stack. That prevents a compromise in one application (or database) from leaking or attacking information stored in another.

• Account Isolation: Another important feature of the cloud is the ability to use multiple accounts per application. Each of your different environments (Dev, Test, Production, Logging, etc.) can use different accounts, which provides valuable isolation, because you cannot access cloud infrastructure across accounts without explicit authorization.

• Immutability: An immutable server is one that is never logged into or changed in production. In cloud-native DevOps environments servers are deployed in auto-scale groups based on standard images. This prevents human error and configuration drift from creating exploitation paths. You take a new known-good state, and completely replace older images in production. No more patching and no more logging into servers. (The sketch after this list shows roughly what that looks like in practice.)

• Regions: You could build multiple data centers around the world to provide redundancy. But that’s not a cheap option, and rarely feasible. To do the same thing in the cloud, you basically just replicate an entire environment in a different region via an API call or a couple clicks in a cloud console. Regions are available all over the world, with multiple availability zones within each, to further minimize single points of failure. You can load balance between zones and regions, leveraging auto-scaling to keep your infrastructure running the same images in real time. We will explain this design pattern in our next post.
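As a minimal sketch of an immutable update (assuming AWS and boto3; all names and the AMI ID are hypothetical placeholders), rolling out a patched image means registering a new launch configuration and letting the auto-scale group replace instances, rather than logging in to patch anything:

    # Immutable update sketch: swap the auto-scale group's image instead of
    # patching running servers. Names and AMI ID are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # 1. Register a launch configuration built from the new known-good image.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc-v2",
        ImageId="ami-0123456789abcdef0",  # freshly hardened image
        InstanceType="t2.medium",
        SecurityGroups=["sg-webtier0001"],
    )

    # 2. Point the group at it; nothing in production gets logged into.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc-v2",
    )

    # 3. Recycle old instances (in practice, in batches to avoid downtime);
    #    the group replaces each with one built from the new image.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["web-asg"])["AutoScalingGroups"][0]
    for instance in group["Instances"]:
        autoscaling.terminate_instance_in_auto_scaling_group(
            InstanceId=instance["InstanceId"],
            ShouldDecrementDesiredCapacity=False,
        )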
The key takeaway is that cloud computing provides architectural options which are either impossible or economically infeasible in a traditional data center, to provide greater protection and availability. In this series we will describe the fundamentals of cloud networking for context, and then dig into design patterns which provide both security and availability – which together we define as resilience.

Understanding Cloud Networks

The key difference between a network in your data center and one in the cloud is that cloud customers never access the ‘real’ network or hardware. Cloud computing uses virtual networks to abstract the networks you see and manage from the (invisible) underlying physical resources. When your server gets IP address 10.0.1.12, that IP address does not exist on routing hardware – it’s a virtual address on a virtual network. Everything is handled in software.

Cloud networking varies across cloud providers, but differs from traditional networks in visibility, management, and velocity of change. You cannot tap into a cloud provider’s virtual network, so you’ll need to think differently about monitoring your networks. Additionally, cloud networks are typically managed via scripts or programs making Application Programming Interface (API) calls, rather than through a graphical console or command line. That enables developers to do pretty much anything, including standing up networks and reconfiguring them – instantly, via code. Finally, cloud networks change much faster than physical networks, because cloud environments themselves change faster, including spinning up and shutting down servers via automation. So traditional workflows to govern network change don’t really map to your cloud network. It can be confusing, because cloud networks look like traditional networks, with their own routing tables and firewalls. But looks are deceiving – although familiar constructs have been carried over, there are fundamental differences.

Cloud Network Architectures

In order to choose the right solution to address your requirements, you need to understand the types of cloud network


Shadow Devices: The Exponentially Expanding Attack Surface [New Series]

One of the challenges of being security professionals for decades is that we actually remember the olden days. You remember, when Internet-connected devices were PCs; then we got fancy and started issuing laptops. That’s what was connected to our networks. If you recall, life was simpler then. But we don’t have much time for nostalgia. We are too busy getting a handle on the explosion of devices connected to our networks, accessing our data. Here is just a smattering of what we see:

• Mobile devices: Supporting smartphones and tablets seems like old news, mostly because you can’t remember a time when they weren’t on your network. But despite their short history, their impact on mobile networking and security cannot be overstated. What’s more challenging is how these devices can connect directly to the cellular data network, which gives them a path around your security controls.

• BYOD: Then someone decided it would be cheaper to have employees use their own devices, and Bring Your Own Device (BYOD) became a thing. You can have employees sign paperwork giving you the ability to control their devices and install software, but in practice they get (justifiably) very cranky when they cannot do something on their personal devices. So balancing the need to protect corporate data against antagonizing employees has been challenging.

• Other office devices: Printers and scanners have been networked for years. But as more sophisticated imaging devices emerged, we realized their on-board computers and storage were insecure. They became targets – attacker beachheads.

• Physical security devices: The new generation of physical security devices (cameras, access card readers, etc.) is largely network connected. It’s great that you can grant access to a locked-out employee from your iPhone on the golf course, but much less fun when attackers grant themselves access.

• Control systems and manufacturing equipment: The connected revolution has made its way to shop floors and facilities areas as well. Whether it’s a sensor collecting information from factory robots or warehousing systems, these devices are networked too, so they can be attacked. You may have heard of Stuxnet, which targeted centrifuge control systems. Yep, that’s what we’re talking about.

• Healthcare devices: If you go into any healthcare facility nowadays, monitoring devices and even some treatment devices are managed through network connections. There are jokes to be made about who cares if someone takes over a shop floor robot. But if medical devices are attacked, the ramifications are significantly more severe.

• Connected home: Whether it’s a thermostat, security system, or home automation platform – the expectation is that you will manage it from wherever you are. That means a network connection and access to the Intertubes. What could possibly go wrong?

• Cars: Automobiles can now use either your smartphone connection or their own cellular link to connect to the Internet for traffic, music, news, and other services. They can transmit diagnostic information as well. All cool and shiny, but recent stunt hacking has proven a moving automobile can be attacked and controlled remotely. Again, what’s to worry?

There will be billions of devices connected to the Internet over the next few years. They all present attack surface. And you cannot fully know what is exploitable in your environment, because you don’t know about all your devices. The industry wants to dump all these devices into a generic Internet of Things (IoT) bucket, because IoT is the buzzword du jour – the latest Chicken Little poised to bring down the sky. It turns out the sky has already fallen: networks are already too vast to fully protect. The problem is getting worse by the day, as pretty much anything with a chip in it gets networked. So instead of a manageable environment, you need to protect Everything Internet. Anything with a network address can be attacked. Fortunately better fundamental architectures (especially for mobile devices) make new devices harder to compromise than traditional PCs (whew!), but sophisticated attackers don’t seem to have trouble compromising any device they can reach. And that says nothing of devices whose vendors have paid little or no attention to security to date. Healthcare and control system vendors, we’re looking at you! They have porous defenses, if any, and once attackers gain presence on the network, they have a bridgehead to work their way to their real targets.

In the Shadows

So what? You don’t even have medical devices or control systems – why would you care? The sad fact is that what you don’t see can hurt you. Your entire security program has been built to protect what you can see with traditional discovery and scanning technologies. The industry has maintained a very limited concept of what you should be looking for – largely because that’s all security scanners could see. The current state of affairs is that you run scans every so often and see new devices emerge. You test them for configuration issues and vulnerabilities, and then you add those issues to the end of an endless list of things you’ll never have time to finish.
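To make that concrete, here is a toy sketch of the kind of periodic discovery scan described above – a naive TCP sweep of a single subnet (the address range and probe ports are hypothetical). Note what it inherently misses: anything that doesn’t answer on the probed ports, anything on a segment you didn’t sweep, and anything that phones home over cellular instead of your network.

    # Toy subnet sweep: the periodic discovery scan described above.
    # Illustrative only -- real scanners also use ARP, SNMP, and passive
    # monitoring, and this naive TCP probe misses any device that doesn't
    # listen on the probed ports.
    import socket
    from concurrent.futures import ThreadPoolExecutor

    SUBNET = "10.0.1."           # hypothetical /24 to sweep
    PROBE_PORTS = (22, 80, 443)  # common ports; many devices listen elsewhere

    def probe(host):
        """Return the host if any probe port accepts a TCP connection."""
        for port in PROBE_PORTS:
            try:
                with socket.create_connection((host, port), timeout=0.5):
                    return host
            except OSError:
                continue
        return None

    def sweep():
        hosts = [SUBNET + str(i) for i in range(1, 255)]
        with ThreadPoolExecutor(max_workers=64) as pool:
            return [h for h in pool.map(probe, hosts) if h]

    if __name__ == "__main__":
        for device in sweep():
            print("responding device:", device)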
Unfortunately those visible devices are only a portion of the network-connected devices in your environment. There are hundreds, if not thousands, of other devices you don’t know about on your network. You don’t scan them periodically, and you have no idea about their security posture. Each of them can be attacked, and may provide an adversary a presence in your environment. Your attack surface is much larger than you thought. These shadow devices are infrequently discussed, and rarely factored into discovery and protection programs. It’s a big Don’t Ask, Don’t Tell approach, which never seems to work out well in the end.

We haven’t yet published anything on IoT devices (or Everything Internet), but it’s time. Not because we currently see many attacks in the wild, but because most organizations we talk to are unprepared for when an attack happens, so they will scramble – as usual. We have espoused a visibility, then control approach to security for over a decade. Now it’s time to get a handle on visibility of all devices on your network, so when you need to, you will know what you have to control. And how to control


Incite 3/23/2016: The Madness

I’m not sure why I do it, but every year I fill out brackets for the annual NCAA Men’s College Basketball Tournament. In all the years I have been doing brackets, I have won once. And it wasn’t a huge pool. It was a small pool in my office, back when I used to work in an office, so the winnings probably didn’t even amount to a decent dinner at Fuddrucker’s. I won’t add up all my spending and compare it against my winnings, because I don’t need a PhD in math to determine that I am way below the waterline. Like anyone who questions everything, I should be asking myself why I continue to play. I’m not going to win – I don’t even follow NCAA basketball. I’d have better luck throwing darts at the wall. So clearly it’s not a money-making endeavor. I guess I could ask the same question about why I sit in front of a Wheel of Fortune slot machine in a casino. Or why I buy PowerBall tickets when the pot goes above $200MM. I understand statistics – I know I’m not going to win at slots (over time) or the lottery (ever).

They call the NCAA tournament March Madness – perhaps because most people get mad when their brackets blow up on the second day of the tournament, when the team they picked to win it all loses to a 15 seed. Or does that just happen to me? But I wasn’t mad. I laughed, because 25% of all brackets had Michigan State winning the tournament. And they were all as busted as mine.

These are rhetorical questions. I play a few NCAA tournament brackets every year because it’s fun. I get to talk smack to college buddies about their idiotic picks. I play the slots because my heart races when I spin the wheel to see whether I got 35 points or 1,000. I play the lottery because it gives me a chance to dream. What would I do with $200MM? I’d do the same thing I’m doing now. I’d write. I’d sit in Starbucks, drink coffee, and people-watch, while pretending to write. I’d speak in front of crowds. I’d explore and travel with my loved ones. I’d still play the brackets, because any excuse to talk smack to my buddies is worth the minimal donation. And I’d still play the lottery. And no, I’m not certifiable. I just know from statistics that I wouldn’t have any less chance of winning again just because I won before. Score 1 for math.

–Mike

Photo credit: “Now, that is a bracket!” from frankieleon

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF). The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

• Mar 16 – The Rugged vs. SecDevOps Smackdown
• Feb 17 – RSA Conference – The Good, Bad and Ugly
• Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
• Nov 16 – The Blame Game
• Nov 3 – Get Your Marshmallows
• Oct 19 – re:Invent Yourself (or else)
• Aug 12 – Karma
• July 13 – Living with the OPM Hack
• May 26 – We Don’t Know Sh–. You Don’t Know Sh–
• May 4 – RSAC wrap-up. Same as it ever was.
• March 31 – Using RSA
• March 16 – Cyber Cash Cow
• March 2 – Cyber vs. Terror (yeah, we went there)
• February 16 – Cyber!!!
• February 9 – It’s Not My Fault!

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

• Shadow Devices: The Exponentially Expanding Attack Surface
• Building a Vendor IT Risk Management Program: Program Structure; Understanding Vendor IT Risk
• Securing Hadoop: Architectural Security Issues; Architecture and Composition; Security Recommendations for NoSQL platforms
• SIEM Kung Fu: Getting Started and Sustaining Value; Advanced Use Cases; Fundamentals
• Building a Threat Intelligence Program: Success and Sharing; Using TI; Gathering TI; Introduction

Recently Published Papers

• Threat Detection Evolution
• Building Security into DevOps
• Pragmatic Security for Cloud and Hybrid Networks
• EMV Migration and the Changing Payments Landscape
• Applied Threat Intelligence
• Endpoint Defense: Essential Practices
• Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
• Security and Privacy on the Encrypted Network
• Monitoring the Hybrid Cloud
• Best Practices for AWS Security
• The Future of Security

Incite 4 U

Enough already: Encryption is a safeguard for data. It helps ensure data is used the way its owner intends. We work with a lot of firms – helping them protect data from rogue employees, hackers, malicious government entities, and whoever else may want to misuse their data. We try to avoid touching political topics on this blog, but the current attempt by US Government agencies to paint encryption as a terrorist tool is beyond absurd. They are effectively saying security is a danger, and that has really struck a nerve in the security community. Forget for a minute that the NSA already has all the data that moves on and off your cellphone, and that law enforcement already has the means to access the contents of iPhones without Apple’s assistance. And avoid wallowing in counter-examples where encryption aided freedom, or illustrations of misuse of power to inspire fear in the opposite direction. These arguments devolve into pig-wrestling – only the pig enjoys that sort of thing. As Rich explained in Do We Have


Summary: Who pays who?

Adrian here…

Apple buying space on Google’s cloud made news this week, as many people were surprised that Apple relies on others to provide cloud services – but they have been leveraging AWS and others for years. Our internal chat was alive with discussion about build vs. buy for different providers of cloud services. Perhaps a hundred or so companies have the scale to make a go of building from scratch at this point, and the odds of success for many of those are small. You need massive scale before the costs make it worth building your own – especially the custom engineering required to get equivalent hardware margins. That leaves a handful of firms who can make a go of this, and it’s still not always clear whether they should. Even Apple buys others’ services, and it usually makes good economic sense.

We did not really talk about RSA Conference highlights, but the Rugged DevOps event (slides are up) was the highlight of RSAC week for me. The presentations were all thought-provoking. Concepts which were consistently reinforced included:

• Constantly test, constantly improve
• Without data you’re just another person with an opinion
• Don’t update; dispose and improve
• Micro-services and Docker containers are the basic building blocks for application development today

Micro-services make sense to me, and I have successfully used that design concept, but I have zero practical experience with Docker. Which is a shocker, because it’s freakin’ everywhere, but I have never yet taken the time to learn it. That stops this week. AWS and Azure both support it, and it’s embedded into Big Data frameworks as well, so it’s everywhere I want to be. I saw two vendor presentations on security concerns around Docker deployment models, and yeah, it scares me a bit. But Docker addresses the basic demand for easy updates, packaging, and accelerated deployment, so it stays. Security will iterate improvements to the model over time, as we usually do. DevOps doesn’t fix everything. That’s not me being a security curmudgeon – it’s me being excited by new technologies that let me get work done faster.

• Amazon’s CTO wants to make it impossible for anyone else to access your data – including him. And no, Werner does not have Captain Crunch stuck to his beard.
• Cisco Acquires CliQr For $260M – basically software defined cloud management.
• The Dangers of Docker.sock
• Get Ready for Docker’s 3rd Birthday!
• Docker may be the dumbest thing you do today
• RightScale 2016 State of the Cloud Report (registration required)
• We protect the wrong things and we slow everything down
• Have you heard of Google’s Project Loon? No? Then how about Microsoft’s version: Pegasus II. It’s IoT meets cloud.
• Data Lakes – no longer just a marketing buzzword.


Building a Vendor IT Risk Management Program: Program Structure

As we started exploring when we began Building a Vendor IT Risk Management Program, modern integrated business processes have dramatically expanded the attack surface of pretty much every organization. You can no longer ignore the risk presented by vendors or other business partners, even without regulatory bodies pushing for formal risk management of vendors and third parties. As security program fanatics, we figure it’s time to start documenting such a program.

Defining a Program

First, we have never really defined what we mean by a security program. Our bad. So let’s get that down, and then we can tailor it to vendor IT risk management. The first thing a program needs to be is systematic, which means you don’t do things willy-nilly. You plan the work and then work the plan. The processes involved in the program need to be predictable and repeatable. Well, as predictable as anything in security can be. Here are some other hallmarks of a program:

• Executive Sponsorship: Our research shows a program has a much higher chance of success if there is an executive (not the CISO) who feels accountable for its success. Inevitably security involves changing processes, and maybe not doing things business or other IT groups want, because of excessive risk. Without empowerment to make those decisions and have them stick, most security programs die on the vine. A senior sponsor can break down walls and push through tough decisions, making the difference between success and failure.

• Funding: Regardless of which aspect of security you are trying to systematize, it costs money. This contributes to another key reason programs fail: lack of resources. We also see a lot of organizations kickstart new programs by just throwing new responsibilities at existing employees, with no additional compensation or backfill for their otherwise overflowing plates. That’s not sustainable, so a key aspect of program establishment is allocating money to the initiative.

• Governance: Who is responsible for operation of the program? Who makes decisions when it needs to evolve? What is the escalation path when someone doesn’t play nice or meet agreed-upon responsibilities? Without proper definition of responsibilities, and sufficient documentation so revisionist history isn’t a factor, the program won’t be sustainable. These roles need to be defined when the program is being formally established, because it’s much easier to make these decisions and get everyone on board before it goes live. If it does not go well people will run for cover, and if the program is a success everyone will want credit.

• Operations: This will vary greatly between different kinds of programs, but you need to define how you will achieve your program goals. This is the ‘how’ of the program – and don’t forget an ongoing feedback and improvement loop, so the program continues to evolve.

• Success criteria: In security this can be a bit slippery, but it’s hard to claim success without everyone agreeing on what success means. Spend some time during program establishment to focus on applicable metrics, and be clear about what success looks like. Of course you can change your definition once you get going and learn what is realistic and necessary, but if you fail to establish it up front, you will have a hard time showing value.

• Integration points: No program stands alone, so there will be integration points with other groups or functions within the organization. Maybe you need data feeds from the security monitoring group, or entitlements from the identity group. Maybe your program defines actions required from other groups. If the ultimate success of your program depends on other teams or functions within the organization (and it does, because security doesn’t stand alone), then making sure everyone is crystal clear about integration points and responsibilities from the beginning is critical.

The V(IT)RM Program

To tailor the generic structure above to vendor IT risk management, you need to go through the list, make some decisions, and get everyone on board. Sounds easy, right? Not so much, but doing this kind of work now will save you from buying Tums by the case once your program goes operational. We are not going to tell you exactly what governance and accountability need to look like for your program, because that is heavily dependent on your culture and organization. Just make sure someone is accountable, and operational responsibilities are defined. In some cases this kind of program resides within a business unit managing vendor relationships; other times it’s within a central risk management group, or somewhere else entirely. You need to figure out what will work in your environment.

One thing to pay close attention to, particularly for risk management, is contracts. You enter business agreements with vendors every day, so make sure the contract language reflects your program objectives. If you want to scan vendor environments for vulnerabilities, that needs to be in your contracts. If you want them to complete an extensive self-survey or provide a data center tour, that needs to be there too. If your contracts don’t include this kind of language, look at adding an addendum or forcing a contract overhaul at some point. That’s a decision for the business people running your vendors.

Defining Vendor Risk: The first key requirement of a vendor risk management program is actually defining categories in which to group your vendors. We will dig into this in our next post, but these categories form the basis for operation of the entire program. You will need to categorize both vendors and the risks they present, so you know what actions to take depending on the importance of the vendor and the type of risk – the toy sketch below shows the shape of such a mapping.
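A minimal sketch of that idea, with entirely hypothetical categories and actions (your program defines its own):

    # Toy vendor risk tiers: map (vendor category, risk rating) to a defined
    # action. Categories and actions are hypothetical examples only.
    RISK_ACTIONS = {
        ("critical", "high"):    "suspend network access pending remediation",
        ("critical", "medium"):  "require remediation plan within 30 days",
        ("critical", "low"):     "annual reassessment",
        ("commodity", "high"):   "escalate to business owner",
        ("commodity", "medium"): "notify vendor of findings",
        ("commodity", "low"):    "no action; reassess next cycle",
    }

    def next_action(vendor_category, risk_rating):
        """Look up the program's defined response for a vendor."""
        return RISK_ACTIONS.get((vendor_category, risk_rating),
                                "uncategorized: triage manually")

    print(next_action("critical", "high"))

The point is not the specific tiers, but that the response to each combination is decided up front, before the program goes live.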
Operations: How will you evaluate the risk posed by each vendor? Where will you get the information, and how will you analyze it? Do you reward organizations for top-tier security? What happens when a vendor is a flaming pile of IT security failure? Will you just talk to them and inform them of the issues? Will you lock them out of your systems? It will be controversial if you take a vendor offline, so you need to have had all these


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.