New Report: Pragmatic Security for Cloud and Hybrid Networks

This is one of those papers I’ve been wanting to write for a while. When I’m out working with clients, or teaching classes, we end up spending a ton of time on just how different networking is in the cloud, and how to manage it. On the surface we still see things like subnets and routing tables, but now everything is wired together in software, with layers of abstraction meant to look the same, but not really work the same. This paper covers the basics and even includes some sample diagrams for Microsoft Azure and Amazon Web Services, although the bulk of the paper is cloud-agnostic.

From the report:

Over the last few decades we have been refining our approach to network security. Find the boxes, find the wires connecting them, drop a few security boxes between them in the right spots, and move on. Sure, we continue to advance the state of the art in exactly what those security boxes do, and we constantly improve how we design networks and plug everything together, but overall change has been incremental. How we think about network security doesn’t change – just some of the particulars. Until you move to the cloud. While many of the fundamentals still apply, cloud computing releases us from the physical limitations of those boxes and wires by fully abstracting the network from the underlying resources. We move into entirely virtual networks, controlled by software and APIs, with very different rules. Things may look the same on the surface, but dig a little deeper and you quickly realize that network security for cloud computing requires a different mindset, different tools, and new fundamentals. Many of which change every time you switch cloud providers.

Special thanks to Algosec for licensing the research. As usual, everything was written completely independently using our Totally Transparent Research process. It’s only due to these licenses that we are able to give this research away for free.

The landing page for the paper is here. Direct download: Pragmatic Security for Cloud and Hybrid Networks (pdf)


Pragmatic Security for Cloud and Hybrid Networks: Design Patterns

This is the fifth post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series, [here for post two](https://securosis.com/blog/pragmatic-security-for-cloud-and-hybrid-networks-cloud-networking-101), post 3, post 4.

To finish off this research it’s time to show what some of this looks like. Here are some practical design patterns based on projects we have worked on. The examples are specific to Amazon Web Services and Microsoft Azure, rather than generic templates. Generic patterns are less detailed and harder to explain, and we would rather you understand what these look like in the real world.

Basic Public Network on Microsoft Azure

This is a simplified example of a public network on Azure. All the components run on Azure, with nothing in the enterprise data center, and no VPN connections. Management of all assets is over the Internet. We can’t show all the pieces and configuration settings in this diagram, so here are some specifics:

• The Internet Gateway is set in Azure by default (you don’t need to do anything). Azure also sets up default service endpoints for the management ports to manage your instances. These connections are direct to each instance and don’t run through the load balancer. They will (should) be limited to only your current IP address, and the ports are closed to the rest of the world.
• In this example we have a single public-facing subnet. Each instance gets a public IP address and domain name, but you can’t access anything that isn’t opened up with a defined service endpoint. Think of the endpoint as port forwarding, which it pretty much is.
• The service endpoint can point to the load balancer, which in turn is tied to the auto scale group. You set rules on instance health, performance, and availability; the load balancer and auto scale group provision and deprovision servers as needed, and handle routing. The IP addresses of the instances change as these updates take place.
• Network Security Groups (NSGs) restrict access to each instance. In Azure you can also apply them to subnets; in this case we would apply them on a per-server basis. Traffic would be restricted to whatever services are being provided by the application, and would deny traffic between instances on the same subnet (Azure allows such internal traffic by default, unlike Amazon). NSGs can also restrict traffic to the instances, locking it down to traffic from the load balancer only and thus disabling direct Internet access. Ideally you never need to log into the servers because they are in an auto scale group, so you can also disable all the management/administration ports.

There is more, but this pattern produces a hardened server, with no administrative traffic, protected with both Azure’s default protections and Network Security Groups. Note that on Azure you are often much better off using their PaaS offerings, such as web servers, instead of manually building infrastructure like this.

Basic Private Network on Amazon Web Services

Amazon works a bit differently than Azure (okay – much differently). This example is a Virtual Private Cloud (VPC, their name for a virtual network) that is completely private, without any Internet routing, connected to a data center through a VPN connection. This shows a class B network with two smaller subnets:

• In AWS you would place each subnet in a different Availability Zone (what we called a ‘zone’) for resilience in case one goes down – they are separate physical data centers.
• You configure the VPN gateway through the AWS console or API, and then configure the client side of the VPN connection on your own hardware. Amazon maintains the VPN gateway in AWS; you don’t directly touch or maintain it, but you do need to maintain everything on your side of the connection (and it needs to be a hardware VPN).
• You adjust the routing table on your internal network to send all traffic for the 10.0.0.0/16 network over the VPN connection to AWS. This is why it’s called a ‘virtual’ private cloud. Instances can’t see the Internet, but you have that gateway that’s Internet accessible.
• You also need to set your virtual routing table in AWS to send Internet traffic back through your corporate network if you want any of your assets to access the Internet for things like software updates. Sometimes you do, sometimes you don’t – we don’t judge.
• By default instances are protected with a Security Group that denies all inbound traffic and allows all outbound traffic. Unlike in Azure, instances on the same subnet can’t talk to each other. You cannot connect to them through the corporate network until you open them up.
• AWS Security Groups offer allow rules only. You cannot explicitly deny traffic – only open up allowed traffic. In Azure you create Service Endpoints to explicitly route traffic, then use Network Security Groups to allow or deny on top of that (within the virtual network). AWS uses security groups for both functions – opening a security group allows traffic through the private IP (or public IP if it is public facing).
• Our example uses no ACLs, but you could put an ACL in place to block the two subnets from talking to each other. ACLs in AWS are there by default, but allow all traffic. An ACL in AWS is not stateful, so you need to create rules for all bidirectional traffic. ACLs in AWS work better as a deny mechanism.

A public network on AWS looks relatively similar to our Azure sample (which we designed to look similar). The key differences are how security groups and service endpoints function.

Hybrid Cloud on Azure

This builds on our previous examples. In this case the web servers and app servers are separated, with app servers on a private subnet. We already explained the components in our other examples, so there is only a little to add:

The key security control here is a Network Security Group
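To give a rough sense of how the basic private AWS pattern above translates into API calls, here is a minimal boto3 sketch. It is illustrative only: the region, Availability Zones, CIDR blocks, and corporate network range are placeholders rather than values from the paper, and tagging, error handling, and the customer-side VPN configuration are omitted.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Virtual network (VPC) sized like the class B example in the post
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Two smaller subnets in different Availability Zones for resilience
subnet_a = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                             AvailabilityZone="us-east-1a")
subnet_b = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                             AvailabilityZone="us-east-1b")

# AWS-managed VPN gateway; the client side still lives on your own hardware
vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw_id)

# Route traffic destined for the corporate network (placeholder CIDR) over the VPN.
# You could also point 0.0.0.0/0 at the VPN gateway if Internet traffic should
# hairpin back through the corporate network, as described above.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="192.168.0.0/16",
                 GatewayId=vgw_id)
```

As the post notes, the on-premise half of the connection (customer gateway, tunnel configuration, and your internal routing table entry for 10.0.0.0/16) still has to be set up on your own equipment.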


Pragmatic Security for Cloud and Hybrid Networks: Building Your Cloud Network Security Program

This is the fourth post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series, here for post two.

There is no single ‘best’ way to secure a cloud or hybrid network. Cloud computing is moving faster than any other technology in decades, with providers constantly struggling to out-innovate each other with new capabilities. You cannot lock yourself into any single architecture, but instead need to build out a program capable of handling diverse and dynamic needs. There are four major focus areas when building out this program:

• Start by understanding the key considerations for the cloud platform and application you are working with.
• Design the network and application architecture for security.
• Design your network security architecture, including additional security tools (if needed) and management components.
• Manage security operations for your cloud deployments – including everything from staffing to automation.

Understand Key Considerations

Building applications in the cloud is decidedly not the same as building them on traditional infrastructure. Sure, you can do it, but the odds are high something will break. Badly. As in “update that resume” breakage. To really see the benefits of cloud computing, applications must be designed specifically for the cloud – including security controls. For network security this means you need to keep a few key things in mind before you start mapping out security controls.

• Provider-specific limitations or advantages: All providers are different. Nothing is standard, and don’t expect it to ever become standard. One provider’s security group is another’s ACL. Some allow more granular management. There may be limits on the number of security rules available. A provider might offer both allow and deny rules, or allow only. Take the time to learn the ins and outs of your provider’s capabilities. They all offer plenty of documentation and training, and in our experience most organizations limit themselves to no more than one to three infrastructure providers, keeping the problem manageable.
• Application needs: Applications, especially those using the newer architectures we will mention in a moment, often have different needs than applications deployed on traditional infrastructure. For example, application components in your private network segment may still need Internet access to connect to a cloud component – such as storage, a message bus, or a database. These needs directly affect architectural decisions – both security and otherwise.
• New architectures: Cloud applications use different design patterns than apps on traditional infrastructure. For example, as previously mentioned, components are typically distributed across diverse network locations for resiliency, and tied tightly to cloud-based load balancers. Early cloud applications often emulated traditional architectures, but modern cloud applications make extensive use of advanced cloud features, particularly Platform as a Service, which may be deeply integrated into a particular cloud provider. Cloud-based databases, message queues, notification systems, storage, containers, and application platforms are all now common due to cost, performance, and agility benefits. You often cannot even control the network security of these services, which are instead fully managed by the cloud provider. Continuous deployment, DevOps, and immutable servers are the norm rather than exceptions. On the upside, used properly these architectures and patterns are far more secure, cost effective, resilient, and agile than building everything yourself, but you do need to understand how they work.

Data Analytics Design Pattern Example

A common data analytics design pattern highlights these differences (see the last section for a detailed example). Instead of keeping a running analytics pool and sending it data via SFTP, you start by loading data into cloud storage directly using an (encrypted) API call. This, using a feature of the cloud, triggers the launch of a pool of analytics servers and passes the job on to a message queue in the cloud. The message queue distributes the jobs to the analytics servers, which use a cloud-based notification service to signal when they are done, and the queue automatically redistributes failed jobs. Once it’s all done the results are stored in a cloud-based NoSQL database and the source files are archived. It’s similar to ‘normal’ data analytics, except everything is event-driven, using features and components of the cloud service. This model can handle as many concurrent jobs as you need, but you don’t have anything running or racking up charges until a job enters the system.

• Elasticity and a high rate of change are standard in the cloud: Beyond auto scaling, cloud applications tend to alter the infrastructure around them to maximize the benefits of cloud computing. For example, one of the best ways to update a cloud application is not to patch servers, but instead to create an entirely new installation of the app, based on a template, running in parallel, and then to switch traffic over from the current version. This breaks familiar security approaches, including relying on IP addresses for server identification, vulnerability scanning, and logging. Server names and addresses are largely meaningless, and controls that aren’t adapted for cloud are liable to be useless.
• Managing and monitoring security changes: You either need to learn how to manage cloud security using the provider’s console and APIs, or choose security tools that integrate directly. This may become especially complex if you need to normalize security between your data center and cloud provider when building a hybrid cloud. Additionally, few cloud providers offer good tools to track security changes over time, so you will need to track them yourself or use a third-party tool (a minimal sketch of tracking changes yourself appears at the end of this excerpt).

Design the Network Architecture

Unlike traditional networks, security is built into cloud networks by default. Go to any major cloud provider, spin up a virtual network, launch a server, and the odds are very high it is already well-defended – with most or all access blocked by default. Because security and core networking are so intertwined, and every cloud application has its own virtual network (or networks), the first step toward security is to work with the application team and design it into the architecture. Here are some
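Since few providers offer good tooling to track security changes over time, one option is to snapshot and diff the rules yourself. The following is a minimal, hypothetical boto3 sketch of that idea for AWS security groups; the region is a placeholder, and a real implementation would persist snapshots somewhere durable and alert on differences.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region


def snapshot_security_groups():
    """Capture current security group rules so changes can be diffed over time."""
    groups = ec2.describe_security_groups()["SecurityGroups"]
    # Keep only the fields that matter for change tracking
    return {
        g["GroupId"]: {
            "name": g["GroupName"],
            "ingress": g.get("IpPermissions", []),
            "egress": g.get("IpPermissionsEgress", []),
        }
        for g in groups
    }


def diff_snapshots(old, new):
    """Return the group IDs whose rules changed, appeared, or disappeared."""
    return [gid for gid in set(old) | set(new) if old.get(gid) != new.get(gid)]


if __name__ == "__main__":
    # Dump the current snapshot; store it and compare against the next run
    print(json.dumps(snapshot_security_groups(), indent=2, default=str))
```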


Pragmatic Security for Cloud and Hybrid Networks: Network Security Controls

This is the third post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series, and here for post two.

Now that we’ve covered the basics of cloud networks, it’s time to focus on the available security controls. Keep in mind that all of this varies between providers, and that cloud computing is rapidly evolving – new capabilities are constantly appearing. These fundamentals give you the background to get started, but you will still need to learn the ins and outs of whatever platforms you work with.

What Cloud Providers Give You

Not to sound like a broken record (those round things your parents listened to… no, not the small shiny ones with lasers), but all providers are different. The following options are relatively common across providers, but not necessarily ubiquitous.

• Perimeter security is traditional network security that the provider totally manages, invisibly to the customer. Firewalls, IPS, etc. are used to protect the provider’s infrastructure. The customer doesn’t control any of it. PRO: It’s free, effective, and always there. CON: You don’t control any of it, and it’s only useful for stopping background attacks.
• Security groups – Think of this as a tag you can apply to a network interface/instance (or certain other cloud objects, like a database or load balancer) that applies an associated set of network security rules. Security groups combine the best of network and host firewalls: you get policies that can follow individual servers (or even network interfaces) like a host firewall, but you manage them like a network firewall, and protection is applied no matter what is running inside. You get the granularity of a host firewall with the manageability of a network firewall. These are critical to auto scaling – since you are now spreading your assets all over your virtual network, and instances appear and disappear on demand, you can’t rely on IP addresses to build your security rules. Here’s an example: you can create a “database” security group that only allows access to one specific database port, and only from instances inside a “web server” security group; only web servers in that group can talk to the database servers in that group (a minimal code sketch of this example appears at the end of this excerpt). Unlike with a network firewall, the database servers can’t talk to each other, since they aren’t in the web server group (remember, the rules are applied per server, not per subnet, although some providers support both). As new databases pop up, the right security is applied as long as they have the tag. Unlike host firewalls, you don’t need to log into servers to make changes; everything is much easier to manage. Not all providers use this term, but the concept of security rules as a policy set you can apply to instances is relatively consistent. Security groups do vary between providers. Amazon, for example, is default deny and only supports allow rules. Microsoft Azure allows rules that more closely resemble those of a traditional firewall, with both allow and block options. PRO: It’s free, and it works hand in hand with auto scaling and default deny. It’s very granular but also very easy to manage. It’s the core of cloud network security. CON: They are usually allow rules only (you can’t explicitly deny), they provide only basic firewalling, and you can’t manage them with the tools you are already used to.
• ACLs (Access Control Lists) – While security groups work on a per-instance (or object) level, ACLs restrict communications between subnets in your virtual network. Not all providers offer them, and they exist more to handle legacy network configurations (when you need a restriction that matches what you might have in your existing data center) than “modern” cloud architectures (which typically ignore or avoid them). In some cases you can use them to get around the limitations of security groups, depending on your provider. PRO: ACLs can isolate traffic between virtual network segments, and can create both allow and deny rules. CON: They’re not great for auto scaling and don’t apply to specific instances. You also lose some powerful granularity.

By default nearly all cloud providers launch your assets with default deny on all inbound traffic. Some might automatically open a management port from your current location (based on IP address), but that’s about it. Some providers may use the term ACL to describe what we called a security group. Sorry, it’s confusing, but blame the vendors, not your friendly neighborhood analysts.

Commercial Options

There are a number of add-ons you can buy through your cloud provider, or buy and run yourself.

• Physical security appliances: The provider will provision an old-school piece of hardware to protect your assets. These are mostly seen in VLAN-based providers and are considered pretty antiquated. They may also be used in private (on-premise) clouds, where you control and run the network yourself, which is out of scope for this research. PRO: They’re expensive, but they’re something you are used to managing. They keep your existing vendor happy? Look, it’s really all cons on this one, unless you’re a cloud provider – and in that case this paper isn’t for you.
• Virtual appliances are virtual machine versions of your friendly neighborhood security appliances, and must be configured and tuned for the cloud platform you are working on. They can provide more advanced security – such as IPS, WAF, or NGFW – than cloud providers typically offer. They’re also useful for capturing network traffic, which providers tend not to support. PRO: They enable more advanced network security, and you can manage them the same way you manage your on-premise versions of the tools. CON: Cost can be a concern, since they use resources like any other virtual server; they constrain your architectures; and they may not play well with auto scaling and other cloud-native features.
• Host security agents are software agents you build into your images that run in your
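To make the security group example above concrete, here is a minimal boto3 sketch (assuming AWS) of a “database” group that accepts its database port only from members of a “web server” group, referenced by group ID rather than IP address. The VPC ID, region, group names, and ports are placeholders, not values from the paper.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region
vpc_id = "vpc-0123456789abcdef0"                      # placeholder VPC

web = ec2.create_security_group(GroupName="web-servers",
                                Description="Web tier", VpcId=vpc_id)
db = ec2.create_security_group(GroupName="database",
                               Description="DB tier", VpcId=vpc_id)

# Web tier: allow HTTPS from anywhere (inbound is default deny until rules are added)
ec2.authorize_security_group_ingress(
    GroupId=web["GroupId"],
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Database tier: allow the database port only from members of the web group,
# referenced by group ID; new databases tagged with this group inherit the policy
ec2.authorize_security_group_ingress(
    GroupId=db["GroupId"],
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": web["GroupId"]}]}],
)
```

Because the database rule references the web group rather than an address range, database-to-database traffic is not allowed, matching the behavior described above.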


Pragmatic Security for Cloud and Hybrid Networks: Cloud Networking 101

This is the second post in a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. Click here for the first post in the series.

There isn’t one canonical cloud networking stack out there; each cloud service provider uses their own mix of technologies to wire everything up. Some of these might use known standards, tech, and frameworks, while others might be completely proprietary and so secret that you, as the customer, never know exactly what is going on under the hood. Building cloud scale networks is insanely complex, and the different providers clearly see networking capabilities as a competitive differentiator. So instead of trying to describe all the possible options, we’ll keep things at a relatively high level and focus on common building blocks we see relatively consistently across the different platforms.

Types of Cloud Networks

When you shop providers, cloud networks roughly fit into two buckets:

• Software Defined Networks (SDN), which fully decouple the virtual network from the underlying physical networking and routing.
• VLAN-based networks, which still rely on the underlying network for routing and lack the full customization of an SDN.

Most providers today offer full SDNs of different flavors, so we’ll focus more on those, but we do still encounter some VLAN architectures and need to cover them at a high level.

Software Defined Networks

As we mentioned, Software Defined Networks are a form of virtual networking that (usually) takes advantage of special features in routing hardware to fully abstract the virtual network you see from the underlying physical network. To your instance (virtual server) everything looks like a normal network. But instead of connecting to a normal network interface, it connects to a virtual network interface which handles everything in software.

SDNs don’t work the same as a physical network (or even an older virtual network). For example, in an SDN you can create two networks that use the same address spaces and run on the same physical hardware, but never see each other. You can create an entirely new subnet not by adding hardware, but with a single API call that “creates” the subnet in software.

How do they work? Ask your cloud provider. Amazon Web Services, for example, intercepts every packet, wraps it and tags it, and uses a custom mapping service to figure out where to actually send the packet over the physical network, with multiple security checks to ensure no customer ever sees someone else’s packet. (You can watch a video with great details at this link.) Your instance never sees the real network, and AWS skips a lot of normal networking (like ARP requests/caching) within the SDN itself.

SDN allows you to take all your networking hardware, abstract it, pool it together, and then allocate it however you want. On some cloud providers, for example, you can allocate an entire class B network with multiple subnets, routed to the Internet behind NAT, in just a few minutes or less. Different cloud providers use different underlying technologies, and further complicate things by offering different ways of managing the network.

Why make things so complicated? Actually, it makes management of your cloud network much easier, while allowing cloud providers to give customers a ton of flexibility to craft the virtual networks they need for different situations. The providers do the heavy lifting, and you, as the consumer, work in a simplified environment. Plus, it handles issues unique to the cloud, like provisioning network resources faster than existing hardware can handle configuration changes (a very real problem), or multiple customers needing the same private IP address ranges to better integrate with their existing applications.

Virtual LANs (VLANs)

Although they do not offer the same flexibility as SDNs, a few providers still rely on VLANs. Customers must evaluate their own needs, but VLAN-based cloud services should be considered outdated compared to SDN-based cloud services. VLANs let you create segmentation on the network, and can isolate and filter traffic – in effect just cutting off your own slice of the existing network rather than creating your own virtual environment. This means you can’t do SDN-level things like creating two networks on the same hardware with the same address range. Because VLANs are built into standard networking hardware, they used to be where most people started when building cloud computing – no special software was required. But customers on VLANs don’t get to control their addresses and routing very well, they can’t be trusted for security segmentation, and they scale and perform terribly when you plop a cloud on top of them. They are mostly being phased out of cloud computing due to these limitations.

Defining and Managing Cloud Networks

While we like to think of one big cloud out there, there is more than one kind of cloud network, and several technologies that support them. Each provides different features and presents different customization options. Management can also vary between vendors, but there are certain basic characteristics they all exhibit. Different providers use different terminology, so we’ve tried our best to pick terms that will make sense once you look at particular offerings.

Cloud Network Architectures

An understanding of the types of cloud network architectures, and the different technologies that enable them, is essential to fitting your needs to the right solution. There are two basic types of cloud network architectures.

Public cloud networks are Internet facing. You connect to your instances/servers via the public Internet, with no special routing needed; every instance has a
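A short boto3 sketch (assuming AWS as the provider) illustrates two of the SDN properties described above: overlapping address spaces that never see each other, and subnets “created” with a single API call. The region and CIDR blocks are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Two virtual networks with the *same* address space, running on shared
# hardware, that never see each other -- something a traditional physical
# network cannot do.
vpc_one = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_two = ec2.create_vpc(CidrBlock="10.0.0.0/16")

# "Creating" a new subnet is a single API call, not a hardware change
ec2.create_subnet(VpcId=vpc_one["Vpc"]["VpcId"], CidrBlock="10.0.1.0/24")
```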


Pragmatic Security for Cloud and Hybrid Networks: Introduction

This is the start of a new series I’m posting for public feedback, licensed by Algosec. Well, that is if they like it – we are sticking to our Totally Transparent Research policy. I’m also live-writing the content on GitHub if you want to provide any feedback or suggestions. With that, here’s the content…

For a few decades we have been refining our approach to network security. Find the boxes, find the wires connecting them, drop a few security boxes between them in the right spots, and move on. Sure, we continue to advance the state of the art in exactly what those security boxes do, and we constantly improve how we design networks and plug everything together, but overall change has been incremental. How we think about network security doesn’t change – just some of the particulars.

Until you move to the cloud.

While many of the fundamentals still apply, cloud computing releases us from the physical limitations of those boxes and wires by fully abstracting the network from the underlying resources. We move into entirely virtual networks, controlled by software and APIs, with very different rules. Things may look the same on the surface, but dig a little deeper and you quickly realize that network security for cloud computing requires a different mindset, different tools, and new fundamentals. Many of which change every time you switch cloud providers.

The challenge of cloud computing and network security

Cloud networks don’t run magically on pixie dust, rainbows, and unicorns – they rely on the same old physical network components we are used to. The key difference is that cloud customers never access the ‘real’ network or hardware. Instead they work inside virtual constructs – that’s the nature of the cloud.

Cloud computing uses virtual networks by default. The network your servers and resources see is abstracted from the underlying physical resources. When your server gets IP address 10.0.0.12, that isn’t really that IP address on the routing hardware – it’s a virtual IP address on a virtual network. Everything is handled in software, and most of these virtual networks are Software Defined Networks (SDN). We will go over SDN in more depth in the next section.

These networks vary across cloud providers, but they are all fundamentally different from traditional networks in a few key ways:

• Virtual networks don’t provide the same visibility as physical networks, because packets don’t move around the same way. We can’t plug a wire into the network to grab all the traffic – there is no location all traffic traverses, and much of the traffic is wrapped and encrypted anyway.
• Cloud networks are managed via Application Programming Interfaces – not by logging in and provisioning hardware the old-fashioned way. A developer has the power to stand up an entire class B network, completely destroy an entire subnet, or add a network interface to a server and bridge it to an entirely different subnet on a different cloud account, all within minutes with a few API calls.
• Cloud networks change faster than physical networks, and constantly. It isn’t unusual for a cloud application to launch and destroy dozens of servers in under an hour – faster than traditional security and network tools can track – or even build and destroy entire networks just for testing.
• Cloud networks look like traditional networks, but aren’t. Cloud providers tend to give you things that look like routing tables and firewalls, but they don’t work quite like your normal routing tables and firewalls. It is important to know the differences.

Don’t worry – the differences make a lot of sense once you start digging in, and most of them provide better security that’s more accessible than on a physical network, so long as you know how to manage them.

The role of hybrid networks

A hybrid network bridges your existing network into your cloud provider. If, for example, you want to connect a cloud application to your existing database, you can connect your physical network to the virtual network in your cloud. Hybrid networks are extremely common, especially as traditional enterprises begin migrating to cloud computing and need to mix and match resources instead of building everything from scratch. One popular example is setting up big data analytics in your cloud provider, where you only pay for processing and storage time, so you don’t need to buy a bunch of servers you will only use once a quarter.

But hybrid networks complicate management, both in your data center and in the cloud. Each side uses a different basic configuration and security controls, so the challenge is to maintain consistency across both, even though the tools you use – such as your nifty next generation firewall – might not work the same (if at all) in both environments.

This paper will explain how cloud network security is different, and how to pragmatically manage it for both pure cloud and hybrid cloud networks. We will start with some background material and cloud networking 101, then move into cloud network security controls, and specific recommendations on how to use them. It is written for readers with a basic background in networking, but if you made it this far you’ll be fine.
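To give a feel for the “managed via APIs” point above, here is a minimal boto3 sketch (assuming AWS) that stands up a class B sized virtual network and tears it down again in a handful of calls. Everything here is a placeholder example, not a recommended configuration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Stand up a class B sized network and a subnet with two API calls...
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id,
                              CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]

# ...and destroy it just as quickly -- faster than most traditional
# security and network tools can track.
ec2.delete_subnet(SubnetId=subnet_id)
ec2.delete_vpc(VpcId=vpc_id)
```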


Friday Summary: Customer Service

Rich here.

A few things this week got me thinking about customer service. For whatever reason, I have always thought the best business decision is to put the needs of the customer first, then build your business model around that. I’m enough of a realist to know that isn’t always possible, but combine that with “don’t make it hard for people to give you money” and you sure tilt the odds in your favor.

First is the obvious negative example of Oracle’s CISO’s blog post. It was a thinly-veiled legal threat to customers performing code assessments of Oracle products, arguing this is a violation of Oracle’s EULA and Oracle can sue them. I get it. That is well within their legal rights. And really, the threat was likely more directed towards Veracode, via mutual customers as a proxy. Why do customers assess Oracle’s code? Because they don’t trust Oracle – why else? It isn’t like these assessments are free. That is a pretty good indicator of a problem – at least of customers perceiving a problem. Threatening independent security researchers? Okay, dumb move, but nothing new there. Threatening, sorry ‘reminding’, your customers in an open blog post (since removed)? I suppose that’s technically putting the customer first, but not quite what I meant.

On the other side is a company like Slack. I get periodic emails from them saying they detected our usage dropped, and they are reducing our bill. That’s right – they have an automated system to determine stale accounts and not bill you for them. Or Amazon Web Services, where my sales team (yes, they exist) sends me a periodic report on usage and how to reduce my costs through different techniques or services. We’re getting warmer. Fitbit replaces lost trackers for free. The Apple Genius Bar. The free group runs, training programs, yoga, and discounts at our local Fleet Feet running store. There are plenty of examples, but let’s be honest – the enterprise tech industry isn’t usually on the list.

I had two calls today with a client I have been doing project work with. I didn’t bill them for it, and those calls themselves aren’t tied to any prospective projects. But the client needs help, the cost to me is relatively low, and I know it will come back later when they sign up for another big project. Trust me, we still have our lines (sorry, investment firms, no more freebies if we have never worked together), but in every business I’ve ever run those little helpful moments add up and pay off in the end.

Want some practical examples in the security industry? Adjusting pricing models for elastic clouds. Using soft service limits, so when you accidentally scan that one extra server on the network you don’t lock down the product – you get a warning and an opportunity to up your license. Putting people on the support desk who know what the hell they are talking about. Paying attention to the product’s user experience – not merely focusing on one pretty dashboard to impress the CIO in the sales meeting. Improving provisioning so your product is actually relatively easy to install, instead of hacking together a bunch of scripts and crappy documentation. We make security a lot harder on customers than it needs to be. That makes exceptions all the more magical.

(In other news, go watch Mr. Robot. If you work in this industry, it’s like a documentary.)

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Rich quoted at PC World on Dropbox adding FIDO key support.
• Mike over at CSO Online on security spending focus.
• Rich in the Wall St. Journal on Apple and Google taking different approaches to smart agents like Siri and Google Now.
• Yep, Rich keeps press whoring with comments on Black Hat. It never ends.
• You know who on some Apple vulnerabilities at the Guardian.
• And lastly, one Rich actually wrote for TidBITS about that crappy Wired article on the Thunderstrike 2 worm.

Favorite Securosis Posts
• Mike Rothman: Firestarter: Karma – You M.A.D., bro? It seems the entire security industry is, and justifiably so. Oracle = tone deaf.
• Rich: Incite 8/12/2015: Transitions. My kids are about a decade behind Mike’s, just entering kindergarten and first grade, but it’s all the same.

Other Securosis Posts
• Incite 7/29/2015: Finding My Cause.
• Building a Threat Intelligence Program: Gathering TI.
• EMV and the Changing Payment Space: Mobile Payment.
• EMV and the Changing Payment Space: Systemic Tokenization.
• EMV and the Changing Payment Space: The Liability Shift.
• Building a Threat Intelligence Program [New Series].
• EMV and the Changing Payment Space: Migration.

Favorite Outside Posts
• Mike: Gossip to Grown Up: How Intelligence Sharing Developed – Awesome post on the RSAC blog by Wendy about the history and future of TI. The key issue is “getting trust to scale”.
• Rich: How Hackers Steal Data From Websites. Oh, my. The Onion has us dead to rights.

Research Reports and Presentations
• Endpoint Defense: Essential Practices.
• Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
• Security and Privacy on the Encrypted Network.
• Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
• Security Best Practices for Amazon Web Services.
• Securing Enterprise Applications.
• Secure Agile Development.
• Trends in Data Centric Security White Paper.
• Leveraging Threat Intelligence in Incident Response/Management.
• Pragmatic WAF Management: Giving Web Apps a Fighting Chance.

Top News and Posts
• No, You Really Can’t (Mary Ann Davidson Blog). In case you missed it, here’s the archive. Fun, eh?
• Oracle’s security chief made a big gaffe in a now-deleted blog post. More on the story.
• Software Security: On the Wrong Side of History. Chris Wysopal of Veracode responds. Guess who used to be one of their advisors? Popcorn ensues.
• Cisco Warns Customers About Attacks Installing Malicious IOS Bootstrap Images.
• Researchers reveal electronic car lock hack after 2-year injunction by Volkswagen.
• Stagefright: new Android vulnerability dubbed ‘heartbleed for mobile’.
• Stagefright Patch Incomplete Leaving Android Devices Still Exposed. Friends don’t let friends…
• Hack-Fueled ‘Unprecedented’ Insider Trading Ring Nets $100M.


MAD Karma

Way back in 2004 Rich wrote an article over at Gartner on the serious issues plaguing Oracle product security. The original piece is long gone, but here is an article about it. It led to a moderately serious political showdown, Rich flying out to meet with Oracle execs, and eventually their move to a quarterly patch update cycle (due more to the botched patch than to Rich’s article).

This week Oracle’s 25-year-veteran CISO Mary Ann Davidson published a blog post decrying customer security assessments of their products. Actually, she threatened legal action over evaluation of Oracle products using tools that look at application code. Then she belittled security researchers (for crying wolf, not understanding what they are talking about, and wasting everybody’s time – especially her team’s), told everyone to trust Oracle because they find nearly all the bugs anyway (not that they seem to patch them in a timely fashion), and… you get it. Then, and this is the best part, Oracle pulled the post and basically issued an apology. Which never happens.

So you probably don’t need us to tell you what this Firestarter is about. The short version is that the attitudes and positions expressed in her post closely match Rich’s experiences with Oracle and Mary Ann over a decade ago. Yeah, this is a fun one.


Summary: Community

Rich here.

I’m going to pull an Adrian this week and cover a few unrelated things. Nope, no secret tie-in at the end, just some interesting things that have hit over the past couple weeks, since I last wrote a Summary.

We are absolutely blowing out the registration for this year’s cloud security training at Black Hat. I believe we will be the best-selling class at Black Hat for the second year in a row. And better yet, all my prep work is done already, which has never happened before. Bigger isn’t necessarily better when it comes to training, so we are pulling out all the stops. We have a custom room configuration and extra-special networking so we can split the class apart as needed to cover different student experience levels. James Arlen and I also built a mix of labs (we are even introducing Azure for the first time) to cover not only different skill levels, but different foci (network security, developers, etc.). For the larger class we also have two extra instructors who are only there to wander the room and help people out (Mike and Adrian).

Switching my brain around from coding and building labs to regular Securosis work can be tough. Writing prose takes a different mindset than writing code and technical work, and switching is a bit more difficult than I like. It’s actually easier for me to swap from prose to code than the other way around.

This is my first week back in Phoenix after our annual multi-week family excursion back to Boulder. This trip, more than many others, reminded me a bit of my roots and who I am. Two major events occurred.

First was the OPM hack, and the fact that my data was lost. The disaster response team I’m still a part of is based out of Colorado and part of the federal government. I don’t have a security clearance, but I still had to fill out one of the security forms that are now backed up, maybe in China. Yes, just to be an EMT and drive a truck. I spoke for an hour at our team meeting and did my best to bring our world of cybersecurity to a group of medical professionals who suddenly find themselves caught up in the Big Game. To provide some understanding of what’s going on, why not to trust everything they hear, and how to understand the impact this will have on them for the rest of their lives. Because it sure won’t be over in 18 months when the credit monitoring term ends (which they won’t even need if it was a foreign adversary). This situation isn’t fair. These are volunteers willing to put themselves at physical risk, but they never signed up for the intangible but very real risks created by the OPM.

A few days before that meeting an air medical helicopter crashed. The pilot was killed, and a crew member badly injured. I didn’t know them well (barely at all), but I had worked with both of them. I may have flown with the pilot. I debated mentioning this at all, since it really had nothing to do with me. I’m barely a part of that community any more, although I did spend over 15 years in it. Public safety, like any profession, can be a small world, especially as we all bounced around different agencies and teams in the mountains of Colorado. I suppose it hits home more when it’s someone in your tribe, even if you don’t have a direct personal relationship. I’m barely involved in emergency services any more, but it is still a very important part of my life and identity. Someday, maybe, life will free up enough that I can be more active again. I love what I do now, but, like the military, you can’t replace the kinds of bonds built when physical risk is involved.

For a short final note, I just started reading a Star Wars book for the first time in probably over 20 years. I’m incredibly excited for the new film, and all the new books and comics are now officially canon and part of the epic. The writing isn’t bad, but it really isn’t anything you want to read unless you are a huge Star Wars nerd. But I am, so I do.

There you go. Black Hat, rescue, and Star Wars. No linkage except me. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Rich at SearchSecurity on the needed death of Flash.
• Rich quoted in CSO by Ben Rothke on the role of the Cloud Security Architect.

Favorite Securosis Posts
• Mike: Firestarter: Living with the OPM. Rich has been affected by the OPM breach and that sucks. We discuss what it means for him.

Other Securosis Posts
• Incite 7/15/15 – On Top of the Worlds.
• Incite 7/1/2015: Explorers.
• New Series: EMV, Tokenization, and the Changing Payment Space.
• EMV and the Changing Payments Space: the Basics.
• Threat Detection: Analysis.
• Threat Detection Evolution: Quick Wins.

Favorite Outside Posts
• Mike: Why start-up rules don’t apply to security. VC Sam Myers points out that security is different than other tech markets. Right. But I’m not sure every security company needs to target the large enterprise to be successful.
• Adrian: Lowering Defenses to Increase Security. I like Mike King’s take, and bringing the human side into the security story. A good post and worth reading!
• Rich: FBI Director to Silicon Valley: ‘Try Harder’ to Find ‘Going Dark’ Solution. This isn’t my favorite, but it’s something I think everyone needs to read. The FBI director either wants us to invent magic, or is deliberately being disingenuous in an attempt to force political hands. Flip a coin.

Research Reports and Presentations
• Endpoint Defense: Essential Practices.
• Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
• Security and Privacy on the Encrypted Network.
• Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
• Security Best Practices for Amazon Web Services.
• Securing Enterprise Applications.
• Secure Agile Development.
• Trends in Data Centric Security


Living with the OPM Hack

And yep, thanks to his altruistic streak, even Rich is affected. We don’t spend much time on blame or history, but more on the personal impact. How do you move on once you know much of your most personal information is now out there, you don’t know who has it, and you don’t know how they might want to use it? Watch or listen:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.