Firewall Management Essentials: Managing Access Risk

We have discussed two of the three legs of comprehensive firewall management: a change management process and optimizing rules. Now let’s work through managing risk using the firewall. We need to define risk, because depending on your industry and philosophy, risk can mean many different things. For firewall management we are talking about the risk of unauthorized parties accessing sensitive resources. Obviously, if a device with critical data is inaccessible to internal and/or external attackers, the risk it presents is lower. This “access risk management” function involves understanding, first and foremost, the network’s topology and security controls. The ability to view attack paths provides a visual depiction of how an attacker could gain access to a device. With this information you can see which devices need remediation and/or network workarounds, and prioritize fixes. Another benefit of visualizing attack paths is understanding when changes to the network or security devices unintentionally expose additional attack surface.

So what does this have to do with your firewall? That’s a logical question, but a key firewall function is access control. You configure the firewall and its rule set to ensure that only authorized ports, protocols, applications, users, etc. have access to critical devices and applications within your network. A misconfigured firewall can have significant and severe consequences, as discussed in the last post. For example, years ago when supporting a set of email security devices, we got a call about an avalanche of spam hitting the mailboxes of key employees. The customer was not pleased, but the deployed email security gateway appeared to be working perfectly. Initially perplexed, one of our engineers checked the backup email server, and discovered it was open to Internet traffic due to a faulty firewall rule. Attackers were able to use the backup server as a mail relay and blast all the mailboxes in the enterprise. With some knowledge of network topology and the paths between external networks and internal devices, this issue could have been identified and remediated before any employees were troubled.

Key Access Risk Management Features

When examining the network and keeping track of attack paths, look for a few key features:

  • Topology monitoring: Topology can be determined actively, passively, or both. For active mapping you will want your firewall management tool to pull configurations from firewalls and other access control devices. You also need to account for routing tables, network interfaces, and address translation rules. Interoperating with passive monitoring tools (network behavioral analysis, etc.) can provide more continuous monitoring. You need the ability to determine whether and how any specific device can be accessed, and from where – both internally and externally.
  • Analysis horsepower: Accounting for all the possible paths through a network requires an n×(n−1) analysis, and n gets rather large for an enterprise network. The ability to re-analyze millions of paths on every topology change is critical for providing an accurate view. (A toy illustration of this kind of pairwise analysis appears at the end of this post.)
  • What if?: You will want to assess each possible change before it is made, to understand its impact on the network and attack surface. This enables the organization to detect additional risks posed by a change before committing it. In the example above, if that customer had a tool to help them understand that a firewall rule change would make their backup email server a sitting duck for attackers, they would have reconsidered.
  • Alternative rules: It is not always possible to remediate a specific device, due to operational issues. So to control risk you want a firewall management tool that can suggest appropriate rule changes or alternate network routes to isolate a vulnerable device and protect the network.

At this point it should be clear that all these firewall management functions depend on each other. Optimizing rules is part of the change management process, access risk management comes into play for every change, and vice versa. So although we have discussed these functions as distinct requirements, in reality you need all of them working together for operational excellence. In this series’ last post we will focus on getting a quick win with firewall management technology. We will discuss deployment architectures and integration with enterprise systems, and work through a deployment scenario to make many of these concepts a bit more tangible.
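To make the “analysis horsepower” point a bit more concrete, here is a minimal sketch of pairwise reachability analysis over a simplified topology. This is a toy model under stated assumptions – the segments, edges, and helper functions are all hypothetical, and a real tool would model routing tables, NAT, and full rule semantics rather than a simple adjacency map.

```python
from collections import deque
from itertools import permutations

# Hypothetical topology: segment -> segments it can pass traffic to,
# i.e. an edge exists only where no firewall rule blocks the flow.
TOPOLOGY = {
    "internet":      {"dmz-web"},
    "dmz-web":       {"app-server"},
    "dmz-mail":      {"internal-mail"},
    "app-server":    {"db-server"},
    "internal-mail": set(),
    "db-server":     set(),
}

def reachable(src, dst, topology):
    """Breadth-first search: can traffic from src eventually reach dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in topology.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# The n*(n-1) part: test every ordered pair of segments. This is why raw
# analysis horsepower matters on enterprise-scale networks.
baseline = {(s, d) for s, d in permutations(TOPOLOGY, 2)
            if reachable(s, d, TOPOLOGY)}

# "What if?" analysis: simulate a rule change before committing it.
proposed = {k: set(v) for k, v in TOPOLOGY.items()}
proposed["internet"].add("dmz-mail")  # the faulty backup-mail-server rule
added = {(s, d) for s, d in permutations(proposed, 2)
         if reachable(s, d, proposed)} - baseline
print("Attack surface added by the change:", sorted(added))
```

Even this toy version runs a full graph search per pair; real products rely on incremental analysis so a single topology change doesn’t force a complete recompute of millions of paths.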


Firewall Management Essentials: Optimizing Rules

Now that you have a solid, repeatable, and automated firewall change management process, it’s time to delve into the next major aspect of managing your firewalls: optimizing rules. Back in our introduction we talked about how firewall rule sets tend to resemble a closet over time. You have a ton of crap in there, most of which you don’t use, and whatever you do use is typically hard to get to. So you need to occasionally clean up and reorganize – getting rid of stuff you don’t need, making sure the stuff that’s still in there should be, and arranging things so you can easily access the stuff you use the most. But let’s drop the closet analogy and talk firewall specifics. You need to optimize rules for a variety of reasons:

  • Eliminate duplicate rules: When you have a lot of hands in the rule base, rules get duplicated – especially when the management process doesn’t require a search to make sure an overlapping rule doesn’t already exist.
  • Address conflicting rules: At times you may add a rule (such as ALLOW PORT 22) to address a short-term issue, even though other rules lock down that port or application. Depending on where the rule resides in the tree, the rules may conflict – either adding attack surface or breaking functionality.
  • Get rid of old and unused rules: If you don’t go back into the rule set every so often to ensure your rules are still relevant, you are bound to have rules that are no longer necessary, such as access to that old legacy mainframe application that was decommissioned 4 years ago. It is also useful to go back and confirm with each rule’s business owner that their application still needs the access, and that they accept responsibility for it.
  • Simplify the rule base: The more rules, the more complicated the rule base, and the more likely something will go wrong. By analyzing and optimizing rules periodically, you can find and remove unneeded complexity.
  • Improve performance: If you have frequently used rules at the bottom of the tree, the firewall needs to go through every preceding rule to reach them. That can bog down performance, so you want the most frequently hit rules as early as possible – without conflicting with other rules, of course.
  • Control network risk: Networks are very dynamic, so you need to ensure that every network or device configuration change doesn’t add attack surface, which would require a firewall rule change.

For all these reasons, going through the rule base on a regular basis is key to keeping firewalls running optimally. Every rule should be required to support the business, and optimally configured.

Key Firewall Management Rule Optimization Features

The specific features you should look for in a firewall management product or service map directly to the requirements above:

  • Centralized management: A huge benefit of more actively managing firewalls is the ability to enforce a consistent set of policies across all firewalls, regardless of vendor. So you need a scalable tool that supports all your devices, and provides a single authoritative source for firewall policies.
  • Rule change recommendations: If a firewall rule set gets complicated enough, it’s hard for any human – even your best security admin – to keep everything straight. So a tool should be able to mine the existing rule set (thousands of rules) to find and get rid of duplicate, hidden, unused, and expired rules. Tools should also assess the risk of rules, and flag those which allow too much access (you know: ANY ANY).
  • Optimize rule order: A key aspect of improving firewall performance is making sure the most-hit rules are closer to the top of the tree. The tool should track which rules are hit most often, through firewall log analysis, and suggest an ordering that optimizes performance without increasing exposure (see the sketch at the end of this post).
  • Simulate rule changes: Clever ideas can turn out badly if a change conflicts with other rules, or opens up (or closes) the wrong ports/protocols/applications/users/groups, etc. The tool should simulate rule changes and predict whether each change is likely to present problems.
  • Monitor network topology and device configuration: Every network and device configuration change can expose additional attack surface, so the tool needs to analyze every proposed change in the context of the existing rule set. This involves polling managed devices for their configurations periodically, as well as monitoring routing tables.
  • Compliance checking: Related to monitoring topology and configurations, changes can also cause compliance violations. So you need the firewall management tool to flag rule changes that might violate any relevant compliance mandates.
  • Recertify rules: The firewall management tool should offer a mechanism to go back to business owners, to ensure rules are still relevant and that they accept responsibility for their rules. You should be able to set an expiration date on a rule, and then require the owner to confirm the rule is still necessary. Getting rid of old rules is one of the most effective ways to optimize a rule set.

Asking for Forgiveness

Speaking of firewall rule recertification, you certainly can go through the process of chasing down all the business owners of rules (if you know who they are) and getting them to confirm each rule is still needed. That’s a lot of work. You could also choose a less participatory approach: make changes, then ask forgiveness if you break something. There are a couple options here:

  • Turn off unused rules: Use the firewall management tool’s ability to flag unused rules, and just turn them off. If someone complains, you know the rule is still required, and can assume they would be willing to recertify it. If not, you can get rid of it.
  • Blow out the rule base: You can also burn the rule base to the ground and wait for the complaints to start about applications that broke as a result. This is only sane in dire circumstances, where no one will take responsibility for rules or people are totally unresponsive to your attempts to clean things up. But it’s certainly an option.

NGFW Support

With the move…
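Since rule ordering comes up repeatedly above, here is a minimal sketch of hit-count-based reordering. It is a hypothetical illustration, not any product’s algorithm: it assumes hit counts have already been extracted from firewall logs, and it only swaps adjacent rules whose match criteria are disjoint, because with first-match semantics reordering overlapping rules would change firewall behavior.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    src: str      # simplified match criteria; real rules have many more fields
    dst: str
    port: int
    action: str
    hits: int     # from firewall log analysis

def overlaps(a: Rule, b: Rule) -> bool:
    """Toy overlap test: two rules can match the same packet only if every
    field is equal or a wildcard. Real tools compare full address sets."""
    def field_overlap(x, y):
        return x == y or x == "any" or y == "any"
    return (field_overlap(a.src, b.src) and field_overlap(a.dst, b.dst)
            and a.port == b.port)

def reorder(rules: list) -> list:
    """Bubble hot rules toward the top, but never past a rule they overlap
    with -- that would change what the firewall actually does."""
    rules = rules[:]
    changed = True
    while changed:
        changed = False
        for i in range(1, len(rules)):
            above, below = rules[i - 1], rules[i]
            if below.hits > above.hits and not overlaps(above, below):
                rules[i - 1], rules[i] = below, above
                changed = True
    return rules

rules = [
    Rule("allow-ssh-admin", "10.1.1.0/24", "any", 22, "allow", hits=120),
    Rule("deny-ssh",        "any",         "any", 22, "deny",  hits=400),
    Rule("allow-web",       "any",         "dmz", 80, "allow", hits=9000),
]
for r in reorder(rules):
    print(r.name, r.hits)
```

In this example allow-web floats to the top because it matches disjoint traffic, while deny-ssh stays below allow-ssh-admin even though it is hotter – the two overlap, so their relative order encodes policy and must be preserved.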


Threat Intelligence for Ecosystem Risk Management [New Paper]

Most folks think the move toward the extended enterprise is very cool. You know – get other organizations to do the stuff your organization isn’t great at. It’s a win/win, right? From a business standpoint there are clear advantages to building a robust ecosystem that leverages the capabilities of all participating organizations. But from a security standpoint, the extended enterprise adds a tremendous amount of attack surface. To make the extended enterprise work, your business partners need access to your critical information. And that’s where security folks tend to break out in hives. It’s hard enough to protect your networks, servers, and applications, while making sure your own employees don’t do anything stupid to leave you exposed. Imagine your risk being based not just on how well you protect your information, but also on how well all your business partners protect their information and devices. Actually, you don’t need to imagine that – it’s reality.

We are pleased to announce the availability of our Threat Intelligence for Ecosystem Risk Management white paper. This paper delves into the risks of the extended enterprise, and then presents a process for gathering information about trading partners, to make decisions regarding connectivity and access more fact-based. Many of you are not in a position to build your own capability to assess partner networks, but this paper offers perspective on how you would, so when considering external threat intelligence services you will be an educated buyer.

You can see the Threat Intelligence for Ecosystem Risk Management page in our Research Library, or download the paper directly (PDF). We want to thank BitSight Technologies for licensing the content in this paper. The largesse of our licensees enables us to provide our research without cost to you.


Firewall Management Essentials: Change Management

As we dive back into Firewall Management Essentials, let’s revisit some of the high points from our introduction:

Firewalls run on a set of rules that basically define what ports, protocols, networks, users – and increasingly applications – can do on your network. And just like a closet in your house, if you don’t spend time sorting through old stuff it can become a disorganized mess, with a bunch of things you haven’t used in years and don’t need any more. The problem is that, like your closet, this issue just gets worse if you put off addressing it. And it’s not like rule bases are static. You have new requests coming in to open this port or allow that group of users to do something new or different pretty much every day. The situation can get out of control quickly, especially as you increase the number of devices in use.

So first we will dig into building a consistent workflow to manage the change process. This process is important for numerous reasons:

  • Accuracy: If you make an incorrect change, or create rules which conflict with existing rules, you can add significant attack surface to your environment. So it is essential to ensure you make the proper changes, correctly.
  • Authorization: It is difficult for many security admins to say no, especially to persuasive business and technology leaders who ‘need’ their stuff done now. A consistent and fair authorization process eliminates bullying and other shenanigans folks use to get what they want.
  • Verification: Was the change made correctly? Are you sure? The ability to verify each change was correct and successful is important, especially for auditing.
  • Audit trail: Speaking of audit, making sure every change is documented – with details of the requestor and approver – is helpful both when preparing for an audit and for ensuring the audit’s outcome is positive. (A sketch of such an audit record appears at the end of this post.)

Network Security Operations

A few years ago we tackled building a huge and granular process map for network security operations, as part of our Network Security Operations Quant research. One of the functions we explicitly described was managing firewalls; check out the detailed process map in that report. It can be a bit ponderous for many organizations, and isn’t necessarily intended to be implemented in its entirety, but it illustrates what is involved in managing these devices. To ensure you understand how we define these terms, here is a brief description of each step from that report.

Policy, Rule, and Signature Management

In this phase we manage the content that underlies the network security devices. This includes attack signatures, and the policies & rules that control responses to an attack.

  • Policy Review: Given the number of monitoring and blocking policies available on network devices, it is important to keep rules (policies) current. Keep in mind the severe performance hit (and false positive issues) of deploying too many policies on a device. It is a best practice to review network security device policies and prune rules that are obsolete, duplicative, overly exposed, prone to false positives, or otherwise unneeded. Policy review triggers include signature updates, service requests (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy updates resulting from operational management of the device (the change management process described below).
  • Define/Update Policies & Rules: This involves defining the depth and breadth of the network security device policies, including the actions (block, alert, log, etc.) taken by the device if an attack is detected – whether via rule violation, signature trigger, or another method. Note that as the capabilities of network security devices continue to expand, a variety of additional detection mechanisms come into play, including increasing visibility into application traffic and identity stores. Time-limited policies may also be deployed to activate or deactivate rules for short-term needs. Logging, alerting, and reporting policies are defined in this step. Here it is important to consider the hierarchy of policies that will be implemented on devices. You will have organizational policies at the highest level, applying to all devices, which may be supplemented or supplanted by business unit or geographic policies. Those highest-level policies feed into the policies and/or rules implemented at a location, which then filter down to the rules and signatures implemented on a specific device. The hierarchy of policy inheritance can dramatically increase or decrease the complexity of rules and behaviors. Initial policy deployment should include a Q/A process to ensure none of the rules impacts the ability of critical applications to communicate, either internally or externally.
  • Document Policies and Rules: As the planning stage is an ongoing process, documentation is important for operational and compliance reasons. This step lists and details the policies and rules in use on the device, according to the associated operational standards, guidelines, and requirements.

Change Management

In this phase rule & signature additions, changes, updates, and deletions are handled.

  • Process Change Request and Authorize: Based on either a signature or policy change within the content management process, a change to the network security device(s) is requested. Authorization requires both ensuring the requestor is allowed to request the change, and determining the change’s relative priority to select an appropriate change window. The change’s priority is based on the nature of the signature/policy update and the risk of the relevant attack. Then build out a deployment schedule based on priority, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives, if downtime or changes to application use models are anticipated.
  • Test and Approve: This step requires you to develop test criteria, perform any required testing, analyze the results, and approve the signature/rule change for release once it meets your requirements. Testing should include signature installation, operation, and performance impact on the device as a result of the change. Changes may be implemented in ‘log-only’ mode to observe their impact before committing to blocking mode in production. With an understanding of the impact of the change(s), the request is either approved or denied. Obviously approvals may be required from…
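To make the authorization and audit trail requirements concrete, here is a minimal sketch of what a change request record might capture as it moves through such a workflow. The states, fields, and names are hypothetical – every organization’s process differs – but the point is that requestor, approver, test results, and timestamps all land in one durable record for the auditors.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class State(Enum):
    REQUESTED = "requested"
    AUTHORIZED = "authorized"
    TESTED = "tested"
    DEPLOYED = "deployed"
    VERIFIED = "verified"
    DENIED = "denied"

@dataclass
class ChangeRequest:
    rule_change: str              # e.g. "ALLOW tcp/22 from 10.1.1.0/24 to build-server"
    business_justification: str
    requestor: str
    state: State = State.REQUESTED
    history: list = field(default_factory=list)   # the audit trail

    def transition(self, new_state: State, actor: str, note: str = ""):
        """Record every state change: who did it, when, from what, and why."""
        self.history.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": actor,
            "from": self.state.value,
            "to": new_state.value,
            "note": note,
        })
        self.state = new_state

req = ChangeRequest(
    rule_change="ALLOW tcp/22 from 10.1.1.0/24 to build-server",
    business_justification="Legacy app maintenance by the dev team",
    requestor="jsmith",
)
req.transition(State.AUTHORIZED, actor="secops-lead", note="Window: Sat 02:00")
req.transition(State.TESTED, actor="secops-eng", note="Log-only mode, no conflicts")
req.transition(State.DEPLOYED, actor="secops-eng")
req.transition(State.VERIFIED, actor="secops-lead", note="Deployed rule matches request")
print(req.state.value, "-", len(req.history), "audit entries")
```

Whether this lives in a ticketing system or a purpose-built firewall management tool matters less than the properties: each transition requires an identified actor, and nothing in the history is ever overwritten.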


Incite 9/11/2013: Brave New World

On a trip to the Bay Area recently, I drove past the first electronic billboard I ever saw. It’s right on the 101 around Palo Alto, and has been there at least 7 or 8 years. This specific billboard brings up a specific and painful memory – it was also the first billboard I saw advertising Barracuda’s spam firewall, many moons ago. But clearly it wasn’t the last. Working for CipherTrust (a competitor) at the time, I got calls – and then started getting pictures of all the billboards – from our field reps, who were sporting new phones with cameras. They wanted to know why we couldn’t have billboards. I told them we could have billboards or sales people, but not both. Amazingly enough, they chose to stop calling me after that. That’s how I knew camera phones were going to be a big deal.

At that point a camera built into your phone was novel. There was a time when having music and video on the phone was novel too. Not any more. Now almost every phone has these core features, and lots of other stuff we couldn’t imagine living without today. For example, when was the last time you asked a rental car company for a paper map? Or didn’t price check something you were buying in a store to see whether you could get it cheaper online? And fancy new capabilities are showing up every day. Yesterday the Apple fanboys were all excited about thumbprint authentication and a fancy flash. Unless you are a pretty good photographer, there really isn’t any reason to carry a separate camera around any more. I’m sure Samsung will come out with something else before long, and the feature war will continue. But keep in mind that just 7 years ago all these capabilities were just dreams of the visionaries designing the next generation of mobile devices, and it took the hard work of engineers and designers to make those dreams a reality. And we are only getting started. It’s a brave new mobile-enabled world, and it’s really exciting to see where we will end up next. –Mike

Photo credit: “Brave New World #1” originally uploaded by Rodrigo Kore

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, which delivers all our content in its unabridged glory. And you can get all our research papers too.

  • Firewall Management Essentials: Introduction
  • Ecosystem Threat Intelligence: Use Cases and Selection Criteria; Assessing Ecosystem Risk; The Risk of the Extended Enterprise
  • Continuous Security Monitoring: Migrating to CSM; The Compliance Use Case; The Change Control Use Case; The Attack Use Case; Classification; Defining CSM; Why. Continuous. Security. Monitoring?
  • Database Denial of Service: Countermeasures; Attacks; Introduction
  • API Gateways: Implementation; Key Management; Developer Tools

Newly Published Papers

  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer’s Guide
  • The CISO’s Guide to Advanced Attackers
  • Defending Cloud Data with Infrastructure Encryption
  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment
  • Quick Wins with Website Protection Services

Incite 4 U

Touch me baby: I have long been skeptical of the possibility of widespread use of biometrics among consumers. What are the odds that someone could get a large percentage of consumers to carry around a fingerprint reader all the time? Phones were always the potential sweet spot, but most of the small optical readers we have seen integrated into devices had serious usability issues. That’s why Apple’s Touch ID is so interesting (I wrote it up at TidBITS and Macworld). It uses a snappy capacitive sensor in a device with a crypto chip, ubiquitous network access, and even short-range wireless (Bluetooth LE). Plus, it is a single phone model which will see widespread adoption. Expect others to copy the idea (potentially a good thing, but good luck finding decent sensors) and to see some very interesting applications over the next few years. 2FA for the mass market – here we go! – RM

Pull my finger: Schneier has it right that biometric systems can ‘almost certainly’ be hacked, but shoving a fake finger in front of a fingerprint scanner isn’t it. Biometric analysis is more than just the scanner. Once you have scanned a retina or fingerprint, the scanned data is sent to some other location, compared with a known representation of the print (probably a hash) in a database, and a yea/nay is sent back to whatever the user is trying to access – mobile phone, building, or whatever. That service may also perform some risk assessment before granting access. The entire ecosystem has to be secure as well. And the kicker is that the better the biometric detection piece, the more complex the system needs to be, leading to more potential ways to subvert the overall system! Biometrics should be a second factor of authentication, making fakery much more difficult. The idea is popular because of the convenience factor – biometrics can be more convenient than a password. But no one should consider them intrinsically more secure than passwords. Some people think this is a bad idea. – AL

Wallenda CISO: Simon Wardley posted an interesting article about when it’s time to fire the CISO. You’d figure after a breach, right? Or maybe if a big compliance fine is heading your way. Those are both decent times to think about making a change. But Simon’s point is that when the CISO (or CIO, for that matter) can no longer balance the needs of the business with the needs of security, and make appropriate adjustments, it is time for a change. Basically you need a tightrope walker, a Flying Wallenda, to balance all the competing interests in today’s IT environments. If the business is constantly going around IT (creating Shadow IT), there is clearly a failure to communicate or a resourcing problem. Either way, IT and/or security isn’t getting it done, and some changes are probably in order. – MR

Protection racket: I chuckled when completing the application for a corporate…


Incite 9/4/2013: Annual Reset

This week marks the end of one year and the beginning of the next. For a long time I took this opportunity around the holidays to revisit my goals and ensure I was still on track. I diligently wrote down my life goals and broke them into 10-, 5-, and 1-year increments, just to make sure I was making progress toward where I wanted to be. Then a funny thing happened. I realized that constantly trying to get somewhere else made me very unhappy. So I stopped doing that. That’s right – I don’t have specific goals any more. Besides the stuff on Maslow’s hierarchy, anyway. If I can put a roof over our heads, feed my family, provide enough to do cool stuff, and feel like I’m helping people on a daily basis, I’m good. Really.

But there are times when human nature rears its (ugly) head. These are the times when I wonder whether my approach still makes sense. I mean, what kind of high-achieving individual doesn’t need goals to strive toward? How will I know when I get somewhere if I don’t know where I’m going? Shouldn’t I be competing with something? Doesn’t a little competition bring out the best in everyone? Is this entire line of thinking just a cop-out because I failed a few times? Yup, I’m human, and my monkey brain is always placing these mental land mines in my path. Sustainable change is very hard, especially with my own mind trying to get me to sink back into old habits. These thoughts perpetually attempt to convince me I’m not on the right path – that I need to get back to constantly striving for what I don’t have, rather than appreciating what I do have.

Years ago my annual reset was focused on making sure I was moving toward my goals. Nowadays I use it to maintain my resolve to get where I want to be – even if I’m not sure where that is or when I will get there. The first year or two that was a real challenge – I was used to very specific goals, and without them I felt a bit lost. But not any more, because I look at results. If you are keeping score, I lead a pretty balanced life. I have the flexibility to work on what I want to work on, with people I enjoy working with. I can work when I want to work, where I want to work. Today that’s my home office. Friday it will be a coffee shop somewhere. Surprisingly enough, all this flexibility has not impacted my ability to earn at all. If anything, I am doing better than when I worked for the man. Yes, I’m a lucky guy.

That doesn’t mean I don’t get stressed out during crunch time, that I don’t get frustrated with things I can’t control, or that everything is always idyllic. I am human, which means my monkey brain wins every so often and I feel dissatisfied. But I used to feel dissatisfied most of the time, so that’s progress. I also understand that the way I live is not right for everyone. Working on a small team where everyone has to carry their own weight won’t work if you can’t sell or deliver what you sold. Likewise, without strong self-motivation to get things done, not setting goals probably won’t work out very well. But it works for me, and at least once a year I take a few hours to remind myself of that. Happy New Year (Shanah Tova) to those of you celebrating this week. May the coming year bring you health and happiness. –Mike

Photo credit: “Reset” originally uploaded by Steve Snodgrass

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, which delivers all our content in its unabridged glory. And you can get all our research papers too.

  • Firewall Management Essentials: Introduction
  • Ecosystem Threat Intelligence: Use Cases and Selection Criteria; Assessing Ecosystem Risk; The Risk of the Extended Enterprise
  • Continuous Security Monitoring: Migrating to CSM; The Compliance Use Case; The Change Control Use Case; The Attack Use Case; Classification; Defining CSM; Why. Continuous. Security. Monitoring?
  • Database Denial of Service: Countermeasures; Attacks; Introduction
  • API Gateways: Implementation; Key Management; Developer Tools

Newly Published Papers

  • The 2014 Endpoint Security Buyer’s Guide
  • The CISO’s Guide to Advanced Attackers
  • Defending Cloud Data with Infrastructure Encryption
  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment
  • Quick Wins with Website Protection Services

Incite 4 U

Wherefore art thou, cyber-liability insurance?: Interesting circumstances around Liberty Mutual suing their customer to define what they will and won’t cover with cyber insurance. As Dan Glass says, Liberty Mutual treats cyber just like physical assets. That means they will pay for the cost of the breach (as they pay for the destruction of physical assets), but they don’t want to cover other losses (such as regulatory fines or customer privacy lawsuits). If they are successful in defining these boundaries around their liability, Dan correctly points out: “In other words, cyber insurance will be a minor part of any technology risk management program.” Don’t let your BOD, CFO, or CIO get lulled into thinking cyber insurance will do much for the organization. – MR

Big R, Little r, what begins with R?: My views on risk management frameworks have seriously changed over the past decade or so. I once wrote up my own qualitative framework (my motivation now eludes me, but youthful exuberance was likely involved), but since then I have mostly been disillusioned with the application of risk management methodologies to security – particularly quantitative models that never use feedback to match predictions against reality. Russell Thomas has a great post showing the disconnect between how many of us in security look at risk and more mature financial models. To paraphrase, we often take a reductionist approach and try to map vulnerabilities and threats to costs –


Firewall Management Essentials: Introduction [New Series]

It starts right there in PCI-DSS Requirement 1: “Install and maintain a firewall configuration to protect cardholder data.” Since it’s the first requirement, firewalls must be important, right? Not that PCI is the be-all, end-all of security goodness, but it does represent the low bar of controls you should have in place to defend against attackers. As the first line of defense on a network, it’s the firewall’s job to enforce a set of access policies that dictate what traffic should be allowed to pass. It’s basically the traffic cop on your network, and acts as a segmentation point between separate networks. Given the compliance mandates, and the fact that firewalls have been around for over 20 years, they are a mature technology which every company has installed. It may be called an access router or UTM, but it provides firewall functionality.

Firewalls run on a set of rules that basically define what ports, protocols, networks, users – and increasingly applications – can do on your network. And just like a closet in your house, if you don’t spend time sorting through old stuff it can become a disorganized mess, with a bunch of things you haven’t used in years and don’t need any more. That metaphor fits the firewall rule base – when we talk to security administrators, they admit (often in a whisper) to having thousands of firewall rules, many of which haven’t been looked at in years. The problem is that, like your closet, this only gets worse if you put off addressing it. And it’s not like rule bases are static. You have new requests coming in to open this port or allow that group of users to do something new or different pretty much every day. The situation can get out of control quickly, especially as you increase the number of devices in use. That creates significant operational and security problems, including:

  • Attack Surface Impact: When a change request comes in, how many administrators actually do the work to figure out whether the change would create additional attack surface or contradict existing rules? Probably not enough, so firewall management – first and foremost – must maintain the integrity of the protection the firewall provides.
  • Performance Impact: Every extra rule in the rule base means the firewall may need to do another check on every packet that comes through, so more rules impact device performance. Keep in mind that the order of your rule set also matters: the sooner you can block a packet, the fewer rules you need to check, so rules should be structured to eliminate connections as quickly as possible (a toy illustration of this cost appears at the end of this post).
  • Verification: If a change was made, was it made correctly? Even if the change is legitimate and your operational team is good, there will still be human errors. So another problem with firewall management at scale is verifying each change.
  • Weak Workflow and Nonexistent Authorization: What happens when you receive a rule change request? Do you have a way to ensure each request is legit? Or do you do everything via 10-year-old forms and/or email? Do you have an audit trail to track who asked for the change and why? Can you generate documentation to show why each change was made? If you can’t, it is probably an issue, because your auditor is going to need to see substantiation.
  • Scale: The complexity of managing any operational device increases exponentially with every new device you add. Firewalls are no exception. If you have a dozen or more devices, odds are you have an unwieldy situation, with inconsistent rules creating security exposure.
  • Heterogeneity: Many enterprises use multiple firewall vendors, which makes it even more difficult to enforce consistent rules across a variety of devices.

As with almost everything else in technology, innovation adds a ton of new capabilities and increases operational challenges. The new shiny object in the firewall space is the Next-Generation Firewall (NGFW). At a very high level, NGFWs add the capability to define and enforce policies at the application layer. That means you can finally build a rule more granular than ALLOW port 80 traffic – instead defining which specific web-based applications are permitted. Depending on the application, you can also restrict specific behaviors within an application – for example, you might allow use of Facebook walls but block Facebook chat. You can enforce policies for users and groups, as well as certain content rules (we call this DLP Lite). The NGFW is definitely not your grand-pappy’s firewall, which means it dramatically complicates firewall policy management.

Network security is also going through a period of consolidation. Traditionally separate functions such as IPS and web filtering are making their way onto a consolidated platform we call the Perimeter Security Gateway (PSG). Yup – add more functions to the device and you increase policy complexity, making it all the more important to maintain solid operational discipline when managing these devices. In any sizable organization the PSG rule base will be difficult to manage manually, so automation is critical to improving the speed, accuracy, and effectiveness of these devices.

We are happy to get back to our network security roots and document our research on the essentials of managing firewalls. This research is relevant both to classical firewalls and PSGs. In Firewall Management Essentials we will cover the major areas of managing your installed base of firewalls. Here is our preliminary plan for posts in the series:

  • Automating Change: Firewall management comes down to effectively managing exceptions, in a way that provides proper authorization for each change, evaluates each one to ensure security is maintained (including ensuring new attack paths are not introduced), audits all changes, and supports rollback in the event of a problem.
  • Optimizing Performance: Once the change management process is in place, the next step is to keep the rule set secure and optimized. We will discuss defining the network topology and identifying ingress and egress points to help prioritize rule sets, and point out potential weaknesses in security posture.
  • Managing Access: With a strong change control process and optimized…
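As a toy illustration of the performance point above, here is a sketch of the expected number of rule comparisons per packet under first-match evaluation. The rules and match probabilities are hypothetical, and the independence assumption is a simplification, but it shows why pushing frequently hit rules toward the top pays off.

```python
# Hypothetical rule list: (name, fraction of remaining traffic it matches).
# First match wins, so every packet walks the list top-down until it hits.
rules = [("legacy-app", 0.01), ("dns", 0.20), ("web", 0.70), ("default-deny", 1.0)]

def expected_checks(ordered):
    """Expected rule comparisons per packet under first-match semantics."""
    total, remaining = 0.0, 1.0
    for depth, (_, p) in enumerate(ordered, start=1):
        caught = remaining * p
        total += depth * caught
        remaining -= caught
    return total + len(ordered) * remaining   # leftovers check every rule

print("as listed:", round(expected_checks(rules), 2))       # ~3.02 checks/packet
hot_first = sorted(rules[:-1], key=lambda r: r[1], reverse=True) + [rules[-1]]
print("hot first:", round(expected_checks(hot_first), 2))   # ~1.78 checks/packet
```

Note that the catch-all default stays last – moving it up would change behavior, not just performance – which is exactly why reordering has to respect rule semantics rather than hit counts alone.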


Deming and the Strategic Nature of Security

FierceCIO’s Derek Slater offers an interesting perspective on why W. Edwards Deming hates your approach to IT security. I was educated as an industrial engineer, so we had to study Deming left, right, and center in school. Of course when I graduated and went into programming, nobody realized that Deming’s concepts also apply to software development. But that’s another story for another Six Sigma. Derek’s point is that as long as security is treated as a tactical, reactive part of the organization, it’s doomed to fail:

The most common approach is that IT security is regarded as a tactical discipline. The IT security director is part of the IT department, reports to the CIO (or lower), and manages his or her work based on a set of tactical metrics – many of which are merely forms of counting: we blocked this number of web-based attacks and this other number of malware attachments. This approach is purely reactive and therefore doomed to fail.

The late business management guru W. Edwards Deming said this about reactive management – that it’s not rational: “Rational behavior requires theory. Reactive behavior requires only reflex action.” He also said this about counting: “It is easy to count. Counts relieve management of the necessity to contrive a measure with meaning.” Yup. The answer is to become more strategic in the eyes of the folks who matter. You could certainly become Pragmatic as a means to do that, but Derek offers a few pointers on that front as well. First, treat security as a risk management function – as long as you can gain consensus on how to quantify security risk, that’s a good start. Second, you had better React Faster, because you are only as good as your last response. We agree. Finally, security needs better measurement. No kidding. There, friends, is the biggest gap in security: becoming strategic to the business, by measuring what we do relative to the business metrics that affect the value of your company. Unfortunately there is no simple answer to the question of what matters to your business.

Photo credit: “W. Edwards Deming–statistician…saint” originally uploaded by Peter Kazanjy


Incite 8/27/2013: You Can’t Teach Them Everything

It’s nice that my kids are still at a stage where they don’t want to disappoint me or the Boss. They need our approval, and can be crushed if we show even the slightest measure of dissatisfaction with what they do. My ego-centric self likes that, but the rest of me wants them to learn to stop worrying about what everyone thinks and do what they think is right. Of course, that requires enough life experience to understand the difference between right and wrong. I know that a 12-year-old (soon to be 13) is not there yet. She still has much to learn, and I’m happy to share my experiences so she can learn from them. I told the stories about when I was bullied, and about how I learned that hard work creates results (with a bunch of luck). I have tried to impress upon her how important it is to surround yourself with people who appreciate the uniqueness we all possess in different ways. And for all I do, the Boss does 10x – all to give the kids a chance to be productive citizens and good people.

If I could teach her even a portion of my experiences over the past (almost) 45 years, she wouldn’t need to go through my angst, suffer my disappointment, or learn the lessons I’ve learned… the hard way. But I can’t, because kids don’t listen. Maybe they listen, or pretend to, but they certainly don’t understand. How could they? Some things you just have to learn for yourself. But hopefully there aren’t tens of millions of people watching as those hard lessons are learned. And hopefully the lesson isn’t documented in video and photos, and doesn’t go viral via more Tweets per second than the Super Bowl.

Yes, I am talking about the fiasco that Miley Cyrus has become. To be honest, I haven’t watched the performance at the MTV VMAs. I can’t bring myself to do it. I’ve seen that movie before. Child star gets too famous too fast, makes too much money, surrounds themselves with too many vultures and predators, gets very very lost, and becomes tabloid fodder. I’ve got November 10 in the Miley rehab pool. And where are her parents to tell her she’s being an idiot? I mean, what do you think Billy Ray was thinking as he watched her performance? Actually, I don’t care what he was thinking. What would you be thinking if that was your child? It brings front and center Chris Rock’s famous line: “If you can keep your son off the pipe and your daughter off the pole, you’re ahead of the game.”

But you still can’t teach kids everything. Sometimes they have to learn hard lessons themselves. And it’s gonna hurt. Your job is to pick them up, dust them off, and help them get back on the horse. But most of all, they need to know that you love them – during the good times and bad. Especially during the bad times… –Mike

Photo credit: “Bad Teacher” originally uploaded by Sonya Cheney

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, which delivers all our content in its unabridged glory. And you can get all our research papers too.

  • Ecosystem Threat Intelligence: Use Cases and Selection Criteria; Assessing Ecosystem Risk; The Risk of the Extended Enterprise
  • Continuous Security Monitoring: Migrating to CSM; The Compliance Use Case; The Change Control Use Case; The Attack Use Case; Classification; Defining CSM; Why. Continuous. Security. Monitoring?
  • Database Denial of Service: Countermeasures; Attacks; Introduction
  • API Gateways: Implementation; Key Management; Developer Tools

Newly Published Papers

  • The 2014 Endpoint Security Buyer’s Guide
  • The CISO’s Guide to Advanced Attackers
  • Defending Cloud Data with Infrastructure Encryption
  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment
  • Quick Wins with Website Protection Services

Incite 4 U

I get AV’s cufflinks: Great thought-provoking post by Wendy Nather about the marketing-driven evolution of anti-malware technology. Succinctly stated: “what comes after advanced?” Her points are well-stated: no AV vendor merely uses signatures, and it’s all about detection – not necessarily prevention – now. I guess this React Faster stuff might have some legs. Though the best line in the post is “Nobody wants to say antivirus is dead, but let’s just say they’re planning ahead for the wake and eyeing the stereo.” That kind of prevention is obsolete, but as evidenced by the IBM/Trusteer deal, there is clearly a great future (at least from a valuation standpoint) for companies with new-age prevention technology. But what happens when that advanced stuff isn’t differentiated any more? I guess the marketeers will need to come up with a new term to describe the next shiny object. – MR

Tick tock: Dealing with a breach is never a lot of fun. First you need to detect it at all; then you need to figure out whether it’s real, what exactly is going on, what was affected, and how – all while containing the incident, keeping as many important things running as possible, and figuring out a recovery strategy. For anything resulting in lost data, it is an unenviable process to work through. Then, if regulated data is lost, there is the eventual breach notification, which senior executives love. Okay, now imagine that you have 24 hours to notify authorities of any breach, and must get all the details to them within 72 hours. Because if you operate in the EU, that is your new time limit. I’m all for breach notification laws, but that one might be a tad unrealistic. Keep in mind that we still need to see how it will be enforced, but you had better get your lawyers cracking on it now. You know, just in case. – RM

Growth business: The latest Nilson Report on global card fraud rates is out, and fraud now accounts for 5.22% of all card transactions. In fact, even with card usage up 11.4% year over year, fraud is up 14.6% over the same period. And when you’re…


Ecosystem Threat Intelligence: Use Cases and Selection Criteria

We touched on the Risks of the Extended Enterprise and the specifics of Assessing Partner Risk, so now let’s apply these concepts to a few use cases to make them a little more tangible. We will follow a similar format for each use case: talking about the business need for access, then the threat presented by that access, and finally how Ecosystem Threat Intelligence (EcoTI) helps you make better decisions about specific partners. Without further ado, let’s jump in.

Simple Business Process Outsourcing Use Case

Let’s start simply. As with many businesses, sometimes it is cheaper and better to have external parties fulfill non-strategic functions. We could be talking about anything, from legacy application maintenance to human resources form processing. But almost all outsourcing arrangements require you to provide outsiders with access to your systems, so they can use some of your critical data. For any kind of software development, an external party needs access to your source code. And unless you have a very advanced and segmented development network, developers have access to much more than just the (legacy) applications they are working on. So if any of their devices are compromised, attackers can gain access to your developers’ devices and build systems, and a variety of other things that would probably be bad. If we are talking about human resources outsourcing, those folks have access to personnel records, which may include sensitive information such as salaries, employment agreements, health issues, and other stuff you probably don’t want published on Glassdoor. Even better, organizations increasingly use SaaS providers for HR functions, which moves that data outside your data center and removes even more of your waning control.

The commonality between these two outsourcing situations is that access is restricted to just one trading partner. Of course you might use multiple development shops, but for simplicity’s sake we will just talk about one. In this case your due diligence occurs while selecting the provider and negotiating the contract. That may entail demanding background checks on external staffers, and a site visit to substantiate sufficient security controls. At that point you should feel pretty good about the security of your trading partner. But what happens after that? Do you assess these folks on an ongoing basis? What happens if they hire a bad apple? Or if they are attacked and compromised due to some other issue that has nothing to do with you? Thus the importance of an ongoing assessment capability. If you are a major client of your outsourcer, you might have a big enough stick to get them to share their network topology, so at least you won’t have to build that yourself. In this scenario you are predominantly concerned with bot activity (described as Sickness from Within in our previous risk assessment post), because that’s the smoking gun for compromised devices with access. Compromised Internet-facing devices can also cause issues, so you need to consider them too. But as you can see, in this use case it makes sense to prioritize internal issues over public-facing vulnerabilities when you calculate a relative risk score. In this limited scenario it is not really a relative score, because you aren’t comparing the provider to anyone else – only one external party has access to any particular dataset. So if your Ecosystem Threat Intelligence alerts you to an issue with this partner, you will need to act quickly. Their access could cause you real problems.

Many Partners Use Case

To complicate things a bit, let’s consider that you may need to provide access to many trading partners. Perhaps external sales reps have access to your customer database and other proprietary information about your products and services. Or perhaps your chain of hospitals provides access to medical systems to hundreds of doctors with privileges to practice at your facilities. Or it could be upstream suppliers who make and assemble parts for your heavy machinery products. These folks have your designs and sales forecasts, because they need to deliver inventory just in time for you to get product out the door (and hit your quarterly numbers). Regardless of the situation, you have to support dozens of trading partners or more, offering them access to some of your most critical enterprise data. Sometimes it’s easier for targeted attackers to go after your trading partners than to target you directly. We have seen this in the real world, with subassembly manufacturers of defense contractors hacked for access to military schematics and other critical information on a particular weapons program.

In this situation, as in the use case above, the security team typically cannot refuse to connect with the partner. Sales executives frown on the security team shutting down a huge sales channel. Similarly, the security team cannot tell the final assembly folks they won’t get their seats because the seat manufacturer was breached. Although you can’t stop the business, you can certainly warn the senior team about the risks of connecting with a specific trading partner. But to substantiate those concerns, you need data to back up your claim. This is where calculating relative risk scores for multiple trading partners can really help make your case (see the sketch at the end of this post). It’s probably not a bad assumption that all trading partners are compromised in some fashion. But which ones are total fiascos? Which partners cannot even block a SQLi attack on an ecommerce site? Which have dozens of bots flooding the Internet with denial of service attacks? Specifics from your Ecosystem Threat Intel efforts enable you to make a fact-based case to senior executives that connecting to a partner is not worth the risk. Again, you can’t make the business decision for the executive, but you can arm them with enough information to make a rational decision. Or you could suggest an alternative set of security controls for those specific partners. You might force them to connect to your systems through a VDI (virtual desktop) service on your premises (so your data never leaves your network) and monitor everything they do in…
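To illustrate the relative risk scoring idea, here is a minimal sketch. The indicators and weights are entirely hypothetical – a real Ecosystem Threat Intelligence service would derive them from observed bot activity, Internet-facing vulnerabilities, and similar telemetry – but it shows how raw observations about partners become a comparable ranking you can put in front of an executive.

```python
# Hypothetical per-partner observations, e.g. from external scans and
# botnet telemetry. Higher numbers are worse.
partners = {
    "dev-shop-a":     {"bot_infections": 12, "exposed_vulns": 3,  "dos_sources": 0},
    "hr-saas":        {"bot_infections": 1,  "exposed_vulns": 0,  "dos_sources": 0},
    "parts-supplier": {"bot_infections": 4,  "exposed_vulns": 22, "dos_sources": 9},
}

# Weights encode what matters for the use case. For outsourced development,
# internal "sickness" (bots) outweighs public-facing vulnerabilities.
WEIGHTS = {"bot_infections": 5.0, "exposed_vulns": 1.0, "dos_sources": 3.0}

def risk_score(observations):
    return sum(WEIGHTS[key] * value for key, value in observations.items())

ranked = sorted(partners.items(), key=lambda kv: risk_score(kv[1]), reverse=True)
for name, obs in ranked:
    print(f"{name:15s} relative risk: {risk_score(obs):6.1f}")
```

The absolute numbers mean nothing by themselves; the value is in comparing partners with similar access, and in replacing “they seem risky” with specifics when you brief the senior team.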


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.