Network Security Fundamentals: Default Deny (UPDATED)

By Mike Rothman

(Update: Based on a comment, I added some caveats regarding business critical applications.)

Since I’m getting my coverage of Network and Endpoint Security, as well as Security Management, off the ground, I’ll be documenting a lot of fundamentals. The research library is bare from the perspective of infrastructure content, so I need to build that up, one post at a time.

As we start talking about the fundamentals of network security, we’ll first zero in on the perimeter of your network: the Internet-facing devices accessible to the bad guys, and usually one of the most prevalent attack vectors.

Yeah, yeah, I know most of the attacks target web applications nowadays. Blah blah blah. Most, but not all, so we have to revisit how our perimeter network is architected and what kind of traffic we allow into that web application in the first place.

Defining Default Deny

Which brings us to the first topic in the fundamentals series: Default Deny, which implements what is known in the trade as a positive security model. Basically it means unless you specifically allow something, you deny it.

It’s the network version of whitelisting. In your perimeter device (most likely a firewall), you define the ports and protocols you allow, and turn everything else off.
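
As a sketch, a minimal ingress default-deny policy in iptables-restore format might look like the following. The allowed ports are illustrative – yours come from your own application inventory, not from a blog post.

```
*filter
# Default-deny policies: anything not explicitly allowed below is dropped
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow return traffic for connections already established
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Explicitly allow only the services you actually support
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```

Everything not matched by an ACCEPT rule falls through to the DROP policy – that fall-through is the whole point of the positive security model.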

Why is this a good idea? Lots of attacks target unused and strange ports on your firewalls. If those ports are shut down by default, you dramatically reduce your attack surface. As mentioned in the Low Hanging Fruit: Network Security post, many organizations have out-of-control firewall and router rules, so this also provides an opportunity to clean those rules up.

As simple an idea as this sounds, it’s surprising how many organizations either don’t have default deny as a policy, or don’t enforce it tightly enough because developers and other IT folks need their special ports opened up.

Getting to Default Deny

One of the more contentious low hanging fruit recommendations, as evidenced by the comments, was the idea to just blow away your overgrown firewall rule set and wait for folks to complain. A number said that wouldn’t work in their environments, and I can understand that. So let’s map out a few ways to get to default deny:

  • One Fell Swoop: In my opinion, we should all be working to get to default deny as quickly as possible. That means taking a management by complaint approach for most of your traffic, blowing away the rule set, and waiting for the help desk phone to start ringing. Prior to blowing up your rule base, make sure to define the handful of applications that will get you fired if they go down. Management by complaint doesn’t work when the complaint is attached to a 12-gauge pointed at your head. Support for those applications needs to go into the base firewall configuration.
  • Consensus: This method involves working with senior network and application management to define the minimal set of allowed protocols and ports. Then the onus falls on the developers and ops folks to work within those parameters. You’ll also want a specific process for exceptions, since you know those pesky folks will absolutely positively need at least one port open for their 25-year-old application. If that won’t work, there is always the status quo approach…
  • Case by Case: This is probably how you do things already. Basically you go through each rule in the firewall and try to remember why it’s there and if it’s still necessary. If you do remember who owns the rule, go to them and confirm it’s still relevant. If you don’t, you have a choice. Turn it off and risk breaking something (the right choice) or leave it alone and keep supporting your overgrown rule set.
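
For the case-by-case pass, even a crude script over a rules export helps surface the rules nobody owns. Here is a sketch using an invented export format and an invented “# owner:” annotation convention – adapt the pattern to whatever your firewall’s rule dump actually looks like.

```shell
# Build a tiny sample rules export (the format and annotations are invented)
cat > /tmp/ruleset.txt <<'EOF'
allow tcp 443 # owner: web-team
allow tcp 8080
allow udp 514 # owner: logging
allow tcp 9999
EOF

# Flag every rule with no owner annotation -- these are your audit candidates
grep -vn '# owner:' /tmp/ruleset.txt
# -> 2:allow tcp 8080
#    4:allow tcp 9999
```

Anything this prints is a rule you either need to find an owner for, or turn off and see who complains.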

Regardless of how you get to Default Deny, communication is critical. Folks need to know when you plan to shut down a bunch of rules and they need to know the process to get the rules re-established.

Testing Default Deny

We at Securosis are big fans of testing your defenses. Just because you think your firewall configuration enforces default deny doesn’t mean it does – you need to be sure. So try to break it. Use vulnerability scanners and automated pen testing tools to find exposures that can be exploited. And make this kind of testing a standard part of your network security practice.
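
As a very rough sketch of that kind of check, here is a quick shell probe. The host and port list are illustrative, and a real test should use a proper scanner (Nmap, a vulnerability scanner, etc.) against your actual perimeter from the outside.

```shell
# Probe a short list of ports to confirm only expected services answer.
# Host and port list are illustrative -- point this at your own perimeter.
probe_port() {
  # Uses bash's /dev/tcp; the 1-second timeout keeps filtered ports fast
  if timeout 1 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "open: $2"
  else
    echo "closed/filtered: $2"
  fi
}

for p in 22 80 443 8080; do
  probe_port 127.0.0.1 "$p"
done
```

Anything reported open that isn’t on your allowed list is a gap between the policy you think you have and the one you actually enforce.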

Things change, including your firewall rule set. Mistakes are made and defects are introduced. Make sure you are finding them – not the bad guys.

Default Deny Downside

OK, as simple and clean as default deny is as a concept, you do have to understand this policy can break things, and broken stuff usually results in grumpy users. Sometimes they want to play that multi-player version of Doom with their college buddies and it uses a blocked port. Oh, well, it’s now broken and the user will be grumpy. You also may break some streaming video applications, which could become a productivity boost during March Madness. But a lot of the video guys are getting more savvy and use port 80, so this rule won’t impact them.

As mentioned above, it’s important to ensure the handful of business critical applications still run after the firewall ruleset rationalization. So do an inventory of your key applications and what’s required to support those applications. Integrate those rules into your base set and then move on. Of course, mentioning that your trading applications probably shouldn’t need ports 38-934 open for all protocols is reasonable, but ultimately the business users have to balance the cost to re-engineer the application versus the impact to security posture of the status quo. That’s not the security team’s decision to make.

Also understand default deny is not a panacea. As just mentioned, lots of application traffic uses port 80 or 443 (SSL), and will largely be invisible to your firewall. Sure, some devices claim “deep packet inspection” and others talk about application awareness, but most don’t deliver on it. So more sophisticated attacks require additional layers of defense.

Understand default deny for what it is: a coarse filter for your perimeter, which reduces your attack surface. And it’s one of the more basic network security fundamentals.

Next up, we’ll talk about network monitoring, since that is both a hot topic and fundamental to defending your network.


Michael Dundas has claimed that “large companies” can’t use default deny because they’re apparently far too complex.  He overlooked two things which are just as true today as they were five years ago.

1) business-critical applications should be documented, and the documentation should include what they do on the network. Security implements controls based on business requirements, and if the business requirements are incomplete, then there will obviously be errors.  I suppose there are always people who try to blame Security when the controls do exactly what they’re supposed to do, but that’s a political problem rather than a technical one.  Political problems have no place in protecting customer or enterprise data.

2) an actual “large” company will have a test environment.  You don’t deploy things to production until they’ve been tested.  Some kids will inevitably come along and try to avoid testing by misusing “agile” development (hint: there’s still testing if you’re correctly doing anything agile), but anyone working at a big important company like the one Michael is implicitly referencing with the “12 people” remark definitely has a test environment.  If there is no test environment, then things are likely already breaking when there are changes—so more breakage when better security is introduced won’t be a new thing.

People who claim that it’s far too hard to do things correctly, and then hide behind fallacies like “in the real world, things are more complex” and “you just don’t understand,” are all over the place.  Those work-avoiding people are surprisingly common, and are the reason that there is a new security breach in the news every couple of days.

I work for a fairly large company (well over 100K “servers”, let alone workstations) which uses a default deny approach, BTW, and we don’t have any executives complaining that they can’t play Doom. ;)

Oh, and one more thing. One approach that I like was overlooked in the 3 implementation possibilities above.  When I implement a new control of some sort, I typically try to implement first in a log-only mechanism.  This approach allows “legacy” things to continue operating, and the logs give a good place to start when identifying what kind of traffic is “normal” and which of that traffic should be allowed.  That’s only possible when the control can log things, but anyone using a firewall that can’t generate logs might want to consider a different firewall. :)
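
A log-only staging pass like the one described above might look like this in iptables-restore terms (a sketch; the port range and log prefix are invented):

```
*filter
:INPUT ACCEPT [0:0]
# Stage one: log, but don't yet drop, traffic the future default-deny
# would block -- the logs show what "normal" traffic actually looks like
-A INPUT -p tcp --dport 1024:65535 -j LOG --log-prefix "would-deny: "
COMMIT
```

Once the logs stop surprising you, the LOG rule becomes a DROP rule.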

By Danny

My first comment would be to make sure and mention both ingress and egress rule lists. Both should strive for default deny. It is still rare that egress is properly set up.
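
An egress default-deny sketch in iptables-restore form (the destinations and ports are illustrative):

```
*filter
# Outbound default deny: internal hosts only reach what's explicitly allowed
:OUTPUT DROP [0:0]
# Return traffic for inbound connections already allowed
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Internal resolver only (address is illustrative)
-A OUTPUT -p udp -d 10.0.0.53 --dport 53 -j ACCEPT
# Outbound web via an explicit allowance
-A OUTPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```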

Second, I’d just like to say I share your tone and attitude when it comes to users. I’ve stopped being sincere when apologizing for it, although I’ll still half-heartedly do so.

Third, you can expand on bullet #3 “case by case” by also including time spent investigating existing traffic. If you have an allow rule with no detail or owner, the next best thing is to check your logs or dive into some netflow statistics (depending on your firewall capabilities) and see what, if anything, is traversing that rule (or is maybe getting caught further up the chain!). This helps avoid closing off rules that are critical, and at least keeps the ball moving forward by identifying rules and/or killing the unused ones. Ideally, the goal is to be as tight as possible, with documentation for every allowance. A firewall rule with a question mark next to it is a bad firewall rule.
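
That kind of traffic investigation can be as simple as counting hits per destination port in the firewall logs. A sketch, with invented iptables-style log lines standing in for a real log:

```shell
# Sample iptables-style log lines (invented) for a rule with no known owner
cat > /tmp/fw.log <<'EOF'
Jan 10 10:01:01 fw kernel: ACCEPT IN=eth0 SRC=10.0.0.5 DST=10.0.0.1 PROTO=TCP DPT=443
Jan 10 10:01:02 fw kernel: ACCEPT IN=eth0 SRC=10.0.0.6 DST=10.0.0.1 PROTO=TCP DPT=443
Jan 10 10:01:03 fw kernel: ACCEPT IN=eth0 SRC=10.0.0.7 DST=10.0.0.1 PROTO=TCP DPT=8080
EOF

# Count hits per destination port to see what actually traverses the rule
grep -o 'DPT=[0-9]*' /tmp/fw.log | sort | uniq -c | sort -rn
```

Ports with zero hits over a reasonable observation window are strong candidates for removal; ports with traffic get chased back to an owner.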

By LonerVamp

As most say, my tone and attitude are “acquired tastes.” There needs to be a middle ground in terms of letting the users do what they want, and keeping a safe environment. Clearly we can’t be perceived as “Dr. No,” but being the rubber stamp committee isn’t going to yield results either.

I agree with your point about classifying a handful of business critical apps and treating them with kid gloves. I’m going to be updating the post later today to reflect that perspective, so thanks for that.

By Mike Rothman

Marcus Ranum is clapping reading this post

By Ben

“OK, as simple and clean as default deny is as a concept, you do have to understand this policy can break things, and broken stuff usually results in grumpy users. Sometimes they want to play that multi-player version of Doom with their college buddies and it uses a blocked port. Oh, well, it’s now broken and the user will be grumpy.” 

Grumpy users?  How about, you break an application or parts of an application that are responsible for what makes the company lots of money.  Take for example an application that involves financial transactions and has daily code updates that change the application. Are you honestly willing to use the default deny approach here?  I’d suggest that it is not feasible, and any responsible business unit owner is not going to allow you to just deploy a default deny.  Maybe in a little office of 12 people, but with stock trading systems or online banking it is doubtful, at least in my experience.

I am all for the theory of default deny when it comes to security. It is a good objective or goal and I always try to get there.  Experience has taught me however, that the goal of security has to be put in perspective with the inconvenience, cost, and risk to the company.  You identify exposures to policies and deployment of policies and in some cases certain exposures may be an acceptable risk or they may not. That is up to the business owner to decide, based on the analysis and business data available.

While I agree with blocking unused ports, and blocking based on protocols, these are completely different security concepts and should be treated separately.  Try to block Skype (or just Skype Video while allowing Skype instant messaging) and see how far you get with the technology available today.
I’ve seen firewalls and IDS where SMTP is blocked because it doesn’t follow the RFC to the letter, and guess what: most SMTP servers do not follow the RFC 100%.  Most web servers, Apache included, do not follow HTTP specifications 100%.

I also take issue with the general tone and attitude of the post, specifically towards ‘users’.  This attitude and tone is what causes rifts between security departments and business units as well as security consultants and business units. Security individuals, teams, and consultants need to be diplomatic, aware of sensitivities, and handle their approach with care.  A little bit of understanding and conversation goes a long way, something the security community often times lacks.

By Michael Dundas
