In Mike’s post this morning on network security, he made the outlandish suggestion that rather than trying to fix your firewall rules, you could just block everything and wait for the calls to figure out what really needs to be open.
I made the exact same recommendation at the SANS data security event earlier this week, albeit about blocking access to files with sensitive content.
I call this “management by complaint”, and it’s a pretty darn effective tactic. Many times in security we’re called in to fix something after the fact, or we find ourselves trying to clean up something that’s gotten messy over time. Nothing wrong with that – my outbound firewall rule set on my Mac (Little Snitch) is loaded with stuff that’s built up since I set up this system, including many out-of-date permissions for stale applications.
It can take a lot less time to turn everything off, then turn things back on as they’re needed. For example, I once talked with a healthcare organization in the midst of a content discovery project. The slowest step was identifying the various owners of the data, then determining if it was still needed. If it wasn’t known to be part of a critical business process, they could just quarantine the data and leave a note (file) with a phone number.
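To make that concrete, here’s a minimal sketch of the quarantine-and-leave-a-note idea in Python. The paths, the quarantine layout, and the phone extension are all made up for illustration, not anything that organization actually ran:

```python
#!/usr/bin/env python3
"""Rough sketch: move a flagged file to quarantine and leave a note behind.
The quarantine path and contact number are hypothetical placeholders."""
import shutil
from pathlib import Path

QUARANTINE_ROOT = Path("/quarantine")      # hypothetical quarantine share
CONTACT = "Security team, ext. 5555"       # hypothetical phone number

def quarantine(file_path: str) -> None:
    src = Path(file_path)
    dest = QUARANTINE_ROOT / src.name
    QUARANTINE_ROOT.mkdir(parents=True, exist_ok=True)

    # Move the sensitive file out of the share...
    shutil.move(str(src), str(dest))

    # ...and leave a note in its place so the data owner knows who to call.
    note = src.parent / (src.name + ".QUARANTINED.txt")
    note.write_text(
        "This file was quarantined pending review of its sensitive content.\n"
        f"If it supports a critical business process, call {CONTACT}.\n"
    )

if __name__ == "__main__":
    quarantine("/shares/finance/old_customer_export.csv")  # example path only
```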
There are four steps:
- Identify the known rules you absolutely need to keep, e.g., outbound port 80 or an application’s access to its supporting database (see the sketch after this list).
- Turn off everything else.
- Sit by the phone. Wait for the calls.
- As requests come in, evaluate them and turn things back on.
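Steps 1 and 2 don’t have to be fancy. Here’s a minimal sketch that turns a short allowlist into an iptables-style default-deny outbound policy; the specific ports, the database host, and the idea of generating shell commands from Python are just illustration, not a recommendation for your environment:

```python
"""Rough sketch of steps 1 and 2: build a default-deny outbound policy
from a short allowlist. The entries below (ports 80/443, a hypothetical
app server talking to its database on 10.0.0.5:5432) are placeholders."""

# Step 1: the rules you absolutely need to keep.
ALLOW = [
    {"proto": "tcp", "dport": 80},                        # outbound web
    {"proto": "tcp", "dport": 443},                       # outbound TLS
    {"proto": "tcp", "dest": "10.0.0.5", "dport": 5432},  # app -> its database
]

def build_rules(allow):
    rules = []
    for r in allow:
        cmd = f"iptables -A OUTPUT -p {r['proto']}"
        if "dest" in r:
            cmd += f" -d {r['dest']}"
        cmd += f" --dport {r['dport']} -j ACCEPT"
        rules.append(cmd)
    # Step 2: turn off everything else (default deny outbound).
    rules.append("iptables -P OUTPUT DROP")
    return rules

if __name__ == "__main__":
    print("\n".join(build_rules(ALLOW)))
    # Steps 3 and 4 are the human part: wait for the calls, evaluate
    # each request, and add rules back one at a time.
```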
This only works if you have the right management support (otherwise, I hope you have a hell of a resume, ‘cause you won’t be there long). You also need the right granularity for this to make a difference. For example, one organization created web filtering exemptions by completely disabling filtering for those users, rather than allowing only what they needed.
Think about it – this is exactly how we go about debugging (especially when hardware hacking). Turn everything off to reduce the noise, then turn things on one by one until you figure out what’s going on. Works way better than trying to follow all the wires while leaving all the functionality in place.
Just make sure you have a lot of phone lines. And don’t duck up anything critical, even if you do have management approval. And for a big project, make sure someone is around off-hours for the first week or so… just in case.
5 Replies to “Management by Complaint”
This idea of ‘default deny’ has been around the security industry (physical and network) for years. Personally, I look at it as an ‘ideal’. My experience is that you rarely if ever will be able to deploy that way. Maybe for a new application or service with an amazing timeline, where the business doesn’t care when they launch or their launch date is flexible, but projects I have been involved with rarely have that luxury. In my experience, it is a bunch of changes to an existing online system where the end users must not see the changes, or the changes are ‘transparent’. It must go smoothly, and it must be deployed by a specific date.
While I appreciate the goal of ‘default deny’ and always try to obtain it for a particular project, inconveniencing users and waiting for them to call can be very costly to a business. A business has to weigh the risks of security against the cost and inconvenience placed on users and customers. Not to mention, it doesn’t do much to warm public relations for your security team.
Echo ds’s comment.
When I turned on default deny outbound on the firewalls, with exceptions for known stuff, I expected a flood of calls. Nada. Zip. Zilch. Over the years a lot of cruft has built up in the firewall rules, and occasionally I go through them and make sure that user X and project Y still need that exception; it helps to keep some kind of documentation.
What infuriates me is when HQ decides that Enterprise Application X needs to run on Port Random, or even better, switch from port OldRandom to NewRandom. Except they don’t tell the IT people in the field, and then *my* phone starts ringing.
And don’t even get me started on all the firewall rules I had to turn off or tweak to get Active Directory to work, including really basic IDS rules. Definitely lowered our security posture.
We actively use the scream test for rolling out OS and major application patches. Our R&D-heavy environment is way too diverse to do testing in advance, so we push the patches and wait for the screams. Most major vendors have gotten much better at not creating widespread mayhem with patches. But user memories can be long, and every bug that appears after Patch Week is attributed to the patches we pushed – even on operating systems we *didn’t* patch. Such is life.
Locally, we require advanced notice, documentation, rollback plans, and cursory risk analysis for changes, so we stand some chance of attributing problems to causes. This is basic IT change control, but we can’t even get our corporate overlords to practice change control for enterprise apps, or major network changes. Common sense is not common around here.
I actually had to do this process recently, as my firm (firewall deny by default) acquired another firm (firewall allow any outbound). We did analyze their logs and try to model what would have been a reasonable outbound policy for them, but when we moved them behind our firewalls, we knew there would be issues.
Funny thing is, while we expected to be overwhelmed, it never did get out of hand. Not sure my experience here is representative, but I was pleasantly surprised by the ease of this process.
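For what it’s worth, the log analysis wasn’t anything fancy. Roughly this kind of thing, assuming a simple CSV export of outbound connections (the column names here are made up; our actual tooling and log format were different):

```python
"""Rough sketch: summarize outbound connections from an exported firewall
log to draft an allowlist. The CSV columns (dst, dport, proto) are assumed;
real firewall exports will differ."""
import csv
from collections import Counter

def top_destinations(log_csv, min_hits=50):
    """Count (dst, dport, proto) tuples; frequent ones become rule candidates."""
    counts = Counter()
    with open(log_csv, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["dst"], row["dport"], row["proto"])] += 1
    return [(key, n) for key, n in counts.most_common() if n >= min_hits]

if __name__ == "__main__":
    for (dst, dport, proto), hits in top_destinations("outbound_log.csv"):
        print(f"{proto:4} {dst}:{dport:<6} {hits} connections")
```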
Place I used to work, they called this the “scream test”
Hey Rich
The Non-BSOFH way to do this: http://www.guerilla-ciso.com/archives/312
But yeah, at some time you’re going to have to shut off stuff and see who complains. The trick is that there is a point along the block_everything<—————>trace_every_connection spectrum where you don’t get flooded with calls.