How often have you heard the phrase, “Never assume” (insert the cheesy catch phrase that was funny in 6th grade here)?

For the record, it’s wrong.

When designing our security, disaster recovery, or whatever, the problem isn’t that we make assumptions, it’s that we make the wrong assumptions. To narrow it down even more, the problem is when we make false assumptions, and typically those assumptions skew towards the positive, leaving us unprepared for the negative. Actually, I’ll narrow this down even more… the one assumption to avoid is a single phrase: “That will never happen.”

There’s really no way to perform any kind of forward-looking planning without some basis for assumptions. The trick to avoiding problems is that these assumptions should generally skew to the negative, and must always be justified rather than merely accepted. It’s important not to make all your decisions based on worst cases, because that leads to excessive costs. Exposing all the assumptions helps you examine the corresponding risk tolerance.

For example, in mountain rescue we engaged in non-stop scenario planning, and had to make certain assumptions. We assumed that a well-cared-for rope under proper use would only break at its tested breaking strength (minus knots and other calculable factors). We didn’t assume said breaking strength was whatever the manufacturer printed on the label; we used our own internal breaking strength value, determined through testing. We would then build in a minimum 3:1 safety factor to account for unexpected dynamic strains/wear/whatever. In the field we were constantly calculating load levels in our heads, and would even occasionally break out a dynamometer to confirm. We also tested every single component in our rescue systems – including the litter we’d stick the patient into, just in case someone had to hang off the end of it.
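
For anyone who wants to see the arithmetic, here’s a minimal sketch of that kind of back-of-the-envelope calculation. The specific numbers (a 30 kN test value, a 0.7 knot efficiency) are hypothetical placeholders, not our actual test data.

```python
def max_working_load(tested_breaking_strength_kn: float,
                     knot_efficiency: float = 0.7,
                     safety_factor: float = 3.0) -> float:
    """Back-of-the-envelope working load limit for a rescue rope.

    tested_breaking_strength_kn: breaking strength from our own testing,
        not the number printed on the manufacturer's label.
    knot_efficiency: fraction of strength left after knots and other
        calculable reductions (0.7 is an assumed, illustrative value).
    safety_factor: minimum 3:1 to absorb dynamic strains, wear, whatever.
    """
    derated_strength = tested_breaking_strength_kn * knot_efficiency
    return derated_strength / safety_factor


# Hypothetical example: a rope that tested at 30 kN, knotted, with a 3:1
# factor gives a 7 kN working load limit -- the number we'd compare against
# the loads we were estimating in the field.
print(max_working_load(30.0))  # 7.0
```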

Our team was very heavy with engineers, but that isn’t the case with other rescue teams. Most of them used a 10:1 safety factor, but didn’t perform the same kinds of testing or calculations we did. There’s nothing wrong with that… although it did give our team a little more flexibility.

I was recently explaining the assumptions I used to derive our internal corporate security, and realized that I’ve been using a structured assumptions framework that I haven’t ever put in writing (until now). Since all scenario planning is based on assumptions, and the trick is to pick the right assumptions, I formalized my approach in the shower the other night (an image that has likely scarred all of you for life). It consists of four components:

  1. Assumption
  2. Reasoning: The basis for the assumption.
  3. Indicators: Specific cues that indicate whether the assumption is accurate or if there’s a problem in that area.
  4. Controls: The security/recovery/safety controls to mitigate the issue.
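
If it helps to see the shape of this, here’s a minimal sketch of the framework as a simple data structure. It’s purely illustrative (my own shorthand, not a tool we actually run), using a condensed version of one of the examples below as the sample entry.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PlanningAssumption:
    """One entry in the assumptions framework described above."""
    assumption: str                                       # the assumption itself
    reasoning: str                                        # the basis for the assumption
    indicators: List[str] = field(default_factory=list)   # cues it's right or wrong
    controls: List[str] = field(default_factory=list)     # mitigations it drives


# Illustrative entry, condensed from one of the examples below:
website = PlanningAssumption(
    assumption="Our website will be hacked.",
    reasoning="We can't fully lock down the third party platform it's built on.",
    indicators=["Unusual activity within the site", "New administrative accounts"],
    controls=["Disable HTML rendering in comments", "Single-site passwords"],
)
```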

Here’s how I put it in practice when developing our security:

Assumption: Securosis in general, and myself specifically, are a visible target.

Reasoning: We are extremely visible and vocal in the security community, and as such are more than just a target of opportunity. We also have strong relationships within the vulnerability research community, where directed attacks to embarrass individuals are not uncommon. That said, we aren’t at the top of an attacker’s list – there is no financial incentive to attack us, nor does any of our work directly interfere with the income of cybercriminal organizations. While we deal with some non-public information, it isn’t particularly valuable in a financial context. Thus we are a target, but the motivation would be to embarrass us and disrupt our operations, not to generate income.

Indicators: A number of our industry friends have been targeted and successfully attacked. Last year one of my private conversations with one such victim was revealed as part of an attack. For this particular assumption, no further indicators are really needed.

Controls: This assumption doesn’t drive specific controls, but it does reinforce a general need to invest heavily in security to protect against a directed attack by someone willing to take the time to compromise me or the company. You’ll see how this impacts things with the other assumptions.


Assumption: While we are a target, we are not valuable enough to waste a serious zero-day exploit on.

Reasoning: A zero-day capable of compromising our infrastructure would be too financially valuable to waste on merely embarrassing a gaggle of analysts. This is true for our internal infrastructure, but not necessarily for our web site.

Indicators: If this assumption is wrong, it’s possible one of our outbound filtering layers will register unusual activity, or we will see odd activity from a server.

Controls: Outbound filtering is our top control here, and we’ve minimized our external surface area and compartmentalized things internally. The zero-day would probably have to target our individual desktops, or our mail server, since we don’t really have much else. Our web site is on a less common platform, and I’ll talk more about that in a second. There are other possible controls we could put in place (from DLP to HIPS), but unless we have an indication someone would burn a valuable exploit on us, they aren’t worth the cost.


Assumption: Our website will be hacked.

Reasoning: We do not have the resources to perform full code analysis and lockdown on the third party platform we built our site on. Our site is remotely co-hosted, which also opens up potential points of attack. It is the weakest link in our infrastructure, and the easiest point to attack short of developing some new zero-day against our mail server or desktops.

Indicators: Unusual activity within the site, or new administrative user accounts. We periodically review the back-end management infrastructure for indicators of an ongoing compromise, including both the file system and the content management system. For example, if HTML rendering in comments was suddenly turned on, that would be an indicator.
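
As a rough illustration of that periodic review (this is not our actual tooling, and the account names, paths, and baselines are made up), a check for unexpected administrative accounts and modified files could look something like this:

```python
import hashlib
from pathlib import Path

# Hypothetical baselines; in practice these would come from a trusted snapshot.
KNOWN_ADMINS = {"rich", "adrian"}                 # made-up account names
BASELINE_HASHES = {
    "index.php": "<sha256 from trusted snapshot>",
    "config.php": "<sha256 from trusted snapshot>",
}
SITE_ROOT = Path("/var/www/site")                 # made-up path


def unexpected_admins(current_admins: set) -> set:
    """New administrative accounts are one of our indicators."""
    return current_admins - KNOWN_ADMINS


def changed_files() -> list:
    """Compare file hashes against the baseline to spot tampering."""
    changed = []
    for name, expected in BASELINE_HASHES.items():
        digest = hashlib.sha256((SITE_ROOT / name).read_bytes()).hexdigest()
        if digest != expected:
            changed.append(name)
    return changed
```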

Controls: We deliberately chose a service provider and platform with better than average security records, and security controls not usually available for a co-hosted site. We’ve disabled all HTML rendering in comments/forum posts, and promote use of NoScript when visiting our site to reduce user exposure when it’s compromised. On our side, we mandate single-site passwords for all the staff, which are not reused anywhere else. The site is hosted separately from our other infrastructure. I encourage everyone to use a single-site browser that is locked down to only render content from our site (to avoid XSS/CSRF). I use two different layers to ensure I can access the site, and nothing but the site, from my dedicated browser. Thus, when someone finally pops our site, it shouldn’t provide a path into any other part of our infrastructure. Also, right now we don’t store sensitive information about any visitors on the site (no PII). When we do start offering for-pay products, we will use external credit card processing, pay for ongoing penetration testing, and remind our users to never reuse their site password anyplace else. We also have a multi-level backup scheme to minimize lost data when the site is finally hacked.
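
The single-site password control is really just “generate something random and never reuse it”; a trivial, illustrative way to do that with nothing but the standard library:

```python
import secrets

# A long, random, URL-safe password used for this one site and nowhere else.
# 24 bytes (roughly 32 characters) is an arbitrary illustrative length.
site_password = secrets.token_urlsafe(24)
print(site_password)
```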


Assumption: Our mail server is the most valuable target for an attacker.

Reasoning: Assuming our attacker is out to steal proprietary information or just embarrass us, our mail server is the best target (except for maybe my personal desktop). That’s where our sensitive client information is, and we pretty much give everything else away for free.

Indicators: Either a rise in attack activity on our mail server, or new outbound connections/accounts.

Controls: We have multiple layers of security on the mail server. It’s on an isolated network with nothing else on that network segment to compromise. This is the one area I don’t want to discuss in detail, but we have at least two filtering layers to get to the server (more than just a firewall), and outbound connection restrictions with a serious deny-all policy. Our mail server is locked up in my house (no remote admins, no other sites on the server that could be compromised to get to us), but not connected to my home network. The server itself is locked down pretty tight – we don’t even allow AV/anti-spam on the server since that could be a vector for attack (in other words, we minimize message processing). There’s even more, but despite what they say a little obscurity is sometimes good for security. If someone can get this server, they’ve fracking earned it.
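
Purely as a generic illustration of the “new outbound connections” indicator (this is not our actual filtering setup, which I’m deliberately not describing, and the port lists are assumptions), a host-level check using the third-party psutil library might look like this:

```python
import psutil  # third-party: pip install psutil; may need elevated privileges

LOCAL_SERVICE_PORTS = {25, 587}        # assumed ports we accept inbound mail on
EXPECTED_REMOTE_PORTS = {25, 53, 443}  # assumed ports we expect to call out to


def unexpected_outbound():
    """Flag established connections to remote ports outside the allowlist.

    Anything showing up here is an indicator worth investigating, in the
    spirit of a deny-all outbound policy.
    """
    suspicious = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.laddr and conn.laddr.port in LOCAL_SERVICE_PORTS:
            continue  # inbound connection to a service we run, not outbound
        if conn.raddr.port not in EXPECTED_REMOTE_PORTS:
            suspicious.append((conn.pid, conn.raddr.ip, conn.raddr.port))
    return suspicious


if __name__ == "__main__":
    for pid, ip, port in unexpected_outbound():
        print(f"Unexpected outbound connection: pid={pid} -> {ip}:{port}")
```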

This is already longer than I planned, but you can see the process. I’ve done the same thing for my day-to-day system and laptop, with a set of corresponding controls. Despite all this I’ll probably be hacked someday, but it will take a hack of a lot of time and effort, since I always assume I’m under attack and take precautions far above normal best practices. My goal is to make the effort to get to me high enough that, to succeed, someone will have to give up far more lucrative financial opportunities. Even bad guys need to feed their families.

Assumptions are good… as long as you understand the reasoning, define indicators to track if they are right or wrong over time, and use them to develop corresponding controls.
