My 2015 Personal Security Guiding Principles and the New Rand Report

By Rich

In 2009, I published My Personal Security Guiding Principles. They hold up well, but my thinking has evolved over six years. Some of that is due to personal maturing, and a lot is due to massive changes in our industry.

It’s time for an update. The motivation today comes thanks to Juniper and Rand. I want to start with my update, so I will cover the report afterwards.

Here is my 2015 version:

  1. Don’t expect human behavior to change. Ever.
  2. Simple doesn’t scale.
  3. Only economics really changes security.
  4. You cannot eliminate all vulnerabilities.
  5. You are breached. Right now.

In 2009 they were:

  1. Don’t expect human behavior to change. Ever.
  2. You cannot survive with defense alone.
  3. Not all threats are equal, and all checklists are wrong.
  4. You cannot eliminate all vulnerabilities.
  5. You will be breached.

The big changes are dropping numbers 2 and 3. I think they still hold true, and they would now come in at 6 and 7 if I weren’t trying to keep to 5 total. The other big change is #5, which was “You will be breached” and is now “You are breached.”

Why the changes? I have always felt economics is what really matters in inciting security change, and we have more real-world examples showing that it’s actually possible. Take a look at Apple’s iOS security, Amazon Web Services, Google, and Microsoft (especially Windows). In each case we see economic drivers creating very secure platforms and services, and keeping them there.

Want to fix security in your organization? Make business units and developers pay the costs of breaches – don’t pay for them out of central budget. Or at least share some liability.

As for simple… I’m beyond tired of hearing how “If company X just did Y basic security thing, they wouldn’t get breached that particular way this particular time.” Nothing is simple at scale; not even the most basic security controls. You want secure? Lock things down and compartmentalize to the nth degree, and treat each segment like its own little criminal cell. It’s expensive, but it keeps groups of small things manageable. For a while.

Lastly, let’s face it, you are breached. Assume the bad guys are already behind your defenses and then get to work. Like one client I have, who treats their entire employee network as hostile, and makes them all VPN in with MFA to connect to anything.

Motivated by Rand

The impetus for finally writing this up is a Rand report sponsored by Juniper. I still haven’t gotten through the entire thing, but it reads like a legitimate critical analysis of our entire industry and profession from the outside, not the usual introspection or vendor-driven FUD.

Some choice quotes from the summary:

  • Customers look to extant tools for solutions even though they do not necessarily know what they need and are certain no magic wand exists.
  • When given more money for cybersecurity, a majority of CISOs choose human-centric solutions.
  • CISOs want information on the motives and methods of specific attackers, but there is no consensus on how such information could be used.
  • Current cyberinsurance offerings are often seen as more hassle than benefit, useful in only specific scenarios, and providing little return.
  • The concept of active defense has multiple meanings, no standard definition, and evokes little enthusiasm.
  • A cyberattack’s effect on reputation (rather than more-direct costs) is the biggest cause of concern for CISOs. The actual intellectual property or data that might be affected matters less than the fact that any intellectual property or data are at risk.
  • In general, loss estimation processes are not particularly comprehensive.
  • The ability to understand and articulate an organization’s risk arising from network penetrations in a standard and consistent manner does not exist and will not exist for a long time.

Most metrics? Crap. Loss metrics? Crap. Risk-based approaches? All talk. Tools? No one knows if they work. Cyberinsurance? Scam.

Overall conclusion? A marginally functional shitshow.

Those are my words. I’ve used them a lot over the years, but this report lays it out cleanly and clearly. It isn’t that we are doing everything wrong – far from it – but we are stuck in an endless cycle of blocking and tackling, and nothing will really change until we take a step back.

Personally I am quite hopeful. We have seen significant progress over the past decade, and I feel like we are at an inflection point for change and improvement.


To Rebecca and Anton—great commentary.

To avoid the Hamster Wheel of Pain as well as problems of scalability, we must adopt cyber common operating models. These models should sufficiently cover each common operating picture based on TComs (threat communities). Then, you match threat and risk hunting pace to the TCom pace.

The basic premise of this approach is that the systems (also vis-à-vis the networks and apps, but I’m talking more about dynamical socio-technical systems that integrate human, cyber, and physical elements) must be challenged in the same way that the TComs challenge them. However, the target for the concept of operations is the TCom directly, closing out the continuous nature of the model when the TCom realizes that the COP domain is unsustainable. The best part is that it doesn’t have to actually be unsustainable if deception is utilized.

Right now, this model is flipped in favor of the espionage and criminal actors. We must be careful not to allow it to be changed to favor the individual, though, as it may create an unsustainable defense against lone-wolf terrorism. I am particularly concerned about this for the kinetic cyber domain, especially 5-10 years from now, which is really sooner than later.

In my mind, mobile (RAND’s BYOD) has already been replaced by machine learning, especially natural language processing. To me, IoT is really IoE today, but this is tacit. However, I think RAND’s whole model is tacit—I can’t see their model so I ultimately don’t trust it, especially with the unaddressed issues regarding nomenclature and alternative futures analysis. Although I’m sure RAND has already planned for this, as they wrote the book on what to do with bad models. This proposed RAND model appears to almost give up on itself inherently—the discussion around training should likely be around recruitment instead, so that the model itself can be used for training.

By Andre Gironda

>Simple doesn’t scale.

Somehow I am strangely compelled to argue with this… First, if simple [practice] does not scale, complex practice definitely, positively does not scale. So, no practice scales. Thus, either we are fucked - or “full automation of everything” (so, no practice). Given that the latter is totally unrealistic [not on legacy systems, maybe in some devops utopia it is], what remains… we are fucked? :-)

By Anton Chuvakin

What’s weird is that the blocking and tackling phenomenon (and Rand’s notion of measures/countermeasures) is totally self-perpetuating. We’re bringing it on ourselves. So, there needs to be some serious rethinking of how we even look at the problem set.

By Rebecca

No surprise that we disagree on nearly every point you make, as elaborate and eloquent as you make each sound.

The reason why cyberinsurance has less benefit is directly derived from your point about reputation, i.e., cyberinsurance classically cannot cover brand or reputation damage. However, some insurance companies are working on that problem, along with efforts in subrogation, privacy-event potential outcome ranges, data-driven stochastic modeling platforms based on damage-valuation standards and historical losses, and so on.

Direct costs do matter, though, as in the case of failed denial-of-service attempts. What you have there is a lot of money spent on prevention and response when no actual nightmare scenario occurs. Did you reduce losses or not?

Loss estimation processes? Try PASTA (i.e., see the book, “Risk Centric Threat Modeling: Process for Attack Simulation and Threat Analysis”).

Articulating risk from a network penetration consistently? Try FAIR (i.e., see the book, “Measuring and Managing Information Risk”).

Agree that our tools suck, but our humans also do (especially the non-existent humans we can’t find to hire). I don’t agree, though, that technical attributions (especially motives and methods) aren’t useful; they are ultimately perhaps the most useful pieces of collected intelligence on adversaries (e.g., if you find a threat actor on a carding forum trying to sell an exploit kit, it could lead you to the buyers of said exploit kit, etc.).

Finally, I do agree that metrics suck (models reflect reality and metrics only do when they are econometrics—and even then they don’t always)—and I did enjoy your intro. Keep writing along these lines, though, Rich. You are on the right track to keeping the world informed on what is this cyber, why, and what do we do about it.

By Andre Gironda
