Finally, it’s here: my first post! Although I doubt anyone has been holding their breath, I have had a much harder than anticipated time trying to nail down my first topic. This is probably due in part to the much larger and more focused audience at Securosis than I have ever written for in the past. That said, I’d like to thank Rich and Adrian for supporting me in this particular role and I hope to bring a different perspective to Securosis with increased frequency as I move forward.
Last week a situation came up that sparked a heated discussion with a colleague (I have a bad habit of forgetting that not everyone enjoys heated debate as much as I do). Actually, the argument only heated up when he mentioned that vulnerability scanning and penetration testing aren’t required to validate a security program. At that point I was thoroughly confused, because when I asked how he could measure the effectiveness of such a security program without those tools, he didn’t have a response. Another bad habit: I prefer debating with someone who actually justifies their positions.
My position is that if you can’t measure or test the effectiveness of your security, you can’t possibly have a functioning security program.
For example, let’s briefly use the Securosis “Building a Web Application Security Program” white paper as a reference. If I take the lifecycle outline (now please turn your PDFs to page 11, class), there’s no possible way I can fulfill the Secure Deployment step without using vulnerability assessment (VA) and pen testing to validate that our security controls are effective. Similarly, consider the current version of PCI DSS without any pen testing – again, you fail in multiple requirement areas. This is the point at which I start formulating a clearer perspective on why we see security failing so frequently in certain organizations.
I believe one of the major reasons we still see this disconnect is that many people have confused compliance, frameworks, and checklists with what’s needed to keep their organizations secure. As a consultant, I see it all the time in my professional engagements. It’s like taking the first draft blueprints for a car, building said car, and assuming everything will work without any engineering, functional, or other tests. What’s interesting is that our compliance requirements are evolving to reflect, and close, this disconnect.
Here’s my thought: year over year, compliance is becoming more challenging from a technical perspective. The days of paper-only compliance are dead. Those who have already been slapped in the face with high-visibility breach incidents can probably attest (but never will) that policy said one thing and reality said another. After all, they were compliant – it can’t be their fault that they were breached after complying with the letter of the rules.
Let’s make a clear distinction in how security is viewed from a high level – one that makes sense, at least to me – by defining “paper security” versus “realistic security”. The colleague I was talking with believed that all controls and processes on paper would somehow magically roll over into the digital boundaries of infrastructure as he defined them. The problem is: how can anyone write those measures if there isn’t any inherent technology mapping during development of the policies? Likewise, how can anyone validate a measure’s existence and future validity without some level of testing? That is paper security, and it is exactly the opposite of realistic security. Realistic security can only be created by mapping technology controls and policies together within the security program, and that’s why we see both the technical and testing requirements growing in the various regulations.
To prove the point that technical requirements in compliance are only getting more well defined, I did some quick spot checking between DSS 1.1 and 1.2.1. Take a quick look at a few of the technically specific things expanded in 1.2.1:
- 1.3.6 states: ‘…run a port scanner on all TCP ports with “syn reset” or “syn ack” bits set’ – new as of 1.2.
- 6.5.10 states: “Failure to restrict URL access (Consistently enforce access control in presentation layer and business logic for all URLs.)” – new as of 1.2.
- 11.1.b states: “If a wireless IDS/IPS is implemented, verify the configuration will generate alerts to personnel” – new as of 1.2.
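To make the 1.3.6-style checking concrete, here’s a minimal sketch of an external port probe in Python. Note this is a plain connect() scan rather than the crafted SYN packets the DSS language describes (raw-socket SYN scans are usually left to a tool like nmap); the host and port range are illustrative only.

```python
# Minimal sketch of the kind of check DSS 1.3.6 calls for: probing TCP
# ports from outside the boundary. Uses a simple connect() scan for
# portability; a true "syn" scan needs raw sockets (e.g. nmap -sS).
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scanning localhost purely as an illustration.
    print(scan_ports("127.0.0.1", range(1, 1025)))
```

The point isn’t the tool itself – it’s that the requirement is now specific enough to be executed and verified, not just written down.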
Anyone can see the changes between 1.1 and 1.2.1 are relatively minor. But think about how, as compliance matures, both its scope and specificity increase. This is why it seems obvious that technical requirements, as well as direct mappings to frameworks and models for security development, will continue to be added and expanded in future revisions of compliance regulations.
This, my friends, is on the track of what “realistic security” is to me. It can succinctly be defined as a never ending Test Driven Development (TDD) methodology applied to a security posture: if it is written in your policy then you should be able to test and verify it; and if you can’t, don’t, or fail during testing, then you need to address it. Rinse, wash, and repeat. Can you honestly say those reams of printed policy are what you have in place today? C’mon – get real(istic).
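The TDD analogy can be sketched in code. This is a hypothetical illustration (the policy statements and config keys are mine, not drawn from any standard): each written policy line is paired with an executable check, and an audit run reports anything the live configuration fails.

```python
# Sketch of "policy as tests": every written policy statement gets an
# executable check. All names and policies here are hypothetical.

POLICY_CHECKS = {}

def policy(statement):
    """Decorator tying a check function to a written policy statement."""
    def register(check):
        POLICY_CHECKS[statement] = check
        return check
    return register

@policy("Password minimum length is 12 characters")
def check_password_length(config):
    return config.get("min_password_length", 0) >= 12

@policy("Sessions time out within 15 minutes")
def check_session_timeout(config):
    return 0 < config.get("session_timeout_min", 0) <= 15

def audit(config):
    """Return the policy statements the live config fails to satisfy."""
    return [stmt for stmt, check in POLICY_CHECKS.items()
            if not check(config)]

# A live configuration that has drifted from policy:
live = {"min_password_length": 8, "session_timeout_min": 15}
print(audit(live))  # the failing statement(s) – address, rinse, repeat
```

Anything in the policy that can’t be expressed as a check like this is, by the definition above, paper security.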
5 Replies to “Realistic Security”
Would some of these lines of thinking lead down the road of making “security” more technical and less business?
I know it’s necessary (although still very fad-like) that we talk about involving the “business” in security, but really… “involving the business” tends to end up in the same boat as paper compliance and policy: a reality disconnect.
The one exception might be making sure “security” (or IT in general) is at least knowing what the business wants and needs to do, and helping support that.
(Disclaimer: this is not me arguing against security awareness and individual responsibility to not contribute to the problems, but I sometimes want to lash back a bit at the idea that the business will somehow improve and save security while getting away from the technical realms. I think as compliance matures, security will [and already is] trending back into the server rooms and SOC/NOC.)
David,
My point re: Auditors was narrow in scope and only applied to this:
>>
From the perspective of the colleague I was talking with, he believed that all controls and processes on paper would somehow magically roll over into the digital boundaries of infrastructure as he defined them.
<<

Auditors are there to make sure controls on paper become operating practices. You are right: they aren’t good at knowing whether the control actually does a good thing; that is our job as security practitioners. What they are good at is pointing to operators and saying “they aren’t actually doing what the policy says they should”. As such, an auditor produces far more accurate information, as long as the control language is well written. It is far easier to read that language, gather the specified evidence, and examine it against expectations than it is to find (potentially) unknown security exposures in software or another equally complex system.

As for your disagreement with my final point, I accept your clarification. =)
@ds
I think the point of “Anyone who thinks that IT operations or any other part of a business will follow the rules as they are written is dangerously misinformed” is mostly on par – but only for particular things that are not well defined or understood in comparison to other, more mature rules/regulations. The implied consequence of breaking most regulations is that absolutely nothing happens 99% of the time. The statement may be too broad, however, because there are rules in other areas of business that are taken seriously. That may sound like I’m straying from the original point, but the idea is that as the noose tightens in terms of specifics for the security industry, some of those written rules will have to be taken more seriously as time marches on. It’s just a matter of time and trial and error. Sure, if there’s no incentive to fix, why do it? But when there’s an option to go directly to jail – don’t pass Go, don’t collect $200, and in fact pay up – that’s an incentive to, well, do the right thing.
I’m curious about the auditor point, though, only because the auditors I’ve had experience with are generally not all that security savvy. So, to me, there is little motivation to take an auditor’s assessment as a reflection of actual security posture. If that auditor is external, then he/she needs to learn all systems and architecture as they apply to the one area of focus. Repeat work by different auditors will likely result in rework and a price premium being paid. Internal auditors may be more useful in that sense, but if they’re that good at security analysis, they’ll probably find better opportunities not being auditors. Finally, at some level, auditors end up being a catch-22 altogether: if they aren’t good, they’ll miss the problem areas yet give you a warm fuzzy – so do I need an auditor for the auditor for the auditor at that point?
As for pen testing / vulnerability scanning, I never meant for it to be taken as an end-all-be-all solution; the first paragraph that mentions it lacked context. The main point of my argument is more clearly defined by the statement “My position is that if you can’t measure or test the effectiveness of your security, you can’t possibly have a functioning security program”. I also used the examples to show that pen testing and scanning were called out as a phase in the web application security program. The other thing is that I’m not using either to validate the “security program” (which is just my way of speaking generally); rather, the program itself is validating something else.
To address your point about pen testing being “not super great” I think if we go back to the point called out earlier about hiring an auditor we can probably easily interchange tester with auditor in that paragraph equally well—so at that point which one can produce more viable and accurate information?
Finally, I will respectfully disagree with your final thoughts. If I apply the idea of “realistic security” through TDD, I get exactly what you’re looking for (pick the outcome, decide how to prove it, pick measures that prove it, etc.). That doesn’t have to include any of the specific testing I called out. Those tests do apply in particular instances, but unfortunately the first paragraph gave the impression that my position always requires pen testing and scanning.
@Ben – read above. 🙂 And, yes, I’d love to crack the nut to measure effectiveness. But at the end of the day if I don’t test I can’t measure at all. So – testing (clarification: 10k foot “testing”), of sorts, has to happen.
I disagree, on the simple basis that David here implies that scans and assessments are the only, or even the best, way to get data on the enterprise. While I agree that these are important, one can achieve similar objectives through configuration management and logging & monitoring. Do you *have* to run a scanner or pentest to gather data? Not at all. One must be mindful not to get into these absolutist corners where we end up looking like narrow-minded twits; it only hurts our credibility as professionals seeking to help the business along. As to measuring effectiveness, this is not a nut that’s been well cracked yet, and I would submit that one needs to again be careful making assertions about an area that is at best emerging rather than well defined.
>>
From the perspective of the colleague I was talking with, he believed that all controls and processes on paper would somehow magically roll over into the digital boundaries of infrastructure as he defined them. The problem is: how can anyone write those measures if there isn’t any inherent technology mapping during development of the policies?
<<

The problem really is that you were talking to a person unqualified to be in a security management position. It is as simple as that. There is no underlying flaw elsewhere, nothing to correct or adjust, except this person’s job role.

Anyone who thinks that IT operations or any other part of a business will follow the rules as they are written is dangerously misinformed. People do what they are incented to do, and I’ve yet to work for a company where people were incented to “cease operations unless they can be conducted securely”. So people bend and break rules. The above is exactly why we have auditors. They validate that we do as we say. If your auditors aren’t doing this, fire them and get new ones. If you don’t have auditors, go find some. They are handy.

As to your position that you need to do vulnerability scanning and pen testing, this is patently wrong. Neither is necessary to validate a “security program” (whatever a “security program” is… this isn’t a one-size world). Can I just as easily examine the variance from a desired configuration baseline to determine the effectiveness of systems management? Of course. I suppose you could play semantic games and call that vulnerability scanning, and you’d probably be right, but take it one step further and say I don’t examine this, I merely auto-correct things as I find them wrong (didn’t NAC promise this?). Is that still vulnerability scanning? Is it equally, or maybe even more, effective? In a (big) nutshell, vulnerability scanning is only part of the equation, and it becomes increasingly irrelevant when faced with the approach that says to identify and remediate in one pass, as opposed to a multi-party process of assess, report, distribute, fix, re-check.

Pen testing is also not super great, at least on the network/systems side. All it signifies is that your tester couldn’t break in to your system. If your tester is successful, there is some value, but who is to say they were comprehensive?
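The baseline-variance approach described here can be sketched quickly. This is a hypothetical illustration (the setting names are made up): compare a system’s actual settings against a desired baseline, report the drift, or auto-correct it in the same pass.

```python
# Sketch of configuration-baseline variance checking: report drift from
# a desired baseline, or remediate it in one pass. Settings are made up.

BASELINE = {
    "ssh_root_login": "no",
    "password_max_age_days": 90,
    "telnet_enabled": False,
}

def find_variance(actual, baseline=BASELINE):
    """Return {setting: (actual, desired)} for every drifted setting."""
    return {k: (actual.get(k), v)
            for k, v in baseline.items() if actual.get(k) != v}

def auto_correct(actual, baseline=BASELINE):
    """Identify and remediate in one pass, as the comment suggests."""
    corrected = dict(actual)
    corrected.update(baseline)
    return corrected

system = {"ssh_root_login": "yes", "password_max_age_days": 90,
          "telnet_enabled": False}
print(find_variance(system))  # {'ssh_root_login': ('yes', 'no')}
```

Whether you call this vulnerability scanning or systems management, the measurable artifact is the same: the delta between the written baseline and the running system.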
I must admit that I do see some value in application pen testing, due to the wide diversity in app spaces – just not in the old “hack my DMZ” testing that so many like. In any case, your analysis is bottom-up, which I dislike. Pick the outcome, decide how it could be proven as achieved, pick measures that give that proof, etc. Going your way (vulnerability scanning produces measures, therefore it is necessary), absent a material outcome, isn’t awesome. (I believe I have succeeded in having a comment longer than the post. Life goal #287 met.)