The other day it hit me: process is not that important to secure code development. Waterfall? Doesn’t matter. Agile process? Secondary. They only frame the techniques that create success. Saying a process helps create secure code is like saying a cattle chute tames a wild Brahma bull. Guidelines, steps, and procedures do little to alter code security – they only change which code gets worked on. To motivate developers to improve security, try less carrot and more stick. Heck, process is not even a carrot – it’s more like those nylon dividers at the airport that keep polite people from pushing and shoving to the front of the line. No, if you want developers to write secure code, use peer pressure.
Peer pressure is the most effective technique we have for producing secure code. That’s it. Use it every chance you get. It’s the right thing to do.
Don’t believe me? You think pair coding is about cross-training? Please. It’s about peer pressure. Co-workers will realize you suck at coding and publicly ridicule you for failing to validate input variables, so you up your game and double-check what you are supposed to deliver. Quality assurance teams point out the places where you screwed up the code, and bug counts come up during your raise review. Peer pressure. No developer wants his or her API banned because hackers trampled over it like fans at a Who concert.
If you have taken management classes, you have heard about the Hawthorne Effect, discovered through factory studies in the 1920s and ’30s. In an attempt to increase worker output, researchers adjusted working conditions, specifically looking for the lighting level that produced the highest productivity. What they found, however, was that productivity had nothing to do with the light level per se – it went up whenever the light level changed. It was a study, so supervisors paid attention whenever the light changed, to monitor the results. When the workers knew they were being watched, their productivity went up. Peer pressure.
Why do you think we have daily scrum meetings? We do it so you remember what you are supposed to be working on, and we do it in front of all your peers so you feel the shame of falling behind. That’s why we ask everyone in the room to participate. These little sessions are especially helpful at waking up those 20-something team members who were up all night partying with their ‘bros’, or drinking Guinness and watching Manchester United till the wee hours of the morning. You know who you are.
We have ‘Sprints’ for the same reason universities have exams: to get you to do the coursework. It’s your opportunity to say, “Oh, S$^)#, I forgot to read those last 8 chapters,” and start cramming for the exam. Only at work, you start cramming as the deadline looms. 30-day sprints just provide more opportunities to prod developers with the stick than, say, 180-day waterfall cycles.
I think Kent Beck had it wrong when he said that unacknowledged fear is the root cause of all software project failures. I think fear of the wrong things causes project failures. We specify priorities so we understand the bare minimum we are responsible for, and we work like crazy to get the basics done. Specify security as the primary requirement, verify that people are doing their jobs, and you get results.
External code review? Peer pressure. Quality assurance? Peer pressure. Automated build failures? Peer pressure. The Velocity concept? Peer pressure. Testers fuzzing your code? Still peer pressure. Sure, creating stories, checklists, milestones, and threat analyses sets direction – but none of those is a driver. Process frames the techniques we use, and the techniques alter behavior. The techniques that promote peer pressure – which manifests as fear or pride – are the most effective drivers we have.
Disagree? Tell me why.
7 Replies to “FireStarter: For Secure Code, Process Is a Placebo—It’s All about Peer Pressure”
@Marisa – Rugged comes to mind, but what has really been rattling around in my brain for several weeks is the results of your survey. An increasingly large percentage of development life-cycles are ad hoc processes. The more I dig, the more I see mix-n-match techniques to account for the tastes and idiosyncrasies of each organization, glued together with whatever process they need.
-Adrian
*cough*Rugged*cough*
What’s that? Oh, yeah this does sound a lot like Rugged Software!
When the SDL came along, it was a huge breakthrough to say “we can’t just solve the app vuln problem with technology, we need to consider process too.” Now we’re having another breakthrough, and saying “we can’t solve this without people first.”
When I first read the Extreme Programming books, I never viewed what was being offered as a process. To me it was a collection of techniques to be mixed and matched for each organization. Some could be very effective, but which to choose varied from project to project and team to team. Whatever you chose from the smorgasbord of options could be wrapped in a suitable process of your choice, but the process was secondary. My goal with the post is to take the emphasis off process and look at it in terms of techniques that encourage, motivate, and promote the right behaviors. Developers are unusual in that many are motivated by pride and quality of workmanship – but unfortunately, better than half are only motivated by the negative.
Techniques can mitigate distractions and pacing problems by removing some of the arbitrary deadlines, but they do not address management buy-in and prioritization. The methods developers employ do not alter the big-picture items.
@ Dre – For the record, I have never seen pair programming work. But that’s a different discussion for a different time.
-Adrian
Andre:
That is a tricky answer, likely due to my poor wording. I don’t think you can “prove an app is secure”. I think you can only prove that an application has addressed the vulnerabilities you set out to address and any security bugs you have found. This is why things like developer training, quality gates, and understanding the relevant risks are critical. In the same way that I need the context of a functional spec to “make it work”, I need some type of spec to “make it secure”.
If a business owner gives me his newest idea on the back of a napkin and tells me to go implement it… it’s not really a priority. If it were (even the inspired bar idea), he/she would pull the team together, define what it means, figure out a strategy of approach, talk with branding, figure out how it fits into the overall application, re-prioritize existing work, come up with a new development schedule, gather actual requirements, figure out testing strategies for it, etc.
To place the burden of the aforementioned on the developer is negligent, which is why it’s equally inappropriate to assume the developer can make an application “secure” without a mutual definition of what that actually means and where to invest. If people at the highest levels don’t care enough to do that work, then everything underneath them will be negatively impacted.
Andrew: Well stated. One question, though: how do you prove an app is secure? How do you make security a priority, especially in cases where it clearly should be (e.g. active fraud, current and ongoing data breaches, being out of compliance)? Sometimes people will just accept the risk until it becomes a problem. Then what do you do? Perhaps then it is time for sticks, and I would be in more agreement with Adrian.
A lot of this depends on the culture of the overall organization, the dev team, and the specific project.
Adrian,
I’d like to agree with you on this, but I just think it places the burden in the wrong spot. The reality of security and quality in programming has nothing to do with carrots or sticks; it has to do with priority. I mean, let’s face it: if the boss doesn’t REALLY care, neither will the developer(s).
Every project I have EVER been on (I’ve been programming for 11 years now) has placed a greater priority on getting things done at breakneck speed and on fancy widgets that they can sell. It’s hard enough to make applications work at all under those conditions… let alone make them work reliably and securely.
Developers have had to, in response to the aforementioned, come up with processes (agile) and techniques (unit testing) as a way to stay sane in this industry. Adding any other weight to the team has to be done only at the behest, and with the focus, of the project owner (boss).
In other words, if the boss actually required me to prove the app was secure (or at least that we attempted to secure it) on every iteration… it would be. Just as he/she has me prove I made the xyz widget work, security has to be an explicit deliverable.
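To make that concrete, here’s a minimal sketch of what I mean by an explicit deliverable: a build gate that runs a security scanner and fails the iteration when findings appear. The Bandit scanner and the src/ layout are illustrative assumptions, not a prescription – substitute whatever scanner your shop trusts.

```python
# A minimal build gate: run a static security scan and fail the
# iteration if it reports findings. Assumes the Bandit scanner is
# installed (pip install bandit) and the code lives under src/ --
# both are illustrative assumptions.
import subprocess
import sys

def security_gate(source_dir: str = "src") -> int:
    # Bandit exits non-zero when it finds issues at or above the
    # chosen severity, which is exactly what a gate needs.
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-ll"],  # -ll: medium severity and up
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stdout)
        print("Security gate FAILED: fix the findings before shipping.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(security_gate())
```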
Just some thoughts,
-A
I’m not a huge fan of pair programming or daily scrum meetings to begin with. I do like pair testing.
I don’t think that pair programming has shown value at reducing the bug count (at least not according to McConnell 470 or other studies I’ve seen).
Giving developers a feeling of accomplishment (any positive “carrots”) appears to work best in my world, so I really take the opposite view here. While expensive in terms of startup cost, a company-wide bug hunt can be very worthwhile – especially since you can get a lot of people from different departments (e.g. marketing, SQA, security!) in one room (or virtual room/forum), award prizes for different activities (marketing pays for the things they want to see in one round of testing, security pays for vulns in another round, etc.), and get developers working with new people whose focus differs from theirs.
Sprints (three weeks is more common these days) also give developers a feeling of accomplishment. The goal of a sprint is to be able to build a demo of SOMETHING at the end and to mark the backlog appropriately (turning some sprints into backlog-only activities if necessary). Then the next sprint can be refactoring, where the design can be changed. These are all positive activities that reward developers; I don’t see peer pressure at work here at all. Agile allows for ease in prototyping, modeling, re-design, and super easy refactoring – these are some of its biggest wins, both for development and for decreasing the bug count (again, see McConnell 470).
Security is not something that anyone wants hanging over their head. I think most of the time I’m almost in violent disagreement with the things you write on this blog, and this time is no different (note: this post also appears to contradict everything you’ve said before). Process is everything! Well, I should rephrase that: some “hygiene” is absolutely necessary, but that’s the whole point of Agile, Scrum, XP, etc.
The hygiene necessary to make security successful is to do threat modeling at each elaboration phase (or right before those phases), and to perform exploratory testing by VALIDATING A LIST OF APPSEC CONTROLS (i.e. OWASP ASVS) after every demo is ready, or whenever a set of new features (or a major new feature) is available for integration or system testing. It goes a little farther than that – test cases are only 50 percent of the work that needs to be done. The other 50 percent of exploratory testing time should be spent “exploring” the application, looking for bugs that monkey-testing will not pick up. In summary, these activities are really defect-based testing (validating specific controls around specific bugs such as injections) and experience-based testing (using the testers’ – they should be pair teams – ability to do whole-program understanding, build data- and control-flow puzzles in their heads, and extract the relevant offensive bits).
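As a concrete illustration of the defect-based half, here is a minimal sketch of a test that validates an injection control by replaying known-hostile payloads. The /search endpoint, base URL, and payload list are illustrative assumptions – point it at whatever inputs the app under test actually exposes.

```python
# Defect-based testing sketch: replay known injection payloads against
# a hypothetical /search endpoint and assert the control holds.
# BASE_URL and the endpoint are illustrative assumptions.
import pytest
import requests  # pip install requests

BASE_URL = "http://localhost:8080"  # the app under test (assumption)

INJECTION_PAYLOADS = [
    "' OR '1'='1",                # classic SQL injection probe
    "<script>alert(1)</script>",  # reflected XSS probe
    "../../etc/passwd",           # path traversal probe
]

@pytest.mark.parametrize("payload", INJECTION_PAYLOADS)
def test_search_neutralizes_hostile_input(payload):
    resp = requests.get(f"{BASE_URL}/search", params={"q": payload})
    # The control under validation: hostile input is rejected or
    # neutralized -- never a server error, never echoed back verbatim.
    assert resp.status_code in (200, 400)
    assert payload not in resp.text
```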
Ideally, pair security testing would be done with a software engineer and an appsec engineer. Better still, I’ve always wanted a single developer on each project to be responsible for application security and nothing else – a full-time job with performance tied to it. I dare you to use the stick method on this person; you probably won’t get the reaction you want. Appsec is hard, and right now most practitioners are consultants (with very few FTEs) making a lot of money. They can always go somewhere they are better liked. You have to constantly reward these people.
Another area I’m interested in exploring is combining integration/system testing – or better, intranet usability testing (i.e. nngroup.com) – with appsec assessments. Many of these activities can at the very least inform an appsec assessment. Stick a packet capture device (or a functional equivalent, such as a pluggable proxy or, even better, a passive appsec analysis proxy) between the tester (or the test harness) and the target app, then give the results to the appsec team. There are plenty of small tools that make this easy to integrate. If a bug hunt or exploratory test can’t be had, at least you can retain the data during those cycles for the appsec team to review later.
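For illustration, here is a minimal sketch of that pluggable-proxy idea: a pass-through TCP relay that sits between the tester and the app and writes every byte of the conversation to a capture file for the appsec team to review later. The ports and upstream address are assumptions; a real passive analysis proxy would obviously do far more.

```python
# Pass-through capture proxy sketch: point the tester's browser or
# test harness at LISTEN_PORT, relay traffic to the app under test,
# and log both directions for later appsec review. All addresses are
# illustrative assumptions.
import socket
import threading

LISTEN_PORT = 8888             # where the tester connects (assumption)
UPSTREAM = ("127.0.0.1", 80)   # the application under test (assumption)
CAPTURE_FILE = "capture.log"
log_lock = threading.Lock()

def pipe(src: socket.socket, dst: socket.socket, tag: str) -> None:
    # Relay one direction of the conversation, logging as we go.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            with log_lock, open(CAPTURE_FILE, "ab") as f:
                f.write(f"--- {tag} ---\n".encode() + data + b"\n")
            dst.sendall(data)
    except OSError:
        pass  # the other side went away; shut down quietly
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pipe, args=(client, upstream, "request"), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client, "response"), daemon=True).start()

def main() -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen(5)
    while True:
        client, _addr = server.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()

if __name__ == "__main__":
    main()
```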