I always read Gary McGraw’s research on BSIMM. He posts plenty of very interesting data there, and we generally have so little good intelligence on secure code development that these reports are refreshing. His most recent post with Sammy Migues on Driving Efficiency and Effectiveness in Software Security raises some interesting questions, especially around the use of pen testing. The questions of where and how to best deploy resources are questions every development team has, and I enjoyed his entire analysis of the results of different methods of resource allocation.

Still, I have trouble relating to a lot of Gary’s research, as the BSIMM study focused on firms that have resources far in excess of anything I have ever seen. I come from a different world. Yeah, I have programmed at large corporations, but the teams were small and isolated from one another. With the exception of Oracle, budgets for tools and training were just a step above non-existent. Smaller firms I worked for did not send people to training – HR hired someone with the skills we needed and let someone else go. Brutal, but true. So while I love the data Gary provides, it’s so foreign that I have trouble dissecting the findings and putting them to practical use. That’s my way of saying it does not help me in my day job.

There is a disconnect: I don’t get asked questions about what percentage of the IT budget goes to software security initiatives. That’s both because the organizations I speak with run software development as a separate department from IT, and because the expenditures for security-related testing, tools, development manpower, training, and management software are embedded deeply enough within the development process that it’s not easy to separate generic development costs from security costs.

I can’t frame the question of efficiency in the same way Gary and Sammy do. Nobody asks what their governance policy should be. They ask: What tools should I use to track development processes? Within those tools, what metrics are available and meaningful? The entire discussion is a granular, pragmatic set of questions around collecting basic data points.

The programmers I speak with don’t bundle SDL touchpoints this way, and their efforts wouldn’t qualify as balanced. They ask “of design review, code review, pen testing, assessment, and fuzzing – which two do I need most?” 800-developer buckets? 60, heck even 30, BSIMM activities? Not even close.

Even applying a capability maturity model to code development is on the fringe. Mainly that’s because the firms/groups I worked in were too small to leverage a model like BSIMM – they would have collapsed under the weight of the process itself. I talk to a few large firms on a semi-regular basis, and plenty of small programming teams, and using BSIMM never comes up. Now that I am on the other side of the fence as an analyst, and I speak with a wider variety of firms, BSIMM remains an IT mindset I don’t encounter among software development teams.

So I want to pose this question to the developers out there: Is BSIMM helpful? Has BSIMM altered the way you build secure code? Do you find the maturity model process or the metrics helpful in your situation? Are you able to pull out data relevant to your processes, or are the base assumptions too far out of line with your situation? If you answered ‘Yes’ to any of these questions, were you part of the study? I think the questions being asked are spot on – but they are framed in a context that is inaccessible or irrelevant for the majority of developers.
