I always read Gary McGraw’s research on BSIMM. He posts plenty of very interesting data there, and we generally have so little good intelligence on secure code development that these reports are refreshing. His most recent post with Sammy Migues on Driving Efficiency and Effectiveness in Software Security raises some interesting questions, especially around the use of pen testing. The questions of where and how to best deploy resources are questions every development team has, and I enjoyed his entire analysis of the results of different methods of resource allocation.
Still, I have trouble relating to a lot of Gary’s research, as the BSIMM study focused on firms that have resources far in excess of anything I have ever seen. I come from a different world. Yeah, I have programmed at large corporations, but the teams were small and isolated from one another. With the exception of Oracle, budgets for tools and training were just a step above non-existent. Smaller firms I worked for did not send people to training – HR hired someone with the skills we needed and let someone else go. Brutal, but true. So while I love the data Gary provides, it’s so foreign that I have trouble dissecting the findings and putting them to practical use. That’s my way of saying it does not help me in my day job.
There is a disconnect: I don’t get asked what percentage of the IT budget goes to software security initiatives. That’s partly because the organizations I speak with run software development as a separate department from IT, and partly because the expenditure for security-related testing, tools, development manpower, training, and management software is embedded deeply enough within the development process that it’s not easy to separate generic development costs from security costs.
I can’t frame the question of efficiency in the same way Gary and Sammy do. Nobody asks what their governance policy should be. They ask: What tools should I use to track development processes? Within those tools, what metrics are available and meaningful? The entire discussion is a granular, pragmatic set of questions around collecting basic data points.
The programmers I speak with don’t bundle SDL touchpoints this way, and their programs don’t qualify as balanced. They ask, “Of design review, code review, pen testing, assessment, and fuzzing – which two do I need most?” 800 developer buckets? 60, heck even 30, BSIMM activities? Not even close.
Even applying a capability maturity model to code development is on the fringe, mainly because the firms and groups I worked in were too small to leverage a model like BSIMM – they would have collapsed under the weight of the process itself. I talk to a few large firms on a semi-regular basis, and plenty of small programming teams, and BSIMM never comes up. Now that I am on the other side of the fence as an analyst, speaking with a wider variety of firms, BSIMM is an IT mindset I don’t encounter with software development teams.
So I want to pose this question to the developers out there: Is BSIMM helpful? Has BSIMM altered the way you build secure code? Do you find the maturity model process or the metrics helpful in your situation? Are you able to pull out data relevant to your processes, or are the base assumptions too far out of line with your situation? If you answered ‘Yes’ to any of these questions, were you part of the study? I think the questions being asked are spot on – but they are framed in a context that is inaccessible or irrelevant for the majority of developers.
6 Replies to “BSIMM meets Joe the Programmer”
I use SAMM from OWASP more than BSIMM. It’s always great to see what other companies are doing, but just because all of the big boys are doing it doesn’t make it right. BSIMM really isn’t a maturity model, it’s more of a survey.
When I go into a group to work on a software assurance program, I cherry-pick activities from SAMM based on what the organization needs. I’ve worked with a large number of small software organizations where the majority of SAMM activities are too much overhead. The basic setup I go with is:
– Some type of document from management saying software assurance must be done.
– Develop some coding guidelines…even if they just use the stock OWASP secure coding guidelines.
– Implement some type of testing, whether it’s static or dynamic or both.
– Implement a system for tracking vulnerabilities and the progress on fixing them.
That process may not look like much, but it makes a big difference in the security of applications. It’s much more manageable than standing up a security group to design a monolithic process.
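The last item – tracking vulnerabilities and remediation progress – can start as something far simpler than a commercial tool: a structured record plus a couple of queries. A minimal sketch in Python (the field names, statuses, and severity buckets here are illustrative assumptions, not part of SAMM or any specific tracker):

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal vulnerability record; fields are illustrative only.
@dataclass
class Vulnerability:
    ident: str
    title: str
    severity: str          # e.g. "high", "medium", "low"
    status: str = "open"   # assumed lifecycle: "open" -> "in_progress" -> "fixed"
    opened: date = field(default_factory=date.today)

def open_by_severity(vulns):
    """Count unresolved findings per severity bucket."""
    counts = {}
    for v in vulns:
        if v.status != "fixed":
            counts[v.severity] = counts.get(v.severity, 0) + 1
    return counts

# Example backlog for a small team.
tracker = [
    Vulnerability("V-1", "SQL injection in login", "high"),
    Vulnerability("V-2", "Verbose error pages", "low", status="fixed"),
    Vulnerability("V-3", "Missing output encoding", "medium"),
]
print(open_by_severity(tracker))  # {'high': 1, 'medium': 1}
```

Even a throwaway script like this gives a small team the two numbers that matter for the process above: what is still open, and whether the high-severity count is trending down.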
As a development manager for a small shop that takes security seriously, I looked at BSIMM and started to participate in the analysis (we had used Cigital for secure development training and security roadmap planning and reviews when we were starting off because we wanted to start off right). But I didn’t complete the BSIMM analysis: there was just too much that didn’t apply to how we work and the realities of a small company. I wanted to help, but there were too many questions where I couldn’t provide a meaningful answer.
Maturity models don’t fit for small companies and small teams, especially teams that follow Agile methods and are trying to get as lean as possible. We want to solve problems and deliver a system that works: how is a heavyweight practice model going to help me do that?
You are right Adrian, most small companies don’t have the money or time to work this way or think this way, and we can’t afford the training and tools and consulting and all of the overhead that software security programs like this require. In my shop, there isn’t a security program. We have budgets for software development and operations, and security costs are included where they fit. Security is part of how we build systems and run them.
Software security programs with practice assessment models, multiple checkpoints, and expensive tools and training don’t scale down, which means small companies don’t have much to work with; they have to find their own path, and most of them are going to do a poor job of it.
Interesting, I just finished blogging about how maturity models and other solutions targeted to the enterprise aren’t helping the rest of us who write software, thinking about some of the same things for different reasons:
http://swreflections.blogspot.com/2011/01/software-security-and-long-tail.html
Neither of these things hurts app sec, and that claim is just inflammatory.
The touchpoints help organise security activities in a lifecycle, and the BSIMM helps organise activities across an entire portfolio, business unit, or firm. Touchpoints focus on reducing the cost of security defects because they emphasise finding and fixing problems in early phases like requirements, architecture, design, etc. where it’s cheapest. BSIMM helps firms be cost-effective by providing ideas around what they might include or leave out of their security programme.
All of this misses the point that the BSIMM is “descriptive” not “prescriptive.” We went in the field and wrote down what we saw. We never suggest (and take great pains to contradict the idea) that people should do everything in the list or even think it is a list of things you should do. No one should look at the BSIMM as “the best people do all this stuff, you should, too.” A firm would be stupid to do everything in the BSIMM list. A perfect score is doing the set of things that make sense for a given business and its environment and risk tolerance. It’s more like “the best people do all this stuff. Which ones make sense in your environment?” It’s a list of things that we’ve seen and where they fit in the lifecycles we saw them in.
Then there’s the question of perspective. Some security efforts tackle things bottom up: i.e., solve specific problems like authentication or input validation and representation. Others are top-down. There’s no silver bullet solution, so we shouldn’t complain that a top-down initiative doesn’t solve things from the bottom up, or vice versa.
And the BSIMM is creative commons licensed. Data and all. Have at it.
BSIMM and Touchpoints are actually harmful to developers and organizations. They steer folks in the absolutely wrong direction.
Let’s start with the flaws of Touchpoints, then I’ll move to BSIMM.
Why Touchpoints hurt AppSec:
1. It makes security separate from development
2. It is all verification, not build secure apps
3. It is only SDLC (one app), not full-bore appsec program planning across an entire application portfolio
4. It makes security a cost, not an opportunity for improvement in other aspects of software dev
5. It is negative, vulnerability-focused thinking, not positive, controls-centric thinking
6. It is basically hacking ourselves secure, not assurance evidence based
7. It is trivial in the sense that it’s just a concept with no backing…it’s just a picture and a book. No meat!
8. It’s designed to sell tools – not totally, but somewhat
9. It isn’t free and open (creative commons anyone?)
BSIMM continues with this tradition.
Does your organization really care if the software you are writing is secure, or is it a burden and a chore? No amount of process will fix not caring. BSIMM does almost nothing to create a culture of good security practices for developers. Again, it’s 80% verification activities. It extends the tradition of the Touchpoints model, which was 100% verification.
BSIMM and Touchpoints do not go down and dirty and figure out how to make things secure. And frankly, that’s what we as an industry really need right now.
Paco – I appreciate the ‘painting’, but the enjoyment is not transitive. Like art, it inspires me, but it has not helped me. Sure, BSIMM is not a process unto itself. But it’s certainly positioned as a guide to put a process and secure code program together. Whether we agree on what CMM is, I stand by the statement that 30 BSIMM control activities is ‘weighty’.
I like the questions
I’m with you right up until the end. In the second to last paragraph you talk about “applying a capability maturity model”, which I think means “applying the BSIMM.” The BSIMM is not a model that one applies any more than PANTONE is a set of paints that one buys. There is no “weight” of the process because it isn’t a process. I’m hoping PANTONE is well-enough known to make a functional metaphor, because I think it applies nicely here.
One might take the whole PANTONE reference set and hold it up next to the dabs of paint in an impressionist painting and record which PANTONE colors were used. One does not, however, go to the painter and say “here’s a set of PANTONE colors. Paint an impressionist picture using only these.” One might look at the color palette and say “wow, here’s some blues and purples I could use when I’m painting a sunset,” but it’s not the other way round: “If you want to paint a sunset, these are the blues and purples you will use.” This is a particularly apt metaphor because Mondrian and Dali would use very different sets of colors, yet they’re both regarded as great artists. Depending on what you’re trying to accomplish, you’ll have different sets of security activities in your process, and that’s OK.
That said, BSIMM -is- a list of the colors that lots of (we think) important painters are using these days when they paint certain subjects. Our assertion is that if these artists are doing good art using these colors, maybe that’s something others can learn from.
So the question to developers cannot ask something about “the maturity model process” because there isn’t such a thing. It’s observational science. It reports activities we have seen, regardless of where in the lifecycle they occur, and how frequently they seem to occur across sampled firms.
One might ask developers “have you picked anything off the BSIMM list of activities and incorporated it into your process?” “If so, why?” That’s the kind of thing you might ask and be square on target with the BSIMM’s purpose.