I was rummaging through the closet yesterday when I came across some old notebooks from college. Yes, I am a pack rat. One of them contained notes from Computer Science 110: Algorithm Design. Most of the coursework was about making algorithms more efficient and selecting the right algorithm for the job. I remember spending weeks on sorting routines: bubble sort, merge sort, heap sort, sorts based on the Fibonacci sequence, Quicksort, and a few others. We ran them all against sample data sets, comparing performance and collecting best-case, median, and worst-case results. Obviously with a pre-sorted list they all ran fast, but depending on the size and distribution of the data set our results were radically different.

The more interesting discussion was around worst-case scenarios. One technique for discovering them was the Adversary Technique. Basically, the adversary rearranges the data to make it as difficult as possible to sort. The premise is that, knowing how the algorithm compares elements (e.g., is X >= Y?), the adversary can arrange the data into an order that forces the maximum number of comparisons. Some of the sorts were brilliant on average, but would be computing results until the end of time when confronted by a knowledgeable adversary.
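To make that concrete, here is a minimal sketch (the code and numbers are mine, not from the course notes): a naive quicksort that always picks the first element as its pivot, which an adversary who knows that defeats simply by handing it already-sorted data.

```python
# Illustrative sketch of the adversary idea against a naive quicksort.
# The sort below always uses the first element as its pivot, so an adversary
# who knows that arranges the data in sorted order, forcing ~n*(n-1)/2
# comparisons instead of the ~n log n average case.

import random

comparisons = 0

def quicksort(items):
    """Naive quicksort: first element as pivot, counting comparisons."""
    global comparisons
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    left, right = [], []
    for x in rest:
        comparisons += 1                      # one comparison per element vs. pivot
        (left if x < pivot else right).append(x)
    return quicksort(left) + [pivot] + quicksort(right)

def count_comparisons(data):
    global comparisons
    comparisons = 0
    quicksort(list(data))
    return comparisons

n = 200
random_input = random.sample(range(n), n)     # typical case
adversarial_input = list(range(n))            # worst case for a first-element pivot

print("random input:     ", count_comparisons(random_input))       # roughly n log n
print("adversarial input:", count_comparisons(adversarial_input))  # n*(n-1)/2 = 19900
```

Same algorithm, same data values; only the adversary's ordering changes, and the comparison count explodes.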

All the sort algorithms are long since purged from my memory, and I can truthfully say I have never needed to develop a sorting routine in my entire career. But the adversary technique has been a very useful tool in designing code. I started using a variant of it when writing error-handling routines, to make sure they worked efficiently while still handling errors: what is the most difficult result I could send back? When you start trying to think up errors to send back to a calling application, it's amazing what chaos you can cause. The first time I saw an injection attack, a malicious stream sent back from a .plan file, I thought of the intelligent adversary. It's also a handy concept when writing communication protocols, where you have to establish a trust relationship during multi-phase handshaking; the adversary technique is very good at uncovering logic flaws. The intelligent adversary teaches you to ask the right questions, and is useful for identifying unnecessary complexity in code. If you don't do this already, try a little adversarial role-playing the next time you have design work.
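Here is a hedged sketch of what that role-playing can look like in practice. Everything here is hypothetical: parse_plan_response() stands in for whatever code consumes a remote reply, and the payload list is simply the adversary asking "what is the most difficult result I could send back?"

```python
# Adversarial role-playing against an error-handling path (illustrative only).
# Instead of testing the responses you expect, feed the handler the nastiest
# things an intelligent adversary could send back and see what it does.

ADVERSARIAL_RESPONSES = [
    "",                                   # nothing at all
    "\x00" * 1024,                        # NUL padding
    "A" * 10_000_000,                     # oversized payload
    "'; DROP TABLE users; --",            # injection-style content
    "\r\n\r\nHTTP/1.1 200 OK",            # protocol-smuggling attempt
    "plan\ud800text",                     # malformed encoding (lone surrogate)
]

def parse_plan_response(raw):
    """Hypothetical parser for a .plan-style reply; raises ValueError on bad input."""
    if not raw or len(raw) > 65536:
        raise ValueError("response missing or too large")
    if any(ord(c) < 0x20 and c not in "\r\n\t" for c in raw):
        raise ValueError("control characters in response")
    return raw.strip()

for payload in ADVERSARIAL_RESPONSES:
    try:
        parse_plan_response(payload)
        print("accepted:", ascii(payload[:30]))   # anything accepted here needs a closer look
    except ValueError as err:
        print("rejected:", err)
```

The interesting output is not what gets rejected but what slips through; whatever the adversary sneaks past the checks is exactly where the handler needs more work.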

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Technically my favorite comment of the week was by David Mortman, professing shock that Andre Gironda actually agreed with someone, on a public forum no less! But alas, as he did not leave it on the blog, the award has to go to starbuck, in response to Secure Development Lifecycle–You’re Doing It Wrong.

“Before you know it, HR reps will be including “SDL certification” requirements on every engineering job description, without a clue what they are demanding or why, so let’s stop this train before it runs too far off the tracks.”

Damn right. By the way, I didn't really see the point of your article at first, as it seemed quite logical to me that adopting the methodology/process of one of the biggest software vendors would require very heavy adaptation. Then I remembered myself almost three years ago, printing out the SDL process and coming into the meeting room bragging, "Yeah, that's what we'll do!!!" And I also remember the moment, a year later, when I realized that these models were just…models, ones that small ISVs like the one I was working for couldn't afford, either financially or technically.

Now, the most interesting (in my humble opinion) of your recommendations is the 5th one: do what MS did, not what they do. Ironically, that's what happens: the SDL process describes Microsoft's maturity model at its most mature stage but lacks guidance on how to reach it (the assessment kind of helps, but…anyway). Every company has its own needs and resources, and SDL does not provide any insight into how to identify the appropriate roadmap (i.e., the cheapest, most risk-mitigating approach).

That’s a selling point of the OpenSAMM process, which proposes industry-oriented maturity roadmaps to help an organization walk the path toward a mature software development lifecycle. I am currently deploying security within an existing SDLC, with a massive number of developers, based on the OpenSAMM guidance. Within six months I hope to have some thoughts to share on the differences between working with SDL and working with OpenSAMM.

Let’s hope they will be more positive than my experience with SDL.

Thanks for your article!!
