I was rummaging through the closet yesterday when I came across some old notebooks from college. Yes, I am a pack rat. One of the books contained notes from Computer Science 110: Algorithm Design. Most of the coursework focused on making algorithms more efficient and selecting the right algorithm for the job. I remember spending weeks on sorting routines: bubble sort, merge sort, heap sort, sorts based on the Fibonacci sequence, Quicksort, and a few others. We ran all of them against sample data sets, comparing performance and collecting best-case, median, and worst-case results. Obviously they all ran fast with a pre-sorted list, but depending on the size and distribution of the data set our results were radically different.
The more interesting discussion was the worst-case scenarios. One of the techniques for discovering them was the Adversary Technique. Basically, the adversary would rearrange the data to make it as difficult as possible to sort. The premise was that, knowing how the algorithm compared elements (e.g., is X >= Y?), the adversary would rearrange the data elements into an order that forced the highest possible number of comparisons. Some of the sorts were brilliant on average, but would be computing results until the end of time when confronted by a knowledgeable adversary.
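To make the idea concrete, here is a minimal sketch of my own (not from the course notes) of how an adversary punishes a naive Quicksort that always pivots on the first element: simply handing it data in reverse order drives the comparison count from roughly n log n toward n²/2.

```python
# Hypothetical sketch: a naive Quicksort that always pivots on the first
# element, with a comparison counter. An adversary who knows the pivot rule
# can feed it reverse-ordered data and force ~n^2/2 comparisons.
import random

comparisons = 0

def naive_quicksort(items):
    """Quicksort with a fixed (first-element) pivot; counts comparisons."""
    global comparisons
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    comparisons += len(rest)                      # one comparison per remaining element
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return naive_quicksort(smaller) + [pivot] + naive_quicksort(larger)

for label, data in [("random order", random.sample(range(500), 500)),
                    ("adversarial (reverse order)", list(range(500, 0, -1)))]:
    comparisons = 0
    naive_quicksort(data)
    print(f"{label:>28}: {comparisons} comparisons")
```

Swap in a randomized or median-of-three pivot and the adversary has to work much harder, which was exactly the point of the exercise.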
All the sort algorithms are long since purged from my memory, and I can truthfully say I have never needed to develop a sorting routine in my entire career. But the adversary technique has been a very useful tool for designing code. I started using a variant of that method when writing error-handling routines, so they worked efficiently while still coping with whatever came back. What is the most difficult result I could send back? When you start trying to think of errors to send back to a calling application, it's amazing what chaos you can cause. The first time I saw an injection attack, a malicious stream sent back from a .plan file, I thought of the intelligent adversary. The concept is also handy when writing communication protocols, where you have to establish a trust relationship during multi-phase handshaking; the adversary technique is very good for discovering logic flaws. The intelligent adversary teaches you to ask the right questions, and is useful for identifying unnecessary complexity in code. If you don't do this already, try a little adversarial role-playing the next time you have design work.
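In the same spirit, here is a hypothetical sketch of that adversarial role-playing applied to an error handler. The parse_status_response() function and the payload list are illustrative inventions, not part of any real protocol; the point is simply to enumerate the nastiest replies an intelligent adversary might send back and check that failures stay contained.

```python
# Hypothetical sketch of adversarial role-playing against an error handler.
# parse_status_response() is a made-up stand-in for whatever routine consumes
# a caller's reply; the payloads are the sort of "most difficult result"
# an intelligent adversary would send back.

ADVERSARIAL_REPLIES = [
    b"",                                # nothing at all
    b"\x00" * 65536,                    # oversized garbage
    b"OK; rm -rf /",                    # injection attempt hiding in a "success"
    b"ERROR:" + b"A" * 10000,           # error text sized to strain buffers
    "ERR\u00d6R".encode("utf-16"),      # unexpected encoding
    b"-1",                              # legal-looking but out-of-range value
]

def parse_status_response(raw: bytes) -> str:
    """Toy parser: expects b'OK' or b'ERROR:<detail>' in UTF-8."""
    text = raw.decode("utf-8")                    # raises on bad encodings
    status, _, _detail = text.partition(":")
    if status not in ("OK", "ERROR"):
        raise ValueError(f"unknown status {status!r}")
    return status

for reply in ADVERSARIAL_REPLIES:
    try:
        print(f"accepted: {parse_status_response(reply)}")
    except Exception as exc:            # the real question: does failure stay contained?
        print(f"rejected {reply[:20]!r}: {type(exc).__name__}")
```

The interesting cases are the ones the parser accepts; the oversized "ERROR" reply sails through here, which is exactly the kind of result the role-playing is meant to surface before an adversary does.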
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
- Rich at Dark Reading: A New Way to Choose Database Encryption.
- Adrian’s featured article on Database Activity Monitoring for Information Security Magazine.
- Adrian quoted in Goldman Sachs Sued for Illegal Database Access.
- Rich on the Digital Underground podcast with Dennis Fisher.
- Other Securosis mentions: MSDN SDL group response to Monday’s FireStarter. Robert Graham thinks we’re both full of $#!%. I confess that I am uncertain why Robert thinks our recommendations differ.
Favorite Securosis Posts
- Rich: Unintended Consequences of Consumerization. One of the very first presentations I ever built as an analyst was on consumerization… mostly because I didn’t really know what I was doing at the time. But one tenet from that presentation still holds true – never underestimate the power of consumers, and we are all consumers.
- Mike Rothman: We Have Ways of Making You … Use a Password. Yet another example of legislation gone wild…
- David Mortman and Adrian Lane: FireStarter: Secure Development Lifecycle – You’re Doing It Wrong.
Other Securosis Posts
- SAP Buys Sybase.
- Incite 5/12/2010: The Power of Unplugging.
- Help Build the Mother of All Data Security Surveys.
- Download Our Kick-Ass Database Encryption and Tokenization Paper.
Favorite Outside Posts
- Rich: Why I left Facebook. I’m still on Facebook, but I do nothing I remotely consider private there. I’m only staying on it until there is an alternative that keeps me connected with old friends and family. Maybe that’s hypocritical considering some of my other privacy statements.
- Mike Rothman: Getting the time dimension right. Russell helps us understand security metrics versus risk analysis. “But to make a judgement about security and make decisions about alternative security postures, we need a useful estimate of risk to decide how much security is enough.”
- David Mortman: The Vulnerability Arms Race.
- Adrian Lane: A Brief, Incomplete, and Mostly Wrong History of Programming Languages.
Project Quant Posts
Research Reports and Presentations
- Understanding and Selecting a Database Encryption or Tokenization Solution.
- Low Hanging Fruit: Quick Wins with Data Loss Prevention.
- Report: Database Assessment.
Top News and Posts
- HTML 5 and SQL Injection.
- Cigital has announced the latest BSIMM. Now with three times the number of large development shops who publicly admit that they tend to follow best practices.
- Anti-Malware Bypass.
- Interesting use of DoS to avoid detection.
- Verizon’s Cloud Security Strategy.
- Facebook and the never ending privacy discussion. Personally, I used lilSnitch to block everything Facebook. End of discussion.
- Building their army of hacker commandos, Chris and Jack are indoctrinating children with a weekly regimen of XSS and pummeling drills. Rumors spread: Hoff to become real-life Matthew Sobol. FBI claimed to be watching closely.
- Open Source IDS. Beta available.
- Few details, but Visa posted a warning about settlement fraud scams.
- Stolen Laptop Exposed Data on 207K.
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Technically my favorite comment of the week was by David Mortman, professing shock that Andre Gironda actually agreed with someone, on a public forum no less! But alas, as he did not leave it on the blog, the award has to go to starbuck, in response to Secure Development Lifecycle–You’re Doing It Wrong.
“Before you know it, HR reps will be including “SDL certification” requirements on every engineering job description, without a clue what they are demanding or why, so let’s stop this train before it runs too far off the tracks.”
Damn right. By the way, I didn’t really see the point of your article at first, as it seemed quite logical to me that adopting the methodology/process of one of the biggest software vendors would require very strong adaptation. Then I remembered myself almost three years ago, when I printed out the SDL process and came into the meeting room bragging, “Yeah, that’s what we’ll do!!!” And I also remember the moment, one year later, when I realized that these models were just… models, which small ISVs like the one I was working for couldn’t afford, either financially or technically.
Now, the most interesting (in my humble opinion) of your recommendations is the 5th one: Do what MS did, not what they do. Ironically, that’s what happens: the SDL process describes Microsoft’s maturity model at its most mature stage, but lacks guidance on how to reach it (the assessment kind of helps, but… anyway). Every company has its own needs and resources, and the SDL does not provide any insight into how to identify the appropriate roadmap (aka the cheapest and most risk-mitigating approach).
That’s a selling point of the OpenSAMM process, which proposes industry-oriented maturity roadmaps to help an organization walk the path towards a mature software development lifecycle. I am currently deploying security within an existing SDLC with a massive number of developers, based on the OpenSAMM guidance. Within six months I hope to have some thoughts to share on the differences between working with the SDL and working with OpenSAMM.
Let’s hope they will be more positive than my experience with SDL.
Thanks for your article!!