Code Development and Security

How do we know our code is bug-free? What makes us believe that our application is always going to work? Ultimately, we don't know. We test as best we can. Software vendors spend a significant percentage of their development budgets on Quality Assurance, and over the years we have gotten better at it. We test more, we test earlier, and we test at the module, component, and system levels. We write scripts, we buy tools, and we mentor our peers on better approaches. We do white box testing and black box testing. We have developers write some tests; we have QA write and run tests; we bring in third-party testing assistance. We perform automated builds and automated tests. We may even have partners, beta customers, and resellers test so that our code is as high quality as possible.

We have learned that the earlier in the process we find issues, the less money we spend fixing them (see Deming, Kaizen, Six Sigma, etc.). We have even altered basic development processes, moving from waterfall to methodologies like Extreme Programming and Agile, to better support quality efforts. We have taken better advantage of object-oriented programming to reuse trusted code, and to distill and simplify code to ease maintenance. This did not happen overnight; it has been a gradual process every year I have been part of the industry. We continually strive to get a little better with every release, and we generally have, with fewer resources. None of these strategies was typical 20 years ago, when developers were still testing their own code. We have come a very long way.

So what, you say? I say software security is no different. We are on the cusp of several large changes in the security industry, and this is one of them. Security will come to the "common man" programmer.
I was discussing this with Andre Gironda from ts/sci at the SunSec gathering a couple of weeks ago: how process has positively affected quality, how it is starting to positively affect security, and some of the challenges in setting up suitable security test cases at the component and module levels. Andre put up a nice post on "What Web Application Security Really Is" the other day, and he touches on several of these points. Better security needs to come from wholesale, systemic changes. Hackers spend most of their time thinking about how to abuse systems, and your programming team should too. Do we do this well today? Obviously the answer is 'no'. Will we be better in 10 years, or even 2 years from now? I am certain we will.

Mapping security issues into the QA process, as a sub-component of quality if you will, would certainly help. The infrastructure and process are already present in the development organization to account for it. The reports and statistics we gather to 'measure' software quality would be similar in type to those for security ... meaning they would both suck, but we'll use them anyway because we do not have anything better. We are seriously short on education and training. It has taken a long time for security problems, security education, and security awareness to even percolate into the developer community at large. The problem has been getting a lot of attention in the last couple of years, with the mind-boggling number of data breaches and the huge amount of buzz among security practitioners over PCI Requirement 6.6 mandating code reviews. There is a spotlight on the problem, and the net result will be a slow trend toward considering security in the software design and implementation phases. It will work its way into the process, the tools, the educational programs, and the minds of typical programmers. Not overnight; like quality in general, in slow evolutionary steps.
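To make the idea of folding security into the QA process concrete, here is a minimal sketch of an abuse-case test sitting alongside an ordinary functional test. The `sanitize_username` function and its rules are hypothetical, invented for illustration; the point is that attacker-style inputs get the same automated, repeatable treatment as any other test case.

```python
import re
import unittest

def sanitize_username(raw: str) -> str:
    """Hypothetical input validator: allow only short alphanumeric names."""
    if not isinstance(raw, str):
        raise ValueError("username must be a string")
    if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", raw):
        raise ValueError("invalid username")
    return raw

class SecurityTests(unittest.TestCase):
    """Abuse cases written and run exactly like functional QA tests."""

    def test_accepts_normal_input(self):
        self.assertEqual(sanitize_username("alice_01"), "alice_01")

    def test_rejects_sql_injection_attempt(self):
        # An attacker-supplied string, not a malformed-but-honest one
        with self.assertRaises(ValueError):
            sanitize_username("admin'; DROP TABLE users;--")

    def test_rejects_oversized_input(self):
        # Resource-abuse case: absurdly long input must be refused
        with self.assertRaises(ValueError):
            sanitize_username("A" * 10000)
```

Run with `python -m unittest` in the same build pipeline as the rest of the test suite, so a failed abuse case breaks the build just like a failed functional test.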


Pink Slip Virus 2008

This is a very scary thing. I wrote a blog post last year about this type of threat, in response to Rich's post on lax wireless security. I was trying to think up scenarios where this would be a problem, and the best example I came up with is what I am going to call the "Pink Slip Virus 2008". Consider a virus that does the following: once installed, the code periodically downloads pornography onto the computer, encrypts it, and stores it on disk. Not too much, and not too often; just a few pictures or small videos. After several weeks of doing this, it decrypts the data, moves it into "My Documents" or some subdirectory, and uninstalls itself. It could be programmed to remove signs that it was ever present, such as scrubbing log files, to further evade detection. The computer could be infected randomly through a hostile web site, or targeted through an injection attack via some insecure service. It could even be targeted by a co-worker who installed the virus on your machine while you were at lunch, or loaned you an infected memory stick. A virus of this type could be subtle, using so little CPU, network, and disk resources that it goes unnoticed by both the owner of the computer and the IT department.

Now what you have is presumed guilt. If the downloads are discovered by IT, or someone like the malicious co-worker proactively mentions to HR "I saw something that looked like …" on or after the date the virus uninstalled itself, a subsequent search would reveal pornography on the machine. Odds are the employee would be fired. It would be tough to convince anyone that it was anything other than the employee doing what they should not have been doing, and "innocent until proven guilty" is a legal doctrine that is not applied to corporate hiring and firing decisions. I was discussing this scenario with our former Director of Marketing at IPLocks, Tom Yates, and he raised a good point: we routinely use Occam's Razor in our reasoning.
This principle states that the simplest explanation is usually the correct one. And the simple explanation here would be that you were performing unauthorized browsing on your computer, which could have negative legal consequences for the company, and is almost always a fire-able offense. How could you prove otherwise? Who is going to bring in a forensic specialist to prove you are innocent? How could you account for the files? I had a home computer infected with a BitTorrent-like virus storing such files back in 2003, so I know the virus part is quite feasible. I also know that remote sessions can be used to instigate activity from a specific machine. The problem is assuming the person and the computer are one and the same. We often assume you are responsible for specific activity because it was your IP address, your MAC address, your account, or your computer that was involved. But your computer is not always under your control, and passwords are usually easy to guess, so it is dangerous to assume the official user is responsible for all activity on a computer. Almost every piece of software I have ever downloaded onto my machine takes some action without my consent. So how would you prove it was some guy looking at porn, and not spammers, hackers, or a malicious co-worker?


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.