
I’m Not The Only Blogger Here!

I’ve been absolutely flattered by some of the positive comments on our posts this week, especially the database posts. But as much as I enjoy the credit for someone else’s work, I’d like to remind everyone that I’m not the only blogger here at Securosis anymore. Adrian Lane, our new Senior Security Strategist, has been putting up all the meat this week. Once I get back from this conference I’ll increase the font size on the writer tagline for the blog so it’s more obvious. We also occasionally have contributions from David Mortman and Chris Pepper, both of whom wrote posts I got the credit for. These are all brilliant guys, and I’m honored they contribute here. They’re probably smarter than I am… … oh. Never mind. I write it all.


Database Connections and Trust

Your web application connects to a database. You supply the user name and password, establish the connection, and run your query. It’s a simple, easy-to-use, and essential component of web applications. The database itself has very little awareness of where the connecting application is located. It does not necessarily know the purpose of the application, and it may or may not know the real user behind the connection. It’s not that it cannot; it is just typically not programmed to do so. It is at the beck and call of the application, and will do whatever the application asks.

One of the great reasons to use Database Activity Monitoring is to demystify that connection. These monitoring tools pay close attention to where the connection is coming from, what application is making it, what time of day it is, how much data is being moved, which queries are being run, what fails to execute, and on and on. This provides a very real benefit in detecting attacks and other types of misuse. There is a strong market for this type of tool because application developers rarely build this capability into the services they provide.

Can this be done from within the database? Yep. Do people do it? Rarely to never. Should it be done? I contend that to some degree it should always be there. Much as we provide range checking on database values, we should also have some degree of business consistency checking. But we don’t, because programming the database to perform additional checks and verifications is typically outside the scope of the application project. Usually the database is scoped only to store data and provide some reports: a basic repository for data and application state. We have gotten to the point where we use Hibernate <http://www.hibernate.org/> to abstract away the concept of a database altogether, further removing any native database visibility.

Give the database a user name and password and it will give you everything you have permissions to do … and then some. It is set up to trust you. And why not? You gave it the right credentials! The converse is that the application developer views the database as some abstract object, whose security is someone else’s problem. But the loss of visibility does not mean the functionality is not there, that it is unimportant, or that the application developer can ignore it.

What I am trying to say is that the database is set up to trust the application connection, and it should not be. Whatever you gave the connecting user permission to do, it will do, whenever asked. But should you be accepting local connections? Remote connections? Ad-hoc queries? What stored procedure execution is appropriate? If the database is used in an SOA environment, or the omnipresent ‘hub-and-spoke’ model, how do those rules change per application connection? Unless you instruct the database to do more, to question the authenticity of the connection over and above access rights, it will not provide you any additional value in terms of security, data consistency, or data privacy. Why is application security, and quite specifically web application security, so often viewed solely as the web application’s problem? The application has a strong relationship with the database, but there is typically no bi-directional trust enforcement or security.
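To make that concrete: the database can already see quite a bit about each connection, if anything bothers to ask. A minimal sketch, assuming Oracle (other platforms expose similar session metadata under different names):

    -- What the database already knows about the current connection.
    -- Nothing here is exotic; it is simply rarely checked by anyone.
    SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') AS db_user,
           SYS_CONTEXT('USERENV', 'IP_ADDRESS')   AS client_ip,
           SYS_CONTEXT('USERENV', 'HOST')         AS client_host,
           SYS_CONTEXT('USERENV', 'MODULE')       AS client_module
      FROM dual;

A stored procedure or trigger could compare these values against an expected profile before doing anything sensitive; it is simply work that rarely gets scoped into the project.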
For example, in production database environments we had a requirement that there be no ad-hoc access under normal use of the system. We implemented login triggers, similar to NoToad.sql, to prohibit access via ad-hoc administration tools. We had stored procedures built into our packages that recorded an audit event whenever a user selected more than some predetermined number of customer rows. But I think this was atypical; these types of security constraints are not systemic, meaning they are usually left out of the back-end design. The application is designed to serve a business function, and we buy security products to monitor, assess, and audit the business function externally.

Do you see where I am going with this? We can build security in systemically if we choose, and reduce the dependency on external security. We can and should do more to verify that the application connecting to the database has not only appropriate credentials, but also appropriate usage. A database is an application platform, and an application in and of itself. This becomes even more important in a virtualized environment, where some of the underlying network assumptions are thrown out the window. Hackers spend a lot of time determining how best to access and utilize the database, not only because it typically contains the information they are after, but also because it is an extraordinarily complex, feature-rich platform. That means a fertile field of opportunity for misused trust relationships and insecure functions … unless you program the database to perform these verifications.
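For the curious, a login trigger in the spirit of NoToad.sql might look something like the following. This is a minimal sketch, assuming Oracle; the APP_USER account and the blocked program names are illustrative assumptions, and the trigger owner needs SELECT privileges on v$session.

    -- Refuse ad-hoc tool connections for the application account.
    CREATE OR REPLACE TRIGGER block_adhoc_access
    AFTER LOGON ON DATABASE
    DECLARE
      l_program sys.v_$session.program%TYPE;
    BEGIN
      SELECT UPPER(NVL(program, 'UNKNOWN')) INTO l_program
        FROM sys.v_$session
       WHERE audsid = SYS_CONTEXT('USERENV', 'SESSIONID')
         AND ROWNUM = 1;

      -- The application account may only connect through the application,
      -- never through an ad-hoc administration or query tool.
      IF USER = 'APP_USER'
         AND (l_program LIKE '%TOAD%' OR l_program LIKE '%SQLPLUS%') THEN
        RAISE_APPLICATION_ERROR(-20001,
          'Ad-hoc access is not permitted for the application account');
      END IF;
    END;
    /

The same hook extends naturally to checking source IP or time of day; once it is in place, the policy is just code.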


Code Development and Security

How do we know our code is bug-free? What makes us believe our application is always going to work? Ultimately, we don’t. We test as best we can. Software vendors spend a significant percentage of their development budgets on Quality Assurance. Over the years we have gotten better at it. We test more, we test earlier, and we test at the module, component, and system levels. We write scripts, we buy tools, and we mentor our peers on better approaches. We do white box testing, we do black box testing. We have developers write some tests. We have QA write and run tests. We bring in third-party testing assistance. We perform automated builds and automated tests. We may even have partners, beta customers, and resellers test, so that our code is as high quality as possible.

We have learned that the earlier in the process we find issues, the less money we spend fixing them (see Deming, Kaizen, Six Sigma, etc.). We have altered basic development processes, moving from waterfall to methodologies like Extreme Programming and Agile, to better support quality efforts. We have taken better advantage of object-oriented programming to reuse trusted code, and to distill and simplify code to ease maintenance. This did not happen overnight; it has been a gradual process every year I have been part of the industry. We continually strive to get a little better every release, and we have generally succeeded, with fewer resources. None of these strategies were typical 20 years ago, when developers were still testing their own code. We have come a very long way.

So what, you say? I say software security is no different. We are on the cusp of several large changes in the security industry, and this is one of them. Security will come to the “common man” programmer. I was discussing this with Andre Gironda from ts/sci at the SunSec gathering a couple of weeks ago: how process has positively affected quality, how it is starting to positively affect security, and some of the challenges in setting up suitable security test cases at the component and module levels. Andre put up a nice post on “What Web Application Security Really Is” the other day, and he touches on several of these points.

Better security needs to come from wholesale and systemic changes. Hackers spend most of their time thinking about how to abuse systems, and your programming team should too. Do we do this well today? Obviously the answer is ‘no’. Will we get better in 10 years, or even 2 years from now? I am certain we will. Mapping security issues into the QA process, as a sub-component of quality if you will, would certainly help; the infrastructure and process already exist in the development organization to account for it. The reports and statistics we gather to ‘measure’ software quality would be similar in type to those for security … meaning they would both suck, but we’ll use them anyway because we do not have anything better. We are seriously short on education and training. It has taken a long time for security problems, security education, and security awareness to even percolate into the developer community at large. This has been getting a lot of attention in the last couple of years, with the mind-boggling number of data breaches and the huge amount of buzz among security practitioners over PCI 6.6 requiring code reviews.
There is a spotlight on the problem, and the net result will be a slow trend toward considering security in the software design and implementation phases. It will work its way into the process, the tools, the educational programs, and the minds of typical programmers. Not overnight. As with quality in general, in slow evolutionary steps.
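What might a component-level security test look like? Here is a minimal sketch in Oracle PL/SQL, with all names hypothetical: a validation routine of the kind an application might call before building a query, plus an ‘abuse case’ test that feeds it a hostile string.

    -- Hypothetical input-validation routine: reject over-long values and
    -- SQL metacharacters that have no business in a customer name.
    CREATE OR REPLACE PROCEDURE validate_name_param (p_value IN VARCHAR2) AS
    BEGIN
      IF p_value IS NULL
         OR LENGTH(p_value) > 64
         OR REGEXP_LIKE(p_value, '[''";=]|--') THEN
        RAISE_APPLICATION_ERROR(-20002, 'Invalid input parameter');
      END IF;
    END;
    /

    -- Component-level abuse case: it passes only if hostile input is
    -- rejected, and it runs in the harness like any other QA assertion.
    BEGIN
      validate_name_param('Smith''; DROP TABLE customers; --');
      DBMS_OUTPUT.PUT_LINE('FAIL: hostile input was accepted');
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('PASS: hostile input was rejected');
    END;
    /

The point is not this specific check; it is that security assertions can ride the same automated test infrastructure the QA team already runs.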


Pink Slip Virus 2008

This is a very scary thing. I wrote a blog post last year about this type of thing, in response to Rich’s post on lax wireless security. I was trying to think up scenarios where this would be a problem, and the best example I came up with is what I am going to call the “Pink Slip Virus 2008”.

Consider a virus that does the following: once installed, the code periodically downloads pornography onto the computer, encrypts it, and stores it on disk. Not too much, and not too often; just a few pictures or small videos. After several weeks, it decrypts the data, moves it into “My Documents” or some subdirectory, and then uninstalls itself. It could be programmed to remove signs that it was ever present, such as scrubbing log files, to further evade detection. The computer could be infected randomly through a hostile web site, or targeted through an injection attack via some insecure service. It could even be planted by a co-worker who installed it on your machine while you were at lunch, or loaned you an infected memory stick. A virus of this type could be subtle, using minimal CPU, network, and disk resources so as to go unnoticed by both the owner of the computer and the IT department.

Now what you have is presumed guilt. If the downloads are discovered by IT, or someone like the malicious co-worker were to proactively mention to HR “I saw something that looked like …” on or after the date the virus uninstalled itself, a subsequent search would reveal pornography on the machine. Odds are the employee would be fired. It would be tough to convince anyone that it was anything other than the employee doing what they should not have been doing, and “innocent until proven guilty” is a legal doctrine that does not apply to corporate hiring and firing decisions.

I was discussing this scenario with our former Director of Marketing at IPLocks, Tom Yates, and he raised a good point. We routinely use Occam’s Razor in our reasoning: the simplest explanation is usually the correct one. And the simple explanation would be that you were performing unauthorized browsing with your computer, which could have negative legal consequences for the company, and is almost always a fireable offense. How could you prove otherwise? Who is going to bring in a forensic specialist to prove you are innocent? How could you account for the files?

I had a home computer infected with a BitTorrent-like virus storing just these kinds of files back in 2003, so I know the virus part is quite feasible. I also know that remote sessions can be used to instigate activity from a specific machine. The problem is assuming the person and the computer are one and the same. We often assume someone is responsible for specific activity because it was their IP address, their MAC address, their account, or their computer that was involved. But your computer is not always under your control, and passwords are usually easy to guess, so it is dangerous to assume the official user is responsible for all activity on a computer. Almost every piece of software I have ever downloaded onto my machine takes some action without my consent. So how would you prove it was some guy looking at porn, and not spammers, hackers, or a malicious co-worker?


Speaking in Seattle And New York This Week

It’s a good thing Adrian joined when he did, because I’m slammed with speaking events this week and he gets to mind the blog. Tomorrow I head up to Bellevue to speak at the Association for Enterprise Integration’s Enterprise Security Management event. This is a mixed audience with mostly defense contractors and NSA types. Bit of a different venue for me, but I love talking with .gov/.mil/.nothingtoseehere types. Wednesday I shift over to the City (NYC) for the Financial Information Security Decisions conference put on by Information Security magazine. I’m presenting in the Data Security track, recording a session on virtualization security with Dino and Hoff, and squeezing in a few other things. I can’t speak for Dino, but Hoff and I are both battling travel-related colds, so the panel could end up as the Great Snot War of 2008.


Crime, Communication, and Statistics

I’m not sure if it’s the innate human desire to recognize patterns even when they don’t exist, or if the stars really do align on occasion, but sometimes a series of random events hits at just the right time to inspire a little thought. Or maybe I’m just fishing.

This week is an interesting one on the home front. It’s slowly emerging that we’re having some crime problems in the community. There has been a rash of vehicle break-ins and other light burglary. I found out about it when a board member of our HOA (and former cop) posted in our community forums that we’ve hired an off-duty Phoenix police officer to patrol our neighborhood, on top of the security company we already have here. We’ve got a big community center with a pool, so we need a little more security than the average subdivision. Our community forums are starting to fill up with reports from throughout the community, and I strongly suspect this recent spree will be ending soon. All 900 homes now have access to suspect descriptions, targets, areas of concern, and so on. We’re all locking up tighter and keeping our eyes open. Some activity has already been caught on camera and turned over to the police. We know the bad guys’ techniques, tactics, and operations. With this many eyeballs looking for them, the odds are low they’ll be working around here much longer. We’d had problems for months, and the private security was ineffective; there is just too much territory for them to cover. This spree could have gone on indefinitely, but now that the community is engaged we’ve moved from relying on 2 people to nearly 900 for our monitoring and defense. We’ve gained the edge, just by sharing and talking.

In the security world some interesting tidbits popped up this week. First came Debix with their fraud numbers, and now Verizon with their forensic investigations breach report. On a private email list I was slightly critical of Verizon, but I realized I was just being greedy and wanted more detail. While it could be better, this is some great information to get out there (thanks for making me take a second look, Hoff). I shouldn’t have been critical, because when it comes to data breaches we should be thankful for any moderately reliable stats we can get our hands on.

Between these two reports, a couple of things jumped out at me. First, I think they finally debunk all the insider threat marketing garbage. No one ever really had those numbers; trust me, since I saw my “estimate” from Gartner quoted as a hard number for years. This now aligns with my gut feeling: there are more bad guys on the outside than the inside, although inside attacks can be more devastating under the right circumstances. To further support this, the Verizon report also indicates that many attacks from the inside (or from partners) are really attacks from the outside that compromised an internal system. This supports my controversial positions on how we should treat the insider threat. The second major point is that we rarely know where our data is, or whether our systems are really configured correctly. Both are cited in the report as major sources of breaches: unknown data, unknown systems, and misconfigured systems. This is strongly supported by the root cause analysis work I’ve done on data breaches (in my data breach presentation; I haven’t written it up in paper/blog form yet). People wonder why I’m such a big fan of DLP.
Just think about how much risk you can reduce by scanning your environment for sensitive data in the wrong places. Finally, it’s clear that web applications are a huge problem. Verizon claims web apps were involved in 34% of cases. Again, this supports my conclusion from data breach analysis that more fraud links back to application compromises than to lost tapes or laptops. The Debix numbers also indicate no higher fraud levels for lost tapes than the normal background levels of fraud.

We’re on the early edge of building our own neighborhood watch. We’re starting to see the first little nibs of hard breach data, and they’re already defying conventional wisdom. By communicating and sharing more, we are better able to make informed risk and security decisions. Without this information, the bad guys can keep cruising our neighborhoods with impunity, stealing whatever we accidentally leave in our cars overnight.


Separation of Duties/Functions & SQL Injection

In a previous post I noted that SQL injection is ultimately a database attack through a web application proxy, and that the database and the associated database administrators need to play a larger part in the defense of data and applications. I recommended a couple of steps to help combat these attacks, including the use of stored procedures for input parameter validation. Here I want to make additional recommendations in the areas of separation of duties and compartmentalization of functions.

Most of the relational database platforms now provide the ability to have more than one DBA role. This is typically accomplished by removing the single all-powerful DBA user and separating the DBA functions into specific accounts, each assigned a distinct role such as backup & recovery or user setup. The goal, obviously, is to limit the scope of damage should any single account be compromised, promote more granular auditing, and help prevent the type of abuse that happened with FIS. I find many large corporations are in fact moving to this model. Which leads me to my first point: I have not seen the same change within the application development community, which is not yet using databases to compartmentalize functions and users.

I was reading a post on SQL injection attacks over on the Vulnerability Research and Defense blog a couple of days back. Continuing their thread of advice on how to address SQL injection, they recommend that IT and database administrators take steps to help prevent it: specifically, review IIS logs for signs of attack, consult your ISV on potential vulnerabilities in your third-party code, and validate that accounts have the ‘least privilege’ needed to perform their work. While I have no disagreement with any of these items per se, I think they miss the point. I want to use this to illustrate an issue of perspective, and suggest a change in thinking that needs to happen here.

Most applications perform all database activities as a single database user. This is a problem: a database administrator is supposed to apply the concept of least privilege to each database user and group, but that single generic database user performs every application function. Applying least privilege in this context is almost meaningless. Limiting the features and the scope of access available is just as important. Think about this as separation of duties, so that the scope of what is possible through the web is restricted. The application developer must assist here by reducing the functional scope available to individual users. Any web application that uses a database establishes a trusted connection to that database, regardless of whether it is ASP or JSP or whatever. Ultimately, a SQL injection attack is the user of the web application exploiting that trust relationship between the application and the database, piggy-backing code onto legitimate access. I don’t want to say that if you are only considering ‘least privilege’ when you assess risk you have already lost the battle, but this really should be done in the design phase, as well as through periodic reviews of the system. A minimal sketch of what function-specific accounts can look like follows below.
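Here is that sketch; all account, schema, and procedure names are hypothetical, and the syntax is Oracle-flavored (SQL Server and DB2 have equivalents).

    -- Split one generic application login into function-specific accounts.
    -- Catalog browsing: read-only, and only on the tables it needs.
    CREATE USER app_catalog IDENTIFIED BY change_me_1;
    GRANT CREATE SESSION TO app_catalog;
    GRANT SELECT ON shop.products TO app_catalog;

    -- Order entry: no direct table access at all; stored procedures only.
    CREATE USER app_orders IDENTIFIED BY change_me_2;
    GRANT CREATE SESSION TO app_orders;
    GRANT EXECUTE ON shop.place_order TO app_orders;
    GRANT EXECUTE ON shop.order_status TO app_orders;

    -- Neither account can SELECT from shop.customers directly, so SQL
    -- injection riding the catalog connection cannot dump customer records.

Now least privilege actually means something: a compromised catalog connection can read product listings, and nothing else.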
Collaborate with Database Administrators and Architects (or stop treating the database like a black box)

They say if your only tool is a hammer, everything begins to look like a nail. That accurately describes many of the web application developers I have worked with over the last 10 years. They attempt to provide all of their application’s functionality within the application, using the database as a simple repository to store, sort, and report data. In reality, database engines like Oracle, MS SQL Server, and DB2 are extraordinarily feature-rich applications, with more advanced capabilities for data processing than most of the code written on top of them. Yet I still find application developers writing tools, functions, and utilities that would be better served by living in the database itself.

So separation of duties in the processing environment is a good idea, where different programs, or different roles within those programs, provide different pieces of functionality. Siloed, if you will. So is constant collaboration between application developers and database administrators, designers, and programmers. Smaller, dedicated pieces of code are easier to review. And this is being driven not just by PCI, but also by more modern development processes and QA strategies. In the next post I want to comment on trust relationships and distributed application use of databases.


Adrian Lane Joining Securosis

Earlier today I had a bit of a shock when our fearless editor Chris Pepper congratulated me on our 500th post. I started this blog just under two years ago to test the waters of this whole new media thing. Much to my surprise, almost exactly a year after that I took the plunge, quit a heck of a good job, and turned Securosis into a company, not just a place for my random rants. Over that time Chris joined me as editor, and David Mortman as an occasional contributor.

Today we’re taking the next big step as Adrian Lane joins me on the business side of Securosis, L.L.C. as our Senior Security Strategist. That’s right, I just officially doubled the size of our full-time staff. Adrian is the former CTO of IPLocks and has a long history in the security industry as both a CTO and VP of Engineering for various companies. We met about four years ago on one of my analyst gigs, and it didn’t take long to realize that despite our different backgrounds, we shared many elements of a common vision of information-centric security. Adrian’s been a frequent commenter on this blog since the start, and lives only a few miles from me just outside of Phoenix. When I found out he was on the market and thinking of moving into consulting, there was no way in heck I was going to pass on the opportunity. He has deep technical skills and an intuitive understanding of markets, product development, and the big picture.

For our existing clients, Adrian is available as a resource on current contracts and for new engagements. He’ll be a regular contributor to the blog, and we’re working on some new, exciting content. He previously blogged over here if you’d like to check out his other content. Prepare yourselves for the flood of juicy information-centric goodness! And did I mention he’s friends with Hoff? Yeah, we should probably hold that against him.


There Are No Safe Web Sites

I spend a reasonable amount of time writing security articles for the consumer audience over at TidBITS, never mind this site. When I talk about browser security, one of my top tips is to avoid risky behavior and “those” sites. Although that’s pretty standard advice, it’s become a load of bollocks, and I can no longer give it in good conscience.

I spend a lot of time these days focusing on web application security and talking with some of the leading web app researchers, like RSnake and Jeremiah Grossman. It’s increasingly obvious that a combination of cross-site scripting and some more nefarious web app attacks is destroying the concept of “safe” websites. We’ve seen everything from banks, to security vendors, to advertising servers, to major providers like Google and Yahoo fall victim to attacks where malicious content is embedded or executed in the context of the trusted site. PayPal may make a big deal about extended validation digital certificates and colorful anti-phishing banners, but an EV cert doesn’t do squat if the bad guy sneaks in a little malicious JavaScript, and you’ve now run the nasty code in a trusted context.

Today, Dark Reading ran an article on some major security sites with cross-site scripting vulnerabilities. Combined with a few beers with RSnake last week, it pushed me over the edge. These days, it’s hard to call any site trusted. That’s one reason I’ve shifted to my multi-browser/multi-operating-system strategy. Realistically, I can’t tell everyone in the world to adopt my level of paranoia, in part because, as bad as things are, most people aren’t suffering real damage because of it. That said, it strongly emphasizes the need not only to keep your system up to date, but at least to split browsers between financial and regular sites. It also strongly points to the need to change the fundamental trust model of browsers, and to push us in the security industry toward solutions like ADMP and browser session virtualization (or better yet, a combination of both).

This isn’t a “the world is ending” post. It’s merely a recognition that “safe” browsing is only a partial security control these days, and one that’s losing effectiveness. We need to adopt new strategies, ones that transcend current browser trust models which do little but make life easier for the smart attackers who take advantage of them, before we start seeing more mass exploitation leveraging commonly trusted sites. Oh yeah, and stop wasting money on EV certs.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.