
ATM PIN Thefts

The theft of Citibank ATM PINs is in the news again, as it appears that indictments have been handed down on the three suspects. This case will be interesting to watch, to see what the fallout will be. It is still not really clear whether the PINs were leaked in transit or whether the clearing house servers were breached.

There are a couple of things about this story that I still find amusing. The first is that Fiserv, the company that operates the majority of the network, is pointing fingers at Cardtronics Inc. The quote from the Fiserv representative, “Fiserv is confident in the integrity and security of our system”, is great. They both manage elements of the ‘system’. When it comes down to it, this is like two parties standing in a puddle of gasoline, accusing each other of lighting a match. It won’t matter who is at fault when they both go up in flames. In the public mind, no one is going to care: they will be blamed equally, and quite possibly both go out of business if their security is shown to be grossly lacking.

My second thought on this subject was that once you breach the ‘system’, you still have to get the money out. In this case, it has been reported that over $2M was ‘illegally gained’. If the average account is hacked for $200.00, we are talking about at least 10,000 separate ATM withdrawals. That is a lot of time spent at the 7-11! But seriously, that is a lot of time to spend making ATM withdrawals. I figure the way they got caught is that the thief’s picture kept turning up on security cameras … otherwise this is a difficult crime to detect and catch.

I also got to thinking about ATMs: the entire authentication process is not much more than basic two-factor authentication combined with some simple behavioral checks at the back end. The security of these networks is really not all that advanced. PIN codes are typically four digits in length, and it really does not make a lot of sense to use hash algorithms given the size of the PIN and the nature of the communications protocol. And while it requires some degree of technical skill, the card itself can be duplicated, making for a fairly weak two-factor system. Up until a couple of years ago, DES was still the typical encryption algorithm in use, and only parts of the overall transaction processing systems keep the data encrypted. Many of the ATMs are not on private networks, but use the public Internet and airwaves. Given the amount of money and the number of transactions processed around the world, it is really quite astonishing how well the system as a whole holds up.

Finally, while I have been known to bash Microsoft for various security miscues over the years, it seems somewhat specious to state “Hackers are targeting the ATM system’s infrastructure, which is increasingly built on Microsoft Corp.’s Windows operating system.” Of course they are targeting the infrastructure; that is the whole point of electronic fraud. They probably meant the back-end processing infrastructure. And why mention Windows? Familiarity with Windows may make the software an easier target, but this case does not show that any Microsoft product was at fault for the breach. Throwing that into the story seems like an attempt to cast blame on Microsoft software without any real evidence.

What’s My Motivation?

Or more appropriately, “Why are we talking about ADMP?” In his first post on the future of application and database security, Rich talked about Forces and Assumptions heading us down an evolutionary path towards ADMP. I want to offer a slightly different take on my motivation for, and belief in, this strategy.

One of the beautiful things about modern application development is our ability to cobble together small, simple pieces of code into a larger whole in order to accomplish some task. Not only do I get to leverage existing code, but I get to bundle it together in such a way that I alter the behavior depending upon my needs. With simple additions, extensions, and interfaces, I can make a body of code behave very differently depending upon how I organize and deploy the pieces. Further, I can bundle different application platforms together in a seamless manner to offer extraordinary services without a great deal of re-engineering. A loose confederation of applications cooperating to solve business problems is the typical implementation strategy today, and I think the security challenge needs to account for that model rather than the specific components within it. Today, we secure components. We need to be able to ‘link up’ security in the same way that we do the application platforms (I would normally go off on an Information Centric Security rant here, but that is pure evangelism, and a topic for another day).

I have spent the last four years with a security vendor that provided assessment, monitoring, and auditing of databases specifically. Do enough research into security problems, customer needs, and general market trends, and you start to understand the limitations of securing just a single application in the chain of events. For example, I found that database security issues detected as part of an assessment scan may have specific relevance to the effectiveness of database monitoring. I believe web application security providers witness the same phenomenon with SQL injection, as they may lack some context for the attack, or at least for the more subtle subversions of the system or exploitation of logic flaws in the database or database application. A specific configuration might be necessary for business continuity and processing, but could open an acknowledged security weakness that I would like to address with another tool, such as database monitoring.

That said, where I am going with this line of thought is not just the need for detective and preventative controls on a single application like a web server or database server, but rather the inter-application benefit of a more unified security model. There were many cases where I wanted to share some aspect of the database setup with the application or access control system that could make for a more compelling security offering (or vice versa, for that matter). It is hard to understand context when looking at security from a single point outside an application, or from the perspective of a single application component. I have said many times that the information we have at any single processing node is limited. Yes, my bias towards application-level data collection over network-level data collection is well documented, but I am advocating collection of data from multiple sources. A combination of monitoring of multiple information sources, coupled with a broad security and compliance policy set, would be very advantageous.
I do not believe this is simply a case of more (monitoring) being better, but of solving specific problems where it is most efficient to do so. There are certain attacks that are easier to address at the web application level, others best dealt with in the database, and still others that should be intercepted at the network level. But the sharing of policies, policy enforcement, and suspect behaviors can be both more effective and more efficient. Application and Database Monitoring and Protection is a concept that I have been considering, researching, and working towards for several years now. With my previous employer, this was a direction I wanted to take the product line, as well as the partner relationships needed to make this happen across multiple security products. When Rich branded the concept with the “ADMP” moniker it just clicked with me, for the reasons stated above, and I am glad he posted more on the subject last week. But I wanted to put a little more focus on the motivation for what he is describing and why it is important. This is one of the topics we will both be writing about more often in the weeks and months ahead.
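
To make the inter-application idea a bit more concrete, here is a purely illustrative sketch, not a description of any product, of events from a web application monitor and a database monitor being evaluated against one shared policy. The Event fields, the session identifiers, and the correlation window are all invented for the example.

```python
# Illustrative only: correlate events from two monitoring sources under one
# shared policy, so context from the web layer informs the database layer.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class Event:
    source: str    # "web_app", "database", or "network"
    session: str   # shared session/correlation identifier (hypothetical)
    detail: str
    when: datetime


def correlated_alerts(events: List[Event], window: timedelta) -> List[str]:
    """Flag sessions where suspicious web activity is followed, within the
    window, by unusual database activity -- something neither monitor would
    score highly on its own."""
    web = [e for e in events if e.source == "web_app"]
    db = [e for e in events if e.source == "database"]
    alerts = []
    for w in web:
        for d in db:
            if d.session == w.session and timedelta(0) <= d.when - w.when <= window:
                alerts.append(f"session {w.session}: {w.detail} followed by {d.detail}")
    return alerts


if __name__ == "__main__":
    now = datetime.now()
    sample = [
        Event("web_app", "s-17", "quoted string in search parameter", now),
        Event("database", "s-17", "SELECT returned 50,000 customer rows",
              now + timedelta(seconds=4)),
    ]
    for alert in correlated_alerts(sample, timedelta(minutes=5)):
        print(alert)
```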

Database Connections and Trust

Your web application connects to a database. You supply the user name and password, establish the connection, and run your query. It is a very simple, easy to use, and essential component of web applications. The database itself has very little awareness of where the application that made the connection is located. It does not necessarily know the purpose of the application. It may or may not know the real user who is using that connection. It’s not that it cannot; it is just typically not programmed to do so. It is at the beck and call of the application, and will do whatever the application asks it to do.

One of the great reasons to use Database Activity Monitoring is to demystify that connection. These monitoring tools pay close attention to where the connection is coming from, what application is making the connection, what time of day it is, how much data is being moved, what queries are being run, what fails to execute, and on and on. This provides a very real benefit in detecting attacks and other types of misuse. There is a strong market for this type of tool because application developers rarely develop this capability within the context of the service they are providing.

Can this be done from within the database? Yep. Do people do this? Rarely to never. Should it be done? I contend that to some degree it should always be there. Much in the same way we provide range checking on database values, we should also have some degree of business consistency checking. But we don’t, because it is typically not part of the scope of the application project to program the database to perform additional checking and verification. Usually the database is only scoped out to store data and provide some reports: just a basic repository for storage of data and application state. We have gotten to the point where we use Hibernate <http://www.hibernate.org/> to abstract the concept of a database altogether and further remove any native database visibility.

Give the database a user name and password and it will give you everything you have permissions to do … and then some. It is set up to trust you. And why not? You gave it the right credentials! The converse is that the application developer views the database as some abstract object, and security of that object is someone else’s problem. The loss of visibility does not mean that the functionality is not there, or that it is not important, or that the application developer can ignore it.

What I am trying to say is that the database is set up to trust the application connection, and it should not be. Whatever you gave the connecting user permission to do, it will do, whenever asked. But should you be accepting local connections? Remote connections? Ad-hoc queries? What stored procedure execution is appropriate? If the database is used in an SOA environment, or the omnipresent ‘hub-and-spoke’ model, how do those rules change per application connection? Unless you instruct the database to do more, to question the authenticity of the connection over and above access rights, it will not provide you any additional value in terms of security, data consistency, or data privacy. Why is it that application security, and quite specifically web application security, is so often viewed solely as a web application problem? The application has a strong relationship with the database, but typically there is no bi-directional trust enforcement or security.
For example, in production database environments we had a requirement that there would be no ad-hoc access under normal usage of the system. We would implement login triggers, similar to NoToad.sql, to prohibit access via ad-hoc administration tools. We had stored procedures built into our packages that recorded an audit event whenever a user selected more than some predetermined number of customer rows. But I think this was atypical, and these types of security constraints are not systemic, meaning they are often left out of the back end design. The application is designed to serve a business function, and we buy security products to monitor, assess, and audit the business function externally.

Do you see where I am going with this? We can build security in systemically if we choose, and reduce the dependency on external security. We can and should do more to verify that the application connecting to the database not only has appropriate credentials, but appropriate usage. A database is an application platform, and an application in and of itself. This becomes even more important in a virtualized environment, where some of the underlying network assumptions are thrown out the window. Hackers spend a lot of time determining how best to access and utilize the database, not only because it typically contains the information they are after, but also because it is an extraordinarily complex, feature-rich platform. That means a fertile field of opportunity for misused trust relationships and insecure functions … unless you program the database to perform these verifications.
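
As a rough illustration of programming that kind of verification into the database itself, here is a minimal sketch of a deployment script that installs a logon trigger modeled loosely on the NoToad.sql idea mentioned above. It assumes an Oracle back end and the cx_Oracle driver; the trigger name, the blocked program list, and the connection details are hypothetical, and a real deployment would need the appropriate privileges on v$session.

```python
# Sketch: install a database-level logon trigger that rejects sessions opened
# by ad-hoc administration tools. Connection details are hypothetical.

import cx_Oracle

BLOCK_ADHOC_TRIGGER = """
CREATE OR REPLACE TRIGGER block_adhoc_tools
AFTER LOGON ON DATABASE
DECLARE
    v_program VARCHAR2(128);
BEGIN
    SELECT UPPER(program) INTO v_program
      FROM v$session
     WHERE audsid = SYS_CONTEXT('USERENV', 'SESSIONID')
       AND ROWNUM = 1;

    -- Reject sessions opened by ad-hoc query tools; the application
    -- connection pool identifies itself differently.
    IF v_program LIKE '%TOAD%' OR v_program LIKE '%SQLPLUS%' THEN
        RAISE_APPLICATION_ERROR(-20001,
            'Ad-hoc connections are not permitted on this instance');
    END IF;
END;
"""


def install_trigger(dsn: str, user: str, password: str) -> None:
    """Install the logon trigger using a privileged deployment account."""
    with cx_Oracle.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(BLOCK_ADHOC_TRIGGER)


if __name__ == "__main__":
    # Hypothetical connection details -- replace for a real environment.
    install_trigger("dbhost/ORCL", "deploy_admin", "example-password")
```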

Code Development and Security

How do we know our code is bug free? What makes us believe that our application is always going to work? Ultimately, we don’t. We test as best we can. Software vendors spend a significant percentage of their development budget on quality assurance. Over the years we have gotten better at it. We test more, we test earlier, and we test at module, component, and system levels. We write scripts, we buy tools, we mentor our peers on better approaches. We do white box testing, we do black box testing. We have developers write some tests. We have QA write and run tests. We have third-party testing assistance. We perform automated builds and automated tests. We may even have partners, beta customers, and resellers test so that our code is as high quality as possible.

We have learned that the earlier in the process we find issues, the less money we spend fixing them (see Deming, Kaizen, Six Sigma, etc.). We have even altered basic development processes, from waterfall to things like extreme programming and agile methodologies, to better assist with quality efforts. We have taken better advantage of object oriented programming to reuse trusted code, as well as to distill and simplify code to ease maintenance. This did not happen overnight, but has been a gradual process every year I have been part of the industry. We continually strive to get a little better every release, and we have generally gotten better, and done so with fewer resources. None of these strategies were typical 20 years ago, when developers were still testing their own code. We have come a very long way.

So what, you say? I say software security is no different. We are on the cusp of several large changes in the security industry, and this is one of them. Security will come to the “common man” programmer. I was discussing this with Andre Gironda from ts/sci at the SunSec gathering a couple of weeks ago: how process has positively affected quality, how it is starting to positively affect security, and some of the challenges in setting up suitable security test cases at the component and module levels. Andre put up a nice post on “What Web Application Security Really Is” the other day, and he touches on several of these points. Better security needs to come from wholesale and systemic changes. Hackers spend most of their time thinking about how to abuse systems, and your programming team should too.

Do we do this well today? Obviously the answer is ‘no’. Will we get better in 10 years, or even 2 years from now? I am certain we will. Mapping security issues into the QA process, as a sub-component of quality if you will, would certainly help. The infrastructure and process are already present in the development organization to account for it. The reports and statistics we gather to ‘measure’ software quality would be similar in type to those for security … meaning they would both suck, but we’ll use them anyway because we do not have anything better. We are seriously short on education and training. It has taken a long time for security problems, security education, and security awareness to even percolate into the developer community at large. This has been getting a lot of attention in the last couple of years, with the mind-boggling number of data breaches that have been going on, and the huge amount of buzz lately among security practitioners over PCI 6.6 requiring code reviews.
There is a spotlight on the problem, and the net result will be a slow trend toward considering security in the software design and implementation phases. It will work its way into the process, the tools, the educational programs, and the minds of typical programmers. Not overnight. Like quality in general, in slow evolutionary steps.
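
As a small illustration of mapping security into the existing QA process, the sketch below treats abuse cases as ordinary unit tests, run by the same harness QA already uses. The validate_customer_id() routine is hypothetical, invented for the example; the point is simply that hostile input gets exercised alongside the functional cases.

```python
# Sketch: security test cases living alongside functional tests in ordinary QA.

import re
import unittest


def validate_customer_id(value: str) -> str:
    """Accept only short alphanumeric identifiers; reject anything else.
    Hypothetical input validator, invented for this example."""
    if not re.fullmatch(r"[A-Za-z0-9]{1,12}", value or ""):
        raise ValueError("invalid customer id")
    return value


class CustomerIdSecurityTests(unittest.TestCase):
    # Functional case: the kind of test QA already writes.
    def test_accepts_well_formed_id(self):
        self.assertEqual(validate_customer_id("CUST1042"), "CUST1042")

    # Abuse cases: the kind of test a security-aware QA process adds.
    def test_rejects_sql_injection_payload(self):
        with self.assertRaises(ValueError):
            validate_customer_id("1' OR '1'='1")

    def test_rejects_oversized_and_empty_input(self):
        with self.assertRaises(ValueError):
            validate_customer_id("A" * 4096)
        with self.assertRaises(ValueError):
            validate_customer_id("")


if __name__ == "__main__":
    unittest.main()
```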

Pink Slip Virus 2008

This is a very scary thing. I wrote a blog post last year about this type of thing, in response to Rich’s post on lax wireless security. I was trying to think up scenarios where this would be a problem, and the best example I came up with is what I am going to call the “Pink Slip Virus 2008”.

Consider a virus that does the following: once installed, the code periodically downloads pornography onto the computer, encrypts it, and then stores it on the disk. Not too much, and not too often; just a few pictures or small videos. After several weeks of doing this, it decrypts the data, moves it to “My Documents” or some subdirectory, and then uninstalls itself. It could be programmed to remove signs that it was present, such as scrubbing log files, to further hide from detection. The computer could be infected randomly through a hostile web site, or it could be targeted through an injection attack via some insecure service. It could even be targeted by a co-worker who installed this on your machine while you were at lunch, or loaned you an infected memory stick. A virus of this type could be subtle, and use so little CPU, network, and disk resources as to go unnoticed by both the owner of the computer and the IT department.

Now what you have is presumed guilt. If the downloads are discovered by IT, or someone like the malicious co-worker were to proactively mention to HR “I saw something that looked like …” on or after the date the virus uninstalled itself, a subsequent search would reveal pornography on the machine. Odds are the employee would be fired. It would be tough to convince anyone that it was anything other than the employee doing what they should not have been doing, and “innocent until proven guilty” is a legal doctrine that is not applied to corporate hiring and firing decisions.

I was discussing this scenario with our former Director of Marketing at IPLocks, Tom Yates, and he raised a good point. We routinely use Occam’s Razor in our reasoning: the principle that the simplest explanation is usually the correct one. And the simple explanation would be that you were performing unauthorized browsing on your computer, which could have negative legal consequences for the company, and is almost always a fire-able offense. How could you prove otherwise? Who is going to bring in a forensic specialist to prove you are innocent? How could you account for the files?

I had a home computer infected with a BitTorrent-like virus storing files of this kind back in 2003, so I know the virus part is quite feasible. I know that remote sessions can be used to instigate activity from a specific machine as well. It is a problem to assume the person and the computer are one and the same. We often assume that you are responsible for specific activity because it was your IP address, or your MAC address, or your account, or your computer that was involved. But your computer is not always under your control, and passwords are usually easy to guess, so it is a dangerous assumption that the official user is responsible for all activity on a computer. Almost every piece of software I have ever downloaded onto my machine takes some action without my consent. So how would you prove it was some guy looking at porn, and not spammers, hackers, and/or the malicious co-worker?

Separation of Duties/Functions & SQL Injection

In a previous post I noted that ultimately SQL injection is a database attack through a web application proxy, and that the database and the associated database administrators need to play a larger part in the defense of data and applications. I recommended a couple of steps to assist in combating these attacks, using stored procedures to help with input parameter validation. I also want to make additional recommendations in the areas of separation of duties and compartmentalization of functions.

Most of the relational database platforms now provide the ability to have more than one DBA role. This is typically accomplished by removing the single all-powerful DBA user and separating the DBA functions into specific accounts, with each assigned a distinct role such as backup & recovery or user setup. The goal, obviously, is to limit the scope of damage should any single account be compromised, promote more granular auditing, and help prevent the type of abuse that happened with FIS. I find many large corporations are in fact moving to this model. Which leads me to my first point: I have not seen the corresponding change within the application development community, using databases to compartmentalize functions and users.

I was reading a post on SQL injection attacks over on the Vulnerability Research and Defense blog a couple of days back. Continuing their thread of advice on how to address SQL injection, they recommend that IT and database administrators take steps to help prevent it: specifically, review IIS logs for signs of attack, consult your ISV about potential vulnerabilities in third-party code, and validate that accounts have the ‘least privilege’ needed to perform the work. While I have no disagreement with any of these items per se, I think they miss the point. I want to use this to illustrate the issue of perspective, and suggest a change in thinking that needs to happen here.

Most applications perform all database activities under a single database user. This is a problem: a database administrator is supposed to apply the concept of least privilege to the database user and group, but that single generic database user performs every application function. Application of the least privilege concept in this context is almost meaningless. Limiting the features and the scope of access available is just as important. Think about this as separation of duties, so that the scope of what is possible through the web is restricted. The application developer must take some steps to assist in this area by reducing functional scope for individual users. Any web application that uses a database establishes a trusted connection to that database, regardless of whether it is ASP or JSP or whatever. Ultimately, a SQL injection attack is the user of the web application exploiting that trust relationship between the application and the database, piggy-backing code onto the legitimate access. I don’t want to say that if you are only considering ‘least privilege’ when you assess risk you have already lost the battle, but this really should be done in the design phase, as well as through periodic reviews of the system.

Collaborate with Database Administrators and Architects (or stop treating the database like a black box)

They say that if your only tool is a hammer, everything begins to look like a nail. That accurately describes many of the web application developers I have worked with in the last 10 years.
They attempt to provide all of the functionality for their application within the application, and use the database as a simple repository to store, sort, and report data. In reality, database engines like Oracle, MS SQL Server, and DB2 are extraordinarily feature-rich applications that offer more advanced capabilities for data processing related activities. Yet I still find application developers writing tools, functions, and utilities that would be better served by being in the database itself. So separation of duties in the processing environment is a good idea, where different programs, or different roles within those programs, provide different pieces of functionality. Siloed, if you will. So is constant collaboration between application developers and database administrators, designers, and programmers. Smaller, dedicated pieces of code are easier to review. And this is being driven not just by PCI, but also by more modern development processes and QA strategies. In the next post I want to comment on trust relationships and distributed application use of databases.
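
To make the separation-of-duties point a little more concrete, here is a minimal sketch of one way an application could use distinct, narrowly granted database accounts per functional area instead of a single all-purpose login. The role names, grants, tables, and connection details are hypothetical, and the sketch assumes a PostgreSQL back end accessed through psycopg2; the same idea applies to Oracle, SQL Server, or DB2.

```python
# Sketch: per-function, least-privilege database accounts instead of one
# shared application login. All names and credentials are hypothetical.

import psycopg2

# DDL a DBA might run once, during provisioning -- not at application runtime.
PROVISION_ROLES = """
CREATE ROLE web_reader LOGIN PASSWORD 'example-reader-secret';
CREATE ROLE web_orders LOGIN PASSWORD 'example-orders-secret';

GRANT SELECT ON catalog, pricing            TO web_reader;
GRANT SELECT, INSERT ON orders, order_items TO web_orders;
"""

# Application side: each functional area gets its own narrowly scoped login,
# instead of one all-purpose account shared by every page.
FUNCTION_CREDENTIALS = {
    "browse_catalog": {"user": "web_reader", "password": "example-reader-secret"},
    "place_order":    {"user": "web_orders", "password": "example-orders-secret"},
}


def connection_for(function_name: str):
    """Open a connection under the least-privilege account for this function."""
    creds = FUNCTION_CREDENTIALS[function_name]
    return psycopg2.connect(host="dbhost", dbname="shop", **creds)


def list_catalog():
    with connection_for("browse_catalog") as conn:
        with conn.cursor() as cur:
            # Parameterized queries remain the first line of defense against
            # SQL injection; the narrow grants limit the damage if one slips by.
            cur.execute("SELECT sku, name FROM catalog WHERE active = %s", (True,))
            return cur.fetchall()
```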

The Rumor Is True … I’m Joining Rich At Securosis.

Believe it or not, I’m going to work with Rich Mogull at Securosis. Worse yet, I’m excited about it! From the outside looking in, Rich and I have dissimilar backgrounds. I have been working in product development and IT over the last ten years, and Rich has been an analyst and market strategist. But during the four years I have known Rich, we have shown an uncanny similarity in our views on data security across the board. We are both tech guys at the core, and have independently arrived at the same ideas and conclusions about security and what it will look like in the years to come. As our backgrounds are both diverse and complementary, my joining Securosis will let us take on additional clients and slowly expand the types of services we provide. I will contribute to strategy, evaluations, architecture, and end-user guidance, as well as projects that involve more “hands-on” assistance. I will also be contributing to the blog on a regular basis.

Anyway, I am really looking forward to working with Rich on a daily basis. And yes, before Amrit Williams has a chance to ask, I am a card-carrying NAHMLA (North American Hoff-Mogull Love Association) member. We may even sell polo shirts on the web site.

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.