Securosis Research

Critical MS Vulnerabilities – September 2009

Got an IM from Rich today: “nasty windows flaw out there – worst in a long time”. I looked over the Microsoft September Security Bulletin and what was posted this morning on their Security Research and Defense blog, and it was clear he was right. MS09-045 and MS09-046 are both “drive-by style” vulnerabilities. The attack vector is most likely malicious websites hosting specially-crafted JavaScript (MS09-045) or malicious use of the DHTML ActiveX control (MS09-046) to infect browsing users. Vulnerabilities that confuse the script engine can be tough to reverse-engineer from the update, so it may take a while for attackers to discover and weaponize them. We still might see a reliable exploit within 30 days – hence Microsoft’s Exploitability Index rating of “1”. The attack vector for both CVEs addressed by MS09-047 is again most likely a malicious website, but these vulnerabilities could also be exploited via media files attached to email. When a victim double-clicks the attachment and clicks “Open” on the dialog box, the media file could hit the vulnerable code.

I started writing up an analysis of the remotely exploitable threats, which can completely hose your system, when it dawned on me that technical analysis in this case is irrelevant. I hate to get all “Uh, remote code execution is bad, mmmkay”, as that is unhelpful, but I think in this case simplicity is best: patch your Vista and Windows machines now! If you need someone else to tell you “Yeah, you’re screwed, patch now”, there is a nice post on the MSRC blog you can check out. If there is not an exploit in the wild already, there will be soon – I am not as optimistic as the MS staff, and think we will probably see something by week’s end.


Friday Summary – September 4, 2009

As much as I love what I do, it’s turned me into a cynical bastard. And no, I don’t mean skeptical, which we’ve talked about before (the application of critical thinking to determine truth), but truly cynical (everyone is a right bastard who will fleece you for everything you’re worth if given the opportunity). While I think both skepticism and cynicism are important traits for a security professional, they do have their downside… especially cynicism. Marketing, for example, really pisses cynics off – even the regular ole’ marketing that finds its way onto every available surface capable of supporting a sticker, poster, or other form of advertising. Even enjoying movies and such is a bit harder (Star Trek nearly lost me completely with that Nokia bit). Don’t even get me started on blatant manipulation of emotions come Emmy/Oscar time. But credulity is a core aspect of the human experience. You can’t maintain social relationships without a degree of trust, and you can’t enjoy any form of entertainment without the ability to suspend disbelief. That’s why I’m a complete nut-job of a Parrothead. Although I know that behind all Margaritaville blenders there’s some guy making absolutely silly money, I don’t care. I’ve put my stake in the ground and decided that here and now I will suspend my cynicism and completely buy into some fantasy world propagated by a corporate entity. And I love every minute of it. I’ve been a Parrothead since high school, and it’s frightening how influential Jimmy Buffett ended up being on my life. His music got me through paramedic school, and has always helped me escape when life veered to the stressful. Six years ago I met my wife at a Jimmy Buffett concert, our first date was at a show, and we got engaged on a trip to Hawaii for a show. Yes, I’ve blown massive amounts of cash on CDs, DVDs, decorative glassware, and various home decor items featuring palm trees and salt shakers, but I figure Mr. 
Buffett has earned every cent of it with the enjoyment he’s brought into my life. That’s why, although I’ve met plenty of celebrities over the years (mostly work related), I nearly peed myself when I was grabbed from the backstage pre-show last weekend and told it was time to meet Jimmy. A few years ago a friend of mine was the network admin for the South Pole, and he sent a video to margaritaville.com of some of the Antarctic Parrotheads while Jimmy was on his Party at the End of the World tour. They played it all over the country, and when Erik decided to go to the show with us he casually emailed his contact there. Next thing you know we have 10th row seats, backstage passes, and Jimmy wants to meet Erik. Since I took him to his first Buffett show, he grabbed me when they told him he could bring a friend. We spent a few minutes in Jimmy’s dressing room, and I mostly listened as they talked Antarctica. It was an amazing experience, and reminded me why sometimes it’s okay to suspend the cynicism and just enjoy the ride. I won’t ruin the moment by trying to tie this to some sort of analogy or life lesson. The truth is I met Jimmy Buffett, it was totally freaking awesome, and nothing else matters. Don’t forget that you can subscribe to the Friday Summary via email. And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
Adrian wrote Truth, lies and fiction about encryption for Information Security Magazine (he did the hard work, I only helped with some of the edits).
Rich was quoted on Mac security in the New York Times Gadgetwise Blog.
Rich and Martin on The Network Security Podcast.

Favorite Securosis Posts
Rich: My start on Data Security in the Cloud. I think I’ve finally figured out a framework for this, and will be blogging the heck out of it over the coming weeks.
Adrian: Part 6 of Understanding and Choosing a Database Assessment Solution.

Other Securosis Posts
Sentrigo and MS SQL Server Vulnerability
Musings on Data Security in the Cloud
OWASP and SunSec Announcement

Project Quant Posts
Raw Project Quant Survey Results

Favorite Outside Posts
Adrian: Robert Graham has an interesting article on using DMCA counter-claims.
Rich: Jack Daniel on the evisceration of the Massachusetts security/privacy law.

Top News and Posts
Microsoft IIS FTP flaw
Smart grid hacking
Major Twitter flaw
Security fundamentals apply to virtualization
Faster WiFi cracking (only affects WPA, not WPA2)
Panera gift card (in)security

Blog Comment of the Week
This week’s best comment comes from ds in response to Musings on Data Security in the Cloud: Good post, I couldn’t agree more. I think a lot of the fear of cloud security is that, for many security pros, this paradigm shift changes the way that they work, makes existing skill sets less relevant, and demands they learn new ones. They raise issues of trust and quality much as other IT pros have when faced with other types of sourcing options, but miss the fact that it is our job to determine the trustworthiness of any solution, internal or external, and that an internal solution isn’t inherently trusted just because we go to lunch with the people who implement and manage it.


Understanding and Choosing a Database Assessment Solution, Part 6: Administration

Reporting for compliance and security, job scheduling, and integration with other business systems are the topics this post will focus on. These are the features outside the core scanning function that make managing a database vulnerability assessment product easier. Most database assessment vendors have listed these features for years, but they were implemented in a “check the box” fashion for marketing purposes, not really designed for ease of use or to help customers. Actually, that comment applies to the products in general. In the 2003-2005 time frame, database assessment products pretty much sucked. There really is no other way to capture the essence of the situation. They had basic checks for vulnerabilities, but most lacked security best practices and operational policies, and were insecure in their own right. Reliability, separation of duties, customization, result set management, trend analysis, workflow, integration with reporting or trouble-ticketing – for any of these, you typically had to look elsewhere. Application Security’s product was the best of a bad lot, which included crappy offerings from IPLocks, NGS, ISS, nTier, and a couple others. I was asked the other day, “Why are you writing about database assessment? Why now? Don’t most people know what assessment is?” There are a lot of reasons for this. Unlike DAM or DLP, we’re not defining and demystifying a market. Database security and compliance requirements have been at issue for many years now, but only recently have platforms matured sufficiently to realize their promise. These are no longer funky little homegrown tools, but are maturing into enterprise-ready products. There are new vendors in the space, and (given some of the vendor calls we get) several more will join the mix.
They are bringing considerable resources to the table beyond what the startups of 5 years ago were capable of, integrating the assessment feature into a broader security portfolio of preventative and detective controls. Even the database vendors are starting to take notice and invest in their products. If you reviewed database assessment products more than two years ago and were dissatisfied, it’s time for another look. On to some of the management features that warrant closer review:

Reporting
As with nearly any security tool, you’ll want flexible reporting options, but pay particular attention to compliance and audit reports. What is suitable for the security staffer or administrator may be entirely unsuitable for a different internal audience, both in content and level of detail. Further, some products generate one or more reports from scan results, while others tie scan results to a single report. Reports should fall into at least three broad categories: compliance and non-technical reports, security reports (incidents), and general technical reports. Built-in report templates can save valuable time by grouping related policies together and providing the level of granularity you want. Some vendors have worked with auditors from the major firms to design reports for specific regulations, like SOX & PCI, and to automatically generate reports during an audit. If your organization needs flexibility in report creation, you may exceed the capabilities of the assessment product and need to export the data to a third-party tool. Plan on taking some time to analyze built-in reports, report templates, and report customization capabilities.

Alerts
Some vendors offer single policy alerts for issues deemed critical. These issues can be highlighted and escalated independently of other reporting tools, providing flexibility in how to handle high-priority issues.
Assessment products are considered a preventative security measure, and unlike monitoring, alerting is not a typical use case. Policies are grouped by job function, and rather than providing single policy scanning or escalation internally, critical policy failures are typically addressed through trouble-ticketing systems as part of normal maintenance. If your organization is moving to a “patch and shield” model, prioritized policy alerts are a long-term feature to consider.

Scheduling
You will want to schedule policies to run on a periodic basis, and all of the platforms provide schedulers to launch scans. Job control may be provided internally, handled via external software, or even run as cron jobs. Most customers we speak with run security scans on a weekly basis, but compliance scans vary widely. Frequency depends upon the type and category of the policy. For example, change management / work order reconciliation is a weekly cycle at some companies, and a quarterly job at others. Vendors should be able to schedule scans to match your cycles.

Remediation & Integration
Once policy violations are identified, you need to get the information into the right hands so corrective action can be taken. Since incident handlers may come from either a database or a security background, look for a tool that appeals to both audiences and supplies each with the information they need to understand incidents and investigate appropriately. This can be done through reports or workflow systems, such as Remedy from BMC. As we discussed in the policy section, each policy should have a thorough description, remediation instructions, and references to additional information. Addressing all of the audiences may require a policy and report customization effort by your team. Some vendors provide hooks for escalation procedures and delivery to different audiences. Others use relational databases to store scan results, which can be directly integrated into third-party systems.
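The scan cycles described under Scheduling can be sketched in a few lines. This is an illustrative sketch only – the policy categories and frequencies below are assumptions drawn from the examples in this post, not defaults from any particular product:

```python
from datetime import date, timedelta

# Hypothetical policy-category-to-frequency mapping. Security scans run
# weekly for most shops; compliance cycles vary by company.
SCAN_CYCLES = {
    "security": timedelta(weeks=1),
    "change_reconciliation": timedelta(weeks=1),   # weekly at some companies
    "compliance": timedelta(weeks=13),             # quarterly at others
}

def next_scan(category: str, last_run: date) -> date:
    """Return the next scheduled scan date for a policy category."""
    return last_run + SCAN_CYCLES[category]

print(next_scan("security", date(2009, 9, 4)))    # 2009-09-11
print(next_scan("compliance", date(2009, 7, 1)))  # 2009-09-30
```

In practice this logic lives inside the product’s scheduler or an external job-control system; the point is simply that scan frequency should be a per-category setting you can match to your own cycles.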
Result Set Management
All the assessment products store scan results, but differ on where and how. Some store the raw data retrieved from the database, some store the result of comparing raw data against the policy, and still others store the results within a report structure. Both for trend analysis and pursuant to certain regulatory requirements, you might need to store scan results for a year or more. Depending upon how these results are stored, the results and the reports may change over time! Examine how the product stores and retrieves prior scan results and reports, as it may keep raw result data, or the reports, or both. Regenerated reports might differ if the policies they were mapped to have changed since the original scan.
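The distinction matters enough to sketch. The toy example below (table, check names, and policies are all invented for illustration) stores raw scan results separately from any policy judgment, so the same results can be re-evaluated as policies change – which is exactly why regenerated reports can differ over time:

```python
import sqlite3, json
from datetime import datetime

# Raw results are stored as-is, with no pass/fail verdict baked in.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE scan_results (
    scan_id TEXT, run_at TEXT, check_name TEXT, raw_value TEXT)""")

def record(scan_id, check_name, raw_value):
    conn.execute("INSERT INTO scan_results VALUES (?, ?, ?, ?)",
                 (scan_id, datetime(2009, 9, 8).isoformat(),
                  check_name, json.dumps(raw_value)))

record("scan-001", "password_lifetime_days", 180)
record("scan-001", "remote_login_enabled", True)

def report(scan_id, policy):
    """Re-evaluate stored raw data against a (possibly updated) policy."""
    rows = conn.execute(
        "SELECT check_name, raw_value FROM scan_results WHERE scan_id = ?",
        (scan_id,)).fetchall()
    return {name: policy[name](json.loads(val)) for name, val in rows}

# The same raw results judged against two policy revisions give two answers:
v1 = {"password_lifetime_days": lambda v: v <= 365,
      "remote_login_enabled": lambda v: True}
v2 = {"password_lifetime_days": lambda v: v <= 90,
      "remote_login_enabled": lambda v: v is False}
print(report("scan-001", v1))  # everything passes under the old policy
print(report("scan-001", v2))  # both checks fail under the stricter policy
```

A product that stores only finished reports cannot do this; one that keeps raw data can regenerate and trend, at the cost of more storage.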


Sentrigo and MS SQL Server Vulnerability

We do not cover press releases. We are flooded with them and, quite frankly, most are not very interesting. You can only read “We’re the market leader in Mumblefoo” or “We’re the only vendor to offer revolutionary widget X” so many times without spitting up. Neither is true, and even if it were, I still wouldn’t care. This morning I am making an exception to the rule, as I got a press release that caught my attention: it announces a database vulnerability, touches on issues of vulnerability disclosure, and comes from one of the DAM vendors whose product is a little different from most. Most of the coverage I read this morning missed some of the areas I feel need to be discussed and analyzed, so this release gets a pass. First, the vulnerability: Sentrigo announced today that they had discovered a flaw in SQL Server (ref: CVE-2009-3039). From what I understand, SQL Server keeps unencrypted passwords in memory for a period of time. This means that anyone with permission to run memory dumping tools would be able to sift through the database memory structures and find cleartext passwords. The prerequisites for exploiting the vulnerability are some subset of administrative privileges, a tool to examine memory, and the time & motivation to rummage around memory looking for passwords. While serious if exploited, given the hurdles you have to jump through to get the data, it’s not likely to occur. Still, being able to take a compromised OS admin account and parlay that into collecting database passwords is a pretty serious cascade failure. I am assuming that encryption keys for transparent encryption were NOT discovered hanging around in memory, but if they were, I would appreciate someone from the Sentrigo team letting me know. For those not familiar with Sentrigo’s Hedgehog technology, it’s a database activity monitoring tool.
Hedgehog collects SQL statements by scanning database memory structures, one of the event collection methods I discussed last year. It works by scanning the memory locations where the database stores queries prior to and during execution. As the database does not store the original query in memory, but instead a machine-readable variant, Hedgehog also performs cross-reference checks to collect additional information and ‘bind variables’ (i.e., query parameters), so you get the original query. This type of technology has been around for a while, but the majority of DAM vendors do not provide this option, as it is expensive to build and difficult to maintain. The internal memory structures of the database change as database vendors alter their platforms or release memory optimization packages, so such scanners need to be updated regularly to stay current. The first tool I saw use this strategy was produced by the BMC team many years ago as an admin tool for query analysis and tuning, but it is suitable for security as well. There are a handful of database memory scanners out there, with two available commercially. One, used by IPLocks Japan, is a derivative of the original BMC technology; the other is Sentrigo’s. They differ in two significant ways. First, IPLocks gathers every statement to construct an audit trail, while Sentrigo is more focused on security monitoring, and only collects statements relevant to security policies. Second, Sentrigo performs policy analysis on the database platform itself, which means additional platform overhead, coupled with faster turnaround on the analysis. Because the analysis is performed on the database, they have the potential to react in time to block malicious queries. There are pros and cons to blocking, and I want to push that philosophical debate to another time. If you have interest in this type of capability, you will need to thoroughly evaluate it in a production setting.
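To make the memory-scanning idea concrete, here is a toy sketch of sifting a raw memory buffer for SQL statements. Real products walk the database engine’s internal structures – which, as noted above, shift between vendor releases – so the regex approach below is purely a conceptual stand-in, not how any shipping scanner works:

```python
import re

# Naive pattern: a SQL keyword followed by a run of non-null bytes.
# Real engines store queries in versioned internal structures, not
# convenient null-terminated strings.
SQL_PATTERN = re.compile(
    rb"(?:SELECT|INSERT|UPDATE|DELETE)\b[^\x00]{0,200}", re.IGNORECASE)

def scan_buffer(buf: bytes) -> list:
    """Extract anything that looks like a SQL statement from raw memory."""
    return [m.group(0).decode("ascii", "replace")
            for m in SQL_PATTERN.finditer(buf)]

# Simulated memory-dump fragment: two queries amid binary noise.
dump = (b"\x00\x17garbage\x00SELECT name FROM users WHERE id = ?\x00"
        b"\x04more\x00DELETE FROM audit_log\x00\xff")
for stmt in scan_buffer(dump):
    print(stmt)
```

The same sketch also illustrates the collection gap mentioned later: a statement flushed from the buffer between scans simply is not there to be matched.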
I have not personally witnessed a successful deployment at a customer site and would not make a recommendation until I do. Other vendors have botched their implementations in the past, so this warrants careful inspection. What’s good about this type of technology? It is one way to collect SQL statements when turning on native auditing is not an option. It can collect every query executed, including batch jobs that are not visible outside the database. This type of event collection is hard for a DBA or admin to intercept or alter to “cover their tracks” if they want to do something malicious. Finally, this is one of the few DAM tools that can perform blocking, which is an advantage for addressing some security threats. What’s bad about this type of technology? First, it can miss statements under heavy load. As many ‘small’ or pre-compiled statements execute quickly, some statements could execute and be flushed from memory too quickly for the scanner to detect. Second, it needs to be tuned to omit irrelevant statements to avoid excessive processing overhead. This type of technology is agent-based, which can be an advantage or disadvantage depending upon your IT setup and operational policies. For example, if you have a thousand databases, you are managing a thousand agents. And as the Hedgehog code resides on the OS, it is accessible by IT admin staff with OS credentials, allowing admins to snoop inside the database. This is an issue for IT organizations that want strict separation of access between DBAs and platform administrators. The reality is that a skilled and determined admin will get access to the database or the data if they really want to, and you have to draw the line on trust somewhere, but this concern is common to both enterprise and SMB customers. On patching the vulnerability (and I am making a guess here), I am willing to bet that Microsoft’s cool response on this issue is due to memory scanning.
As most firms don’t allow admins to run memory scanning or dumping tools on production machines, and Sentrigo’s product is a memory scanner, the perception is that you have to violate a production security policy just to deploy the tool.


Musings on Data Security in the Cloud

So I’ve written about data security, and I’ve written about cloud security, thus it’s probably about time I wrote something about data security in the cloud. To get started, I’m going to skip over defining the cloud. I recommend you take a look at the work of the Cloud Security Alliance, or skip on over to Hoff’s cloud architecture post, which was the foundation of the architectural section of the CSA work. Today’s post is going to be a bit scattershot, as I throw out some of the ideas rolling around my head from thinking about building a data security cycle/framework for the cloud. We’ve previously published two different data/information-centric security cycles. The first, the Data Security Lifecycle (second on the Research Library page), is designed to be a comprehensive, forward-looking model. The second, the Pragmatic Data Security Cycle, is designed to be more useful in limited-scope data security projects. Together they give you the big picture, as well as a pragmatic approach for securing data in today’s resource-constrained environments. These differ from typical Information Lifecycle Management cycles to reflect the different needs of the security audience. When evaluating data security in the context of the cloud, the issue isn’t that we’ve suddenly blasted these cycles into oblivion, but that when and where you can implement controls shifts, sometimes dramatically. Keep in mind that moving to the cloud is every bit as much an opportunity as a risk. I’m serious – when’s the last time you had the chance to completely re-architect your data security from the ground up? For example, one of the most common risks cited when considering cloud deployment is lack of control over your data; any remote admin can potentially see all your sensitive secrets. Then again, so can any local admin (with access to the system). What’s the difference?
In one case you have an employment agreement and their name; in the other you have a Service Level Agreement and contracts… which should include a way to get the admin’s name. The problems are far more similar than they are different. I’m not one of those people saying the cloud isn’t anything new – it is, and some of these subtle differences can have a big impact – but we can definitely scope and manage the data security issues. And when we can’t achieve our desired level of security… well, that’s the time to figure out what our risk tolerance is. Let’s take two specific examples:

Protecting Data on Amazon S3 – Amazon S3 is one of the leading IaaS services for stored data, but it includes only minimal security controls compared to an internal storage repository. Access controls (which may not integrate with your internal access controls) and transit encryption (SSL) are available, but data is not encrypted in storage and may be accessible to Amazon staff or anyone who compromises your Amazon credentials. One option, which we’ve talked about here before, is Virtual Private Storage. You encrypt your data before sending it off to Amazon S3, giving you absolute control over keys and ACLs. You maintain complete control while still retaining the benefits of cloud-based storage. Many cloud backup solutions use this method.

Protecting Data at a SaaS Provider – I’d be more specific and list a SaaS provider, but I can’t remember which ones follow this architecture. With SaaS we have less control and are basically limited to the security controls built into the SaaS offering. That isn’t necessarily bad – the SaaS provider might be far more secure than you are – but not all SaaS offerings are created equal. To secure SaaS data you need to rely more on your contracts and an understanding of how your provider manages your data.
One architectural option for your SaaS provider is to protect your data with individual client keys managed outside the application (this is actually a useful internal data security architecture choice as well). It’s application-level encryption with external key management. All sensitive client data is encrypted in the SaaS provider’s database. Keys are managed in a dedicated appliance/service, and provided temporarily to the application based on user credentials. Ideally the SaaS provider’s admins are properly segregated – no single admin has database, key management, and application credentials. Since this potentially complicates support, it might be restricted to only the most sensitive data. (All your information might still be encrypted, but for support purposes could be accessible to approved administrators/support staff.) The SaaS provider then also logs all access by internal and external users. This is only one option, but your SaaS provider should be able to document their internal data security, and even provide you with external audit reports. As you can see, just because you are in the cloud doesn’t mean you completely give up any chance of data security. It’s all about understanding security boundaries, control options, technology, and process controls. In future posts we’ll start walking through the Data Security Lifecycle and matching specific issues and control options in each phase against the SPI (SaaS, PaaS, IaaS) cloud models.
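A minimal sketch of that segregation pattern follows. Everything here is invented for illustration: the KeyService class stands in for a dedicated key-management appliance, the credential check is a toy, and the XOR “cipher” is a deliberately insecure placeholder – a real deployment would use an authenticated cipher such as AES-GCM from a vetted library:

```python
import hashlib, os

class KeyService:
    """Stand-in for a dedicated key-management appliance/service.
    Keys are released only against a valid user credential, so the
    application never holds long-term key material itself."""
    def __init__(self):
        self._keys = {}

    def key_for(self, client_id: str, credential: str) -> bytes:
        if credential != "valid-token":          # toy credential check
            raise PermissionError("not authorized for client key")
        return self._keys.setdefault(client_id, os.urandom(32))

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # NOT real cryptography: a fixed XOR keystream derived from the key,
    # used only to show where encryption happens in the architecture.
    stream = hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1)
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR keystream is its own inverse

ks = KeyService()
key = ks.key_for("client-42", "valid-token")
stored = toy_encrypt(key, b"123-45-6789")   # what lands in the SaaS database
print(toy_decrypt(key, stored))             # recovered only with the key
```

The point is the separation: a database admin sees only `stored`, the key service admin sees only keys, and neither alone can read client data.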


Friday Summary – August 28, 2009

I got my first CTO promotion at the age of 29, and though I was very strong in technology, it’s shocking how little I knew back then in terms of process, communication, presentation, leadership, business, and a dozen other important things. However, I was fortunate to learn one management lesson early that really helped me define the role: my personal productivity was no longer what mattered in the big picture. Instead, by taking the time to communicate vision, intent, process, and tools – and to educate my fellow development team members – their resulting rise in productivity dwarfed anything I could produce alone. Even on my first small team, making every staff member 10% better in productivity or quality made the power of leadership and communication demonstrable in lines of code produced, reduced bug counts, reusable code, and other ways. The role evolved as I did, from pure technologist, to engineering leader, to outward market evangelist, customer liaison, and ultimately supporting sales, product, marketing, and PR efforts at large. With age and experience, being able to communicate technical complexities simply to a larger external audience magnified my positive impact on the company. Being able to pick the right message, communicate the value a product has, and express how technology addresses business challenges in a meaningful way to non-technical audiences is a very powerful thing. You can literally watch as marketing, PR, and sales teams align themselves – becoming more efficient and more effective – and customers who were not interested now open the door for you. Between two companies with equivalent products, communication can be the difference between efficiency and disorganization, motivation and apathy, commercial success and failure. And it’s clear to me why I need both technical and communication skills in this role as an analyst.
During the RSA show I interrupted two different presentations at two different vendor booths because the presenter was failing to capture the product’s value. The audience members may have been disinterested tchotchke hunters, or they may have been potential customers, but just in case I did not want to see them lose a sale. One of them was Secerno, whom I feel comfortable picking on because I know them and I like their product, so I was an arrogant bastard and re-delivered their sales pitch: simpler language, more concrete examples, tangible value. And rather than throw me out, the booth manager and the tchotchke-hunting potential customer thanked me, because he got ‘it’. Being able to deliver the key messages and communicate value is hard. Creating a value statement that encompasses what you do, and speaks to potential customer needs while avoiding pigeonholing yourself into a tiny market, is really hard. Most go to the opposite extreme, citing how wonderful they are and how quickly all your problems will be solved, without actually bothering to mention what it is they do. Fortune 500 companies can get away with this, and may even do it deliberately to force face-to-face meetings, but it’s the kiss of death for startups without deeply established relationships. On the other side of the equation, I have no idea how most customers wade through the garbage vendors push out there, because I know what value most data security products provide, and it’s not what’s in the marketing collateral. If their logo and web address were not on the page, I wouldn’t have a clue what their product did – or whether it actually did any of the things they claimed. It’s as if the marketing departments don’t know what their product does, but do know how they want to be perceived, and that’s all that matters. Another example, from reading the BitArmor blog, is that they missed the principal value of their product. Why should you be interested in Data Centric Security?
Content and context awareness! Why is that important? Because it provides the extra information needed to create real business usage policies, not just network security policies. It allows the data to be self-defending. You have the ability to provide much finer-grained controls for data. Policy distribution and enforcement are easier. Those are core values of Data Loss Prevention and Digital Rights Management, the two most common instantiations of Data Centric Security. Sure, device independence is cool too, but that is not really a customer problem. Working with small startup firms, you desperately want to get noticed, and I have worked with many ultra-aggressive CEOs who want to latch onto every major security event as public justification of their product’s or service’s value. This form of “bandwagon jumping” is very enticing if your product is indeed a great way to address the problem, but you have to be careful, as it can backfire as well. While their web site does a good job of communicating what they do, this week’s Acunetix blog makes this mistake by tying their product’s value to addressing the SQL injection attacks (allegedly) used by Albert Gonzalez and others. I have no problem with the claims of the post, but the real value of Acunetix and similar firms is finding possible injection attacks before the general public does: during the development cycle. It’s proven cost-effective to do it that way. Once someone finds the vulnerability and the attack is in the wild, cleaning up the code is not the fastest fix, nor the most cost-effective, and certainly not the least disruptive to operations. Customers are wise to this, and defining your value too broadly costs you market credibility. Anyway, sorry to pick on you guys, but you can do better.
For all of you security technology geeks out there who smirked when you read “communicating value is hard”, have some sympathy for your marketing and product marketing teams, because the best technology is only occasionally the right customer solution. Oh, once again, don’t forget that you can subscribe to the Friday Summary via email. And now for the week in review: Webcasts, Podcasts, Outside Writing, and Conferences Rich’s


OWASP and SunSec Announcement

Rich wanted me to put up a reminder that he will be speaking at OWASP next Tuesday (September 1, 2009). I’d say where this is located, but I honestly don’t know. He said it was a secret. Also, for those of you in the greater Phoenix area, we are planning SunSec next week on Tuesday as well. Keep the date on your calendar free. Location TBD. We’ll update this post with details next week. Update: Ben Tomhave was nice enough to post SunSec details here.


Burden of Online Fraud

One of my favorite posts of the last week, and one of the scariest, is Brian Krebs’ Washington Post article on how Businesses Are Reluctant to Report Online Fraud. This is not a report on a single major bank heist, but instead on what many of us have worried about for a long time in Internet fraud: automated, distributed, and repeatable theft. The worry has never been the single million-dollar theft, but scalable, repeatable theft of electronic funds. We are going to be hearing a lot more about this in the coming year. The question that will be debated is: who’s to blame in these situations? The customer, for having almost no security on their small business computer and being completely ignorant of basic security precautions? The bank, for having crummy authentication and fraud detection despite understanding these security threats as part of their business model? Is it contributory negligence? This issue will gain more national attention as more businesses hear their bank say “too bad, your computer was hacked!” Let’s face it: the bank has your money. They are the scorekeeper, and if they say you withdrew your money, the burden of proof is on you to show they are wrong. And no one wants to make them mad for fear they might tell you to piss off. The lines of responsibility need to be drawn. I feel like I am the last person in the U.S. to say this, but I don’t do my banking online. Would it be convenient? Sure, but I think it’s too risky. My bank account information? It’s not going to touch a computer – at least not a computer I own – because I cannot afford to make a mistake. I asked a handful of security researchers I was having lunch with during Defcon – who know a heck of a lot more about web hacking than I do – if they did their banking online. They all said they did, saying “It’s convenient.” Me?
I have to use my computer for research, and I am way too worried that I would make one simple mistake and be completely hosed, having to rebuild from scratch … after my checking account was cleaned out. In each of the last two years, the majority of the people I spoke with at Black Hat/Defcon … no, let’s make that the overwhelming majority of the people I have spoken with overall, had an ‘Oh $&(#’ moment at the conference. At some point we all said to ourselves, “These threats are really bad!” Granted, many of the security researchers I spoke with take extraordinary precautions, but we need to recognize how badly the browsers and web apps we use every day are fundamentally broken from a security standpoint. We need to acknowledge that out of the box, PCs are insecure, and the people who use them are willfully ignorant of security. I may be the last person with a computer who simply won’t budge on this subject. I even get mad when the bank sends me a credit card that has ATM capabilities as a convenience. I did not ask for that ‘feature’ and I don’t want the liability. While the banks keep sending me incentives and encouragement to sign up, I think online banking remains too risky unless you have a dedicated machine. Maybe banks will start issuing smart tokens or adding other security measures to help, but right now the infrastructure appears broken to me.


Database Assessment Solutions, Part 5: Operations and Compliance Policies

Technically speaking, the market segment we are talking about is “Database Vulnerability Assessment”. You might have noticed that we titled this series “Database Assessment”. No, it was not just because the titles of these posts are too long (they are). The primary motivation for this name was to stress that this is not just about vulnerabilities and security. While the genesis of this market is security, compliance with regulatory mandates and operations policies are what drive the buying decisions, as noted in Part 2. (For easy reference, here are Part 1, Part 3, and Part 4.) In many ways, compliance and operational consistency are harder problems to solve because they require more work and tuning on your part, and that need for customization is our focus in this post. In 4GL programming we talk about objects and instantiation. Instantiation takes a generic object and gives it life: it becomes a real instance of the generic thing, with unique attributes and possibly unique behavior. You need to think about databases the same way, because once started up, no two are alike. There may be two installations of DB2 that serve the same application, but they are run by different companies, store different data, are managed by different DBAs, have had their base functions altered in various ways, run on different hardware, and have different configurations. This is why configuration tuning can be difficult: unlike vulnerability policies that detect specific buffer overflows or SQL injection attacks, operational policies are company-specific and derived from best practices. We have already listed a number of the common vulnerability and security policies.
The following is a list of policies that apply to IT operations on the database environment or system:

Operations Policies

  • Password requirements (lifespan, composition)
  • Data files (number, location, permissions)
  • Audit log files (presence, permissions, currency)
  • Product version (version control, patches)
  • Itemization of (unneeded) functions
  • Database consistency checks (e.g., DBCC CHECKDB on SQL Server)
  • Statistics (statspack, auto-statistics)
  • Backup report (last, frequency, destination)
  • Error log generation and access
  • Segregation of the admin role
  • Simultaneous admin logins
  • Ad hoc query usage
  • Discovery (databases, data)
  • Remediation instructions & approved patches
  • Orphaned databases
  • Stored procedures (list, last modified)
  • Changes (files, patches, procedures, schema, supporting functions)

There are a lot more, but these should give you an idea of the basics a vendor should have in place, and allow you to contrast with the general security and vulnerability policies we listed in Part 4.

Compliance Policies

Most regulatory requirements, from industry or government, are fulfilled by the access control and system change policies we have already introduced. PCI adds a few extra requirements around verification of security settings, access rights, and patch levels, but compliance policies are generally a subset of security rules and operational policies. As the list varies by regulation, and the requirements change over time, we are not going to list them separately here. Since compliance is likely what is motivating your purchase of database assessment, you must dig into vendor claims to verify they offer what you need. It gets tricky because some vendors tout compliance – for example “configuration compliance” – which only means you will be compliant with their list of accepted settings. These policies may not be endorsed by anyone other than the vendor, and may have only coincidental relevance to PCI or SOX.
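To make the shape of these operational policies concrete, here is a minimal, hypothetical sketch of how an assessment tool might represent a policy as data – a check function plus severity and remediation text – and evaluate it against a snapshot of database settings. This is not any vendor’s actual product or API; all names, fields, and thresholds are illustrative assumptions.

```python
# Hypothetical sketch: an operational assessment policy pairs a check
# function with metadata (severity, remediation text), and a scan
# evaluates every policy against a snapshot of database settings.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    severity: str                   # e.g., "low", "medium", "high"
    remediation: str                # customizable, environment-specific steps
    check: Callable[[dict], bool]   # returns True when compliant

# Two illustrative policies drawn from the list above; real products
# ship hundreds of these per database platform.
POLICIES = [
    Policy("Password lifespan", "medium",
           "Set password lifetime to 90 days or less.",
           # Missing settings default to a failing value, so an
           # unconfigured database shows up as non-compliant.
           lambda db: db.get("password_life_days", 999) <= 90),
    Policy("Simultaneous admin logins", "high",
           "Limit concurrent administrative sessions to 1.",
           lambda db: db.get("max_admin_sessions", 99) <= 1),
]

def assess(snapshot: dict) -> list:
    """Return (name, severity, remediation) for each failed check."""
    return [(p.name, p.severity, p.remediation)
            for p in POLICIES if not p.check(snapshot)]

# Example scan against a snapshot with a lax password setting:
for name, sev, fix in assess({"password_life_days": 180,
                              "max_admin_sessions": 1}):
    print(f"[{sev}] {name}: {fix}")
```

The point of the data-driven structure is that severity and remediation are plain fields a site can edit, which matters for the customization discussion below.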
In their defense, most commercially available database assessment platforms are sufficiently evolved to offer packaged sets of relevant policies for regulatory compliance, industry best practices, and detection of security vulnerabilities across all major database platforms. They offer sufficient breadth and depth to get you up and running very quickly, but you will need to verify your needs are met, and if not, what the deviation is. What most of the platforms do not do very well is allow for easy policy customization, multiple policy groupings, policy revisions, and creation of copies of the “out of the box” policies provided by the vendor. You need all of these features for day-to-day management, so let’s delve into each of these areas, starting with policy customization.

Policy Customization

Remember how I said in Part 3 that “you are going to be most interested in evaluating assessment tools on how well they cover the policies you need”? That is true, but probably not for the reasons you thought. What I deliberately omitted is that the policies you are interested in prior to product evaluation will not be the same policy set you are interested in afterwards. This is especially true for regulatory policies, which grow in number and change over time. Most DBAs will tell you that the steps a database vendor advises to remediate a problem may break your applications, so you will need a customized set of steps appropriate to your environment. Further, most enterprises have evolved database usage policies far beyond “best practices”, and greatly augment what the assessment vendor provides. This means both the set of policies, and the contents of the policies themselves, will need to change. And I am not just talking about criticality, but description, remediation, the underlying query, and the result set demanded to demonstrate adherence.
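As a sketch of the kind of customization described above – cloning a packaged policy and overriding its remediation text and severity while leaving the vendor original intact – the operation might look like the following. The structure and field names are assumptions for illustration, not any vendor’s actual interface.

```python
# Hypothetical sketch: customize a vendor-supplied policy by deep-copying
# it and overriding site-specific fields, so the packaged original
# survives product updates and the custom copy is clearly labeled.
import copy

vendor_policy = {
    "name": "Audit log permissions",
    "severity": "medium",
    "remediation": "Restrict audit log files to the dba group.",
    "query": "SELECT ... FROM audit_file_listing",  # underlying check, unchanged
}

def customize(policy: dict, **overrides) -> dict:
    """Return a copy of a packaged policy with site overrides applied."""
    custom = copy.deepcopy(policy)
    custom.update(overrides)
    # Track provenance so auditors can see what was changed from stock.
    custom["source"] = f"customized from '{policy['name']}'"
    return custom

site_policy = customize(
    vendor_policy,
    severity="high",
    remediation="Restrict audit logs to the 'secaudit' group; "
                "follow the internal change ticket process.",
)

print(site_policy["severity"])    # overridden for this environment
print(vendor_policy["severity"])  # vendor original untouched
```

Whether a product makes this a one-minute edit or a painful export/import cycle is exactly the management-cost question raised below.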
As you learn more about what is possible, as you refine your internal requirements, or as auditor expectations evolve, you will experience continual drift in your policy set. Sure, you will have static vulnerability and security policies, but as the platform, process, and requirements change, your operations and compliance policy sets will be fluid. How easy it is to customize policies and manage policy sets is extremely important, as it directly affects the time and complexity required to manage the platform. Is it a minute to change a policy, or an hour? Can the auditor do it, or does it require a DBA? Don’t learn this after you have made your investment. On a day-to-day basis, this will be the single biggest management challenge you face, on par with remediation costs.

Policy Groupings & Separation of Duties

For


Some Follow-Up Questions for Bob Russo, General Manager of the PCI Council

I just finished reading a TechTarget editorial by Bob Russo, the General Manager of the PCI Council, where he responded to an article by Eric Ogren. Believe it or not, I don’t intend this to be some sort of snarky anti-PCI post. I’m happy to see Mr. Russo responding directly to open criticism, and I’m hoping he will see this post and maybe we can also get a response. I admit I’ve been highly critical of PCI in the past, but I now take the position that it is an overall positive development for the state of security. That said, I still consider it to be deeply flawed, and when it comes to payments it can never materially improve the security of a highly insecure transaction system (plain text data and magnetic stripe cards). In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it’s sometimes a distraction, and the checklist mentality reduces security in some environments, but overall I see it as a net positive. Mr. Russo states:

It has always been the PCI Security Standards Council’s assertion that everyone in the payment chain, from (point-of-sale) POS manufacturers to e-shopping cart vendors, merchants to financial institutions, should play a role to keep payment information secure. There are many links in this chain – and each link must do their part to remain strong.

and

However, we will only be able to improve the security of the overall payment environment if we work together, globally. It is only by working together that we can combat data compromise and escape the blame game that is perpetuated post breach.

I agree completely with those statements, which leads to my questions:

1. In your list of the payment chain you do not include the card companies. Don’t they also have responsibility for securing payment information, and don’t they technically have the power to implement the most effective changes by improving the technical foundation of transactions?
2. You have said in the past that no PCI compliant company has ever been breached. Since many of the breached organizations were certified as compliant, that appears to be either a false statement or an indicator of a very flawed certification process. Do you feel the PCI process itself needs to be improved?
3. Following up on question 2: if so, how does the PCI Council plan on improving the process to prevent compliant companies from being breached?
4. Following up (again) on question 2: does this mean you feel a PCI compliant company should be immune from security breaches? Is this really an achievable goal?
5. One of the criticisms of PCI is that there seems to be a lack of accountability in the certification process. Do you plan on taking more effective action to discipline or drop QSAs and ASVs that were negligent in their certification of non-compliant companies?
6. Is the PCI Council considering controls to prevent “QSA shopping”, where companies bounce around to find a more lenient QSA?
7. QSAs can currently offer security services to clients that directly affect compliance. This is seen as a conflict of interest in all other major audit processes, such as financial audits. Will the PCI Council consider placing restrictions on these conflict of interest situations?
8. Do you believe we will ever reach a state where a company that was certified as compliant is later breached, and the PCI Council will be willing to publicly back that company and uphold their certification? (I realize this relates again to question 2.)

I know you may not be able to answer all of these, but I’ve tried to keep the questions fair and relevant to the PCI process without devolving into the blame game. Thank you,


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.