By Adrian Lane
We do not cover press releases. We are flooded with them and, quite frankly, most are not very interesting. You can only read “We’re the market leader in Mumblefoo” or “We’re the only vendor to offer revolutionary widget X” so many times without spitting up. Neither is true, and even if it were, I still wouldn’t care. This morning I am making an exception to the rule, as I got a press release that caught my attention: it announces a database vulnerability discovered by one of the DAM vendors, whose product is a little different from most, and it touches on issues of vulnerability disclosure. Most of the coverage I read this morning missed the areas I feel need to be discussed and analyzed, so this release gets a pass.
First, the vulnerability: Sentrigo announced today that they had discovered a flaw in SQL Server (ref: CVE-2009-3039). From what I understand, SQL Server keeps unencrypted passwords in memory for a period of time. This means that anyone with permission to run memory dumping tools would be able to sift through the database memory structures and find cleartext passwords. The prerequisites to exploit the vulnerability are some subset of administrative privileges, a tool to examine memory, and the time & motivation to rummage around memory looking for passwords. While it is serious if exploited, given the hurdles you have to jump through to get the data, exploitation is not likely. Still, being able to take a compromised OS admin account and parlay that into collecting database passwords is a pretty serious cascade failure. I am assuming that encryption keys for transparent encryption were NOT discovered hanging around in memory, but if they were, I would appreciate someone from the Sentrigo team letting me know.
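To make the risk concrete, here is a toy sketch (my own illustration, not Sentrigo’s tool or the actual exploit) of why cleartext secrets in process memory are dangerous: once an attacker can dump memory, finding them can be as simple as pattern matching. The marker and buffer here are invented for the example.

```python
import re

def find_candidate_secrets(memory: bytes, min_len: int = 6) -> list[str]:
    """Return printable ASCII runs that follow a 'password=' marker.

    Toy illustration only: a real forensics tool parses the database's
    internal memory structures rather than grepping raw bytes.
    """
    pattern = rb"password=([\x21-\x7e]{%d,})" % min_len
    return [m.group(1).decode("ascii") for m in re.finditer(pattern, memory)]

# Simulated slice of process memory where a credential was left unencrypted.
fake_dump = b"\x00\x17user=sa\x00password=Sup3rS3cret!\x00\xff\x03stats"
print(find_candidate_secrets(fake_dump))  # ['Sup3rS3cret!']
```

The point is not the regex; it is that no database privilege checks apply once the secret is sitting in a raw memory image.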
For those not familiar with Sentrigo’s Hedgehog technology, it’s a database activity monitoring tool. Hedgehog collects SQL statements by scanning database memory structures, one of the event collection methods I discussed last year. It works by scanning the memory location where the database stores queries prior to and during execution. As the database does not store the original query in memory, but instead a machine-readable variant, Hedgehog also performs cross-reference checks to collect additional information and ‘bind variables’ (i.e., query parameters), so you get the original query. This type of technology has been around for a while, but the majority of DAM vendors do not provide this option, as it is expensive to build and difficult to maintain. The internal memory structures of the database change as database vendors alter their platforms or provide memory optimization packages, so such scanners need to be updated regularly to stay current. The first tool I saw use this strategy was produced by the BMC team many years ago as an admin tool for query analysis and tuning, but it is suitable for security as well.
There are a handful of database memory scanners out there, with two available commercially. One, used by IPLocks Japan, is a derivative of the original BMC technology; the other is Sentrigo’s. They differ in two significant ways. One, IPLocks gathers every statement to construct an audit trail, while Sentrigo is more focused on security monitoring, and only collects statements relevant to security policies. Two, Sentrigo performs policy analysis on the database platform, which means additional platform overhead but faster turnaround on the analysis. Because the analysis is performed on the database, they have the potential to react in time to block malicious queries. There are pros and cons to blocking, and I want to push that philosophical debate to another time. If you are interested in this type of capability, you will need to evaluate it thoroughly in a production setting. I have not personally witnessed a successful deployment at a customer site and would not make a recommendation until I see one. Other vendors have botched their implementations in the past, so this warrants careful inspection.
What’s good about this type of technology? This is one way to collect SQL statements when turning on native auditing is not an option. It can collect every query executed, including batch jobs that are not visible outside the database. This type of event collection is hard for a DBA or admin to intercept or alter to “cover their tracks” if they want to do something malicious. Finally, this is one of the DAM tools that can perform blocking, and that is an advantage for addressing some security threats.
What’s bad about this type of technology is, first, that it can miss statements under heavy load. As many small or pre-compiled statements execute quickly, some could be executed and flushed from memory too fast for the scanner to detect. Second, it needs to be tuned to omit irrelevant statements to avoid too much processing overhead. This type of technology is agent-based, which can be an advantage or disadvantage depending upon your IT setup and operational policies. For example, if you have a thousand databases, you are managing a thousand agents. And as Hedgehog code resides on the OS, it is accessible by IT admin staff with OS credentials, allowing admins to snoop inside the database. This is an issue for IT organizations that want strict separation of access between DBAs and platform administrators. The reality is that a skilled and determined admin will get access to the database or the data if they really want to, and you have to draw the line on trust somewhere, but this concern is common to both enterprise and SMB customers.
On patching the vulnerability (and I am making a guess here), I am willing to bet that Microsoft’s cool response on this issue is due to memory scanning. As most firms don’t allow admins to run memory scanning or dumping tools on production machines, and Sentrigo is a memory scanner, the perception is that you have to violate a best practice just to exploit the vulnerability. I have to commend the way Sentrigo handled disclosure of the vulnerability, giving the vendor ample time to address it and providing a workaround. Disclosure is a huge point of friction in the research community right now, due to issues exactly like this one. I agree with Sentrigo that if we don’t spotlight these issues in a very public way, the vendors will never be sufficiently motivated to clean up their sloppy coding practices. And make no mistake, this is a sloppy coding vulnerability. But I think Sentrigo showed professionalism in giving the SQL Server team ample time before public disclosure.
Posted at Wednesday 2nd September 2009 6:35 pm
(5) Comments •
So I’ve written about data security, and I’ve written about cloud security, thus it’s probably about time I wrote something about data security in the cloud.
To get started, I’m going to skip over defining the cloud. I recommend you take a look at the work of the Cloud Security Alliance, or skip on over to Hoff’s cloud architecture post, which was the foundation of the architectural section of the CSA work. Today’s post is going to be a bit scattershot, as I throw out some of the ideas rolling around my head from thinking about building a data security cycle/framework for the cloud.
We’ve previously published two different data/information-centric security cycles. The first, the Data Security Lifecycle (second on the Research Library page) is designed to be a comprehensive forward-looking model. The second, The Pragmatic Data Security Cycle, is designed to be more useful in limited-scope data security projects. Together they are designed to give you the big picture, as well as a pragmatic approach for securing data in today’s resource-constrained environments. These are different than your typical Information Lifecycle Management cycles to reflect the different needs of the security audience.
When evaluating data security in the context of the cloud, the issues aren’t that we’ve suddenly blasted these cycles into oblivion, but that when and where you can implement controls is shifted, sometimes dramatically. Keep in mind that moving to the cloud is every bit as much an opportunity as a risk. I’m serious – when’s the last time you had the chance to completely re-architect your data security from the ground up?
For example, one of the most common risks cited when considering cloud deployment is lack of control over your data; any remote admin can potentially see all your sensitive secrets. Then again, so can any local admin (with access to the system). What’s the difference? In one case you have an employment agreement and their name, in the other you have a Service Level Agreement and contracts… which should include a way to get the admin’s name.
The problems are far more similar than they are different. I’m not one of those people saying the cloud isn’t anything new – it is, and some of these subtle differences can have a big impact – but we can definitely scope and manage the data security issues. And when we can’t achieve our desired level of security… well, that’s time to figure out what our risk tolerance is.
Let’s take two specific examples:
Protecting Data on Amazon S3 – Amazon S3 is one of the leading IaaS services for stored data, but it includes only minimal security controls compared to an internal storage repository. Access controls (which may not integrate with your internal access controls) and transit encryption (SSL) are available, but data is not encrypted in storage and may be accessible to Amazon staff or anyone who compromises your Amazon credentials. One option, which we’ve talked about here before, is Virtual Private Storage. You encrypt your data before sending it off to Amazon S3, giving you absolute control over keys and ACLs. You maintain complete control while still retaining the benefits of cloud-based storage. Many cloud backup solutions use this method.
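As a sketch of the Virtual Private Storage idea (encrypt locally, upload only ciphertext, keep the keys yourself), here is a minimal illustration. The cipher below is a hashlib-based stand-in so the example stays self-contained; a real implementation would use authenticated encryption such as AES-GCM from a vetted crypto library, and the S3 upload call is shown only as a hypothetical comment.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """SHA-256 counter-mode keystream. A stand-in for a real cipher:
    use AES-GCM from a vetted library in production."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce + ct  # prepend nonce so decrypt can recover it

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)           # never leaves your environment
record = b"customer record: 078-05-1120"
blob = encrypt(key, record)             # this is all the provider ever sees
# s3.put_object(Bucket="my-bucket", Key="record.bin", Body=blob)  # hypothetical upload
assert decrypt(key, blob) == record
```

Because Amazon only ever stores `blob`, a compromise of your S3 credentials exposes ciphertext, not data; key management stays entirely on your side of the boundary.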
Protecting Data at a SaaS Provider – I’d be more specific and list a SaaS provider, but I can’t remember which ones follow this architecture. With SaaS we have less control and are basically limited to the security controls built into the SaaS offering. That isn’t necessarily bad – the SaaS provider might be far more secure than you are – but not all SaaS offerings are created equal. To secure SaaS data you need to rely more on your contracts and an understanding of how your provider manages your data.
One architectural option for your SaaS provider is to protect your data with individual client keys managed outside the application (this is actually a useful internal data security architectural choice). It’s application-level encryption with external key management. All sensitive client data is encrypted in the SaaS provider’s database. Keys are managed in a dedicated appliance/service, and provided temporarily to the application based on user credentials. Ideally the SaaS provider’s admins are properly segregated – no single admin has database, key management, and application credentials. Since this potentially complicates support, it might be restricted to only the most sensitive data. (All your information might still be encrypted, but for support purposes could be accessible to approved administrators/support staff.) The SaaS provider then also logs all access by internal and external users.
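A rough sketch of the key-management side of this architecture follows. The class and method names are my own invention, not any vendor’s API: the key service holds each tenant’s data key and releases it to the application only for authenticated users, and only on a short lease.

```python
import secrets
import time

class KeyService:
    """Illustrative external key manager: issues a tenant's data key to
    the application for authenticated users, for a limited time window."""

    def __init__(self, lease_seconds: float = 300.0):
        self._tenant_keys = {}       # tenant_id -> data encryption key
        self._leases = {}            # (tenant_id, user) -> expiry timestamp
        self._lease_seconds = lease_seconds

    def enroll_tenant(self, tenant_id: str) -> None:
        self._tenant_keys[tenant_id] = secrets.token_bytes(32)

    def issue_key(self, tenant_id: str, user: str, authenticated: bool) -> bytes:
        if not authenticated:
            raise PermissionError("user failed authentication")
        self._leases[(tenant_id, user)] = time.time() + self._lease_seconds
        return self._tenant_keys[tenant_id]

    def lease_valid(self, tenant_id: str, user: str) -> bool:
        return time.time() < self._leases.get((tenant_id, user), 0.0)

kms = KeyService(lease_seconds=0.1)      # short lease, just for demonstration
kms.enroll_tenant("acme")
key = kms.issue_key("acme", "alice", authenticated=True)
assert kms.lease_valid("acme", "alice")
time.sleep(0.2)
assert not kms.lease_valid("acme", "alice")   # lease expired; re-authenticate
```

The separation-of-duties point from above falls out naturally: the database admin sees only ciphertext, and the key service admin never touches the database.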
This is only one option, but your SaaS provider should be able to document their internal data security, and even provide you with external audit reports.
As you can see, just because you are in the cloud doesn’t mean you completely give up any chance of data security. It’s all about understanding security boundaries, control options, technology, and process controls.
In future posts we’ll start walking through the Data Security Lifecycle and matching specific issues and control options in each phase against the SPI (SaaS, PaaS, IaaS) cloud models.
Posted at Tuesday 1st September 2009 10:19 pm
(2) Comments •
Rich wanted me to put up a reminder that he will be speaking at OWASP next Tuesday (September 1, 2009). I’d say where this was located, but I honestly don’t know. He said it was a secret.
Also, for those of you in the greater Phoenix area, we are planning SunSec next week on Tuesday as well. Keep the date on your calendar free. Location TBD. We’ll update this post with details next week.
Update: Ben Tomhave was nice enough to post SunSec details here.
Posted at Friday 28th August 2009 5:47 pm
(0) Comments •
I got my first CTO promotion at the age of 29, and though I was very strong in technology, it’s shocking how little I knew back then in terms of process, communication, presentation, leadership, business, and a dozen other important things. However, I was fortunate to learn one management lesson early that really helped me define the role. It turned out that my personal productivity was no longer relevant in the big picture. Instead, by taking the time to communicate vision, intent, process, and tools – and to educate my fellow development team members – the resulting rise in their productivity dwarfed anything I could produce myself. Even on my first small team, making every staff member 10% better in productivity or quality made the power of leadership and communication demonstrable in lines of code produced, reduced bug counts, reusable code, and other ways.
The role evolved as I did, from pure technologist, to engineering leader, to outward market evangelist, customer liaison, and ultimately supporting sales, product, marketing, and PR efforts at large. With age and experience, being able to communicate technical complexities in a simple way to a larger external audience magnified my positive impact on the company. Being able to pick the right message, communicate the value a product has, and express how technology addresses business challenges in a meaningful way to non-technical audiences is a very powerful thing. You can literally watch as marketing, PR, and sales teams align themselves – becoming more efficient and more effective – and customers who were not interested now open the door for you. Between two companies with equivalent products, communication can be the difference between efficiency and disorganization, motivation and apathy, commercial success and failure.
And it’s clear to me why I need both in this role as analyst.
During the RSA show I interrupted two different presentations at two different vendor booths because the presenter was failing to capture their product’s value. The audience members may have been disinterested tchotchke hunters, or they may have been potential customers, but just in case I did not want to see them lose a sale. One of them was Secerno, whom I feel comfortable picking on because I know them and I like their product, so I was an arrogant bastard and re-delivered their sales pitch. Simpler language, more concrete examples, tangible value. And rather than throw me out, the booth manager and the potential customer thanked me because he got ‘it’.
Being able to deliver the key messages and communicate value is hard. Creating a value statement that encompasses what you do, and speaking to potential customer needs while avoiding pigeon-holing yourself into a tiny market is really hard. Most go to the opposite extreme, citing how wonderful they are and how quickly all your problems will be solved without actually bothering to mention what it is they do. Fortune 500 companies can get away with this, and may even do it deliberately to force face to face meetings, but it’s the kiss of death for startups without deeply established relationships.
On the other side of the equation, I have no idea how most customers wade through the garbage vendors push out there, because I know what value most of the data security products provide, and it’s not what’s in the marketing collateral. If their logo and web address were not on the web page, I wouldn’t have a clue what their product did, or whether they actually did any of the things they claimed. It’s as if the marketing departments don’t know what their product does, but do know how they want to be perceived, and that’s all that matters.
Another example, reading the BitArmor blog, is that they missed the principal value of their product. Why should you be interested in Data Centric Security? Content and context awareness! Why is that important? Because it provides the extra information needed to create real business usage policies, not just network security policies. It allows the data to be self-defending. You have the ability to provide much finer-grained controls for data. Policy distribution and enforcement are easier. Those are core values to Data Loss Prevention and Digital Rights Management, the two most common instantiations of Data Centric Security. Sure, device independence is cool too, but that is not really a customer problem.
Working with small startup firms, you desperately want to get noticed, and I have worked with many ultra-aggressive CEOs who want to latch onto every major security event as public justification of their product/service value. This form of “bandwagon jumping” is very enticing if your product is indeed a great way to address the problem, but you have to be careful, as it can backfire on you as well. While their web site does a good job of communicating what they do, this week’s Acunetix blog makes this mistake by tying their product value to addressing the SQL injection attacks (allegedly) used by Albert Gonzalez and others. I have no problem with the claims of the post, but the real value of Acunetix and similar firms is finding possible injection attacks before the general public does: during the development cycle. It’s proven cost-effective to do it that way. Once someone finds the vulnerability and the attack is in the wild, cleaning up the code is not the fastest fix, nor the most cost-effective, and certainly not the least disruptive to operations. Customers are wise to this, and defining your value too broadly costs you market credibility.
Anyway, sorry to pick on you guys, but you can do better. For all of you security technology geeks out there who smirked when you read “communicating value is hard”, have some sympathy for your marketing and product marketing teams, because the best technology is only occasionally the right customer solution.
Oh, once again, don’t forget that you can subscribe to the Friday Summary via email.
And now for the week in review:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Project Quant Posts
We are close to releasing the next round of Quant data, so stand by…
Favorite Outside Posts
Top News and Posts
Blog Comment of the Week
This week’s best comment comes from Jim Ivers in response to the We Know How Breaches Happen post:
Your analysis is spot on. Why should a cyber criminal go through the laborious effort to build a zero day attack when it is simple to spin up an exploit that picks off the multitude of unpatched and misconfigured endpoints available? Conficker used a known exploit as have many of the well publicized attacks. It is more glamorous to think of cyber criminals as evil geniuses building exotic attacks, but the collective lack of security discipline creates a path of least resistance that is easily taken.
I would suggest that there is some proof to support the customized malware vector when you look at the reports and blog posts from Symantec and McAfee regarding the geometric growth they are reporting in the number of signatures written. Both report writing more signatures in 2008 than they had written through 2007, and McAfee noted that they wrote twice as many signatures in the second half of 2008 as in the first half. But it is very likely that these were variants of known attacks with just enough difference to evade the signatures, rather than markedly new attacks.
Posted at Friday 28th August 2009 6:00 am
(0) Comments •
I just finished reading a TechTarget editorial by Bob Russo, General Manager of the PCI Council, in which he responded to an article by Eric Ogren. Believe it or not, I don’t intend this to be some sort of snarky anti-PCI post. I’m happy to see Mr. Russo responding directly to open criticism, and I’m hoping he will see this post and maybe we can also get a response.
I admit I’ve been highly critical of PCI in my past, but I now take the position that it is an overall positive development for the state of security. That said, I still consider it to be deeply flawed, and when it comes to payments it can never materially improve the security of a highly insecure transaction system (plain text data and magnetic stripe cards). In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it’s sometimes a distraction; and the checklist mentality reduces security in some environments, but overall I see it as a net positive.
Mr. Russo states:
It has always been the PCI Security Standards Council’s assertion that everyone in the payment chain, from (point-of-sale) POS manufacturers to e-shopping cart vendors, merchants to financial institutions, should play a role to keep payment information secure. There are many links in this chain – and each link must do their part to remain strong.
However, we will only be able to improve the security of the overall payment environment if we work together, globally. It is only by working together that we can combat data compromise and escape the blame game that is perpetuated post breach.
I agree completely with those statements, which leads to my questions.
- In your list of the payment chain you do not include the card companies. Don’t they also have responsibility for securing payment information and don’t they technically have the power to implement the most effective changes by improving the technical foundation of transactions?
- You have said in the past that no PCI compliant company has ever been breached. Since many of the breached organizations were certified as compliant, that appears to be either a false statement, or an indicator of a very flawed certification process. Do you feel the PCI process itself needs to be improved?
- Following up on question 2, if so, how does the PCI Council plan on improving the process to prevent compliant companies from being breached?
- Following up (again) on question 2, does this mean you feel that a PCI compliant company should be immune from security breaches? Is this really an achievable goal?
- One of the criticisms of PCI is that there seems to be a lack of accountability in the certification process. Do you plan on taking more effective actions to discipline or drop QSAs and ASVs that were negligent in their certification of non-compliant companies?
- Is the PCI Council considering controls to prevent “QSA shopping” where companies bounce around to find a QSA that is more lenient?
- QSAs can currently offer security services to clients that directly affect compliance. This is seen as a conflict of interest in all other major audit processes, such as financial audits. Will the PCI Council consider placing restrictions on these conflict of interest situations?
- Do you believe we will ever reach a state where a company that was certified as compliant is later breached, and the PCI Council will be willing to publicly back that company and uphold their certification? (I realize this relates again to question 2).
I know you may not be able to answer all of these, but I’ve tried to keep the questions fair and relevant to the PCI process without devolving into the blame game.
Posted at Thursday 27th August 2009 11:04 pm
(4) Comments •
Technically speaking, the market segment we are talking about is “Database Vulnerability Assessment”. You might have noticed that we titled this series “Database Assessment”. No, it was not just because the titles of these posts are too long (they are). The primary motivation for this name was to stress that this is not just about vulnerabilities and security. While the genesis of this market is security, compliance with regulatory mandates and operations policies is what drives the buying decisions, as noted in Part 2. (For easy reference, here are Part 1, Part 3, and Part 4.) In many ways, compliance and operational consistency are harder problems to solve because they require more work and tuning on your part, and that need for customization is our focus in this post.
In 4GL programming we talk about objects and instantiation. The concept of instantiation is to take a generic object and give it life; make it a real instance of the generic thing, with unique attributes and possibly behavior. You need to think about databases the same way: once started up, no two are alike. There may be two installations of DB2 that serve the same application, but they are run by different companies, store different data, are managed by different DBAs, have altered the base functions in various ways, run on different hardware, and have different configurations. This is why configuration tuning can be difficult: unlike vulnerability policies that detect specific buffer overflows or SQL injection attacks, operational policies are company-specific and are derived from best practices.
We have already listed a number of the common vulnerability and security policies. The following is a list of policies that apply to IT operations on the database environment or system:
Password requirements (lifespan, composition)
Data files (number, location, permissions)
Audit log files (presence, permissions, currency)
Product version (version control, patches)
Itemize (unneeded) functions
Database consistency checks (e.g., DBCC CHECKDB on SQL Server)
Statistics (statspack, auto-statistics)
Backup report (last, frequency, destination)
Error log generation and access
Segregation of admin role
Simultaneous admin logins
Ad hoc query usage
Discovery (databases, data)
Remediation instructions & approved patches
Stored procedures (list, last modified)
Changes (files, patches, procedures, schema, supporting functions)
There are a lot more, but these should give you an idea of the basics a vendor should have in place, and allow you to contrast with the general security and vulnerability policies we listed in section 4.
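To make this concrete, here is a minimal sketch of how one such operational policy might be expressed as data and evaluated against a database’s current settings. The field names, the mocked settings, and the check itself are illustrative, not any vendor’s actual schema.

```python
# One operational policy expressed as data, then evaluated against a
# database's (mocked) current settings.
policy = {
    "name": "Password lifespan",
    "description": "Passwords must expire within 90 days.",
    "severity": "medium",
    "check": lambda settings: settings["password_max_age_days"] <= 90,
    "remediation": "Reduce the password lifetime profile setting (site-specific).",
}

def evaluate(policy: dict, settings: dict) -> dict:
    """Run one policy check and report pass/fail plus remediation advice."""
    passed = policy["check"](settings)
    return {
        "policy": policy["name"],
        "passed": passed,
        "remediation": None if passed else policy["remediation"],
    }

result = evaluate(policy, {"password_max_age_days": 180})
print(result["passed"])  # False: a 180-day lifespan violates the 90-day policy
```

The point of expressing policies as data rather than hard-coded checks is exactly the customization issue discussed below: description, remediation steps, and the underlying check can each be edited independently for your environment.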
Most regulatory requirements, from industry or government, are fulfilled by the access control and system change policies we have already introduced. PCI adds a few extra requirements in the verification of security settings, access rights, and patch levels, but compliance policies are generally a subset of security rules and operational policies. As the list varies by regulation, and the requirements change over time, we are not going to list them separately here. Since compliance is likely what is motivating your purchase of database assessment, you must dig into vendor claims to verify they offer what you need. It gets tricky because some vendors tout compliance, for example “configuration compliance”, which only means you will be compliant with their list of accepted settings. These policies may not be endorsed by anyone other than the vendor, and may have only coincidental relevance to PCI or SOX. In their defense, most commercially available database assessment platforms are sufficiently evolved to offer packaged sets of relevant policies for regulatory compliance, industry best practices, and detection of security vulnerabilities across all database platforms. They offer sufficient breadth and depth to get you up and running very quickly, but you will need to verify your needs are met, and if not, what the deviation is.
What most of the platforms do not do very well is allow for easy policy customization, multiple policy groupings, policy revisions, and creating copies of the “out of the box” policies provided by the vendor. You need all of these features for day-to-day management, so let’s delve into each of these areas a little more. This leads into our next section on policy customization.
Remember how I said in Part 3 that “you are going to be most interested in evaluating assessment tools on how well they cover the policies you need”? That is true, but probably not for the reasons that you thought. What I deliberately omitted is that the policies you are interested in prior to product evaluation will not be the same policy set you are interested in afterwards. This is especially true for regulatory policies, which grow in number and change over time. Most DBAs will tell you that the steps a database vendor advises to remediate a problem may break your applications, so you will need a customized set of steps appropriate to your environment. Further, most enterprises have evolved database usage polices far beyond “best practices”, and greatly augment what the assessment vendor provides. This means both the set of policies, and the contents of the policies themselves, will need to change. And I am not just talking about criticality, but description, remediation, the underlying query, and the result set demanded to demonstrate adherence. As you learn more about what is possible, as you refine your internal requirements, or as auditor expectations evolve, you will experience continual drift in your policy set. Sure, you will have static vulnerability and security policies, but as the platform, process, and requirements change, your operations and compliance policy sets will be fluid. How easy it is to customize policies and manage policy sets is extremely important, as it directly affects the time and complexity required to manage the platform. Is it a minute to change a policy, or an hour? Can the auditor do it, or does it require a DBA? Don’t learn this after you have made your investment. On a day-to-day basis, this will be the single biggest management challenge you face, on par with remediation costs.
Policy Groupings & Separation of Duties
For any given rule, you have several different potential audiences who may be interested in the results. IT, internal audit, external audit, security, or the DBAs may need the results from the rule in their reports. Conversely, each of these audiences might not be interested, or might be affected by and thus disallowed from seeing the results from certain rules. For example, your SQL Server database group does not need Oracle results, internal audit reports need not contain all security settings, your European database staff may not be interested in US database reports, and separation of duties may require some information be blocked from some users. Managing and grouping policies into logical sets is very important, as the reports derived from the policy set must be specific to certain audiences. You need the ability to group according to function, location, regulatory requirements, security clearance, and so on. The ability to import, update, save different versions, and schedule one or more policy sets is mandatory for modern database assessment tools.
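The grouping idea above can be sketched as simple tag-based filtering. Policy names and tags here are hypothetical; real products generally ship richer grouping, versioning, and scheduling around the same core notion.

```python
# Each policy carries tags describing platform, function, and audience.
policies = [
    {"name": "SQL Server patch level", "tags": {"mssql", "ops"}},
    {"name": "Oracle listener config", "tags": {"oracle", "security"}},
    {"name": "PCI cardholder access",  "tags": {"mssql", "pci", "audit"}},
]

def policy_set(required_tags: set, blocked_tags: frozenset = frozenset()) -> list:
    """Select the policies one audience may see: every required tag must be
    present, and none of the blocked ones (separation of duties)."""
    return [p["name"] for p in policies
            if required_tags <= p["tags"] and not (blocked_tags & p["tags"])]

# The SQL Server team's report: their platform only, minus audit-only rules.
print(policy_set({"mssql"}, blocked_tags=frozenset({"audit"})))  # ['SQL Server patch level']
```

The same mechanism supports the location and clearance examples above: a European ops group gets `{"emea", "ops"}`, while rules tagged for internal audit stay out of their reports.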
If you take one thing away from this post it should be that you need to compare what policies are available from the vendor, what will you need to create, and how difficult that will be to accomplish. In the next post we will cover what you actually do with all the data you collect from the vulnerability, security, and operational policies. We will discuss reporting, scheduling, and integration with workflow and trouble ticket systems. We will also cover some of the more advanced topics having to do with platform management, scheduling, data storage, separation of assessment roles, and security of the assessment system itself.
Posted at Thursday 27th August 2009 9:55 pm
(0) Comments •
By Adrian Lane
One of my favorite posts of the last week, and one of the scariest, is Brian Krebs’ Washington Post article on Businesses Are Reluctant to Report Online Fraud. This is not a report on a single major bank heist, but instead what many of us have worried about for a long time in Internet fraud: automated, distributed, and repeatable theft. The worry has never been the single million-dollar theft, but scalable, repeatable theft of electronic funds. We are going to be hearing a lot more about this in the coming year. The question that will be discussed is who’s to blame in these situations? The customer, for having almost no security on their small business computer and being completely ignorant of basic security precautions? The bank, for having crummy authentication and fraud detection despite understanding the security threats as part of their business model? Is it contributory negligence? This issue will gain more national attention as more businesses have their bank say “too bad, your computer was hacked!” Let’s face it, the bank has your money. They are the scorekeeper, and if they say you withdrew your money, the burden of proof is on you to show they are wrong. And no one wants to make them mad for fear they might tell you to piss off. The lines of responsibility need to be drawn.
I feel like I am the last person in the U.S. to say this, but I don't do my banking online. Would it be convenient? Sure, but I think it's too risky. My bank account information? Not going to see a computer, or at least a computer I own, because I cannot afford to make a mistake. I asked a handful of security researchers I was having lunch with during Defcon – who know a heck of a lot more about web hacking than I do – if they did their banking online. They all said they did, saying "It's convenient." Me? I have to use my computer for research, and I am way too worried that I would make one simple mistake and be completely hosed and have to rebuild from scratch … after my checking account was cleaned out. In each of the last two years, the majority of the people I spoke with at Black Hat/Defcon … no, let's make that the overwhelming majority of the people I have spoken with overall, had an 'Oh $&(#' moment at the conference. At some point we said to ourselves "These threats are really bad!" Granted, many of the security researchers I spoke with take extraordinary precautions, but we need to recognize how badly the browsers and web apps we use every day are fundamentally broken from a security standpoint. We need to acknowledge that out of the box, PCs are insecure and the people who use them are willfully ignorant of security. I may be the last person with a computer who simply won't budge on this subject. I even get mad when the bank sends me a credit card that has ATM capabilities as a convenience for me. I did not ask for that 'feature' and I don't want the liability. While the banks keep sending me incentives and encouragements to do it, I think online banking remains too risky unless you have a dedicated machine. Maybe banks will start issuing smart tokens or other additional security measures to help, but right now the infrastructure appears broken to me.
Posted at Wednesday 26th August 2009 9:15 pm
(7) Comments •
I first started tracking data breaches back in December of 2000 when I received my very first breach notification email, from Egghead Software. When Egghead went bankrupt in 2001 and was acquired by Amazon, rather than assuming the breach caused the bankruptcy, I did some additional research and learned they were on a downward spiral long before their little security incident. This broke with the conventional wisdom floating around the security rubber-chicken circuit at the time, and was a fine example of the differences between correlation and causation.
Since then I’ve kept trying to translate what little breach material we’ve been able to get our collective hands on into as accurate a picture as possible on the real state of security. We don’t really have a lot to work with, despite the heroic efforts of the Open Security Foundation Data Loss Database (for a long time the only source on breach statistics). As with the rest of us, the Data Loss DB is completely reliant on public breach disclosures. Thanks to California S.B. 1386 and the mishmash of breach notification laws that have developed since 2005, we have a lot more information than we used to, but anyone in the security industry knows only a portion of breaches are reported (despite notification laws), and we often don’t get any details of how the intrusions occurred.
The problem with the Data Loss DB is that it’s based on incomplete information. They do their best, but more often than not we lack the real meat needed to make appropriate security and risk decisions. For example, we’ve seen plenty of vendor press releases on how lost laptops, backup tapes, and other media are the biggest source of data breaches. In reality, lost laptops and media are merely the greatest source of reported potential exposures. As I’ve talked about before, there is little or no correlation between these lost devices and any actual fraud. All those stats mean is a physical thing was lost or stolen… no more, no less, unless we find a case where we can correlate a loss with actual fraud.
On the research side I try to compensate for the statistics problem by taking more of a case study approach, as best I can using public resources. Even with the limited information released, as time passes we tend to dig up more and more details about breaches, especially once cases make it into court. That's how we know, for example, that both CardSystems and Heartland Payment Systems were breached (5 years apart) using SQL injection against a web application (the xp_cmdshell command in a poorly configured version of SQL Server, to be specific).
In the past year or two we’ve gained some additional data sources, most notably the Verizon Data Breach Investigations Report which provides real, anonymized data regarding breaches. It’s limited in that it only reflects those incidents where Verizon participated in the investigation, and by the standardized information they collected, but it starts to give us better insight beyond public breach reports.
Yet we still only have a fraction of the information we need to make appropriate risk management decisions. Even after 20 years in the security world (if you count my physical security work), I’m still astounded that the bad guys share more real information on means and methods than we do.
We are thus extremely limited in assessing macro trends in security breaches. We're forced to use far more anecdotal information than a skeptic like myself is comfortable with. We don't even have a standard for assessing breach costs (as I've proposed), never mind more accurate crime and investigative statistics that could help craft our prioritization of security defenses.
Seriously – decades into the practice of security we don’t have any fracking idea if forcing users to change passwords every 90 days provides more benefit than burden.
All that said, we can’t sit on our asses and wait for the data. As unscientific as it may be, we still need to decide which security controls to apply where and when.
In the past couple weeks we’ve seen enough information emerging that I believe we now have a good idea of two major methods of attack:
- As we discussed here on the blog, SQL injection via web applications is one of the top attack vectors identified in recent breaches. These attacks are not only against transaction processing systems, but are also used to gain a toehold on internal networks to execute more invasive attacks.
- Brian Krebs has identified another major attack vector, where malware is installed on insecure consumer and business PCs, then used to gather information to facilitate illicit account transfers. I’ve seen additional reports that suggest this is also a major form of attack.
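The SQL injection vector above is worth making concrete. Here is a minimal sketch, using Python's sqlite3 purely for illustration (the table, data, and input string are all invented), of the difference between concatenating attacker input into a statement and binding it as a parameter:

```python
import sqlite3

# Toy accounts table to demonstrate the injection; not a real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])

malicious = "nobody' OR '1'='1"

# Vulnerable: attacker input is concatenated straight into the statement,
# so the embedded quote breaks out and the 1=1 clause matches every row.
unsafe_rows = conn.execute(
    "SELECT user FROM accounts WHERE user = '%s'" % malicious
).fetchall()

# Safe: the driver binds the input strictly as data, never as SQL.
safe_rows = conn.execute(
    "SELECT user FROM accounts WHERE user = ?", (malicious,)
).fetchall()

print(len(unsafe_rows), len(safe_rows))  # 2 0
```

The same classic 1=1 trick, aimed at a web application's database tier, is the toehold described above; parameter binding closes this particular door regardless of platform.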
I’d love to back these with better statistics, but until those are available we have to rely on a mix of public disclosure and anecdotal information. We hear rumors of other vectors, such as customized malware (to avoid AV filters) and the ever-present-and-all-powerful insider threat, but there isn’t enough to validate those as a major trend quite yet.
If we look across all our sources, we see a consistent picture emerging. The vast majority of cybercrime still seems to take advantage of known vulnerabilities that can be addressed using common practices. The Verizon report certainly calls out unpatched systems, configuration errors, and default passwords as the most common breach sources.
While we can’t state with complete certainty that patching systems, blocking SQL injection, removing default passwords, and enforcing secure configurations will prevent most breaches, the information we have does indicate that’s a reasonable direction. Combine that with following the Data Breach Triangle by reducing use of sensitive data (and using something like DLP to find it), and tightening up egress filtering on transaction processing networks and other sensitive data locations, and you are probably in pretty good shape.
Financial institutions struggling with their clients being breached can add out-of-band transaction verification (phone calls or even automated text messages), and/or consider using something like Trusteer to help secure browser sessions (note – I mention them specifically only because I don't know of any competitors).
None of this necessarily correlates with other kinds of security incidents, but based on the various information sources we do have access to, it seems a reasonable understanding of current means and methods is emerging, and we know which security controls can mitigate those attacks. This is all based on an extremely small sample set, but unfortunately that’s all we have.
The bad guys will, of course, change attacks once the current batch becomes less profitable, but that’s the way the world works. There’s nothing we can possibly do that can eliminate every potential attack method.
It’s also possible all the public information and reports are steering us in the wrong direction, but we need to make the best decisions we can until new data emerges.
Hopefully this is helpful. I know my recommendations have started to change based on the information that’s come out in the past year.
Posted at Wednesday 26th August 2009 8:46 pm
(3) Comments •
By Adrian Lane
Understanding and Choosing a Database Assessment Solution, Part 4: Vulnerability and Security Policies
I was always fascinated by the Sapphire/Slammer worm. The simplicity of the attack and how quickly it spread were astounding. Sure, it didn't have a malicious payload, but the simple fact that it could have was enough to create quite a bit of panic. This event is what I consider the dawn of database vulnerability assessment tools. From that point on it seemed like every couple of weeks we were learning of new database vulnerabilities on every platform. Compliance may drive today's assessment purchase, but the vulnerabilities are always what grabs the media's attention, and vulnerability detection remains a key feature for any database security product.
Prior to writing this post I went back and looked at all the buffer overflow and SQL injection attacks on DB2, Oracle, and SQL Server. It struck me when looking at them – especially those on SQL Server – why half of the administrative functions had vulnerabilities: whoever wrote them assumed that the functions were inaccessible to anyone who was not a DBA. The functions were conceptually supposed to be gated by access control and therefore safe. It was not so much that the programmers were not thinking about security, but they made incorrect assumptions about how the database internals like the parser and preprocessor worked. I have always said that SQL injection is an attack on the database through an application. It's true, but technically the attacks are also getting through internal database processing layers prior to the exploit, as well as the external application layer. Looking back at the details it just seemed reasonable we would have these vulnerabilities, given the complexity of the database platforms and the lack of security training among software developers. Anyway, enough rambling about database security history.
Understanding database vulnerabilities and knowing how to remediate – whether through patches, workarounds, or third party detection tools – requires significant skill and training. Policy research is expensive, and so is writing and testing these policies. In my experience over the four years that I helped define and build database assessment policies, it would take an average of 3 days to construct a policy after a vulnerability was understood: a day to write and optimize the SQL test case, a day to create the description and put together remediation information, and another day to test on supported platforms. Multiply by 10 policies across 6 different platforms and you get an idea of the cost involved. Policy development requires a full-time team of skilled practitioners to manage and update vulnerability and security policies across the half dozen platforms commonly supported by the vendors. This is not a reasonable burden for non-security vendors to take on, so if database security is an issue, don't try to do this in-house! Buying an aftermarket product spares your organization from developing these checks, while protecting you from the specific threats hackers are likely to deploy, as well as more generic security threats.
What specific vulnerability checks should be present in your database assessment product? In a practical sense, it does not matter. Specific vulnerabilities come and go too fast for any list to be relevant. What I am going to do is provide a list of general security checks that should be present, and list the classes of vulnerabilities any product you evaluate should have policies for. Then I will cover other relevant buying criteria to consider.
General Database Security Policies
- List database administrator accounts and how they map to domain users.
- Product version (security patch level)
- List users with admin/special privileges
- List users with access to sensitive columns or data (credit cards, passwords)
- List users with access to system tables
- Database access audit (failed logins)
- Authentication method (domain, database, mixed)
- List locked accounts
- Listener / SQL Agent / UDP, network configuration (passwords in clear text, ports, use of named pipes)
- System tables (subset) not updatable
- Ownership chains
- Database links
- Sample Databases (Northwind, pubs, scott/tiger)
- Remote systems and data sources (remote trust relationships)
- Default Passwords
- Weak/blank/same as login passwords
- Public roles or guest accounts to anything
- External procedures (xp_cmdshell, active scripting, extproc, or any programmatic access to OS-level code)
- Buffer overflow conditions (XP, admin functions, Slammer/Sapphire, HEAP, etc. – too numerous to list)
- SQL Injection (1=1, most admin functions, temporary stored procedures, database name as code – too numerous to list)
- Network (Connection reuse, man in the middle, named pipe hijacking)
- Authentication escalation (XStatus / XP / SP, exploiting batch jobs, DTS leakage, remote access trust)
- Task injection (Webtasks, sp_xxx, MSDE service, reconfiguration)
- Registry access (SQL Server)
- DoS (named pipes, malformed requests, IN clause, memory leaks, page locks creating deadlocks)
There are many more. It is really important to understand that the total number of policies in any given product is irrelevant. As an example, let's assume that your database has two modules with buffer overflow vulnerabilities, and each can be exploited eight different ways. Comparing two assessment products, one might have 16 policies checking for each individual exploit, while the other could have two policies checking for the two vulnerable modules. These products are functionally equivalent, but one vendor touts an order of magnitude more policies, which provide no actual benefit. Do NOT let the number of policies influence your buying decision, and don't get bogged down in what I call a "policy escalation war". You need to compare functional equivalence, and realize that if one product can check for the same vulnerabilities in fewer queries, it runs faster! It may take a little work on your part to comb through the policies to make sure what you need is present, but you need to perform that inspection regardless.
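To give a flavor of what one of the general checks above looks like in practice, here is a toy sketch of a default/blank password policy check run against a mock system table in SQLite. The `logins` table, the account names, and the `KNOWN_DEFAULTS` list are all invented for illustration; a real credentialed scan queries the platform's actual catalogs (such as SQL Server's login tables or Oracle's user views) and compares password hashes rather than plaintext:

```python
import sqlite3

# Hypothetical default-credential list; real products ship thousands of pairs.
KNOWN_DEFAULTS = {("scott", "tiger"), ("sa", ""), ("system", "manager")}

# Mock 'system table' standing in for the real catalog a credentialed
# scan would query with a dedicated, mostly read-only account.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE logins (name TEXT, password TEXT)")
db.executemany("INSERT INTO logins VALUES (?, ?)",
               [("scott", "tiger"), ("sa", ""), ("appuser", "s3cret!")])

def check_default_passwords(conn):
    """Flag accounts using blank or well-known default credentials."""
    findings = []
    for name, pw in conn.execute("SELECT name, password FROM logins ORDER BY name"):
        if pw == "" or (name, pw) in KNOWN_DEFAULTS:
            findings.append(name)
    return findings

print(check_default_passwords(db))  # ['sa', 'scott']
```

Each shipped policy is essentially a pairing like this: a query against system metadata plus the logic and remediation text around it, which is exactly why hand-building them across six platforms is so expensive.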
You will want to carefully confirm that the assessment platform covers the database versions you have. And just because your company supposedly migrated to Oracle 11 some time back does not mean you get to discount Oracle 9 database support, because odds are better than even that you have at least one still hanging around. Or you don’t officially support SQL Server, but it just so happens that some of the applications you run have it embedded. Furthermore, mergers and acquisitions bring unexpected benefits, such as database platforms you did not previously have in house. Or plans to migrate off a current platform have a way of changing suddenly, with the database sticking around in your organization many years longer than anticipated. Broad database coverage should weigh heavily in your buying decision.
The currency of policies (at least for coverage of the latest vulnerabilities) is very important. Check to make sure the vendor has a solid track record of delivering policy updates no less than once a quarter. The database vendors typically release a security patch each quarter, so your assessment vendor should as well. A 'plan' to do so is insufficient and should be a warning signal. Press vendors for proof of delivery, such as release documentation or policy maintenance update announcements, to demonstrate consistent delivery of updated vulnerability policies.
Cross reference policies with the database vendor information. One of the interesting friction points between the database vendors and the vulnerability scanning vendors is the production of complete and detailed information on vulnerabilities. You need to have a clear and detailed explanation of each vulnerability to understand how a vulnerability affects your organization and what workarounds may be at your disposal. While assessment vendors are motivated to provide detailed information on the vulnerability itself, for whatever reason the database vendors tend to offer terse descriptions of the threats and corresponding patch data. Press the assessment vendor for detailed information, but keep in mind the database vendor must be considered the primary source of complete and accurate remediation information. It is wise, either during an evaluation or in production, to cross reference the information provided, and weigh how well each policy documents each threat.
Finally, database security advice comes from many different sources. The database vendors usually supply best practice checklists for free. The assessment products usually list the policies that they have developed over time from what they have learned in the field. There are other independent blogs, such as Pete Finnigan’s, that offer solid advice. Finally, most database vendors have regional user groups that share information on how to approach database security, which I have always found useful. Check to see if your assessment vendor has what you need, and they probably will given that the major data breaches as of this writing are leveraging the basic vulnerabilities. If you find something is missing, find out if your vendor can provide it for you. We will get into policy customization in more detail in the next post, as well as cover integration and policy set management topics.
This was supposed to be a short post on vulnerability and security best practices. Short because these two topics are not good indicators of how well a particular database assessment product will meet your needs. It may be interesting to a researcher like myself, but I realize this might be more information than you need or want. Data collection options discussed in part 3, alongside operations and compliance policies which we will discuss in part 5, have a greater bearing on how useful the product will be.
Posted at Wednesday 26th August 2009 12:48 am
(0) Comments •
I’m a pretty typical guy. I like beer, football, action movies, and power tools. I’ve never been overly interested in kids, even though I wanted them eventually. It isn’t that I don’t like kids, but until they get old enough to challenge me in Guitar Hero, they don’t exactly hold my attention. And babies? I suppose they’re cute, but so are puppies and kittens, and they’re actually fun to play with, and easier to tell apart.
This all, of course, changed when I had my daughter (just under 6 months ago). Oh, I still have no interest in anyone else’s baby, and until the past couple weeks was pretty paranoid about picking up the wrong one from daycare, but she definitely holds my attention better than (most) puppies. I suppose it’s weird that I always wanted kids, just not anyone else’s kids.
Riley is in one of those accelerated learning modes right now. It's fascinating to watch her eyes, expressions, and body language as she struggles to grasp the world around her (literally, anything within arm's reach + 10). Her powers of observation are frightening… kind of like a superpower of some sort. It's even more interesting when her mind is running ahead of her body as she struggles with a task she clearly understands, but doesn't have the muscle control to pull off. And when she's really motivated to get that toy/cat? You can see every synapse and sinew strain to achieve her goal with complete and utter focus. (The cats do that too, but only if it involves food or the birds that taunt them through the window.)
On the Ranting Roundtable a few times you hear us call security folks lazy or apathetic. We didn't mean everyone, and it's also a general statement that extends far beyond security. To be honest, most people, even hard working people, are pretty resistant to change; to doing things in new ways, even if they're better. In every industry I've ever worked, the vast majority of people didn't want to be challenged. Even in my paramedic and firefighter days people would gripe constantly about changes that affected their existing work habits. They might hop on some new car-crushing tool, but god forbid you change their shift structure or post-incident paperwork. And go take any CPR class these days, with the new procedures, and you'll hear a never-ending rant by the old timers who have no intention of changing how many stupid times they pump and blow per minute.
Not to over-do an analogy (well, that is what we analysts tend to do), but I wish more security professionals approached the world like my daughter. With intense observation, curiosity, adaptability, drive, and focus. Actually, she’s kind of like a hacker – drop her by something new, and her little hands start testing (and breaking) anything within reach. She’s constantly seeking new experiences and opportunities to learn, and I don’t think those are traits that have to stop once she gets older. No, not all security folks are lazy, but far too many lack the intellectual curiosity that’s so essential to success.
Security is the last fracking profession to join if you want stability or consistency. An apathetic, even if hardworking, security professional is as dangerous as he or she is worthless. That’s why I love security; I can’t imagine a career that isn’t constantly changing and challenging. I think it’s this curiosity and drive that defines ‘hacker’, no matter the color of the hat.
All security professionals should be hackers. (Despite that silly CISSP oath).
Don’t forget that you can subscribe to the Friday Summary via email.
And now for the week in review:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Project Quant Posts
We are close to releasing the next round of Quant data… so stand by…
Favorite Outside Posts
Top News and Posts
Blog Comment of the Week
This week’s best comment comes from Arthur in response to the New Details and Lessons on Heartland Breach post:
Great advice. Remember folks, that vulnerability scanning is more then just running Qualys or nessus, you need web app scanning tools and database scanning tools as well, to look for issues there as well. Similarly, you want to be looking for more then just vulns per se, but services and tools you don’t need (case in point xp_cmdshell stored procedures)
Posted at Friday 21st August 2009 5:44 am
(2) Comments •
Sometimes you just need to let it all out.
With all the recent events around breaches and PCI, I thought it might be cathartic to pull together a few of our favorite loudmouths and spend a little time in a no-rules roundtable. There’s a little bad language, a bit of ranting, and a little more productive discussion than I intended.
Joining me were Mike Rothman, Alex Hutton, Nick Selby, and Josh Corman. It runs about 50 minutes, and we mostly focus on PCI.
The Ranting Roundtable, PCI.
Odds are we’ll do more of these in the future. Even if you don’t like them, they’re fun for us.
No goats were harmed in the making of this podcast.
Posted at Thursday 20th August 2009 4:44 pm
(3) Comments •
In the first part of this series we introduced database assessment as a fully differentiated form of assessment scan, and in part two we discussed some of the use cases and business benefits database assessment provides. In this post we will begin dissecting the technology, and take a close look at the deployment options available. Whether and how your requirements are addressed is more a function of the way the product is implemented than the policies it contains. Architecturally, there is little variation in database assessment platforms. Most are two-tiered systems, either appliances or pure software, with the data storage and analysis engine located away from the target database server. Many vendors offer remote credentialed scans, with some providing an optional agent to assist with data collection issues we will discuss later. Things get interesting around how the data is collected, and that is the focus of this post.
As a customer, the most important criteria for evaluating assessment tools are how well they cover the policies you need, and how easily they integrate within your organization’s systems and processes. The single biggest technology factor to consider for both is how data is collected from the database system. Data collection methods dictate what information will be available to you – and as a direct result, what policies you will be able to implement. Further, how the scanner interacts with the database plays a deciding role in how you will deploy and manage the product. Obtaining and installing credentials, mapping permissions, agent installation and maintenance, secure remote sessions, separation of duties, and creation of custom policies are all affected by the data collection architecture.
Database assessment begins with the collection of database configuration information, and each vendor offers a slightly different combination of data collection capabilities. In this context, I am using the word ‘configuration’ in a very broad sense to cover everything from resource allocation (disk, memory, links, tablespaces), operational allocation (user access rights, roles, schemas, stored procedures), database patch levels, network, and features/functions that have been installed into the database system. Pretty much anything you could want to know about a database.
There are three ways to collect configuration and vulnerability information from a database system:
Credentialed Scanning: A credentialed database scan leverages a user account to gain access to the database system internals. Once logged into the system, the scanner collects configuration data by querying system tables and sending the results back to the scanner for analysis. The scan can be run over the network or through a local agent proxy – each provides advantages and disadvantages which we will discuss later. In both cases the scanner connects to the database communication port with the user credentials provided, in the same way as any other application. A credentialed database scan potentially has access to everything a database administrator would, and returns information that is not available outside the database. This method of collection is critical as it determines such settings as password expiration, administrative roles, active and locked user accounts, internal and external stored procedures, batch jobs, and database/domain user account mismatches. It is recommended that a dedicated account with (mostly) read-only permissions be issued for the vulnerability scanning team in case of a system/account compromise.
External Scanning (File & OS Inspection): This method of data collection deduces database configuration by examining settings outside the database. This type of scan may also require credentials, but not database user credentials. External assessment has two components: file system and operating system. Some but not all configuration information resides in files stored as part of the database installation. A file system assessment examines both contents and metadata of initialization and configuration files, to determine database setup – such as permissions on data files, network settings, and control file locations. In addition, OS utilities are used to discover vulnerabilities and security settings not determinable by examining files within the database installation. The user account the database system runs as, registry settings, and simultaneous administrator sessions are all examples of information accessible this way. While there is overlap between the data collected by credentialed and external scans, most of the information is distinct and relevant to different policies. Most traditional OS scanners which claim to offer database scanning provide this type of external assessment.
Network (Port) Inspection: In a port inspection, the scanner performs a mock connection to a database communication port; during the network 'conversation' either the database explicitly returns its type and revision, or the scanner deduces them from other characteristics of its response. Once the scanner understands the patch revision of the database, a simple cross reference against known vulnerabilities is generated. Older databases leak enough information that scanners can make educated guesses at configuration settings and installed features. This form of assessment is typically a "quick and dirty" scan that provides basic patch inspection with minimal overhead, without requiring agents or credentials. As network assessment lacks the user and feature assessments required by many security and audit groups, and as database vendors have blocked most of the information leakage from simple connections, this type of scan is falling out of favor.
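The cross-reference step in a network inspection is essentially a lookup: once the connection yields a product and build number, the scanner maps it to known advisories. A toy sketch follows; the banner format and the feed contents are placeholders rather than any real product's data (though CVE-2002-0649 is the actual Slammer advisory, and a real scanner would pull its feed from a maintained vulnerability database):

```python
# Placeholder advisory feed mapping (product, build) to known issues.
KNOWN_VULNS = {
    ("SQL Server", "8.00.194"): ["CVE-2002-0649 (Slammer/Sapphire)"],
    ("SQL Server", "8.00.760"): [],  # SP3-era build -- Slammer patched
}

def cross_reference(banner: str):
    """Parse a mock 'PRODUCT/BUILD' banner and look up known vulnerabilities."""
    product, _, build = banner.partition("/")
    return KNOWN_VULNS.get((product, build), ["unknown version: manual review"])

print(cross_reference("SQL Server/8.00.194"))
```

The weakness of the approach is visible even in the sketch: everything hinges on what the banner reveals, which is exactly the information modern databases have stopped leaking.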
There are other ways to collect information, including eavesdropping and penetration testing, but they are not reliable; additionally, penetration testing and exploitation can have catastrophic side-effects on production databases. In this series we will ignore other options.
The bulk of configuration and vulnerability data is obtained from the credentialed scans, so they should be the bare minimum of data collection techniques in any assessment you consider. To capture the complete picture of database setup and vulnerabilities, you need both a credentialed database scan and an inspection of the underlying platform the database is installed on. You can accomplish this by leveraging a different (possibly pre-existing) OS assessment scanning tool, or obtaining this information as part of your database assessment. In either case, this is where things get a little tricky, and require careful attention on your part to make sure you get the functions you need without introducing additional security problems.
Traditionally, database assessment products used external stored procedures or locally installed agents to collect both database internals and external configuration information. The problem is that each of these methods poses a serious security risk. External stored procedures are a classic technique for attackers to access or subvert a database system. They might start by getting into the underlying platform and then using external stored procedure calls to exercise database functions, or by gaining access to the database and then launching code on the underlying platform. Enabling functions like SQL Server's xp_cmdshell or Oracle's extproc is considered a critical security vulnerability, so they are no longer available to assessment products. Historically, agents have been used to address connectivity, network bandwidth, local policy analysis, secure communication, and various other concerns that are no longer relevant. Now their principal value is that they can launch both credentialed and external scans. That also means they provide a way for IT administrators to gain access to database credentials, and DBAs to access the underlying operating system. Multi-purpose agents with mixed credentials violate common security practices, both because they give attackers an avenue for breaching systems, and also because they violate separation of duties between administrative roles.
Not all products offer both credentialed and external scanning capabilities, so when in doubt, choose a credentialed scan – it will cover a greater number of security and compliance issues. If you do have the choice, pick a platform that does both securely, meaning a vendor that offers both should provide one of the following options:
- Use separate tools for internal and external data collection, and merge data on the back end inside the policy analysis or reporting tools. As many firms already have OS assessment in place, this is a cost effective yet slightly clumsy option.
- Deploy external database inspection scripts in ‘push’ mode, where the local software agent has the ability to execute file and OS scripts, but the results must be pushed out to the database assessment scanner. In this way the scanning tool can perform remote credentialed scans, but does not need to store both OS and database credentials.
- Audit your database assessment vendor’s platform to verify it offloads one or both sets of credentials to a third party access control service, and that there is proper separation of duties within the UI so access to internal and external scanning functions are not co-mingled.
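The second option above – a push-mode agent – can be sketched as follows. This is an illustration of the model, not any vendor's API: the local agent holds only OS-level rights and pushes its findings out to the assessment server, while the server holds only database credentials and scans inward over the network, so neither side ever has both sets of credentials.

```python
# Illustrative sketch of the 'push' deployment model: OS facts are pushed
# from the host; DB credentials never leave the assessment server.

def agent_collect_os_facts():
    """Runs on the database host with OS rights only -- no DB credentials."""
    return {"file_permissions_ok": True, "patch_level": "10.2.0.4"}

class AssessmentServer:
    """Holds database credentials; merges both data sets for reporting."""
    def __init__(self):
        self.reports = {}

    def receive_push(self, host, os_facts):
        self.reports.setdefault(host, {})["os"] = os_facts

    def credentialed_scan(self, host):
        # A real scan would log in remotely with stored DB credentials.
        self.reports.setdefault(host, {})["db"] = {"default_accounts": 0}

if __name__ == "__main__":
    server = AssessmentServer()
    server.receive_push("db01", agent_collect_os_facts())  # agent pushes out
    server.credentialed_scan("db01")                       # server scans in
    print(sorted(server.reports["db01"]))  # → ['db', 'os']
```

The point of the design is separation: compromising the agent yields no database credentials, and compromising the server yields no OS-level foothold on the host.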
There are other options, but none that we are aware of in commercially available products. As we said before, data collection has a significant impact on the policies you implement and how you manage the installation. Of all the technology aspects we will cover, this is the most important one, and data collection should be a focus in your product evaluation. Please make sure you understand this section and ask questions if something we have discussed is not clear, as it’s important for finding the right product.
One final note: we omitted credentialed database assessment scans as SaaS – they are not readily available at this time, but are expected soon.
Posted at Thursday 20th August 2009 2:17 pm
It’s not often, but every now and then there are people in our lives we can clearly identify as having had a massive impact on our careers. I don’t mean someone we liked working with, but someone who gave us that big break, opportunity, or push in the right direction that led us to where we are today.
In my case I know exactly who helped me make the transition from academia to the career I have today. I met Jim Brancheau while I was working at the University of Colorado as a systems and network administrator. He was an information systems professor in the College of Business, and some friends roped me into taking his class even though I was a history and molecular biology major. He liked my project on security, hired me to do some outside consulting with him, and eventually hired me full time after we both left the University. That company was acquired by Gartner, and the rest is history. Flat out, I wouldn’t be where I am today without Jim’s help.
Jim and I ended up on different teams at Gartner, and we both eventually left. After taking a few years off to ski and hike, Jim’s back in the analyst game focusing on smart grids and sustainability at Carbon Pros, and he’s currently researching and writing a new book for the corporate world on the topic. When he asked me to help out on the security side, it was an offer Karma wouldn’t let me refuse.
I covered energy/utilities and SCADA issues back in my Gartner days, but smart grids amplify those issues to a tremendous degree. Much of the research I’ve seen on security for smart grids has focused on metering systems, but the technologies are extending far beyond smarter meters into our homes, cars, and businesses. For example, Ford just announced a vehicle-to-grid communications system for hybrid and electric vehicles. Your car will literally talk to the grid when you plug it in, to enable features such as only charging at off-peak rates.
I highly recommend you read Jim’s series on smart grids and smart homes to get a better understanding of where we are headed. One example is opt-in programs that allow your power company to send signals to your house to change your thermostat settings when it needs to broadly reduce consumption during peak hours. That’s a consumer example, but we expect to see similar technologies adopted by the enterprise as well, in large part due to expected cost-savings incentives.
Thus when we talk about smart grids, we aren’t going to limit ourselves to next-gen power grid SCADA or bidirectional meters, but will try to frame the security issues for the larger ecosystem that’s developing. We also have to discuss legal and regulatory issues, such as the draft NIST and NERC/FERC standards, as well as technology transition issues (since legacy infrastructure isn’t going away anytime soon).
Jim kicked off our coverage with this post over at Carbon Pros, which introduces the security and privacy principles to the non-security audience. I’d like to add a little more depth in terms of how we frame the issue, and in future posts we’ll dig into these areas.
From a security perspective, we can think of a smart grid as five major components in two major domains. On the utilities side, there is power generation, transmission, and the customer (home or commercial) interface (where the wires drop from the pole to the wall). Within the utilities side there are essentially three overlapping networks – the business network (office, email, billing), the control process/SCADA network (control of generation and transmission equipment), and now, the emerging smart grid network (communications with the endpoint/user). Most work and regulation in recent years (the CIP requirements) have focused on defining and securing the “electronic security perimeter”, which delineates the systems involved in the process control side, including both legacy SCADA and IP-based systems.
In the past, I’ve advised utilities clients to limit the size and scope of their electronic security perimeter as much as possible to reduce both risks and compliance costs. I’ve even heard of some organizations that put air gaps back in place after originally co-mingling the business and process control networks to help reduce security and compliance costs. The smart grid potentially expands this perimeter by extending what’s essentially a third network, the smart grid network, to the meter in the residential or commercial site. That meter is thus the interface to the outside world, and has been the focus of much of the security research I’ve seen. There are clear security implications for the utility, ranging from fraud to distributed denial of generation attacks (imagine a million meters under-reporting usage all at the same time).
But the security domain also extends into the endpoint installation as it interfaces with the external side (the second domain) which includes the smart building/home network, and smart devices (as in refrigerators and cars). The security issues for residential and commercial consumers are different but related, and expand into privacy concerns. There could be fraud, denial of power, privacy breaches, and all sorts of other potential problems. This is compounded by the decentralization and diversity of smart technologies, including a mix of powerline, wireless, and IP tech.
In other words, smart grid security isn’t merely an issue for electric utilities – there are enterprise and consumer requirements that can’t be solely managed by your power company. They may take primary responsibility for the meter, but you’ll still be responsible for your side of the smart network and your usage of smart appliances.
On the upside, although there’s been some rapid movement on smart metering, we still have time to develop our strategies for management of our side (consumption) of smart energy technologies. I don’t think we will all be connecting our thermostats to the grid in the next few months, but there are clearly enterprise implications and we need to start investigating and developing strategies for smart grid management.
Thus as I start writing more about smart grid security we will split our focus out to talk about the differing strategies we’ll need on the utilities and consumption sides. Where possible I’ll tie into existing technologies, but much of this is under (rapid) development, so I may need to be a bit more generic than I like.
As a home automation and security geek I find this area absolutely fascinating. There is some very cool technology coming down the pipe, with benefits on all sides. And rather than flailing around over falling skies, there are some definite steps we can take to improve our security on both sides of the smart grid equation.
Posted at Wednesday 19th August 2009 9:43 pm
Thanks to an anonymous reader, we may have some additional information on how the Heartland breach occurred. Keep in mind that this isn’t fully validated information, but it does correlate with other information we’ve received, including public statements by Heartland officials.
On Monday we correlated the Heartland breach with a joint FBI/USSS bulletin that contained some in-depth details on the probable attack methodology. In public statements (and private rumors) it’s come out that Heartland was likely breached via a regular corporate system, and that hole was then leveraged to cross over to the better-protected transaction network.
According to our source, this is exactly what happened. SQL injection was used to compromise a system outside the transaction processing network segment. They used that toehold to start compromising vulnerable systems, including workstations. One of these internal workstations was connected by VPN to the transaction processing datacenter, which allowed them access to the sensitive information. These details were provided in a private meeting held by Heartland in Florida to discuss the breach with other members of the payment industry.
As with the SQL injection itself, we’ve seen these kinds of VPN problems before. The first NAC products I ever saw were for remote access – to help reduce the number of worms/viruses coming in from remote systems.
I’m not going to claim there’s an easy fix (okay, there is, patch your friggin’ systems), but here are the lessons we can learn from this breach:
- The PCI assessment likely focused on the transaction systems, network, and datacenter. With so many potential remote access paths, we can’t rely on external hardening alone to prevent breaches. For the record, I also consider this one of the top SCADA problems.
- Patch and vulnerability management is key – for the bad guys to exploit the VPN connected system, something had to be vulnerable (note – the exception being social engineering a system ‘owner’ into installing the malware manually).
- We can’t slack on vulnerability management – time after time this turns out to be the way the bad guys take control once they’ve busted through the front door with SQL injection. You need an ongoing, continuous patch and vulnerability management program. This is in every freaking security checklist out there, and is more important than firewalls, application security, or pretty much anything else.
- The bad guys will take the time to map out your network. Once they start owning systems, unless your transaction processing is absolutely isolated, odds are they’ll find a way to cross network lines.
- Don’t assume non-sensitive systems aren’t targets. Especially if they are externally accessible.
Okay – when you get down to it, all five of those points are practically the same thing.
Here’s what I’d recommend:
- Vulnerability scan everything. I mean everything, your entire public and private IP space.
- Focus on security patch management – seriously, do we need any more evidence that this is the single most important IT security function?
- Minimize sensitive data use and use heavy egress filtering on the transaction network, including some form of DLP. Egress filter any remote access, since that basically blows holes through any perimeter you might think you have.
- Someone will SQL inject any public facing system, and some of the internal ones. You’d better be testing and securing any low-value, public facing system since the bad guys will use that to get inside and go after the high value ones. Vulnerability assessments are more than merely checking patch levels.
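The first recommendation – scan everything, across your entire public and private IP space – starts with knowing exactly what that space contains. A minimal sketch of building that complete target list with Python's standard `ipaddress` module follows; the CIDR blocks are illustrative placeholders, and the resulting list would feed whatever vulnerability scanner you actually use.

```python
# Sketch: expand address blocks into a complete target list so nothing
# gets skipped. The CIDRs shown are RFC 5737 documentation ranges.
import ipaddress

def scan_targets(cidrs):
    """Expand CIDR blocks into individual host addresses."""
    targets = []
    for cidr in cidrs:
        targets.extend(str(h) for h in ipaddress.ip_network(cidr).hosts())
    return targets

if __name__ == "__main__":
    blocks = ["192.0.2.0/29", "198.51.100.0/30"]
    print(len(scan_targets(blocks)))  # → 8 (6 hosts + 2 hosts)
```

The discipline this enforces is the point: the target list is derived from your address plan, not from a hand-maintained inventory of systems someone remembered to register.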
Posted at Wednesday 19th August 2009 8:53 pm
By Adrian Lane
If you were looking for a business justification for database assessment, the joint USSS/FBI advisory referenced in Rich’s last post on Recent Breaches should be more than sufficient. What you are looking at is not a checklist of exotic security measures, but fairly basic security that should be implemented in every production database. All of the preventative controls listed in the advisory are, for the most part, addressed with database assessment scanners. Detection of known SQL injection vulnerabilities, detection of enabled external stored procedures like xp_cmdshell, and detection of avenues for obtaining Windows credentials from a compromised database server (or vice-versa) are basic policies included with all database vulnerability scanners – some freely available for download. It is amazing that large firms like Heartland, Hannaford, and TJX – which rely on databases for core business functions – get basic database security so wrong. These attacks are a template for anyone who cares to break into your database servers. If you don’t think you are a target because you are not storing credit card numbers, think again! There are plenty of ways for attackers to make money or commit fraud by extracting or altering the contents of your databases. As a very basic first step, scan your databases!
Adoption of database-specific assessment technologies has been sporadic outside the finance vertical because providing business justification is not always simple. For one, many firms already have generic forms of assessment and inaccurately believe they already have that function covered. If they do discover missing policies, they often get the internal DBA staff to paper over the gaps with homegrown SQL queries. As an example of what I mean, I want to share one story about a customer who was inspecting database configurations as part of their internal audit process. They had about 18 checks, mostly having to do with user permissions, and these settings formed part of the SOX and GLBA controls. What took me by surprise was the customer’s process: twice a year a member of the internal audit staff walked from database server to database server, logged in, ran the SQL queries, captured the results, and then moved on to the other 12 systems. When finished, all of the results were dumped into a formatting tool so the control reports could be made ready for KPMG’s visit. Twice a year she made the rounds, each time taking a day to collect the data and a day to produce the reports. When KPMG advised that the reports be run quarterly, the task came to be seen as a burden, and only then did the cost in lost productivity warrant a search for automation. Their expectation going in was simply that the product should not cost much more than a week or two of employee time.
Where it got interesting was when we began the proof of concept – it turned out several other groups had been manually running scripts and had much the same problem. We polled other organizations across the company, and found similar requirements from internal audit, security, IT management, and DBAs alike. Not only was each group already performing a small but critical set of security and compliance tasks, they each had another list of things they would like to accomplish. While no single group could justify the expense, taken together it was easy to see how automation saved on manpower alone. We then multiplied the work across dozens, or in some cases thousands of databases – and discovered there had been ample financial justification all along. Each group might have been motivated by compliance, operations efficiency, or threat mitigation, but as their work required separation of duties, they had not cooperated on obtaining tools to solve a shared problem. Over time, we found this customer example to be fairly common.
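The manual rounds described above are exactly what assessment products automate: run the same set of control checks against every server and roll the results into one report. Here is a minimal sketch of that loop, using sqlite3 so the example is self-contained; a real deployment would use Oracle or SQL Server connections, and the check, table, and server names below are all made up for illustration.

```python
# Sketch of automating periodic control checks across many databases.
# sqlite3 stands in for real database connections.
import sqlite3

CHECKS = {
    # control name -> (query, expected result) -- illustrative only
    "no_inactive_admins": (
        "SELECT COUNT(*) FROM users WHERE role='admin' AND active=0", 0),
}

def run_checks(conn):
    """Evaluate every check against one database connection."""
    results = {}
    for name, (query, expected) in CHECKS.items():
        actual = conn.execute(query).fetchone()[0]
        results[name] = "PASS" if actual == expected else "FAIL"
    return results

def audit(servers):
    """servers: mapping of server name -> open DB connection."""
    return {host: run_checks(conn) for host, conn in servers.items()}

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT, active INTEGER)")
    conn.execute("INSERT INTO users VALUES ('old_dba', 'admin', 0)")
    print(audit({"finance-db": conn}))
    # → {'finance-db': {'no_inactive_admins': 'FAIL'}}
```

Once the checks live in one place, running them quarterly across dozens of servers costs nothing extra – which is precisely the economics that finally justified the purchase in the story above.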
When considering the business justification for investment in database assessment, you are unlikely to find any single irresistible reason you need the technology. You may read product marketing claims such as “Because you are compelled by compliance mandate GBRSH 509 to secure your database”, or some such nonsense, but it is simply not true. There are security and regulatory requirements that compel certain database settings, but nothing that mandates automation. There are, however, two very basic reasons to automate the assessment process: the scope of the task, and the accuracy of the results. The depth and breadth of issues to address are beyond the skill of any single audience for assessment. Let’s face it: the changes in database security issues alone are difficult to keep up with – much less compliance, operations, and evolutionary changes to the database platform itself. Coupled with the boring and repetitive nature of running these scans, it’s ripe territory for shortcuts and human error.
When considering a database assessment solution, the following are common market drivers for adoption. If your company has more than a couple databases, odds are all of these factors will apply to your situation:
- Configuration Auditing for Compliance: Periodic reports on database configuration and setup are needed to demonstrate adherence to internal standards and regulatory requirements. Most platforms offer policy bundles tuned for specific regulations such as PCI, Sarbanes-Oxley, and HIPAA.
- Security: Fast and effective identification of known security issues and deviations from company and industry best practices, with specific remediation advice.
- Operational Policy Enforcement: Verification of work orders, operational standards, patch levels, and approved methods of remediation are valuable (and possibly required).
There are several ways this technology can be applied to promote and address the requirements above, including:
- Automated verification of compliance and security settings across multiple heterogeneous database environments.
- Consistency in database deployment across the organization, especially important for patch and configuration management, as well as detection and remediation of exploits commonly used to gain access.
- Centralized policy management so that a single policy can be applied across multiple (possibly geographically dispersed) locations.
- Separation of duties between IT, audit, security, and database administration personnel.
- Non-technical stakeholder usage, suitable for auditors and security professionals without detailed knowledge of database internals. Assessment platforms act as a bridge between policy and enforcement, or verify compliance for a non-technical audience.
- Reduction in development time, removing the burden of code and script development from DBAs and internal staff.
- Integration with existing reporting, workflow and trouble-ticketing systems. Assessment is only useful if the data gets into the right hands and can be acted upon.
I am really happy that we are getting some of the details from the indictment on how these database breaches were carried out. It should be a wake-up call for companies to verify their baseline security, and sufficient incentive for you to go out and evaluate database assessment technologies.
Posted at Wednesday 19th August 2009 12:59 am