Database Assessment Solutions, Part 5: Operations and Compliance Policies

Technically speaking, the market segment we are talking about is “Database Vulnerability Assessment”. You might have noticed that we titled this series “Database Assessment”. No, it was not just because the titles of these posts are too long (they are). The primary motivation for this name was to stress that this is not just about vulnerabilities and security. While the genesis of this market is security, compliance with regulatory mandates and operational policies is what drives buying decisions, as noted in Part 2. (For easy reference, here are Part 1, Part 3, and Part 4.) In many ways, compliance and operational consistency are harder problems to solve because they require more work and tuning on your part, and that need for customization is our focus in this post.

In 4GL programming we talk about objects and instantiation. The concept of instantiation is to take a generic object and give it life: make it a real instance of the generic thing, with unique attributes and possibly behavior. You need to think about databases the same way, because once started up, no two are alike. There may be two installations of DB2 that serve the same application, but they are run by different companies, store different data, are managed by different DBAs, have altered the base functions in various ways, run on different hardware, and have different configurations. This is why configuration tuning can be difficult: unlike vulnerability policies that detect specific buffer overflows or SQL injection attacks, operational policies are company specific and derived from best practices.

We have already listed a number of the common vulnerability and security policies. The following is a list of policies that apply to IT operations on the database environment or system:

Operations Policies

• Password requirements (lifespan, composition)
• Data files (number, location, permissions)
• Audit log files (presence, permissions, currency)
• Product version (version control, patches)
• Itemize (unneeded) functions
• Database consistency checks (e.g., DBCC CHECKDB on SQL Server)
• Statistics (statspack, auto-statistics)
• Backup report (last, frequency, destination)
• Error log generation and access
• Segregation of the admin role
• Simultaneous admin logins
• Ad hoc query usage
• Discovery (databases, data)
• Remediation instructions & approved patches
• Orphaned databases
• Stored procedures (list, last modified)
• Changes (files, patches, procedures, schema, supporting functions)

There are a lot more, but these should give you an idea of the basics a vendor should have in place, and allow you to contrast with the general security and vulnerability policies we listed in Part 4.

Compliance Policies

Most regulatory requirements, from industry or government, are fulfilled by the access control and system change policies we have already introduced. PCI adds a few extra requirements around verification of security settings, access rights, and patch levels, but compliance policies are generally a subset of security rules and operational policies. As the list varies by regulation, and the requirements change over time, we are not going to list them separately here. Since compliance is likely what is motivating your purchase of database assessment, you must dig into vendor claims to verify they offer what you need. It gets tricky because some vendors tout compliance – for example “configuration compliance” – which only means you will be compliant with their list of accepted settings.
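To make this more concrete, here is a minimal sketch of the kind of query that sits behind one of the operations policies listed above – in this case the password requirements check – written for SQL Server. This is an illustration only, not a policy from any particular vendor’s product; a commercial tool would wrap a query like this with a description, severity rating, and remediation text, and an organization might customize the thresholds.

    -- Example operational policy check (SQL Server): flag SQL logins that do not
    -- enforce the OS password policy or password expiration.
    SELECT name,
           is_policy_checked,
           is_expiration_checked
    FROM   sys.sql_logins
    WHERE  is_policy_checked = 0
        OR is_expiration_checked = 0;
    -- Any rows returned represent logins that fail the password requirements policy.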
A vendor’s list of accepted settings may not be endorsed by anyone other than that vendor, and may have only coincidental relevance to PCI or SOX. In their defense, most commercially available database assessment platforms are sufficiently evolved to offer packaged sets of relevant policies for regulatory compliance, industry best practices, and detection of security vulnerabilities across all major database platforms. They offer sufficient breadth and depth to get you up and running very quickly, but you will need to verify your needs are met – and if not, what the deviation is. What most of the platforms do not do very well is allow for easy policy customization, multiple policy groupings, policy revisions, and creating copies of the “out of the box” policies provided by the vendor. You need all of these features for day-to-day management, so let’s delve into each of these areas a little more, starting with policy customization.

Policy Customization

Remember how I said in Part 3 that “you are going to be most interested in evaluating assessment tools on how well they cover the policies you need”? That is true, but probably not for the reasons you thought. What I deliberately omitted is that the policies you are interested in prior to product evaluation will not be the same policy set you are interested in afterwards. This is especially true for regulatory policies, which grow in number and change over time. Most DBAs will tell you that the steps a database vendor advises to remediate a problem may break your applications, so you will need a customized set of steps appropriate to your environment. Further, most enterprises have evolved database usage policies far beyond “best practices”, and greatly augment what the assessment vendor provides. This means both the set of policies, and the contents of the policies themselves, will need to change. And I am not just talking about criticality, but description, remediation, the underlying query, and the result set demanded to demonstrate adherence. As you learn more about what is possible, as you refine your internal requirements, or as auditor expectations evolve, you will experience continual drift in your policy set. Sure, you will have static vulnerability and security policies, but as the platform, process, and requirements change, your operations and compliance policy sets will be fluid. How easy it is to customize policies and manage policy sets is extremely important, as it directly affects the time and complexity required to manage the platform. Is it a minute to change a policy, or an hour? Can the auditor do it, or does it require a DBA? Don’t learn this after you have made your investment. On a day-to-day basis, this will be the single biggest management challenge you face, on par with remediation costs.

Policy Groupings & Separation of Duties

For


Some Follow-Up Questions for Bob Russo, General Manager of the PCI Council

I just finished reading a TechTarget editorial by Bob Russo, the General Manager of the PCI Council, in which he responded to an article by Eric Ogren. Believe it or not, I don’t intend this to be some sort of snarky anti-PCI post. I’m happy to see Mr. Russo responding directly to open criticism, and I’m hoping he will see this post and maybe we can also get a response. I admit I’ve been highly critical of PCI in the past, but I now take the position that it is an overall positive development for the state of security. That said, I still consider it to be deeply flawed, and when it comes to payments it can never materially improve the security of a highly insecure transaction system (plain text data and magnetic stripe cards). In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it’s sometimes a distraction, and the checklist mentality reduces security in some environments, but overall I see it as a net positive. Mr. Russo states:

It has always been the PCI Security Standards Council’s assertion that everyone in the payment chain, from (point-of-sale) POS manufacturers to e-shopping cart vendors, merchants to financial institutions, should play a role to keep payment information secure. There are many links in this chain – and each link must do their part to remain strong.

and

However, we will only be able to improve the security of the overall payment environment if we work together, globally. It is only by working together that we can combat data compromise and escape the blame game that is perpetuated post breach.

I agree completely with those statements, which leads to my questions:

1. In your list of the payment chain you do not include the card companies. Don’t they also have responsibility for securing payment information, and don’t they technically have the power to implement the most effective changes by improving the technical foundation of transactions?
2. You have said in the past that no PCI compliant company has ever been breached. Since many of the breached organizations were certified as compliant, that appears to be either a false statement, or an indicator of a very flawed certification process. Do you feel the PCI process itself needs to be improved?
3. Following up on question 2: if so, how does the PCI Council plan on improving the process to prevent compliant companies from being breached?
4. Following up (again) on question 2: does this mean you feel that a PCI compliant company should be immune from security breaches? Is this really an achievable goal?
5. One of the criticisms of PCI is that there seems to be a lack of accountability in the certification process. Do you plan on taking more effective actions to discipline or drop QSAs and ASVs that were negligent in their certification of non-compliant companies?
6. Is the PCI Council considering controls to prevent “QSA shopping”, where companies bounce around to find a QSA that is more lenient?
7. QSAs can currently offer security services to clients that directly affect compliance. This is seen as a conflict of interest in all other major audit processes, such as financial audits. Will the PCI Council consider placing restrictions on these conflict of interest situations?
8. Do you believe we will ever reach a state where a company that was certified as compliant is later breached, and the PCI Council will be willing to publicly back that company and uphold their certification? (I realize this relates again to question 2.)

I know you may not be able to answer all of these, but I’ve tried to keep the questions fair and relevant to the PCI process without devolving into the blame game. Thank you.


We Know How Breaches Happen

I first started tracking data breaches back in December of 2000, when I received my very first breach notification email, from Egghead Software. When Egghead went bankrupt in 2001 and was acquired by Amazon, rather than assuming the breach caused the bankruptcy, I did some additional research and learned they were on a downward spiral long before their little security incident. This broke with the conventional wisdom floating around the security rubber-chicken circuit at the time, and was a fine example of the difference between correlation and causation. Since then I’ve kept trying to translate what little breach material we’ve been able to get our collective hands on into as accurate a picture as possible of the real state of security.

We don’t really have a lot to work with, despite the heroic efforts of the Open Security Foundation Data Loss Database (for a long time the only source of breach statistics). As with the rest of us, the Data Loss DB is completely reliant on public breach disclosures. Thanks to California S.B. 1386 and the mishmash of breach notification laws that have developed since 2005, we have a lot more information than we used to, but anyone in the security industry knows only a portion of breaches are reported (despite notification laws), and we often don’t get any details of how the intrusions occurred. The problem with the Data Loss DB is that it’s based on incomplete information. They do their best, but more often than not we lack the real meat needed to make appropriate security and risk decisions. For example, we’ve seen plenty of vendor press releases on how lost laptops, backup tapes, and other media are the biggest source of data breaches. In reality, lost laptops and media are merely the greatest source of reported potential exposures. As I’ve talked about before, there is little or no correlation between these lost devices and any actual fraud. All those stats mean is that a physical thing was lost or stolen… no more, no less, unless we find a case where we can correlate a loss with actual fraud.

On the research side I try to compensate for the statistics problem by taking more of a case study approach, as best I can using public resources. Even with the limited information released, as time passes we tend to dig up more and more details about breaches, especially once cases make it into court. That’s how we know, for example, that both CardSystems and Heartland Payment Systems were breached (5 years apart) using SQL injection against a web application (the xp_cmdshell command in a poorly configured version of SQL Server, to be specific). In the past year or two we’ve gained some additional data sources, most notably the Verizon Data Breach Investigations Report, which provides real, anonymized data regarding breaches. It’s limited in that it only reflects those incidents where Verizon participated in the investigation, and by the standardized information they collect, but it starts to give us better insight beyond public breach reports. Yet we still only have a fraction of the information we need to make appropriate risk management decisions. Even after 20 years in the security world (if you count my physical security work), I’m still astounded that the bad guys share more real information on means and methods than we do. We are thus extremely limited in assessing macro trends in security breaches. We’re forced to use far more anecdotal information than a skeptic like myself is comfortable with.
We don’t even have a standard for assessing breach costs (as I’ve proposed), never mind more accurate crime and investigative statistics that could help craft our prioritization of security defenses. Seriously – decades into the practice of security, we don’t have any fracking idea if forcing users to change passwords every 90 days provides more benefit than burden. All that said, we can’t sit on our asses and wait for the data. As unscientific as it may be, we still need to decide which security controls to apply where and when. In the past couple weeks we’ve seen enough information emerging that I believe we now have a good idea of two major methods of attack:

• As we discussed here on the blog, SQL injection via web applications is one of the top attack vectors identified in recent breaches. These attacks are not only against transaction processing systems, but are also used to gain a toehold on internal networks to execute more invasive attacks.
• Brian Krebs has identified another major attack vector, where malware is installed on insecure consumer and business PCs, then used to gather information to facilitate illicit account transfers. I’ve seen additional reports that suggest this is also a major form of attack.

I’d love to back these with better statistics, but until those are available we have to rely on a mix of public disclosure and anecdotal information. We hear rumors of other vectors, such as customized malware (to avoid AV filters) and the ever-present-and-all-powerful insider threat, but there isn’t enough to validate those as major trends quite yet. If we look across all our sources, we see a consistent picture emerging. The vast majority of cybercrime still seems to take advantage of known vulnerabilities that can be addressed using common practices. The Verizon report certainly calls out unpatched systems, configuration errors, and default passwords as the most common breach sources. While we can’t state with complete certainty that patching systems, blocking SQL injection, removing default passwords, and enforcing secure configurations will prevent most breaches, the information we have does indicate that’s a reasonable direction. Combine that with following the Data Breach Triangle by reducing use of sensitive data (and using something like DLP to find it), and tightening up egress filtering on transaction processing networks and other sensitive data locations, and you are probably in pretty good shape. Financial institutions struggling with their clients being breached can add out-of-band transaction verification (phone calls or even automated text messages),


Database Assessment Solutions, Part 4: Vulnerability and Security Policies

I was always fascinated by the Sapphire/Slammer worm. The simplicity of the attack and how quickly it spread were astounding. Sure, it didn’t have a malicious payload, but the simple fact that it could have carried one created quite a bit of panic. This event is what I consider the dawn of database vulnerability assessment tools. From that point on it seemed like every couple of weeks we were learning of new database vulnerabilities on every platform. Compliance may drive today’s assessment purchase, but vulnerabilities are what grab the media’s attention, and vulnerability detection remains a key feature of any database security product. Prior to writing this post I went back and looked at all the buffer overflow and SQL injection attacks on DB2, Oracle, and SQL Server. It struck me when looking at them – especially those on SQL Server – why half of the administrative functions had vulnerabilities: whoever wrote them assumed that the functions were inaccessible to anyone who was not a DBA. The functions were conceptually supposed to be gated by access control, and therefore safe. It was not so much that the programmers were not thinking about security, but they made incorrect assumptions about how database internals like the parser and preprocessor worked. I have always said that SQL injection is an attack on the database through an application. That’s true, but technically the attacks also pass through internal database processing layers prior to the exploit, as well as the external application layer. Looking back at the details, it just seems reasonable that we would have these vulnerabilities, given the complexity of the database platforms and the lack of security training among software developers.

Anyway, enough rambling about database security history. Understanding database vulnerabilities and knowing how to remediate them – whether through patches, workarounds, or third party detection tools – requires significant skill and training. Policy research is expensive, and so is writing and testing these policies. In my experience over the four years that I helped define and build database assessment policies, it took an average of three days to construct a policy after a vulnerability was understood: a day to write and optimize the SQL test case, a day to create the description and put together remediation information, and another day to test on supported platforms (a minimal example of such a test case appears at the end of this post). Multiply by 10 policies across 6 different platforms and you get an idea of the cost involved. Policy development requires a full-time team of skilled practitioners to manage and update vulnerability and security policies across the half dozen platforms commonly supported by the vendors. This is not a reasonable burden for non-security vendors to take on, so if database security is an issue, don’t try to do this in-house! Buying an aftermarket product spares your organization from developing these checks, protecting you from the specific threats hackers are likely to deploy, as well as more generic security threats. What specific vulnerability checks should be present in your database assessment product? In a practical sense, it does not matter. Specific vulnerabilities come and go too fast for any list to stay relevant. What I am going to do is provide a list of general security checks that should be present, and list the classes of vulnerabilities any product you evaluate should have policies for.
Then I will cover other relevant buying criteria to consider.

General Database Security Policies

• List database administrator accounts and how they map to domain users
• Product version (security patch level)
• List users with admin/special privileges
• List users with access to sensitive columns or data (credit cards, passwords)
• List users with access to system tables
• Database access audit (failed logins)
• Authentication method (domain, database, mixed)
• List locked accounts
• Listener / SQL Agent / UDP, network configuration (passwords in clear text, ports, use of named pipes)
• System tables (subset) not updatable
• Ownership chains
• Database links
• Sample databases (Northwind, pubs, scott/tiger)
• Remote systems and data sources (remote trust relationships)

Vulnerability Classes

• Default passwords
• Weak/blank/same-as-login passwords
• Public roles or guest accounts with access to anything
• External procedures (CmdExec, xp_cmdshell, active scripting, extproc, or any programmatic access to OS-level code)
• Buffer overflow conditions (XP, admin functions, Slammer/Sapphire, HEAP, etc. – too numerous to list)
• SQL injection (1=1, most admin functions, temporary stored procedures, database name as code – too numerous to list)
• Network (connection reuse, man in the middle, named pipe hijacking)
• Authentication escalation (XStatus / XP / SP, exploiting batch jobs, DTS leakage, remote access trust)
• Task injection (Webtasks, sp_xxx, MSDE service, reconfiguration)
• Registry access (SQL Server)
• DoS (named pipes, malformed requests, IN clause, memory leaks, page locks creating deadlocks)

There are many more. It is really important to understand that the total number of policies in any given product is irrelevant. As an example, let’s assume that your database has two modules with buffer overflow vulnerabilities, and each has eight different ways to exploit it. Comparing two assessment products, one might have 16 policies checking for each individual exploit, while the other has two policies checking for the two vulnerable modules. These products are functionally equivalent, but one vendor touts an order of magnitude more policies, which provide no additional benefit. Do NOT let the number of policies influence your buying decision, and don’t get bogged down in what I call a “policy escalation war”. You need to compare functional equivalence – and realize that if one product can check for more vulnerabilities in fewer queries, it runs faster! It may take a little work on your part to comb through the policies to make sure what you need is present, but you need to perform that inspection regardless. You will want to carefully confirm that the assessment platform covers the database versions you have. And just because your company supposedly migrated to Oracle 11 some time back does not mean you get to discount Oracle 9 database support, because odds are better than even that you have at least one still hanging around. Or you
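As promised above, here is a minimal sketch of the sort of SQL test case that sits behind a single policy – in this case, checking whether the xp_cmdshell external procedure is enabled on SQL Server 2005 or later. It is purely illustrative and not taken from any particular vendor’s policy set; a shipping policy would pair the query with a description, severity, and remediation steps.

    -- Illustrative vulnerability/configuration check (SQL Server 2005+):
    -- xp_cmdshell allows OS command execution and should normally be disabled.
    SELECT name,
           value_in_use
    FROM   sys.configurations
    WHERE  name = 'xp_cmdshell'
      AND  value_in_use = 1;
    -- A returned row means xp_cmdshell is enabled, and the check fails.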


Understanding and Choosing a Database Assessment Solution, Part 3: Data Collection

In the first part of this series we introduced database assessment as a fully differentiated form of assessment scan, and in part two we discussed some of the use cases and business benefits database assessment provides. In this post we will begin dissecting the technology, and take a close look at the deployment options available. Whether and how your requirements are addressed is more a function of the way the product is implemented than of the policies it contains. Architecturally, there is little variation among database assessment platforms. Most are two-tiered systems, either appliances or pure software, with the data storage and analysis engine located away from the target database server. Many vendors offer remote credentialed scans, with some providing an optional agent to assist with the data collection issues we will discuss later. Things get interesting around how the data is collected, and that is the focus of this post.

As a customer, the most important criteria for evaluating assessment tools are how well they cover the policies you need, and how easily they integrate with your organization’s systems and processes. The single biggest technology factor for both is how data is collected from the database system. Data collection methods dictate what information will be available to you – and as a direct result, what policies you will be able to implement. Further, how the scanner interacts with the database plays a deciding role in how you will deploy and manage the product. Obtaining and installing credentials, mapping permissions, agent installation and maintenance, secure remote sessions, separation of duties, and creation of custom policies are all affected by the data collection architecture.

Database assessment begins with the collection of database configuration information, and each vendor offers a slightly different combination of data collection capabilities. In this context, I am using the word ‘configuration’ in a very broad sense, covering everything from resource allocation (disk, memory, links, tablespaces), operational allocation (user access rights, roles, schemas, stored procedures), and database patch levels, to network settings and the features/functions that have been installed into the database system. Pretty much anything you could want to know about a database. There are three ways to collect configuration and vulnerability information from a database system:

Credentialed Scanning: A credentialed database scan leverages a user account to gain access to the database system internals. Once logged into the system, the scanner collects configuration data by querying system tables and sending the results back for analysis. The scan can be run over the network or through a local agent proxy – each provides advantages and disadvantages which we will discuss later. In both cases the scanner connects to the database communication port with the user credentials provided, in the same way as any other application. A credentialed database scan potentially has access to everything a database administrator would, and returns information that is not available outside the database. This method of collection is critical, as it captures settings such as password expiration, administrative roles, active and locked user accounts, internal and external stored procedures, batch jobs, and database/domain user account mismatches.
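To illustrate, here is a minimal sketch of the kind of system-table query a credentialed scan might run – in this case enumerating which logins hold the sysadmin role on SQL Server. It shows the technique, not the actual query any particular product uses, and the equivalent on Oracle or DB2 would hit different catalog views.

    -- Example credentialed-scan query (SQL Server): list logins that are members
    -- of the sysadmin fixed server role, along with their type and status.
    SELECT p.name      AS login_name,
           p.type_desc AS login_type,
           p.is_disabled
    FROM   sys.server_role_members rm
           JOIN sys.server_principals r ON rm.role_principal_id   = r.principal_id
           JOIN sys.server_principals p ON rm.member_principal_id = p.principal_id
    WHERE  r.name = 'sysadmin';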
It is recommended that a dedicated account with (mostly) read-only permissions be issued to the vulnerability scanning team, in case of a system or account compromise.

External Scanning (File & OS Inspection): This method of data collection deduces database configuration by examining settings outside the database. This type of scan may also require credentials, but not database user credentials. External assessment has two components: file system and operating system. Some, but not all, configuration information resides in files stored as part of the database installation. A file system assessment examines both the contents and the metadata of initialization and configuration files to determine database setup – such as permissions on data files, network settings, and control file locations. In addition, OS utilities are used to discover vulnerabilities and security settings not determinable by examining files within the database installation. The user account the database runs as, registry settings, and simultaneous administrator sessions are all examples of information accessible this way. While there is overlap between the data collected by credentialed and external scans, most of the information is distinct and relevant to different policies. Most traditional OS scanners which claim to offer database scanning provide this type of external assessment.

Network (Port) Inspection: In a port inspection, the scanner performs a mock connection to a database communication port; during the network ‘conversation’ either the database explicitly returns its type and revision, or the scanner deduces them from other characteristics of its response. Once the scanner understands the patch revision of the database, a simple cross-reference against known vulnerabilities is generated. Older databases leak enough information that scanners can make educated guesses at configuration settings and installed features. This form of assessment is typically a “quick and dirty” check that provides basic patch inspection with minimal overhead, without requiring agents or credentials. But network assessment lacks the user and feature checks required by many security and audit groups, and database vendors have blocked most of the information leakage from simple connections, so this type of scan is falling out of favor.

There are other ways to collect information, including eavesdropping and penetration testing, but they are not reliable; additionally, penetration testing and exploitation can have catastrophic side effects on production databases. In this series we will ignore those options. The bulk of configuration and vulnerability data is obtained from credentialed scans, so they should be the bare minimum data collection technique in any assessment you consider. To capture the complete picture of database setup and vulnerabilities, you need both a credentialed database scan and an inspection of the underlying platform the database is installed on. You can accomplish this by leveraging a different (possibly pre-existing) OS assessment scanning tool, or by obtaining this information as part of your database assessment. In either case, this is where things get a little tricky, and careful attention is required on your part to make sure you get the functions you need without introducing


The Ranting Roundtable, PCI Edition

Sometimes you just need to let it all out. With all the recent events around breaches and PCI, I thought it might be cathartic to pull together a few of our favorite loudmouths and spend a little time in a no-rules roundtable. There’s a little bad language, a bit of ranting, and a little more productive discussion than I intended. Joining me were Mike Rothman, Alex Hutton, Nick Selby, and Josh Corman. It runs about 50 minutes, and we mostly focus on PCI. The Ranting Roundtable, PCI. Odds are we’ll do more of these in the future. Even if you don’t like them, they’re fun for us. No goats were harmed in the making of this podcast.


Friday Summary – August 21, 2009

I’m a pretty typical guy. I like beer, football, action movies, and power tools. I’ve never been overly interested in kids, even though I wanted them eventually. It isn’t that I don’t like kids, but until they get old enough to challenge me in Guitar Hero, they don’t exactly hold my attention. And babies? I suppose they’re cute, but so are puppies and kittens, and they’re actually fun to play with, and easier to tell apart. This all, of course, changed when I had my daughter (just under 6 months ago). Oh, I still have no interest in anyone else’s baby, and until the past couple weeks was pretty paranoid about picking up the wrong one from daycare, but she definitely holds my attention better than (most) puppies. I suppose it’s weird that I always wanted kids, just not anyone else’s kids.

Riley is in one of those accelerated learning modes right now. It’s fascinating to watch her eyes, expressions, and body language as she struggles to grasp the world around her (literally, anything within arm’s reach + 10). Her powers of observation are frightening… kind of like a superpower of some sort. It’s even more interesting when her mind is running ahead of her body, as she struggles with a task she clearly understands but doesn’t have the muscle control to pull off. And when she’s really motivated to get that toy/cat? You can see every synapse and sinew strain to achieve her goal with complete and utter focus. (The cats do that too, but only if it involves food or the birds that taunt them through the window.)

On the Ranting Roundtable a few times you hear us call security folks lazy or apathetic. We didn’t mean everyone, and it’s a general statement that extends far beyond security. To be honest, most people, even hard working people, are pretty resistant to change; to doing things in new ways, even if they’re better. In every industry I’ve ever worked, the vast majority of people didn’t want to be challenged. Even in my paramedic and firefighter days people would gripe constantly about changes that affected their existing work habits. They might hop on some new car-crushing tool, but god forbid you change their shift structure or post-incident paperwork. And go take any CPR class these days, with the new procedures, and you’ll hear a never-ending rant by the old timers who have no intention of changing how many stupid times they pump and blow per minute.

Not to overdo an analogy (well, that is what we analysts tend to do), but I wish more security professionals approached the world like my daughter: with intense observation, curiosity, adaptability, drive, and focus. Actually, she’s kind of like a hacker – drop her by something new, and her little hands start testing (and breaking) anything within reach. She’s constantly seeking new experiences and opportunities to learn, and I don’t think those are traits that have to stop once she gets older. No, not all security folks are lazy, but far too many lack the intellectual curiosity that’s so essential to success. Security is the last fracking profession to join if you want stability or consistency. An apathetic, even if hardworking, security professional is as dangerous as he or she is worthless. That’s why I love security; I can’t imagine a career that isn’t constantly changing and challenging. I think it’s this curiosity and drive that defines ‘hacker’, no matter the color of the hat. All security professionals should be hackers. (Despite that silly CISSP oath.) Don’t forget that you can subscribe to the Friday Summary via email.
And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

• Rich was quoted several times in the Dark Reading article “Mega-Breaches Employed Familiar, Preventable Attacks”.
• Rich’s Macworld article on totally paranoid web browsing went live. It will also be in the upcoming print edition.
• Dan Goodin at the Register mentioned our article on the Heartland breach details.
• Our Heartland coverage also hit Slashdot (and the server didn’t get crushed, which is always nice).
• Rich and Martin hit the usual spectrum of security issues in Episode 163 of The Network Security Podcast.
• Rich, Mike Rothman, Nick Selby, Alex Hutton, and Josh Corman let loose in the very first Ranting Roundtable – PCI Edition.

Favorite Securosis Posts

• Rich: With all the discussion around Heartland, Adrian’s post on Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions is very timely. Any time we talk about technology we should be providing a business justification.
• Adrian: With all the discussion around Heartland, it’s nice to get some confirmation from various parties with New Details, and Lessons, on Heartland Breach.

Other Securosis Posts

• The Ranting Roundtable, PCI Edition
• Understanding and Choosing a Database Assessment Solution, Part 3: Data Collection
• Smart Grids and Security (Intro)
• New Details, and Lessons, on Heartland Breach
• Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions
• Recent Breaches: We May Have All the Answers
• Heartland Hackers Caught; Answers and Questions

Project Quant Posts

• We are close to releasing the next round of Quant data… so stand by…

Favorite Outside Posts

• Adrian: Maybe not my favorite post of the week, as this is sad. Strike three! My offer still stands. Are you listening, University of California at Berkeley?
• Rich: It’s easy to preach security, “trust no one”, and be all cynical. Now drop yourself in the middle of Africa, with limited resources and few local contacts, and see if you can get by without taking a few leaps of faith. Johnny Long’s post at the Hackers for Charity blog shows what happens when a security pro is forced to jump off the cliff of trust.

Top News and Posts

• Indictments handed out for Heartland and Hannaford breaches.
• Nice post by Brickhouse Security on iPhone Spyware.
• The role of venture funding in the security market – is the well dry?
• I swear Corman wrote up his 8 Dirty Secrets of the Security


New Details, and Lessons, on Heartland Breach

Thanks to an anonymous reader, we may have some additional information on how the Heartland breach occurred. Keep in mind that this isn’t fully validated information, but it does correlate with other information we’ve received, including public statements by Heartland officials. On Monday we correlated the Heartland breach with a joint FBI/USSS bulletin that contained some in-depth details on the probable attack methodology. In public statements (and private rumors) it’s come out that Heartland was likely breached via a regular corporate system, and that hole was then leveraged to cross over to the better-protected transaction network. According to our source, this is exactly what happened. SQL injection was used to compromise a system outside the transaction processing network segment. The attackers used that toehold to start compromising vulnerable systems, including workstations. One of these internal workstations was connected by VPN to the transaction processing datacenter, which allowed them access to the sensitive information. These details were provided in a private meeting held by Heartland in Florida to discuss the breach with other members of the payment industry. As with the SQL injection itself, we’ve seen these kinds of VPN problems before. The first NAC products I ever saw were for remote access – to help reduce the number of worms/viruses coming in from remote systems. I’m not going to claim there’s an easy fix (okay, there is: patch your friggin’ systems), but here are the lessons we can learn from this breach:

• The PCI assessment likely focused on the transaction systems, network, and datacenter. With so many potential remote access paths, we can’t rely on external hardening alone to prevent breaches. For the record, I also consider this one of the top SCADA problems.
• Patch and vulnerability management is key – for the bad guys to exploit the VPN-connected system, something had to be vulnerable (note: the exception being social engineering a system ‘owner’ into installing the malware manually).
• We can’t slack on vulnerability management – time after time this turns out to be the way the bad guys take control once they’ve busted through the front door with SQL injection. You need an ongoing, continuous patch and vulnerability management program. This is in every freaking security checklist out there, and is more important than firewalls, application security, or pretty much anything else.
• The bad guys will take the time to map out your network. Once they start owning systems, unless your transaction processing is absolutely isolated, odds are they’ll find a way to cross network lines.
• Don’t assume non-sensitive systems aren’t targets, especially if they are externally accessible.

Okay – when you get down to it, all five of those points are practically the same thing. Here’s what I’d recommend:

• Vulnerability scan everything. I mean everything – your entire public and private IP space.
• Focus on security patch management – seriously, do we need any more evidence that this is the single most important IT security function?
• Minimize sensitive data use and apply heavy egress filtering on the transaction network, including some form of DLP.
• Egress filter any remote access, since that basically blows holes through any perimeter you might think you have.
• Someone will SQL inject any public facing system, and some of the internal ones. You’d better be testing and securing any low-value, public facing system, since the bad guys will use it to get inside and go after the high value ones.

Vulnerability assessments are more than merely checking patch levels.


Smart Grids and Security (Intro)

It’s not often, but every now and then there are people in our lives we can clearly identify as having had a massive impact on our careers. I don’t mean someone we liked to work with, but someone who gave us that big break, opportunity, or push in the right direction that led us to where we are today. In my case I know exactly who helped me make the transition from academia to the career I have today. I met Jim Brancheau while I was working at the University of Colorado as a systems and network administrator. He was an information systems professor in the College of Business, and some friends roped me into taking his class even though I was a history and molecular biology major. He liked my project on security, hired me to do some outside consulting with him, and eventually hired me full time after we both left the University. That company was acquired by Gartner, and the rest is history. Flat out, I wouldn’t be where I am today without Jim’s help.

Jim and I ended up on different teams at Gartner, and we both eventually left. After taking a few years off to ski and hike, Jim’s back in the analyst game, focusing on smart grids and sustainability at Carbon Pros, and he’s currently researching and writing a new book on the topic for the corporate world. When he asked me to help out on the security side, it was an offer Karma wouldn’t let me refuse. I covered energy/utilities and SCADA issues back in my Gartner days, but smart grids amplify those issues to a tremendous degree. Much of the research I’ve seen on security for smart grids has focused on metering systems, but the technologies are extending far beyond smarter meters into our homes, cars, and businesses. For example, Ford just announced a vehicle-to-grid communications system for hybrid and electric vehicles. Your car will literally talk to the grid when you plug it in, to enable features such as only charging at off-peak rates. I highly recommend you read Jim’s series on smart grids and smart homes to get a better understanding of where we are headed. Another example is opt-in programs where you allow your power company to send signals to your house to change your thermostat settings if they need to broadly reduce consumption during peak hours. That’s a consumer example, but we expect to see similar technologies adopted by the enterprise, in large part due to expected cost-savings incentives.

Thus when we talk about smart grids, we aren’t going to limit ourselves to next-gen power grid SCADA or bidirectional meters, but will try to frame the security issues for the larger ecosystem that’s developing. We also have to discuss legal and regulatory issues, such as the draft NIST and NERC/FERC standards, as well as technology transition issues (since legacy infrastructure isn’t going away anytime soon). Jim kicked off our coverage with this post over at Carbon Pros, which introduces the security and privacy principles to a non-security audience. I’d like to add a little more depth in terms of how we frame the issue, and in future posts we’ll dig into these areas. From a security perspective, we can think of a smart grid as five major components in two major domains. On the utilities side there are power generation, transmission, and the customer (home or commercial) interface (where the wires drop from the pole to the wall).
Within the utilities side there are essentially three overlapping networks – the business network (office, email, billing), the control process/SCADA network (control of generation and transmission equipment), and now, the emerging smart grid network (communications with the endpoint/user). Most work and regulation in recent years (the CIP requirements) have focused on defining and securing the “electronic security perimeter”, which delineates the systems involved in the process control side, including both legacy SCADA and IP-based systems. In the past, I’ve advised utilities clients to limit the size and scope of their electronic security perimeter as much as possible to reduce both risks and compliance costs. I’ve even heard of some organizations that put air gaps back in place after originally co-mingling the business and process control networks to help reduce security and compliance costs. The smart grid potentially expands this perimeter by extending what’s essentially a third network, the smart grid network, to the meter in the residential or commercial site. That meter is thus the interface to the outside world, and has been the focus of much of the security research I’ve seen. There are clear security implications for the utility, ranging from fraud to distributed denial of generation attacks (imagine a million meters under-reporting usage all at the same time). But the security domain also extends into the endpoint installation as it interfaces with the external side (the second domain) which includes the smart building/home network, and smart devices (as in refrigerators and cars). The security issues for residential and commercial consumers are different but related, and expand into privacy concerns. There could be fraud, denial of power, privacy breaches, and all sorts of other potential problems. This is compounded by the decentralization and diversity of smart technologies, including a mix of powerline, wireless, and IP tech. In other words, smart grid security isn’t merely an issue for electric utilities – there are enterprise and consumer requirements that can’t be solely managed by your power company. They may take primary responsibility for the meter, but you’ll still be responsible for your side of the smart network and your usage of smart appliances. On the upside, although there’s been some rapid movement on smart metering, we still have time to develop our strategies for management of our side (consumption) of smart energy technologies. I don’t think we will all be connecting our thermostats to the grid in the next few months, but there are clearly enterprise implications and we need to start investigating and developing strategies for smart grid


Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions

If you were looking for a business justification for database assessment, the joint USSS/FBI advisory referenced in Rich’s last post on Recent Breaches should be more than sufficient. What you are looking at is not a checklist of exotic security measures, but fairly basic security that should be implemented in every production database. All of the preventative controls listed in the advisory are, for the most part, addressed with database assessment scanners. Detection of known SQL injection vulnerabilities, detection of external stored procedures like xp_cmdshell, and checks for avenues for obtaining Windows credentials from a compromised database server (or vice-versa) are basic policies included with all database vulnerability scanners – some freely available for download. It is amazing that large firms like Heartland, Hannaford, and TJX – who rely on databases for core business functions – get basic database security so wrong. These attacks are a template for anyone who cares to break into your database servers. If you don’t think you are a target because you are not storing credit card numbers, think again! There are plenty of ways for attackers to earn money or commit fraud by extracting or altering the contents of your databases. As a very basic first security step, scan your databases!

Adoption of database-specific assessment technologies has been sporadic outside the finance vertical, because providing business justification is not always simple. For one, many firms already have generic forms of assessment and inaccurately believe they already have this function covered. If they do discover missing policies, they often get the internal DBA staff to paper over the gaps with homegrown SQL queries (a sketch of one such check appears at the end of this post). As an example of what I mean, I want to share one story about a customer who was inspecting database configurations as part of their internal audit process. They had about 18 checks, mostly having to do with user permissions, and these settings formed part of their SOX and GLBA controls. What took me by surprise was the customer’s process: twice a year a member of the internal audit staff walked from database server to database server, logged in, ran the SQL queries, captured the results, and then moved on to the other 12 systems. When finished, all of the results were dumped into a formatting tool so the control reports could be made ready for KPMG’s visit. Twice a year she made the rounds, each time taking a day to collect the data and a day to produce the reports. When KPMG advised that the reports be run quarterly, the task came to be seen as a burden, and they began a search to automate it – only then did the cost in lost productivity warrant investment in automation. Their expectation going in was simply that the cost of the product should not grossly exceed a week or two of employee time. Where it got interesting was when we began the proof of concept: it turned out several other groups had been manually running scripts, and had much the same problem. We polled other organizations across the company, and found similar requirements from internal audit, security, IT management, and DBAs alike. Not only was each group already performing a small but critical set of security and compliance tasks, each had another list of things they would like to accomplish. While no single group could justify the expense, taken together it was easy to see how automation saved on manpower alone.
We then multiplied the work across dozens – or in some cases thousands – of databases, and discovered there had been ample financial justification all along. Each group might have been motivated by compliance, operational efficiency, or threat mitigation, but as their work required separation of duties, they had not cooperated on obtaining tools to solve a shared problem. Over time, we found this customer example to be fairly common. When considering business justification for an investment in database assessment, you are unlikely to find any single irresistible reason you need the technology. You may read product marketing claims that say “Because you are compelled by compliance mandate GBRSH 509 to secure your database”, or some nonsense like that, but it is simply not true. There are security and regulatory requirements that compel certain database settings, but nothing that mandates automation. But there are two very basic reasons why you need to automate the assessment process: the scope of the task, and the accuracy of the results. The depth and breadth of issues to address are beyond the skill of any one of the audiences for assessment. Let’s face it: the changes in database security issues alone are difficult to keep up with – much less compliance, operations, and evolutionary changes to the database platform itself. Coupled with the boring and repetitive nature of running these scans, it’s ripe territory for shortcuts and human error.

When considering a database assessment solution, the following are common market drivers for adoption. If your company has more than a couple databases, odds are all of these factors will apply to your situation:

• Configuration Auditing for Compliance: Periodic reports on database configuration and setup are needed to demonstrate adherence to internal standards and regulatory requirements. Most platforms offer policy bundles tuned for specific regulations such as PCI, Sarbanes-Oxley, and HIPAA.
• Security: Fast and effective identification of known security issues and deviations from company and industry best practices, with specific remediation advice.
• Operational Policy Enforcement: Verification of work orders, operational standards, patch levels, and approved methods of remediation is valuable (and possibly required).

There are several ways this technology can be applied to promote and address the requirements above, including:

• Automated verification of compliance and security settings across multiple heterogeneous database environments.
• Consistency in database deployment across the organization – especially important for patch and configuration management, as well as detection and remediation of exploits commonly used to gain access.
• Centralized policy management, so that a single policy can be applied across multiple (possibly geographically dispersed) locations.
• Separation of duties between IT, audit, security, and database administration personnel.
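For readers wondering what the homegrown checks mentioned above actually look like, here is a minimal sketch of a typical user-permission query an audit team might run by hand – in this case listing which database principals hold object-level grants in a SQL Server database. It is a hypothetical illustration, not the customer’s actual control, and an Oracle or DB2 shop would query different catalog views.

    -- Hypothetical manual audit check (SQL Server): list object-level permission
    -- grants and who holds them in the current database.
    SELECT pr.name                  AS grantee,
           pe.permission_name,
           OBJECT_NAME(pe.major_id) AS object_name
    FROM   sys.database_permissions pe
           JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
    WHERE  pe.class = 1              -- object- or column-level grants only
      AND  pe.state IN ('G', 'W');   -- GRANT, or GRANT WITH GRANT OPTION

An assessment platform automates this sort of query across every database in the environment and formats the results for the control report, which is where the manpower savings described above come from.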


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.