
We Know How Breaches Happen

I first started tracking data breaches back in December of 2000, when I received my very first breach notification email, from Egghead Software. When Egghead went bankrupt in 2001 and was acquired by Amazon, rather than assuming the breach caused the bankruptcy, I did some additional research and learned they were on a downward spiral long before their little security incident. This broke with the conventional wisdom floating around the security rubber-chicken circuit at the time, and was a fine example of the difference between correlation and causation. Since then I’ve kept trying to translate what little breach material we’ve been able to get our collective hands on into as accurate a picture as possible of the real state of security.

We don’t really have a lot to work with, despite the heroic efforts of the Open Security Foundation Data Loss Database (for a long time the only source on breach statistics). Like the rest of us, the Data Loss DB is completely reliant on public breach disclosures. Thanks to California S.B. 1386 and the mishmash of breach notification laws that have developed since 2005, we have a lot more information than we used to, but anyone in the security industry knows only a portion of breaches are reported (despite notification laws), and we often don’t get any details of how the intrusions occurred. The problem with the Data Loss DB is that it’s based on incomplete information. They do their best, but more often than not we lack the real meat needed to make appropriate security and risk decisions. For example, we’ve seen plenty of vendor press releases on how lost laptops, backup tapes, and other media are the biggest source of data breaches. In reality, lost laptops and media are merely the greatest source of reported potential exposures. As I’ve talked about before, there is little or no correlation between these lost devices and any actual fraud. All those stats mean is that a physical thing was lost or stolen… no more, no less, unless we find a case where we can correlate a loss with actual fraud.

On the research side I try to compensate for the statistics problem by taking more of a case study approach, as best I can using public resources. Even with the limited information released, as time passes we tend to dig up more and more details about breaches, especially once cases make it into court. That’s how we know, for example, that both CardSystems and Heartland Payment Systems were breached (5 years apart) using SQL injection against a web application (the xp_cmdshell command in a poorly configured version of SQL Server, to be specific). In the past year or two we’ve gained some additional data sources, most notably the Verizon Data Breach Investigations Report, which provides real, anonymized data regarding breaches. It’s limited in that it only reflects incidents where Verizon participated in the investigation, and by the standardized information they collected, but it starts to give us better insight beyond public breach reports.

Yet we still only have a fraction of the information we need to make appropriate risk management decisions. Even after 20 years in the security world (if you count my physical security work), I’m still astounded that the bad guys share more real information on means and methods than we do. We are thus extremely limited in assessing macro trends in security breaches. We’re forced to use far more anecdotal information than a skeptic like myself is comfortable with.
We don’t even have a standard for assessing breach costs (as I’ve proposed), never mind more accurate crime and investigative statistics that could help craft our prioritization of security defenses. Seriously – decades into the practice of security we don’t have any fracking idea if forcing users to change passwords every 90 days provides more benefit than burden.

All that said, we can’t sit on our asses and wait for the data. As unscientific as it may be, we still need to decide which security controls to apply where and when. In the past couple weeks we’ve seen enough information emerging that I believe we now have a good idea of two major methods of attack:

  • As we discussed here on the blog, SQL injection via web applications is one of the top attack vectors identified in recent breaches. These attacks are not only against transaction processing systems, but are also used to gain a toehold on internal networks to execute more invasive attacks.
  • Brian Krebs has identified another major attack vector, where malware is installed on insecure consumer and business PCs, then used to gather information to facilitate illicit account transfers. I’ve seen additional reports that suggest this is also a major form of attack.

I’d love to back these with better statistics, but until those are available we have to rely on a mix of public disclosure and anecdotal information. We hear rumors of other vectors, such as customized malware (to avoid AV filters) and the ever-present-and-all-powerful insider threat, but there isn’t enough to validate those as a major trend quite yet.

If we look across all our sources, we see a consistent picture emerging. The vast majority of cybercrime still seems to take advantage of known vulnerabilities that can be addressed using common practices. The Verizon report certainly calls out unpatched systems, configuration errors, and default passwords as the most common breach sources. While we can’t state with complete certainty that patching systems, blocking SQL injection, removing default passwords, and enforcing secure configurations will prevent most breaches, the information we have does indicate that’s a reasonable direction. Combine that with following the Data Breach Triangle by reducing use of sensitive data (and using something like DLP to find it), and tightening up egress filtering on transaction processing networks and other sensitive data locations, and you are probably in pretty good shape. Financial institutions struggling with their clients being breached can add out-of-band transaction verification (phone calls or even automated text messages).
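Since blocking SQL injection keeps coming up as one of the few controls the available data clearly supports, here is a minimal sketch of what that looks like in application code. It is illustrative only: Python with the standard library’s sqlite3 module standing in for whatever database and driver you actually use, but the same placeholder pattern applies to any DB-API driver.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    def lookup_email_unsafe(username):
        # Vulnerable: attacker-controlled input is concatenated into the SQL,
        # so input like "x' OR '1'='1" changes the query itself.
        query = "SELECT email FROM users WHERE username = '" + username + "'"
        return conn.execute(query).fetchall()

    def lookup_email_safe(username):
        # Parameterized: the driver binds the value separately from the SQL,
        # so the input is only ever treated as data, never as code.
        return conn.execute(
            "SELECT email FROM users WHERE username = ?", (username,)
        ).fetchall()

    print(lookup_email_unsafe("x' OR '1'='1"))  # returns every row
    print(lookup_email_safe("x' OR '1'='1"))    # returns nothing

Input validation and WAFs still help, but binding parameters like this removes the injection class at the source.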


Database Assessment Solutions, Part 4: Vulnerability and Security Policies

I was always fascinated by the Sapphire/Slammer worm. The simplicity of the attack and how quickly it spread were astounding. Sure, it didn’t have a malicious payload, but the simple fact that it could have carried one created quite a bit of panic. This event is what I consider the dawn of database vulnerability assessment tools. From that point on it seemed like every couple of weeks we were learning of new database vulnerabilities on every platform. Compliance may drive today’s assessment purchase, but vulnerabilities are always what grab the media’s attention, and vulnerability detection remains a key feature for any database security product.

Prior to writing this post I went back and looked at all the buffer overflow and SQL injection attacks on DB2, Oracle, and SQL Server. It struck me when looking at them – especially those on SQL Server – why half of the administrative functions had vulnerabilities: whoever wrote them assumed that the functions were inaccessible to anyone who was not a DBA. The functions were conceptually supposed to be gated by access control, and therefore safe. It was not so much that the programmers were not thinking about security, but that they made incorrect assumptions about how database internals like the parser and preprocessor worked. I have always said that SQL injection is an attack on the database through an application. That’s true, but technically the attacks are also getting through internal database processing layers prior to the exploit, as well as an external application layer. Looking back at the details, it just seemed reasonable that we would have these vulnerabilities, given the complexity of the database platforms and the lack of security training among software developers. Anyway, enough rambling about database security history.

Understanding database vulnerabilities and knowing how to remediate them – whether through patches, workarounds, or third party detection tools – requires significant skill and training. Policy research is expensive, and so is writing and testing these policies. In my experience over the four years that I helped define and build database assessment policies, it would take an average of 3 days to construct a policy after a vulnerability was understood: a day to write and optimize the SQL test case, a day to create the description and put together remediation information, and another day to test on supported platforms. Multiply by 10 policies across 6 different platforms and you get an idea of the cost involved. Policy development requires a full-time team of skilled practitioners to manage and update vulnerability and security policies across the half dozen platforms commonly supported by the vendors. This is not a reasonable burden for non-security vendors to take on, so if database security is an issue, don’t try to do this in-house! Buying an aftermarket product spares your organization from developing these checks, protecting you from specific threats hackers are likely to deploy, as well as more generic security threats.

What specific vulnerability checks should be present in your database assessment product? In a practical sense, it does not matter. Specific vulnerabilities come and go too fast for any list to stay relevant. What I am going to do is provide a list of general security checks that should be present, and list the classes of vulnerabilities any product you evaluate should have policies for.
Then I will cover other relevant buying criteria to consider.

General Database Security Policies

  • List database administrator accounts and how they map to domain users
  • Product version (security patch level)
  • List users with admin/special privileges
  • List users with access to sensitive columns or data (credit cards, passwords)
  • List users with access to system tables
  • Database access audit (failed logins)
  • Authentication method (domain, database, mixed)
  • List locked accounts
  • Listener / SQL Agent / UDP, network configuration (passwords in clear text, ports, use of named pipes)
  • System tables (subset) not updatable
  • Ownership chains
  • Database links
  • Sample databases (Northwind, pubs, scott/tiger)
  • Remote systems and data sources (remote trust relationships)

Vulnerability Classes

  • Default passwords
  • Weak/blank/same-as-login passwords
  • Public roles or guest accounts to anything
  • External procedures (CmdExec, xp_cmdshell, active scripting, extproc, or any programmatic access to OS level code)
  • Buffer overflow conditions (XP, admin functions, Slammer/Sapphire, HEAP, etc. – too numerous to list)
  • SQL injection (1=1, most admin functions, temporary stored procedures, database name as code – too numerous to list)
  • Network (connection reuse, man in the middle, named pipe hijacking)
  • Authentication escalation (XStatus / XP / SP, exploiting batch jobs, DTS leakage, remote access trust)
  • Task injection (Webtasks, sp_xxx, MSDE service, reconfiguration)
  • Registry access (SQL Server)
  • DoS (named pipes, malformed requests, IN clause, memory leaks, page locks creating deadlocks)

There are many more. It is really important to understand that the total number of policies in any given product is irrelevant. As an example, let’s assume that your database has two modules with buffer overflow vulnerabilities, and each has eight different ways to exploit it. Comparing two assessment products, one might have 16 policies checking for each exploit, and the other could have two policies checking for the two vulnerabilities. These products are functionally equivalent, but one vendor touts an order of magnitude more policies, which have no actual benefit. Do NOT let the number of policies influence your buying decision, and don’t get bogged down in what I call a “policy escalation war”. You need to compare functional equivalence, and realize that if one product can check for more vulnerabilities in fewer queries, it runs faster! It may take a little work on your part to comb through the policies to make sure what you need is present, but you need to perform that inspection regardless. You will want to carefully confirm that the assessment platform covers the database versions you have. And just because your company supposedly migrated to Oracle 11 some time back does not mean you get to discount Oracle 9 database support, because odds are better than even that you have at least one still hanging around. Or you
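To make the ‘SQL test case’ part of the policy-writing effort described above a little more concrete, here is a minimal sketch of what a single check might look like once written, using a blank/default password check for SQL Server as the example. The structure (test query, description, remediation, severity) is my own illustration, not any vendor’s format; PWDCOMPARE is a SQL Server built-in, and other platforms need their own variants.

    # Hypothetical policy record for one assessment check (illustrative only).
    BLANK_PASSWORD_POLICY = {
        "name": "SQL Server logins with blank or login-matching passwords",
        "severity": "critical",
        "platforms": ["SQL Server 2005", "SQL Server 2008"],
        # The actual test case: PWDCOMPARE() is a SQL Server built-in that
        # checks a cleartext candidate against the stored password hash.
        "sql": """
            SELECT name
            FROM sys.sql_logins
            WHERE PWDCOMPARE('', password_hash) = 1      -- blank password
               OR PWDCOMPARE(name, password_hash) = 1;   -- password = login name
        """,
        "remediation": "Force a password change (ALTER LOGIN ... MUST_CHANGE) "
                       "and enforce CHECK_POLICY on all SQL logins.",
    }

    def evaluate(policy, rows):
        """Turn the rows returned by the test query into a pass/fail finding."""
        if rows:
            return f"FAIL [{policy['severity']}] {policy['name']}: {', '.join(rows)}"
        return f"PASS {policy['name']}"

    # Example with pre-fetched results, as if the scanner had already run the SQL:
    print(evaluate(BLANK_PASSWORD_POLICY, ["sa", "reports_user"]))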


Understanding and Choosing a Database Assessment Solution, Part 3: Data Collection

In the first part of this series we introduced database assessment as a fully differentiated form of assessment scan, and in part two we discussed some of the use cases and business benefits database assessment provides. In this post we will begin dissecting the technology, and take a close look at the deployment options available. Which of your requirements get addressed, and how, is more a function of the way the product is implemented than of the policies it contains.

Architecturally, there is little variation in database assessment platforms. Most are two-tiered systems, either appliances or pure software, with the data storage and analysis engine located away from the target database server. Many vendors offer remote credentialed scans, with some providing an optional agent to assist with data collection issues we will discuss later. Things get interesting around how the data is collected, and that is the focus of this post.

As a customer, the most important criteria for evaluating assessment tools are how well they cover the policies you need, and how easily they integrate with your organization’s systems and processes. The single biggest technology factor to consider for both is how data is collected from the database system. Data collection methods dictate what information will be available to you – and as a direct result, what policies you will be able to implement. Further, how the scanner interacts with the database plays a deciding role in how you will deploy and manage the product. Obtaining and installing credentials, mapping permissions, agent installation and maintenance, secure remote sessions, separation of duties, and creation of custom policies are all affected by the data collection architecture.

Database assessment begins with the collection of database configuration information, and each vendor offers a slightly different combination of data collection capabilities. In this context, I am using the word ‘configuration’ in a very broad sense to cover everything from resource allocation (disk, memory, links, tablespaces), operational allocation (user access rights, roles, schemas, stored procedures), database patch levels, and network settings, to the features/functions that have been installed into the database system. Pretty much anything you could want to know about a database. There are three ways to collect configuration and vulnerability information from a database system:

Credentialed Scanning: A credentialed database scan leverages a user account to gain access to the database system internals. Once logged into the system, the scanner collects configuration data by querying system tables and sending the results back to the scanner for analysis. The scan can be run over the network or through a local agent proxy – each provides advantages and disadvantages which we will discuss later. In both cases the scanner connects to the database communication port with the user credentials provided, in the same way as any other application. A credentialed database scan potentially has access to everything a database administrator would, and returns information that is not available outside the database. This method of collection is critical, as it captures settings such as password expiration, administrative roles, active and locked user accounts, internal and external stored procedures, batch jobs, and database/domain user account mismatches.
It is recommended that a dedicated account with (mostly) read-only permissions be issued for the vulnerability scanning team, in case of a system/account compromise.

External Scanning (File & OS Inspection): This method of data collection deduces database configuration by examining settings outside the database. This type of scan may also require credentials, but not database user credentials. External assessment has two components: file system and operating system. Some, but not all, configuration information resides in files stored as part of the database installation. A file system assessment examines both the contents and metadata of initialization and configuration files to determine database setup – such as permissions on data files, network settings, and control file locations. In addition, OS utilities are used to discover vulnerabilities and security settings not determinable by examining files within the database installation. The user account the database system runs as, registry settings, and simultaneous administrator sessions are all examples of information accessible this way. While there is overlap between the data collected by credentialed and external scans, most of the information is distinct and relevant to different policies. Most traditional OS scanners which claim to offer database scanning provide this type of external assessment.

Network (Port) Inspection: In a port inspection, the scanner performs a mock connection to a database communication port; during the network ‘conversation’ either the database returns its type and revision explicitly, or the scanner deduces them from other characteristics of its response. Once the scanner understands the patch revision of the database, a simple cross-reference against known vulnerabilities is generated. Older databases leak enough information that scanners can make educated guesses at configuration settings and installed features. This form of assessment is typically a “quick and dirty” check that provides basic patch inspection with minimal overhead, without requiring agents or credentials. As network assessment lacks the user and feature assessments required by many security and audit groups, and as database vendors have blocked most of the information leakage from simple connections, this type of scan is falling out of favor.

There are other ways to collect information, including eavesdropping and penetration testing, but they are not reliable; additionally, penetration testing and exploitation can have catastrophic side effects on production databases. In this series we will ignore those options. The bulk of configuration and vulnerability data is obtained from credentialed scans, so they should be the bare minimum of data collection techniques in any assessment you consider. To capture the complete picture of database setup and vulnerabilities, you need both a credentialed database scan and an inspection of the underlying platform the database is installed on. You can accomplish this by leveraging a different (possibly pre-existing) OS assessment scanning tool, or obtaining this information as part of your database assessment. In either case, this is where things get a little tricky, and require careful attention on your part to make sure you get the functions you need without introducing
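For a sense of what the credentialed collection step boils down to, here is a minimal sketch of a scanner pulling a few configuration items from a SQL Server target over the network. The pyodbc driver, the connection details, and the specific queries are my assumptions for illustration; a real product collects far more, handles multiple platforms, and stores results centrally.

    import pyodbc  # assumes an ODBC driver for SQL Server is installed

    # Dedicated, (mostly) read-only scanning account, as recommended above.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=db01.example.com;UID=scan_svc;PWD=********"
    )
    cursor = conn.cursor()

    # Patch level and edition, straight from the engine rather than a network banner.
    cursor.execute("SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('Edition')")
    print("Version:", cursor.fetchone())

    # Login inventory: disabled flags and password policy enforcement per SQL login.
    cursor.execute("SELECT name, is_disabled, is_policy_checked FROM sys.sql_logins")
    for name, disabled, policy in cursor.fetchall():
        print(f"login={name} disabled={bool(disabled)} policy_checked={bool(policy)}")

    # Is xp_cmdshell enabled? One of the settings an external scan can't easily see.
    cursor.execute(
        "SELECT value_in_use FROM sys.configurations WHERE name = 'xp_cmdshell'"
    )
    print("xp_cmdshell enabled:", bool(cursor.fetchone()[0]))

    conn.close()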


The Ranting Roundtable, PCI Edition

Sometimes you just need to let it all out. With all the recent events around breaches and PCI, I thought it might be cathartic to pull together a few of our favorite loudmouths and spend a little time in a no-rules roundtable. There’s a little bad language, a bit of ranting, and a little more productive discussion than I intended. Joining me were Mike Rothman, Alex Hutton, Nick Selby, and Josh Corman. It runs about 50 minutes, and we mostly focus on PCI. The Ranting Roundtable, PCI. Odds are we’ll do more of these in the future. Even if you don’t like them, they’re fun for us. No goats were harmed in the making of this podcast.


Friday Summary – August 21, 2009

I’m a pretty typical guy. I like beer, football, action movies, and power tools. I’ve never been overly interested in kids, even though I wanted them eventually. It isn’t that I don’t like kids, but until they get old enough to challenge me in Guitar Hero, they don’t exactly hold my attention. And babies? I suppose they’re cute, but so are puppies and kittens, and they’re actually fun to play with, and easier to tell apart. This all, of course, changed when I had my daughter (just under 6 months ago). Oh, I still have no interest in anyone else’s baby, and until the past couple weeks was pretty paranoid about picking up the wrong one from daycare, but she definitely holds my attention better than (most) puppies. I suppose it’s weird that I always wanted kids, just not anyone else’s kids.

Riley is in one of those accelerated learning modes right now. It’s fascinating to watch her eyes, expressions, and body language as she struggles to grasp the world around her (literally, anything within arm’s reach + 10). Her powers of observation are frightening… kind of like a superpower of some sort. It’s even more interesting when her mind is running ahead of her body as she struggles on a task she clearly understands, but doesn’t have the muscle control to pull off. And when she’s really motivated to get that toy/cat? You can see every synapse and sinew strain to achieve her goal with complete and utter focus. (The cats do that too, but only if it involves food or the birds that taunt them through the window.)

On the Ranting Roundtable you hear us call security folks lazy or apathetic a few times. We didn’t mean everyone, and it’s also a general statement that extends far beyond security. To be honest, most people, even hard working people, are pretty resistant to change; to doing things in new ways, even if they’re better. In every industry I’ve ever worked, the vast majority of people didn’t want to be challenged. Even in my paramedic and firefighter days people would gripe constantly about changes that affected their existing work habits. They might hop on some new car-crushing tool, but god forbid you change their shift structure or post-incident paperwork. And go take any CPR class these days, with the new procedures, and you’ll hear a never-ending rant by the old timers who have no intention of changing how many stupid times they pump and blow per minute.

Not to overdo an analogy (well, that is what we analysts tend to do), but I wish more security professionals approached the world like my daughter. With intense observation, curiosity, adaptability, drive, and focus. Actually, she’s kind of like a hacker – drop her by something new, and her little hands start testing (and breaking) anything within reach. She’s constantly seeking new experiences and opportunities to learn, and I don’t think those are traits that have to stop once she gets older. No, not all security folks are lazy, but far too many lack the intellectual curiosity that’s so essential to success. Security is the last fracking profession to join if you want stability or consistency. An apathetic, even if hardworking, security professional is as dangerous as he or she is worthless. That’s why I love security; I can’t imagine a career that isn’t constantly changing and challenging. I think it’s this curiosity and drive that defines ‘hacker’, no matter the color of the hat. All security professionals should be hackers. (Despite that silly CISSP oath.)

Don’t forget that you can subscribe to the Friday Summary via email.
And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich was quoted several times in the Dark Reading article “Mega-Breaches Employed Familiar, Preventable Attacks”.
  • Rich’s Macworld article on totally paranoid web browsing went live. It will also be in the upcoming print edition.
  • Dan Goodin at the Register mentioned our article on the Heartland breach details.
  • Our Heartland coverage also hit Slashdot (and the server didn’t get crushed, which is always nice).
  • Rich and Martin hit the usual spectrum of security issues in Episode 163 of The Network Security Podcast.
  • Rich, Mike Rothman, Nick Selby, Alex Hutton, and Josh Corman let loose in the very first Ranting Roundtable – PCI Edition.

Favorite Securosis Posts

  • Rich: With all the discussion around Heartland, Adrian’s post on Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions is very timely. Any time we talk about technology we should be providing a business justification.
  • Adrian: With all the discussion around Heartland, it’s nice to get some confirmation from various parties with New Details, and Lessons, on Heartland Breach.

Other Securosis Posts

  • The Ranting Roundtable, PCI Edition
  • Understanding and Choosing a Database Assessment Solution, Part 3: Data Collection
  • Smart Grids and Security (Intro)
  • New Details, and Lessons, on Heartland Breach
  • Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions
  • Recent Breaches: We May Have All the Answers
  • Heartland Hackers Caught; Answers and Questions

Project Quant Posts

  • We are close to releasing the next round of Quant data… so stand by…

Favorite Outside Posts

  • Adrian: Maybe not my favorite post of the week, as this is sad. Strike three! My offer still stands. Are you listening, University of California at Berkeley?
  • Rich: It’s easy to preach security, “trust no one”, and be all cynical. Now drop yourself in the middle of Africa, with limited resources and few local contacts, and see if you can get by without taking a few leaps of faith. Johnny Long’s post at the Hackers for Charity blog shows what happens when a security pro is forced to jump off the cliff of trust.

Top News and Posts

  • Indictments handed out for Heartland and Hannaford breaches.
  • Nice post by Brickhouse Security on iPhone Spyware.
  • The role of venture funding in the security market – is the well dry?
  • I swear Corman wrote up his 8 Dirty Secrets of the Security


New Details, and Lessons, on Heartland Breach

Thanks to an anonymous reader, we may have some additional information on how the Heartland breach occurred. Keep in mind that this isn’t fully validated information, but it does correlate with other information we’ve received, including public statements by Heartland officials.

On Monday we correlated the Heartland breach with a joint FBI/USSS bulletin that contained some in-depth details on the probable attack methodology. In public statements (and private rumors) it’s come out that Heartland was likely breached via a regular corporate system, and that hole was then leveraged to cross over to the better-protected transaction network. According to our source, this is exactly what happened. SQL injection was used to compromise a system outside the transaction processing network segment. The attackers used that toehold to start compromising vulnerable systems, including workstations. One of these internal workstations was connected by VPN to the transaction processing datacenter, which allowed them access to the sensitive information. These details were provided in a private meeting held by Heartland in Florida to discuss the breach with other members of the payment industry.

As with the SQL injection itself, we’ve seen these kinds of VPN problems before. The first NAC products I ever saw were for remote access – to help reduce the number of worms/viruses coming in from remote systems. I’m not going to claim there’s an easy fix (okay, there is: patch your friggin’ systems), but here are the lessons we can learn from this breach:

  • The PCI assessment likely focused on the transaction systems, network, and datacenter. With so many potential remote access paths, we can’t rely on external hardening alone to prevent breaches. For the record, I also consider this one of the top SCADA problems.
  • Patch and vulnerability management is key – for the bad guys to exploit the VPN connected system, something had to be vulnerable (note – the exception being social engineering a system ‘owner’ into installing the malware manually).
  • We can’t slack on vulnerability management – time after time this turns out to be the way the bad guys take control once they’ve busted through the front door with SQL injection. You need an ongoing, continuous patch and vulnerability management program. This is in every freaking security checklist out there, and is more important than firewalls, application security, or pretty much anything else.
  • The bad guys will take the time to map out your network. Once they start owning systems, unless your transaction processing is absolutely isolated, odds are they’ll find a way to cross network lines.
  • Don’t assume non-sensitive systems aren’t targets, especially if they are externally accessible.

Okay – when you get down to it, all five of those points are practically the same thing. Here’s what I’d recommend:

  • Vulnerability scan everything. I mean everything – your entire public and private IP space.
  • Focus on security patch management – seriously, do we need any more evidence that this is the single most important IT security function?
  • Minimize sensitive data use and use heavy egress filtering on the transaction network, including some form of DLP.
  • Egress filter any remote access, since that basically blows holes through any perimeter you might think you have.
  • Someone will SQL inject any public facing system, and some of the internal ones. You’d better be testing and securing any low-value, public facing system, since the bad guys will use that to get inside and go after the high value ones.
Vulnerability assessments are more than merely checking patch levels.
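To make the DLP recommendation above a bit more concrete, here is a minimal sketch of the kind of check a DLP tool runs to spot card numbers in traffic or files: a regex to find candidate digit strings, plus a Luhn checksum to weed out false positives. Real products add context, proximity rules, and protocol awareness; this is just the core idea, and the sample text is invented.

    import re

    CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # 13-16 digits, spaces/dashes allowed

    def luhn_valid(digits: str) -> bool:
        """Return True if the digit string passes the Luhn checksum."""
        total = 0
        for i, ch in enumerate(reversed(digits)):
            d = int(ch)
            if i % 2 == 1:        # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_card_numbers(text: str):
        """Yield substrings that look like card numbers and pass the Luhn check."""
        for match in CANDIDATE.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_valid(digits):
                yield digits

    sample = "order 4111 1111 1111 1111 confirmed; ticket id 1234567890123456"
    print(list(find_card_numbers(sample)))  # only the first (a valid test number) matches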


Smart Grids and Security (Intro)

It’s not often, but every now and then there are people in our lives we can clearly identify as having a massive impact on our careers. I don’t mean someone we liked to work with, but someone who gave us that big break, opportunity, or push in the right direction that led us to where we are today. In my case I know exactly who helped me make the transition from academia to the career I have today. I met Jim Brancheau while I was working at the University of Colorado as a systems and network administrator. He was an information systems professor in the College of Business, and some friends roped me into taking his class even though I was a history and molecular biology major. He liked my project on security, hired me to do some outside consulting with him, and eventually hired me full time after we both left the University. That company was acquired by Gartner, and the rest is history. Flat out, I wouldn’t be where I am today without Jim’s help.

Jim and I ended up on different teams at Gartner, and we both eventually left. After taking a few years off to ski and hike, Jim’s back in the analyst game focusing on smart grids and sustainability at Carbon Pros, and he’s currently researching and writing a new book for the corporate world on the topic. When he asked me to help out on the security side, it was an offer karma wouldn’t let me refuse. I covered energy/utilities and SCADA issues back in my Gartner days, but smart grids amplify those issues to a tremendous degree.

Much of the research I’ve seen on security for smart grids has focused on metering systems, but the technologies are extending far beyond smarter meters into our homes, cars, and businesses. For example, Ford just announced a vehicle-to-grid communications system for hybrid and electric vehicles. Your car will literally talk to the grid when you plug it in, to enable features such as only charging at off-peak rates. I highly recommend you read Jim’s series on smart grids and smart homes to get a better understanding of where we are headed. For example, there are opt-in programs where you allow your power company to send signals to your house to change your thermostat settings if they need to broadly reduce consumption during peak hours. That’s a consumer example, but we expect to see similar technologies adopted by the enterprise, in large part due to expected cost-savings incentives.

Thus when we talk about smart grids, we aren’t going to limit ourselves to next-gen power grid SCADA or bidirectional meters, but will try to frame the security issues for the larger ecosystem that’s developing. We also have to discuss legal and regulatory issues, such as the draft NIST and NERC/FERC standards, as well as technology transition issues (since legacy infrastructure isn’t going away anytime soon). Jim kicked off our coverage with this post over at Carbon Pros, which introduces the security and privacy principles to the non-security audience. I’d like to add a little more depth in terms of how we frame the issue, and in future posts we’ll dig into these areas.

From a security perspective, we can think of a smart grid as five major components in two major domains. On the utilities side, there are power generation, transmission, and the customer (home or commercial) interface (where the wires drop from the pole to the wall).
Within the utilities side there are essentially three overlapping networks – the business network (office, email, billing), the control process/SCADA network (control of generation and transmission equipment), and now, the emerging smart grid network (communications with the endpoint/user). Most work and regulation in recent years (the CIP requirements) have focused on defining and securing the “electronic security perimeter”, which delineates the systems involved in the process control side, including both legacy SCADA and IP-based systems. In the past, I’ve advised utilities clients to limit the size and scope of their electronic security perimeter as much as possible to reduce both risks and compliance costs. I’ve even heard of some organizations that put air gaps back in place after originally co-mingling the business and process control networks to help reduce security and compliance costs.

The smart grid potentially expands this perimeter by extending what’s essentially a third network, the smart grid network, to the meter in the residential or commercial site. That meter is thus the interface to the outside world, and has been the focus of much of the security research I’ve seen. There are clear security implications for the utility, ranging from fraud to distributed denial of generation attacks (imagine a million meters under-reporting usage all at the same time).

But the security domain also extends into the endpoint installation as it interfaces with the external side (the second domain), which includes the smart building/home network, and smart devices (as in refrigerators and cars). The security issues for residential and commercial consumers are different but related, and expand into privacy concerns. There could be fraud, denial of power, privacy breaches, and all sorts of other potential problems. This is compounded by the decentralization and diversity of smart technologies, including a mix of powerline, wireless, and IP tech.

In other words, smart grid security isn’t merely an issue for electric utilities – there are enterprise and consumer requirements that can’t be solely managed by your power company. They may take primary responsibility for the meter, but you’ll still be responsible for your side of the smart network and your usage of smart appliances. On the upside, although there’s been some rapid movement on smart metering, we still have time to develop our strategies for management of our side (consumption) of smart energy technologies. I don’t think we will all be connecting our thermostats to the grid in the next few months, but there are clearly enterprise implications and we need to start investigating and developing strategies for smart grid


Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions

If you were looking for a business justification for database assessment, the joint USSS/FBI advisory referenced in Rich’s last post on Recent Breaches should be more than sufficient. What you are looking at is not a checklist of exotic security measures, but fairly basic security that should be implemented in every production database. All of the preventative controls listed in the advisory are, for the most part, addressed with database assessment scanners. Detection of known SQL injection vulnerabilities, detecting use of external stored procedures like xp_cmdshell, and avenues for obtaining Windows credentials from a compromised database server (or vice-versa) are basic policies included with all database vulnerability scanners – some freely available for download. It is amazing that large firms like Heartland, Hannaford, and TJX – who rely on databases for core business functions – get basic database security so wrong. These attacks are a template for anyone who cares to break into your database servers. If you don’t think you are a target because you are not storing credit card numbers, think again! There are plenty of ways for attackers to earn money or commit fraud by extracting or altering the contents of your databases. As a very basic security first step, scan your databases!

Adoption of database-specific assessment technologies has been sporadic outside the finance vertical because providing business justification is not always simple. For one, many firms already have generic forms of assessment and inaccurately believe they already have that function covered. If they do discover missing policies, they often get the internal DBA staff to paper over the gaps with homegrown SQL queries.

As an example of what I mean, I want to share one story about a customer who was inspecting database configurations as part of their internal audit process. They had about 18 checks, mostly having to do with user permissions, and these settings formed part of the SOX and GLBA controls. What took me by surprise was the customer’s process: twice a year a member of the internal audit staff walked from database server to database server, logged in, ran the SQL queries, captured the results, and then moved on to the other 12 systems. When finished, all of the results were dumped into a formatting tool so the control reports could be made ready for KPMG’s visit. Twice a year she made the rounds, each time taking a day to collect the data, and a day to produce the reports. When KPMG advised that the reports be run quarterly, the task came to be perceived as a burden, and they began a search to automate it – only then did the cost in lost productivity warrant investment in automation. Their expectations going in were simply that the cost of the product should not grossly exceed a week or two of employee time.

Where it got interesting was when we began the proof of concept – it turned out several other groups had been manually running scripts and had much the same problem. We polled other organizations across the company, and found similar requirements from internal audit, security, IT management, and DBAs alike. Not only was each group already performing a small but critical set of security and compliance tasks, they each had another list of things they would like to accomplish. While no single group could justify the expense, taken together it was easy to see how automation saved on manpower alone.
We then multiplied the work across dozens, or in some cases thousands, of databases – and discovered there had been ample financial justification all along. Each group might have been motivated by compliance, operations efficiency, or threat mitigation, but as their work required separation of duties, they had not cooperated on obtaining tools to solve a shared problem. Over time, we found this customer example to be fairly common.

When considering business justification for the investment in database assessment, you are unlikely to find any single irresistible reason you need database assessment technology. You may read product marketing claims that say “Because you are compelled by compliance mandate GBRSH 509 to secure your database”, or some nonsense like that, but it is simply not true. There are security and regulatory requirements that compel certain database settings, but nothing that mandates automation. But there are two very basic reasons why you need to automate the assessment process: the scope of the task, and the accuracy of the results. The depth and breadth of issues to address are beyond the skill of any one of the audiences for assessment. Let’s face it: the changes in database security issues alone are difficult to keep up with – much less compliance, operations, and evolutionary changes to the database platform itself. Coupled with the boring and repetitive nature of running these scans, it’s ripe territory for shortcuts and human error.

When considering a database assessment solution, the following are common market drivers for adoption. If your company has more than a couple databases, odds are all of these factors will apply to your situation:

  • Configuration Auditing for Compliance: Periodic reports on database configuration and setup are needed to demonstrate adherence to internal standards and regulatory requirements. Most platforms offer policy bundles tuned for specific regulations such as PCI, Sarbanes-Oxley, and HIPAA.
  • Security: Fast and effective identification of known security issues and deviations from company and industry best practices, with specific remediation advice.
  • Operational Policy Enforcement: Verification of work orders, operational standards, patch levels, and approved methods of remediation are valuable (and possibly required).

There are several ways this technology can be applied to promote and address the requirements above, including:

  • Automated verification of compliance and security settings across multiple heterogeneous database environments.
  • Consistency in database deployment across the organization, especially important for patch and configuration management, as well as detection and remediation of exploits commonly used to gain access.
  • Centralized policy management, so that a single policy can be applied across multiple (possibly geographically dispersed) locations.
  • Separation of duties between IT, audit, security, and database administration personnel.
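Here is a minimal sketch of what “automation” means in practice for the audit story above: a script that runs the same handful of checks against every database in an inventory and dumps the results into a report, instead of someone walking from server to server twice a year. The inventory, the get_connection() helper, and the example checks are assumptions for illustration; a commercial product adds scheduling, credential management, and far deeper policy coverage.

    import csv
    import datetime

    # A couple of illustrative checks (SQL Server syntax); a real policy set is much larger.
    CHECKS = {
        "logins_with_blank_passwords":
            "SELECT name FROM sys.sql_logins WHERE PWDCOMPARE('', password_hash) = 1",
        "xp_cmdshell_enabled":
            "SELECT value_in_use FROM sys.configurations WHERE name = 'xp_cmdshell'",
    }

    def get_connection(server):
        """Placeholder: return a DB-API connection for the given server
        (e.g., via pyodbc with the scanning service account)."""
        raise NotImplementedError

    def scan(inventory):
        """Run every check against every server and collect the findings."""
        findings = []
        for server in inventory:
            conn = get_connection(server)
            cursor = conn.cursor()
            for check, sql in CHECKS.items():
                cursor.execute(sql)
                findings.append({
                    "server": server,
                    "check": check,
                    "result": str(cursor.fetchall()),
                    "scanned_at": datetime.datetime.now().isoformat(),
                })
            conn.close()
        return findings

    def write_report(findings, path="assessment_report.csv"):
        """Dump findings to CSV for the auditors' formatting tool."""
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["server", "check", "result", "scanned_at"])
            writer.writeheader()
            writer.writerows(findings)

    # Usage: write_report(scan(["db01.example.com", "db02.example.com"]))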


Heartland Hackers Caught; Answers and Questions

UPDATE: We have a follow-up article with what may be the details of the attacks, based on the FBI/Secret Service advisory that went out earlier this year.

The indictment today of Albert Gonzales and two co-conspirators for hacking Hannaford, 7-Eleven, and Heartland Payment Systems is absolutely fascinating on multiple levels. Most importantly from a security perspective, it finally reveals details of the attacks. While we don’t learn the specific platforms and commands, the indictment provides far greater insights than speculation by people like me. In the “drama” category, we learn that the main perpetrator is the same person who hacked TJX (and multiple other retailers), and was the Secret Service informant who helped bring down the Shadowcrew. Rather than rehashing the many articles popping up, let’s focus on the security implications and lessons hidden in the news reports and the indictment itself. Let’s start with a short list of the security issues and lessons learned, then dig into more detail on the case and the perpetrators themselves.

To summarize the security issues:

  • The attacks on Hannaford, Heartland, 7-Eleven, and the other 2 retailers used SQL injection as the primary vector.
  • In at least some cases, it was not SQL injection of the transaction network, but of another system used to get to the transaction network.
  • In at least some cases custom malware was installed, which indicates either command execution via the SQL injection, or XSS via SQL injection to attack internal workstations. We do not yet know the details.
  • The custom malware did not trigger antivirus, deleted log files, sniffed the internal network for card numbers, scanned the internal network for stored data, and exfiltrated the data. The indictment doesn’t reveal the degree of automation, or if it was more manually controlled (shell).

The security lessons include:

  • Defend against SQL injection – it’s clearly one of the top vectors for attacks. Parameterized queries, WAFs, and so on.
  • Lock databases to prevent command execution via SQL. Don’t use a privileged account for the RDBMS, and do not enable the command execution features (see the sketch at the end of this post). Then, lock down the server to prevent unneeded network services and software installation (don’t allow outbound curl, for example).
  • Since the bad guys are scanning for unprotected data, you might as well do it yourself. Use DLP to find card data internally. While I don’t normally recommend DLP for internal network traffic, if you deal with card numbers you should consider using it to scan traffic in and out of your transaction network.
  • AV won’t help much with the custom malware. Focus on egress filtering and lockdown of systems in the transaction network (mostly the database and application servers).
  • Don’t assume attackers will only target transaction applications/databases with SQL injection. They will exploit any weak point they can find, then use it to weasel over to the transaction side.
  • These attacks appear to be preventable using common security controls. It’s possible some advanced techniques were used, but I doubt it.

Now let’s talk about more details:

  • This indictment covers breaches of Heartland, Hannaford, 7-Eleven, and two “major retailers” breached in 2007 and early 2008. Those retailers have not been revealed, and we do not know if they are in violation of any breach notification laws.
  • This is the same Albert Gonzales who was indicted last year for breaches of TJ Maxx, Barnes & Noble, BJ’s Wholesale Club, Boston Market, DSW, Forever 21, Office Max, and Sports Authority.
  • A co-conspirator referred to in the indictment as “P.T.” was not indicted. While it’s pure conjecture, I won’t be surprised if this is an informant who helped break the case.
  • Gonzales and friends would identify potential targets, then use a combination of online and physical surveillance to identify weaknesses. Physical visits would reveal the payment system being used (via the point of sale terminals), and other relevant information. When performing online reconnaissance, they would also attempt to determine the payment processor/processing system. In the TJX attacks it appears that wireless attacks were the primary vector (which correlates with the physical visits). In this series, it was SQL injection.
  • Multiple systems and servers scattered globally were used in the attack. It is quite possible that these were part of the web-based exploitation service described in this article by Brian Krebs back in April.
  • The primary vector was SQL injection. We do not know the sophistication of the attack, since SQL injection can be simple or complex, depending on the database and security controls involved. It’s hard to tell from the indictment, but it appears that in some cases SQL injection alone may have been used, while in others it was a way of inserting malware.
  • It is very possible that SQL injection on a less-secured area of the network was used to install malware, which was then used to attack other internal services and transition to the transaction network. Based on information in various other interviews and stories, I suspect this was the case for Heartland, if not other targets. This is conjecture, so please don’t hold me to it.
  • More pure conjecture here, but I wonder if any of the attacks used SQL injection to XSS internal users and download malware into the target organization?
  • Custom malware was left on target networks, and tested to ensure it would evade common AV engines.
  • SQL injection to allow command execution shouldn’t be possible on a properly configured financial transaction system. Most RDBMS systems support some level of command execution, but usually not by default (for current versions of SQL Server, and Oracle after 8 – not sure about other platforms). Thus either a legacy RDBMS was used, or a current database platform that was improperly configured. This would be due either to gross error, or to special requirements that should only have been allowed with additional security controls, such as strict limits on the RDBMS user account and server lockdown (everything from application whitelisting, to HIPS, to external monitoring/filtering).
  • In one case the indictment refers to a SQL injection string used to redirect content to an external server, which seems
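As promised above, here is a minimal sketch of the “don’t use a privileged account for the RDBMS” lesson, expressed as the kind of provisioning script a DBA might run so the web application connects as a narrowly scoped login rather than sa. The names, the DB-API cursor, and the specific grants are my assumptions for illustration; adjust to your own schema and platform.

    # Assumes `cursor` is a DB-API cursor connected to the target SQL Server
    # with administrative rights (e.g., obtained via pyodbc), used once to
    # provision the restricted application account.

    PROVISIONING_STEPS = [
        # A login the web application will use instead of a sysadmin account.
        "CREATE LOGIN webapp_login WITH PASSWORD = 'REPLACE-WITH-STRONG-PASSWORD', "
        "CHECK_POLICY = ON;",

        # Map it to a database user in the application's database.
        "USE OrdersDB;",
        "CREATE USER webapp_user FOR LOGIN webapp_login;",

        # Grant only the table-level access the application actually needs:
        # no server roles, no ability to run xp_cmdshell or alter the schema.
        "GRANT SELECT, INSERT, UPDATE ON dbo.orders TO webapp_user;",
        "GRANT SELECT ON dbo.products TO webapp_user;",
    ]

    def provision(cursor):
        """Run the provisioning statements; a compromised webapp_login now has
        far less to offer an attacker than a shared admin account would."""
        for statement in PROVISIONING_STEPS:
            cursor.execute(statement)
        cursor.commit()  # pyodbc cursors expose commit(); use conn.commit() otherwise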


Recent Breaches: We May Have All the Answers

You know how sometimes you read something and then forget about it until it smacks you in the face again? That’s how I feel right now, after @BreachSecurity reminded me of this advisory from February. To pull an excerpt, it looks like we now know exactly how all these recent major breaches occurred:

Attacker Methodology: In general, the attackers perform the following activities on the networks they compromise:

  • They identify Web sites that are vulnerable to SQL injection. They appear to target MSSQL only.
  • They use “xp_cmdshell”, an extended procedure installed by default on MSSQL, to download their hacker tools to the compromised MSSQL server.
  • They obtain valid Windows credentials by using fgdump or a similar tool.
  • They install network “sniffers” to identify card data and systems involved in processing credit card transactions.
  • They install backdoors that “beacon” periodically to their command and control servers, allowing surreptitious access to the compromised networks.
  • They target databases, Hardware Security Modules (HSMs), and processing applications in an effort to obtain credit card data or brute-force ATM PINs.
  • They use WinRAR to compress the information they pilfer from the compromised networks.

No surprises. All preventable, although clearly these guys know their way around transaction networks if they target HSMs and proprietary financial systems. Seems like almost exactly what happened with CardSystems back in 2004. No snarky comment needed.
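Since xp_cmdshell shows up in that methodology (and in the CardSystems and Heartland cases discussed elsewhere on this site), here is a minimal sketch of checking for it and turning it off on SQL Server. The pyodbc connection details are placeholders; the sp_configure calls are standard SQL Server administration.

    import pyodbc  # placeholder connection details below

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=db01.example.com;UID=dba_admin;PWD=********",
        autocommit=True,
    )
    cursor = conn.cursor()

    # Check the current state of xp_cmdshell.
    cursor.execute(
        "SELECT value_in_use FROM sys.configurations WHERE name = 'xp_cmdshell'"
    )
    enabled = bool(cursor.fetchone()[0])
    print("xp_cmdshell currently enabled:", enabled)

    if enabled:
        # sp_configure requires 'show advanced options' before it will touch xp_cmdshell.
        cursor.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
        cursor.execute("EXEC sp_configure 'xp_cmdshell', 0; RECONFIGURE;")
        print("xp_cmdshell disabled")

    conn.close()

Disabling the procedure is only part of the fix; the application account should never have the rights to re-enable it in the first place.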


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.