
OWASP and SunSec Announcement

Rich wanted me to put up a reminder that he will be speaking at OWASP next Tuesday (September 1, 2009). I’d say where this was located, but I honestly don’t know. He said it was a secret. Also, for those of you in the greater Phoenix area, we are planning SunSec next week on Tuesday as well. Keep the date on your calendar free. Location TBD. We’ll update this post with details next week. Update: Ben Tomhave was nice enough to post SunSec details here.


Burden of Online Fraud

One of my favorite posts of the last week, and one of the scariest, is Brian Krebs’ Washington Post article on how Businesses Are Reluctant to Report Online Fraud. This is not a report on a single major bank heist, but on what many of us have worried about for a long time in Internet fraud: automated, distributed, repeatable theft. The worry has never been the single million-dollar theft, but scalable, repeatable theft of electronic funds. We are going to be hearing a lot more about this in the coming year.

The question that will be debated is who’s to blame in these situations. The customer, for having almost no security on their small business computer and being completely ignorant of basic security precautions? The bank, for having crummy authentication and fraud detection despite understanding security threats as part of its business model? Is it contributory negligence? This issue will gain more national attention as more businesses have their bank say “too bad, your computer was hacked!” Let’s face it: the bank has your money. They are the scorekeeper, and if they say you withdrew your money, the burden of proof is on you to show they are wrong. And no one wants to make them mad for fear they might tell you to piss off. The lines of responsibility need to be drawn.

I feel like I am the last person in the U.S. to say this, but I don’t do my banking online. Would it be convenient? Sure, but I think it’s too risky. My bank account information? Not going to see a computer, or at least not a computer I own, because I cannot afford to make a mistake. I asked a handful of security researchers I was having lunch with during Defcon – who know a heck of a lot more about web hacking than I do – if they did their banking online. They all said they did: “It’s convenient.” Me? I have to use my computer for research, and I am way too worried that I would make one simple mistake, be completely hosed, and have to rebuild from scratch … after my checking account was cleaned out.

In each of the last two years, the majority of the people I spoke with at Black Hat/Defcon … no, let’s make that the overwhelming majority of the people I have spoken with overall, had an ‘Oh $&(#’ moment at the conference. At some point we said to ourselves, “These threats are really bad!” Granted, many of the security researchers I spoke with take extraordinary precautions, but we need to recognize how badly the browsers and web apps we use every day are fundamentally broken from a security standpoint. We need to acknowledge that out of the box, PCs are insecure and the people who use them are willfully ignorant of security.

I may be the last person with a computer who simply won’t budge on this subject. I even get mad when the bank sends me a credit card that has ATM capabilities as a convenience for me. I did not ask for that ‘feature’ and I don’t want the liability. While the banks keep sending me incentives and encouragements to do it, I think online banking remains too risky unless you have a dedicated machine. Maybe banks will start issuing smart tokens or other additional security measures to help, but right now the infrastructure appears broken to me.


Database Assessment Solutions, Part 5: Operations and Compliance Policies

Technically speaking, the market segment we are talking about is “Database Vulnerability Assessment”. You might have noticed that we titled this series “Database Assessment”. No, it was not just because the titles of these posts are too long (they are). The primary motivation for this name was to stress that this is not just about vulnerabilities and security. While the genesis of this market is security, compliance with regulatory mandates and operations policies is what drives the buying decisions, as noted in Part 2. (For easy reference, here are Part 1, Part 3, and Part 4.) In many ways, compliance and operational consistency are harder problems to solve because they require more work and tuning on your part, and that need for customization is our focus in this post.

In 4GL programming we talk about objects and instantiation. The concept of instantiation is to take a generic object and give it life: make it a real instance of the generic thing, with unique attributes and possibly behavior. You need to think about databases in the same way, because once started up, no two are alike. There may be two installations of DB2 that serve the same application, but they are run by different companies, store different data, are managed by different DBAs, have altered the base functions in various ways, run on different hardware, and have different configurations. This is why configuration tuning can be difficult: unlike vulnerability policies that detect specific buffer overflows or SQL injection attacks, operational policies are company specific and are derived from best practices.

We have already listed a number of the common vulnerability and security policies. The following is a list of policies that apply to IT operations on the database environment or system:

Operations Policies

• Password requirements (lifespan, composition)
• Data files (number, location, permissions)
• Audit log files (presence, permissions, currency)
• Product version (version control, patches)
• Itemize (unneeded) functions
• Database consistency checks (e.g., DBCC CHECKDB on SQL Server)
• Statistics (statspack, auto-statistics)
• Backup report (last, frequency, destination)
• Error log generation and access
• Segregation of admin role
• Simultaneous admin logins
• Ad hoc query usage
• Discovery (databases, data)
• Remediation instructions & approved patches
• Orphaned databases
• Stored procedures (list, last modified)
• Changes (files, patches, procedures, schema, supporting functions)

There are a lot more, but these should give you an idea of the basics a vendor should have in place, and allow you to contrast with the general security and vulnerability policies we listed in Part 4. (A sketch of what one such check looks like under the hood appears below.)

Compliance Policies

Most regulatory requirements, from industry or government, are fulfilled by the access control and system change policies we have already introduced. PCI adds a few extra requirements in the verification of security settings, access rights, and patch levels, but compliance policies are generally a subset of security rules and operational policies. As the list varies by regulation, and the requirements change over time, we are not going to list them separately here. Since compliance is likely what is motivating your purchase of database assessment, you must dig into vendor claims to verify they offer what you need. It gets tricky because some vendors tout compliance, for example “configuration compliance”, which only means you will be compliant with their list of accepted settings.
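To ground the operations list above, here is a minimal sketch of what a single policy check might look like under the hood. This is illustrative only – it assumes an Oracle target and a hypothetical internal standard of a 90-day password lifetime, not any particular vendor’s implementation:

```sql
-- A minimal sketch of one operations policy check (Oracle syntax).
-- Hypothetical standard: passwords must expire within 90 days.
-- Profiles set to DEFAULT inherit from the DEFAULT profile and are
-- skipped here; a real policy would resolve the inherited value too.
SELECT profile, limit
FROM   dba_profiles
WHERE  resource_name = 'PASSWORD_LIFE_TIME'
AND    DECODE(limit, 'UNLIMITED', 91,
                     'DEFAULT',   NULL,
                     TO_NUMBER(limit)) > 90;
```

A passing result is an empty set; any row returned is a finding, reported alongside whatever remediation text the policy carries.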
Such vendor-defined policies may not be endorsed by anyone other than the vendor, and may have only coincidental relevance to PCI or SOX. In their defense, most commercially available database assessment platforms are sufficiently evolved to offer packaged sets of relevant policies for regulatory compliance, industry best practices, and detection of security vulnerabilities across all major database platforms. They offer sufficient breadth and depth to get you up and running very quickly, but you will need to verify your needs are met, and if not, what the deviation is. What most of the platforms do not do very well is allow for easy policy customization, multiple policy groupings, policy revisions, and creating copies of the “out of the box” policies provided by the vendor. You need all of these features for day-to-day management, so let’s delve into each of these areas, starting with policy customization.

Policy Customization

Remember how I said in Part 3 that “you are going to be most interested in evaluating assessment tools on how well they cover the policies you need”? That is true, but probably not for the reasons you thought. What I deliberately omitted is that the policies you are interested in prior to product evaluation will not be the same policy set you are interested in afterwards. This is especially true for regulatory policies, which grow in number and change over time. Most DBAs will tell you that the steps a database vendor advises to remediate a problem may break your applications, so you will need a customized set of steps appropriate to your environment. Further, most enterprises have evolved database usage policies far beyond “best practices”, and greatly augment what the assessment vendor provides. This means both the set of policies, and the contents of the policies themselves, will need to change. And I am not just talking about criticality, but description, remediation, the underlying query, and the result set required to demonstrate adherence. As you learn more about what is possible, as you refine your internal requirements, or as auditor expectations evolve, you will experience continual drift in your policy set. Sure, you will have static vulnerability and security policies, but as the platform, process, and requirements change, your operations and compliance policy sets will be fluid. How easy it is to customize policies and manage policy sets is extremely important, as it directly affects the time and complexity required to manage the platform. Is it a minute to change a policy, or an hour? Can the auditor do it, or does it require a DBA? Don’t learn this after you have made your investment. On a day-to-day basis, this will be the single biggest management challenge you face, on par with remediation costs.

Policy Groupings & Separation of Duties

For


Database Assessment Solutions, Part 4: Vulnerability and Security Policies

I was always fascinated by the Sapphire/Slammer worm. The simplicity of the attack and how quickly it spread were astounding. Sure, it didn’t have a malicious payload, but the mere possibility that it could have created quite a bit of panic. This event is what I consider the dawn of database vulnerability assessment tools. From that point on, it seemed like every couple of weeks we were learning of new database vulnerabilities on every platform. Compliance may drive today’s assessment purchases, but vulnerabilities are always what grab the media’s attention, and vulnerability detection remains a key feature for any database security product.

Prior to writing this post I went back and looked at all the buffer overflow and SQL injection attacks on DB2, Oracle, and SQL Server. Looking at them – especially those on SQL Server – it struck me why half of the administrative functions had vulnerabilities: whoever wrote them assumed that the functions were inaccessible to anyone who was not a DBA. The functions were conceptually supposed to be gated by access control, and therefore safe. It was not so much that the programmers were not thinking about security, but that they made incorrect assumptions about how database internals like the parser and preprocessor worked. I have always said that SQL injection is an attack on the database through an application. That’s true, but technically the attacks also pass through internal database processing layers prior to the exploit, as well as an external application layer. Looking back at the details, it seems almost inevitable that we would have these vulnerabilities, given the complexity of the database platforms and the lack of security training among software developers.

Anyway, enough rambling about database security history. Understanding database vulnerabilities and knowing how to remediate them – whether through patches, workarounds, or third party detection tools – requires significant skill and training. Policy research is expensive, and so is writing and testing these policies. In my experience over the four years that I helped define and build database assessment policies, it took an average of three days to construct a policy after a vulnerability was understood: a day to write and optimize the SQL test case, a day to create the description and put together remediation information, and another day to test on supported platforms. Multiply that by 10 policies across 6 different platforms and you get an idea of the cost involved. Policy development requires a full-time team of skilled practitioners to manage and update vulnerability and security policies across the half dozen platforms commonly supported by the vendors. This is not a reasonable burden for non-security vendors to take on, so if database security is an issue, don’t try to do this in-house! Buying an aftermarket product spares your organization from developing these checks, protecting you from the specific threats hackers are likely to deploy, as well as more generic security threats.

What specific vulnerability checks should be present in your database assessment product? In a practical sense, it does not matter: specific vulnerabilities come and go too fast for any list to stay relevant. What I am going to do instead is provide a list of general security checks that should be present, and list the classes of vulnerabilities any product you evaluate should have policies for.
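Before we get to those lists, here is a feel for the SQL test cases mentioned above. This is a minimal, hypothetical sketch – it assumes a SQL Server 2005 or later target and checks whether the xp_cmdshell external procedure (a perennial entry in the vulnerability classes below) is enabled; it is not any vendor’s actual policy:

```sql
-- Hypothetical assessment test case (SQL Server 2005+ syntax): flag servers
-- where the xp_cmdshell external procedure is enabled. xp_cmdshell lets a
-- SQL login run OS commands, a favorite post-exploitation step.
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name = 'xp_cmdshell'
AND    value_in_use = 1;   -- any row returned is a finding
```

Even a trivial check like this needs the surrounding description, remediation guidance, and per-version testing that account for most of those three days.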
Then I will cover other relevant buying criteria to consider.

General Database Security Policies

• List database administrator accounts and how they map to domain users
• Product version (security patch level)
• List users with admin/special privileges
• List users with access to sensitive columns or data (credit cards, passwords)
• List users with access to system tables
• Database access audit (failed logins)
• Authentication method (domain, database, mixed)
• List locked accounts
• Listener / SQL Agent / UDP network configuration (passwords in clear text, ports, use of named pipes)
• System tables (subset) not updatable
• Ownership chains
• Database links
• Sample databases (Northwind, pubs, scott/tiger)
• Remote systems and data sources (remote trust relationships)

Vulnerability Classes

• Default passwords
• Weak/blank/same-as-login passwords
• Public roles or guest accounts with access to anything
• External procedures (CmdExec, xp_cmdshell, active scripting, extproc, or any programmatic access to OS level code)
• Buffer overflow conditions (XP, admin functions, Slammer/Sapphire, HEAP, etc. – too numerous to list)
• SQL injection (1=1, most admin functions, temporary stored procedures, database name as code – too numerous to list)
• Network (connection reuse, man in the middle, named pipe hijacking)
• Authentication escalation (XStatus / XP / SP, exploiting batch jobs, DTS leakage, remote access trust)
• Task injection (Webtasks, sp_xxx, MSDE service, reconfiguration)
• Registry access (SQL Server)
• DoS (named pipes, malformed requests, IN clause, memory leaks, page locks creating deadlocks)

There are many more. It is really important to understand that the total number of policies in any given product is irrelevant. As an example, assume your database has two modules with buffer overflow vulnerabilities, each exploitable eight different ways. Comparing two assessment products, one might have 16 policies checking for each individual exploit, while the other has two policies checking for the two vulnerable modules. These products are functionally equivalent, but one vendor touts an order of magnitude more policies, with no actual benefit. Do NOT let the number of policies influence your buying decision, and don’t get bogged down in what I call a “policy escalation war”. Compare functional equivalence, and recognize that if one product can check for more vulnerabilities in fewer queries, it runs faster! It may take a little work on your part to comb through the policies to make sure what you need is present, but you need to perform that inspection regardless. You will also want to carefully confirm that the assessment platform covers the database versions you have. And just because your company supposedly migrated to Oracle 11 some time back does not mean you can discount Oracle 9 database support, because odds are better than even that you have at least one still hanging around. Or you


Understanding and Choosing a Database Assessment Solution, Part 3: Data Collection

In the first part of this series we introduced database assessment as a fully differentiated form of assessment scan, and in part two we discussed some of the use cases and business benefits database assessment provides. In this post we begin dissecting the technology, taking a close look at the deployment options available. Which of your requirements are addressed, and how, is more a function of the way the product is implemented than of the policies it contains.

Architecturally, there is little variation among database assessment platforms. Most are two-tiered systems, either appliances or pure software, with the data storage and analysis engine located away from the target database server. Many vendors offer remote credentialed scans, with some providing an optional agent to assist with the data collection issues we will discuss later. Things get interesting around how the data is collected, and that is the focus of this post.

As a customer, the most important criteria for evaluating assessment tools are how well they cover the policies you need, and how easily they integrate with your organization’s systems and processes. The single biggest technology factor behind both is how data is collected from the database system. Data collection methods dictate what information will be available to you – and as a direct result, what policies you will be able to implement. Further, how the scanner interacts with the database plays a deciding role in how you will deploy and manage the product. Obtaining and installing credentials, mapping permissions, agent installation and maintenance, secure remote sessions, separation of duties, and creation of custom policies are all affected by the data collection architecture.

Database assessment begins with the collection of database configuration information, and each vendor offers a slightly different combination of data collection capabilities. In this context, I am using the word ‘configuration’ in a very broad sense, covering everything from resource allocation (disk, memory, links, tablespaces) and operational allocation (user access rights, roles, schemas, stored procedures) to database patch levels, network settings, and the features/functions that have been installed into the database system. Pretty much anything you could want to know about a database. There are three ways to collect configuration and vulnerability information from a database system:

Credentialed Scanning: A credentialed database scan leverages a user account to gain access to the database system internals. Once logged into the system, the scanner collects configuration data by querying system tables and sending the results back for analysis. The scan can be run over the network or through a local agent proxy – each provides advantages and disadvantages, which we will discuss later. In both cases the scanner connects to the database communication port with the user credentials provided, in the same way as any other application. A credentialed database scan potentially has access to everything a database administrator has, and returns information that is not available outside the database. This method of collection is critical, as it determines settings such as password expiration, administrative roles, active and locked user accounts, internal and external stored procedures, batch jobs, and database/domain user account mismatches.
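As an illustration of what a credentialed scan actually runs once connected, here is a minimal sketch against SQL Server system views. The specific checks chosen are hypothetical, but queries of this shape are only possible with database credentials:

```sql
-- What a credentialed scan does under the hood (SQL Server 2005+ syntax):
-- query system views for account state that is invisible from outside.
SELECT name,
       is_disabled,                                   -- disabled logins
       is_policy_checked,                             -- 0 = password policy not enforced
       LOGINPROPERTY(name, 'IsLocked')            AS is_locked,
       LOGINPROPERTY(name, 'DaysUntilExpiration') AS days_until_expiration
FROM   sys.sql_logins;
```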
It is recommended that a dedicated account with (mostly) read-only permissions be issued for the vulnerability scanning team, to limit exposure in case of a system/account compromise.

External Scanning (File & OS Inspection): This method of data collection deduces database configuration by examining settings outside the database. This type of scan may also require credentials, but not database user credentials. External assessment has two components: file system and operating system. Some, but not all, configuration information resides in files stored as part of the database installation. A file system assessment examines both the contents and metadata of initialization and configuration files to determine database setup – such as permissions on data files, network settings, and control file locations. In addition, OS utilities are used to discover vulnerabilities and security settings not determinable by examining files within the database installation. The user account the database runs as, registry settings, and simultaneous administrator sessions are all examples of information accessible this way. While there is overlap between the data collected by credentialed and external scans, most of the information is distinct and relevant to different policies. Most traditional OS scanners that claim to offer database scanning provide this type of external assessment.

Network (Port) Inspection: In a port inspection, the scanner performs a mock connection to a database communication port; during the network ‘conversation’ either the database explicitly returns its type and revision, or the scanner deduces them from other characteristics of the response. Once the scanner understands the patch revision of the database, a simple cross-reference against known vulnerabilities is generated. Older databases leak enough information that scanners can make educated guesses at configuration settings and installed features. This form of assessment is the typical “quick and dirty” option, providing basic patch inspection with minimal overhead and without requiring agents or credentials. But network assessment lacks the user and feature checks required by many security and audit groups, and database vendors have blocked most of the information leakage from simple connections, so this type of scan is falling out of favor.

There are other ways to collect information, including eavesdropping and penetration testing, but they are not reliable; worse, penetration testing and exploitation can have catastrophic side effects on production databases. In this series we will ignore those options. The bulk of configuration and vulnerability data is obtained from credentialed scans, so they should be the bare minimum data collection technique in any assessment you consider. To capture the complete picture of database setup and vulnerabilities, you need both a credentialed database scan and an inspection of the underlying platform the database is installed on. You can accomplish this by leveraging a different (possibly pre-existing) OS assessment scanning tool, or by obtaining this information as part of your database assessment. In either case, this is where things get a little tricky, and require careful attention on your part to make sure you get the functions you need without introducing


Friday Summary – August 21, 2009

I’m a pretty typical guy. I like beer, football, action movies, and power tools. I’ve never been overly interested in kids, even though I wanted them eventually. It isn’t that I don’t like kids, but until they get old enough to challenge me in Guitar Hero, they don’t exactly hold my attention. And babies? I suppose they’re cute, but so are puppies and kittens, and they’re actually fun to play with, and easier to tell apart. This all, of course, changed when I had my daughter (just under 6 months ago). Oh, I still have no interest in anyone else’s baby, and until the past couple weeks was pretty paranoid about picking up the wrong one from daycare, but she definitely holds my attention better than (most) puppies. I suppose it’s weird that I always wanted kids, just not anyone else’s kids.

Riley is in one of those accelerated learning modes right now. It’s fascinating to watch her eyes, expressions, and body language as she struggles to grasp the world around her (literally, anything within arm’s reach + 10). Her powers of observation are frightening… kind of like a superpower of some sort. It’s even more interesting when her mind is running ahead of her body, as she struggles with a task she clearly understands but doesn’t have the muscle control to pull off. And when she’s really motivated to get that toy/cat? You can see every synapse and sinew strain to achieve her goal with complete and utter focus. (The cats do that too, but only if it involves food or the birds that taunt them through the window.)

A few times on the Ranting Roundtable you hear us call security folks lazy or apathetic. We didn’t mean everyone, and it’s a general statement that extends far beyond security. To be honest, most people, even hard working people, are pretty resistant to change; to doing things in new ways, even if they’re better. In every industry I’ve ever worked in, the vast majority of people didn’t want to be challenged. Even in my paramedic and firefighter days people would gripe constantly about changes that affected their existing work habits. They might hop on some new car-crushing tool, but god forbid you change their shift structure or post-incident paperwork. And go take any CPR class these days, with the new procedures, and you’ll hear a never-ending rant from the old timers who have no intention of changing how many stupid times they pump and blow per minute.

Not to overdo an analogy (well, that is what we analysts tend to do), but I wish more security professionals approached the world like my daughter: with intense observation, curiosity, adaptability, drive, and focus. Actually, she’s kind of like a hacker – drop her by something new, and her little hands start testing (and breaking) anything within reach. She’s constantly seeking new experiences and opportunities to learn, and I don’t think those are traits that have to stop once she gets older.

No, not all security folks are lazy, but far too many lack the intellectual curiosity that’s so essential to success. Security is the last fracking profession to join if you want stability or consistency. An apathetic, even if hardworking, security professional is as dangerous as he or she is worthless. That’s why I love security; I can’t imagine a career that isn’t constantly changing and challenging. I think it’s this curiosity and drive that defines ‘hacker’, no matter the color of the hat. All security professionals should be hackers. (Despite that silly CISSP oath.)

Don’t forget that you can subscribe to the Friday Summary via email.
And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

• Rich was quoted several times in the Dark Reading article “Mega-Breaches Employed Familiar, Preventable Attacks”.
• Rich’s Macworld article on totally paranoid web browsing went live. It will also be in the upcoming print edition.
• Dan Goodin at the Register mentioned our article on the Heartland breach details.
• Our Heartland coverage also hit Slashdot (and the server didn’t get crushed, which is always nice).
• Rich and Martin hit the usual spectrum of security issues in Episode 163 of The Network Security Podcast.
• Rich, Mike Rothman, Nick Selby, Alex Hutton, and Josh Corman let loose in the very first Ranting Roundtable – PCI Edition.

Favorite Securosis Posts

• Rich: With all the discussion around Heartland, Adrian’s post on Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions is very timely. Any time we talk about technology we should be providing a business justification.
• Adrian: With all the discussion around Heartland, it’s nice to get some confirmation from various parties with New Details, and Lessons, on Heartland Breach.

Other Securosis Posts

• The Ranting Roundtable, PCI Edition
• Understanding and Choosing a Database Assessment Solution, Part 3: Data Collection
• Smart Grids and Security (Intro)
• New Details, and Lessons, on Heartland Breach
• Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions
• Recent Breaches: We May Have All the Answers
• Heartland Hackers Caught; Answers and Questions

Project Quant Posts

• We are close to releasing the next round of Quant data… so stand by…

Favorite Outside Posts

• Adrian: Maybe not my favorite post of the week, as this is sad. Strike three! My offer still stands. Are you listening, University of California at Berkeley?
• Rich: It’s easy to preach security, “trust no one”, and be all cynical. Now drop yourself in the middle of Africa, with limited resources and few local contacts, and see if you can get by without taking a few leaps of faith. Johnny Long’s post at the Hackers for Charity blog shows what happens when a security pro is forced to jump off the cliff of trust.

Top News and Posts

• Indictments handed out for the Heartland and Hannaford breaches.
• Nice post by Brickhouse Security on iPhone spyware.
• The role of venture funding in the security market – is the well dry?
• I swear Corman wrote up his 8 Dirty Secrets of the Security


Understanding and Choosing a Database Assessment Solution, Part 2: Buying Decisions

If you were looking for a business justification for database assessment, the joint USSS/FBI advisory referenced in Rich’s last post on Recent Breaches should be more than sufficient. What you are looking at is not a checklist of exotic security measures, but fairly basic security that should be implemented in every production database. All of the preventative controls listed in the advisory are, for the most part, addressed by database assessment scanners. Detection of known SQL injection vulnerabilities, detection of external stored procedures like xp_cmdshell, and avenues for obtaining Windows credentials from a compromised database server (or vice-versa) are basic policies included with all database vulnerability scanners – some freely available for download. It is amazing that large firms like Heartland, Hannaford, and TJX – which rely on databases for core business functions – get basic database security so wrong. These attacks are a template for anyone who cares to break into your database servers. If you don’t think you are a target because you are not storing credit card numbers, think again! There are plenty of ways for attackers to earn money or commit fraud by extracting or altering the contents of your databases. As a very basic security first step, scan your databases!

Adoption of database-specific assessment technologies has been sporadic outside the finance vertical, because providing business justification is not always simple. For one, many firms already have generic forms of assessment, and inaccurately believe they already have this function covered. If they do discover missing policies, they often get the internal DBA staff to paper over the gaps with homegrown SQL queries.

As an example of what I mean, I want to share a story about a customer who was inspecting database configurations as part of their internal audit process. They had about 18 checks, mostly having to do with user permissions, and these settings formed part of their SOX and GLBA controls. What took me by surprise was the customer’s process: twice a year, a member of the internal audit staff walked from database server to database server, logged in, ran the SQL queries, captured the results, and then moved on to the other 12 systems. When finished, all of the results were dumped into a formatting tool so the control reports could be made ready for KPMG’s visit. Twice a year she made the rounds, each time taking a day to collect the data and a day to produce the reports. When KPMG advised that the reports be run quarterly, the task came to be perceived as a burden, and they began a search to automate it – only then did the cost in lost productivity warrant investment in automation. Their expectation going in was simply that the cost of the product should not grossly exceed a week or two of employee time.

Where it got interesting was when we began the proof of concept: it turned out several other groups had been manually running scripts, and had much the same problem. We polled other organizations across the company, and found similar requirements from internal audit, security, IT management, and DBAs alike. Not only was each group already performing a small but critical set of security and compliance tasks, each had another list of things they would like to accomplish. While no single group could justify the expense, taken together it was easy to see how automation saved on manpower alone.
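For a sense of what those hand-run, homegrown queries look like, here is a minimal sketch. It assumes an Oracle target and a hypothetical user-permission control, not the customer’s actual scripts:

```sql
-- A typical hand-run audit check (Oracle syntax, hypothetical control):
-- who holds the DBA role, and can they grant it to others?
SELECT grantee, granted_role, admin_option
FROM   dba_role_privs
WHERE  granted_role = 'DBA';
```

Multiply a handful of queries like this across 13 servers, copy and paste the results into a report, and you can see where the two days went.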
We then multiplied the work across dozens – or in some cases thousands – of databases, and discovered there had been ample financial justification all along. Each group might have been motivated by compliance, operational efficiency, or threat mitigation, but as their work required separation of duties, they had not cooperated on obtaining tools to solve a shared problem. Over time, we found this customer example to be fairly common.

When considering the business justification for an investment in database assessment, you are unlikely to find any single irresistible reason you need the technology. You may read product marketing claims like “Because you are compelled by compliance mandate GBRSH 509 to secure your database”, or some such nonsense, but it is simply not true. There are security and regulatory requirements that compel certain database settings, but nothing that mandates automation. There are, however, two very basic reasons to automate the assessment process: the scope of the task, and the accuracy of the results. The depth and breadth of issues to address are beyond the skill of any one of the audiences for assessment. Let’s face it: the changes in database security issues alone are difficult to keep up with – much less compliance, operations, and evolutionary changes to the database platform itself. Coupled with the boring and repetitive nature of running these scans, it’s ripe territory for shortcuts and human error.

When considering a database assessment solution, the following are the common market drivers for adoption. If your company has more than a couple databases, odds are all of these factors apply to your situation:

• Configuration Auditing for Compliance: Periodic reports on database configuration and setup are needed to demonstrate adherence to internal standards and regulatory requirements. Most platforms offer policy bundles tuned for specific regulations such as PCI, Sarbanes-Oxley, and HIPAA (see the sketch at the end of this post).
• Security: Fast and effective identification of known security issues and deviations from company and industry best practices, with specific remediation advice.
• Operational Policy Enforcement: Verification of work orders, operational standards, patch levels, and approved methods of remediation are valuable (and possibly required).

There are several ways this technology can be applied to promote and address the requirements above, including:

• Automated verification of compliance and security settings across multiple heterogeneous database environments.
• Consistency in database deployment across the organization – especially important for patch and configuration management, as well as detection and remediation of exploits commonly used to gain access.
• Centralized policy management, so a single policy can be applied across multiple (possibly geographically dispersed) locations.
• Separation of duties between IT, audit, security, and database administration personnel.
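One implementation detail behind those regulation-specific policy bundles: most platforms map each check to one or more regulatory controls, so a single policy can feed several compliance reports at once. A minimal sketch of that mapping, with an invented table and illustrative control references:

```sql
-- Hypothetical policy-to-regulation mapping: one check, many report bundles.
CREATE TABLE policy_regulation_map (
    policy_id   INTEGER     NOT NULL,   -- e.g., the password-lifetime check
    regulation  VARCHAR(32) NOT NULL,   -- 'PCI', 'SOX', 'HIPAA', ...
    control_ref VARCHAR(64),            -- e.g., 'PCI DSS 8.5.9'
    PRIMARY KEY (policy_id, regulation)
);
```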


Friday Summary – August 14, 2009

Rich and I have been really surprised at the quality of the resumes we have been getting for the intern and associate analyst roles. We are going to cut off submissions some time next week, so send one along if you are interested. The tough part comes in the selection process. Rich is already planning out the training, the cooperative research, and how to set everything up. I have been working with Rich for a year now and we are having fun, and I am pretty sure you will learn a lot as well as have a good time doing it. I look forward to working with whomever we pick, as all of the people who have sent over their credentials are going to be good.

The last couple days have been kind of a waste work-wise. Office cleanup, RSA submissions, changes to my browsing security, and driving around the world to help my wife’s business have put a damper on research and blog writing. Rich tried to warn me that RSA submissions were a pain, even sending me the offline submission requirements document so I could prepare in advance. And I did, only to find both online forms were different, so I ended up rewriting all three submissions.

The office cleanup was the most shocking thing of my week. Throwing out or donating phones, fax machines, answering machines, laser printers, and filing cabinets made me think about how much the home office has changed. I used to say in 1999 that the Internet had really changed things, but it has continued its impact unabated. I don’t have a land line any longer. I talk to people on the computer more than on the cell phone. There is no watch on my wrist, no calendar hanging on the wall, and no phone book in the closet. I don’t go to the library. I get the majority of my news & research through the computer. I use Google Maps every day, and while I still own paper maps, they’re just for places I cannot find online. My music arrives through the computer. I have not rented a DVD in five years. I don’t watch much television; that leisure time has gone to surfing the Internet instead. Books? Airline tickets? Hotels? Movie theaters? Are you kidding me? Almost everything I buy outside of groceries and basic hardware I buy through online vendors. When I shut off the computer because of lightning storms, it’s just like the ‘Over Logging’ episode of South Park where the Internet is gone … minus the Japanese porn.

The Kaminsky and Matasano hacks made Rich and me a little worried. Rich immediately started a review of all our internal systems, and we have re-segmented the network and are making a bunch of other changes. It’s probably overkill for a two-person shop, but we think it needs to be that way. That also prompted the change in how I use browsers and virtual machines, as I am in the process of following Rich’s model (more articles to come discussing specifics): four different browsers, each dedicated to a specific task, plus a couple virtual partitions for general browsing and research. And the entire 1Password migration is taking much more time than I thought.

Anyway, I look forward to getting back to blogging next week, as I am rather excited about the database assessment series. This is one of my favorite topics, and I am having to pare down my research notes considerably to make it fit into reasonably succinct blog posts. Plus Rich has another project to launch that should be a lot of fun as well.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

• Rich and Quine (Zach Lanier) host Episode 162 of The Network Security Podcast.
• Rich’s Open Letter to Robert Carr, CEO of Heartland Payment Systems kicked off a series of responses: a Threatpost reprint with added content, Michael Farnum at Computerworld, and Alex Howard at TechTarget.
• Rich was quoted on CA entering the cloud computing market at IDG.
• Project Quant was referred to in a Computerworld UK post by Amrit Williams.
• Rich wrote an article on iPhone 3GS encryption problems at TidBITS.
• Rich wrote up the iPhone SMS attack for Macworld.

Favorite Securosis Posts

• Rich: Adrian’s start on the database assessment series.
• Adrian: Rich’s biting analysis of Robert Carr’s comments on the Heartland data breach.

Other Securosis Posts

• It’s Thursday the 13th – Update Adobe Flash Day
• Not All Design Flaws Are “Features”
• Database Encryption, Part 7: Wrapping Up.

Project Quant Posts

• Project Quant Version 1.0 Report and Survey Results

Favorite Outside Posts

• Adrian: Like an itch you can’t scratch, I struggle for ways to describe why GRC is a clumsy way to think about security and compliance. Dave Mortman to the rescue with his post on GRC: Why We’re Doing It Wrong. Thanks Dave!
• Rich: Larry Walsh reveals the real truth about security reputations and breaches.

Top News and Posts

• Fortinet plans an IPO.
• Bank of America and Citi warn of a merchant breach in Massachusetts.
• Adobe vulnerabilities and patch management are hitting critical mass.
• Bill Brenner’s interview with Heartland’s CEO. Brandon Williams, Mike Rothman, Andy the IT Guy, and the New School’s Adam Shostack respond.
• Interview with our very good friend, and network engineering master, JJ.
• Mike Dahn on personal responsibility in security.
• USAA now takes deposits via the iPhone. I’ve tested this, and it works great.
• Voting machine attacks are proven to be practical under real world conditions.
• Ryan and Dancho cover Apple’s Mac OS X patch.
• Microsoft releases several security patches.
• Rafal Los on the WordPress Admin Password Reset vulnerability.
• NSSLabs Malware and Phishing report.

Blog Comment of the Week

This week’s best comment comes from Jeff Allen, in response to Rich’s post An Open Letter to Robert Carr, CEO of Heartland Payment Systems: Very interesting take, Rich. I heard Mr. Carr present their story at the Gartner IT Security Summit last month, and I have to say,


Understanding and Choosing a Database Assessment Solution, Part 1: Introduction

Last week I provided some advice regarding database security to a friend’s company, which is starting a database security program. Based on the business requirements they provided, I made several recommendations on products and processes they should consider to secure their repositories. As some of my answers were not what they expected, I had to provide a lot of detailed analysis of why I answered as I did. At the end of the discussion I began asking questions about their research and how they had formed their opinions. It turns out they are a customer of some of the larger research firms, and they had been combing the research libraries on database security. Those white papers formed the basis for their database security program and identified the technologies they would consider. They let me look at the white paper that was most influential in forming their opinions, and I immediately saw why we had a disconnect in our viewpoints. The paper was written by two analysts I both know and respect, and while I have some nit-picks about the content, all in all it was informative and a fairly good overview document … with one glaring exception: there was no mention of vulnerability assessment! This is a serious omission, as assessment is one of the core technologies for database security. Since I had placed considerable focus on assessment for configuration and vulnerabilities in our discussion, and this was at odds with the customer’s understanding based upon the paper, we rehashed a lot of the issues of preventative vs. detective security, and why assessment is a lot more than just looking for missing database patches.

Don’t get me wrong: I am a major advocate and fan of several different database security tools, most notably database activity monitoring. DAM is a very powerful technology with a myriad of uses for security and compliance. My previous firm, as well as a couple of our competitors, was in such a hurry to offer that trend-setting, segment-altering technology that we under-funded assessment R&D for several years. But make no mistake: if you implement a database security program, assessment is a must-have component of that effort, and most likely your starting point for the entire process. When I was on the vendor side, a full 60% of the technical requirements customers provided us in RFP/RFI submissions were addressed through assessment technology! Forget DAM, encryption, obfuscation, access & authorization, label security, input validation, and the other technologies: the majority of requirements were fulfilled by decidedly non-sexy assessment technology.

And with good reason. Few people understand the internal complexities of database systems, so as long as the database ran trouble-free, database administrators enjoyed the luxury of implicit trust that the systems under their control were secure. Attackers have demonstrated how easy it is to exploit un-patched systems, gain access to accounts with default passwords, and leverage administrative components to steal data. Database security cannot be assumed; it must be verified. The problem is that security teams and internal auditors lack the technical skills to query database internals, which makes database assessment tools mandatory for automation of complex tasks, analysis of obscure settings, and separation of duties between audit and administrative roles.
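As a taste of what “verified, not assumed” means in practice, here is a minimal sketch of one of the oldest assessment checks in the book: well-known default accounts left open. It assumes an Oracle target, and the account list is illustrative rather than exhaustive:

```sql
-- A classic assessment check (Oracle syntax): flag well-known default
-- accounts that are still open. The list here is illustrative only;
-- real policies carry long lists of known account/password pairs.
SELECT username, account_status
FROM   dba_users
WHERE  username IN ('SCOTT', 'OUTLN', 'DBSNMP', 'CTXSYS')
AND    account_status = 'OPEN';
```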
Keep in mind that we are not talking about network or OS level inspection – we are talking about database assessment, which is decidedly different. Assessment technologies for database platforms have continued to evolve, are completely differentiated from OS and network level scans, and must be evaluated under a different set of requirements than those other solutions. And as relational database platforms have multiple communication gateways, a complete access control and authorization scheme, and potentially multiple databases and database schemas within a single installation, their sheer complexity requires more than a cursory inspection of patch levels and default passwords. I define database assessment as the following:

Database Assessment is the analysis of database configuration, patch status, and security settings; it is performed by examining the database system both internally and externally – in relation to known threats, industry best practices, and IT operations guidelines.

Because database assessment is continually under-covered in the media and analyst community, and because assessment is one of the core building blocks of the Securosis database security program, I figured this was a good time for the official kick-off of our blog series on Understanding and Selecting a Database Vulnerability Assessment Solution. In this series we will cover:

• Configuration data collection options
• Security & vulnerability analysis
• Operational best practices
• Policy management and remediation
• Security & compliance reporting
• Integration & advanced features

I will also cover some of the evolution of database platform technology, and how assessment technologies must adapt to meet new challenges. As always, if you feel we are off the mark or missing something, tell us. Reader comments and critiques are encouraged, and if they alter our research position, we credit commenters in any research papers we produce. We have comment moderation turned on to deter blog spambots, so your comment will not be immediately viewable, but Rich and I are pretty good about getting comments published during business hours.


Database Encryption, Part 7: Wrapping Up.

In our previous posts on database encryption, we presented three use cases as examples of how and why you’d use database encryption. These are not the examples you will typically find cited. In fact, in most discussions and posts on database encryption, you will find experts and analysts claiming this is a “must have” technology, a “regulatory requirement”, and critical to securing “data at rest”. Conceptually this is a great idea: when we are not using data, we would like to keep it secure. In practice, I call this “The Big Lie”: enterprise databases are not “data at rest”. Rather the opposite is true – databases contain information that is continuously in use. You don’t invest in a relational database just to have a place to store your data; there are far cheaper and easier ways to do that. You use relational database technology to facilitate transactional consistency, analytics, reports, and operations that continuously alter and reference data. Did you notice that “to protect data at rest” is not one of our “Three Laws of Data Encryption”?

Through the course of this blog series, we have made a significant departure from the common examples and themes cited for how and why to use database encryption technologies. In trying to sift through the cruft of what is needed and what benefits you can expect, we needed to use different terminology, a different selection process, and use cases that more closely mimic customer perceptions. We believe that database encryption offers real value, but only for a select number of narrowly focused business problems. Throwing around overly general terms like “regulatory requirement” and “data security” without context muddies the entire discussion, makes it hard to get a handle on the segment’s real value propositions, and makes it very difficult to differentiate between database encryption and other forms of security. Most of the use cases we hear about are not useful, but rather a waste of time and money. So what do we recommend you use?

Transparent Database Encryption: The problem of lost and stolen media is not going away any time soon, and as hardware is recycled and resold, we are even seeing new avenues of data leakage. Transparent database encryption is a simple and effective option for media protection, securing the contents of the database as it moves physically or virtually. It satisfies many regulatory requirements that call for encryption – for example, most QSAs find it acceptable for PCI compliance. The use case gets a little more complicated when you consider external OS, file level, and hard drive encryption products, which provide some or all of the same value. These options are perfectly adequate as long as you understand there will be some small differences in capabilities, deployment requirements, and cost. You will want to consider your roadmap for virtualized or cloud environments, where the underlying security controls provided by external sources are not guaranteed. You will also need to verify that data remains encrypted when backed up, as some products have access to the keys and decrypt data prior to or during the archive process. This is important both because the data will need to be re-encrypted, and because you lose the separation of duties between DBA and IT administrator – two of the inherent advantages of this form of encryption. Regardless, we are advocates of transparent database encryption.
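For concreteness, here is roughly what turning on transparent encryption looks like on SQL Server 2008; the database and certificate names are placeholders, and other platforms (Oracle TDE, for example) use different syntax:

```sql
-- Rough sketch of enabling transparent database encryption (SQL Server 2008).
-- Names are placeholders; key management and backing up the certificate
-- are deliberately glossed over here.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE SalesDB SET ENCRYPTION ON;
```

The application never changes: encryption and decryption happen below the query interface, which is exactly why this form addresses media protection and nothing more.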
User Level Encryption: We don’t recommend it for most scenarios – not unless you are designing and building an application from scratch, or using a form of user level encryption that can be implemented transparently. User level encryption generally requires rewriting significant chunks of your application and database logic. Expect to make structural changes to the database schema, rewrite database queries and stored procedures, and rewrite any middleware or application layer code that talks to the database. Retrofitting an existing application to get the greater degree of security offered by user level encryption is generally not worth the expense. It can provide better separation of duties and possibly multi-factor authentication (depending upon how you implement the code), but those benefits normally do not justify a complex and systemic overhaul of the application and database. Most organizations would be better off allocating that time and money to obfuscation, database activity monitoring, segmentation of DBA responsibilities within the database, and other security measures. If you are building your application and database from scratch, then we do recommend building in user level encryption from the initial implementation: this allows you to avoid the complicated and risky rewriting, and as a bonus you can quantify and control the performance penalties as you build the system.

Tokenization: While this isn’t encryption per se, it’s an interesting strategy that has recently seen greater adoption in financial transaction environments, especially for PCI compliance. Basically, rather than encrypting sensitive data, you avoid having it in the database in the first place: you replace the credit card or account number with a random token. That token links back to a master database that serves as the direct tie to the transaction processing system. You then lock down and encrypt the master database (if you can), while using only the token throughout the rest of your infrastructure. This is an excellent option for distributed application environments, which are extremely common in financial and retail services. It reduces your overall exposure by limiting the amount and scope of sensitive data held internally, while still supporting a dynamic transaction environment. (A minimal sketch of a token vault appears below.)

As with any security effort, having a clear understanding of the threats you need to address and the goals you need to meet is key to understanding and selecting a database encryption strategy.
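Here is the token vault idea sketched in SQL Server syntax; the table and column names are invented for illustration, and a production design would also encrypt the vault itself and restrict access to the token-issuing service:

```sql
-- Hypothetical token vault: the real card number lives only here, in a
-- locked-down master database. Everything else stores just the token.
CREATE TABLE token_vault (
    token      UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
    pan        VARBINARY(64)    NOT NULL,  -- card number, encrypted at rest
    created_at DATETIME         NOT NULL DEFAULT GETDATE()
);

-- Downstream systems reference the token, never the card number, e.g.:
--   INSERT INTO orders (order_id, token, amount) VALUES (1001, @token, 59.95);
```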


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.