Securosis

Research

Database Activity Analysis Survey

I ran into Slavik Markovich of Sentrigo, and David Maman of GreenSQL, on the vendor floor at the RSA Conference. I probably startled them with my negative demeanor – having just come from one vendor who seems to deliberately misunderstand preventative and detective controls, and another who thinks regular expression checks for content analysis are cutting edge. Still, we got to chat for a few minutes before rushing off to another product briefing. During that conversation it dawned on me that database activity monitoring vendors continue to refine both how they detect malicious database queries and how they deploy to block database activity. Not just these two vendors – others are improving as well. For me, the interesting aspect is the detection methods being used – particularly how incoming SQL statements are analyzed. For blocking to be viable, the detection algorithms have to be precise, with a low rate of false positives (where have you heard that before?). While there are conceptual similarities between database blocking and traditional firewalls or WAF, the side effects of blocking are more severe and difficult to diagnose. That means people are far less tolerant of screw-ups because they are more costly, but the need to detect suspicious activity remains strong. Let's take a look at some of the analytics being used today:

  • Some tools block specific statements. For example, there is no need to monitor a 'create view' command coming from the web server, but blocking administrative use and alerting when remote administrative commands come into the database is useful for detecting problems.
  • Some tools use metadata & attribute-based profiles. For example, I worked on a project once to protect student grades in a university database, and kill the connection if someone tried to alter the database contents between 6pm and 6am from an unapproved terminal. User, time of day, source application, affected data, location, and IP address are all attributes that can be checked to enforce authorized usage.
  • Some tools use parameter signatures. The classic example is "1=1", but there are many other common signatures for SQL injection, buffer overflow, and permission escalation attacks.
  • Some tools use lexical analysis. This is one of the more interesting approaches to come along in the last couple of years. By examining the use of the SQL language, and the various structural options available with individual statements, we can detect anomalies. For example, there are many different options for the create table command on Oracle, but certain combinations of delimiters or symbols can indicate an attempt to confuse the statement parser or inject code. In essence you define the subset of the query language you will allow, along with suspicious variations.
  • Some tools use behavior. For example, while any one query may have been appropriate, a series of specific queries indicates an attack. Or a specific database reference such as a user account lookup may be permissible, but attempting to select all customer accounts might not be. In some cases this means profiling typical user behavior, using statistical analysis to quantify unusual behavior, and blocking anything 'odd'.
  • Some tools use content signatures. For example, looking at the content of the variables or blobs being inserted into the database for PII, malware, or other types of anomalous content.

All these analytical options work really well for one or two particular checks, but stink for other comparisons.
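To make the combination of these checks a bit more concrete, here is a minimal, hypothetical sketch of a rule engine that applies a couple of parameter signatures, an attribute-based profile, and a time-of-day rule to an incoming statement before deciding whether to allow, alert, or block. The session attributes, signatures, and allowed user/application pairs are invented for illustration; real products use far more sophisticated parsing and statistical models.

```python
import re
from dataclasses import dataclass

# Hypothetical session attributes a monitor would extract from the connection.
@dataclass
class Session:
    user: str
    source_app: str
    client_ip: str
    hour_of_day: int  # 0-23

# A few classic parameter signatures (illustrative only; real engines use many more).
SIGNATURES = [
    re.compile(r"\b1\s*=\s*1\b"),            # tautology commonly used in SQL injection
    re.compile(r";\s*drop\s+table", re.I),   # piggybacked destructive statement
    re.compile(r"union\s+select", re.I),     # UNION-based data extraction
]

# Attribute-based profile: which user/application pairs may issue DDL.
DDL_ALLOWED = {("dba_admin", "sqlplus")}

def evaluate(statement: str, session: Session) -> str:
    """Return 'block', 'alert', or 'allow' for a single incoming statement."""
    # Parameter signature check against the raw statement text.
    for sig in SIGNATURES:
        if sig.search(statement):
            return "block"

    # Block DDL coming from anything other than an approved admin/application pair.
    if statement.strip().lower().startswith(("create", "alter", "drop")):
        if (session.user, session.source_app) not in DDL_ALLOWED:
            return "block"

    # Time-of-day rule, e.g. flag grade changes outside business hours.
    if "update grades" in statement.lower() and not (6 <= session.hour_of_day < 18):
        return "alert"

    return "allow"

# Example: a web application unexpectedly issuing DDL gets blocked.
s = Session(user="webapp", source_app="order_entry", client_ip="10.1.2.3", hour_of_day=14)
print(evaluate("CREATE VIEW v AS SELECT * FROM customers", s))  # -> block
```

In a real product each of these checks would be a separately tunable policy, and a 'block' result would drop the statement at the proxy or terminate the session rather than just return a string.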
No single method is best, so having multiple options allows choosing the best method to support each policy. Most of the monitoring solutions that employ blocking will be deployed similarly to a web application firewall: as a stand-alone proxy service in front of the database, as an embedded proxy service installed on the database platform, or as an out-of-band monitor that kills suspicious database sessions. And all of them can be deployed to monitor or block. While the number of companies that use database activity blocking is minuscule, I expect this to grow as people gradually gain confidence with the tools in monitoring mode. Some vendors already employ two detection models, but it's still pretty early, so I expect we will see multiple options provided in the same way that Data Loss Prevention (DLP) products do. What really surprises me is that the database vendors have not snapped up a couple of these smaller firms and incorporated their technologies directly into the databases. This would ease deployment, either as an option for the networking subsystem, or even as part of the SQL pre-processor. Given that a single database installation may support multiple internal and external web applications, it's very dangerous to rely on applications to defend against SQL injection, or to place too much faith in the appropriateness of administrative commands reaching the database engine. ACLs are particularly suspect in virtualized and cloud environments.


FireStarter: IP Breach Disclosure, No-Way, No-How

On Monday March 1st, the Experienced Security Professionals Program (ESPP) was held at the RSA conference, gathering 100+ practitioners to discuss and debate a few topics. The morning session was on "The Changing Face of Cyber-crime", and discussed the challenges law enforcement faces in prosecuting electronic crimes, as well as some of the damage companies face when attackers steal data. As could be expected, the issue of breach disclosure came up, and of course several corporate representatives pulled out the tired argument of "protecting their company" as their reason not to disclose breaches. The FBI and US Department of Justice representatives on the panel referenced several examples where public firms have gone so far as to file an injunction against the FBI and other federal entities to stop investigating breaches. Yes, you read that correctly. Companies sued to stop the FBI from investigating. And we wonder why cyber-attacks continue? It's hard enough to catch these folks when all relevant data is available, so if you have victims intentionally stopping investigations and burying the evidence needed for prosecution, that seems like a pretty good way to ensure criminals will avoid any penalties, and to encourage attackers to continue their profitable pursuits at shareholder expense. The path of least resistance continues to get easier.

Let's look past the murky grey area of breach disclosure regarding private information (PII) for a moment, and just focus on the theft of intellectual property. If anything, there is much less disclosure of IP theft, thanks to BS arguments like "It will hurt the stock price," "We have to protect the shareholders," or "Our responsibility is to preserve shareholder value." Those were the exact phrases I heard at the ESPP event, and they made my blood boil. All these statements are complete cop-outs, motivated by corporate officers' wish to avoid embarrassment and potential losses of their bonuses, as opposed to making sure shareholders have full and complete information on which to base investment decisions. How does this impact stock price? If IP has been stolen and is being used by competitors, it's reasonable to expect the company's performance in the market will deteriorate over time. R&D advances come at significant costs and risks, and if that value is compromised, the shareholders eventually lose. Maybe it's just me, but that seems like material information, and thus needs to be disclosed. In fact, failing to disclose this material information to shareholders, and to provide sufficient information to understand investment risks, runs counter to the fiscal responsibility corporate officers accept in exchange for their 7-figure paychecks. Many, like the SEC and members of Congress, argue that this is exactly the kind of information that is covered by the disclosure controls under Section 302 of Sarbanes-Oxley, which require companies to disclose risks to the business. That said, I understand public companies will not disclose breaches of IP. It's not going to happen. Despite my strong personal feelings that breach notification is essential to the overall integrity of global financial markets, companies will act in their own best interests over the short term.
Looking beyond the embarrassment factor, potential brand impact, and competitive disadvantages, the single question that foils my idealistic goal of full disclosure is: "How does the company benefit from disclosure?" That's right – it's not in the company's own interest to disclose, and unless they can realize some benefit greater than the estimated loss of IP (Google's Chinese PR stunt, anyone?), they will not disclose. Public companies need to act according to their own best interests. It's not noble – in fact it's entirely selfish – but it's a fact. Unless there are potential regulatory penalties for not disclosing, there is no upside: the company will already suffer the losses from the stolen IP, and disclosure probably only increases them. So we are at an impasse between what is right and what is realistic. So how do we fix this? More legislation? A parade down Wall Street for those admitting IP theft? Financial incentives? Help a brother out here – how can we get IP breach disclosure, and get it now?


Upcoming Webinar: Database Assessment

Tuesday, March 16th at 11am PST / 2pm EST, I will be presenting a webinar: "Understanding and Selecting a Database Assessment Solution" with Application Security, Inc. I'll cover the basic value proposition of database assessment, several use cases, deployment models, and key technologies that differentiate each platform; and then go through a basic product evaluation process. You can sign up for the webinar here. The applicability of database assessment is pretty broad, so I'll cover as much as I can in 30 minutes. If I gloss over any areas you are especially interested in, we will have 10 minutes for Q&A. You can send questions in ahead of time and I will try to address them within the slides, or you can submit a question in the GoToMeeting chat facility during the presentation.


Database Security Fundamentals: Patching

Patching is a critical security operation for databases, just as it is for any other application. The vast majority of security concerns and logic flaws within the database will be addressed by the database vendor. While the security and IT communities are made aware of critical security flaws in databases, and may even understand the exploits, the details of the fix are never made public except for open source databases. That means the vendor is your only option for fixes and workarounds. Most of you will not be monitoring CVE notifications or penetration testing new versions of the database as they are released. Even if you have the in-house expertise to do so, very few people have the time to conduct serious investigations. Database vendors have dedicated security teams to analyze attacks against the database, and small firms must leverage their expertise.

Project Quant for Patch Management was designed to break patch management down into essential, discrete functions, and assign cost-based metrics to each task, in order to provide quantitative measurements of the patch management process. To achieve that goal, we needed to define a patch management process on which to build the metrics model. For database patch management, you could choose to follow that process and feel confident that it addresses all relevant aspects of patching a database system. However, that process is far too comprehensive and involved for a series on database security fundamentals. As this series is designed more for small and mid-market practitioners, who generally lack the time and tools necessary for more thorough processes, we are going to avoid the depth of coverage major enterprises require. I will follow our basic Quant model, but use a subset of the process defined in the original Project Quant series. Further, I will not assume that you have any resources in place when you begin this effort – we will define a patching process from scratch.

  • Establish Test Environment: Testing a patch or major database revision prior to deployment is not optional. I know some of you roll patches out and then "see what happens", rolling back when problems are found. This is simply not a viable way to proceed in a production environment. It's better to patch less often than to deploy without functional sanity and regression tests. To start, set up a small test database environment, including a sample data set and test cases. This can be anything from leveraging quality assurance tests, to taking database snapshots and replaying network activity against the database to simulate real loads, to using a virtual copy of the database and running a few basic reports. Whatever you choose, make sure you have set aside a test environment, tests, and tools as needed to perform basic certification. You can even leverage development teams to help define and run the tests if you have those groups in house.
  • Acquire Patch: Odds are, in a small IT operation, you only need to worry about one or perhaps two types of databases. That means it is relatively easy to sign up with the database vendors to get alerts when patches will be available. Vendors like Oracle have predictable patch release cycles, which makes it much easier to plan ahead and allocate time and resources to patching. Review the description posted prior to patch availability. Once the patch is available, download and save a copy outside the test area so it is safely archived. Review the installation instructions so you understand the complexities of the process and can allocate the appropriate amount of time.
  • Test & Certify: A great thing about database patches is that their release notes describe which functional areas of the database are being altered, which helps focus testing. Install the patch, re-configure if necessary, and restart the database. Select the test scripts that cover patched database functions, and check with quality assurance groups to see if there are new tests available or automation scripts that go along with them (a small scripted example follows at the end of this post). Import a sample data set and run the tests. Review the results. If your company has a formal acceptance policy, share the results; otherwise move on to the next step. If you encounter a failure, determine whether the cause was the patch or the test environment, and retest if needed. Most small & mid-sized organizations respond to patch problems by filing a bug report with the vendor, and work stops. If the patch addresses a serious loss of functionality, you may be able to escalate the issue with the vendor. Otherwise you will probably wait for the next patch to address the issue.
  • Deploy & Certify: Following the same steps as the testing phase, install the patch, reconfigure, and restart the database as needed. Your ability to test production databases for functionality will be limited, so I recommend running one or two critical functions to ensure they are operational, or having your internal users exercise some database functions to provide a sanity check that everything is working.
  • Clean up & Document: Trust me on this – anything special you did for the installation of the patch will be forgotten the next time you need those details. Anything you suspect may be an issue in the future, will be. Save the installation downloads and documentation provided by the vendor so you can refer back to them, and keep a backup in case you need to fall back to this revision. You may even want to save a copy of your test results for future review, which is handy for backtracking future problems.

I know this cycle looks simple – it is intended to be. I am surprised both by how many people are unwilling to regularly patch database environments due to fear of possible side-effects, and by how disorganized patching efforts are when people do patch databases. A lot of that has to do with lack of process and established testing; most DBAs have crystal-clear memories of cleaning up after bad patch deployments, along with a determination to never repeat the experience.
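For the Test & Certify step, even a small shop can script the basics. Below is a minimal, hypothetical sketch that runs a handful of smoke-test queries against a patched test instance and archives pass/fail results. It uses sqlite3 from the standard library purely as a stand-in so the script runs anywhere; for a real test instance you would swap in your vendor's DB-API driver. The file name, table names, and queries are placeholders.

```python
import sqlite3   # stand-in driver; swap in your vendor's DB-API module for a real database
import csv
from datetime import datetime

# Placeholder smoke tests covering the functional areas named in the patch release notes.
# In practice these queries would come from your QA group or application test suite.
TESTS = [
    ("orders_table_readable", "SELECT count(*) FROM orders"),
    ("monthly_report_view_works", "SELECT * FROM monthly_report LIMIT 5"),
]

def run_tests(conn):
    """Run each query against the patched test instance and record pass/fail."""
    results = []
    for name, query in TESTS:
        try:
            conn.execute(query).fetchall()
            status = "pass"
        except Exception as exc:      # a failed query is a test failure, not a script crash
            status = f"fail: {exc}"
        results.append((datetime.now().isoformat(), name, status))
    return results

if __name__ == "__main__":
    conn = sqlite3.connect("patched_test_copy.db")   # point at the test copy, never production
    results = run_tests(conn)
    # Archive the results alongside the patch documentation for later reference.
    with open("patch_test_results.csv", "a", newline="") as f:
        csv.writer(f).writerows(results)
    for row in results:
        print(*row)
```

The point is not the tooling: it is that the same scripted checks run before and after the patch, so you have an objective record of what changed and something to file with the patch documentation.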


RSAC 2010 Guide: Content Security

Two business days and counting, so today and tomorrow we'll be wrapping up our Securosis Guide to the RSA Conference 2010. This morning let's hit what the industry calls "content security," which is really email and web filtering. Rich just loves the term content security, so let's see how many times we can say it.

Email/Web (Content) Security

In case you missed it, every email security vendor on the planet offers web content filtering within their portfolio of products and – for better or worse – the combination is now known as content security. No other security market has embraced the concept of 'the cloud' and SaaS offerings as enthusiastically as content security providers. In an effort to deal with increasing volumes of spam and malware without completely overhauling all your hardware, vendors offer outsourced content filtering as a cost-effective way to add both capacity and capability. Almost all vendors offer traditional on-premise software or appliances, fortified with cloud services (most refer to this as a hybrid model) for additional screening of content.

What We Expect to See

There are three areas of interest at the show relative to content security:

  • Fully Integrated Platforms: As you wander the show floor at Moscone Center, we expect every vendor to say that their web and email security platforms are completely integrated. What this usually means is that your reports are shared, but cloud and appliance consoles are separate, as is policy management. It's funny how the vendors have such a flexible definition of 'integrated'. If you are looking at migrating to a combined solution, you need to dig in to see what is really integrated and what simply shares the same dashboard, how your user experience will change (for the better), and how effective & clean their results are – end users get grumpy if their favorite web sites are classified as unsafe or they get spam in their inboxes.
  • Hybrid Cloud Services: We expect every vendor to offer a 'cloud' service in order to jump on the cloud bandwagon. This may be nothing more than an anti-spam or remote web filtering gateway deployed on shared infrastructure as a hosted service. The quality and diversity of cloud services varies greatly, as does the level of security provided by different cloud hosting companies. Once you get past the hype of certifications and technobabble, ask the vendors what types of audits and third party security certifications they will allow. Ask what sort of financial commitments they will make in the event that they fail to live up to their service level agreements, and what their SLAs with the cloud infrastructure providers look like. Those two questions usually halt the discussion, and will quickly distinguish hype mongers from folks who have really thought through cloud deployment.
  • DLP Lite: As we'll see in the Data Security section, DLP is hot again. Thus we expect to see every content security vendor offering 'DLP' or 'Data Loss Prevention' within their products, but in reality most only offer regular expression checks of network content. Yes, they'll be able to detect an account number or a social security number, but that is only a sliver of what DLP needs to be (a tiny example of what a regex-only check looks like follows this list). Content discovery and more advanced forms of content inspection (heuristic, lexical, cyclic hash, etc.) will be noticeably absent. Again, we recommend you challenge the content security vendor to dig into their discovery and detection capabilities and prove it's more than regular expressions.
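To illustrate the 'DLP Lite' point, here is a minimal, hypothetical sketch of what a regex-only check amounts to. It flags anything shaped like a US Social Security number or card number, which is exactly why it produces both false positives (any nine digits with dashes) and false negatives (the same data reworded, encoded, or embedded in a file). The gap between this and real content discovery is what you should press vendors on.

```python
import re

# The entirety of a "DLP Lite" policy: patterns that look like sensitive data.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_PATTERN = re.compile(r"\b(?:\d[ -]*?){13,16}\b")

def dlp_lite_scan(text: str) -> list[str]:
    """Return the names of patterns found in outbound content."""
    hits = []
    if SSN_PATTERN.search(text):
        hits.append("possible SSN")
    if CARD_PATTERN.search(text):
        hits.append("possible card number")
    return hits

print(dlp_lite_scan("My SSN is 123-45-6789"))      # caught: ['possible SSN']
print(dlp_lite_scan("Part number 123-45-6789"))    # false positive: same pattern, not an SSN
print(dlp_lite_scan("SSN one two three 45 6789"))  # false negative: reworded data slips through
```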
Keep in mind that a trade show demo is probably inadequate for you to sufficiently explore the advanced features, so your objective should be to identify 3-4 vendors for deep dives after the show. For those so inclined (or impatient), you can download the entire guide (PDF). Or check out the other posts in our RSAC Guide: Network Security, Data Security, Application Security, and Endpoint Security.


Friday Summary: February 26, 2010

Next week is the RSA conference. You might have noticed from some of our recent blog entries. And I am really looking forward to it. It's one of my favorite events, but I am especially anxious for good food. Yes, I want to see a bunch of friends, and yes, I have a lot of vendors I am anxious to catch up with to chat 'bout some of their products. But honestly, all that takes a back seat to food. I like living in Arizona, but the food here sucks. In San Francisco, even the small hole-in-the-wall lunch places are excellent. In Phoenix, if you want a decent steak or good Mexican food, you're covered. If you want Thai, Greek, Japanese, or quality Chinese (and by that I mean a restaurant with less than two health code violations), you are out of luck. San Francisco? Every other block you find great places. And Italian. Really good Italian. sigh … What was I talking about? Oh yeah, food! Have you ever noticed that most security guys are into martial arts and food? Seriously. It's true. Ask the people you know and you may be surprised at the pervasiveness of this phenomenon. Combined with the fact that there are a lot of 'foodies' in the crowd of people I want to see, I am going to look like I want to hang out, but still find quality pad thai. And I know there are going to be a dozen or so people I want to see who have the same priorities, so they won't be offended by my ulterior motives. I plan to sneak off a couple of days and get a good lunch, and at least one evening for a good dinner, schedule be damned! Maybe some of the noodle houses on the way up to Union Square, or the hole-in-the-wall at the Embarcadero Center that has surprisingly good sushi. Then swing by Peet's on the way back for coffee that could fuel a nuclear reactor. Anyway, it's a short Friday summary this week because I've got to pack and get my presentations ready. Hope to see you all there, and please shoot me an email if you are in town and want to catch up! Just say Venti-double-shot-iced-with-cream-n-splenda-shaken, and I'm there. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich interviewed by MacVoicesTV at Macworld on Security Threats and Hype.
  • Adrian's Webinar with Qualys on Database Vulnerability Assessment (reg req).
  • Team Securosis' RSA 2010 Conference Preview. Same video on blip.tv.
  • Adrian quoted by Sentrigo.
  • Rich and Adrian on Deep Content Analysis Techniques (video).
  • Adrian's Dark Reading posts on The Cost of Database Security.
  • Adrian's Webinar with Netezza on Understanding and Selecting a Database Activity Monitoring Solution (reg req).

Favorite Securosis Posts

  • Rich: Answering Dan Geer: It's Time to Reexamine Priorities and Revisit Paradigms. One of the reasons Adrian and I started working together is that back when I was at Gartner and he was at IPLocks, we found ourselves kindred spirits on data security long before it was chic. Geer hits it out of the park with his call for focus on the data, but Adrian does a better job of providing context and priorities for focus. Check out our Data Security Research Library if you want to read more on information-centric/data security.
  • David Mortman: Answering Dan Geer: It's Time to Reexamine Priorities and Revisit Paradigms.
  • Mike Rothman: Adrian's "Answering Dan Geer". No one argues the importance of information protection, but the devil is in the details.
  • Adrian: Rich's Firestarter IT-GRC: The Paris Hilton of Unicorns. Rich beat me to the punch on this one!
Other Securosis Posts

  • RSAC 2010 Guide: Security Management
  • Retro Buffoonery
  • RSAC 2010 Guide: Virtualization and Cloud Security
  • RSAC 2010 Guide: Content Security
  • Webcast on Thursday: Pragmatic Database Compliance and Security
  • RSAC 2010 Guide: Endpoint Security
  • Incite 2/23/10: Flexibility
  • RSAC 2010 Guide: Application Security
  • RSAC 2010 Guide: Data Security
  • RSVP for the Securosis and Threatpost Disaster Recovery Breakfast
  • RSAC 2010 Guide: Network Security
  • Introducing SecurosisTV: RSAC Preview
  • RSAC 2010 Guide: Top Three Themes
  • Upcoming Webinar: Database Activity Monitoring

Favorite Outside Posts

  • Rich: Uncommon Sense Makes Executives into Common Criminals. Great example of the social/government conflicts generated as new technology exceeds the personal frame of reference of those creating and enforcing laws.
  • David Mortman: Identifying Opportunities for Improvement in Security Architecture.
  • Mike Rothman: What if Bill Gates Never Wrote the Trusted Computing Memo? Normally I don't waste time playing "what if?" games, but Dennis makes this one fun.
  • Pepper: The Spy at Harriton High. So a school was spying on students… and making webcasts about it… and lying to the kids & families about it… and threatening students who futzed with the laptops. CRAP!
  • Adrian: A nice overview post on Web Security Trust Models on the Freedom to Tinker blog.

Project Quant Posts

  • Project Quant: Database Security – Configuration Management
  • Project Quant: Database Security – Masking
  • Project Quant: Database Security – WAF
  • Project Quant: Database Security – Encryption
  • Project Quant – Project Comments
  • Project Quant: Database Security – Protect through Monitoring
  • Project Quant: Database Security – Audit

Research Reports and Presentations

  • Report: Database Assessment

Top News and Posts

  • Conflict of Interest: When Auditors Become Consultants. I keep hearing more and more about this, and from my perspective there is a lot left unspoken about Trustwave's business models that will come under increasing scrutiny this year.
  • Rsnake on Banks and the UUC.
  • Google Execs Convicted in Italy.
  • Microsoft Takedown of Waledec Botnet.
  • Symantec State of Security Report. Glad the New School guys saw this as I would have missed it. It's an interesting executive overview.
  • Hacker Arrested in Billboard Porn Stunt. See? Those Russian hackers don't just steal our credit card numbers – too bad the article doesn't have pictures…
  • Widespread Data Breaches Uncovered by FTC Probe. Watch that P2P file sharing folks!
  • Criminals Hide Payment-Card Skimmers Inside Gas Station Pumps
  • 'Sophisticated' Hack Hit Intel in January

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Alan Shimel, in response to RSAC 2010 Guide: Network Security. And in case you think


Answering Dan Geer: It’s Time to Reexamine Priorities and Revisit Paradigms

Dan Geer wrote an article for SC Magazine on The enterprise information protection paradigm, discussing the fundamental disconnect between the derived value of data and the investment to protect information. He asks the important question: If we reap ever increasing returns on information, where is the investment to protect the data? Dan has an eloquent take on a long-standing viewpoint in the security community that Enterprise Information Protection (EIP) is a custodial responsibility of corporations, as it is core to generation of revenue and thus the company's value. Dan's point that we don't pay enough attention to data security (or spend enough money and time on it) is inarguable – we lose a lot of data, and it costs. His argument that we should concentrate on (unification of) existing technologies (such as encryption, audit, NAC, and DLP), however, is flawed – we already have lots of this technology, so more of the same is not the answer.

Part of our problem is that in the real world, inherent security is only part of the answer. We also have external support, such as police who arrest bank robbers – it's not entirely up to the bank to stop bank robbers. In the computer security world – for various reasons – legal enforcement is highly problematic and much less aggressive than for physical crimes like robbery. I don't have a problem with Dan's reasoning on this issue. His argument for the motivation to secure information is sound. I do, however, take issue with a couple of the examples he uses to bridge his reasoning from one point to the next.

First, Dan states, "We have spent centuries learning about securing the physical world, plus a few years learning about securing the digital world. What we know to be common to both is this: That which cannot be tolerated must be prevented." He puts that in very absolute terms, and I do not believe it is true in either the physical or electronic realms. For example, our society absolutely does not tolerate bank robberies. However, preventative measures are minuscule. The banks are open for business and pretty much anyone can walk in the door. Rather than prevent a robbery, we collect information from witnesses, security cameras, and other forensic sources – to find, catch, and punish bank robbers. We hope that the threat of the penalty will deter most potential robbers, and sound police work will allow us to catch up with the remainder who are daring enough to commit these crimes. While criminals are very good at extracting real value from virtual objects, law enforcement has done a crappy job at investigating, punishing, and (indirectly) deterring crimes in and around data theft. Those crucial factors – investigation and punishment – are largely absent for electronic crimes compared to physical ones. It's not that we can't – it's that we don't.

This is not to undermine Dan's basic point – that enterprises which derive value from data are not protecting themselves sufficiently, and are contributorily negligent. But statements like "The EIP mechanism – an unblinking eye focused on information – has to live where the data lives" and "EIP unifies data leakage prevention (DLP), network access control (NAC), encryption policy and enforcement, audit and forensics" argue that network and infrastructure security are the answer. As Gunnar Peterson has so astutely pointed out many times, while the majority of IT spending is in data management applications, our security spending is predominantly in and around the network.
That means the investments made today are to secure data at rest and data in motion, rather than data in use. Talking about EIP as an embodiment of NAC & DLP and encryption policy reinforces the same suspect security investment choices we have been making for some time. We know how to effectively secure data "at that point where data-at-rest becomes data-in-motion". The problem is we suck "… at the point of use where data is truly put at risk …" – and that's not in the network or infrastructure, but in applications. A basic problem with data security is that we do not punish data crimes at anywhere near the same rate as physical crimes. There is no (or almost no) deterrence, because examples of catching and punishing criminals are missing. Further, investment in data security is typically misguided. I understand how this happens – protecting data in use is much harder than encrypting TCP/IP or disk drives – but where we invest is a critical part of the issue. I don't want this to come across as disagreement with Dan's underlying premise, but I do want to stress that we need to make more than one evolutionary shift.


Upcoming Webinar: Database Activity Monitoring

February 23rd (this Tuesday) at 12:00pm EST, I will be presenting "Understanding and Selecting a Database Activity Monitoring Solution" in a Webinar with Netezza. I'll cover the basic value propositions of these platforms, go over some of the key functional components to understand prior to an evaluation, and discuss some key deployment questions to address during a proof of concept. You can sign up for the Webinar here. We will take 10-15 minutes for Q&A, so you can send questions in ahead of time and I will try to address them within the slides, or you can submit a question in the WebEx chat facility during the presentation.


New Release: Understanding and Selecting a Database Assessment Solution

The Securosis team is proud to announce the availability of our latest white paper: Understanding and Selecting a Database Assessment Solution. We're very excited to get this one published – not just because we have been working on it for six months, but also because we feel that with a couple of new vendors and a significant evolution in product function, the entire space needed a fresh examination. This is not the same old vulnerability assessment market of 2004 that revolved around fledgling DBA productivity tools! There are big changes in the products, but more importantly there are bigger changes in the buying requirements and users who have a vested interest in the scan results. Our main goal was to bridge the gap between technical and non-technical stakeholders. We worked hard to provide enough technical information for customers to differentiate between products, while giving non-DBA stakeholders – including audit, compliance, security, and operations groups – an understanding of what to look for in any RFI/proof-of-concept. We want to especially thank our sponsors, Application Security Inc. (AppSec), Imperva, and Qualys. Without them, we couldn't produce free research like this. As with all our papers, the content was developed independently and completely out in the open using our Totally Transparent Research process. We also want to thank our readers for helping review all our public research, and Chris Pepper for editing the paper. This is version 1.0 of the document, and we will continue to update it (and acknowledge new contributions) over time, so keep the comments coming if you think we've missed anything, or gotten something wrong.


Database Security Fundamentals: Database Access Methods

It's tough to talk about securing database access methods in a series designed to cover database security basics, because the access attacks are not basic. They tend to exploit either communications media or external functions – taking advantage of subtleties or logic flaws, capitalizing on trust relationships, or just being very unconventional and thus hard to anticipate. Still, some of the attacks are right through an open front door, like forgetting to set a TNS Listener password on Oracle. I will cover the basics here, as well as a few more involved things which can be addressed with a few hours and minimal third party tools. Relational platforms are chock full of features and functions, and many have been in use for so long that people simply forget to consider their security ramifications. Or worse, some feature came with the database, and an administrator (who did not fully understand it) was afraid that disabling it would cause side effects in applications. In this post I will cover the communications media and external services provided by the database, and their proper setup to thwart common exploits. Let's start with the basics:

  • Network Connections: Databases can support multiple network connections over multiple ports. I have two recommendations here. First, to reduce complexity and avoid possible inconsistency with network connection settings, I advise keeping the number of listeners to a minimum: one or two. Second, as many automated database attacks go directly to default network ports, I recommend moving listeners to non-standard port numbers (a quick port-check sketch follows this list). This will annoy application developers and complicate their setup somewhat, but more importantly it will both help stop automated attacks and highlight connection attempts to the default ports, which then indicate either misconfiguration or hardwired attacks.
  • Network Facilities: Some databases use add-on modules to support network connections, and like the database itself they are not secure out of the box. Worse, many vulnerability assessment tools omit the network facility from the scan. Verify that the network facility itself is set up properly, that administrative access requires a password, and that the password is not stored in clear text on the file system.
  • Transport Protocols: Databases support multiple transport protocols. While features such as named pipes are still supported, they are open to spoofing and hijacking. I recommend that you pick a single reliable protocol such as TCP/IP, and disable the rest to prevent insecure connections.
  • Private Communication: Use SSL. If the database contains sensitive data, use SSL. This is especially true for databases in remote or virtual environments. The path between the user or application and the database is not guaranteed to be safe, so use SSL to ensure privacy. If you have never set up SSL before, get some help – otherwise connecting applications can choose to ignore SSL.
  • External Procedure Lockdown: All database platforms have external procedures that are very handy for performing database administration. They enable DBAs to run OS commands, or to run database functions from the OS. These procedures are also a favorite of attackers – once they have hacked either an OS or a database, these procedures (if enabled) make it trivial to leverage that access into a compromise of the other half. Locking them down is not optional. If you are part of a small IT organization and responsible for both IT and database administration, it will make your day-to-day job just a little harder.
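As a quick way to check the Network Connections advice above, here is a minimal sketch that probes a database host for listeners on well-known default ports. The host name is a placeholder; the port-to-product mapping reflects common defaults (Oracle 1521, SQL Server 1433, MySQL 3306, PostgreSQL 5432, DB2 50000). Anything found open on a default port is worth either moving or documenting, and this should only be run against servers you own.

```python
import socket

# Common default listener ports for popular database platforms.
DEFAULT_PORTS = {
    1521: "Oracle TNS Listener",
    1433: "Microsoft SQL Server",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "IBM DB2",
}

def check_default_ports(host: str, timeout: float = 1.0) -> None:
    """Report which well-known database ports accept connections on a host."""
    for port, product in sorted(DEFAULT_PORTS.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"{host}:{port} OPEN  - looks like {product} on its default port")
        except OSError:
            print(f"{host}:{port} closed or filtered")

if __name__ == "__main__":
    # Placeholder host; point this at your own database server only.
    check_default_ports("db-test.example.internal")
```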
Checking these connection methods can be completed in under an hour, and enables you to close off the most commonly used avenues for attack and privilege escalation. A little more advanced:

  • Basic Connection Checks: Many companies, as part of their security policy, do not allow ad hoc connections to production databases. Handy administrative tools like Quest's Toad are not allowed because they do not enforce change control processes. If you are worried about this issue, you can write a login trigger that detects the application, user, and source location of inbound connections – and then terminates unauthorized sessions (a rough sketch of the check logic follows below).
  • Trusted Connections & Service Accounts: All database platforms offer some form of trusted connections. The intention is to allow the calling application to verify user credentials, and then pass the credentials or verification token through the service account to the database. The problem is that if the calling application or server has been compromised, all the permissions granted to the calling application – and possibly all the permissions assigned to any user of the connection – are available to an attacker. You should review these trust relationships and remove them for high-risk applications.
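The login trigger itself is written in your database's procedural language (PL/SQL, T-SQL, etc.), but the decision it makes is simple allowlisting. Here is a minimal, hypothetical sketch of that logic in Python: the session attributes and the approved (user, program, host) combinations are invented for illustration, and in a real trigger you would read them from the session context and a maintained reference table.

```python
from dataclasses import dataclass

# Approved combinations of (database user, client program, source host).
# In a real deployment this would live in a reference table the trigger queries.
APPROVED_CONNECTIONS = {
    ("app_svc", "order_entry.exe", "app01.example.internal"),
    ("dba_admin", "sqlplus", "mgmt01.example.internal"),
}

@dataclass
class InboundSession:
    user: str
    program: str
    host: str

def should_terminate(session: InboundSession) -> bool:
    """Return True if the session is not on the allowlist and should be killed."""
    return (session.user, session.program, session.host) not in APPROVED_CONNECTIONS

# Example: an ad hoc tool connecting as the application service account gets terminated.
s = InboundSession(user="app_svc", program="toad.exe", host="laptop-42.example.internal")
print(should_terminate(s))  # -> True
```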


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.