Myths Surrounding Databases in Virtual Environments

Every now and again I run into an article that totally baffles me. It’s as if the author had a bunch of somewhat related quotes sitting around, and then stitched a Frankenstein article together. In this case the article was in the October 5th edition of eWeek, and the topic was “Databases: The next big virtualization thing”. The intention seems to be sketching out some hazy future projections about virtualized databases, and what wonderful things virtualization can do for you. But if you closely examine the assertions, not only are they based on bad assumptions, they are flat-out misleading. I am not sure there is a single point in the article I wholly agree with. Rather than wallow in this mess, I will offer you what I consider to be 7 myths surrounding databases in virtual environments:

Myth #1 – Virtualization makes database administration easier. No. Any time you place a database into an environment, virtual or not, the database needs to be tuned to operate efficiently within that environment. Virtualization abstracts the resources underneath the database; it does not relieve you of the administrative tasks of tuning and provisioning. While it is theoretically possible to reduce administrative tasks by standardizing an environment, history has shown we need to optimize database configuration to accommodate the organic changes that occur over time.

Myth #2 – Virtualization improves database performance. Possibly, but not always. Improvements to database performance are more likely to result from tuning SQL and database structures. Generally speaking, improvements in database logic offer an order of magnitude greater improvement than any ‘external’ changes (see the sketch at the end of this post). Virtualization does provide an easier way to allocate more resources to a database, and is highly beneficial when a database is memory or CPU constrained. I/O constrained databases are as likely to suffer from distributed storage latency as to realize performance gains, and more likely require some redesign to take advantage of virtual resources. Sure, you can throw twice as many resources at a database, but that does not mean it will automatically perform better!

Myth #3 – Virtualization lets you consolidate databases. Not really. Virtualization offers the ability to use a single central database installation, but you still normally use multiple database instances to support multiple applications. Effective consolidation of databases to take advantage of virtual environments requires some database re-engineering, and does not magically (automatically) occur in a virtual environment.

Myth #4 – Virtualization will reduce your database licensing costs. This is not typically the case. Check with your vendor on this, because adding a virtual CPU is likely to cost you additional fees just as if you added a real CPU. Per-database pricing may mean higher licensing costs, not lower. It will depend upon your vendor’s pricing model, so do not take it for granted.

Myth #5 – Virtualization provides better database security. I have never understood this claim. How exactly could virtualization make a database more secure? Through obscurity? Some giant VMotion shell game that hides the location of the data? Access to your data is still gated by access controls and governed by permissions. Security is largely dependent upon solid configuration of the database and current patches being applied, which may or may not be easier depending upon how you have your virtual environment set up. Virtualization provides no inherent security advantage, and it opens up additional vulnerabilities. I have never been a big fan of the concept of ‘threat surface’, but if data gets copied to multiple locations there are simply more chances to gain access to the raw data files, which is why we recommend transparent database encryption for databases in virtual environments.

Myth #6 – Virtualization enables all clustered databases to be active simultaneously. Nonsense! This is possible today without virtualization. SQL Server is a good example. It offers two basic models for database clustering: an active-passive setup designed for failover, and an active-active mode for distributed processing. Both require the data sets to be synchronized, often via shared disks. The former requires no special database design work – only the appropriate configuration. In the latter case you really need a data allocation strategy to minimize performance and data contention issues. Virtualization does provide the means to make physically separate disks appear as one, but it does not make synchronization issues go away.

Myth #7 – Virtualization helps abstract the database from applications. No, it doesn’t. Abstraction technologies like Hibernate can mask the underlying database usage from an application. Generalization of the data types stored within a database, or even use of XML, allows data to be moved between heterogeneous databases and applications. There is nothing inherent to virtualization technology that abstracts database usage. The benefit virtualization provides, in the case of disaster recovery, is the ability to easily spawn a new copy of the database should the existing copy no longer be available.
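To ground Myth #2’s point about database logic versus extra resources, here is a minimal sketch – SQLite and an invented orders table, used purely for illustration – showing how a single index change can dwarf anything you would get from doubling the hardware:

import random
import sqlite3
import time

# Build a throwaway in-memory table with a few hundred thousand rows.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(random.randint(1, 50000), random.random() * 100) for _ in range(300000)],
)
conn.commit()

def timed(query, params, repeats=100):
    start = time.time()
    for _ in range(repeats):
        cur.execute(query, params).fetchall()
    return time.time() - start

query = "SELECT total FROM orders WHERE customer_id = ?"
before = timed(query, (42,))   # full table scan on every execution
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = timed(query, (42,))    # index lookup instead of a scan
print(f"without index: {before:.2f}s   with index: {after:.3f}s")

On commodity hardware the indexed version typically comes in one to two orders of magnitude faster – the kind of gain no amount of additional vCPUs or memory will deliver for a poorly structured query.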


Amazon RDS Announced

Amazon announced a Relational Database Service today:

Amazon RDS gives you access to the full capabilities of a familiar MySQL database. This means the code, applications, and tools you already use today with your existing MySQL databases work seamlessly with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period.

It was natural to choose the most popular open source database, MySQL 5.1, at least in the short term. With this introduction they have effectively filled out their cloud offering for database infrastructure services. Together with the existing capabilities of Amazon’s SimpleDB and generic Amazon Machine Images that provide logical instances of any of the major database platforms, you have just about every option you could want as an application developer. There is a list of pricing options based upon tiers of memory and computational capacity for your web service. Storage is equally flexible, with the ability to select from 5GB to 1TB of capacity. You get snapshotting, rollbacks, resource monitoring, automated backup, and pretty much everything else needed for basic database setup and maintenance.

What Amazon is doing is very cool, but this is a security blog, so I need to make a few comments on security and not just act like an RDS fanboi – which I sometimes hate, because I feel like the guy who’s yelling “Hey kid, stop running around with that sharp stick! You’ll poke your eye out!” With the AMI variants, Amazon takes care of patching and configuration, while the user takes care of access control and identity management. And while the instances most likely have security patches applied on a consistent basis, there is a lot more to security than patching and IDM. I have no evidence that these database instances are insecure, but no one gets the benefit of the doubt in this case. For most relational database platforms I look at about 125 different database settings in an assessment sweep; most of these checks are to ensure the factory defaults have been changed. There is no reason to believe that Amazon is doing the same, so protection against SQL injection falls on the shoulders of client developers.

With the MySQL databases for RDS, the situation appears to be a little different, as the user has some configuration options. The RDS Developer Guide shows that we can alter port settings and enforce SSL connections, but the API is limited and far more focused on programming than administration. The security guides don’t offer any details on usage of service accounts, default passwords, stored procedure access, networking agents, or other features that are not necessarily masked by the Amazon APIs. Many important security topics are simply not addressed. And odds are, if someone is going after your data, they are going to use SQL injection, default account access, or external stored procedures – all of which are your responsibility to secure. I would have a tough time putting any sensitive data out there until you can verify the security setup. Use caution or you might… oh, never mind.
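As a starting point for that verification, here is a minimal sketch of the kind of check a client developer could run – the endpoint, credentials, and CA bundle path are placeholders, and pymysql is just one of several MySQL drivers that accept an SSL CA parameter:

import pymysql

conn = pymysql.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    user="admin",
    password="change-me",
    database="mysql",
    ssl={"ca": "/path/to/rds-ca-bundle.pem"},  # request a TLS connection validated against this CA
)
try:
    with conn.cursor() as cur:
        # Confirm the server actually supports SSL.
        cur.execute("SHOW VARIABLES LIKE 'have_ssl'")
        print(cur.fetchone())

        # Flag anonymous or passwordless accounts -- a classic factory-default risk.
        # (The password column name varies by MySQL version; this matches 5.x-era schemas.)
        cur.execute("SELECT user, host FROM mysql.user WHERE user = '' OR password = ''")
        for account in cur.fetchall():
            print("weak account:", account)
finally:
    conn.close()

It does not replace a real assessment sweep, but it illustrates that the burden of checking defaults sits with the customer, not with Amazon.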


Friday Summary – October 23, 2009

The First 90 Days. When you take a new position, what is it you will do in the first 90 days? What do you want to learn? What do you wish to accomplish? Is it enough to plan a course of action, or do you immediately need to fix something? “What is your plan for your first 90 days?” is a common interview question for executives. The candidate’s answer tells the prospective employer a few things about the person’s grasp of the challenges ahead, how they typically operate, the efficiency of their approach, and how well their expectations align. Most candidates are under no illusions about what taking a new role entails. In the best case they are filling a gap in a growing company, but more often than not they are there to fix something broken. The question cements in the mind of the candidate what is expected of them stepping in the door. And more than any other point during your tenure with a company, your first 90 days sets your boss’ and coworkers’ impressions of your effectiveness.

Never in my career has fixing security been in my top 3 challenges for the first 90 days. It has always been quality of service, failed processes, a broken product, or a dysfunctional development team. I have never been a CISO or security officer, so in the context of security I don’t really know how I would answer the question “What would my first 90 days look like?” If you are a security practitioner, how would you answer it? Or perhaps it is more interesting to ask non-security professionals what their 90-day plan for security is. What could you hope to accomplish? Do you think you could come up with a security program in that amount of time? I am interested in your thoughts on this subject. Is research on the establishment of a security program interesting to you? Let us know what you think. On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian’s presentation on Creating a Data Classification Procedure for BusinessWeek.
  • Rich’s TidBITS article on his trip to the Microsoft Store in Scottsdale, Arizona.
  • Adrian’s Dark Reading post on Database Activity Monitoring.
  • Rich presented Pragmatic Data Security and Pragmatic Database Security at TechTarget’s Information Security Decisions show in Chicago.

Favorite Securosis Posts
  • Rich: Mort’s post on IDM.
  • Adrian: Splunk and Unstructured Data.
  • David Meier: The First Phishing Email I Almost Fell For.
  • David Mortman: Hacking Envelopes.

Other Securosis Posts
  • Rapid7 Acquires Metasploit

Favorite Outside Posts
  • Rich: Amrit’s post on Gartner, and working for Gartner. For the record, analysts are very well insulated from financial considerations that could affect research. That said, people who pay to speak to analysts get more time with them, and that can subtly affect opinions.
  • Adrian: My favorite post was also Amrit’s, both for his honest quadrant diagram and for the commentary. To be honest, I felt for ZL, as Gartner has the power to cut a company’s sales in half, but I agree with their assessments more than I disagree. My favorite tweet was from @securityincite: “@rmogull Would someone please give Rich some work to do? He’s loitering in shopping malls now. Next he’ll be upgrading to Windows Mobile”.
  • Mortman: @RSnakes on a Plane. (Mort sent this in Monday, he was so convinced.)
  • Meier: Two out of five at risk from Wi-Fi Hijacking – Interesting that Talk Talk (the ISP in the UK) is taking this stance to protect end users from Lord Mandelson’s heavy-handed plans to tackle Internet piracy.
  • Chris Pepper: Time Warner Cable Exposes 65,000 Customer Routers to Remote Hacks.

Top News and Posts
  • ChoicePoint breach. Yeah, those guys. Yes, it happened again. Yeah, they claim it’s not their fault. Shostack is a little more forceful with his analysis and received a reply from (I assume) a company rep.
  • Love Jack’s post calling out OCABR in Holding a grudge.
  • Russell Thomas on How to Value Digital Assets. Long post, but reasonably practical methodology.
  • Metasploit sale to Rapid7 from a developer perspective. Do the Evolution.
  • Public Google Voice mails are searchable. Duh. But Google changed the policy to stop this anyway.
  • Joanna’s Evil Maid encryption attack via USB stick.
  • Another analysis of the Metasploit acquisition. I still think this will be good for Core Security.

Blog Comment of the Week
This week’s best comment comes from Erik Swan (a Splunk employee – Adrian) in response to Splunk and Unstructured Data:

Thanks for mentioning Splunk, and your post brings up interesting points. We recommend that people dump “everything” into splunk and just keep it. I’d go further and say that i’d bet that far less than 1% of that data is ever looked-at/reported on/etc. As you point out, its likely harder and more risky to remove data than keep it. This clearly changes when you talk about multiple T per day ( average large system these days ), where even for a wealthy company, the IO required is very expensive and not sure the data has value/risk. My gut is that data generation growth is clearly outpacing the size/price curve per GB, and will likely do so until massively more scaleable and cost effective media is available. For the time being, keeping everything is likely the best starting point. At the same time, we have seen models that look a lot like email spam filtering, where “uninteresting” data is routed to different instances that have shorter retention policies. Summarization is used to capture and compress the data hopefully with no information loss. Not a great practice for compliance, but for trouble shooting and analytics can work. Longer term its an interesting area for research and something that due to the size of data we deal with needs to be solved.


Rapid7 Acquires Metasploit

Rapid7 acquires Metasploit, the open source penetration testing platform. Wow. All I can say is ‘Wow’. I had been hearing rumors for weeks that Rapid7 was going to make an acquisition, but this was a surprise to both Rich and myself. I am still coming to terms with what it means, and I have no clue what the financial terms look like, but this is almost certainly a cash+stock deal.

On the surface, it is a very smart move for Rapid7. Metasploit is considerably better known than Rapid7. Metasploit is a fixture in the security research world, and there are far more people using Metasploit than Rapid7 has customers. If nothing else, this gets Rapid7 products into the hands of the people who are shaping web application security, and defining how penetration testing and vulnerability management will be conducted. In a quickly evolving market like pen testing, access to that community is invaluable for a commercial vendor. Plus they get HD Moore on staff, which is a huge benefit. Metasploit is a well-architected framework that provides for easy extensibility and can be customized in innumerable ways. If you want to test anything from smart phones to databases, this platform will do it, from targeted exploits to fuzzing. Sure, there is work required on your part, and accessibility to people other than security researchers is low compared to commercial products like Core Security’s Impact, but it’s a solid platform, and integration of the two should not be difficult. It’s more a question of how best to allow Metasploit to continue its open source evolution while leveraging scans into meaningful vulnerability chaining and risk scoring.

Neither is exactly an ‘enterprise ready’ product. That’s not a slam, as NeXpose performs its primary function as well as most, but Rapid7’s platform is just now breaking ground in larger companies. They have a long way to go in UI, ease of use, pragmatic analysis, integration of risk scoring, SaaS, exploit chaining, and back-end integration. That said, I am not sure they need an enterprise-ready product, at least in the short term. It makes more sense to continue their mid-market penetration while they complete the integration. Breadth of function, which is what they now have, has proven to be a major factor in winning deals over the last couple of years. They can worry about the advanced non-technical stuff later.

Identity in the market is an issue for Rapid7. They have waffled between general assessment, pen testing, and vulnerability management, without a clear identity or differentiator when going toe-to-toe with Qualys, nCircle, Tenable, Secunia, and the like. Sure, ‘compliance scoring’ is a useful marketing gimmick, but Metasploit gives them a unique identity and differentiation. Rather than scan-and-patch for known vulnerabilities, focused mostly inside the network, they will now be able to go far deeper into externally facing custom applications. Taking a risk score across multiple applications and/or platforms is a better approach. If the two platforms are properly integrated, they will be useful to IT, security, and software development. I am sure Rich will chime in with his own take later in the week. Wow.


Splunk and Unstructured Data

“What the heck is up with Splunk?” It’s a question I have been getting a lot lately, from end users and SIEM vendors. Larry Walsh posted a nice article on how Splunk Disrupts Security Log Auditing. His post prodded me into getting off my butt and blogging about this question. I wanted to follow up on Splunk after I wrote the post on Amazon’s SimpleDB, as it relates to what I am calling the blob-ification of data: basically creating so much data that we cannot possibly keep it in a structured environment. Mike Rothman more accurately called it “… the further decomposition of application architecture”. In this case we collect some type of data from some type of device, put it onto some type of storage, and then we use a Google-esque search tool to find what we are looking for. And the beauty of Google is that it does not care if it is a web page or a voice mail transcript – it will find what you are looking for if you give it reasonable search criteria. In essence, that is the value Splunk provides: a tool to find information in a sea of data.

It is easy to locate information within a structured repository with known attributes and data types, where we know where certain pieces of information are stored. With unstructured data we may not know what we have or where it is located. For some time normalization techniques were used to introduce structure and reduce storage requirements, but that was a short-lived, low-performance approach. Adding attributes to raw data and simply linking back to those attributes is far more efficient. Enter Splunk: throw the data into flat files and index those files. Techniques of tokenization, tagging, and indexing help categorize data, with the ultimate goal of correlating events and reporting on unstructured data of differing types. Splunk is not the only vendor who does this – several SIEM and Log Management vendors do the same or similar. My point is not that one vendor is better than another, but to point out the general trend. It is interesting that Splunk’s success in this area has even taken their competitors by surprise.

Larry’s point … “The growth Splunk is achieving is due, in part, to penetrating deeper into the security marketplace and disrupting the conventional log management and auditing vendors.” … is accurate. But they are able to do this because of the increased volume of data we are collecting. People are data pack-rats. From experience, less than 1% of the logged data I collect has any value. Far too often, organizations do not invest the time to determine what can be thrown away. Many are too chicken to throw useless data away. They don’t want to discard data, just in case it has value, just in case you need it, just in case it contains the needle in the haystack you need for a forensic investigation. I don’t want to be buried under the wash of useless data. My recommendation is to take the time to understand what data you have, determine what you need, and throw the rest away.

The pessimist in me knows that this is unlikely to happen. We are not going to start throwing data away. Storage and computing power are cheap, and we are going to store every possible piece of data we can. Amazon S3 will be the digital equivalent of those U-Haul Self Storage places where you keep your grandmother’s china and all the crap you really don’t want, but think has value. That means we must have Google-like search approaches and indexing strategies, like those Splunk provides, just to navigate the stuff. Look for unstructured search techniques to be much sought after as data volumes continue to grow out of control. Hopefully the vendors will begin tagging data with an expiration date.
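To make the tokenize-and-index idea above concrete, here is a toy sketch – plain Python, invented log lines, nothing vendor-specific – of throwing raw events into a flat store and building an inverted index over them:

import re
from collections import defaultdict

lines = []                # the raw, unstructured "flat file" store
index = defaultdict(set)  # token -> set of line numbers (the inverted index)

def ingest(raw_line):
    """Tokenize a raw log line and index every token against its position."""
    line_no = len(lines)
    lines.append(raw_line)
    for token in re.findall(r"[\w.@-]+", raw_line.lower()):
        index[token].add(line_no)

def search(*terms):
    """Return the lines containing every term (a simple AND query)."""
    if not terms:
        return []
    hits = set.intersection(*(index.get(t.lower(), set()) for t in terms))
    return [lines[i] for i in sorted(hits)]

ingest("Oct 23 10:01:02 web01 sshd[812]: Failed password for root from 10.0.0.5")
ingest("Oct 23 10:01:07 web01 sshd[812]: Accepted password for adrian from 10.0.0.9")
ingest("Oct 23 10:02:14 db01 mysqld: Access denied for user 'app'@'10.0.0.9'")
print(search("failed", "root"))   # finds the first line, no schema required

No schema, no normalization – the attributes are derived from the data itself and linked back to it, which is exactly why the approach scales to data you have not modeled in advance.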


Which Bits Are the Right Bits?

(The following post covers some rather esoteric bits of security philosophy, or what Rich has affectionately called “Security Jazz” in the past. Unless you are into obscure data-centric security minutiae, you will probably not be interested.)

Richard Bejtlich tweeted and posted on data integrity:

The trustworthiness of a digital asset is limited by the owner’s capability to detect incidents compromising the integrity of that asset.

This statement is absolutely correct, and it makes a really important point that is often overlooked. The problem is that most technologies which produce digital assets do not build tamper detection in, giving owners no way to detect integrity violations. And far too often people confuse an interested party with the owner of a digital asset, as there can be many copies, each in the possession of a different person or group. It’s not that we can’t provide validation, because technology exists to provide assurance and authenticity.

Let’s look at an example: Who owns syslog data? Is it the IT administrator? The security professional? An auditor? In my opinion, none of them do. The OS owns the syslog, as it created the content. Much like you may think you own ‘your’ credit card number, but you don’t – it is something the issuing bank created and owns. They are the custodians of that number, and change it when they choose to. syslog has no way to verify the contents of the log it creates over time. We take it on faith that it is unlikely a log file was corrupted or altered. If we need to verify integrity in the future, too bad. If you did not build in safeguards and a method for validating integrity when you created the data, it’s too late. The trustworthiness of the digital asset is limited by the owner’s capability to detect a compromise, and for many digital assets, like syslog, that is nil.

For most digital assets, the fact that we use them every day provides sufficient confidence in their integrity. Encryption keys are a useful example. If the keys are corrupted, especially in a public-key situation, either the encryption or decryption operations fail. We may keep a backup somewhere safe to compare our working copy to, and while that can be effective in the most common problem situations, it is only relevant for certain (common) use cases. Digital assets also have a challenge physical objects do not, in terms of generations: even if we can verify a particular iteration of a digital object, there can be infinite copies, so we need to be able to verify that the most current iteration is in use. For digital assets like encryption keys, account numbers, access tokens, and digital representations of self, the owner has a strong vested interest in not sharing the asset, keeping it safe, and possibly even keeping redundant copies against future emergencies or for verification.

There are several technologies to prove integrity; they are just not used much. I posted a comment on Richard’s blog to this effect: the trustworthiness of a digital asset is limited more by the trustworthiness of the owner than by tamper detection. An owner with a desire for privacy and data integrity has the means to protect digital assets. Richard’s premise is an important one, as we very seldom build in safeguards to validate ownership, state, authenticity, or integrity. Non-repudiation tools and digital escrow services are nearly non-existent. There simply is not enough motivation to implement the tools we have which can provide assurance.
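As an illustration of how little it takes to build that capability in, here is a minimal sketch – plain Python, not any particular product – of a hash-chained log where any edit, deletion, or reordering of past entries becomes detectable:

import hashlib
import json
import time

GENESIS = "0" * 64

def append_entry(chain, message):
    """Append a log entry whose hash covers both its content and the previous hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = {"ts": time.time(), "msg": message}
    digest = hashlib.sha256((prev + json.dumps(body, sort_keys=True)).encode()).hexdigest()
    chain.append({**body, "prev": prev, "hash": digest})

def verify(chain):
    """Recompute every hash in order; tampering anywhere breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = {"ts": entry["ts"], "msg": entry["msg"]}
        digest = hashlib.sha256((prev + json.dumps(body, sort_keys=True)).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "user adrian logged in")
append_entry(log, "table ORDERS dropped")
print(verify(log))                      # True
log[0]["msg"] = "nothing to see here"   # silent alteration of history
print(verify(log))                      # False -- the change is now detectable

Anchoring the latest hash somewhere outside the owner’s control – a notary, a second system, even a printout – is what turns that detectability into something a relying party can trust.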
Gunnar Peterson blogged on this subject earlier this week as well, taking a slightly more applied look at the problem. His statement that these issues are outside the purview of DLP is absolutely correct. DLP is an outside-in model. This discussion has more to do with Digital Rights Management, which is an inside-out model. The owner must attest to integrity, and while a third-party proxy such as a DLP service could be entrusted with object escrow and integrity certification, it would require an alteration of the DLP “discover and protect” model. DRM is designed to be part of the application that creates the digital object, and while it is not often discussed, digital object ownership is part of that responsibility. Attestation to ownership is not possible without some form of integrity and state checking. I have seen select DRM systems that were interested in high integrity, but none were commercially viable.

Which answers Gunnar’s question:

Our ability using today’s technologies to deliver vastly improved audit logging is, I believe, a worthwhile and achievable goal. But it’s fair to ask – why hasn’t it happened yet?

There has been no financial incentive to do so. We have had excellent immutable log technologies for years, but they are only used in limited cases. Web application audit trails are an interesting application of this technology, and easy to do, but there is no compelling business problem motivating people to spend money on retrofitting what they have. I would like to see this type of feature built into financial transactions for consumer protection, where we really need to protect consumers from shoddy corporate record-keeping and failed banking institutions.


Microsoft Security Updates for October 2009

We don’t normally cover Patch Tuesday unless there is something unusual, but the October 2009 advance notification appears to be just that. It lists patches for 13 different security bulletins, covering what looks like 30 separate security problems. Eight of the bulletins are for critical vulnerabilities with the possibility of remote code execution. The majority of the patches are for Windows itself, with a couple for SQL Server, Office, and Forefront, but it looks like just about every production version of Windows is affected. Given the scope of this security patch and the seriousness of the bugs, it looks like IT departments are going to be working overtime for a while. Details of each of the vulnerabilities will be released later today, and I will update this post with specific points of interest as I find them. I am assuming that at least one of the patches is in response to the Server Message Block vulnerability discovered back in August. IIS is not listed as one of the affected products, but odds are the underlying OS will be, and folks will be restarting app servers either way. I am still trying to determine the issue with SQL Server. More to come…

==== Updated ====

Microsoft has updated the bulletin and included the security advisory links and some details on the threats. The SQL Server vulnerability is not within the core database engine, but in the GDI+ library distributed with the RSClientPrint ActiveX control – and it affects SQL Server 2005, not 2000:

When SQL Server Reporting Services is installed, the affected installations of SQL Server software may host the RSClientPrint ActiveX control. This ActiveX control distributes a copy of gdiplus.dll containing the affected code. Customers are only impacted when the RSClientPrint ActiveX control is installed on Microsoft Windows 2000 operating systems. If the RSClientPrint ActiveX control is installed on any other operating system, the system version of GDI+ will be used and the corresponding operating system update will protect them.

The GDI+ vulnerability pretty much allows you to take down any Microsoft platform or function that uses the GDI+ DLL – basically anything that uses images for forms, which is just about everything. My earlier comment that IIS was not listed was true, but there is in fact a bug linked to IIS: version 5.0 of the FTP service is vulnerable to remote code exploitation. Some of the exploits have workarounds and can be masked through firewall and web application firewall settings; however, given the number and severity of the issues, we do recommend patching as soon as possible.


Barracuda Networks Acquires Purewire

Today Barracuda Networks announced their acquisition of Purewire. Barracuda has an incredibly broad product suite – AV, WAF, anti-spam, anti-malware, SSL gateways, and so on – but is behind the competition in web filtering and seriously lacking in solutions delivered as SaaS. The Purewire product set closes Barracuda’s biggest product gap, giving them URL filtering and some basic content inspection. Most importantly, it can be delivered as SaaS. This is important for two reasons: first, Barracuda has been losing market share to email and web security vendors with comprehensive SaaS product lines. SaaS offers flexible deployment and extends the usable lifespan of existing appliance/software security investments. Second, SaaS can be sold ‘up-market’ or ‘down-market’, as pricing is simply adjusted for the desired capacity. This will keep the handful of Barracuda enterprise customers happy, and provide SME customers the ability to add capacity as needed, hopefully keeping them from bolting to other providers.

I have never had my hands on the Purewire product, so I have little knowledge of its internal workings or competitive differentiators. I have only spoken with a couple of customers, but they seemed to be satisfied with the web filtering capabilities. No wholehearted endorsements, but I did not hear any complaints either – and there is nothing wrong with endorsements that lack passion, as often the best that can be said for a web filtering product is that it does its job and goes unnoticed. Based on recent press releases and joint customer announcements, I was expecting Proofpoint to be the acquirer. Regardless, this is a better fit for both companies, given Proofpoint’s significant overlap with Purewire, and Barracuda has the greater need for this technology. It has been a long time coming, but Barracuda is finally turning around and showing a dedication to a service-based delivery model. Remember, it was only two years ago that Barracuda bet on Web Application Firewalls acquired with Netcontinuum. That bet did not pay off particularly well, as the WAF market never blossomed as predicted, and it further entrenched Barracuda as a box shop. This is a move in the right direction.


Personal Information Dump

Interesting story of a San Francisco commercial landlord who found 46 boxes of personal information and financial data for thousands of people, left behind by a failed title company.

The boxes were the detritus of what was until last year a thriving business, Financial Title. Then the economy tanked, and the company folded up its locations all across California, including the one Tookoian rented to it. “They basically abruptly closed shop,” he said as he walked past the company’s logo still affixed to a white wall. “Turned the lights off, closed the door and walked away.”

Despite all of the data breaches and crazy stuff we see in the data security profession, I am still shocked at this type of carelessness. I expect to see prosecutors go after the owners of the company for failure to exercise their custodial responsibilities for these records.

Ridout says the Federal Trade Commission has implemented new rules requiring businesses to properly dispose of sensitive personal information. So far, an Illinois mortgage company was fined $50,000 for throwing personal records in a dumpster. But fines like that are rare.

And after his good deed of having the records destroyed, the landlord still had to pay the bill. Perhaps the FTC will set an example in this case.


Friday Summary – October 9, 2009

A lot of ‘not’ this week. I was not at SecTor, although I understand it was a good time. I am not going to Oracle Open World. I should be going, but too many projects are either beginning or remain unfinished for me to travel to the Bay Area, visit old friends, and find a good bar to hang out at. That is lots of fun I will not be having. I will not be going to Atlanta in November, as the TechTarget event for data security has been knocked off the calendar. And I am not taking a free Mexican holiday in Puerta de Cancun or wherever Rich is enjoying himself. Oh well, the weather has been awesome in Phoenix.

With the posts for Dark Reading this week I spent a bunch of time rummaging around for old database versions and looking through notes for database audit performance testing. Some of the old Oracle 7.3 tests with nearly 50% transactional degradation still seem unreal, but I guess it should not be surprising that auditing features in older databases are a problem. They were not designed to audit transactions like we do today. They were designed to capture a sample of activity so administrators could understand how people were using the database. Performance and resource allocation were the end goals. Once a sample was collected, auditing was turned off. Security was not really a consideration, and no thought was given to compliance. Yet the order of use and priority has been turned upside down: these features now fill a critical compliance need, but require careful deployment.

While I was at RSA this year, one database vendor pointed out some of the security vendors citing this 50% penalty as what you could expect. Bollocks! Database security and compliance vendors who do not use native database auditing would like you to embrace this performance myth. They have a competitive offering to sell, so the more people are fearful of performance degradation, the better their odds of selling you an alternative to accomplish this task. I hear DBAs complain a lot about using native auditing features because it used to be a huge performance problem, and they would get complaints from database and application users. Auditing produces a lot of data, and something has to be done with that data. It needs to be parsed for significant events, reported on, acted upon, erased or backed up, or some combination thereof (a small sketch of that parsing step appears at the end of this summary). In the past, database administrators performed these functions manually, or wrote scripts to partially automate the responsibility, and rewrote them any time something within IT changed. As a form of self-preservation, DBAs generally do not like accepting this responsibility. And I admit, it takes a little time to get it set up right, and you may even discover some settings to be counter-intuitive. However, auditing is a powerful tool and it should not be dismissed out of hand. It is not my first choice for database security; no way, no how! But for compliance reporting and control validation, especially for SOX, it is really effective. Plus, much of this burden can be removed by using third-party vendors to handle the setup, data extraction, cleanup, and reporting. Anyway, enough about database auditing. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian’s Dark Reading post on Database Auditing Essentials.
  • David Mortman’s Diversity of Thinking article on Threatpost.
  • Adrian’s follow-up Dark Reading post on Auditing Pitfalls.

Favorite Securosis Posts
  • Rich: Database Audit Events. This is a lot of research!
  • Adrian: This week’s Friday Summary. No link necessary!
  • David Meier & David Mortman: Visa’s Data Field Encryption.

Favorite Outside Posts
  • Rich: Coconut Television. “No tequila yet, but we will see how the night goes.”
  • Adrian & Mortman: JJ on SecTor’s Wall of Shame.
  • Meier: Comcast pop-ups alert customers to PC infections. It may be effective, but why are you inspecting my traffic? How do I opt out?

Top News and Posts
  • Bloggers who review products must disclose compensation. But nothing says you need to disclose compensation for not writing about a product (wink-wink). Payola may be illegal, but hush money is bueno!
  • Statistics from the Hotmail Phishing Scam. This closely mimics some of the weak password detection and dictionary attack work I conducted. You will notice any dictionary attack must be altered for regional preferences.
  • Express Scripts notifying 700,000 in pharma data breach.
  • Bank fraud: Malware that rewrites your bank statement.
  • PayPal Pissed!
  • Why the FBI Director does not bank online.
  • Botnet research conducted by the University of California at Santa Barbara. Full research paper forthcoming.
  • AVG launches new AV suite while Microsoft is breathing down their necks.
  • Hundreds arrested in phishing scam where as much as $1M US was stolen. What I found most interesting about this is that MSNBC and Fox News only mention ‘overseas’ participants, while small investigative papers like the Sacramento Bee and others gave details and noted the cooperation of Egyptian authorities. I guess ‘fair and balanced’ does not necessarily mean ‘complete and accurate’.
  • McAfee and Verizon partnership.
  • Passwords for Gmail, Yahoo and Hotmail accounts leaked.
  • What’s wrong with a wall of sheep? Kidding. People who don’t understand security grasping at straws.
  • Malware Flea Market.

Blog Comment of the Week
This week’s best comment comes from Adam in response to Mortman’s Online Fraud Report:

It’s sort of hard to answer without knowing more about what data he has, but what I’d like is raw data, anonymized to the extent needed, and shared in both data and analyzed forms, so other people can apply their own analysis to the data.
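And for the database auditing discussion above, here is the small sketch promised earlier – a minimal example of parsing an exported audit trail for significant events. The file name, column names, and action list are all hypothetical, since every platform exports audit data differently:

import csv
from collections import Counter

# Events worth escalating -- an illustrative list, not any vendor's taxonomy.
SIGNIFICANT = {"LOGON_FAILED", "GRANT", "ALTER USER", "DROP TABLE"}

def summarize(audit_csv_path):
    """Tally significant events per user from an exported audit trail."""
    tally = Counter()
    with open(audit_csv_path, newline="") as f:
        # Assumes columns named 'username' and 'action'; adjust to your export format.
        for row in csv.DictReader(f):
            action = row["action"].strip().upper()
            if action in SIGNIFICANT:
                tally[(row["username"], action)] += 1
    return tally

if __name__ == "__main__":
    for (user, action), count in summarize("audit_export.csv").most_common():
        print(f"{user:<16} {action:<14} {count}")

The point is not the twenty lines of code – it is that someone has to own this step, whether that is a DBA script, a third-party tool, or a reporting product.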


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.