
Friday Summary: April 23, 2010

“Don’t worry about that 5 and 1 Adjustable Rate Mortgage. 5 years from now your house will be worth twice what you paid, and you can re-finance.” It’s worth half, and you can’t get a new loan. “That’s a great interest rate!” It wasn’t, and points were padded on the back end. “Collateralized debt obligations are a great investment – they are Triple A rated!” Terrible investment, closer to Triple B value, and a root cause of the financial collapse. “Rates have never been lower so you should refinance now!” The reappraisal that is a part of refinancing often resets the equity proportions and amortization percentage, so you can pay an extra $100k in interest, plus PMI to protect the bank. “This credit card gives you 1 air mile for every dollar you spend!” And a 31.5% interest rate, plus a fee for the privilege. Haven’t heard these? How about “Don’t use your PIN number with your Debit Card: it’s less secure”? Are you kidding me? Signatures are pretty easy to forge, but a stolen debit card is a lot more difficult to use if you don’t have the PIN. But this is not a little misunderstanding, like “Diet soda doesn’t make you fat.” Despite the existence of illicit card readers and hidden cameras, PINs are effective at stopping most would-be criminals from draining your bank account. Chase is actually encouraging their customers to be less secure so they can weasel a few extra bucks from the merchants. Multiply this across a few million people and we are talking serious money. And when fraud does occur, the bank is exempt from liability. Amazing! I used to get mad when I visited foreclosed homes and saw “Lawn Service by …” signs – when there was no lawn – or new “Winterized by …” signs on homes in Phoenix. In June. I thought the banks were getting ripped off. Then I learned that the banks owned a significant portion of the service companies performing these unneeded services. I guess I should not be surprised by banking shenanigans any more, but this is maddening. Take my advice … use a PIN with your debit card. Or if the banks frustrate you, just use cash.

Webcasts, Podcasts, Outside Writing, and Conferences
Adrian’s Dark Reading post on PCI Alternatives. Rich on Database Security for Security Professionals. Rich on Security and Pen Testing.

Favorite Securosis Posts
Mike: Whitepaper Released: Quick Wins with Data Loss Prevention. This is a great paper. Short, focused, and to the point. How can you get quick value from a DLP investment? The answer is here. Rich: ESF: Controls: Full Disk Encryption. It’s not just a good idea, it’s the law. Sort of. In some states. Depending on the data. Not so much the law as a safe harbor. Well, sometimes a safe harbor, depending on how the data is lost. And… forget it – just encrypt your damn hard drives. Adrian: Who DAT McAfee Fail?

Other Securosis Posts
Database Security Fundamentals: Auditing Events. Incite 4/21/2010: Picky Picky. Google: An Example of Why Single Sign On Sucks. Level 4 Apathy. FireStarter: You Don’t Need Central Key Management. ESF: Endpoint Incident Response. Public Goods.

Favorite Outside Posts
Mike: Cybersecurity and National Policy. This is from two weeks ago (and I mentioned it in the Incite this week), but if you missed Dan Geer’s perspectives on the challenges of building national cybersecurity policy, you really missed out. Read It Now. Rich: CSRF Isn’t A Big Deal – Duh! Here’s what stuns me about the CSRF article Rsnake criticizes. My hacking skills are far from 133t, but CSRF was the first thing I figured out on my own, long before I ever heard the term. It’s so simple you need to be pretty brain dead to miss it. Repeat after me: if a site maintains session persistence, odds are really darn good you can hit it with a Cross Site Request Forgery, because all you need to do is fake-submit some form data. Adrian: Measurements Over Models.

Project Quant Posts
Project Quant: Database Security – Change Management.

Research Reports and Presentations
Low Hanging Fruit: Quick Wins with Data Loss Prevention. Report: Database Assessment.

Top News and Posts
Get your friends to join EFF, go to Defcon! Porn Virus Blackmails Victims Over “Copyright Violation”. Network Solutions Sites Hacked Again. SANS: Critical Control 15: Data Loss Prevention. Amrit’s Securing the Mobile Workforce. PayPal Patches Critical Vulnerabilities. US government finally admits most piracy estimates are bogus. Not that it will stop them. Personally, I welcome our RIAA and MPAA thought police overlords. Google to Reveal Research into Fake AV Operations. Oracle released the April 2010 CPU. Hackers exploit new Java zero-day bug. Apple Patches Pwn2Own Flaw That Hacked Safari.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to Who DAT McAfee Fail: To McAfee’s credit, they did own the issue and made numerous apologies. Personally, I think the apology should have come from DeWalt, the CEO, on the blog. But they aren’t making excuses and are working diligently to fix the problem. You must not be a McAfee customer. They didn’t own the issue. They blamed the customer. They said “Corporations who kept a feature called “Scan Processes on Enable” in McAfee VirusScan Enterprise disabled, as it is by default, were not affected.” Unfortunately, the above is factually inaccurate. It is disabled by default in 8.7; if you were running an older client, you’re screwed. Not only is it on, but it cannot be disabled. Also, if you don’t scan SVChost on process enable, you may scan it when you conduct a daily memory scan or when you do a scheduled scan. Either of those can catch it and screw you. If you do a memory scan at boot, you’ll be in the same loop. They also obfuscated on the severity: “the error can result in


Database Security Fundamentals: Auditing Events

I realized that I made a mistake in my last post. In attempting to simplify database auditing in Auditing Transactions, I instead made it more complicated. What I want to do is differentiate database auditing through the native database transactional audit trail from other forms of logging and event collection. The reason is that the native database audit trail provides a sequence of associated events, and whether and when those events were committed to disk. Simple events do not provide the same degree of context and are not as capable of providing database state. If you need application context and state – perhaps for Sarbanes-Oxley – you need the audit log. Make no mistake: there are simpler and less invasive ways of collecting data. They also provide an alternative – and in some cases clearer – picture of events. For example, it’s a heck of a lot easier to get data from syslog than native audit. And if all you are interested in is when patches are installed, syslog is a better source of information. If you are only interested in failed login attempts, a login trigger is far more efficient. The entire purpose of this Database Security Fundamentals series is to create a set of steps, which can each be performed in about an afternoon’s time, to secure your database. I believe the entire sequence can be completed in a week. My goal is to provide clarity and simplicity for database and IT administrators who do not have time to learn and deploy advanced security measures, and are instead interested in raising the security bar without spending weeks or months on the project. So I want to step back and clarify that the last post is specifically aimed at those who must use native database audit, primarily to populate reports or fulfill regulatory controls, with security as a secondary goal. And yes, compliance of some sort has become a fundamental requirement for the majority of DBAs. For the rest of you, we’ll dig into simple event collection for security events. If you are interested in a few simple events, but not enough to justify the burden of audit, this phase will be more useful to you.

Define events: The goal here is to figure out what you need, or what others want from you. Installation of patches, alteration of specific permissions settings, granting of public roles, insertion of stored procedures, ad-hoc database access, use of management tools like Toad, adding views, 3 or more failed login attempts, and just about anything involving DBA capabilities are common concerns. These are all simple events with frequency rates low enough not to overwhelm you.

Determine collection methods: Based on which events you want, select a data collection method or two that gather the data you need. There are a lot of ways to gather event data: system tables, command line tools, triggers, syslog, trace options, etc.

Write scripts: To make this easier on yourself, script your queries, or turn them into stored procedures, or both. Create the scripts to collect the events and, if needed, filter out what you don’t need. Use whatever scripting language you are comfortable with. Keep in mind that it is often useful to have the scripts make follow-up queries to reference other data sources, and being able to recursively gather additional information based upon simple if-then or where comparisons on data will save you a lot of work. User permission mapping is one such example, as enumerating the groups and roles a user belongs to can require a complex set of queries, depending on which platform you are using. You may want to send yourself an email for more critical events that need urgent attention.

Implement: Deploy your scripts and test. Annoying though it may be, you will want to set up a specific user account with just enough privileges to perform the data collection. Secure these scripts so unprivileged users cannot use or modify them. You will want to set up a secure place to dump the results, and if necessary archive and remove files so they don’t take up too much disk space.

Set review schedule: The data you collect is only valuable if you use it, so get in the habit of reviewing the results for anomalies. If security is your goal, plan on spending a few minutes every day on this, and setting alerts on the one or two events that absolutely, positively, look suspicious.

Archive the scripts and document: Keep a copy of the scripts and notes on what you implemented for future reference. For a single database I find that I can create and test the scripts in an afternoon. Another few hours to set up the user accounts, cron jobs, or archive scripts. After that the entire process is pretty much self-sustaining as long as you stay on top of event review.

Some of you who are the lone DBA at your job will consider this step in the Fundamentals series silly. I have had DBAs ask me, “Why would you set up a script to track your own work? Why would I send myself a reminder that I just added a table view?” Remember that this is meant to catch stuff that should not be happening, or events you were not aware of, like someone else in IT making changes to be ‘helpful’. Or when an attacker tries to compromise a database. This afternoon’s effort will all seem worth it when you have your first ‘WTF?’ moment a few months from now, when some web programmer changes the database without telling you.

More advanced methods
I intended to leave database activity monitoring out of this discussion. Monitoring is an advanced database security option, and does not fit into this simpler Essentials series. But those tools provide far more advanced data collection and storage capabilities, policies, and reporting. If the number of events to collect, or of databases grows, or if the policies and reports you
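To make the scripting step above a little more concrete, here is a minimal Python sketch of the idea. It is illustrative only: the audit view and column names are Oracle-flavored placeholders, the connect() helper and the email addresses are made up, and you would substitute whatever your platform, driver, and alerting method actually provide (a login trigger table, syslog extracts, system views, and so on).

```python
#!/usr/bin/env python3
"""Minimal event-collection sketch: pull failed logins since the last run
and mail an alert if any are found. View and column names are illustrative --
adapt them to your platform (Oracle, SQL Server, DB2, ...)."""

import datetime
import smtplib
from email.message import EmailMessage

# Placeholder: replace with your real driver, e.g. cx_Oracle, pyodbc, ibm_db.
def connect():
    raise NotImplementedError("return a DB-API connection using a low-privilege account")

FAILED_LOGIN_SQL = """
    SELECT username, userhost, timestamp
      FROM sys.dba_audit_trail          -- illustrative Oracle-style view
     WHERE returncode <> 0              -- non-zero return code = failed action
       AND action_name = 'LOGON'
       AND timestamp > :since
"""  # bind-variable syntax varies by driver

def collect_failed_logins(since):
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute(FAILED_LOGIN_SQL, {"since": since})
        return cur.fetchall()
    finally:
        conn.close()

def alert(rows):
    # Mail a short summary to the DBA; addresses are placeholders.
    msg = EmailMessage()
    msg["Subject"] = f"{len(rows)} failed database logins"
    msg["From"] = "db-audit@example.com"
    msg["To"] = "dba@example.com"
    msg.set_content("\n".join(f"{u}@{h} at {t}" for u, h, t in rows))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    since = datetime.datetime.now() - datetime.timedelta(days=1)
    rows = collect_failed_logins(since)
    if len(rows) >= 3:   # the "3 or more failed logins" threshold mentioned above
        alert(rows)
```

Run from cron under a restricted account, something this small covers the collection, filtering, and alerting pieces described above.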


FireStarter: You Don’t Need Central Key Management

If you are using encryption, somewhere you have encryption keys. Where you store them, and how they are managed and shared, are legitimate concerns. It is fashionable to store all keys in a single centralized key management server. Much as the name implies, this means storing all of your keys, of different types, for multiple use cases, in a single key management server. Rich likes to call these ‘uber’ key managers, which handle any and all key functions; they are distinct from external key management servers that unify instances of a single application, or provide key services across the disks in your storage array. Conceptually, a single product that handles all my key needs from a unified interface sounds great. But the real question is: why do you need it? Central key management is not simplified key management. Central key management requires architectural and deployment changes. Consolidation of key storage and use policies does not ensure easier management, but does mean increased cost. Few people want or need centralized key management. Putting all your keys from every application or service into a single monolithic key management store offers few advantages, and creates a number of problems of its own. The implication is that a central server will offer easier management, increased redundancy, and greater functionality; but these are often illusory benefits, based on solving a problem you did not have to begin with. In practice the internal and external key services that came with the products you already own are likely not only sufficient, but better. Here’s why:

No reason to share keys – Databases, disks, applications, file systems, and wherever else you encrypt seldom (if ever) need to share keys across these different services. Even if the encryption algorithm, key type, and key size are all consistent, there is no need to share keys between your tape drive and web application server. Using encryption to provide data integrity and privacy is a common goal, but the use cases and technical constraints are radically different.

Redundant – Why use it if it is already built into your application? Internal key management is built into most applications and cryptographic systems such as storage products and file-level encryption tools. External key management – for products that really need external support for good key security and proper separation of duties (SoD) – is provided by application libraries and database encryption products. Failover, backup, management interfaces, rotation, and cipher strength are all common features, so why centralize? Multiple services mean more interfaces to learn, but inherently provide SoD and focused policy management.

Cost – Central key management servers are standalone, dedicated products. They excel in areas such as key security, ease of use, key sharing, etc. But they are still an additional investment.

Policy Management – A single management console to manage the system sounds like a great convenience, but I don’t always want the same policies across dissimilar applications and use cases. In fact, I usually want different key lengths, different rotation schedules, and different ciphers, depending on the data I am protecting, and prefer the granularity to specify them at the level of an individual use case.

Single administration console – Having a single location to manage keys is conceptually useful. It may actually be useful if you have a very large number of users or must distribute keys and data across a large organization. Most of you reading this, however, work for small shops, and the one or two areas where you have deployed encryption do not require centralization. Few of you are at large enough organizations to worry about thousands of users each with hundreds of keys – and thus to need central key management to address data access issues across a dispersed environment.

Key rotation – Having a central key server to automate key management, especially the complexity of key rotation, is a common motivator for central services. Rotation, or key-cycling, is a common feature whereby key management products periodically issue new keys on the premise that with sufficient time and effort, someone will be able to discover the encryption keys in use. Theoretically you would issue a new key and then re-encrypt all data under the new key, but in practice it would take months or even years to re-encrypt everything, as the data sets are simply too large, and the media might even be off-line or off-site. In this scenario data is only re-encrypted under new keys opportunistically, when it’s rewritten (and perhaps also when it’s reread). But there is no guarantee that data will be re-encrypted. With every key rotation cycle, a new set of keys is generated, and the old ones must be retained to decrypt older data. Over time data will be encrypted under so many different keys that you must use a key manager just to keep track of what’s what. It’s a side effect of the encryption scheme, and for a modicum of extra security you get a bloated key server. Better to keep this compartmentalized than centralized.

Don’t go looking for central key management when external key management is all you need. Central key management is occasionally necessary – most often for existing systems with really bad built-in key management, geographically dispersed servers that require key sharing, or thousands of users each with multiple keys. A single point of management is very much a secondary advantage, however, and should not drive your decisions. So why do you think you need it? What’s the advantage to you?
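To illustrate the key-tracking side effect described above, here is a small Python sketch (it assumes the third-party cryptography package). It is not a recommendation for how to build a key manager – just a toy showing that once you rotate, every old key must be retained to read old data, which is exactly the bookkeeping burden that makes a dedicated key server look attractive.

```python
from cryptography.fernet import Fernet  # pip install cryptography

class KeyRing:
    """Toy key store: every rotation adds a key; old keys stay forever
    because old ciphertexts are never guaranteed to be re-encrypted."""

    def __init__(self):
        self.keys = {}           # key_id -> Fernet instance
        self.current_id = 0
        self.rotate()

    def rotate(self):
        self.current_id += 1
        self.keys[self.current_id] = Fernet(Fernet.generate_key())

    def encrypt(self, data: bytes):
        # Tag each ciphertext with the ID of the key that produced it.
        return self.current_id, self.keys[self.current_id].encrypt(data)

    def decrypt(self, key_id: int, token: bytes) -> bytes:
        # Old data forces us to keep (and protect) every key we ever issued.
        return self.keys[key_id].decrypt(token)

ring = KeyRing()
old = ring.encrypt(b"record written under key 1")
ring.rotate()                      # say, a quarterly rotation
new = ring.encrypt(b"record written under key 2")
assert ring.decrypt(*old) == b"record written under key 1"
assert ring.decrypt(*new) == b"record written under key 2"
print(f"{len(ring.keys)} keys to track after a single rotation cycle")
```

Multiply this by every application that rotates on its own schedule and the ‘bloated key server’ problem is easy to see – which is why compartmentalized, per-application key management is usually enough.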


Friday Summary: April 16, 2010

I am sitting here staring at power supplies and empty cases. Cleaning out the garage and closets, looking at the remnants from my PC building days. I used to love going out to select new motherboard and chipset combinations, hand-selecting each component to build just the right database server or video game machine. Over the years one sad acknowledgement needed to be made: after a year or so, the only pieces worth a nickel were the power supply and the case. Sad, but you spend $1,500.00 and after a few months the freaking box that housed the parts was the only remaining item of value. I was thinking about this during some of our recent meetings with clients and would-be clients. Rich, Mike, and I are periodically approached by investors to review portfolio companies. We look both at their technology and market opportunities, and determine whether we feel the product is hitting the mark, and reaching buyers with the right product and message. We are engaged in response to either mis-aligned vision between investors and company operators (shocker, I know), or more commonly to give the investors some understanding of whether the company is worth salvaging through additional investment and a change of focus. Sometimes the company has followed market trends to preserve value, but quite a few turn out to be just a box of old parts. If this is not a direct consequence of Moore’s law, it should be. We see cases where technologies were obsolete before they hit the market. In a handful of instances, we do find one or two worth salvaging, but not for the reasons they thought. In those cases the engineering staff was smart enough, or lucky enough, to build a deployment model or architecture that is currently relevant. The core technology? Forget it. Pitch it in the dust bin and take the write-off, because that $20M investment is spent. But the ‘box’ was valuable enough to salvage, invest in, or sell. It’s ironic, and it goes to show how tough technology start-ups are to get right, and that luck is often better than planning. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Living with Windows: security. Rich wrote this up for Macworld. Rich’s Putting the Fun in Dysfunctional presentation at RSA. Adrian’s PCI Database Security Primer, part 1, at Dark Reading.

Favorite Securosis Posts
Mike Rothman: FireStarter: No User Left Behind. User education is critical, but it needs to be done right, with the right incentives. David Mortman: ESF: Building the Endpoint Security Program. Rich: Anti-Malware Effectiveness: The Truth Is out There. Lies, damn lies, and testing. Adrian Lane: ESF: Full Disk Encryption.

Other Securosis Posts
ESF: Endpoint Compliance Reporting. Incite 4/14/2010: Just Think. ESF: Controls: Firewalls, HIPS, and Device Control. FireStarter: No User Left Behind. ESF: Controls: Anti-Malware. Friday Summary: April 9, 2010. Database Security Fundamentals: Auditing Transactions.

Favorite Outside Posts
Mike Rothman: if security wasn’t hard, everyone would do it. Lonervamp nails this one. Tools are great, but it’s people who have to implement the programs. David Mortman: For Thirty Pieces of Silver My Product Can Beat Your Product. Lori MacVittie brings the Rothman-esque smackdown! Pepper: apache.org server cracked: some passwords sniffed; others brute-forced. Time to change some passwords…. Rich: The Spider That Ate My Site. Okay, I hate to admit this but I did something similar to our back end management interface. When admin buttons like “delete” are merely links, a simple spider can cause some serious damage. Adrian Lane: Thinking About The Apache.org Attacks. Actually raises more questions than it answers. What does it say about security when software as prevalent as Apache has a flaw? SHA 256 has been around forever. And I get that fewer attacks mean less exposure of inherent flaws, but complexity of software plays a similar role in masking flaws.

Project Quant Posts
Project Quant: Database Security – Change Management. Project Quant: Database Security – Patch.

Top News and Posts
The Spy in the Middle. SSL Certificate Authority trust is a hot topic lately – scary because browsers inherently trust so many CAs, and we know some of them are untrustworthy. US government finally admits most piracy estimates are bogus. Not that it will stop them. Personally, I welcome our RIAA and MPAA thought police overlords. Google to Reveal Research into Fake AV Operations. Hackers exploit new Java zero-day bug. Apple Patches Pwn2Own Flaw That Hacked Safari. Rsnake on the significance of CSRF. If you don’t know who Immunet is and what they do, now’s a good time to read Brian Krebs’ interview with Adam O’Donnell and Alfred Huger. Oracle’s April CPU advisory. Nothing cataclysmic. Microsoft on the other hand announced a couple of critical patches, and the Authenticode bypass hack looks serious.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to LonerVamp, in response to Security Incite: Just Think. Nice opening post, especially with the kicker at the end that you’re actually writing it on a plane! 🙂 I definitely find myself purposely unplugging at times (even if I’m still playing with something electronic) and protecting my private time when reasonable. I wonder if this ultimately has to do with the typically American concept of super-efficiency…milking every waking moment with something productive…at the expense of the great, relaxing, leisure things in life. I could listen to a podcast during this normally quiet hour in my day! And so on… @Porky Risk…Pig: It doesn’t help that every security professional shotguns everyone else’s measurements…and with good reason! At some point, I think we’ll have to accept that every company CEO is different, and it takes a person to distill the necessary information down for her consumption. No tool will ever be enough, just like no tool ever determines if a product will be successful. @My dad can beat up your dad: Ugh…I have some pretty strong opinions about cyberbullying and home internet monitoring…but I tend to


Public Goods

Chris Pepper tweeted a very cool post on Why Content is a Public Good. The author, Milena Popova, provides an economist’s perspective on market forces and digital goods. Her premise is that in economic terms, many types of electronic content are “public goods” – that being a technical term for objects with infinite supply and no good way to control consumption. She makes the economic concepts of ‘rival’ and ‘excludable’ very easy to understand, and by breaking it down into rudimentary components, makes a compelling argument that content is a public good: It means that old business models based on content being a club good simply don’t work. It means we have to rethink our relationship with content – as creators, as distributors and as consumers. It means that there are a lot of giants in the content distribution industry whose livelihoods (profit margins) are being pulled out from under them faster than they can say “illegal downloads”, and they are fighting it. Of course they’re fighting it. They’ve had an incredibly profitable business model for about a century and suddenly they don’t. Let’s face it, human beings don’t like change at the best of times, and we sure as hell don’t like it when it means less cash in our pockets. I have written many posts on how economics affect DRM, RIAA, and ‘piracy’; and on the difference between actual security and security marketing, so I won’t rehash those subjects here; but note the common theme is that a busted business model is the root of the problem. Right now I want to stay away from some of the negativity of those posts, and instead focus on the economic drivers. Ms. Popova does a much better job than I of isolating the underlying forces, and discusses the factors in a way that helps us begin to visualize possible solutions. A lot of people have a hard time with the concept of free, and how you can actually make money in a world with so much free stuff. In a capitalist society we all have trouble with this. I talk to people in IT who still don’t think Linux and Java are viable technologies, or that anyone could make money with those products. But the availability of free stuff requires you to think a little differently about value – fewer people will pay money for the everyday and ordinary stuff because they don’t have to, but they will pay for things they perceive as special. In fact, I don’t think I fully grasped the concept and implications until I started working at Securosis. We are a research company that gives away most of our products for free, but charges for services and engagements. One area where I was at odds with Popova was on the concept of “price discrimination”. From my perspective this looks more like the market being able to set the price, but doing so far more efficiently: person to person, item by item, and adjusted over time. This is a very cool concept if you think about something like television: if you pay channel by channel, how many channels would you pay for? You have 400 or so, but I bet when it came to spending money, very few would get your hard-earned dollars. The NFL knows this, as football not only drives huge ad revenue, but also single-handedly drives the bulk of hi-def television sales and additional add-on packages. If it was not for bundling into programming packages, many (most?) other channels would not be able to survive. All in all, one of the better posts I have seen on the problems of dealing with consumer media.


Database Security Fundamentals: Auditing Transactions

I am now switching gears to talk about some of the ‘detective’ measures that help with forensic analysis of transactions and activity. The preventative measures discussed previously are great for protecting your system from known attacks, but they don’t help detect fraudulent misuse or failure of business processes. For that we need to capture the events that make up the business processes and analyze them. Our basic tool is database auditing, which provides plenty of useful information. Before I get too far into this discussion, it’s worth noting that the term ‘transactions’ is an arbitrary choice on my part. I use it to highlight both that audit data can group statements into logical sequences corresponding to particular business functions, and that audit trail analysis is most useful when looking for insider misuse – rather than external attacks. Audit trails are much more useful for detecting what was changed, rather than what was accessed, and for forensic examination of database ‘state’. There are easier and more efficient ways of cataloging SELECT statements and simple events like failed logins. Usually at this point I provide a business justification for auditing of transactions or specific events, and some use cases as examples of how it helps. I will skip that this time, as you already know that auditing is built into every database, and captures database queries, transactions, and important system changes. You probably already use audit logs to see what actions are most common so you can set up indices and tune your most common queries. You may even use auditing to detect suspect activity, to perform forensic audits, or even to address a specific compliance mandate. At the very least you need to have some form of database auditing enabled on production databases to answer the question “What the &!$^% happened” after a database crash. Regardless of your reasons, auditing is essential for security and compliance. In this post I will focus on capturing transactions and alterations to the database. What type of analysis you do, how long you keep the data, and what reports you create are secondary. I am focusing on gathering the audit trail, rather than what to do with it next. What’s critical here is understanding what data you need, and how best to capture it. All databases have some type of audit function. The ‘gotcha’ is that use of database auditing needs careful consideration to avoid storage, performance, and management nightmares. Depending on the vendor and how the feature is deployed, you can gather detailed transactional information, filter unwanted transactions to get a succinct view of database activity, and do so with only modest performance impact. Yes, I said modest performance impact. This remains a hot-button issue for most DBAs and is easy to mess up, so planning and basic tests form the bulk of this phase.

Benchmark: Find a test system, gather a bunch of queries that reflect user activity on your system, and run some benchmarks. Turn on the auditing or tracing facility and rerun the benchmark. Wince and swear repeatedly at the performance degradation you just witnessed. Aren’t you glad you did this on a test system? Now you have a baseline for best and worst case performance.

Tune: Select audit options. Oracle, SQL Server, DB2, and Sybase have multiple options for generating audit trails. Don’t use Oracle’s fine-grained auditing when normal auditing will suffice. Don’t use event monitors on DB2 if you have many different types of events to collect. Which auditing option you choose will dramatically affect performance and data volumes. Examine the audit capture options, and select only the event types you need. If you only care about user events, don’t bother collecting all the object events. If you only care about failed logins and changes to system privileges, filter out meta-data and data changes. Examine buffer space, tablespace, block utilization, and other resource tuning options. Audit records are static in size, so their data blocks can be set to ‘write-only’, thus saving space. For audit trails that store data within database tables, you can pre-allocate table space and blocks to reduce latency from space allocation. Rerun the benchmarks and see what helps performance. Generally these steps provide significant performance gains. No more cursing the database vendor should be needed.

Filter: Get rid of specific actions you don’t need. For example, batch updates may fall outside your area of interest, yet comprise a significant fraction of the audit log, and can therefore be parsed out. Or you may want to audit all transactions while the database is running, but not need events from database startup. In some scenarios the database can filter these events, which improves performance. If the database does not provide this type of row-level filtering, you can add a WHERE clause to the extraction query or use a script to whittle down the extracted data.

Implement: Take what you learned and apply it in the production environment. Verify that the audit trail is being collected.

Ad-hoc Analysis: Review the logs periodically, focusing on failures of system functions and logins that indicate system probing. Any policy or report that you generate may miss unusual events, so I recommend occasional ad hoc analysis to detect anything hinky.

Record: Document the audit settings as well as the test results, as you will need them in the future to understand the impact of increased auditing levels. Communicate the use of auditing to users, and warn them that there will be a performance hit due to regulatory and security requirements. Create a log retention policy. This is necessary – even if the policy simply states that you will collect audit trails and delete them at the end of every week, write it down. Many compliance requirements let you define retention however you choose, so be proactive. You can always change it in the future.

More advanced considerations:

Automate: I recommend that you automate as much of the process as you can, once you have done the initial analysis and configuration. Collection of the audit
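As a rough sketch of the filter-and-extract approach, the Python fragment below pulls one day of audit records, drops known noise in the WHERE clause, and archives the rest to a CSV file. The audit view, column names, excluded accounts, and connect() helper are all illustrative placeholders to adapt for your platform, and bind-variable syntax varies by driver.

```python
import csv
import datetime

# Placeholder: swap in your real driver (cx_Oracle, pyodbc, ibm_db, ...).
def connect():
    raise NotImplementedError("return a DB-API connection for the audit account")

EXTRACT_SQL = """
    SELECT username, action_name, obj_name, timestamp, returncode
      FROM sys.dba_audit_trail              -- illustrative Oracle-style view
     WHERE timestamp >= :start AND timestamp < :end
       AND username NOT IN ('BATCH_USER')   -- drop known batch-job noise
       AND action_name <> 'STARTUP'         -- ignore instance startup events
"""

def extract_day(day: datetime.date, out_path: str) -> int:
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute(EXTRACT_SQL, {"start": day, "end": day + datetime.timedelta(days=1)})
        rows = cur.fetchall()
    finally:
        conn.close()
    # Archive the filtered records to a dated file for later review.
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["username", "action", "object", "timestamp", "returncode"])
        writer.writerows(rows)
    return len(rows)

if __name__ == "__main__":
    yesterday = datetime.date.today() - datetime.timedelta(days=1)
    count = extract_day(yesterday, f"audit-{yesterday}.csv")
    print(f"archived {count} audit records for {yesterday}")
```

Scheduled daily, the dated output files also give you something concrete to apply your retention policy to.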


Database Virtualization and Abstraction

When you think of database virtualization, do you think this term means:
a) Abstracting the database installation/engine from the application and storage layers.
b) Abstracting the database instance across multiple database installations or engines.
c) Abstracting the data and tables from a specific database engine/type, to make the dependent application interfaces more generic.
d) Abstracting the data and tables across multiple database installations/engines.
e) Moving your database to the cloud.
f) All of the above.
I took a ‘staycation’ last month, hanging around the house to do some spring cleaning. Part of the cleaning process was cutting through the pile of unread technical magazines and trade rags to see if there was anything interesting before I threw them into the garbage. I probably should have just thrown them all away, as in the half dozen articles I read on the wonderful things database virtualization can do for you, not one offered a consistent definition. In most cases, the answer was f), and they used the term “database virtualization” to mean all of the above options without actually mentioning that database virtualization can have more than one definition. One particularly muddled piece at eWeek last October used all of the definitions interchangeably – within a single article. Databases have been using abstraction for years. Unfortunately the database techniques are often confused with other forms of platform, server, or application virtualization – which run on top of a hypervisor utilizing any of several different techniques (full, emulated, application, para-virtualization, etc.). To further confuse things, other forms of abstraction – such as the object-relational mapping layers within applications that use the database – do not virtualize resources at all. Let’s take a closer look at the options and differentiate between them:
a) This is the form most commonly called “database virtualization”. It’s more helpful to think about it as application virtualization, because the database is an application. Sure, the classic definition of a database is simply a repository of data, but from a practical standpoint databases are managed by an application. SQL Server, Oracle, and MySQL are all applications that manage data.
b) This option can also be a database virtualization model. We often call this clustering, and many DBAs will be confused if you call it virtualization. Note that a) & c) are not mutually exclusive.
c) This is not a database virtualization model, but rather an abstraction model. It is used to decouple specific database functions from the application, as well as enabling more powerful 4GL object-oriented programming rather than dealing directly with 3GL routines and queries. The abstraction is handled within the application layer through a service like Hibernate, rather than through system virtualization software like Xen or VMware.
d) Not really database virtualization, but abstraction. Most DBAs call this ‘partitioning’, and the model has been available for years, with variants from multiple database vendors.
e) The two are unrelated. Chris Hoff summarized the misconception well when he said “Virtualization is not a requirement for cloud computing, but the de-facto atomic unit of the digital infrastructure has become the virtual machine”. Actually, I am paraphrasing from memory, but I think that provides the essence of why people often equate the two.
This is important for two reasons. One, the benefits that can be derived depend heavily on the model you select. Not every benefit is available with every model, so these articles are overly optimistic. Two, the deployment model affects security of the data and the database. What security measures you can deploy, and how you configure them, must be reconsidered in light of the options you select.
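For option c), a short example may help. The sketch below uses SQLAlchemy (a rough Python equivalent of the Hibernate layer mentioned above); the model and connection URL are invented for illustration. The point is that application code is written against the abstraction layer, and only the connection URL decides which database engine sits underneath – abstraction, not virtualization of resources.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Customer(Base):
    """Hypothetical mapped class; the application works with objects,
    not engine-specific SQL."""
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

# Only this URL changes if the underlying engine changes
# (it could be postgresql://..., mysql://..., oracle://..., etc.).
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
with Session() as session:
    session.add(Customer(name="Alice"))
    session.commit()
    print([c.name for c in session.query(Customer).all()])
```

Swapping the URL for a different engine leaves the application code untouched, which is the decoupling option c) describes.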


Friday Summary: April 2, 2010

It’s the new frontier. It’s like the “Wild West” meets the “Barbary Coast”, with hostile Indians and pirates all rolled into one. And as in those places, lawless entrepreneurialism is a major part of the economy. That was the impression I got reading Robert Mullins’ The biggest cloud on the planet is owned by … the crooks. He examines the resources under the control of Conficker-based worms and compares them to the legitimate cloud providers. I liked his post, as considering botnets in terms of their position as cloud computing leaders (by resources under management) is a startling concept. Realizing that botnets offer 18 times the computational power of Google and over 100 times that of Amazon Web Services is astounding. It’s fascinating to see how the shady and downright criminal have embraced technology – and in many cases drive innovation. I would also be interested in comparing total revenue and profitability between, say, AWS and a botnet. We can’t, naturally, as we don’t really know the amount of revenue spam and bank fraud yield. Plus the business models are different and botnets provide abnormally low overhead – but I am willing to bet criminals are much more efficient than Amazon or Google. It’s fascinating to see how effectively the shady and downright criminal have embraced the model. I feel like I am watching a Battlestar Galactica rerun, where the humans can’t use networked computers, as the Cylons hack into them as fast as they find them. And the sheer numbers of hacked systems support that image. I thought it was apropos that Andy the IT Guy asked Should small businesses quit using online banking, which is very relevant. Unfortunately the answer is yes. It’s just not safe for most merchants who do not – and who do not want to – have a deep understanding of computer security. Nobody really wants to go back to the old model where they drive to the bank once or twice a week and wait in line for half an hour, just so the new teller can totally screw up your deposit. Nor do they want to buy dedicated computers just to do online banking, but that may be what it comes down to, as Internet banking is just not safe for novices. Yet we keep pushing onward with more and more Internet services, and we are encouraged by so many businesses to do more of our business online (saving their processing costs). Don’t believe me? Go to your bank, and they will ask you to please use their online systems. Fun times. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
On that note, Rich on Protecting your online banking. Living with Windows: security. Rich wrote this up for Macworld. Insiders Not the Real Database Threat. RSA Video: Enterprise Database Security.

Favorite Securosis Posts
Rich: Help a Reader: PCI Edition. Real world problem from a reader caught between a rock and an assessor. Mike Rothman: How Much Is Your Organization Telling Google? Yes, they are the 21st century Borg. But it’s always interesting to see how much the Google is really seeing. David Mortman: FireStarter: Nasty or Not, Jericho is Irrelevant. Adrian Lane: Hit the Snooze Button on Lancope’s Data Loss Alarms. Freakin’ Unicorns.

Other Securosis Posts
Endpoint Security Fundamentals: Introduction. Database Security Fundamentals: Configuration. Incite 3/31/2010: Attitude Is Everything. Security Innovation Redux: Missing the Forest for the Trees. Hello World. Meet Pwn2Own.

Favorite Outside Posts
Rich: Is Compliance Stifling Security Innovation? Alex over at Verizon manages to tie metrics to security innovation. Why am I not surprised? 🙂 Mike Rothman: Is it time for small businesses to quit using online banking? Andy the IT Guy spews some heresy here. But there is definitely logic to at least asking the question. David Mortman: Side-Channel Leaks in Web Applications. Adrian: A nice salute to April 1 from Amrit: Chinese Government to Ban All US Technology.

Project Quant Posts
Project Quant: Database Security – Patch.

Research Reports and Presentations
The short version of the RSA Video presentation on Enterprise Database Security. Report: Database Assessment.

Top News and Posts
Great interview with security researcher Charlie Miller. Especially the last couple of paragraphs. Man fleeing police runs into prison yard. Google’s own glitch causes blockage in China. Senate Passes Cybersecurity Act. Not a law yet. Nick Selby on the recent NJ privacy in the workplace lawsuit. Microsoft runs fuzzing botnet, finds 1800 bugs. Key Logger Attacks on the Rise. Content Spoofing – Not Just an April Fool’s Day Attack. JC Penney and Wet Seal named as breached firms. Nice discussion: Mozilla Plans Fix for CSS History Hack. Original Mozilla announcement here. Microsoft SDL version 5.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Martin McKeay, for offering practical advice in response to Help a Reader: PCI Edition. Unluckily, there isn’t a third party you can appeal to, at least as far as I know. My suggestion would be to get both your Approved Scanning Vendor and your hosting provider on the same phone call and have the ASV explain in detail to the hosting provider the specifics of vulnerabilities that have been found on the host. Your hosting provider may be scanning your site with a different ASV or not at all, and receiving different information than you’re seeing. Or it may be that they’re in compliance and that your ASV is generating false positives in your report. Either way, it’s going to be far easier for them to communicate directly at a technical level than for you to try and act as an intermediary between the two. I’d also politely point out to your host that their lack of communication is costing you money, and if it continues you may have to take your business elsewhere. If they’re not willing to support you, why should you continue to pay them money? Explore your contract; you may have the option of subtracting the


Database Security Fundamentals: Configuration

It’s tough for me to write a universal quick configuration management guide for databases, because the steps you take will be based upon the size, number, and complexity of the databases you manage. Every DBA works in a slightly different environment, and configuration settings get pretty specific. Further, when I got started in this industry, the cost of the database server and the cost of the database software were more than a DBA’s yearly salary. It was fairly common to see one database admin for one database server. By the time the tech bubble burst in 2001, it was common to see one database administrator tending to 15-20 databases. Now that number may approach 100, and it’s not just a single database type, but several. The greater complexity makes it harder to detect and remedy simple mistakes that lead to database compromises. That said, re-configuring a database is a straightforward task. Database administrators know it involves little more than changing some parameter value in a file or management UI and, worst case, re-starting the database. And a majority of the parameters, outside the user settings we have already discussed, will remain static over time. The difficulties are knowing what settings are appropriate for database security, and keeping settings consistent and up-to-date across a large number of databases. Research and ongoing management are what makes this step more challenging. The following is a set of basic steps to establish and maintain database configuration. This is not meant to be a process per se, but just a list of tasks to be performed.

Research: How should your databases be configured for security? We have already discussed many of the major topics with user configuration management and network settings, and patching takes care of another big chunk of the vulnerabilities. But that still leaves a considerable gap. Researching what settings you need to be concerned with, and the proper settings for your databases, will comprise the bulk of your work for this exercise. All database vendors provide recommended configurations and security settings, and it does not take very long to compare your configuration to the standard. There are also some free assessment tools with built-in policies that you can leverage. And your own team may have policies and recommendations. There are also third party researchers who provide detailed information on blogs, as well as CERT & Mitre advisories.

Assess & Configure: Collect the configuration parameters and find out how your databases are configured. Make changes according to your research. Pay particular attention to areas where users can add or alter database functions, such as cataloging databases and nodes in DB2 or UTL_FILE settings in Oracle. Pay attention to the OS-level settings as well: verify that the database is not installed under an IT or domain administration account. Things like shared memory access and read permissions on database data files need to be restricted. Also note that assessment can verify audit settings, to ensure other monitoring and auditing facilities generate the appropriate data streams for other security efforts.

Discard What You Don’t Need: Databases come with tons of stuff you may never need: test databases, advanced features, development environments, web servers, and other features. Remove modules & services you don’t need. Not using replication? Remove those packages. These services may or may not be secure, but their absence assures they are not providing open doors for hackers.

Baseline and Document: Document the approved configuration baseline for your databases. This should be used for reference by all administrators, as well as a guideline for detecting misconfigured systems. The baseline means you do not need to re-research the correct settings every time, and the documentation will help you and your team members remember why certain settings were chosen.

A little more advanced:

Automation: If you work on a team with multiple DBAs, there will be lots of changes you are not aware of. And these changes may be out of spec. If you can, run configuration scans on a regular basis and save the results. It’s a proactive way to ensure configurations do not wander too far out of specification as you maintain your systems. Even if you do not review every scan, if something breaks, you at least have the data needed to detect what changes were made and when, for after-the-fact forensics.

Discovery: It’s a good idea to know what databases are on your network and what data they contain. As databases are being embedded into many applications, they surreptitiously find their way onto your network. If hacked, they provide launch points for other attacks, and leverage whatever credentials the database was installed with – which you hope was not ‘root’. Data discovery is a little more difficult to do, and comes with separation of duties issues (DBAs should not be looking at data, just database setup), but understanding where sensitive data resides is helpful in setting table, group, and schema permissions.

Just as an aside on the topic of configuration management, I wanted to mention that during my career I have helped design and implement database vulnerability assessment tools. I have written hundreds of policies for database security and operations for most relational database platforms, and several non-relational platforms. I am a big fan of being able to automate configuration data collection and analysis. And frankly, I am a big fan of having someone else write vulnerability assessment policies, because it is difficult and time consuming work. So I admit that I have a bias for using assessment tools for configuration management. I hate to recommend tools for an essentials guide, as I want this series to stick to lightweight stuff you can do in an afternoon, but the reality is that you cannot reasonably research vulnerability and security settings for a database in an afternoon. It takes time and a willingness to learn, and means you need to learn about
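As a sketch of the automation step, the fragment below compares current parameter values against a documented baseline and reports drift. The v$parameter query, the JSON baseline format, and the connect() helper are illustrative stand-ins; SQL Server, DB2, and others expose equivalent views, and a vendor or third-party assessment tool can do the same job with far less effort.

```python
import json

# Placeholder: replace with your real driver and a read-only account.
def connect():
    raise NotImplementedError("return a DB-API connection")

def current_parameters():
    """Collect parameter name/value pairs; the view name is Oracle-flavored
    and purely illustrative (other platforms have their own catalog views)."""
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute("SELECT name, value FROM v$parameter")
        return dict(cur.fetchall())
    finally:
        conn.close()

def drift_report(baseline_path: str):
    # Baseline is a simple JSON map of expected settings, e.g.
    # {"audit_trail": "DB", "remote_os_authent": "FALSE"}
    with open(baseline_path) as fh:
        baseline = json.load(fh)
    current = current_parameters()
    for name, expected in sorted(baseline.items()):
        actual = current.get(name, "<missing>")
        if str(actual).upper() != str(expected).upper():
            print(f"DRIFT: {name} expected {expected!r}, found {actual!r}")

if __name__ == "__main__":
    drift_report("db_baseline.json")
```

Run it from cron and keep the dated output, and you have both the drift alerts and the after-the-fact forensic record mentioned above.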


Friday Summary: March 19, 2010

Your Facebook account gets compromised. Your browser flags your favorite sports site as a malware distributor. Your Twitter account is hacked through a phishing scam. You get AV pop-ups on your machine, but cannot tell which are real and which are scareware. Your identity gets stolen. You try to repair the damage and make sure it doesn’t happen again, only to get ripped off by the credit agency (you know who I am talking about). Exasperated, you just want to go home, relax, and catch up on March Madness. But it turns out the bracket email from your friend was probably another phishing attempt, and your alma mater suspends a star player while it investigates derogatory public comments – which it eventually discovers were forged. Man, it sucks to be Generation Y. There has been an incredible cacophony over the last couple weeks across the mainstream media about social networks being manipulated for fun, personal satisfaction, and profit. Even the people in my semi-rural area are discussing how it has affected them and their children, so I know it is getting national attention. What I can’t figure out is how their behavior will change – if at all. RSnake discussed a Microsoft paper recently, expanding on its discussion of why training users on the dangers of unsafe browsing often does not make economic sense. Even if it was viable, people don’t want to learn all that stuff, as it makes web browsing more work than fun. So what gives? I believe that our increasing use of and dependency on the Internet, and the corresponding increases in fraud and misuse, require change. But will people feel differently, and will this drive them to actually behave differently? Will the changes be technological, legal, or social? We could see tighter or looser privacy rules on websites, or legal precedents or new laws – we have already seen dramatic shifts in what younger people consider private and are willing to publicize online. The paper asserts that “The wisdom of the crowd discerns that ignoring some threats brings little actual harm …”, which I totally agree with, and describes Twitter phishing and Facebook hacks. Bank accounts being drained and cars being shut down are a whole different level of problem, though. I really don’t have an answer – or even an inkling – of what happens next. I do think the problem has gotten sufficiently mainstream that we will see mainstream impacts and reactions, though. Interesting times! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Adrian on Database Dangers in the Cloud on Dark Reading. Video interview with Rich on endpoint security, agents, and best of breed technologies. Project Quant: Database Security Metrics Project Needs Community Input.

Favorite Securosis Posts
Rich: FireStarter: IP Breach Disclosure, No-Way, No-How. I’m surprised this generated so little debate for a FireStarter. When I explain this verbally to people, it never fails to generate a vigorous response. David Mortman: FireStarter: IP Breach Disclosure, No-Way, No-How. Mike Rothman: Mogull’s Law. If Rich has the stones to name a law after himself, then I’m in. Not sure how proportional the causation is, but clearly users do whatever hurts the least. Adrian Lane: Network Security Fundamentals: Egress Filtering.

Other Securosis Posts
LHF: Quick Wins with DLP – the Conclusion. Incite 3/17/2010: Seeing the Enemy. Database Activity Analysis Survey. LHF: Quick Wins in DLP, Part 2.

Favorite Outside Posts
Rich: Conversations With a Blackhat. The best takeaway from RSnake’s summary of talking with some bad guys is that at least some of what we are doing on the security side is actually working. So much for the “security is failing” meme… David Mortman: Three Steps to a Rational Security Budget. Mike Rothman: Why I’m Skeptical of “Due Diligence” Based Security. I have no idea what Alex is talking about, but he has a picture of Anakin, Obi-Wan and Yoda with the glowing ghosts of John Lennon and George Harrison. So it’s my favorite of the week. Adrian Lane: Walkthrough: Click at Your Own Risk. Analysis of privacy and the manipulation of public impressions through social media. An excellent piece of analysis from … a football statistics site. Long but very informative, and a perspective I don’t think a lot of people appreciate.

Project Quant Posts
Project Quant: Database Security – Configuration Management.

Top News and Posts
What I thought was the biggest news of the week: HD Moore’s post on The Latest Adobe Exploit and Session Upgrading. – AL Penetrating Intranets through Adobe Flex Applications. A study highlights efforts to take down ISPs that allow malicious activity. This is a boon to reputation-based filtering. To be honest, I used to be skeptical of the idea but I’m slowly becoming a convert. –RM Zeus Trojan Now Has Hardware Licensing Scheme. Microsoft, security vendor clash over Virtual PC bug. Hacker Disables Over 100 Cars Remotely. Former employee using someone else’s login. Now where have we heard that before? Emerging Identity Theft Market. The $10 million number seems high to me, but the trends are not surprising. Facebook Password Scam. We have been talking about the Internet subsuming television for years. Google’s Set-top Box is an attempt worth watching closely, because television is all about advertising, which is Google’s strong suit (although they have not been a TV player to date), and this would enable a new kind of advertising. Should be interesting!

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Andy Jaquith, in response to RSA Tomfoolery: APT is the Fastest Way to Identify Fools and Liars. When a comment makes me laugh out loud, it usually gets my vote! I’ve been using the phrase “Advanced Persistent Chinese” lately. It sounds good, it’s more accurate, and it’s funny. What’s not to like? I completely agree that the displays of vendor idiocy around APT are far too widespread. You can’t have a carnival without the barker, apparently. Good seeing you, by the way, Andy – albeit far too briefly.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.