Thursday, December 03, 2009

Friday Summary: December 4, 2009

By Rich

I had one of those weird moments today where I found an unrelated part of my life unexpectedly influenced by my martial arts background.

I was asked to critique a research paper by someone I haven’t worked with before. Without going into details, this particular paper had a fatal flaw.

It opened with a negative position, then attempted to justify the positive. It started defensively, and in the process lent credence to the opposing view, as opposed to strengthening the author’s position. In other words, it started with, “here’s what you say about X, and why I think Y” as opposed to, “here is position Y, and why it is correct and X is wrong”.

In advising the author, I remembered a lesson I learned when I first started teaching martial arts (traditional taekwondo). I was giving a class on unarmed restraint techniques, which adapted some experiences in physical security to martial arts. They’re similar to police restraint techniques, but adjusted for not having a firearm (police techniques involve protecting the firearm so the bad guy can’t grab it while being restrained) or handcuffs. In the class were two of my instructors, helping me learn to teach. I started by saying something like, “I’m no expert”, and one of them walked off right then and there.

At a break he came back and asked if I knew why he had left. He told me to never start a lesson or debate by disqualifying myself as an authority. I essentially told the class they shouldn’t listen to me, because I didn’t know what the frack I was talking about. Self-deprecating humor, applied appropriately, is fine – but never start from a position of weakness. I was trying to be humble, but instead destroyed any reason someone would want to learn from me.

Over time I expanded this lesson to “Never start with a negative when your goal is to prove a positive.” Essentially, that places the opposing view ahead of yours and forces you into a defensive position. If I’m writing research to show the value of DLP, I sure as heck better not start it with all the criticisms against DLP.

It’s kind of like a fight. If you allow the opponent to control the ring and dictate the pace, your odds of winning are much lower. You can never win on defense alone.

One important corollary is that you also shouldn’t expect someone to agree with your position based on your credentials alone. I get seriously annoyed by other analysts/pundits who make pronouncements, yet never back them with evidence. Start from a position of strength (assuming you are the expert), but also lead the reader, with evidence and logic, to reach your conclusions for themselves.

Most black belts are crappy martial artists and teachers… if their techniques suck, find another one. Respect still needs to be earned.

Enough with the preachy stuff…

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Project Quant for Databases:

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from David in response to Quick Thoughts on the Point of Sale Security Fail Lawsuit (there were a TON of good comments in this thread, including some from Anton Chuvakin):

With the Radiant POS Lawsuit one wonders if a Micros POS suit will follow? As a QIRA forensics investigator, I saw a 10 to 1 compromise rate of Micros over Radiant systems. Micros REM had such bad stretch of PCI failures.

–Rich

Wednesday, December 02, 2009

Project Quant: Database Security Planning, Part 2

By Adrian Lane

    In our last post on Project Quant for Database Security Metrics, we started to examine Planning. To finish Planning, we need to address access controls, database monitoring, and data classification strategies. Once again, we are following the pattern of determining requirements, determining how the requirements apply to the business, figuring out how to accomplish the goals, and then documenting intentions. We will list the specific metrics later in this series, but at this stage research time will be the biggest cost.

    AAA

    Access controls and authorization are the most complex database security area we will cover, and, given the fluidity of users and rules, the one most likely to create security issues by drifting from the specification. Databases have three classes of users: administrators, database programmers, and application users – each with very different needs. It is important to plan for additional users and roles as database use cases change, and it is very important to have a plan for revoking permissions quickly without impairing general usage. I hate to say “expect the unexpected”, but with database access control planning it’s particularly useful to provide some flexibility in advance. Access control planning impacts many other database security efforts, especially data classification and privacy policy enforcement.

    1. Define Requirements: What are the access control guidelines? Determine which business functions are being supported, which systems support those functions, who needs access to the system, and what facilities they are allowed to use. For administrative roles, determine what tasks are performed. Identify additional security and compliance requirements.
    2. Define Groups, Roles, & Ownership: Based upon requirements, develop roles and groups to support business functions and enforce security constraints. Determine object and data ownership, and formulate a permissions model for the database, schemas, and tables. Plan how users will obtain permissions, how permissions will be revoked, and how to handle use cases not accounted for in the model. Identify service account usage.
    3. Define Implementation: Database permissions are established both within the database and externally to it. Define which facilities are responsible for policy enforcement and how the policy will be verified. Remember, this is a strategic planning exercise; don’t get too bogged down in the details.
    4. Document: Document requirements. Clarify database use models from administration. Train administrative staff on policy.

    Remember that this is the planning stage, and is either focused on general requirements (policies for administrator accounts), or planning for a specific database. For existing systems, we’ll document their current AAA configurations in later phases.
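
    To make the “Define Groups, Roles, & Ownership” step a bit more concrete, here is a minimal sketch – in Python, purely for illustration, since the post prescribes no particular platform – of the kind of permissions model that planning might produce. The role names, schemas, and the revocation helper are all invented for this example.

        from dataclasses import dataclass, field

        # Hypothetical roles and schemas -- illustrative only, not a recommended model.
        @dataclass
        class Role:
            name: str
            schemas: set                      # schemas the role may touch
            privileges: set                   # e.g. {"SELECT"} or {"SELECT", "INSERT"}
            members: set = field(default_factory=set)

        ROLES = {
            "app_read":  Role("app_read",  {"orders"},            {"SELECT"}),
            "app_write": Role("app_write", {"orders"},            {"SELECT", "INSERT", "UPDATE"}),
            "dba":       Role("dba",       {"orders", "billing"}, {"ALL"}),
        }

        def grant(user: str, role_name: str) -> None:
            ROLES[role_name].members.add(user)

        def revoke_everywhere(user: str) -> None:
            """Planning goal: permissions can be pulled quickly without touching objects."""
            for role in ROLES.values():
                role.members.discard(user)

        if __name__ == "__main__":
            grant("alice", "app_write")
            grant("alice", "dba")             # would be flagged later as a separation-of-duties issue
            revoke_everywhere("alice")
            assert all("alice" not in r.members for r in ROLES.values())

    The code itself is beside the point; what matters for planning is that the model names who gets what, and that revocation is a single, fast operation.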

    Monitoring

    Database monitoring verifies database usage. It provides near-real-time analytics for detecting usage violations, profiling usage, and spotting anomalies. While secure configuration serves as a preventative control, monitoring is a detective control, used to verify the database is being used as intended. Think of it as similar to having black and white lists for database transactions. But to build those lists, you need an idea of what you wish to accomplish, or what activities should never occur. As every database is used differently, you have to define what is appropriate and what isn’t. Identify events you are interested in, then define acceptable behaviors and outcomes. A rough sketch of what such policies might look like follows the steps below.

    1. Define Activities: Investigate business processes. Define critical operations and functions. What activities does the system support, and what subset are you interested in monitoring? Identify security and compliance requirements in relation to data privacy, fraud detection, and system misuse.
    2. Define Violations: Determine which events indicate problems. Consider users, time of day, function, data volume, and other available attributes that can identify suspect transactions. Identify criticality of events and specify desired response. Consider periodic review of general database usage in order to refine policy.
    3. Identify Event Collection: How will you capture events? Determine what event collections are available. Map policies to event collection for misuse detection.
    4. Define Event Notifications: When a policy violation is discovered, how will you react? Specify how event notification will happen and who will be responsible.
    5. Document: Ensure all concerned parties are aware of their responsibilities and coordination points with other groups.
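
    As promised above, here is a rough sketch of what the “define violations” and “map policies to event collection” steps might produce once written down. The event fields, thresholds, and notification targets are hypothetical; a real deployment would use whatever attributes its collection method actually exposes.

        from datetime import datetime

        # Hypothetical monitoring policies: each maps a defined violation to the
        # event attributes that identify it and the response it should trigger.
        POLICIES = [
            {"name": "bulk_export_off_hours",
             "test": lambda e: e["rows"] > 10_000 and not (8 <= e["time"].hour < 18),
             "severity": "high", "notify": "security-oncall"},
            {"name": "failed_admin_login",
             "test": lambda e: e["action"] == "LOGIN_FAILED" and e["user"] in {"sa", "sys"},
             "severity": "medium", "notify": "dba-team"},
        ]

        def evaluate(event: dict) -> list:
            """Return the policies an event violates -- the detective-control step."""
            return [p for p in POLICIES if p["test"](event)]

        if __name__ == "__main__":
            event = {"user": "etl_svc", "action": "SELECT", "rows": 250_000,
                     "time": datetime(2009, 12, 2, 2, 30)}
            for hit in evaluate(event):
                print(f"{hit['severity'].upper()}: {hit['name']} -> notify {hit['notify']}")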

    Classification

    Data classification for databases is a necessary step for many compliance and data privacy regulations. In a practical sense, it often devolves into a giant data labeling or classification project that wastes time and effort. You will need to investigate requirements and best practices, but we recommend that you avoid an overly detailed model that nobody will actually use. Figure out what needs to be secure, but be general and pragmatic in your data security approach; a brief sketch of what a coarse scheme might look like follows the steps below.

    1. Identify Requirements: What is your high level scheme? What is considered sensitive, and how will you define it?
    2. Specify Data Security: What will you do with sensitive information? Formalize intent and security levels for the different data types, and different audiences for the information.
    3. Select Access Method: What is your classification model? Siloed, hierarchical, and labeling are all options.
    4. Map to AAA: How will your access control system implement the data security model? Based upon the security model, map access controls to system capabilities. This step comes after access control review, but iterative adjustments to the plan are common. These models are implemented on top of access controls, but in some cases underlying data features such as labeling support more granular control.
    5. Document: Data classification affects usage, requiring education of users, IT, and application developers.
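
    Here is the brief sketch referenced above: a deliberately coarse classification scheme expressed as a small table of labels, handling requirements, and permitted readers. The labels, tables, and controls are made up; the point is that a handful of labels mapped to concrete controls is usually enough at the planning stage.

        # Hypothetical labels, each tied to the controls it demands and who may read it.
        CLASSES = {
            "public":    {"encrypt_at_rest": False, "monitor": False, "readers": {"anyone"}},
            "internal":  {"encrypt_at_rest": False, "monitor": True,  "readers": {"employee"}},
            "regulated": {"encrypt_at_rest": True,  "monitor": True,  "readers": {"app_read", "dba"}},
        }

        # The planning output: which tables carry which label.
        TABLE_LABELS = {"orders": "internal", "cardholder_data": "regulated", "store_hours": "public"}

        def tables_requiring(control: str) -> list:
            """List tables whose classification demands a given control (e.g. monitoring)."""
            return [t for t, label in TABLE_LABELS.items() if CLASSES[label][control]]

        if __name__ == "__main__":
            print(tables_requiring("monitor"))          # ['orders', 'cardholder_data']
            print(tables_requiring("encrypt_at_rest"))  # ['cardholder_data']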

    Keep in mind that this is all strategic planning. At this stage of analysis you will not be examining specific statements or policies, and there is a tendency during planning to delve into implementation specifics that are simply not helpful yet. Focus on the big picture: how data moves and is used within the organization. This series will delve into the specifics later.

    –Adrian Lane

    Tuesday, December 01, 2009

    Sign Up To Drop Comment Moderation

    By Rich

    We hate that we have to moderate comments, but the spammers are relentless and there’s no way we’ll let those jerks ruin our site.

    I realized I can disable moderation on a per-account basis without having to give you editing or moderation rights. All you have to do is register with the site, and drop us an email with your username at info@securosis.com. We’ll add you to our super secret group, and you can login and skip all that moderation silliness.

    A few of you comment on the blog pretty regularly, and we hate that we have to review everything first and slow the discussion down. Hopefully this will help ease the problem.

    –Rich

    Top Questions Regarding Guardium Acquisition

    By Adrian Lane

    I spent about 8 hours on the phone yesterday discussing the Guardium acquisition with press, analysts, security vendors, and former associates in the Database Activity Monitoring space. The breadth of questions was surprising, even from people who work with these products – enough that I thought we should do a quick recap for those who have questions. First, for those of you looking for a really quick overview of Database Activity Monitoring, I just completed an introductory series for Dark Reading on The ABCs of DAM and What DAM Does. Here are some specific questions I have gotten pertaining to the acquisition, in no particular order:

    1. What does this mean for the remaining DAM vendors? It means lots of good things. It means that a major firm has placed a big bet on Database Activity Monitoring, spotlighting the technology in such a way that a wider set of customers and competitors will be paying attention to it. That means more press coverage. But most importantly it means IBM will now advocate the suitability of DAM for compliance. Additionally, the remaining DAM players will be furiously tuning their marketing materials to show competitive differentiation.
    2. What did IBM want to accomplish, and how will the software group roll this out? And what does this say about IBM’s security strategy? These are great questions that require a more in-depth examination of IBM’s security strategy; I will tackle them in a future post.
    3. Is this justification for DAM as a compliance platform? Yes it is. IBM provides validation in a way that companies like Fortinet and Netezza simply cannot. DAM has never had a single must-have killer application, and may never. But with thousands of Global Services personnel trained on this technology and out educating customers on how it helps with security, operations management, and compliance, I expect a big uptick in acceptance.
    4. How does this fit with existing IBM products? Great, poorly, and both. Philosophically, it’s a great fit. IBM has a handful of auditing technologies for every one of their database platforms, and they have the SIM/Log Management platform from the Consul acquisition, so there are some complementary pieces to DAM. In many ways, DAM can be used as a generic database event collection and analysis engine. It can serve a lot of different purposes, from real-time security analytics to detailed forensic analysis. On a more practical level this is a poor fit. The Guardium product is not on an IBM stack (WebSphere, DB2, Tivoli, etc.). IBM really needs a comprehensive vulnerability assessment product to fill in compliance gaps even more than it needed DAM. This is one of the reasons many felt Application Security Inc. would have been a better fit. And despite what was said at the press launch, Guardium is still viewed as a hardware firm, not a software vendor. I am going to get hate mail on these last two points, but I have spoken with enough customers who share this perspective that IBM has more to worry about than my opinions.
    5. Does the mainframe database security market need a facelift? OK, no one really asked this specific question, but it was behind several different questions on DB2 security. Mainframe database security is old school: access controls (ACF2, RACF, Top Secret), small numbers of administrators with separation of duties, tailored audit trails, and physical isolation. Encryption to secure backup media is fairly common. While the use cases for mainframes continue to grow as companies look to leverage their investments, the security model has changed very little in the last 10 years. Monitoring provides usage verification, near-real-time analysis, and non-database event collection. These all advance the state of mainframe DB security.
    6. Is this an internally-facing deal to serve existing customers, or is there a genuine global security strategy? It’s a little of both. I do not believe what was said in the press call: that this is all about heterogeneous database security. They have it and they will use it, but the focus will be on existing IBM customers. IBM Global Services will absolutely want support for every database environment they can get because their customers have everything, but the rest of IBM will want mainframe support first and foremost. I know firsthand that there were many in IBM pushing for iSeries-AS/400 support, and a smattering who wanted Informix capabilities as well. I imagine for the time being they will continue with the current support matrix, provide deeper and more seamless mainframe monitoring, and then service the squeakiest of the wheels. I am not exactly sure which that will be, but I believe the first efforts will be introspective.
    7. Does this mean that DAM is mature? DAM products have been reasonably mature for a while now. Once the vendors fixed their gawd-awful UIs, had appropriate compliance and security policy bundles, and offered multiple data collection and deployment models, it became a mature product space. Visibility and a must-have use case have been elusive, so DAM has not gained the same kind of traction as DLP, email, and web security.
    8. Who is going to be bought next? Probably the most common question I got and, really, I don’t know. You tell me who the interested buyer is and I can tell you who the best fit would be and why. But as [shameless promotion] product and market analysis is how I make my living [/shameless promotion], I am not sharing that information unless you are serious.

    –Adrian Lane

    Quick Thoughts on the Point of Sale Security Fail Lawsuit

    By Rich

    Let the games begin.

    It seems that Radiant Systems, a point of sale terminal company, and Computer World, the company that sold and maintained the Radiant system, are in a bit of a pickle. Seven restaurants are suing them for producing insecure systems that led to security breaches, which led to fines for the breached companies, chargebacks, card replacement costs, and investigative costs. These are real costs, people, none of that silly “lost business and reputation” garbage.

    The credit card companies forced one of the plaintiffs, a restaurant owner named Bond, to hire a forensic team to investigate the breach, which cost him $19,000. Visa then fined his business $5,000 after the forensic investigators found that the Radiant Aloha system was non-compliant. MasterCard levied a $100,000 fine against his restaurant, but opted to waive the fine due to the circumstances.

    Then the chargebacks started arriving. Bond says the thieves racked up $30,000 on 19 card accounts. He had to pay $20,000 and managed to get the remainder dropped. In total, the breach has cost him about $50,000, and he says his fellow plaintiffs have borne similar costs.

    The breaches seemed to result from two failures – one by Radiant (who makes the system), and one by Computer World (who installed and maintained it).

    1. The Radiant system stored magnetic track data unencrypted, a violation of PCI standards.
    2. Computer World enabled remote access for the system (the control server on premise) using a default username and password.

    While I’ve railed against PCI at times, this is an example of how the system can work. By defining a baseline that can be used in civil cases, it really does force the PoS vendors to improve security. This is peripheral to the intent and function of PCI, but beneficial nonetheless. This case also highlights how these issues can affect smaller businesses. If you read the source article, you can feel the anger of the merchants at the system and costs thrust on them by the card companies. Keep in mind, they are already pissed since they have to pay 2-5% on every transaction so you can get your airline miles, fake diamond bracelets, and cheap gift cards.

    The quote from the vendor is priceless, and if the accusations in the lawsuit are even close to accurate, totally baseless:

    “What we can say is that Radiant takes data security very seriously and that our products are among the most secure in the industry,” Paul Langenbahn, president of Radiant’s hospitality division, told the Atlanta Journal-Constitution. “We believe the allegations against Radiant are without merit, and we intend to vigorously defend ourselves.”

    Maybe they can go join a certain ex-governor from Illinois on the next season of The Celebrity Apprentice, since they are reading from the same playbook.

    There are a few lessons in this situation:

    • The lines have moved, and PCI now affects civil liability and government regulation.
    • PCI compliance, and Internet-based cardholder security, now affect even small merchants, even those without an Internet presence.
    • We have a growing body of direct loss measurements (time to revise my Data Breach Costs model).
    • We are seeing product liability in action… by the courts, not legislation.
    • As with many other breaches, following the most basic security principles could have prevented these.

    I think this last quote sums up the merchant side perfectly:

    “Radiant just basically hung us out to dry,” he says. “It’s quite obvious to me that they’re at fault… When you buy a system for $20,000, you feel like you’re getting a state-of-the-art system. Then three to four months after I bought the system I’m hacked into.”

    –Rich

    Cloud Risk Thoughts: Deciding What, When, and How to Move to the Cloud

    By Rich

    I’ve been working with the Cloud Security Alliance on the next revision of their official Security Guidance document, and we decided to include a short note on risk in the beginning, to help add some context. Although we are deep in the editorial process, I realized this is the sort of thing I should put out for some public comment, as it’s at the beginning of the document and will help frame how it’s read.

    With so many different cloud deployment options – including SaaS vs. PaaS vs. IaaS, public vs. private, internal vs. external, and various hybrid scenarios – no list of security controls can cover all circumstances. As with any security area, organizations should adopt a risk-based approach to moving to the cloud and selecting security options. The following is a simple framework to help evaluate initial cloud risks and inform security decisions.

    This process is not a full risk assessment framework, nor a methodology for determining all your security requirements. It’s a quick mechanism for evaluating your tolerance for moving an asset to the various cloud computing models. There is a full section on risk management in the Guidance, and I’m also working on a data security specific post to mesh with the other cloud data security content I’m developing.

    Identify the asset for the cloud deployment

    At the simplest, assets supported by the cloud fall into two general buckets:

    1. Data
    2. Applications/Functions/Processes

    We are either moving information into the cloud, or transactions/processing (from partial functions, all the way up to full applications).

    With cloud computing our data and applications don’t need to reside in the same location, and we can even shift only parts of functions to the cloud. For example, we can host our application and data in our own data center, while still outsourcing a portion of its functionality to the cloud through a Platform as a Service.

    The first step in evaluating risk for the cloud is to determine exactly what data or function is being considered for the cloud. This should include potential uses of the asset once it moves to the cloud, to account for scope creep. Data and transaction volumes are often higher than expected, and cloud deployments often scale higher than anticipated.

    Evaluate the asset

    The next step is to determine how important the data or function is to the organization. You don’t need to perform a detailed valuation exercise unless your organization has a process for that, but you do need at least a rough assessment of how sensitive an asset is, and how important an application/function/process is.

    For each asset, ask the following questions:

    1. How would we be harmed if the asset became public and widely distributed?
    2. How would we be harmed if an employee of our cloud provider accessed the asset?
    3. How would we be harmed if the process or function was manipulated by an outsider?
    4. How would we be harmed if the process or function failed to provide expected results?
    5. How would we be harmed if the information/data was unexpectedly changed?
    6. How would we be harmed if the asset was unavailable for a period of time?

    Essentially we are assessing confidentiality, integrity, and availability requirements for the asset; and how those are affected if all or part of the asset is handled in the cloud. It’s very similar to assessing a potential outsourcing project, except that with cloud computing we also have a wider array of deployment options including internal models.
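
    As a small illustration, the six questions above can be rolled up into a rough confidentiality/integrity/availability profile. The 0–3 harm scale and the question-to-axis mapping below are my own simplifications for this sketch, not part of the CSA Guidance.

        # Each question maps to the CIA axis it mostly speaks to; answers are 0 (no harm) to 3 (severe).
        QUESTIONS = {
            "public_disclosure":     "confidentiality",
            "provider_access":       "confidentiality",
            "outsider_manipulation": "integrity",
            "wrong_results":         "integrity",
            "unexpected_change":     "integrity",
            "unavailable":           "availability",
        }

        def cia_profile(answers: dict) -> dict:
            """Collapse per-question harm scores into a C/I/A profile (worst case per axis)."""
            profile = {"confidentiality": 0, "integrity": 0, "availability": 0}
            for question, score in answers.items():
                axis = QUESTIONS[question]
                profile[axis] = max(profile[axis], score)
            return profile

        if __name__ == "__main__":
            answers = {"public_disclosure": 3, "provider_access": 2, "outsider_manipulation": 2,
                       "wrong_results": 1, "unexpected_change": 1, "unavailable": 2}
            print(cia_profile(answers))   # {'confidentiality': 3, 'integrity': 2, 'availability': 2}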

    Map the asset to potential cloud deployment models

    Now we should have an understanding of the asset’s importance. Our next step is to determine which deployment models we are comfortable with. Before we start looking at potential providers, we should know whether we can accept the risks implicit in the various deployment models – private, public, community, or hybrid – as well as the internal vs. external options.

    For the asset, determine if you are willing to accept the following options:

    1. Public.
    2. Private, internal/on-premises.
    3. Private, external (including dedicated or shared infrastructure).
    4. Community; taking into account the hosting location, service provider, and identification of other community members.
    5. Hybrid. To effectively evaluate a potential hybrid deployment, you must have at least a rough architecture of where components, functions, and data will reside.

    At this stage you should have a good idea of your comfort level for transitioning to the cloud, and which deployment models and locations best fit your security and risk requirements.
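
    Continuing the earlier sketch, a profile like that can be used to shortlist deployment models. The thresholds below are arbitrary examples for one axis only; a real decision weighs all three axes plus regulatory and contractual constraints.

        # Hypothetical shortlisting rule keyed off the confidentiality score (0-3).
        ALL_MODELS = ["public", "private_internal", "private_external", "community", "hybrid"]

        def acceptable_models(confidentiality: int) -> list:
            if confidentiality >= 3:          # severe harm if disclosed
                return ["private_internal"]
            if confidentiality == 2:
                return ["private_internal", "private_external", "hybrid"]
            return ALL_MODELS                 # low sensitivity: anything goes

        if __name__ == "__main__":
            print(acceptable_models(3))       # ['private_internal']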

    Evaluate potential cloud service models

    In this step focus on the degree of control you’ll have at each SPI tier (Software, Platform, or Infrastructure as a Service) to implement any required risk management. If you are evaluating a specific offering, at this point you might switch to a fuller risk assessment.

    Your focus will be on the degree of control you have to implement risk mitigations in the different SPI tiers. If you already have specific requirements (e.g., for handling of PCI regulated data) you can include them in the evaluation.

    Sketch the potential data flow

    If you are evaluating a specific deployment option, map out the data flow between your organization, the cloud service, and any customers/other nodes. While most of these steps have been high-level, before making a final decision it’s absolutely essential to understand whether, and how, data can move in and out of the cloud.

    If you have yet to decide on a particular offering, you’ll want to sketch out the rough data flow for any options on your acceptable list. This is to ensure that as you make final decisions, you’ll be able to identify risk exposure points.

    Document Conclusions

    You should now understand the importance of what you are considering moving to the cloud, your risk tolerance (at least at a high level), and which combinations of deployment and service models are acceptable. You’ll also have a rough idea of potential exposure points for sensitive information and operations.

    These together should give you sufficient context to evaluate any other security controls. For low-value assets you don’t need the same level of security controls and can skip many of the recommendations – such as on-site inspections, discoverability, and complex encryption schemes. A high-value regulated asset might entail audit and data retention requirements. For another high-value asset not subject to regulatory restrictions, you might focus more on technical security controls.

    Not all cloud deployments need every possible security and risk control. Spending a little time up front evaluating your risk tolerance and potential exposures will provide the context you need to pick and choose the best options for your organization and deployment.

    –Rich

    Clientless SSL VPN Redux

    By David J. Meier

    Let’s try this again. Obviously I didn’t do a very good job of defining what ‘clientless’ means, creating some confusion. In part, this is because there’s a lot of documentation that confuses ‘thin client’ with ‘clientless’. Cisco actually has a good set of definitions, but in case you don’t want to click through I’ll just reiterate them (with a little added detail):

    • Clientless: All traffic goes through a standard browser SSL session – essentially, a simple proxy for web browsing. A remote client needs only an SSL-enabled web browser to access HTTP or HTTPS web servers on the corporate LAN (or the outside Internet, which is part of the problem we’re talking about).
    • Thin Client: Users must download a small Java applet for secure access to TCP applications that use static port numbers. UDP is not supported. The client can add security features, and allows tunneling of non-web traffic, such as allowing Outlook to connect to an Exchange server. [Other vendors also use ActiveX.]
    • Client: The SSL VPN downloads a small client to the remote workstation and allows full, secure access to all the resources on the internal corporate network. It’s a VPN that tunnels all traffic over SSL, as opposed to IPSec or older alternatives.

    OK, so these definitions are a bit Cisco specific, but they do a good job. By “clientless” we mean no Java or ActiveX is in play here. This is key, because both the thin and full client models are immune to the flaw described in the US-CERT VU. The vulnerability applies only when using a real, completely clientless SSL VPN through the browser.

    Speaking of the CERT VU, I think everyone can agree that it was poorly written. There are vendors in there who have never provided any sort of clientless SSL VPN (i.e., glorified proxy) functionality, so it’s better not to use that list even though most are marked as “Unknown”.

    At this point, if you’ve identified a true clientless SSL VPN in your environment and are wondering how to mitigate the threat as much as possible, the best thing you can do is make sure the device only allows access to specified networks and domains. The more access end users have to external sites, the wider the window of opportunity for exploitation. That said, it is still generally a bad idea to use clientless VPNs on public networks, since they provide a lower barrier against attacks than a (thin or full) VPN client can, especially in light of all the threats to DNS in such an environment. It’s not hard to mess with a user’s DNS on an open (or hostile) network, or perform other man-in-the-middle attacks.
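
    As a rough illustration of that mitigation, the logic amounts to a destination allowlist that the rewriting proxy consults before fetching anything on a user’s behalf. The domain names below are invented, and a real device enforces this in its own configuration rather than in code you write; this is just the shape of the control.

        from urllib.parse import urlparse

        # Hypothetical allowlist of internal destinations the clientless VPN may proxy.
        ALLOWED_SUFFIXES = (".intranet.example.com", ".mail.example.com")

        def is_permitted(url: str) -> bool:
            host = urlparse(url).hostname or ""
            return host.endswith(ALLOWED_SUFFIXES)

        if __name__ == "__main__":
            print(is_permitted("https://wiki.intranet.example.com/page"))  # True
            print(is_permitted("http://attacker.example.net/steal"))       # False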

    Clientless SSL VPNs are ultimately very fancy proxies, and should be deployed carefully, in tightly controlled environments. In situations where full control or public access is required, there are far more secure solutions, including client-based SSL VPNs (OpenVPN, etc.) and IPsec options.

    –David J. Meier

    Monday, November 30, 2009

    Serious Flaw in Clientless SSL VPNs

    By David J. Meier

    Good job! You paid tens of thousands of dollars for that shiny new name-brand VPN, and then decided to deploy its web VPN functionality because, well, it was just easier than deploying software clients.

    An underpinning of common web security that dates back to Netscape Navigator 2.0 is the “same origin” policy for JavaScript. Your clientless SSL VPN intentionally breaks this, and that’s considered a feature.

    What does this mean for you? If your implementation allows dynamic URL rewriting (i.e., end users can put in any URL and have the web VPN fetch it) it’s GAME OVER, since every website a user views through that service appears to come from the same domain – your trusted VPN server. This is worst-case, but there are many other scenarios where an attacker could set up shop to exploit the session, especially if the end user is on a public network where DNS is compromised. There are a bunch of ways to exploit this, especially in multi-step attacks when the bad guy can get on the internal network (easy enough with malware). Don’t be surprised if this shows up in BeEF (a comprehensive tool for exploiting browser vulnerabilities) soon.
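
    To see why the rewriting trick collapses the same origin policy, here is a small sketch with invented hostnames: two completely unrelated sites, once rewritten through the VPN, share a single browser origin, so script from one can reach content from the other.

        from urllib.parse import urlparse, quote

        def rewrite(url: str, vpn_host: str = "sslvpn.example.com") -> str:
            """Roughly what a clientless SSL VPN does: serve every destination from its own host."""
            return f"https://{vpn_host}/proxy?dest={quote(url, safe='')}"

        def origin(url: str) -> tuple:
            p = urlparse(url)
            return (p.scheme, p.hostname, p.port or 443)

        if __name__ == "__main__":
            bank = rewrite("https://bank.example.org/login")
            evil = rewrite("http://attacker.example.net/payload.js")
            # Different real sites, identical origin as far as the browser is concerned:
            print(origin(bank) == origin(evil))   # True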

    Friends don’t let friends connect clientless – fix it the right way. Read the US-CERT vulnerability note for more detailed information. You can mitigate many of the potential problems by only authorizing the SSL VPN to manage traffic for trusted domains, and avoiding tunneling to random destinations. If it’s a full SSL VPN product with a re-browsing feature, turn that capability off!

    Oh, not to add to the confusion, but Sun’s JRE was also recently found vulnerable to same origin policy violations.

    –David J. Meier

    Christmas Wish

    By Adrian Lane

    When there is good news in holiday retail, we usually hear about it. In this economic climate, it’s headline news. When there is bad news, we don’t hear much. The news from PayPal, according to PC Magazine’s article on Record Breaking Black Friday, was that total transactions were way up – in some cases by 20%. What they are not disclosing is the total dollar volume. In fact, most of the quotes I saw from individual retailers are along the lines of “We did well”, but we don’t know how low their expectations were, and I have yet to see hard sales numbers. Which is annoying, because they have the data, so I typically assume the worst.

    As I was reading the reports I started to wonder what the fraud rates were this year. I am willing to bet the fraud curve would see higher growth than total online sales. If we see a 10-20% uptick in online transactions, did we see a 20-30% increase in fraud? If mobile transactions – the new greenfield for attackers – are up 140%, did we see exploitation of this new medium? It dawned on me that, with all of this commerce tracked and analyzed so closely, most fraud data should be available immediately, and fraud rates should be confirmed within a week or two. If retailers share holiday sales numbers with analysts, why not the fraud data?

    I know most credit card processing houses and companies like First Data have reasonably sophisticated fraud detection tools, and I am told that PayPal and eBay have incredibly advanced analysis capabilities. I would love to see even a generic breakdown of ecommerce fraud rates, credit card fraud rates, and fraud by location. I don’t need specifics, but trends would be nice – something like the percentage they were certain was fraud, what percentage was suspect, and what sort of after-the-fact complaints are coming in. It’s a big part of the payment processors’ business, so I know they are watching closely and tracking the activity. Come on, all I want for Christmas is a little forensics! It’s the season of sharing. I know they have the data, but I guess I should not hold my breath in anticipation.

    –Adrian Lane

    Coming Soon: Bit.ly Adding Real Time Security Scanning for All Links

    By Rich

    Like many of you, for a long time I really couldn’t see the use of those URL shortener service thingies. Sure, when I was designing sites I tried to avoid long, ugly URLs, but I never saw slapping some random characters after a common base URL as being any more useful. I considered my awareness of the existence of these obscure services as an aberration induced by my geek genes, rather than validation of their existence or popularity.

    Then came Twitter, and the world of URLs was never the same.

    Twitter firmly moved URL shorteners from the occasionally useful column into the pretty darn essential one. That magical 140 character limit, combined with the propensity of major sites to use URLs nearly as long as their software user agreements, thrust shorteners in front of millions of new eyeballs.

    One issue, pointed out by more than a few security pundits and rickrolling victims, is that these shorteners completely obscure the underlying URL. It’s trivial for a malicious attacker to hide a link and redirect a user to any sort of malicious site. It didn’t take long for phishers and drive-by malware attacks to take advantage of the growing popularity of these obfuscation services.

    Some of the more popular Twitter clients, like Tweetie, added optional URL previews to show users the full link before clicking through to the site. In part, this was made possible by shorteners like bit.ly exposing previews through their APIs. A nice feature, but it’s not one that most users enable, and it isn’t available in most web interfaces or even all standalone Twitter clients.

    Bit.ly announced today that they are taking things one major step further and will soon be scanning all links, in real time, using multiple security services. Bit.ly will be using a collection of databases and scanning services to check both new and existing links as users access them. Websense’s cloud-based scanner is one of the services (the one that pre-briefed me), and bit.ly will use at least one other commercial service as well as some free/open databases.

    Update: according to the bit.ly blog, VeriSign and Sophos are the other scanning/database engines.

    In the case of Websense, bit.ly will tie directly into their content scanning service to check links in real time as they are added to the bit.ly database. Websense uses a mix of real time scans (for things like malware and certain phishing techniques) and their database of known bad sites. The system won’t rely only on the database of previously-detected bad sites, but will also check them at access time.

    If a link is suspected of being malicious, Websense marks it and bit.ly will redirect users to a warning page instead of directly to the site. Users can still click through, and I’m sure plenty will, but at least those of us with a little common sense are less likely to be exploited.

    Bit.ly won’t only be scanning new links added to the database, but will be checking existing links in case they’ve become compromised. This also reduces the chances of the bad guys gaming the system by adding a clean version of their site for an initial scan, then sneaking in malware for future visits.
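
    Pieced together from the description above, the flow looks roughly like the sketch below. The scan_url() function is a stand-in – the actual Websense and bit.ly integration is not public in this form – but the shape (scan on add, re-check on click, interpose a warning page) is the part that matters.

        # Stand-in for the vendor databases and real-time scanners described above.
        KNOWN_BAD = {"http://malware.example.net/dl.exe"}

        def scan_url(url: str) -> bool:
            """Placeholder scanner: True if the destination looks malicious."""
            return url in KNOWN_BAD

        def resolve(short_code: str, links: dict) -> str:
            target = links[short_code]
            if scan_url(target):                  # re-checked at click time, not just at creation
                return f"/warning?dest={target}"  # interstitial page; users can still click through
            return target

        if __name__ == "__main__":
            links = {"abc123": "http://malware.example.net/dl.exe",
                     "xyz789": "https://securosis.com/blog"}
            print(resolve("abc123", links))   # warning interstitial
            print(resolve("xyz789", links))   # direct redirect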

    Checking existing links for later compromise, rather than only scanning new links as they are added, also makes this solution a lot better than the anti-phishing built into browsers and some search engines, since those rely only on databases of previously-discovered known bad sites.

    It’s also a two-way system, and although Websense is being paid for the scanning, they gain the additional benefit of leveraging the results once millions of new (and old) links start flowing through their service. Every bad website Websense finds when a user submits a link to bit.ly is added to the database used by all their other products.

    Finally, there’s nothing that says we’re only allowed to use bit.ly for Twitter. The entire Internet now gains a real-time security scanning service… for free. Have a questionable link? Shorten it through bit.ly and it’s scanned by Websense and at least one other commercial service, as well as all the free/open/cheap databases bit.ly uses (sorry, I don’t know what they are).

    This isn’t to say that any of the individual scans, or all of them together, can identify every malicious link they encounter, but this is a significant advance in web services security. It’s a perfect example of cloud computing enhancing security, rather than creating new risks. Links sent through bit.ly will now be safer than the original links viewed directly.

    This isn’t live yet, but should be by the end of the year.

    –Rich

    Sunday, November 29, 2009

    Guardium Acquired by IBM

    By Adrian Lane

    Tel Aviv newspaper TheMarker reports that IBM will complete its acquisition of database activity monitoring company Guardium Monday, November 30th. While it is early, and I have yet to confirm the number with anyone at IBM or Guardium, the sale price is being listed at $225 million. This is by far the largest acquisition in the DAM space to date! I had estimated Guardium’s revenue for 2008 at $35-38M, and $38-40M for 2009. If the $225M acquisition price is accurate, at a standard 5x multiple, it would suggest that they were closer to $45M. But my guess is, with an impressive customer list like Citigroup and BofA, the bookings multiple is a little higher than standard.

    Rumors have been circulating for over a year that large firms have approached Guardium and Imperva about being acquired. These two firms are the unquestioned leaders in database activity monitoring, and for larger technology firms looking to fill gaps in their data security portfolio, these discussions made sense. IBM has been interested in DAM for many years, with multiple divisions playing footsie with different DAM vendors, but most didn’t fit IBM’s business. Guardium is one of the only firms still standing with a mainframe monitoring solution, which is a major prerequisite for much of IBM’s customer base. From the IBM perspective, the functionality makes sense and fits well into some of their existing security products. From an architectural standpoint, integration (as opposed to just sharing data and events) will be a challenge. I do not know which section of IBM will own this product or how it will be sold, but those are certainly questions I will ask when I get the chance.

    Last year around this time I predicted, based upon the harsh economic climate, that several vendors in this space would be acquired or out of business by now. Tizor was sold for $3.1 million, and as predicted the remnants of IPLocks disappeared. From the rumors I thought Guardium would be next and it was. I was dead wrong, though, in that many security vendors – such as in the SIEM space – were seeing revenue growth despite the miserable economic climate. The impressive $225M figure really surprised me. I had estimated the DAM market at $70-80 million last year, the wide range resulting from the many smaller firms with unknown revenue. For 2009, I estimate revenue has climbed into the $85M range, and that’s with fewer players overall.

    Where does that leave us? With Guardium & Tizor now sold to IBM & Netezza respectively, and the list of viable competitors having thinned out, I think that Imperva, Sentrigo, AppSec, and Secerno just became a little more valuable. I hate to call it validation, but this is the first time we have seen a big dollar buy. There remain a lot of firms like EMC, McAfee, Oracle, Symantec, and others who would really benefit from gaining DAM technology, so I expect additional acquisitions in the next 6 months. I spoke with some security product vendors who are building their own DAM variants in house, with anticipated launches this coming year. Still others, like Fortinet, launched a DAM product based upon a combination of in-house development and licensed code. Rich and I still consider DAM more a collection of markets and tools than a single market, but regardless, IBM is betting on the value DAM can provide their customers.

    I must add a personal note regarding this sale, having competed against the Guardium product and team head to head for four years. In 2004, I thought they had a terrible product. I used to tell them as much, which made me a very popular guy! I also remember a particular ISSA meeting where the Guardium presenter was ridiculed mercilessly by the audience for what was perceived as a failed implementation (honestly, I was not one of the hecklers!), but it showed that at that time security professionals did not believe Guardium’s proxy model would work. But Guardium is the only vendor to have truly focused on their monitoring product and offered significant improvement quarter over quarter, year over year. By 2006 they were consistently beating their competition in head to head evaluations of database activity monitoring. While they started with a product that was barely good enough, I have to applaud their staff for being responsive to market trends, for consistently addressing customer complaints, and for systematically outstripping most of their competition in performance and out-of-the-box functionality. I still think the product is hard to deploy and the appliance-based model has scalability and large deployment manageability issues, but hey, no one’s perfect. They have stayed focused better than anyone else in this space, and most importantly, have the most tenacious and omnipresent sales force I have ever seen in a small company. This is a personal ‘Congratulations!’ to the Guardium team on a job well done! You guys deserve it.

    –Adrian Lane

    Wednesday, November 25, 2009

    We Give Thanks

    By Adrian Lane

    I admit it’s not even 2:00 in the afternoon and my mind has already gone on vacation. Apple pies are in the oven, and pumpkin pies are queued up and waiting to go in.

    We decided to forgo the Friday summary this week because we are pretty sure no one would read it even if we wrote one, so we decided on a pre-Thanksgiving “What are we thankful for in security?” post instead.

    • Rich: “I’m thankful for good, old-fashioned human behavior; especially its propensity to never change. Without it, I’d have to find a real job.”
    • Adrian: “I am thankful most attackers exploit well known defects to penetrate defenses … they are so much harder to detect when they are clever. I am thankful for Mordac, Preventer of Information Services, who has created a face for our industry.”
    • Mortman: “I’m thankful for people who think our capabilities are far better than they actually are, and as a result don’t do certain things under the assumption that they’d get caught. Without them, I’d have to work much harder.”
    • Chris: “I am thankful that I can get away with spending so little attention on personal security as a Mac user. I am pretty paranoid, but if I’d spent the same attention on securing Windows systems over the past 10 years, I would have been compromised many times. I’m thankful national breach disclosure laws are on the table.”

    Have a wonderful Thanksgiving holiday! We’ll be back Monday.

    –Adrian Lane

    Project Quant: Database Security Planning

    By Adrian Lane

    This is the third post in our series on Project Quant for Database Security (see Part 1 & Part 2). The first step in our metrics process framework is to gather requirements and plan out your security program. Just as with any development project, your motivation and goals should be documented up front, and later used to gauge the success of your effort. Like most IT projects, gathering requirements is a large part of the work.

    I want to clarify a couple of points based on comments we have received to date, before I delve into planning. As Rich pointed out at the beginning of the previous post, database security is an incredibly broad subject, composed of several specific elements. The original Project Quant for Patch Management focused on the nuances of a single IT task, whereas this database security project includes a minimum of four separate efforts. We originally planned to create a separate process for each effort: configuration management, auditing & monitoring, access control, and data protection. Heck, we even considered breaking down configuration management into smaller subtasks. When we dug in one afternoon to start identifying specific actions, we realized there was both a lot of overlap between our initial processes and a number of important functions they missed. Instead, we came up with the generalized process framework we introduced in Part 2, with a series of sub-processes.

    We know this won’t exactly match everything you do, but as with Project Quant for Patch Management, we are proposing a generic framework that encompasses most possible activities, from which you can pick and choose to meet your own needs.

    In Quant for Patch Management, we also found that a handful of the metrics accounted for the bulk of the costs; some 30% did not have a material impact on overall cost. Based on our initial research the same is true for database security, so we want to provide a lot more breadth in this series and focus on principal metrics, forgoing the level of detail we used in PQPM. We will mention these extra tasks in each phase, but leave it up to readers to include any additional cost metrics which are useful in their own analysis.

    For the planning stage, we include Configuration, AAA, Monitoring, and Classification. Starting with Configuration Standards:

    1. Identify Requirements: Requirements include everything from adopting database security best practices to PCI compliance. They may originate from external or internal sources. Requirements, especially for industry and regulatory compliance, are generic and require some interpretation. Directives such as “implement separation of duties” or “secure the database from SQL injection” are common. In other cases specific security advisories from CERT or patches from vendors are less ambiguous, but still require analysis to determine suitability. Identify sources of information and determine which requirements apply to your situation, including vendor-provided security configuration guides, NIST, and the Center for Internet Security.
    2. Develop Standards: Starting from security or compliance requirements, which portions are relevant to you? This is where you specify standards needed to satisfy requirements. Select settings, controls, and standards as necessary, pulling from the sources and matching your requirements.
    3. Choose Implementation: Most database security functions can be accomplished in more than one way. For example, “capture failed logins” can be satisfied through external monitoring or internal auditing. Satisfying a requirement on Oracle may be accomplished differently than on SQL Server. Don’t get bogged down in specifics, but select a strategy that meets your standard and fits your operational model (a brief sketch of capturing such a standard follows this list).
    4. Document: Record your findings and your decisions. If you are going through this process, odds are there are other people involved who will need to understand and adhere to the standard.
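
    Here is the sketch referenced in step 3: one way to capture the documented standard so it can be checked later. The setting names are generic placeholders rather than actual Oracle or SQL Server parameters.

        # Each requirement names the (placeholder) setting that satisfies it and the expected value.
        STANDARD = [
            {"requirement": "capture failed logins", "setting": "audit_failed_logins",     "expected": True},
            {"requirement": "no default accounts",   "setting": "default_accounts_locked", "expected": True},
            {"requirement": "listener not public",   "setting": "listen_address",          "expected": "10.0.0.5"},
        ]

        def check(config: dict) -> list:
            """Return the requirements the observed configuration fails to meet."""
            return [item["requirement"] for item in STANDARD
                    if config.get(item["setting"]) != item["expected"]]

        if __name__ == "__main__":
            observed = {"audit_failed_logins": False, "default_accounts_locked": True,
                        "listen_address": "0.0.0.0"}
            print(check(observed))   # ['capture failed logins', 'listener not public']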

    Many of you are probably saying to yourselves “Holy @&!^@, just planning is a huge effort! Where do I begin?” Identifying requirements for database security or PCI or whatever is lengthy and complex, and it’s not clear where to find this information. And I am being a hypocrite here, doing exactly what I have said you should not do and criticized others for: dropping a big, hairy task in your lap without pragmatic advice. While our focus in this project is identifying and quantifying costs to secure databases, we can’t totally ignore what it takes to do the work, and we need to provide some specific advice along the way. I will provide much more detail later in this series with use cases, but for now I can provide a couple of pointers to steer you in the right direction.

    Database configuration affects security as well as database function. For planning purposes you will be considering installation or removal of functions, network communications, platform versions, structures, use of underlying hardware or OS resources, physical location, and reliance on external programs/functions. All of these database functions are impacted by access control because appropriate use is determined by ownership and access, but we want to start with the basic capabilities and refine from there.

    Start your effort by locating sources of information and standards bodies. What are others doing to meet security requirements? The database vendors are a good place to start, as they provide recommended setup and configuration guidance, and list recent security notifications. Leverage security and operations personnel within your company to highlight security issues. Look to local DBA groups for advice on how they set up databases securely. As far as compliance goes, you can wade through the law doing your best to understand it, but if you have co-workers who specialize in audit and compliance, ask for assistance. If your company has security guidelines in place you are lucky, so use them to help scope the set of tasks. These are the steps many companies must perform, but research and discovery is a very large part of the process, and one typically overlooked when costing.

    I am going to keep this post short to encourage feedback on the general approach first. Rather than inundate you with details, I will cover the remaining three preparation steps in the next post.

    –Adrian Lane

    Tuesday, November 24, 2009

    M86 Acquires Finjan

    By Adrian Lane

    Given how much PR email I get on a daily basis – which does help keep me up to date on what’s happening in the market segments I cover – I seldom miss newsworthy security events. On occasion I totally miss something of interest, like the M86 acquisition of Finjan … three freakin’ weeks ago! For those of you interested in email and web security, big firms don’t offer a lot of interesting tidbits to write about, which makes the smaller firms more fun to watch. In a mature market segment like email and web security, small security businesses need to innovate with technology and sales. To compete with established players like Google and Symantec, where “follow the leader” is a bad business strategy, you need to employ creative thinking in order to survive. This acquisition makes me think M86 has a slightly different vision than their competitors.

    The Finjan product is an interesting mix of capabilities for web security. Primarily they sold appliances, sitting in the enterprise, acting as gateway servers for content security. Enterprise endpoints are configured to go through the gateway for screening. The product is focused on outbound content, with URL, anti-spyware, and basic ‘DLP’ content screening (i.e., regular expression checks). The interesting aspect is the introduction, not too long ago, of a proxy model that sends remote users through a virtual gateway (in the cloud, of course) which screens and then routes requests. In essence they extend a virtual perimeter around the endpoint. This is sensible, as most firms will want to secure the endpoint and enforce usage policies regardless of whether the user is at home, on the road, or in the office. Their ‘Vital Cloud’ gives users a pathway to a hybrid appliance/SaaS model, so they can leverage existing hardware while gaining access to additional features their hardware does not support. This is not moving your data to the cloud, but instead offloading the service, which matters if your company worries about the security of remote data storage. The remote client and SaaS feature, if I understand the technology correctly, is nothing more than a VPN connection to a virtual server with the client policies. Simple, but it should be effective.

    You have probably noticed that the M86 team has been aggressive with acquisitions, working to create a complete portfolio of features for web content. The merger between 8e6 and Marshal gave them the web and email security pieces needed to compete on a very basic level; those two features are the minimum requirements for entry. But the Avinti acquisition seemed out of place. Rather than a cloud or SaaS play like their competition, they bought a type of behavior analysis tool. It was both a powerful and flexible approach to detecting malware in what I was calling a virtual Habitrail, but certainly not a novice tool. It required some skill to use, and was not something to put into the hands of your typical 8e6/Marshal customer. What’s more, neither the deployment model nor the functions quite fit market trends.

    But in light of the Finjan acquisition (and I am guessing here), it looks as if M86 is trying to carve a niche as a managed service platform. For many SMBs, content and email security is a problem they want to pay to have solved. It’s not just that they don’t want to worry about which box is the right one, but they cannot afford to hire specialists to understand threats, create policies, manage gateways, perform content analysis, create blacklists, detect malware, and all the rest. Managed service providers care less about the deployment, and more about leverage of effort. The merger of these products and deployment models would appeal to companies like Perot / Fishnet / Solutionary / SecureWorks, and so on. They would be able to deal with the complexities of Avinti and specifics of how to set up DLP. Being able to drop in an appliance and couple it with a virtual server in your data center for both monitoring and policy enforcement would be appropriate. Granted, Finjan gives M86 a hybrid deployment model previously missing (8e6 and Marshal were on-site appliance and software companies, respectively), allowing customers to stave off hardware obsolescence and still accommodate new features and overhead associated with new policies, but I still don’t think that’s where they are headed. They cannot compete head to head on uptime, pricing, SaaS options and scalability with Websense, Cisco and Proofpoint, but they can offer a depth of function that should be potent in the right hands.

    –Adrian Lane

    Monday, November 23, 2009

    Microsoft IE Issues Reported

    By Adrian Lane

    Over the weekend a 0-day exploit was reported in Microsoft Internet Explorer 6 and 7. Both Threatpost and Heise Security posted that the getElementsByTagName() JavaScript method within Microsoft’s HTML viewer has a dangling pointer. This leaves the browser susceptible to code injection, which in the best case crashes the browser, and in the worst case directs you to a malicious site.

    In first tests by heise Security, Internet Explorer crashed when trying to access the HTML page. Security firm Symantec confirms that, while the current zero day exploit is unreliable, more stable exploit code which will present a real threat is expected to appear in the near future. French security firm VUPEN managed to reproduce the security problem in Internet Explorer 6 and 7 on Windows XP SP3, warning that this allows attackers to inject arbitrary code and infect a system with malicious code. Microsoft has not yet commented on the problem.

    The workaround is to disable JavaScript until the patch is available. Yeah, yeah, I know, you have heard this before. And this means half the web pages you visit won’t work and every piece of online meeting software is completely hosed, so you will leave it enabled anyway. It was worth a shot. Be careful until you have patched.

    Another post on the Hackademix site discusses a flaw with the IE 8 XSS filter.

    … it’s way worse than a simple implementation bug. Its root is a flawed design choice: when a potential XSS attack is detected, IE 8 modifies the response (the content of the target page) in order to neuter the malicious code. This is, incidentally, the only significant departure from the NoScript approach, which modifies the request (the data sent by the client) instead, and is therefore immune. … IE 8’s response-changing mechanism can be easily exploited to turn a normally innocuous fragment of the victim page into a XSS injection.

    I will update this post when I have additional information from Microsoft on either issue.

    –Adrian Lane