Clientless SSL VPN Redux

Let’s try this again. Obviously I didn’t do a very good job of defining what ‘clientless’ means, creating some confusion. In part, this is because there’s a lot of documentation that confuses ‘thin client’ with ‘clientless’. Cisco actually has a good set of definitions, but in case you don’t want to click through I’ll reiterate them (with a little added detail):

  • Clientless: All traffic goes through a standard browser SSL session – essentially, a simple proxy for web browsing. A remote client needs only an SSL-enabled web browser to access HTTP or HTTPS web servers on the corporate LAN (or the outside Internet, which is part of the problem we’re talking about).
  • Thin Client: Users must download a small Java applet for secure access to TCP applications that use static port numbers. UDP is not supported. The client can add security features, and allows tunneling of non-web traffic, such as allowing Outlook to connect to an Exchange server. [Other vendors also use ActiveX.]
  • Client: The SSL VPN gateway downloads a small client to the remote workstation, allowing full, secure access to all the resources on the internal corporate network. It’s a VPN that tunnels all traffic over SSL, as opposed to IPsec or older alternatives.

OK, so these definitions are a bit Cisco-specific, but they do a good job. By “clientless” we’re stating that no Java or ActiveX is in play. This is key, because both the thin and full client models are immune to the flaw described in the US-CERT VU. The vulnerability applies only when using a real, completely clientless SSL VPN through the browser.

Speaking of the CERT VU, I think everyone can agree that it was poorly written. There are vendors on that list who have never provided any sort of clientless SSL VPN (i.e., glorified proxy) functionality, so it’s better not to use the list, even though most entries are marked as “Unknown”.

At this point, if you’ve identified a true clientless SSL VPN in your environment and are wondering how to mitigate the threat as much as possible, the best thing you can do is make sure the device only allows access to specified networks and domains. The more access end users have to external sites, the wider the window of opportunity for exploitation.

That being said, it is still generally a bad idea to use clientless VPNs on public networks, since they always provide a lower barrier against attacks than can be provided by a (thin or full) VPN client, especially in light of all the threats to DNS in such an environment. It’s not hard to mess with a user’s DNS on an open (or hostile) network, or to perform other man-in-the-middle attacks.

Clientless SSL VPNs are ultimately very fancy proxies, and should be deployed carefully, in tightly controlled environments. In situations where full control or public access is required, there are far more secure solutions, including client-based SSL VPNs (OpenVPN, etc.) and IPsec options.
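
To make the mitigation concrete, here is a minimal sketch of the allow-list logic described above. This is purely illustrative Python, not any vendor’s configuration syntax, and the domain names are invented:

    # Hypothetical sketch of the allow-list check a clientless SSL VPN
    # portal should apply before proxying a request. Domain names are
    # invented for illustration.
    from urllib.parse import urlparse

    ALLOWED_DOMAINS = {"intranet.example.com", "mail.example.com"}

    def is_request_allowed(url: str) -> bool:
        """Permit only requests whose host is on the explicit allow list."""
        host = (urlparse(url).hostname or "").lower()
        # Exact match or a subdomain of an allowed domain; everything else,
        # including arbitrary Internet sites, is refused.
        return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

    assert is_request_allowed("https://mail.example.com/owa")
    assert not is_request_allowed("https://attacker.example.net/")

The narrower the allow list, the smaller the window of opportunity for the cross-domain attacks the VU describes.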


Cloud Risk Thoughts: Deciding What, When, and How to Move to the Cloud

I’ve been working with the Cloud Security Alliance on the next revision of their official Security Guidance document, and we decided to include a short note on risk in the beginning, to help add some context. Although we are deep in the editorial process, I realized this is the sort of thing I should put out for public comment, as it’s at the beginning of the document and will help frame how it’s read.

With so many different cloud deployment options – including SaaS vs. PaaS vs. IaaS, public vs. private, internal vs. external, and various hybrid scenarios – no list of security controls can cover all circumstances. As with any security area, organizations should adopt a risk-based approach to moving to the cloud and selecting security options. The following is a simple framework to help evaluate initial cloud risks and inform security decisions.

This process is not a full risk assessment framework, nor a methodology for determining all your security requirements. It’s a quick mechanism for evaluating your tolerance for moving an asset to various cloud computing models. There is a full section on risk management in the Guidance, and I’m also working on a data security specific post to mesh with the other cloud data security content I’m developing.

Identify the asset for the cloud deployment

At the simplest, assets supported by the cloud fall into two general buckets:

  • Data
  • Applications/Functions/Processes

We are either moving information into the cloud, or transactions/processing (from partial functions, all the way up to full applications). With cloud computing our data and applications don’t need to reside in the same location, and we can even shift only parts of functions to the cloud. For example, we can host our application and data in our own data center, while still outsourcing a portion of its functionality to the cloud through a Platform as a Service.

The first step in evaluating risk for the cloud is to determine exactly what data or function is being considered for the cloud. This should include potential uses of the asset once it moves to the cloud, to account for scope creep. Data and transaction volumes are often higher than expected, and cloud deployments often scale higher than anticipated.

Evaluate the asset

The next step is to determine how important the data or function is to the organization. You don’t need to perform a detailed valuation exercise unless your organization has a process for that, but you do need at least a rough assessment of how sensitive an asset is, and how important an application/function/process is. For each asset, ask the following questions:

  • How would we be harmed if the asset became public and widely distributed?
  • How would we be harmed if an employee of our cloud provider accessed the asset?
  • How would we be harmed if the process or function was manipulated by an outsider?
  • How would we be harmed if the process or function failed to provide expected results?
  • How would we be harmed if the information/data was unexpectedly changed?
  • How would we be harmed if the asset was unavailable for a period of time?

Essentially we are assessing confidentiality, integrity, and availability requirements for the asset, and how those are affected if all or part of the asset is handled in the cloud. It’s very similar to assessing a potential outsourcing project, except that with cloud computing we also have a wider array of deployment options, including internal models.
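
As a rough illustration of this step, here is a minimal sketch of how you might record the answers. The scenarios come from the list above; the 0–3 rating scale and the example asset are my own assumptions, not part of the Guidance:

    # Hypothetical sketch: rate each harm scenario from the framework on a
    # 0 (none) to 3 (severe) scale to build a rough sensitivity profile.
    # The scale and the example ratings are assumptions.
    HARM_SCENARIOS = [
        "asset becomes public and widely distributed",
        "cloud provider employee accesses the asset",
        "process/function manipulated by an outsider",
        "process/function fails to provide expected results",
        "information/data unexpectedly changed",
        "asset unavailable for a period of time",
    ]

    def harm_profile(ratings):
        """Pair each scenario with its rating and identify the worst case."""
        profile = dict(zip(HARM_SCENARIOS, ratings))
        worst = max(profile, key=profile.get)
        return profile, worst

    # Example: customer order history being considered for a SaaS CRM.
    profile, worst = harm_profile([3, 2, 2, 1, 2, 1])
    print("Highest-harm scenario:", worst)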
Map the asset to potential cloud deployment models

Now we should have an understanding of the asset’s importance. Our next step is to determine which deployment models we are comfortable with. Before we start looking at potential providers, we should know if we can accept the risks implicit in the various deployment models – private, public, community, or hybrid – and internal vs. external options. For the asset, determine if you are willing to accept the following options:

  • Public.
  • Private, internal/on-premises.
  • Private, external (including dedicated or shared infrastructure).
  • Community; taking into account the hosting location, service provider, and identification of other community members.
  • Hybrid. To effectively evaluate a potential hybrid deployment, you must have at least a rough architecture of where components, functions, and data will reside.

At this stage you should have a good idea of your comfort level for transitioning to the cloud, and which deployment models and locations best fit your security and risk requirements. (A minimal sketch of this filtering step appears at the end of this post.)

Evaluate potential cloud service models

In this step focus on the degree of control you’ll have at each SPI tier (Software, Platform, or Infrastructure as a Service) to implement any required risk management. If you are evaluating a specific offering, at this point you might switch to a fuller risk assessment. Your focus will be on the degree of control you have to implement risk mitigations in the different SPI tiers. If you already have specific requirements (e.g., for handling of PCI regulated data) you can include them in the evaluation.

Sketch the potential data flow

If you are evaluating a specific deployment option, map out the data flow between your organization, the cloud service, and any customers/other nodes. While most of these steps have been high-level, before making a final decision it’s absolutely essential to understand whether, and how, data can move in and out of the cloud. If you have yet to decide on a particular offering, you’ll want to sketch out the rough data flow for any options on your acceptable list. This is to ensure that as you make final decisions, you’ll be able to identify risk exposure points.

Document Conclusions

You should now understand the importance of what you are considering moving to the cloud, your risk tolerance (at least at a high level), and which combinations of deployment and service models are acceptable. You’ll also have a rough idea of potential exposure points for sensitive information and operations. These together should give you sufficient context to evaluate any other security controls. For low-value assets you don’t need the same level of security controls, and can skip many of the recommendations.
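
Here is the filtering sketch mentioned above: a minimal, hypothetical illustration of narrowing deployment models for an asset and then noting the rough data flow so exposure points are explicit. The asset, the acceptability judgments, and the flow endpoints are all invented for the example:

    # Hypothetical sketch: filter deployment models to those acceptable for
    # an asset, then list rough data flows so exposure points are explicit.
    # All judgments and names below are illustrative.
    DEPLOYMENT_MODELS = [
        "public",
        "private, internal/on-premises",
        "private, external",
        "community",
        "hybrid",
    ]

    # Example acceptability decisions for a sensitive-data asset.
    acceptable = {
        "public": False,
        "private, internal/on-premises": True,
        "private, external": True,
        "community": False,
        "hybrid": True,
    }

    candidates = [m for m in DEPLOYMENT_MODELS if acceptable[m]]
    print("Models to evaluate further:", candidates)

    # Rough data flow for one candidate option; each boundary crossing is a
    # potential risk exposure point to review.
    flows = [
        ("our data center", "PaaS provider", "order processing requests"),
        ("PaaS provider", "our data center", "processing results"),
    ]
    for src, dst, data in flows:
        print(f"{data}: {src} -> {dst}")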


Quick Thoughts on the Point of Sale Security Fail Lawsuit

Let the games begin. It seems that Radiant Systems, a point of sale terminal company, and Computer World, the company that sold and maintained the Radiant system, are in a bit of a pickle. Seven restaurants are suing them for producing insecure systems that led to security breaches, which led to fines for the breached companies, chargebacks, card replacement costs, and investigative costs. These are real costs, people, none of that silly “lost business and reputation” garbage.

The credit card companies forced him to hire a forensic team to investigate the breach, which cost him $19,000. Visa then fined his business $5,000 after the forensic investigators found that the Radiant Aloha system was non-compliant. MasterCard levied a $100,000 fine against his restaurant, but opted to waive the fine, due to the circumstances. Then the chargebacks started arriving. Bond says the thieves racked up $30,000 on 19 card accounts. He had to pay $20,000 and managed to get the remainder dropped. In total, the breach has cost him about $50,000, and he says his fellow plaintiffs have borne similar costs.

The breaches seemed to result from two failures – one by Radiant (who makes the system), and one by Computer World (who installed and maintained it):

  • The Radiant system stored magnetic track data unencrypted, a violation of PCI standards.
  • Computer World enabled remote access for the system (the control server on premises) using a default username and password.

While I’ve railed against PCI at times, this is an example of how the system can work. By defining a baseline that can be used in civil cases, it really does force the PoS vendors to improve security. This is peripheral to the intent and function of PCI, but beneficial nonetheless. This case also highlights how these issues can affect smaller businesses. If you read the source article, you can feel the anger of the merchants at the system and costs thrust on them by the card companies. Keep in mind, they are already pissed since they have to pay 2-5% on every transaction so you can get your airline miles, fake diamond bracelets, and cheap gift cards.

The quote from the vendor is priceless, and if the accusations in the lawsuit are even close to accurate, totally baseless: “What we can say is that Radiant takes data security very seriously and that our products are among the most secure in the industry,” Paul Langenbahn, president of Radiant’s hospitality division, told the Atlanta Journal-Constitution. “We believe the allegations against Radiant are without merit, and we intend to vigorously defend ourselves.” Maybe they can go join a certain ex-governor from Illinois on the next season of The Celebrity Apprentice, since they are reading from the same playbook.

There are a few lessons in this situation:

  • The lines have moved, and PCI now affects civil liability and government regulation.
  • PCI compliance, and Internet-based cardholder security, now affect even small merchants, even those without an Internet presence.
  • We have a growing body of direct loss measurements (time to revise my Data Breach Costs model).
  • We are seeing product liability in action… by the courts, not legislation.
  • As with many other breaches, following the most basic security principles could have prevented these.

I think this last quote sums up the merchant side perfectly: “Radiant just basically hung us out to dry,” he says. “It’s quite obvious to me that they’re at fault… . When you buy a system for $20,000, you feel like you’re getting a state-of-the-art system. Then three to four months after I bought the system I’m hacked into.”
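
For context, here is a quick tally of the direct costs the article itemizes. This is a simple sketch; the article does not break down the difference between the itemized figures and the quoted total:

    # Rough tally of the direct breach costs reported above. The article
    # quotes a ~$50,000 total but does not itemize the remainder.
    costs = {
        "forensic investigation": 19_000,
        "Visa fine": 5_000,
        "chargebacks paid": 20_000,  # of $30,000 racked up; rest dropped
    }
    itemized = sum(costs.values())
    print(f"Itemized direct costs: ${itemized:,}")   # $44,000
    print("Reported total: ~$50,000 (plus a waived $100,000 MasterCard fine)")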


Top Questions Regarding Guardium Acquisition

I spent about 8 hours on the phone yesterday discussing the Guardium acquisition with press, analysts, security vendors, and former associates in the Database Activity Monitoring space. The breadth of questions was surprising, even from people who work with these products – enough that I thought we should do a quick recap for those who have questions. First, for those of you looking for a really quick overview of Database Activity Monitoring, I just completed an introductory series for Dark Reading on The ABCs of DAM and What DAM Does. Here are some specific questions I have gotten pertaining to the acquisition, in no particular order:

What does this mean for the remaining DAM vendors? It means lots of good things. It means that a major firm has placed a big bet on Database Activity Monitoring, spotlighting the technology in such a way that a wider set of customers and competitors will be paying attention to it. That means more press coverage. But most importantly, it means IBM will now advocate the suitability of DAM for compliance. Additionally, the remaining DAM players will be furiously tuning their marketing materials to show competitive differentiation.

What did IBM want to accomplish, and how will the software group roll this out? and What does this say about IBM’s security strategy? These are great questions and will require a more in-depth examination of IBM’s security strategy. I will tackle this in a future post.

Is this justification for DAM as a compliance platform? Yes it is. IBM provides validation in a way that companies like Fortinet and Netezza simply cannot. DAM has never had a single “must have” killer application, and may never. But with thousands of Global Services personnel trained on this technology and out educating customers on how it helps with security, operations management, and compliance, I expect a big uptick in acceptance.

How does this fit with existing IBM products? Great, poorly, and both. Philosophically, it’s a great fit. IBM has a handful of auditing technologies for every one of their database platforms, and they have the SIM/Log Management platform from the Consul acquisition, so there are some complementary pieces to DAM. In many ways, DAM can be used as a generic database event collection and analysis engine (see the sketch at the end of this post). It can fit a lot of different purposes, from real-time security analytics to detailed forensic analysis. On a more practical level, this is a poor fit. The Guardium product is not on an IBM stack (WebSphere, DB2, Tivoli, etc.). IBM really needs a comprehensive vulnerability assessment product to fill in compliance gaps even more than it needed DAM. This is one of the reasons many felt Application Security Inc. would have been a better fit. And despite what was said at the press launch, Guardium is still viewed as a hardware firm, not a software vendor. I am going to get hate mail on these last two points, but I have spoken with enough customers who share this perspective that IBM has more to worry about than my opinions.

Does the mainframe database security market need a facelift? OK, no one really asked this specific question, but it was behind several different questions on DB2 security. Mainframe database security is old school: access controls (ACF2, RACF, Top Secret), small numbers of administrators with separation of duties (SOD), use of tailored audit trails, and physical isolation. Encryption to secure backup media is fairly common. While the use cases for mainframes continue to grow as companies look to leverage their investments, the security model has changed very little in the last 10 years. Monitoring provides the capability to verify usage, near-real-time analysis, and non-database event collection. These all advance the state of mainframe DB security.

Is this an internally-facing deal to serve existing customers, or is there a genuine global security strategy? It’s a little of both. I do not believe what was said in the press call: that this is all about heterogeneous database security. They have it and they will use it, but the focus will be on existing IBM customers. IBM Global Services will absolutely want support for every database environment they can get, because their customers have everything, but the rest of IBM will want mainframe support first and foremost. I know firsthand that there were many in IBM pushing for iSeries-AS/400 support, and a smattering who wanted Informix capabilities as well. I imagine for the time being they will continue with the current support matrix, provide deeper and more seamless mainframe monitoring, and then service the squeakiest of the wheels. I am not exactly sure which that will be, but I believe the first efforts are introspective.

Does this mean that DAM is mature? DAM products have been reasonably mature for a while now. Once the vendors fixed their gawd-awful UIs, had appropriate compliance and security policy bundles, and offered multiple data collection and deployment models, it became a mature product space. Visibility and a must-have use case have been elusive, so DAM has not gained the same kind of traction as DLP, email, and web security.

Who is going to be bought next? Probably the most common question I got, and really, I don’t know. You tell me who the interested buyer is and I can tell you who the best fit would be and why. But as [shameless promotion] product and market analysis is how I make my living [/shameless promotion], I am not sharing that information unless you are serious.
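
To make the “generic database event collection and analysis engine” point concrete, here is the sketch promised above: a minimal, hypothetical illustration of the kind of policy matching a DAM product performs. The event fields, accounts, and rule are invented for illustration, not Guardium’s actual model:

    # Hypothetical sketch of DAM-style policy matching: collected database
    # events are checked against rules in near real time. All field names,
    # accounts, and tables are invented.
    def check_event(event, rules):
        """Return the names of the rules an event violates."""
        return [r["name"] for r in rules if r["match"](event)]

    rules = [
        {
            "name": "non-application account reading cardholder data",
            "match": lambda e: "credit_cards" in e["objects"]
                               and e["user"] != "app_svc",
        },
    ]

    event = {"user": "jsmith", "verb": "SELECT", "objects": ["credit_cards"]}
    print(check_event(event, rules))
    # ['non-application account reading cardholder data']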


Sign Up To Drop Comment Moderation

We hate that we have to moderate comments, but the spammers are relentless and there’s no way we’ll let those jerks ruin our site. I realized I can disable moderation on a per-account basis without having to give you editing or moderation rights. All you have to do is register with the site, and drop us an email with your username at info@securosis.com. We’ll add you to our super secret group, and you can log in and skip all that moderation silliness. A few of you comment on the blog pretty regularly, and we hate that we have to review everything first and slow the discussion down. Hopefully this will help ease the problem.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.