Misconceptions of a DMZ

A recent post by Daniel Miessler tying segmented web browsing to DMZs got me thinking about the network segmentation most organizations lack. The concept behind that article is to establish a browser network in a DMZ, wherein nothing is trusted. When a user wants to browse the web, the article implies the user fires up a connection into the browser network for some kind of proxy out onto the big, bad Internet. The transport for this connection is left to the reader's imagination, but it's easy to envision something along the lines of Citrix XenApp filling the gap. Fundamentally this may offset some risk, but don't get too excited just yet.

First let's clear up what a DMZ should look like conceptually. From the perspective of almost every organization, you don't want end users going directly into a DMZ. That's because, by definition, a DMZ should be as independent and self-contained as possible. It is a segment of your network that is Internet facing and allows specific traffic to and from (presumably) external, untrusted parties. If something goes wrong in this environment, such as a server being compromised, the breach shouldn't expose the internal organization's networks, servers, and data; nor should it provide the gold mine of free rein most end users enjoy on the inside. The major risk in any DMZ architecture is an attacker poking (or finding) holes and building paths back into your enterprise environment. Access from the inside into the primary DMZ should be restricted to some combination of bastion hosts, user access control, and screened transport regimens (a toy model of such a policy appears below).

While Daniel's conceptual diagram might be straightforward, it leaves out a considerable amount of magic required to scale in the enterprise, given that the browser network needs to be segregated from the production environments. This entails pre-authenticating a user before he or she reaches the browser network, which requires a repository of internal user credentials outside the protected network. There must also be some level of implied trust between the browser network and that pre-authentication point, because passing production credentials from the trusted to the untrusted network would be self-defeating. Conversely, maintaining a completely separate user database (necessarily equivalent to the main non-DMZ system, and possibly including additional accounts if the main system is incomplete) is out of the question in terms of scalability and cost (why build it once when you can build it twice, at twice the price?). So at this point we're stuck in an odd place: either assume more risk from untrusted users or create more complexity – and neither is a good answer.

Assuming we can get past the architecture, let's poke at the browser network itself. Organizations like technologies they can support, and support costs money. Since we assume the end user already has a supported computer and operating system, the organization now has to take on another virtual system just to let users browse the Internet. Assuming a similar endpoint configuration, this roughly doubles the number of supported systems and required software licenses. We could deploy this for free (as in beer) using an array of open source software, but that brings us back to square one for supportability. Knock knock. Who's there? Mr. Economic Reality – he says this dog won't hunt.
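Back to the DMZ rules for a moment: here's the toy model promised above. Zone names and ports are invented, and this sketches the policy concept, not a real firewall configuration – the point is the deliberate absence of any DMZ-to-internal rule.

```python
# A toy model (not a real firewall config; zone names and ports invented)
# of the flow policy described above. The key is what's absent: no rule
# permits the DMZ to initiate anything toward the internal network.
ALLOWED_FLOWS = {
    ("internal", "dmz"): {443},       # screened access via bastion/proxy only
    ("dmz", "internet"): {80, 443},   # the DMZ's reason to exist
    ("internet", "dmz"): {443},       # the public-facing service itself
}

def permitted(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Default deny: a flow is allowed only if explicitly listed."""
    return dst_port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(permitted("dmz", "internal", 445))   # False - no path back inside
print(permitted("internal", "dmz", 443))   # True  - screened inbound access
```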
What about the idea of a service provider building this kind of browser network and offering access to it as part of a managed security service, or perhaps as part of an Internet connectivity service? Does that make this any more feasible? If an email or web security service is already in place, the user management trust issue is eliminated, since the service provider already has a list of authorized users. This could also address the licensing/supportability issue from the enterprise's perspective, since they wouldn't be licensing the extra machines in the browser network. But what's in it for the service provider? Is this something an enterprise would pay for? I think not. It's hard to make the economics work, given the proliferation of browsers already on each desktop and the broad market's clear lack of understanding of how a proxy infrastructure can increase security.

Supportability and licensing aside, what about the environment itself? Going back to the original post, we find the following:

  • Browsers are banks of virtual machines
  • Sandboxed
  • Constantly patched
  • Constantly rebooted
  • No access to the Internet except through the browser network
  • Untrusted, just like any other DMZ

Here's where things start to fall apart even further. The first two don't really mean much at this point. Sandboxing doesn't buy us anything, because we're only using the system for web browsing anyway, so protecting the browser from the rest of the system (which does nothing but support the browser) is a moot point. Maintaining a patch set makes sense, and one could argue that since the only thing on these systems is web browsing, patching cycles could be considerably more flexible. But since the patching is completely different from our normal environment, we need a new process and a new means of patch deployment. Since we all know patching is a trivial slam-dunk that everyone easily gets right these days, we don't have to worry, right? Yes, I'm kidding. After all, it's one more environment to deal with. Check out Project Quant for Patch Management to understand the issues inherent in patching anything.

But we're not done yet. You could mandate access to the Internet only through the browser network for the general population of end users, but what about admins and mobile users? How do we enforce any kind of browser network use when they are at Starbucks? You cannot escape exceptions, and exceptions are the weakest link in the chain.

My personal favorite aspect of the architecture is that the browser network should be considered untrusted. Basically this means nothing inside the browser network can be secured, so we must assume something in the browser network is always compromised. That reminds me of those containment units the Ghostbusters used to put Slimer in.


ReputationDefender

We've all heard the stories: an employee gets upset, says something about their boss online, the boss sees it, and BAM – fired. As information sticks around, people find it increasingly beneficial to think before launching a raging tweet. Herein lies the opportunity: what if you could pay someone to gather that information and potentially get rid of it? Enter ReputationDefender. Their business consists of three key ideas:

  • Search: Through search, ReputationDefender finds and presents information about you so it's easy to understand.
  • Destroy: Remove (for a per-incident fee) information you don't care to have strewn about the Internet.
  • Control: Through search and destroy, you can now control how others see you online.

The company currently has multiple products that each play to specific areas of uncertainty most people have online: children, reputation, and privacy. Reputation is broken out into two different products: one side takes on unwanted information, and the other appears to be SEO for your name (let's not go there).

The two main questions you're probably asking yourself about the service are whether it works and whether it's worthwhile. ReputationDefender's approach makes sense, but isn't practical in terms of execution. If there were a service today that could reliably remove incriminating or defamatory information from all the dark corners of the Internet, the game of privacy would be considerably different. Truth be told, that's not how it works. While this is a topic we could discuss at great length, the simple takeaway is that information replicates and redistributes at an exponential rate, which adds to the depth and complexity of information sprawl. Now take into consideration all the sites that go to great lengths to keep information free from manual expungement: Wikileaks, The Pirate Bay, and The Onion, to name a few. OK, well, not The Onion, but that's still some funny stuff.

The point is that if someone wants to drag your otherwise good reputation through the mud, there are far too many ways to publish the dirt, and relatively little you can do about it. Paying someone $44.90 (the minimum price to enroll in a monthly MyReputation subscription and use the 'Destroy' assistance once) isn't going to change that. Not convinced? Keep an eye on the way law enforcement is scouring the Internet these days, using it as a preemptive tool to address what some might consider idle threats, and you can start to see that there's more archiving going on than you'd probably care to think about.

Take a realistic approach to the root of the problem: anything you post to the Internet can never be guaranteed private forever. Sites are bought out, information is sold, and breaches and leaks are daily occurrences. The only control you have is over how you put that information out in the first place. I wish it were different, but for $14.95 a month (sans any 'Destroy' attempts) you are better off investing in encryption or password management software to reduce your exposure where you do have some control. Then again, Dr. Phil may be able to persuade you otherwise.

P.S. I'm confident this service is full of holes, but you might say I don't have any real proof. That's going to change, though, as we put the service through the grinder on Mike and Rich. Stay tuned!


DNS Resolvers and You

As you are already well aware (if not, see the announcement – we'll wait), Google is now offering a free DNS resolver service. Before we get into the players, though, let's first understand the reasons to use one of these free services. You're obviously reading this blog post, and to get here your computer or upstream DNS cache resolved securosis.com to 209.240.81.67. As long as that works, what's the big deal? Why change anything?

Most of you are probably reading this on a computer that dynamically obtains its IP address from the network you're plugged into. It could be at work, home, or a Starbucks filled with entirely too much Christmas junk. Aside from assigning your own network address, whatever router you are connecting to also tells you where to look up addresses, so you can convert securosis.com to the actual IP address of the server. You never have to configure your DNS resolver manually – you can rely on whatever the upstream router (or other DHCP server) tells you to use. For the most part this is fine, but nothing says the DNS resolver has to be accurate, and if it's hacked it could be malicious. It might also be slow, unreliable, or vulnerable to certain kinds of attacks. Some resolvers actively mess with your traffic, such as ISPs that return a search page filled with advertisements whenever you type in a bad address, instead of the expected error.

If you're on the road, your DNS resolver is normally assigned by whatever network you're plugged into. At home, it's your home router, which gets its upstream resolver from your ISP. At work, it's… work. Work networks are generally safe, but aside from the reliability issues, we know home ISPs and public networks are prime targets for DNS attacks. Thus there are security, reliability, performance, and even privacy advantages to using a trustworthy service. Each of the more notable free providers cites its own advantages, along these lines:

  • Cache/speed: A large cache should equate to a fast lookup. Since DNS is hierarchical in nature, if the cache you ask to resolve a name already has the record you want, there is less wait for the answer. Maintaining the relevance and accuracy of this cache is part of what separates a good, fast DNS service from, say, the not-very-well-maintained DNS service from your ISP. Believe it or not, depending on your ISP, a faster resolver might noticeably speed up your web browsing.
  • Anycast/efficiency: This gets down into the network architecture weeds, but at a high level it means that when I am in Minnesota, traffic I send to a certain special IP address may end up at a server in Chicago, while traffic from Oregon to that same address may go to a server in California instead. Anycast is often used in DNS to provide faster lookups based on geolocation, user density, or any other metrics the network engineers choose, to improve speed and efficiency.
  • Security: DNS is susceptible to many different attacks; it's a common vector for things like creating a denial of service on a domain name, or poisoning DNS results so users of a service (domain name) are redirected to a malicious site instead. There are many attacks, but the point is that a vendor focused on DNS as a service has probably invested more time and effort into protecting it than an ISP that regards DNS as simply a minor cost of doing business.

These are just a few reasons you might want to switch to a dedicated DNS resolver.
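If you want to see the speed differences for yourself, here's a quick sketch using the third-party dnspython (2.x) package. The resolver IPs are the providers' published addresses; results vary wildly with cache state and your network path, so treat this as a curiosity rather than a benchmark.

```python
# Times the same A-record lookup against two public resolvers.
import time
import dns.resolver  # pip install dnspython

RESOLVERS = {
    "Google Public DNS": "8.8.8.8",
    "OpenDNS": "208.67.222.222",
}

def timed_lookup(name, nameserver):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    start = time.perf_counter()
    answer = resolver.resolve(name, "A")
    elapsed_ms = (time.perf_counter() - start) * 1000
    return [rr.address for rr in answer], elapsed_ms

for label, ip in RESOLVERS.items():
    addresses, ms = timed_lookup("securosis.com", ip)
    print(f"{label} ({ip}): {addresses} in {ms:.1f} ms")
```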
While there are a bunch of them out there, here are three major services, each offering something slightly different:

  • OpenDNS: One of the most full-featured DNS resolution services, OpenDNS offers multiple plans to suit your needs – basic is free. The thing that sets OpenDNS apart is the dashboard, from which you can change how the service responds to your networks. This adds flexibility, with the ability to enable and disable features such as content filtering, phishing/botnet/malware protection, reporting, logging, and personalized shortcuts. This lets DNS serve as a security feature, since the resolver can redirect you someplace safe if you enter the wrong address; you can also filter content in different categories. The one thing OpenDNS often gets a bad rap for, however, is DNS redirection on non-existent domains. Like many ISPs, OpenDNS treats every failed lookup as an opportunity to redirect you to a search page with advertisements. Since many other applications (Twitter clients, Skype, VPNs, online games, etc.) use DNS, if you are using OpenDNS with the standard configuration you could potentially leak login credentials to the network: a bad request never gets back a standard NXDOMAIN response, and your confused client software sees the response as a successful NOERROR and proceeds, rather than aborting as it would on a 'proper' NXDOMAIN – potentially sending authentication credentials to OpenDNS (you can test your own resolver with the sketch after this list). You can disable this behavior, but doing so forfeits some of the advertised features that rely on it. OpenDNS is a great option for home users who want all the free security protection they can get, as well as for organizations interested in outsourcing DNS security and gaining a level of control and insight that might otherwise be available only through on-site hardware. Until your kid figures out how to set up their own DNS, you can use it to keep them from visiting porn sites. Not that your kid would ever do that.
  • DNSResolvers: A simple, no-frills DNS resolution service. All they do is resolve addresses – no filtering, redirection, or other games. This straight-up resolution service also won't filter for security (phishing/botnet/malware). DNSResolvers is a great, fast service for people who want well-maintained resolvers and are handling security themselves. It effectively serves as an ad demonstrating the competence and usefulness of parent company easyDNS, by providing a great free DNS service, which encourages some users to look at easyDNS's paid offerings.
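As promised, here's the resolver self-test – the random label below is effectively guaranteed not to exist, so an honest resolver must fail the lookup. Any answer at all means your resolver is rewriting NXDOMAIN responses.

```python
# A tiny self-test for NXDOMAIN rewriting, using the system resolver.
import socket
import uuid

bogus = f"nxdomain-test-{uuid.uuid4().hex}.com"
try:
    addr = socket.gethostbyname(bogus)
    print(f"{bogus} 'resolved' to {addr}: your resolver rewrites NXDOMAIN")
except socket.gaierror:
    print("Lookup failed as it should: no NXDOMAIN games detected")
```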


Clientless SSL VPN Redux

Let's try this again. Obviously I didn't do a very good job of defining what 'clientless' means, creating some confusion. In part this is because a lot of documentation confuses 'thin client' with 'clientless'. Cisco actually has a good set of definitions, but in case you don't want to click through I'll reiterate them (with a little added detail):

  • Clientless: All traffic goes through a standard browser SSL session – essentially a simple proxy for web browsing. A remote client needs only an SSL-enabled web browser to access HTTP or HTTPS web servers on the corporate LAN (or the outside Internet, which is part of the problem we're talking about).
  • Thin client: Users download a small Java applet for secure access to TCP applications that use static port numbers. UDP is not supported. The client can add security features, and allows tunneling of non-web traffic, such as letting Outlook connect to an Exchange server. [Other vendors also use ActiveX.]
  • Client: The SSL VPN downloads a small client to the remote workstation and allows full, secure access to all the resources on the internal corporate network. It's a VPN that tunnels all traffic over SSL, as opposed to IPsec or older alternatives.

OK, these definitions are a bit Cisco-specific, but they do the job. By 'clientless' we mean no Java or ActiveX is in play. This is key, because both the thin and full client models are immune to the flaw described in the US-CERT vulnerability note. The vulnerability applies only when using a true, completely clientless SSL VPN through the browser.

Speaking of the CERT note, I think everyone can agree it was poorly written. There are vendors on that list who have never provided any sort of clientless SSL VPN (i.e., glorified proxy) functionality, so it's better not to rely on the list, even though most entries are marked "Unknown".

At this point, if you've identified a true clientless SSL VPN in your environment and are wondering how to mitigate the threat as much as possible, the best thing you can do is make sure the device only allows access to specified networks and domains, as in the sketch below. The more access end users have to external sites, the wider the window of opportunity for exploitation. That said, it is still generally a bad idea to use clientless VPNs on public networks, since they provide a lower barrier against attack than a (thin or full) VPN client can, especially in light of all the threats to DNS in such environments. It's not hard to mess with a user's DNS on an open (or hostile) network, or perform other man-in-the-middle attacks. Clientless SSL VPNs are ultimately very fancy proxies, and should be used carefully, in tightly controlled environments. In situations where full control or public access is required, there are far more secure options, including client-based SSL VPNs (OpenVPN, etc.) and IPsec.
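Here's that allowlist sketch. Host names are hypothetical, and this is a concept sketch of the mitigation, not any vendor's actual implementation – the point is simply that the rewriting proxy refuses anything not explicitly internal.

```python
# The proxy consults an allowlist before fetching anything.
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"intranet.example.com", "mail.example.com"}

def proxy_may_fetch(url: str) -> bool:
    """Refuse to proxy anything not explicitly on the internal allowlist."""
    host = urlsplit(url).hostname or ""
    return host in ALLOWED_HOSTS

print(proxy_may_fetch("https://intranet.example.com/portal"))  # True
print(proxy_may_fetch("https://evil.example.net/payload"))     # False
```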


Serious Flaw in Clientless SSL VPNs

Good job! You paid tens of thousands of dollars for that shiny new name-brand VPN, and then decided to deploy its web VPN functionality because, well, it was easier than deploying software clients. An underpinning of web security that dates back to Netscape Navigator 2.0 is the "same origin" policy for JavaScript. Your clientless SSL VPN intentionally breaks it, and that's considered a feature.

What does this mean for you? If your implementation allows dynamic URL rewriting (i.e., end users can put in any URL and have the web VPN fetch it), it's GAME OVER, since every website a user views through the service appears to come from the same domain – your trusted VPN server. This is the worst case, but there are many other scenarios where an attacker could set up shop to exploit the session, especially if the end user is on a public network where DNS is compromised. There are a bunch of ways to exploit this, especially in multi-step attacks where the bad guy can get onto the internal network (easy enough with malware). Don't be surprised if this shows up in BeEF (a comprehensive tool for exploiting browser vulnerabilities) soon.

Friends don't let friends connect clientless – fix it the right way. Read the US-CERT vulnerability note for more detailed information. You can mitigate many of the potential problems by only authorizing the SSL VPN to manage traffic for trusted domains, and avoiding tunneling to random destinations. If it's a full SSL VPN product with a re-browsing feature, turn that capability off! Oh, and not to add to the confusion, but Sun's JRE was also recently vulnerable to same origin policy violations.
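To see why the rewriting is fatal, look at how a browser decides two pages share an origin. The URLs below are hypothetical, following the common /proxy/&lt;scheme&gt;/&lt;host&gt;/ rewriting pattern – an internal page and an attacker's page collapse into the VPN gateway's single origin.

```python
# Browsers key origin checks on (scheme, host, port). Both rewritten URLs
# share the gateway's origin, so script from one can touch the other.
from urllib.parse import urlsplit

def origin(url):
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname, parts.port or 443)

intranet_via_vpn = "https://vpn.example.com/proxy/http/intranet.example.com/"
evil_via_vpn = "https://vpn.example.com/proxy/http/evil.example.net/"

print(origin(intranet_via_vpn))
print(origin(evil_via_vpn))
print(origin(intranet_via_vpn) == origin(evil_via_vpn))  # True - same origin
```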


What the Renegotiation Bug Means to You

A few weeks ago a new TLS and SSLv3 renegotiation vulnerability was disclosed, and there's been a fair bit of confusion around it. When the first reports of the bug hit the wire, my initial impression was that the exploit was too complex to be practical, but as more information comes to light I'm starting to think it's worth paying attention to. Since every web browser and most other kinds of encrypted Internet connections – such as those between mail servers – use TLS or SSLv3 to protect traffic, the potential scope is massive.

The problem is that TLS and SSLv3 allow renegotiation outside of an established TLS connection, creating a small window of opportunity for an attacker to sit in the middle and, at a particular phase of a connection, inject arbitrary data. The key bits are that the attacker must be in the middle, and there's only a specific window for data injection. The encryption itself isn't cracked, and the attacker can't read the encrypted data, but the attacker now has a hole to inject something which could trigger unanticipated actions, such as sending a command to a web application the user is connected to.

A lot of people are comparing this to Cross-Site Request Forgery (CSRF), where a malicious website tricks the browser into doing something on a trusted site the user is logged into, like changing their password. It is a bit similar, because we're injecting something into a trusted connection, but the main differentiator is where the problem lies. CSRF happens way up at the application layer, and to pull it off all we need to do is trick the user (or their browser). This new flaw sits at the transport layer, so we have a lot less context or feedback. For the TLS/SSL attack to work, the attacker generally has to be in the middle of the network path – for example, on the same local network (broadcast domain) as the victim. That alone decreases the risk significantly right out of the gate.

Is this a viable exploit tactic? Absolutely, but within the bounds of man-in-the-middle positioning, and within the limits of what you can do with blind injection. This attack vector is most useful where network access is easy: unsecured WiFi and large network segments that aren't protected from man-in-the-middle (MITM) attacks. The more significant cause for concern is if you are running an Internet-facing web application that is vulnerable to the TLS/SSL renegotiation issue as described, and either:

  • Running a web app that doesn't have any built-in application layer protections (anti-CSRF tokens, session state, etc.);
  • Running a web app that allows users to store and retrieve things using simple POST requests (such as Twitter); or
  • Using TLS/SSLv3 as transport security for something else, such as IMAP/SSL, POP/SSL, or SMTP/TLS.

In those cases, if an attacker can get into the middle of one of your users' connections, they can inject data and potentially cause bad things to happen, possibly even redirecting your user to a new, malicious site. One recent example (since fixed) showed how an attacker could trick Twitter into posting the user's account credentials; a simplified illustration of that splice follows below.

Currently the draft of the fix binds a renegotiation handshake to the already-established TLS channel, which closes the hole. Unfortunately, since SSLv3 does not support extensions, there is no way for secure renegotiation to happen there; thus the death of SSL is nigh – long live (a fixed) TLS.
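Here's a simplified, non-working illustration of that splice. The endpoint path and header are approximations of the published Twitter example, not a working exploit: the attacker opens the TLS session and sends a dangling request, then the victim's renegotiated request gets appended by the server.

```python
# The victim's first line is swallowed by the unterminated header, while
# the victim's cookie still authenticates the attacker-chosen action.
attacker_prefix = (
    "POST /statuses/update.xml HTTP/1.1\r\n"
    "Host: twitter.com\r\n"
    "X-Ignore-This: "   # no CRLF: the victim's request line lands here
)
victim_request = (
    "GET /home HTTP/1.1\r\n"
    "Host: twitter.com\r\n"
    "Cookie: _session=VICTIM_SECRET\r\n"
    "\r\n"
)
# What the application layer ultimately parses as a single request:
print(attacker_prefix + victim_request)
```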


Welcome to Oceania

At lunch last week, location-based privacy came up. I actively opted in to a monitoring service that gets me a discount on insurance for a vehicle I own. My counterpart stated that they would never agree to anything of the sort, because of the inherent breach of personal privacy and security. I responded that the privacy statement explicitly says the device does not contain GPS, nor does the company track the vehicle's location. But even if the privacy statement said the opposite – should I care? Is location directly tied to some aspect of my life that might negatively impact me? And ultimately, is security really tied to privacy in this context?

In a paper by Janice Tsai, Who's Viewed You? The Impact of Feedback in a Mobile Location-Sharing Application (PDF), the abstract's last line states: "…our study suggests that peer opinion and technical savviness contribute most to whether or not participants thought they would continue to use a mobile location technology." This makes sense, as I would self-qualify my ability to understand the technology well enough to control and measure the level of exposure I create. Although the paper's focus is ultimately the feedback (or lack thereof) these location-based services provide, it still contains interesting information. What most intrigued me is that it never actually correlated privacy with security. I expected a definitive point where users complained about being less secure somehow because they were being tracked. Nothing like that appeared.

I continued on my journey, looking to tie location-based privacy to security, and ran across another paper with a more promising title: "Location-Based Services and the Privacy-Security Dichotomy" by K. Michael, L. Perusco, and M. G. Michael. This paper provides much more warning of "security compromise" and "privacy risk", but the problem remains – again, it doesn't provide any hard evidence of how these location-based services actually create a security risk. In fact it's closer to the opposite: the authors state that if we are willing to give up privacy, our personal security may actually increase. They mention the obvious risks, including lack of control and data leakage, but at this point I'm still unsatisfied, without a clear understanding of how or why using a location-based service might ultimately make me less secure.

So maybe it's simply not so, and perhaps the real problem is outlined in section 3.2 of the paper: "The Human Need for Autonomy". Let's be honest – it's more psychological than anything, allowing for obvious exceptions, the most notable being stalker scenarios linked to domestic abuse. Even in that scenario it may be a stretch to say location-based services are the root cause of decreased personal security. Sure, an angry ex may guess or even know the password to a webmail account and skim location data from communications, but the same could be done by picking the lock on a residence and stealing a daily planner.

This is an area that can easily be argued from either side, because of different interpretations of what it all means in the end. We'd like to think nobody is tracking us, but we all carry mobile phones, we're all recorded daily by countless cameras, we all badge in at work using RFID, we all swipe payment cards, and we all use the Internet (I'm generalizing "we" based on content distribution here, but flame if you must).
The addition of services like Google Latitude, Skyhook Wireless, and Yahoo! Fire Eagle adds a level of usability, but in the grand scheme of things do they really impact your personal security? Probably not. In the meantime, my fellow netizens, we can at least make light of the situation while we discuss what it is and isn't: a place, no matter where we are, that can mockingly be referred to as Oceania – because try as you might, someone is watching.


Name of the Game: Vested Interest

It seems as though lately a lot of heated conversations revolve around X.509. Whether it's IPsec implementations or SSL/TLS certificates, someone always ends up frustrated. Why? Because X.509 really does suck when you think about it. There are many facets one could rant about when the topic is X.509: the PKI that could have been, but isn't and never will be. It's a losing argument, and if I've already got your blood pressure on the rise (I'm lookin' at you, registrars!) you know why it sucks, but there's zero motivation to do anything about it. Well, there is some motivation, but it gets quickly squashed by FUD from the corporations telling you how much you need them. You need the warm fuzzy feeling of a WebTrust-certified Certificate Authority creating certificates to provide security and authenticity. But… didn't someone break that? Enter cheesy diagram:

I know, I know – that's a work of art in and of itself. I can be hired for crappy vector art at the low, low hourly rate of $29.95. There's my pitch – now back to the story. I bet at this point you're telling yourself I could have made this diagram much more readable by arranging it differently. In reality I did it on purpose because, like X.509, stuff is there that doesn't work quite right. That aside, I want to make sure you get two things out of this rant:

  • "Joe Schmoe" will never be able to make a decision at this level of complexity. Some people can; others cannot. Expecting everyone on the Internet to figure this stuff out is a recipe for failure and fraud.
  • The X.509 chain of trust is a big reason it sucks so much.

Let me explain. In the diagram, "Joe" is visibly upset. Rightly so: he's at his local coffee shop doing a little social network stalking and banking. Aside from all the other possible attacks on public WiFi today, he's been had by a MITM attack built explicitly to steal his credentials, even though he's careful to make sure the little lock icon says he's good to go. There's no way for him to validate what's behind it. So is this attack feasible today? That's probably the wrong question – the right question is: is it possible?

Let's move on to the second item of interest: the chain of trust. X.509 is very rigid – if any certificate along the chain is invalidated, you must re-sign and reissue all the certs below it. Think about what that means for thousands of computers using IPsec with X.509 for phase one authentication: if a mid-level signing server either expires or is compromised, you have to distribute and install new certificates everywhere below it (a toy sketch of this blast radius appears below). Now think of the same situation applied to the certificate authorities you get your SSL/TLS certificates from. If a CA certificate is invalidated, what is the process to revoke it on the client side – meaning every browser installed on every computer across the Internet? That really sucks. Don't even bring up CRL or OCSP – neither works, and neither was designed to manage at this magnitude (let alone in any decent-sized environment).

So let's fix it! Let's do something with DNSSEC to get around this rigidity – as Robert Hansen, Dan Kaminsky, and others have suggested. I've got bad news, my friends: vested interests.
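Before we get to those vested interests, here's the toy sketch promised above. The hierarchy is made up; the point is that invalidating one intermediate drags every certificate beneath it down too.

```python
# The reissuance blast radius in an X.509 hierarchy.
CHAIN = {
    "RootCA": ["IntermediateA", "IntermediateB"],
    "IntermediateA": ["host1", "host2"],
    "IntermediateB": ["host3", "host4"],
}

def blast_radius(cert):
    """Everything that must be re-signed and reissued if `cert` dies."""
    affected = []
    for child in CHAIN.get(cert, []):
        affected.append(child)
        affected.extend(blast_radius(child))
    return affected

print(blast_radius("IntermediateA"))  # ['host1', 'host2']
print(blast_radius("RootCA"))         # everything - good luck with that
```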
If we replace the existing rigid system with something more flexible and dynamic – say, something as distributed as DNS – we destroy the very lucrative choke point that currently creates a major revenue stream. That's not to say this problem will never get fixed, but I expect major pressure to ensure that any replacement preserves the lucrative 'sweet spot' for CAs, rather than something more viable and open which might also be much cheaper. As usual, it is unlikely any real progress will happen without a catastrophic event to kick-start the process, but if you're even remotely cognizant of how things get fixed around these parts, you already knew that.


Hacking Envelopes

This story begins early last week with a phone call from a bank I hold accounts with. I didn't actually answer the call, but a polite voice mail informed me of possible fraudulent activity and stated that I should call back as soon as possible. At first I suspected this was a social engineering exercise, but I quickly validated the phone number as legit – unless of course this was some fantastic setup that was either man-in-the-middling the bank's site (which would let them publish the number as valid) or had hijacked the number itself. Tinfoil hat aside, I called the bank. A friendly fraud services representative handled my call, and in less than twenty minutes we had both come to the conclusion that my card for the account in question was finally compromised. By finally, I mean after roughly seven years as my primary payment vehicle for daily use.

But this, ladies and gentlemen, is not where the fun started. No, I had to wait for the mail for that. Fast forward five to seven business days, when a replacement card showed up in my lockbox (an often-ignored benefit of living in a high-density residence). That day I received a rather thick stack of mail that included half a dozen similarly sized envelopes. Unfortunately, I knew immediately (without opening any of them) which one contained my new card – and it wasn't based on feel. One would think a financial institution might go to trivial lengths to protect card data within an envelope, but clearly not in this case. The problem: four of the sixteen digits were readable because, and I'm assuming here, some automatic feeding mechanism at the post office put enough pressure on the embossed card number to reprint it on the outside of the envelope. It was as if someone had run that part of the card through an old-school carbon copy machine.

At this point my mail turned into a pseudo scratch-off lottery game, and I quickly set to trying household items to finish what had already been started. I was a winner on the second try (the Clinique "smoldering plum – blushing blush powder brush" was a failure – my fiancé was not impressed, and clearly I've watched too much CSI: Miami). It turns out a simple brass key is all you need to reveal the rest of the numbers, the name, and the expiration date. At this point I'm conflicted between two ideas:

  • Relief and confusion: The card security code isn't embossed. So why must the rest of it be?
  • Social engineering: If obtaining card data this way is easy enough, I could devise a scheme where I call the recipients of new cards, armed with enough data to sound like the bank, and plenty of people would hand over their security codes.

After considerable thought, I feel it's safe to say the current method of card distribution poses a low but real level of risk: a significant amount of card data can be discerned without any brute force on the envelope itself. Is it possible? Surely. Is it efficient? Not really. Would someone notice the card data on the back of the envelope? Maybe. But damn – now it really makes sense that folks just go after card data TJX-style, considering all the extra effort this route requires.


Where Art Thou, Security Logging?

Today you'd be hard pressed to find a decent-sized network that doesn't have some implementation of Security Event Management (SEM). It's just a fact of modern regulation that a centralized system to collect all that logolicious information makes sense (and may be mandatory). Part of the problem with architecting and managing these systems is securely collecting the information and subsequently verifying its authenticity.

Almost every network-aware product you might buy today has a logging capability, generally based on syslog (RFC 3164). Unfortunately, as defined, syslog doesn't provide much security. In fact, if you need a good laugh I suggest reading section 6 of the RFC. You'll know you're in the right place when you start to digest information about odors, moths, and spiders. It becomes apparent very quickly, reading subsections 6.1 through 6.10, that the considerations outlined there mostly serve to tip you off that the authors already knew syslog provides minimal security – so don't complain to them.

At this point most sane people question using such a protocol at all, because surely there must be something better, right? Yes and no. First let me clarify: I didn't set out to create an exhaustive comparison of [enter your favorite alternative to syslog here] for this writeup. Sure, RFC 5424 obsoletes the original RFC 3164, and yes, RFC 5425 addresses using TLS as a transport to secure syslog. Or maybe it would be better to configure BEEP on your routers; and let's not forget the many proprietary and open source agents you can install on your servers and workstations. I freely admit there are some great technologies to read about in event logging. The point, though, is that with considerable immaturity and many options to choose from, most environments fall back to the path of least resistance: good ol' syslog over UDP. A protocol that easy to use is also easy to abuse, as the sketch below shows.

Unfortunately I've never been asked by a client how to do logging right. As long as events are streaming to the SEM and showing up on the glass in the NOC/SOC, the question doesn't come up. It may not even be a big deal right now, but I'd be willing to bet you'll see more on this topic as audits become more scrutinizing. Shouldn't the integrity of that data rest on something a little more robust than the unreliable, unauthenticated, repudiable, and completely insecure protocol you probably have in production? You don't have to thank me later, but I'd start thinking about it now.
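To make the "completely insecure" part concrete, here's a minimal sketch: one unauthenticated UDP datagram, claiming to be any device you like. The collector address is a made-up RFC 5737 documentation IP – don't point this at a network you don't own.

```python
# How little RFC 3164 syslog over UDP buys you: any host can emit a
# message claiming to come from any device, and the collector has no way
# to verify the sender, the hostname in the message, or the content.
import socket

COLLECTOR = ("192.0.2.10", 514)

# PRI <34> = facility 4 (auth), severity 2 (critical)
message = b"<34>Dec  8 15:04:05 core-router sshd: fabricated event, trusted anyway"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message, COLLECTOR)  # no handshake, no auth, no integrity check
sock.close()
```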


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.