
Is the Virtual Desktop Hype Real?

I’ve been hearing a lot about Virtual Desktops (VDIs) lately, and am struggling to figure out how interested you all really are in using them. For those of you who don’t track these things, a VDI is an application of virtualization where you run a bunch of desktop images on a central server, and employees or external users connect via secure clients from whatever system they have handy.

From a security standpoint this can be pretty sweet. Depending on how you configure them, VDIs can be on-demand, non-persistent, and totally locked down. We can use all sorts of whitelisting and monitoring technologies to protect them – even the persistent ones. There are also implementations for deploying individual apps instead of entire desktops. And we can support access from anywhere, on any device. I use a version of this myself sometimes, when I spin up a virtual Windows instance on AWS to perform some research or testing I don’t want touching my local machine. Virtual desktops can be a good way to allow untrusted systems access to hardened resources, although you still need to worry about compromise of the endpoint leading to lost credentials and screen scraping/keyboard sniffing. But there are technologies (admittedly not perfect ones) to further reduce those risks.

Some of the vendors I talk with on the security side expect to see broad adoption, but I’m not convinced. I can’t blame them – I do talk to plenty of security departments which are drooling over these things, and plenty of end user organizations which claim they’ll be all over them like a frat boy on a fire hydrant. My gut feeling, though, is that virtual desktop use will grow, but be constrained to particular scenarios where these things make sense. I know what you’re thinking – “no sh* Sherlock” – but we tend to cater to a … more discerning reader. I have spoken with both user and vendor organizations which expect widespread and pervasive deployment. So I need your opinions. Here are the scenarios I see:

  • To support remote access. Probably ephemeral desktops. Different options for general users and IT admins.
  • For guest/contractor/physician access to a limited subset of apps. This includes things like docs connecting to check lab results.
  • Call centers and other untrusted internal users.
  • As needed to support legacy apps on tablets.
  • For users you want to allow to use unsupported hardware, but probably only for a subset of your apps.

That covers a fair number of desktops, but only a fraction of what some other analyst types are calling for. What do you think? Are your companies really putting muscle behind virtual desktops on a large scale? I think I know the answer, but want a sanity check for my ego here. Thanks…


Technology Caste System

There is a caste system in technology. It’s an engineering caste system, or at least that’s what I call it: a feeling of superiority developers have over their QA, IT, product management, and release management brethren. Software developers at every firm I have ever worked for – large and small – share a condescending view of their co-workers when it comes to technology. They are at the top of the totem pole, and act as if their efforts are the most important.

It starts in college, where software programs are more competitive to get into and require far more rigorous curricula. It is fostered by the mindset of programmers, who approach their profession more like a religion. It’s not a 9-5 day job, and most 20-something developers I have worked with put in longer hours and more time on self-education than any other profession I have ever seen. They create something from nothing every day; and with software, anything is possible. The mindset is reinforced by pay scales and recognition when products are delivered. Their technical acumen runs far deeper than the other groups’, and they don’t respect those without it. This relationship between different professions is reinforced when problems arise, as developers are the ones explaining how things work and advising those around them. It’s the engineering team that writes the trickier test cases, and the engineers who come up with the best product ideas. Heck, in the last four organizations I have run, to solve serious IT issues I had to assign members of the engineering team to debug and fix. They are technology rock stars and prima donnas. Right or wrong, good or bad, this attitude is commonplace.

Why do I bring this up? Reviewing the marketing and sales collateral from several security vendors who are applying their IT marketing angles to software developers, I see a lot of approaches that will not work. When it comes to understanding buying centers, those who have traditionally sold into IT don’t get the developer mindset. They approach sales and marketing as if the two were interchangeable, but they are not. The things developers consider important are not the same things the rest of IT considers important. It is unlikely your “IT Champion” can cross-pollinate your ideas to the development team – both because your champion is likely seen as an outsider by the developers, and due to internal tension between different groups. Development sets development requirements. White box test tools? Web application assessments? WAF? Even pen testing? These all need different buyers, with a different mindset and requirements than the buyers of other IT kit – especially compared to network operations gear. The product and the value proposition need to work in the development context.

Most sales and marketing teams want to target the top – the CIO – and work their way down from there. That works for most of IT, but not with developers, who have their own set of requirements over and above business requirements, and often neither fear nor respect upper management. They are far less tolerant of marketing-speak and BS, and much more focused on getting things done easily, so you had better show value quickly or you’re wasting time. UI, workflow, integration, and API options need to be more flexible. When it comes to application security, it’s a developer’s world, so adjust or be ignored.


Network Security in the Age of *Any* Computing: Integration

Supporting any computing – which we have defined as access to your critical information from anywhere, at any time, on any device – requires organizations to restrict access to specific communities of users/devices, based on organizational policies. To do this you need to integrate with your existing installed base of security and networking technologies, ensuring management leverage and reducing complexity. No easy task, for sure. So let’s discuss how you can implement network access control to play nicely in the larger sandbox.

Authentication

When an endpoint/mobile device joins the network, you can start with either a specific authentication or network-based detection of the device, via passive monitoring of the network traffic or the MAC address of the connecting device. The choice of how strong an authentication to require comes down to whether building policies based on device and/or location will be granular enough. If you want the policies to take into account who is driving the device, then you’ll need to know the identity of the user. Although there are techniques to identify users passively, we prefer stronger methods to determine identity; these require integration with an authoritative source for identity information. The integrated directory might be Active Directory, LDAP, or RADIUS. Authentication happens via a persistent agent, a connection portal (provided as part of the NAC solution), or a protocol such as 802.1X. Keep in mind that identity is a dynamic beast, and users & groups are constantly changing, so it’s not sufficient to take a one-time dump of the directory. You’ll want to check for user/group moves, adds, and changes on an ongoing basis. At authentication time you need to figure out what’s going on with the device, which involves inspecting it to understand its security posture.

Endpoint/Mobile Device Integration

The first decision is how deeply to scrutinize endpoints/mobiles when they connect. Obviously there is a time factor to scanning and checking security posture, which can cause user grumpiness. Though most organizations want to make sure devices are properly configured upon access, many aren’t ready to react to the answers they may get. Do you block access when a device violates policy? Even when the user has a legitimate and business-critical need to be on the network? As we discussed briefly in the post on policies, you may want to define policies based on the security controls in place on the endpoints/mobiles. Compromising your security by providing access to compromised devices makes no sense, so what remediation should happen? Do you patch the device? That requires the ability to integrate with the patch management product. Do you reconfigure the device? Or update the endpoint protection platform? It depends on the nature of the policy violation and what information that user can access, but you want options for how to remediate. And each option requires support from your NAC vendor. You could just ignore the details and block users with devices which don’t comply with policy, but this tends to end with your rainmaker calling the CEO because she can’t get into the ordering system to book that critical deal. Which presumably won’t work out very well for you. Another consideration is that devices may be compromised after connecting.
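
Before moving on to post-connect detection, here is a minimal sketch of the directory integration described under Authentication above: a NAC policy engine resolving a connecting user’s groups via LDAP. It assumes the Python ldap3 library; the hostname, service account, base DN, and attribute names are all hypothetical placeholders.

```python
# Minimal sketch: resolve a connecting user's groups from the directory,
# so access policies can consider identity rather than just device/MAC.
# Uses the ldap3 library; host, credentials, and DNs are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server('ldap.example.com', get_info=ALL)
conn = Connection(server, user='cn=nac-svc,dc=example,dc=com',
                  password='not-a-real-password', auto_bind=True)

def groups_for_user(username):
    """Return group DNs for a user, for use in access policy decisions."""
    # In production, escape the username to avoid LDAP filter injection.
    conn.search('dc=example,dc=com',
                f'(sAMAccountName={username})',
                attributes=['memberOf'])
    if not conn.entries:
        return []
    return [str(g) for g in conn.entries[0].memberOf]

# Identity is dynamic: poll or track changes on an ongoing basis rather
# than relying on a one-time dump of the directory.
print(groups_for_user('jdoe'))
```
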
Detecting a compromised device involves both re-authenticating devices periodically (to ensure a man-in-the-middle attack hasn’t happened), and assessing the security posture of the endpoint/mobile device every so often. Another tactic is to detect compromised devices by their behavior – which requires continuously checking devices for anomalous activity. Most NAC devices are already monitoring the network anyway to detect new devices, so this anomaly detection capability is frequently available. Now that you know the posture of the endpoint/mobile, you can determine the appropriate level of access for it, enforcing that policy at the network layer via integration with other infrastructure.

Network Integration

There are plenty of ways to enforce network access policies using your switches and firewalls. Let’s take a look at the major techniques:

  • Inline device: An obvious option for enforcing access policies is to be in the middle of the connection, able to block unauthorized devices as needed. Networking infrastructure players who offer NAC can provide multipurpose boxes that act as inline enforcement points. There isn’t much more to say about it, but this approach has a dramatic impact on network design.
  • CLI: The good old command line is still one of the more popular methods of enforcing access control. This involves the NAC equipment establishing a secure, authenticated session (typically using SSH or SSL) with a switch or firewall and making an appropriate change (see the sketch below). That might mean moving a user onto a guest VLAN or blocking their IP from accessing a protected network. Obviously this requires specific integration between vendors, but given that a handful of vendors control the switch and firewall markets, this isn’t too daunting. That said, there may be delays in compatibility when network/security gear is upgraded, so make sure to check for NAC support before any upgrades.
  • 802.1X: The standard 802.1X protocol is typically used for authentication on connect (as described above), for which it is well suited. But the protocol also includes an option to send enforcement policies to endpoints, which gets far more involved. Even though 802.1X is a mature standard, interoperability can still be problematic in heterogeneous network/security environments. Individual vendors have generally sorted out interoperability between their own NAC and general networking products, but it’s never trivial to make .1X work at enterprise scale.
  • SNMP: Another option for integration with switches is using SNMP to send commands to the networking gear. The advantages of SNMP clearly center on ubiquity of support, but security is a serious concern (especially with early versions of the protocol), so ensure you pay attention to device authentication and session security.
  • All of the above.

As usual, there is plenty of religion about which integration technique is best, which continues to amuse us. Our stance hasn’t changed: diversity in integration techniques is better than no diversity. We also prefer multiple enforcement tactics – multiple, layered controls provide additional hurdles for attackers. That means you want
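
To make the CLI technique concrete, here is a minimal sketch of a NAC component using SSH (via the Python paramiko library) to move a switch port onto a guest VLAN. The host, credentials, and IOS-style commands are hypothetical; real switches differ by vendor, and many require an interactive shell session rather than one-shot commands.

```python
# Minimal sketch of CLI-based enforcement: SSH to a switch and move a port
# to a quarantine/guest VLAN. Uses paramiko; the host, credentials, and
# IOS-style commands are hypothetical and vendor-specific.
import paramiko

def quarantine_port(switch_host, username, password, interface, guest_vlan=999):
    client = paramiko.SSHClient()
    client.load_system_host_keys()  # verify the switch's host key; don't auto-accept
    client.connect(switch_host, username=username, password=password)
    try:
        # Many switches need an interactive shell; exec_command shown for brevity.
        for cmd in (f'interface {interface}',
                    f'switchport access vlan {guest_vlan}'):
            stdin, stdout, stderr = client.exec_command(cmd)
            stdout.channel.recv_exit_status()  # wait for each command to finish
    finally:
        client.close()

# Example: a posture check failed, so the offending port gets quarantined.
quarantine_port('switch01.example.com', 'nac-svc', 'not-a-real-password',
                'GigabitEthernet0/12')
```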


Network Security in the Age of *Any* Computing: Policy Granularity

As we discussed in the last post, there are a number of ways to enforce access policies for any computing. Given the flexibility and dynamic nature of business, access policies should provide sufficient flexibility to meet business needs. To illustrate, let’s look at how an enforcement mechanism like network access control (NAC) can provide this kind of granularity. What you want is to map out access models and design a set of policies that provide users with the right access at the right time from the right device. Let’s focus on mobile devices, the poster children for any computing, and typically the hardest to secure. First we will define three general categories of mobile devices trying to connect to your network:

  • Corporate devices: You have issued these devices to your employees, and they are expected to get full access to pretty much whatever they need. You’ll want to verify both the user (strong authentication) and the device itself. It is also important to monitor what the device is doing, to ensure authorized use after the pre-connect authentication.
  • Personal devices: Sure, it’s easy to just implement a blanket policy of no personal devices. There are big companies doing that right now, regardless of user grumpiness over not being able to use their fancy new iPads at work. But if draconian isn’t an option in your shop, you could move authenticated, unauthorized devices onto a logical network configured only for outbound Internet access. Or provide access to non-critical resources such as employee wikis and the like, but block access to corporate email servers, assuming you don’t want company email on these devices.
  • Everything else: Lots of guests show up at your facilities and try to connect to your networks – both wired and wireless. If they successfully gain access via WPA2 or a physical port, they need to be bounced from the network. This represents the “access” part of network access control.

Depending on your pain threshold, there are many other device types and usage models that can be profiled to create specific enforcement policies. Granularity is only limited by your ability to map use cases and design access policies. Let’s not forget that you can also implement policies based on roles. For instance, your marketing group might have network access with iPads, since every good marketer needs one. But if engineers do not have a business justification for iPad use, that group could be blocked. Policies aren’t defined merely by what (device) the user has, but also by who they are.

Posture-based Policies

What about policies based on defenses implemented on the endpoint or mobile device – such as AV, full disk encryption, and remote wipe? Clearly you need to control those devices as well. Being able to restrict users without certain patches on their devices is legitimate. Or you might want to keep end users off your protected network segment if they don’t have full disk encryption active, to avoid breach disclosure if they lose the device. It’s not just about knowing what the device is and who is using it, but also what’s on it. As you can see, this problem includes at least three dimensions, which is why getting policies right is a prerequisite for controlling access. We’ll talk more about getting the policies right incrementally when we wrap up the series. Which, once again, brings up our main point. Make sure you can enforce security policies that reflect your desired security posture given the context of your business processes.
Don’t force your security policy to map to your enforcement mechanisms.
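
To make those three dimensions concrete (what the device is, who is using it, and what’s on it), here is a minimal sketch of a posture-based access decision in Python. The categories, roles, and network assignments are hypothetical illustrations rather than any product’s policy model.

```python
# Minimal sketch: a three-dimensional access decision combining device
# category (what it is), user role (who has it), and posture (what's on it).
# All names and rules are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Device:
    category: str        # 'corporate', 'personal', or 'guest'
    user_role: str       # e.g. 'marketing', 'engineering'
    full_disk_encryption: bool
    patched: bool

def assign_network(d: Device) -> str:
    if d.category == 'guest':
        return 'blocked'                    # bounced from the network
    if d.category == 'personal':
        if d.user_role == 'marketing':      # role-based exception, e.g. iPads
            return 'internet-only'
        return 'blocked'
    # Corporate devices: posture still matters before protected access.
    if not (d.full_disk_encryption and d.patched):
        return 'remediation-vlan'
    return 'protected'

print(assign_network(Device('corporate', 'engineering', True, False)))
# -> 'remediation-vlan': missing patches keep it off the protected segment
```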


Table Stakes

This morning I published a column over at Dark Reading that kicked off some cool comments on Twitter. Since, you know, no one leaves blog comments anymore. The article is the upshot of various frustrations that have annoyed me lately. To be honest, I could have summarized the entire thing as “grow the f* up”. I’m just as tired of the “security is failing” garbage as I am of ridonkulous fake ROI models, our obsession with threats as the only important metric, and the inability of far too many security folks to recognize operational realities. Since I’m trying to be better about linking to major articles, here’s an excerpt:

There’s been a lot of hand-wringing in the security community lately. Complaints about compliance, vendors and the industry, or the general short-sightedness of those we work for who define our programs based on the media and audit results. Now we whine about developers ignoring us, executives mandating support for iPads we can’t control (while we still use the patently-insecureable Windows XP), executives who don’t always agree with our priorities, or bad guys coming after us personally. We’re despondent over endless audit and assessment cycles, FUD, checklists, and half-baked products sold for fully-baked prices; with sales guys targeting our bosses to circumvent our veto.

My response? Get over it. These are the table stakes, folks, and if you aren’t up for the game here’s a dollar for the slot machines.


FAM: Market Drivers, Business Justifications, and Use Cases

Now that we have defined File Activity Monitoring, it’s time to talk about why people are buying it, how it’s being used, and why you might want it.

Market Drivers

As I mentioned earlier, the first time I saw FAM was when I dropped the acronym into the Data Security Lifecycle. Although some people were tossing the general idea around, there wasn’t a single product on the market. A few vendors were considering introducing something, but in conversations with users there clearly wasn’t market demand. This has changed dramatically over the past two years, due to a combination of indirect compliance needs, headline-driven security concerns, and gaps in existing security tools. Although the FAM market is completely nascent, interest is slowly growing as organizations look for better handles on their unstructured file repositories. We see three main market drivers:

  • As an offshoot of compliance. Few regulations require continuous monitoring of user access to files, but quite a few require some level of audit of access control, particularly for sensitive files. As you’ll see later, most FAM tools also include entitlement assessment, and they monitor and clearly report on activity. We see some organizations consider FAM initially to help generate compliance reports, and later activate additional capabilities to improve security.
  • Security concerns. The combination of APT-style attacks against sensitive data repositories, and headline-grabbing cases like Wikileaks, is driving clear interest in gaining control over file repositories.
  • To increase visibility. Although few FAM deployments start with the goal of providing visibility into file usage, once a deployment starts it’s not uncommon to use it to gain a better understanding of how files are used within the organization, even if this isn’t to meet a compliance or security need.

FAM, like its cousin Database Activity Monitoring, typically starts as a smaller project to protect a highly sensitive repository, and then expands coverage as it proves its value. Since it isn’t generally required directly for compliance, we don’t expect the market to explode, but rather to grow steadily.

Business Justifications

If we turn around the market drivers, four key business justifications emerge for deployment of FAM:

  • To meet a compliance obligation or reduce compliance costs. For example, to generate reports on who has access to sensitive information, or who accessed regulated files over a particular time period.
  • To reduce the risk of major data breaches. While FAM can’t protect every file in the enterprise, it provides significant protection for the major file repositories whose compromise turns a self-constrained data breach into an unmitigated disaster. You’ll still lose files, but not necessarily the entire vault.
  • To reduce file management costs. Even if you use document management systems, few tools provide as much insight into file usage as FAM. By tying usage, entitlements, and user/group activity to repositories and individual files, FAM enables robust analysis to support other document management initiatives such as consolidation.
  • To support content discovery. Surprisingly, many content discovery tools (mostly Data Loss Prevention), and manual processes, struggle to identify file owners. FAM can use a combination of entitlement analysis and activity monitoring to help determine who owns each file.
Example Use Cases

By now you likely have a good idea how FAM can be used, but here are a few direct use cases:

  • Company A deployed FAM to protect sensitive engineering documents from external attacks and insider abuse. They monitor the shared engineering file share, generate a security alert if more than 5 documents are accessed in less than 5 minutes (see the sketch at the end of this post), then block copying of the entire directory.
  • A pharmaceutical company uses FAM to meet compliance requirements for drug studies. The tool generates a quarterly report of all access to study files, and generates security alerts when IT administrators access files.
  • Company C recently performed a large content discovery project to locate all regulated Personally Identifiable Information, but struggled to determine file owners. Their goal is to reduce sensitive data proliferation, but simple file permissions rarely indicate the file owner, which is needed before removing or consolidating data. With FAM they monitor the discovered files to determine the most common accessors – who are often the file owners.
  • Company D has had problems with sales executives sucking down proprietary customer information before taking jobs with competitors. They use FAM to generate alerts based on both high-volume access and authorized users accessing older files they’ve never touched before.

As you can see, the combination of tying users to activity, with the capability to generate alerts (or block) based on flexible use policies, makes FAM interesting. Imagine being able to kick off a security investigation based on a large amount of file access, or low-and-slow access by a service or administrative account.

File Activity Monitoring vs. Data Loss Prevention

The relationship between FAM and DLP is interesting. The two technologies are extremely complementary – so much so that in one case (as of this writing) FAM is a feature of a DLP product – but they achieve slightly different goals. The core value of DLP is its content analysis capabilities: the ability to dig into a file and understand the content inside. FAM, on the other hand, doesn’t necessarily need to know the contents of a file or repository to provide value. Certain access patterns themselves often indicate a security problem, and knowing the exact file contents isn’t always needed for compliance initiatives such as access auditing. FAM and DLP work extremely well together, but each provides plenty of value on its own.
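
Returning to Company A’s rule above, “more than 5 documents accessed in less than 5 minutes” is a classic sliding-window policy. Here is a minimal sketch of how such a check might work; the event format and thresholds are hypothetical illustrations, not how any particular FAM product implements it.

```python
# Minimal sketch of a sliding-window FAM policy: alert when one user
# accesses more than 5 distinct documents within 5 minutes.
# Event format and thresholds are hypothetical illustrations.
from collections import defaultdict, deque

WINDOW_SECONDS = 5 * 60
MAX_DOCS = 5

recent = defaultdict(deque)  # user -> deque of (timestamp, path)

def on_file_access(user, path, ts):
    """Feed each monitored file-access event in; returns True on alert."""
    q = recent[user]
    q.append((ts, path))
    while q and ts - q[0][0] > WINDOW_SECONDS:   # expire old events
        q.popleft()
    distinct_docs = {p for _, p in q}
    if len(distinct_docs) > MAX_DOCS:
        # A real deployment would raise an alert here, and could trigger
        # blocking of further copies from the directory.
        return True
    return False

# Example: six different files in under a minute trips the policy.
for i, t in enumerate(range(0, 60, 10)):
    alerted = on_file_access('jdoe', f'/eng/plans/doc{i}.pdf', t)
print(alerted)  # True
```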


Incite 3/9/2011: Greed Is (fill in the blank)

As most of you know, I’m a huge NFL fan. In fact I made my kids watch the combine on NFL Network two weeks ago when the Boss was away. The frickin’ combine. I was on the edge of my seat watching some guy run a 4.34 40-yard dash. And heard the groans of the crowd when a top-rated offensive tackle did only 21 bench presses of 225 pounds. That’s it? And some defensive lineman did 50 reps on the bench. 50 reps. If this DT thing doesn’t work out, I’m sure he’s got a future benching Pintos in the circus.

Unless you have been hiding under a rock, you also know the NFL players’ union and owners are locked in a stand-off to negotiate a new collective bargaining agreement. It’s hard to sympathize with either side – either the billionaires or the multi-millionaires. Though when you read this truly outstanding piece by Bill Simmons of ESPN, you get a different perspective, and it’s hard to feel anything but disdain for the owners. Though I’m not going to shed any tears for the players either. But if you really want, you can feel sad for the biggest bust in NFL draft history, because he made $38 million and still had his house end up in foreclosure.

I’m not sure about you, but Wall Street is still one of my all-time favorite movies. Though it’s debatable whether Bud Fox is #winning nowadays. When Gekko does his soliloquy at the annual meeting, anchored by the catchphrase “Greed is good,” I still get chills down my spine. Although I’m not sure I believe it any more. You see, I grew up in a pretty modest home. We weren’t poor, but we weren’t rich either. I had stuff, but not the best stuff. I did things, but my friends did more. So I’ve always chased the money, most likely out of some misguided childhood belief that I missed out on something. That pursuit has brought me nothing but angst. I’ve made poor career decisions. I’ve worked with assholes. And I didn’t get rich. Sure, I’m comfortable, and I’m fortunate to be able to provide a nice lifestyle for my family, but I can’t buy a plane. At one point in my life I’d have viewed myself as a failure because of that. So no more chasing the money. If I find it, all the better, but my career decisions are based on what I like to do, not how much I can make.

As I’ve gotten older, I have also realized that what’s right for me may not be right for you. So if you still want to own a plane, more power to you. We need folks with that drive to build great companies, create lots of value, and spur the economy. Just don’t ask me to play along. I’m not interested in running a competitor out of business. Nor am I interested in extracting every nickel and dime from our clients, or screwing someone over to buy another yacht. And that’s also why I’m not the owner of an NFL team. So I guess my answer is “Greed is not interesting anymore.” -Mike

Photo credits: “Greed” originally uploaded by Mike Smail

Incite 4 U

  • We suck at hiring: Many of you work at reasonably sized companies. You know, the kind of company with an HR department to tell you not to surf pr0n on your corporate laptop. Those helpful HR folks also lead the hiring process for your security folks, right? This post by Chief Monkey should hit you in the banana (or taco – we don’t want to discriminate). I usually like a rent-to-own approach. Offer promising folks a short-term contract, and if they have the goods bring them aboard. Yes, I know that in a competitive job market (like security), some candidates may not like it. But your organization is probably more screwed up than anything they have seen before, so this provides some risk mitigation for the candidate as well. They could opt out before it gets much more difficult. – MR
  • Just say no (Rich’s take): Believe it or not, sometimes saying no is the right thing to do. I realize we’re all new-age self-actualized security pros these days, but sometimes you need to hit the brakes before ramming into the back of that car parked in the center lane while some doofus tries to finish a text message. Wells Fargo is clamping down on any use of employee-owned devices, and simultaneously experimenting with corporate iPads to supplement corporate smartphones. In a business like financial services, it only makes sense to operate a more restrictive environment and require employees to use personal devices and personal networks for personal stuff. Not that I’m saying the rest of you need to be so restrictive – you are not one of the biggest financials in the world, and you probably won’t be able to get away with being so draconian. Heck, thanks to iPhones/Android/Winmo7 your users can still access Facebook all they want while at work… without hitting your network. – RM
  • Just say no (Adrian’s take): Wells Fargo’s IT department is saying no to personal devices being connected to the corporate network. Part of me says “Good for them!” I don’t use the same machine to surf the web as I do for online banking, so SoD (Separation of Devices) seems like a good idea. Part of me thinks Wells Fargo makes so many bad decisions in general – what if this is wrong too? I started to wonder if we could see a time when the local area network is only partially secured, and banks let employees use their own devices on the less secure area. What if critical applications and processes are heavily secured in the cloud, as they move away from the users who create a lot of the security problems? Would that be a better model for separating general usage from critical processes and machines? Food for thought. – AL
  • Looking for work, Tier 1 spammer… So Soloway is out of the big house. I wonder if


The CIO Role and Security

During the e10+ event Monday at the RSA Conference, Rich and Mike moderated a panel on Optimizing Your Security Program. One of the contested topics was how to position security to upper management. Every CIO and CISO falls into the trap of having to say ‘No’ to some new idea that occurs to executive management, and then takes the blame for being “Negative Nancy”, “Dr. No”, “The Knight Who Says NEE”, or some collection of the Seven Dirty Words. I was surprised that so little has changed, as these were exactly the same problems I had a dozen years ago. While security threats were far simpler and fewer then, so was acknowledgement of the need for security. I guess it’s human nature that we still fall into the same traps.

For example, I remember the principal VC at one of the many start-ups I worked for stating we needed to take credit cards – I responded that it was too risky given our (total lack of) site security. He looked at me as though I was stupid, insubordinate, and insensitive to customer requirements. I remain convinced it was one of the reasons I was among the first people let go during a series of cost-cutting RIFs. At the brokerage there was a bipolar attitude: 9 days out of 10 I needed to get the sales staff what they needed to do their jobs, and make things as easy as possible so sales conversations were aided rather than hindered by technology. Day ten was a hair-on-fire security exercise because some broker was printing out the entire database to bring to a competitor. On the tenth day you will be asked by the CEO, “What are you doing to protect my data?” The correct answer is not “making it super easy for people to get access,” or “exactly what you told me to.”

As a CIO your priority is service, but you are responsible for security. I managed accordingly, or at least I started out that way. I figured I had the authority to nix projects that compromised data security. I felt it was my responsibility, as champion of security, to halt new projects until they addressed data and compliance issues. Outsiders felt security turned simple ideas into complex – and costly – projects. In reality, though, most new ideas were not fleshed out, and failed to account for total build costs – much less the cost of cleaning up messes down the road. Still, complexity was “my fault”. This resulted in complaints, mid-level managers trying to bypass me altogether, and bosses realigning my bonus incentives based on project completion and satisfaction of IT users. Running IT services for popularity is dangerous, and results in bizarre feature-based prioritization of improvements, with user satisfaction hinging almost purely on system stability. You add shiny toys of no consequence and you make sure things are reliable – no more. I only recommend this if you want to ‘earn’ some short-term bonuses and promotions, then quickly move to your next employer. If you want to succeed in the job for more than a couple consecutive quarters, avoid being a feature firewall, and avoid having your merit judged on convenience and system uptime. Features and new ideas are more likely to come from outside your organization than inside, and no one else wants security, so you have to field these challenges. Trying to spin security as a business enabler is great conceptually, but rarely works in the real world. I used three approaches to avoid being the bad guy for advocating security:

  • The Security Porcupine: When I started working with sales people, I learned many sales techniques. One of the best tactics I ever learned was the ‘Porcupine’ strategy from Tom Hopkins, which I bastardized a bit to help with IT projects. In essence, you don’t catch a porcupine when it’s thrown at you – you deflect it. Ask the originator how they feel the problem should be addressed. “What a great idea! How would you like to handle the [security/compliance/auditing] requirements we must meet?” This is a choice-based strategy. It gives the person who had the idea some responsibility, by recognizing that their idea carries a security burden; they can then help scale back their idea, or accept security controls as part of the plan. Either way, security becomes a facet of their project.
  • The Hidden Security Project: If it’s your responsibility to form the IT deployment plan, take ownership of it and build security in. Weave security into the project plan, subtly or otherwise, such that it is difficult to discern core function from security or operations. Costs and functions are bundled, and a detailed presentation makes it look like you have invested time in understanding how to get the project done. Present the plan and let the executive team decide if the investment is worth it. If it passes, you have both budget and executive buy-in. If not, the work you put into the plan saved you from disaster down the road.
  • The Gauntlet: Accept the proposal and enter a “requirements phase”. As the project undergoes scrutiny from your team, raise security as an outstanding question. Also make sure internal compliance, security, and external auditors review the plan. It’s likely someone else will have the same security concerns you do – in addition to compliance and procedural issues you never thought of. If the idea is truly worthy, it will pass these tests. Either way, don’t fight it head-on. Most ideas die in the process, and nobody gets egg on their face when the initial euphoria passes over to critical reasoning. It’s a transparent pass-the-buck ruse in smaller organizations, but can harden and flesh out good ideas in larger firms.

I hate to advocate shenanigans, but business is business, and people will turn you into road pizza if you don’t protect yourself.


Network Security in the Age of *Any* Computing: Enforcement

As we continue with Network Security in the Age of Any Computing, we have already covered the risks and the need for segmentation to restrict access to sensitive data. Now we focus on technologies that can help restrict access – which tend to be NAC, firewalls, and other network layer controls (such as VLANs and physical segmentation). Each technology has pros and cons. There are no ‘right’ answers – just a set of compromises that must be made by weighing the various available technology options.

NAC

Everyone likes to beat on Network Access Control (NAC), including us. The technology has been around for years, and most of those years have been marketed as The Year of NAC. Unfortunately the technology was oversold years ago, and could not deliver on its promise of securing everything. Imagine that – marketing getting ahead of both technology and user requirements. But folks in the NAC space have been (more quietly) moving their products along, and more importantly building NAC capabilities into a broader network security use case. But before we jump in, let’s take a look at what NAC really does. Basically it scrutinizes the devices on your network to make sure they are configured correctly, and accessing what they are supposed to. That’s the access control part of the name. The most prevalent use case has always been guest/contractor access, where it’s particularly important to ensure any devices connecting to the network are authorized and configured correctly. Of course, under any computing, every device should now be considered a guest as much as feasible. Given the requirement to ensure the right devices access only the right resources, integrating mobile device security with Network Access Control offers a means to implement control structures, so you can trust but verify these devices. Which is what it’s all about, right?

So what’s the issue? Why isn’t NAC proliferating through everything and everywhere, including all these mobile devices? Like most interesting technologies, there is still too much complexity for mass market deployment. You’ll need to link NAC up with your identity infrastructure – which is getting easier through standard technologies like Active Directory, LDAP, and RADIUS, but is still not easy. From a deployment standpoint, the management devices need to see most of the traffic flowing through your network, which requires scalability and sensitivity to latency during design. Finally, you need to integrate with existing network and security infrastructure – at least if you want to actually block any bad stuff – which will be the subject of a later post in this series. As folks who follow markets for a living, we know that once the hype around any market starts dying down, the technology starts becoming more prevalent – especially at the large enterprise level. NAC is no different. You don’t hear a lot about it, but it’s happening, largely driven by the proliferation of these mobile devices and the need to more effectively segment the network.

Firewalls

As described in the PCI Guidance we excerpted, our PCI friends believe firewalls are key to network segmentation and protecting sensitive information. They are right that front-ending sensitive data stores with a firewall is a best practice for controlling access to sensitive segments. Unfortunately, traditional firewalls tend to understand only IP addresses, ports, and protocols. With a number of newer web-based applications – increasingly encapsulated on port 80 – that can be problematic.
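
To illustrate the limitation, here is a minimal sketch of traditional 5-tuple matching: two very different applications tunneled over port 80 look identical to a port-based rule. The rule and connection formats are hypothetical simplifications, not any vendor’s syntax.

```python
# Minimal sketch: traditional 5-tuple firewall matching can't tell apart
# applications that share port 80, hence application-aware inspection.
# The rule/connection formats are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class Conn:
    src_ip: str
    dst_ip: str
    proto: str
    dst_port: int
    app: str  # what deep inspection would identify; invisible to 5-tuple rules

def allowed_by_5tuple(c: Conn) -> bool:
    # "permit tcp any -> 10.1.1.0/24 port 80" style rule, ignoring the app
    return c.proto == 'tcp' and c.dst_port == 80 and c.dst_ip.startswith('10.1.1.')

web = Conn('192.168.5.9', '10.1.1.20', 'tcp', 80, 'corporate-intranet')
tunnel = Conn('192.168.5.9', '10.1.1.20', 'tcp', 80, 'file-sharing-over-http')

# Both connections match the same rule, although only one should be allowed.
print(allowed_by_5tuple(web), allowed_by_5tuple(tunnel))  # True True
```
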
This problem is driving the evolution of the firewall to become much more application aware. We have covered that evolution extensively, so we won’t rehash it all here. But in the network segmentation use case for dealing with mobile devices, scalability of application layer inspection remains a major concern. These devices need the ability to inspect and enforce application layer policies at multi-gigabit internal network speeds. That’s a tall order for today’s firewall devices, but as with everything else in technology, the devices continue to get faster and more mature. All hail Moore’s Law! So these evolved firewalls are also instrumental in implementing a network segmentation architecture to support any computing.

Network Layer Controls

But the path of least resistance tends to be leveraging devices already in place. That means using the built-in capabilities of network switches and routers to enforce the required segmentation and access control. First let’s hit the brute force approach: physical segmentation. As we described in the last post, we believe that Internet access for mobile devices and guests should be on a totally separate network. You don’t want to give a savvy attacker any chance to jump from your guest network to your internal net. This level of physical segmentation is great when the usage model supports it, but for most computing functions it doesn’t. So many folks leverage technologies such as VLANs (virtual LANs) to build logical networks on top of a single physical infrastructure. At the theoretical level this works fine, and will likely be good enough to pass your PCI assessment. That said, the objective isn’t to get the rubber stamp, but to protect the information. So we need to take a critical look at where VLANs can be broken, and see whether that risk is acceptable. There are many ways to defeat VLAN-based segmentation, including VLAN hopping. To be fair, most modern switches can detect and block most of these attacks if configured correctly. That’s always the case – devices are only as strong as their configuration, and it’s rarely safe to assume a solid and secure configuration. Nor is it safe to leave the security of your critical data to a single control. Layers of security are good. More layers are better. But given that everyone has switches, and they all support VLANs and physical segmentation, this will continue to be a common means of restricting access to sensitive data, which is a good thing. VLANs + firewalls + NAC provide a comprehensive system to ensure only the right devices access critical data. That doesn’t mean you need to do everything, but depending on the real sensitivity of the data, it shouldn’t hurt.

Device Health

As described in


Introduction to File Activity Monitoring

A new approach to an old problem

One of the more pernicious problems in information security is allowing someone to do something they are authorized to do, while catching when they do it in a potentially harmful way. For example, in most business environments it’s important to allow users broad access to sensitive information, but this exposes us to all sorts of data loss/leakage scenarios. We want to know when a sales executive crosses the line from accessing customer information as part of their job, to siphoning it for a competitor. In recent years we have adopted tools like Data Loss Prevention to help detect leaks of defined information, and Database Activity Monitoring to expose deep database activity and potentially detect unusual activity. But despite these developments, one major blind spot remains: monitoring and protecting enterprise file repositories. Existing system and file logs rarely offer the level of detail needed to truly track activity, generally don’t correlate across multiple repository types, don’t tie users to roles/groups, and don’t support policy-based alerts. Even existing log management and Security Information and Event Management tools can’t provide this level of information.

Four years ago, when I initially developed the Data Security Lifecycle, I suggested a technology called File Activity Monitoring. At the time I saw it as similar to Database Activity Monitoring, in that it would give us the same insight into file usage that DAM provides for database access. Although the technology didn’t yet exist, it seemed like a very logical extension of DLP and DAM. Over the past two years the first FAM products have entered the market, and although market demand is nascent, numerous calls with a variety of organizations show that interest and awareness are growing. FAM addresses a problem many organizations are now starting to tackle, and the time is right to dig into the technology and learn what it provides, how it works, and what features to look for. Imagine having a tool to detect when an administrator suddenly copies the entire directory containing the latest engineering plans, or when a user with rights to a file outside their business unit accesses it for the first time in 3 years. Or imagine being able to hand an auditor a list of all access, by user, to patient record files. Those are merely a few of the potential uses for FAM.

Defining FAM

We define FAM as: Products that monitor and record all activity within designated file repositories at the user level, and generate alerts on policy violations. This leads to the key defining characteristics:

  • Products are able to monitor a variety of file repositories, which include at minimum standard network file shares (SMB/CIFS). They may additionally support document management systems and other network file systems.
  • Products are able to collect all activity, including file opens, transfers, saves, deletions, and additions.
  • Activity can be recorded and centralized across multiple repositories with a single FAM installation (although multiple products may be required, depending on network topology).
  • Recorded activity is correlated to users through directory integration, and the product should understand file entitlements and user/group/role relationships.
  • Alerts can be generated based on policy violations, such as an unusual volume of activity by user or file/directory.
  • Reports can be generated on activity for compliance and other needs.
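
As a toy illustration of the “collect all activity” characteristic above, here is a minimal sketch that records file events on a local directory using the Python watchdog library. Real FAM products instrument network file shares and correlate events to directory users; this local-only example, with a hypothetical path, just makes the raw event stream concrete.

```python
# Toy sketch of file activity collection on a local directory using the
# watchdog library. Real FAM works against network shares (SMB/CIFS) and
# ties events to directory users; this only shows the raw event stream.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class ActivityLogger(FileSystemEventHandler):
    def on_any_event(self, event):
        # event.event_type is 'created', 'modified', 'deleted', 'moved', etc.
        print(f'{time.strftime("%H:%M:%S")} {event.event_type}: {event.src_path}')

observer = Observer()
observer.schedule(ActivityLogger(), path='/shares/engineering', recursive=True)
observer.start()
try:
    time.sleep(60)  # collect events for a minute, then stop
finally:
    observer.stop()
    observer.join()
```
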
You might think much of this should be possible with DLP, but unlike DLP, File Activity Monitoring doesn’t require content analysis (although FAM may be part of, or integrated with, a DLP solution). FAM expands the data security arsenal by allowing us to understand how users interact with files, and identify issues even when we don’t know their contents. DLP, DAM, and FAM are all highly complementary. Through the rest of this series we will dig more into the use cases, technology, and selection criteria.

Note – the rest of the posts in the series will appear in our Complete Feed.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3-day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.