**Updated** RSA Breached: SecurID Affected

You will see this all over the headlines during the next days, weeks, and maybe even months. RSA, the security division of EMC, announced they were breached and suffered data loss. Before the hype gets out of hand, here’s what we know, what we don’t, what you need to do, and some questions we hope are answered.

**What we know**

According to the announcement, RSA was breached in an APT attack (we don’t know if they mean China, but that’s well within the realm of possibility) and material related to the SecurID product was stolen. The exact risk to customers isn’t clear, but there does appear to be some risk that the assurance of your two-factor authentication has been reduced. RSA states they are communicating directly with customers with hardening advice. We suspect those details are likely to leak or become public, considering how many people use SecurID. I can also pretty much guarantee the US government is involved at this point. From the announcement:

> Our investigation has led us to believe that the attack is in the category of an Advanced Persistent Threat (APT). Our investigation also revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. We are very actively communicating this situation to RSA customers and providing immediate steps for them to take to strengthen their SecurID implementations.

**What we don’t know**

We don’t know the nature of the attack. They specifically referenced APT, which means it’s probably related to custom malware, which could have been infiltrated in a few different ways: a web application attack (such as SQL injection), email/web phishing, or physical access (e.g., an infected USB device – deliberate or accidental). Everyone will have their favorite pet theory, but right now none of us knows cr** about what really happened. Speculation is one of our favorite pastimes, but it is largely meaningless – other than as entertainment – until details are released (or leak).

We also don’t know how SecurID is affected. For customers this is the most important question, and the odds are just about 100% that the answer will leak… probably soon.

**What you need to do**

If you aren’t a SecurID customer… enjoy the speculation. If you are, contact your RSA representative and find out whether you are at risk, and what you need to do to mitigate that risk. How high a priority this is depends on how big a target you are – the Big Bad APT isn’t interested in all of you.

The letter’s wording might mean the attackers have a means to generate certain valid token values (probably only in certain cases). They would also need to compromise the password associated with each user. I’m speculating here, which is always risky, but that’s what I think we can focus on until we hear otherwise. So reviewing the passwords tied to your SecurID users might be reasonable.

**Open questions**

While we don’t need all the details, we do need to know something about the attacker to evaluate our risk. Specifically:

• Can you (RSA) reveal more details?
• How is SecurID affected, and will you be making mitigations public?
• Are all customers affected, or only certain product versions and/or configurations?
• What is the potential vector of attack?
• Will you, after any investigation is complete, release details so the rest of us can learn from your victimization?

Finally: if you have a token from a bank or other provider, give them a few days and then ask them for an update. If we get more information we’ll update this post. And sorry to you RSA folks… this isn’t fun, and I’m not looking forward to the day it’s our turn to disclose.

**Update 19:20 PT:** RSA let us know they filed an 8-K. The SecureCare document is linked here, and the recommendations are a laundry list of security practices… nothing specific to SecurID. This is under active investigation and the government is involved, so they are limited in what they can say at this time. Based on the advice provided, I won’t be surprised if the breach turns out to be email/phishing/malware related.


Friday Summary: March 18, 2011—Preparing for the Worst

I have been debating (in my head) whether or not to write anything about what’s going on in Japan. This is about as serious as it gets, and there is far too much under-informed material out there. But the thing is, I’m actually qualified to talk about disaster response. Heck, probably more qualified than I am to talk about information security. I have over 20 years of experience in emergency services, including work as a firefighter (volunteer), paramedic (paid), ski patroller, mountain rescuer (over 10 years with Rocky Mountain Rescue), and various other paid and volunteer roles. Plus, for about 10 years now, I’ve been on a federal disaster and terrorism (WMD) response team. I’ve deployed on a bunch of exercises, as standby at a few national security events, and for real to Katrina and some smaller local disasters with other agencies. Yes, I’m trained to respond to something like what’s happening right now in Japan, and might deploy if it happened here in the US.

The reason I’m being borderline-exploitative is that I know it’s human nature to ignore major risks until it’s too late, or for a brief period during and after a major event. I honestly expect that out of our thousands of readers, a handful of you might pay attention, and maybe one of you will do something to prepare. Words are cheap, so I figure it won’t hurt to try. I have far too many friends in disaster magnets like California who, at best, have a commercial earthquake bag lying around, and no real disaster plans whatsoever. Instead of a big post with all the disaster prep you should do (and yes, that I’ve done, despite living in a very stable area), I will focus on three quick items to give you a place to start.

First: know your risks. Figure out what sorts of disasters (natural or human) are possible in your area. Phoenix is very stable, so I focus mostly on wildfires, flash floods, nuclear (there’s a plant outside the metro area, but weather could cause a panic), and biological (pandemic) threats. Plus standard home disasters like fire (e.g., our smoke detector is linked to a call center/fire department). My disaster kits and plans focus on these, plus some personal plans for travel-related incidents (I have a medical evac service for some trips).

Second: know yourself. My disaster plans when I was single, without family or pets, and living in a condo in Boulder were very different from the ones I have now. Back then it was “grab my go bag and lock the door”, because I’d be involved in any major response. These days I have to plan for my family… and for being called away from my family if something big happens (the downside of being a fed). Have pets? Do you have enough pet carriers for all of them? And some spare food?

Finally: layer your plan. I suggest you have a three-tiered plan:

• Eject: Your bugout plan. Something so serious hits that you get the hell out immediately. At best you’ll be able to grab 1 or 2 things. I’m not joking when I say there are areas of this country where, if I lived in them, I’d bury supply caches along my escape routes. Heck, when I travel I usually have essentials and survival gear ready to go in 30 seconds in case the hotel alarm goes off.

• Evac: You need to leave, but have more than a few minutes to put things together… or something (like a wildfire or radiological event) happens where you might need to go on sudden notice, but don’t have to drop everything. I have a larger list of items to take if I had 60-90 minutes to prep, which would go in a vehicle. There’s a much smaller list if I have to go on foot – we have 2 kids and cats to carry.

• Entrench: For blizzards, pandemics, etc.: whatever you might need to settle in. There are certain events I would previously have evacuated for, but with a family I would now entrench for. What do you need, accounting for your climate, to survive where you are, and for how long? The usual rule is 3 days of supplies, but that’s a load of crap. Realistically you should plan on a minimum of 7-10 days before getting help. We could make it 30-60 days if we had to, perhaps longer if needed – but the cats wouldn’t like it.

For each option think about how you get out, what you take with you, what you leave behind, how you communicate and meet up (who gets the kids?), and how to secure what you’re leaving behind. I won’t lie – my plans aren’t perfect, and there is still some gear on my list (like backup radio communications). But I’m in pretty good shape – especially with emergency rations and base supplies. A lot of it wasn’t in place until after I got back from Katrina and realized how important this all is.

Long intro, and hopefully it helps at least one of you prep better. On to the Summary:

**Webcasts, Podcasts, Outside Writing, and Conferences**

• Adrian’s Dark Reading post on DB Security in the Cloud.
• Adrian’s Database Activity Monitoring Tips for Search Security.
• The Network Security Podcast, Episode 233.
• Rich quoted in Federal Computer Week on tokenization.

**Favorite Securosis Posts**

• Mike Rothman: Table Stakes. Hopefully you are detecting a theme here at Securosis. Stop bitching and start doing. Rage and bitching don’t get much done.
• David Mortman: Technology Caste System.
• Adrian Lane: Greed Is (fill in the blank).

**Other Securosis Posts**

• Updated: RSA Breached – SecurID Affected.
• The Problem with Open Source in Commercial Software.
• Is the Virtual Desktop Hype Real?
• Incite 3/16/2011: Random Act of Burrito.
• The CIO Role and Security.
• Security Counter Culture.
• FAM: Introduction.
• FAM: Technical Architecture.
• FAM: Market Drivers, Business Justifications, and Use Cases.
• Network Security in the Age of Any Computing: Quick Wins.
• Network Security in the Age of Any Computing: Integration.
• Network Security in the Age of Any Computing: Policy Granularity.
• Network Security in the Age of Any Computing: Enforcement.
• Network Security in the Age of Any Computing: Containing Access.

**Favorite Outside Posts**

• Mike Rothman: REVEALED: Palantir Technologies. Not much is known about HBGary’s partner


Incite 3/16/2011: Random Act of Burrito

It’s easy to be cynical. If you want to look at the negative, things are bad. The economy isn’t great, and in many parts of the world it’s getting worse. Politics are divisive. The Earth is pushing back at 7.9 on the Richter scale, resulting in a generation of Japanese who may be glowing sooner rather than later. Why do we bother?

Security is a microcosm of that. It’s easy to descend into rage about pretty much everything. Budgets, users, senior management, auditors, regulations. I mean, everything just sucks, right? I was at BSides Austin last week, and that was the undercurrent from folks at the con. I did my Happyness presentation and it went over pretty well. At least we could laugh at the folly of our situation. When I feel bad, I try to make fun of the situation. Right after I tear something into little pieces, that is. So that presentation is all about accepting our lot in life and learning to enjoy it.

They say it’s always darkest before the dawn. Despite my pessimistic view of the world, I’m trying to change – to be optimistic. We are seeing technology advance at an unprecedented pace. The world is a much smaller place with many of these new collaboration capabilities. I mean, a guy can make a living by blogging and tweeting from a coffee shop anywhere in the world. Really. I wonder what technology will look like when my kids enter the workforce in 12-15 years.

But in the end it’s about the people. It’s easy to be cynical on the other end of a Twitter client, or as a troll on a blog post. It’s easy to snipe from behind a TOR node. But when you actually spend time with people, you can get optimistic. I mean, look at the outpouring of help and gifts to Japan, and to Haiti and Chile before that.

And then there are the little things. This week I’m on the road and needed a quick dinner. So I stop into a Chipotle, because I’m a burrito junkie. I notice the woman ahead of me talking about not having any money with her, and how if they don’t take her coupon, she has to leave. I figure worst case, I’ll cover her burrito, since that’s the right thing to do. But the guy at the register is way ahead of me and lets it go. Turns out they did take her coupon, and it entitled her to not just her meal, but 2 others. So she turns to me and the lady behind me and says she’s got it. Yeah, man, a free burrito.

And that made me remember that one person can do an act of kindness at any time. Maybe it’s funding a Kiva loan. Maybe it’s volunteering at a local food bank or other worthy local organization. Maybe it’s tutoring/mentoring someone without the opportunities you had. The real message of the Happyness pitch is that you have a choice. You can deal with everything either negatively or positively. Yes, it’s a struggle, because negativity is easier – at least for me, and probably for you too. But remember that every time you feel rage, you can turn it around. Do something nice instead of something mean. Novel idea, eh? Now I’ve got to practice what I preach. Talk is cheap and I’ve been talking a lot. Maybe I’ll head over to Chipotle and pay it forward. Maybe you should too.

-Mike

Photo credits: “happy burrito” originally uploaded by akeg

**Incite 4 U**

**HP’s Strategy: cloudy and not so seamless:** Apparently I drew the short straw and ended up attending HP’s annual analyst shindig. Being locked up in a room with 300 analysts is interesting, but let’s just say it’s good I don’t carry a weapon in CA. HP’s strategy is, amazingly enough, all about the cloud. Their tagline is “seamless, secure, and context-aware.” Hmmm. Security is perceived as important for cloud stuff, so I get that. I’ll even say that on paper HP’s security story is pretty good. But then I hit myself with the clue bat. This is a company that had very few security assets and capabilities – until, a year ago, they rapidly acquired TippingPoint, Fortify, and ArcSight. Now they claim to be a Top 5 security provider, which seems to involve creative accounting. I guess they sell a lot of secure PCs. As I’ve mentioned before, customers can’t implement a marketecture. HP has years of integration work to do, and they need a larger presence on the endpoint and in network security products. An IPS is not a network security strategy. So HP will continue to buy stuff. They have to, but the issue is making their products seamless. Right now it’s anything but. – MR

**Amazon drops the vBomb:** As a loyal Amazon Web Services subscriber, I received another morning email update. In my massively sleep-deprived state I figured it was merely another cool service like Elastic Beanstalk, but once the coffee kicked in my eyes popped wide open. AWS added a massive networking update that basically wipes out the divisions between VPC and public instances (if you want) and supports complex architectures such as a hybrid internal data center-to-VPC-to-Internet-facing stack. Hoff, as usual, has a good take, and I’ll probably need to write it up for Securosis. After I rewrite significant chunks of the CCSK class. This update isn’t everything a large enterprise needs, but it’s a giant leap forward. Heck, we finally get outbound filtering! – RM

**Incentives:** Tax incentives to promote cybersecurity? Apparently that’s the idea. But my question is: why would voluntary participation be any better for security programs than mandatory compliance? I have two problems with opt-in programs. First, the level of effort is always less than or equal to the incentive, and half-assed, underfunded security programs don’t cut it. Second, the effort devolves into pure marketing to give the appearance of being secure. Think PCI compliance, but without the audit. Now couple that with complex stacks of software, and try


Is the Virtual Desktop Hype Real?

I’ve been hearing a lot about virtual desktops (VDI) lately, and am struggling to figure out how interested you all really are in using them. For those of you who don’t track these things, VDI is an application of virtualization where you run a bunch of desktop images on a central server, and employees or external users connect via secure clients from whatever system they have handy.

From a security standpoint this can be pretty sweet. Depending on how you configure them, virtual desktops can be on-demand, non-persistent, and totally locked down. We can use all sorts of whitelisting and monitoring technologies to protect them – even the persistent ones. There are also implementations for deploying individual apps instead of entire desktops. And we can support access from anywhere, on any device. I use a version of this myself sometimes, when I spin up a virtual Windows instance on AWS to perform research or testing I don’t want touching my local machine. Virtual desktops can be a good way to allow untrusted systems access to hardened resources, although you still need to worry about compromise of the endpoint leading to lost credentials and screen scraping/keyboard sniffing. But there are technologies (admittedly not perfect ones) to further reduce those risks.

Some of the vendors I talk with on the security side expect to see broad adoption, but I’m not convinced. I can’t blame them – I do talk to plenty of security departments which are drooling over these things, and plenty of end-user organizations which claim they’ll be all over them like a frat boy on a fire hydrant. My gut feeling, though, is that virtual desktop use will grow, but be constrained to particular scenarios where these things make sense. I know what you’re thinking – “no sh* Sherlock” – but we tend to cater to a … more discerning reader. I have spoken with both user and vendor organizations which expect widespread and pervasive deployment. So I need your opinions. Here are the scenarios I see:

• To support remote access. Probably ephemeral desktops, with different options for general users and IT admins.
• For guest/contractor/physician access to a limited subset of apps. This includes things like docs connecting to check lab results.
• Call centers and other untrusted internal users.
• As needed to support legacy apps on tablets.
• For users you want to allow on unsupported hardware, but probably only for a subset of your apps.

That covers a fair number of desktops, but only a fraction of what some other analyst types are calling for. What do you think? Are your companies really putting muscle behind virtual desktops on a large scale? I think I know the answer, but want a sanity check for my ego here. Thanks…


Technology Caste System

There is a caste system in technology. It’s an engineering caste system, or at least that’s what I call it: a feeling of superiority developers have over their QA, IT, product management, and release management brethren. Software developers at every firm I have ever worked for – large and small – share a condescending view of their co-workers when it comes to technology. They are at the top of the totem pole, and act as if their efforts are the most important.

It starts in college, where software programs are more competitive to get into and require far more rigorous curricula. It is fostered by the mindset of programmers, who approach their profession more like a religion. It’s not a 9-5 day job, and most 20-something developers I have worked with put in longer hours, and more time on self-education, than any other profession I have ever seen. They create something from nothing every day; and with software, anything is possible. The mindset is reinforced by pay scales and recognition when products are delivered. Their technical acumen runs far deeper than the other groups’, and they don’t respect those without it. The relationship between the professions is reinforced when problems arise, as developers are the ones explaining how things work and advising those around them. It’s the engineering team that writes the trickier test cases, and the engineers who come up with the best product ideas. Heck, in the last four organizations I have run, to solve serious IT issues I had to assign members of the engineering team to debug and fix them. They are technology rock stars and prima donnas. Right or wrong, good or bad, this attitude is commonplace.

Why do I bring this up? Reviewing the marketing and sales collateral from several security vendors who are applying their IT marketing angles to software developers, I see a lot of approaches that will not work. When it comes to understanding buying centers, those who have traditionally sold into IT don’t get the developer mindset. They approach sales and marketing as if the two audiences were interchangeable, but they are not. The things developers consider important are not the same things the rest of IT considers important. It is unlikely your “IT champion” can cross-pollinate your ideas to the development team – both because your champion is likely seen as an outsider by the developers, and due to internal tension between the groups. Development sets development requirements. White box test tools? Web application assessments? WAF? Even pen testing? These all need different buyers, with different mindsets and requirements than the buyers of other IT kit – especially compared to network operations gear. The product and the value proposition need to work in the development context.

Most sales and marketing teams want to target the top – the CIO – and work their way down from there. That works for most of IT, but not with developers, who have their own requirements over and above business requirements, and often neither fear nor respect upper management. They are far less tolerant of marketing-speak and BS, and much more focused on getting things done easily, so you had better show value quickly or you’re wasting their time. UI, workflow, integration, and API options need to be more flexible. When it comes to application security, it’s a developer’s world, so adjust or be ignored.


Network Security in the Age of *Any* Computing: Integration

Supporting any computing – which we have defined as access to your critical information from anywhere, at any time, on any device – requires organizations to restrict access to specific communities of users/devices, based on organizational policies. To do this you need to integrate with your existing installed base of security and networking technologies, ensuring management leverage and reducing complexity. No easy task, for sure. So let’s discuss how you can implement network access control (NAC) to play nicely in the larger sandbox.

**Authentication**

When an endpoint/mobile device joins the network, you can start with either specific authentication or network-based detection of the device, via passive monitoring of network traffic or the MAC address of the connecting device. How strong the authentication needs to be comes down to whether policies based on device and/or location will be granular enough. If you want policies to take into account who is driving the device, you’ll need to know the identity of the user. Although there are techniques to identify users passively, we prefer stronger methods of determining identity, which require integration with an authoritative source of identity information. The integrated directory might be Active Directory, LDAP, or RADIUS. Authentication happens via a persistent agent, a connection portal (provided as part of the NAC solution), or a protocol such as 802.1X. Keep in mind that identity is a dynamic beast – users and groups are constantly changing – so a one-time dump of the directory is not sufficient. You’ll want to check for user/group moves, adds, and changes on an ongoing basis.

At authentication time you also need to figure out what’s going on with the device, which involves inspecting it to understand its security posture.

**Endpoint/Mobile Device Integration**

The first decision is how deeply to scrutinize endpoints/mobiles when they connect. Obviously there is a time cost to scanning and checking security posture, which can cause user grumpiness. Though most organizations want to make sure devices are properly configured upon access, many aren’t ready to react to the answers they may get. Do you block access when a device violates policy? Even when the user has a legitimate and business-critical need to be on the network?

As we discussed briefly in the post on policies, you may want to define policies based on the security controls in place on the endpoints/mobiles. Compromising your security by providing access to compromised devices makes no sense, so what remediation should happen? Do you patch the device? That requires integration with the patch management product. Do you reconfigure the device? Or update the endpoint protection platform? It depends on the nature of the policy violation and which information the user can access, but you want options for how to remediate – and each option requires support from your NAC vendor. You could just ignore the details and block users whose devices don’t comply with policy, but that tends to end with your rainmaker calling the CEO because she can’t get into the ordering system to book that critical deal. Which presumably won’t work out very well for you.
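To make those remediation options concrete, here is a minimal sketch (in Python) of the posture-to-action decision logic described above. This is not any NAC vendor’s API – the violation names, actions, and `Device` structure are all hypothetical illustrations:

```python
from dataclasses import dataclass, field

# Hypothetical posture violations mapped to the remediation options discussed
# above. A real NAC product exposes its own policy model; this just illustrates
# that the response should depend on the violation and what the user can access.
REMEDIATION = {
    "missing_patches":     "patch",        # needs patch management integration
    "bad_configuration":   "reconfigure",
    "stale_av_signatures": "update_epp",   # update the endpoint protection platform
}

@dataclass
class Device:
    user: str
    accesses_sensitive_data: bool
    violations: list = field(default_factory=list)

def remediation_plan(device: Device) -> list:
    """Return remediation actions instead of blindly blocking the device."""
    actions = []
    for v in device.violations:
        action = REMEDIATION.get(v)
        if action:
            actions.append(action)
        elif device.accesses_sensitive_data:
            actions.append("quarantine_vlan")  # unknown violation + sensitive access
        else:
            actions.append("restrict_to_guest")
    return actions or ["allow"]

# The rainmaker's laptop gets patched rather than blocked outright.
print(remediation_plan(Device("rainmaker", True, ["missing_patches"])))
```

The point is simply that “block” should be the fallback, not the only answer.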
Another consideration is that devices may be compromised after connecting. Detecting a compromised device involves both re-authenticating devices periodically (to ensure a man-in-the-middle attack hasn’t happened) and reassessing the security posture of the endpoint/mobile device every so often. Another tactic is to detect compromised devices by their behavior, which requires continuously checking devices for anomalous activity. Most NAC devices already monitor the network to detect new devices, so this anomaly detection capability is frequently available.

Now that you know the posture of the endpoint/mobile, you can determine the appropriate level of access for it, enforcing that policy at the network layer via integration with other infrastructure.

**Network Integration**

There are plenty of ways to enforce network access policies using your switches and firewalls. Let’s take a look at the major techniques:

• Inline device: One option for enforcing access policies is to sit in the middle of the connection, able to block unauthorized devices as needed. Networking infrastructure players who offer NAC can provide multipurpose boxes that act as inline enforcement points. There isn’t much more to say about this, except that the approach has a dramatic impact on network design.

• CLI: The good old command line is still one of the more popular methods of enforcing access control. This involves the NAC equipment establishing a secure, authenticated session (typically using SSH or SSL) with a switch or firewall and making an appropriate change – moving a user onto a guest VLAN, say, or blocking their IP from accessing a protected network. Obviously this requires specific integration between vendors, but given that a handful of vendors control the switch and firewall markets, that isn’t too daunting. That said, there may be delays in compatibility when network/security gear is upgraded, so check for NAC support before any upgrades. (See the sketch after this list.)

• 802.1X: The 802.1X protocol is typically used for authentication on connect (as described above), for which it is well suited. But the protocol also includes an option to send enforcement policies to endpoints, which gets far more involved. Even though 802.1X is a mature standard, interoperability can still be problematic in heterogeneous network/security environments. Individual vendors have generally sorted out interoperability between their own NAC and general networking products, but it’s never trivial to make .1X work at enterprise scale.

• SNMP: Another option for integration with switches is using SNMP to send commands to the networking gear. The advantages of SNMP center around ubiquity of support, but security is a serious concern (especially with early versions of the protocol), so pay attention to device authentication and session security.

• All of the above: As usual, there is plenty of religion about which integration technique is best, which continues to amuse us. Our stance hasn’t changed: diversity in integration techniques is better than no diversity. We also prefer multiple enforcement tactics – multiple, layered controls provide additional hurdles for attackers. That means you want
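As a sketch of the CLI tactic above: a NAC device might establish an SSH session to a switch and move an offending port onto a guest VLAN. This is an illustration only – it assumes a Cisco IOS switch and the open-source netmiko library, and the hostname, credentials, and VLAN number are hypothetical:

```python
from netmiko import ConnectHandler  # pip install netmiko (assumed available)

# Hypothetical switch details - a real NAC product manages credentials securely.
switch = {
    "device_type": "cisco_ios",
    "host": "switch01.example.com",
    "username": "nac-service",
    "password": "REDACTED",
}

def quarantine_port(interface: str, guest_vlan: int = 99) -> str:
    """Move a switch port onto the guest VLAN over an authenticated SSH session."""
    with ConnectHandler(**switch) as conn:
        output = conn.send_config_set([
            f"interface {interface}",
            f"switchport access vlan {guest_vlan}",  # relegate to guest VLAN
        ])
        conn.save_config()  # persist the change
    return output

# Example: quarantine the port where a non-compliant device connected.
print(quarantine_port("GigabitEthernet0/12"))
```

This also shows why vendor-specific integration matters: the commands above only work against a particular switch OS, which is exactly the compatibility caveat noted in the CLI bullet.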


Network Security in the Age of *Any* Computing: Policy Granularity

As we discussed in the last post, there are a number of ways to enforce access policies for any computing. Given the flexibility and dynamic nature of business, access policies should provide sufficient flexibility to meet business needs. To illustrate, let’s look at how an enforcement mechanism like network access control (NAC) can provide this kind of granularity. What you want is to map out access models and design a set of policies to provide users with the right access, at the right time, from the right device. Let’s focus on mobile devices, the poster children for any computing, and typically the hardest to secure. First we define three general categories of mobile devices trying to connect to your network:

• Corporate devices: You have issued these devices to your employees, and they are expected to get full access to pretty much whatever they need. You’ll want to verify both the user (strong authentication) and the device itself. It is also important to monitor what the device is doing, to ensure authorized use after the pre-connect authentication.

• Personal devices: Sure, it’s easy to just implement a blanket policy of no personal devices. There are big companies doing that right now, regardless of user grumpiness over not being able to use their fancy new iPads at work. But if draconian isn’t an option in your shop, you could move authenticated but unauthorized devices onto a logical network configured only for outbound Internet access. Or provide access to non-critical resources such as employee wikis and the like, but block access to corporate email servers, assuming you don’t want company email on these devices.

• Everything else: Lots of guests show up at your facilities and try to connect to your networks – both wired and wireless. If they successfully gain access via WPA2 or a physical port, they need to be bounced from the network. This represents the “access” part of network access control.

Depending on your pain threshold, there are many other device types and usage models that can be profiled to create specific enforcement policies. Granularity is limited only by your ability to map use cases and design access policies. Let’s not forget that you can also implement policies based on roles. For instance, your marketing group might have network access with iPads, since every good marketer needs one. But if engineers do not have a business justification for iPad use, that group could be blocked. Policies aren’t defined merely by what (device) the user has, but also by who they are.

**Posture-based Policies**

What about policies based on the defenses implemented on the endpoint or mobile device – such as AV, full disk encryption, and remote wipe? Clearly you need to control those devices as well. Restricting users without certain patches on their devices is legitimate. Or you might want to keep end users off your protected network segment if they don’t have full disk encryption active, to avoid breach disclosure if they lose the device. It’s not just about knowing what the device is, and who is using it, but also what’s on it.

As you can see, this problem includes at least 3 dimensions (device, user, and posture), which is why getting policies right is a prerequisite for controlling access. We’ll talk more about getting the policies right incrementally when we wrap up the series. Which, once again, brings up our main point: make sure you can enforce security policies that reflect your desired security posture, given the context of your business processes. Don’t force your security policy to map to your enforcement mechanisms.
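To illustrate those three dimensions (who the user is, what device they’re on, and its posture), here is a minimal sketch of a granular access policy expressed as data. The roles, device categories, posture checks, and segment names are hypothetical – real NAC products have their own policy languages:

```python
# Hypothetical policy table: (role, device_category) -> required posture + segment.
# Illustrates policy granularity; not any vendor's actual policy syntax.
POLICIES = {
    ("marketing", "corporate"):  {"require": {"disk_encryption"}, "segment": "internal"},
    ("marketing", "personal"):   {"require": set(),               "segment": "outbound_only"},
    ("engineering", "personal"): None,  # no business justification: blocked
}

def decide(role: str, category: str, posture: set) -> str:
    """Return the network segment for a connecting device, or 'blocked'."""
    policy = POLICIES.get((role, category))
    if policy is None:
        return "blocked"       # guests and unjustified devices get bounced
    if not policy["require"] <= posture:
        return "quarantine"    # e.g., full disk encryption not active
    return policy["segment"]

# A marketer's corporate laptop without disk encryption lands in quarantine:
print(decide("marketing", "corporate", posture={"av_current"}))
```

The useful property is that adding granularity means adding rows, not re-architecting enforcement – which is why mapping use cases first, then writing policies, is the right order.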


Table Stakes

This morning I published a column over at Dark Reading that kicked off some cool comments on Twitter. Since, you know, no one leaves blog comments anymore. The article is the upshot of various frustrations that have annoyed me lately. To be honest, I could have summarized the entire thing as “grow the f* up”. I’m just as tired of the “security is failing” garbage as I am of ridonkulous fake ROI models, our obsession with threats as the only important metric, and the inability of far too many security folks to recognize operational realities. Since I’m trying to be better about linking to major articles, here’s an excerpt:

> There’s been a lot of hand-wringing in the security community lately. Complaints about compliance, vendors and the industry, or the general short-sightedness of those we work for, who define our programs based on the media and audit results. Now we whine about developers ignoring us, executives mandating support for iPads we can’t control (while we still use the patently-insecureable Windows XP), executives who don’t always agree with our priorities, or bad guys coming after us personally. We’re despondent over endless audit and assessment cycles, FUD, checklists, and half-baked products sold for fully-baked prices; with sales guys targeting our bosses to circumvent our veto.
>
> My response? Get over it. These are the table stakes, folks, and if you aren’t up for the game, here’s a dollar for the slot machines.


FAM: Market Drivers, Business Justifications, and Use Cases

Now that we have defined File Activity Monitoring (FAM), it’s time to talk about why people are buying it, how it’s being used, and why you might want it.

**Market Drivers**

As I mentioned earlier, the first time I saw FAM was when I dropped the acronym into the Data Security Lifecycle. Although some people were tossing the general idea around, there wasn’t a single product on the market. A few vendors were considering introducing something, but in conversations with users there clearly wasn’t market demand. This has changed dramatically over the past two years, due to a combination of indirect compliance needs, headline-driven security concerns, and gaps in existing security tools. Although the FAM market is completely nascent, interest is slowly growing as organizations look for better handles on their unstructured file repositories. We see three main market drivers:

• As an offshoot of compliance. Few regulations require continuous monitoring of user access to files, but quite a few require some level of audit of access control, particularly for sensitive files. As you’ll see later, most FAM tools also include entitlement assessment, and they monitor and clearly report on activity. We see some organizations consider FAM initially to help generate compliance reports, and later activate additional capabilities to improve security.

• Security concerns. The combination of APT-style attacks against sensitive data repositories, and headline-grabbing cases like Wikileaks, is driving clear interest in gaining control over file repositories.

• To increase visibility. Although few FAM deployments start with the goal of providing visibility into file usage, once a deployment starts it’s not uncommon to use it to gain a better understanding of how files are used within the organization, even when this doesn’t meet a specific compliance or security need.

FAM, like its cousin Database Activity Monitoring, typically starts as a smaller project to protect a highly sensitive repository, and then grows to expand coverage as it proves its value. Since it isn’t generally required directly for compliance, we don’t expect the market to explode, but rather to grow steadily.

**Business Justifications**

If we turn the market drivers around, four key business justifications emerge for deploying FAM:

• To meet a compliance obligation or reduce compliance costs. For example, to generate reports on who has access to sensitive information, or who accessed regulated files over a particular time period.

• To reduce the risk of major data breaches. While FAM can’t protect every file in the enterprise, it provides significant protection for the major file repositories that turn a self-contained data breach into an unmitigated disaster. You’ll still lose files, but not necessarily the entire vault.

• To reduce file management costs. Even if you use document management systems, few tools provide as much insight into file usage as FAM. By tying usage, entitlements, and user/group activity to repositories and individual files, FAM enables robust analysis to support other document management initiatives such as consolidation.

• To support content discovery. Surprisingly, many content discovery tools (mostly Data Loss Prevention) and manual processes struggle to identify file owners. FAM can use a combination of entitlement analysis and activity monitoring to help determine who owns each file.

**Example Use Cases**

By now you likely have a good idea how FAM can be used, but here are a few direct use cases:

• Company A deployed FAM to protect sensitive engineering documents from external attacks and insider abuse. They monitor the shared engineering file share, generate a security alert if more than 5 documents are accessed in less than 5 minutes, and then block copying of the entire directory (a rough sketch of this rule appears at the end of this post).

• A pharmaceutical company uses FAM to meet compliance requirements for drug studies. The tool generates a quarterly report of all access to study files, and generates security alerts when IT administrators access the files.

• Company C recently performed a large content discovery project to locate all regulated Personally Identifiable Information, but struggled to determine file owners. Their goal is to reduce sensitive data proliferation, but simple file permissions rarely indicate the file owner, which they need to know before removing or consolidating data. With FAM they monitor the discovered files to determine the most common accessors – who are often the file owners.

• Company D has had problems with sales executives sucking down proprietary customer information before taking jobs with competitors. They use FAM to generate alerts based on both high-volume access and authorized users accessing older files they’ve never touched before.

As you can see, the combination of tying users to activity, plus the capability to generate alerts (or block) based on flexible use policies, makes FAM interesting. Imagine being able to kick off a security investigation based on a large amount of file access, or low-and-slow access by a service or administrative account.

**File Activity Monitoring vs. Data Loss Prevention**

The relationship between FAM and DLP is interesting. The two technologies are extremely complementary – so much so that in one case (as of this writing) FAM is a feature of a DLP product – but they achieve slightly different goals. The core value of DLP is its content analysis capability: the ability to dig into a file and understand the content inside. FAM, on the other hand, doesn’t necessarily need to know the contents of a file or repository to provide value. Certain access patterns themselves often indicate a security problem, and knowing the exact file contents isn’t always needed for compliance initiatives such as access auditing. FAM and DLP work extremely well together, but each provides plenty of value on its own.
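As promised above, here is a rough sketch of the Company A rule (“more than 5 documents accessed in less than 5 minutes”) as a sliding-window threshold. The event format and alert handling are hypothetical illustrations, not any FAM product’s API – real products express this in their own policy consoles:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 5 * 60   # 5 minutes
MAX_DOCS = 5              # more than 5 distinct documents triggers an alert

# Per-user sliding window of (timestamp, path) access events.
recent = defaultdict(deque)

def on_file_access(user: str, path: str, timestamp: float) -> bool:
    """Record an access event; return True if the user tripped the rule."""
    window = recent[user]
    window.append((timestamp, path))
    # Drop events that fell out of the 5-minute window.
    while window and timestamp - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    distinct = {p for _, p in window}
    if len(distinct) > MAX_DOCS:
        print(f"ALERT: {user} touched {len(distinct)} docs in 5 minutes")
        return True   # a real deployment might also block copying of the share
    return False

# Example: simulate one user rapidly reading engineering documents.
for i in range(7):
    on_file_access("eng_user", f"/shares/engineering/doc{i}.dwg", 1000.0 + i)
```

The same structure handles the “low-and-slow” case mentioned above – just widen the window and lower the threshold for service and administrative accounts.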


Incite 3/9/2011: Greed Is (fill in the blank)

As most of you know, I’m a huge NFL fan. In fact, I made my kids watch the combine on NFL Network two weeks ago when the Boss was away. The frickin’ combine. I was on the edge of my seat watching some guy run a 4.34 40-yard dash. And heard the groans of the crowd when a top-rated offensive tackle did only 21 bench presses of 225 pounds. That’s it? And some defensive lineman did 50 reps on the bench. 50 reps. If this DT thing doesn’t work out, I’m sure he’s got a future benching Pintos in the circus.

Unless you have been hiding under a rock, you also know the NFL players’ union and owners are locked in a stand-off to negotiate a new collective bargaining agreement. It’s hard to sympathize with either side – either the billionaires or the multi-millionaires. Though when you read this truly outstanding piece by Bill Simmons of ESPN, you get a different perspective, and it’s hard to feel anything but disdain for the owners. Though I’m not going to shed any tears for the players either. But if you really want, you can feel sad for the biggest bust in NFL draft history, because he made $38 million and still had his house end up in foreclosure.

I’m not sure about you, but Wall Street is still one of my all-time favorite movies. Though it’s debatable whether Bud Fox is #winning nowadays. When Gekko does his soliloquy at the annual meeting, anchored by the catchphrase “Greed is good,” I still get chills down my spine. Although I’m not sure I believe it any more.

You see, I grew up in a pretty modest home. We weren’t poor, but we weren’t rich either. I had stuff, but not the best stuff. I did things, but my friends did more. So I’ve always chased the money, most likely out of some misguided childhood belief that I missed out on something. That pursuit has brought me nothing but angst. I’ve made poor career decisions. I’ve worked with assholes. And I didn’t get rich. Sure, I’m comfortable, and I’m fortunate to be able to provide a nice lifestyle for my family, but I can’t buy a plane. At one point in my life, I’d have viewed myself as a failure because of that. So no more chasing the money. If I find it, all the better, but my career decisions are now based on what I like to do, not how much I can make.

As I’ve gotten older, I have also realized that what’s right for me may not be right for you. So if you still want to own a plane, more power to you. We need folks with that drive to build great companies, create lots of value, and spur the economy. Just don’t ask me to play along. I’m not interested in running a competitor out of business. Nor am I interested in extracting every nickel and dime from our clients, or screwing someone over to buy another yacht. And that’s also why I’m not the owner of an NFL team. So I guess my answer is “Greed is not interesting anymore.”

-Mike

Photo credits: “Greed” originally uploaded by Mike Smail

**Incite 4 U**

**We suck at hiring:** Many of you work at reasonably sized companies. You know, the kind of company with an HR department to tell you not to surf pr0n on your corporate laptop. Those helpful HR folks also lead the hiring process for your security folks, right? This post by Chief Monkey should hit you in the banana (or taco – we don’t want to discriminate). I usually like a rent-to-own approach. Offer promising folks a short-term contract, and if they have the goods, bring them aboard. Yes, I know that in a competitive job market (like security), some candidates may not like it. But your organization is probably more screwed up than anything they have seen before, so this provides some risk mitigation for the candidate as well. They can opt out before it gets much more difficult. – MR

**Just say no (Rich’s take):** Believe it or not, sometimes saying no is the right thing to do. I realize we’re all new-age, self-actualized security pros these days, but sometimes you need to hit the brakes before ramming into the back of that car parked in the center lane while some doofus tries to finish a text message. Wells Fargo is clamping down on any use of employee-owned devices, and simultaneously experimenting with corporate iPads to supplement corporate smartphones. In a business like financial services, it only makes sense to operate a more restrictive environment and require employees to use personal devices and personal networks for personal stuff. Not that I’m saying the rest of you need to be so restrictive – you are not one of the biggest financials in the world, and you probably won’t be able to get away with being so draconian. Heck, thanks to iPhones/Android/WinMo7, your users can still access Facebook all they want while at work… without hitting your network. – RM

**Just say no (Adrian’s take):** Wells Fargo’s IT department is saying no to personal devices being connected to the corporate network. Part of me says “Good for them!” I don’t use the same machine to surf the web as I do for online banking, so SoD (Separation of Devices) seems like a good idea. Part of me thinks Wells Fargo makes so many bad decisions in general – what if this is wrong too? I started to wonder if we could see a time when the local area network is only partially secured, and banks let employees use their own devices on the less secure area. What if critical applications and processes are heavily secured in the cloud, moved away from the users who create a lot of the security problems? Would that be a better model for separating general usage from critical processes and machines? Food for thought. – AL

**Looking for work, Tier 1 spammer…** So Soloway is out of the big house. I wonder if


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.