FAM: Market Drivers, Business Justifications, and Use Cases

Now that we have defined File Activity Monitoring it’s time to talk about why people are buying it, how it’s being used, and why you might want it.

Market Drivers

As I mentioned earlier, the first time I saw FAM was when I dropped the acronym into the Data Security Lifecycle. Although some people were tossing the general idea around, there wasn’t a single product on the market. A few vendors were considering introducing something, but in conversations with users there clearly wasn’t market demand. This has changed dramatically over the past two years, due to a combination of indirect compliance needs, headline-driven security concerns, and gaps in existing security tools. Although the FAM market is completely nascent, interest is slowly growing as organizations look for a better handle on their unstructured file repositories. We see three main market drivers:

  • As an offshoot of compliance. Few regulations require continuous monitoring of user access to files, but quite a few require some level of audit of access control, particularly for sensitive files. As you’ll see later, most FAM tools also include entitlement assessment, and they monitor and clearly report on activity. We see some organizations consider FAM initially to help generate compliance reports, and later activate additional capabilities to improve security.
  • Security concerns. The combination of APT-style attacks against sensitive data repositories and headline-grabbing cases like WikiLeaks is driving clear interest in gaining control over file repositories.
  • To increase visibility. Although few FAM deployments start with the goal of providing visibility into file usage, once a deployment starts it’s not uncommon to use it to gain a better understanding of how files are used within the organization, even if this isn’t to meet a compliance or security need.

FAM, like its cousin Database Activity Monitoring, typically starts as a smaller project to protect a highly sensitive repository and then grows to expand coverage as it proves its value. Since it isn’t generally required directly for compliance, we don’t expect the market to explode, but rather to grow steadily.

Business Justifications

If we turn the market drivers around, four key business justifications emerge for deploying FAM:

  • To meet a compliance obligation or reduce compliance costs. For example, to generate reports on who has access to sensitive information, or who accessed regulated files over a particular time period.
  • To reduce the risk of major data breaches. While FAM can’t protect every file in the enterprise, it provides significant protection for the major file repositories whose compromise turns a self-contained data breach into an unmitigated disaster. You’ll still lose files, but not necessarily the entire vault.
  • To reduce file management costs. Even if you use document management systems, few tools provide as much insight into file usage as FAM. By tying usage, entitlements, and user/group activity to repositories and individual files, FAM enables robust analysis to support other document management initiatives such as consolidation.
  • To support content discovery. Surprisingly, many content discovery tools (mostly Data Loss Prevention) and manual processes struggle to identify file owners. FAM can use a combination of entitlement analysis and activity monitoring to help determine who owns each file.
Example Use Cases

By now you likely have a good idea how FAM can be used, but here are a few direct use cases:

  • Company A deployed FAM to protect sensitive engineering documents from external attacks and insider abuse. They monitor the shared engineering file share, generate a security alert if more than 5 documents are accessed in less than 5 minutes, and then block copying of the entire directory.
  • A pharmaceutical company uses FAM to meet compliance requirements for drug studies. The tool generates a quarterly report of all access to study files and generates security alerts when IT administrators access files.
  • Company C recently performed a large content discovery project to locate all regulated Personally Identifiable Information, but struggled to determine file owners. Their goal is to reduce sensitive data proliferation, but simple file permissions rarely indicate the file owner, which is needed before removing or consolidating data. With FAM they monitor the discovered files to determine the most common accessors – who are often the file owners.
  • Company D has had problems with sales executives sucking down proprietary customer information before taking jobs with competitors. They use FAM to generate alerts based on both high-volume access and authorized users accessing older files they’ve never touched before.

As you can see, the combination of tying users to activity, with the capability to generate alerts (or block) based on flexible use policies, makes FAM interesting. Imagine being able to kick off a security investigation based on a large amount of file access, or low-and-slow access by a service or administrative account.

File Activity Monitoring vs. Data Loss Prevention

The relationship between FAM and DLP is interesting. These two technologies are extremely complementary – so much so that in one case (as of this writing) FAM is a feature of a DLP product – but they also achieve slightly different goals. The core value of DLP is its content analysis capability: the ability to dig into a file and understand the content inside. FAM, on the other hand, doesn’t necessarily need to know the contents of a file or repository to provide value. Certain access patterns themselves often indicate a security problem, and knowing the exact file contents isn’t always needed for compliance initiatives such as access auditing. FAM and DLP work extremely well together, but each provides plenty of value on its own.
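To make the alerting side of these use cases concrete, here is a minimal sketch of how a volume-based policy like Company A’s (more than 5 documents in under 5 minutes) might be evaluated. The event fields, thresholds, and share paths are illustrative assumptions, not any particular product’s policy language.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical FAM-style policy: alert when one user opens more than
# MAX_FILES distinct documents within WINDOW (Company A's "5 in 5 minutes").
MAX_FILES = 5
WINDOW = timedelta(minutes=5)

recent = defaultdict(deque)  # user -> deque of (timestamp, file_path) events

def record_access(user, file_path, ts):
    """Record a file-open event and return True if the policy is violated."""
    events = recent[user]
    events.append((ts, file_path))
    # Drop events that have aged out of the sliding window.
    while events and ts - events[0][0] > WINDOW:
        events.popleft()
    distinct_files = {path for _, path in events}
    return len(distinct_files) > MAX_FILES

# Simulate an unusually fast crawl of the engineering share.
start = datetime(2011, 3, 1, 9, 0, 0)
for i in range(7):
    ts = start + timedelta(seconds=30 * i)
    if record_access("jdoe", f"//fileserver/engineering/plan_{i}.vsd", ts):
        print(f"ALERT: jdoe exceeded {MAX_FILES} files within {WINDOW}")
        break
```

A real FAM deployment would enrich each event with directory group information and could trigger a blocking action rather than just an alert, but the sliding-window idea is the same.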


Incite 3/9/2011: Greed Is (fill in the blank)

As most of you know, I’m a huge NFL fan. In fact I made my kids watch the combine on NFL Network two weeks ago when the Boss was away. The frickin’ combine. I was on the edge of my seat watching some guy run a 4.34 40-yard dash. And heard the groans of the crowd when a top-rated offensive tackle did only 21 bench presses of 225 pounds. That’s it? And some defensive lineman did 50 reps on the bench. 50 reps. If this DT thing doesn’t work out, I’m sure he’s got a future benching Pintos in the circus. Unless you have been hiding under a rock, you also know the NFL players’ union and owners are locked in a stand-off to negotiate a new collective bargaining agreement. It’s hard to sympathize with either side – either the billionaires or the multi-millionaires. Though when you read this truly outstanding piece by Bill Simmons of ESPN, you get a different perspective, and it’s hard to feel anything but disdain for the owners. Though I’m not going to shed any tears for the players either. But if you really want, you can feel sad for the biggest bust in NFL draft history, because he made $38 million and still had his house end up in foreclosure. I’m not sure about you, but Wall Street is still one of my all-time favorite movies. Though it’s debatable whether Bud Fox is #winning nowadays. When Gekko does his soliloquy at the annual meeting, anchored by the catchphrase “Greed is good,” I still get chills down my spine. Although I’m not sure I believe it any more. You see, I grew up in a pretty modest home. We weren’t poor, but we weren’t rich either. I had stuff, but not the best stuff. I did things, but my friends did more. So I’ve always chased the money, most likely out of some misguided childhood belief that I missed out on something. That pursuit has brought me nothing but angst. I’ve made poor career decisions. I’ve worked with assholes. And I didn’t get rich. Sure, I’m comfortable and I’m fortunate to be able to provide a nice lifestyle for my family, but I can’t buy a plane. At one point in my life, I’d have viewed myself as a failure because of that. So no more chasing the money. If I find it, all the better, but my career decisions are based on what I like to do, not how much I can make. As I’ve gotten older, I have also realized that what’s right for me may not be right for you. So if you still want to own a plane, more power to you. We need folks with that drive to build great companies and create lots of value and spur the economy. Just don’t ask me to play along. I’m not interested in running a competitor out of business. Nor am I interested in extracting every nickel and dime from our clients or screwing someone over to buy another yacht. And that’s also why I’m not the owner of an NFL team. So I guess my answer is “Greed is not interesting anymore.” -Mike

Photo credits: “Greed” originally uploaded by Mike Smail

Incite 4 U

We suck at hiring: Many of you work at reasonably sized companies. You know, the kind of company with an HR department to tell you not to surf pr0n on your corporate laptop. Those helpful HR folks also lead the hiring process for your security folks, right? This post by Chief Monkey should hit you in the banana (or taco – we don’t want to discriminate). I usually like a rent-to-own approach. Offer promising folks a short-term contract, and if they have the goods bring them aboard. Yes, I know that in a competitive job market (like security), some candidates may not like it.
But your organization is probably more screwed up than anything they have seen before, so this provides some risk mitigation for the candidate as well. They could opt out before it gets much more difficult. – MR

Just say no (Rich’s take): Believe it or not, sometimes saying no is the right thing to do. I realize we’re all new-age self-actualized security pros these days, but sometimes you need to hit the brakes before ramming into the back of that car parked in the center lane while some doofus tries to finish a text message. Wells Fargo is clamping down on any use of employee-owned devices, and simultaneously experimenting with corporate iPads to supplement corporate smartphones. In a business like financial services, it only makes sense to operate a more restrictive environment and require employees to use personal devices and personal networks for personal stuff. Not that I’m saying the rest of you need to be so restrictive – you are not one of the biggest financials in the world and you probably won’t be able to get away with being so draconian. Heck, thanks to iPhones/Android/Winmo7 your users can still access Facebook all they want while at work… without hitting your network. – RM

Just say no (Adrian’s take): Wells Fargo’s IT department is saying no to personal devices being connected to the corporate network. Part of me says “Good for them!” I don’t use the same machine to surf the web as I do for online banking, so SoD (Separation of Devices) seems like a good idea. Part of me thinks Wells Fargo makes so many bad decisions in general – what if this is wrong too? I started to wonder if we could see a time when the local area network is only partially secured, and the banks let employees use their own devices on the less secure area. What if critical applications and processes are heavily secured in the cloud, as they move away from the users who create a lot of the security problems? Would that be a better model for separating general usage from critical processes and machines? Food for thought. – AL

Looking for work, Tier 1 spammer… So Soloway is out of the big house. I wonder if


The CIO Role and Security

During the e10+ event Monday at the RSA Conference, Rich and Mike moderated a panel on Optimizing Your Security Program. One of the contested topics was how to position security to upper management. Every CIO and CISO falls into the trap of having to say ‘No’ to some new idea that occurs to executive management, and then takes the blame for being “Negative Nancy”, “Dr. No”, “The Knight who says NEE”, or some collection of the Seven Dirty Words. I was surprised that so little has changed, as these were exactly the same problems I had a dozen years ago. While security threats were far simpler and fewer then, so was acknowledgement of the need for security. I guess it’s human nature that we still fall into the same traps. For example, I remember the principal VC of one of the many start-ups I worked at stating that we needed to take credit cards – I responded that it was too risky given our (total lack of) site security. He looked at me as though I was stupid, insubordinate, and insensitive to customer requirements. I remain convinced it was one of the reasons I was one of the first people let go during a series of cost-cutting RIFs. At the brokerage there was a bipolar attitude: 9 days out of 10 I needed to get the sales staff what they needed to do their jobs, and make things as easy as possible so the sales conversations were aided rather than hindered by technology. Day ten was a hair-on-fire security exercise because some broker was printing out the entire database to bring to a competitor. On the tenth day you will be asked by the CEO, “What are you doing to protect my data?” The correct answer is not, “making it super easy for people to get access,” or “exactly what you told me to.” As a CIO your priority is service, but you are responsible for security. I managed accordingly, or at least I started out that way. I figured I had the authority to nix projects that compromised data security. I felt it was my responsibility, as champion of security, to halt new projects until they addressed data and compliance issues. Outsiders felt security turned simple ideas into complex – and costly – projects. In reality, though, most new ideas were not fleshed out, and failed to account for total build costs – much less the cost of cleaning up messes down the road. Still, complexity was “my fault”. This resulted in complaints, mid-level managers trying to bypass me altogether, and bosses realigning my bonus incentives based on project completion and satisfaction of IT users. Running IT services for popularity is dangerous, and results in bizarre feature-based prioritization of improvements, with user satisfaction hinging almost purely on system stability. You add shiny toys of no consequence and you make sure things are reliable – no more. I only recommend this if you want to ‘earn’ some short-term bonuses and promotions, then quickly move to your next employer. If you want to succeed in the job for more than a couple consecutive quarters, avoid being a feature firewall, and avoid having your merit judged on convenience and system uptime. Features and new ideas are more likely to come from outside your organization than inside, and no one else wants security, so you have to field these challenges. Trying to spin security as a business enabler is great conceptually but rarely works in the real world. I used three approaches to avoid being the bad guy for advocating security:

The Security Porcupine: When I started working with sales people, I learned many sales techniques.
One of the best tactics I ever learned was the ‘Porcupine’ strategy from Tom Hopkins, which I bastardized a bit to help with IT projects. In essence, you don’t catch a porcupine when it’s thrown at you – instead you deflect it. Ask the originator how they feel the problem should be addressed. “What a great idea! How would you like to handle the [security/compliance/auditing] requirements we must meet?” This is a choice-based strategy. It gives the person who had the idea some responsibility by recognizing their idea carries a security burden; they can then help scale back their idea, or accept security controls as part of the plan. Either way, security becomes a facet of their project.

The Hidden Security Project: If it’s your responsibility to form the IT deployment plan, take responsibility for it and build security in. Weave security into the project plan, subtly or otherwise, such that it is difficult to discern core function from security or operations. Costs and functions are bundled, and a detailed presentation makes it look like you have invested time into understanding how to get the project done. Present the plan and let the executive team decide if the investment is worth it. If it passes, you have both budget and executive buy-in. If not, the work you put into the plan saved you from disaster down the road.

The Gauntlet: Accept the proposal and enter a “requirements phase”. As the project undergoes scrutiny from your team, raise security as an outstanding question. Also make sure internal compliance, security, and external auditors review the plan. It’s likely someone else will have the same security concerns you do – in addition to compliance and procedural issues you never thought of. If the idea is truly worthy, it will pass these tests. Either way, don’t fight it head-on. Most ideas die in the process, and nobody gets egg on their face when the initial state of euphoria gives way to critical reasoning. It’s a transparent pass-the-buck ruse in smaller organizations, but can harden and flesh out good ideas in larger firms.

I hate to advocate shenanigans, but business is business, and people will turn you into road pizza if you don’t protect yourself.


Network Security in the Age of *Any* Computing: Enforcement

As we continue with “Network Security in the Age of Any Computing”, we have already hit the risks and the need for segmentation to restrict access to sensitive data. Now we focus on technologies that can help restrict access – which tend to be NAC, firewalls, and other network layer controls (such as VLANs and physical segmentation). Each technology has pros and cons. There are no ‘right’ answers – just a set of compromises that must be made by weighing the various available technology options.

NAC

Everyone likes to beat on Network Access Control (NAC), including us. The technology has been around for years, and most of those years have been marketed as The Year of NAC. Unfortunately the technology was oversold years ago, and could not deliver on its promise of securing everything. Imagine that – marketing getting ahead of both technology and user requirements. But folks in the NAC space have been (more quietly) moving their products along, and more importantly building NAC capabilities into a broader network security use case. But before we jump in, let’s take a look at what NAC really does. Basically it scrutinizes the devices on your network to make sure they are configured correctly, and accessing what they are supposed to. That’s the access control part of the name. The most prevalent use case has always been guest/contractor access, where it’s particularly important to ensure any devices connecting to the network are authorized and configured correctly. Of course, under any computing, every device should now be considered a guest as much as feasible. Given the requirement to ensure the right devices access only the right resources, integrating mobile device security with Network Access Control offers a means to implement control structures so one can trust but verify these devices. Which is what it’s all about, right? So what’s the issue? Why isn’t NAC proliferating through everything and everywhere, including all these mobile devices? Like most interesting technologies, there is still too much complexity for mass market deployment. You’ll need to link up NAC with your identity infrastructure – which is getting easier through standard technologies like Active Directory, LDAP, and RADIUS, but is still not easy. From a deployment standpoint, the management devices need to see most of the traffic flowing through your network, which requires scalability and sensitivity to latency during design. Finally, you need to integrate with existing network and security infrastructure – at least if you want to actually block any bad stuff – which will be the subject of a later post in this series. As folks who follow markets for a living, we know that once the hype around any market starts dying down, the technology starts becoming more prevalent – especially at the large enterprise level. NAC is no different. You don’t hear a lot about it, but it’s happening, largely driven by the proliferation of these mobile devices and the need to more effectively segment the network.

Firewalls

As described in the PCI Guidance we excerpted, our PCI friends believe firewalls are key to network segmentation and protecting sensitive information. They are right that front-ending sensitive data stores with a firewall is a best practice to control access to sensitive segments. Unfortunately, traditional firewalls tend to only understand IP addresses, ports, and protocols. With a number of newer web-based applications – increasingly encapsulated on port 80 – that can be problematic.
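To see why that is problematic, consider a toy sketch of traditional 5-tuple rule matching. The rule table, field names, and addresses below are invented for illustration; this is not any specific firewall’s configuration syntax.

```python
import ipaddress

# Toy 5-tuple rule matching: once traffic is "TCP to port 80 on the sensitive
# segment", a traditional rule cannot tell which application is inside it.
# Rule fields, networks, and addresses are hypothetical.
RULES = [
    {"action": "allow", "proto": "tcp", "dst_net": "10.1.20.0/24", "dst_port": 80},
    {"action": "deny",  "proto": "any", "dst_net": "10.1.20.0/24", "dst_port": None},
]

def evaluate(proto, dst_ip, dst_port):
    """Return the action of the first rule matching protocol, network, and port."""
    for rule in RULES:
        if rule["proto"] not in ("any", proto):
            continue
        if ipaddress.ip_address(dst_ip) not in ipaddress.ip_network(rule["dst_net"]):
            continue
        if rule["dst_port"] not in (None, dst_port):
            continue
        return rule["action"]
    return "deny"  # default deny

# A sanctioned web app and a file-sharing site tunneled over HTTP look the same:
print(evaluate("tcp", "10.1.20.15", 80))  # allow
print(evaluate("tcp", "10.1.20.15", 80))  # allow again: same verdict, different application
```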
This is driving the evolution of the firewall to become much more application aware. We have covered that evolution extensively, so we won’t rehash it all here. But in the use case of network segmentation for dealing with mobile devices, scalability of application layer inspection remains a major concern. These devices need the ability to inspect and enforce application layer policies at multi-gigabit internal network speeds. That’s a tall order for today’s firewall devices, but as with everything else in technology, the devices continue to get faster and more mature. All hail Moore’s Law! So these evolved firewalls are also instrumental for implementing a network segmentation architecture to support any computing.

Network Layer Controls

But the path of least resistance tends to be based around leveraging devices already in place. That means using the built-in capabilities of network switches and routers to enforce the required segmentation and access control capabilities. First let’s hit on the brute force approach, which is physical segmentation. As we described in the last post, we believe that Internet access for mobile devices and guests should be on a totally disparate network. You don’t want to give a savvy attacker any chance to jump from your guest network to your internal net. This level of physical segmentation is great when the usage model supports it, but for most computing functions it doesn’t. So many folks leverage technologies such as VLANs (virtual LANs) to build logical networks on top of a single physical infrastructure. At the theoretical level, this works fine and will likely be good enough to pass your PCI assessment. That said, the objective isn’t to get the rubber stamp, but to protect the information. So we need to take a critical look at where VLANs can be broken and see whether that risk is acceptable. There are many ways to defeat VLAN-based segmentation, including VLAN hopping. To be fair, most modern switches can detect and block most of these attacks if configured correctly. That’s always the case – devices are only as strong as their configuration, and it’s rarely safe to assume a solid and secure configuration. Nor is it safe to leave the security of your critical data to a single control. Layers of security are good. More layers are better. But given that everyone has switches and they all support VLANs and physical segmentation, this will continue to be a common means for restricting access to sensitive data, which is a good thing. VLANs + firewalls + NAC provide a comprehensive system to ensure only the right devices are accessing critical data. That doesn’t mean you need to do everything, but depending on the real sensitivity of the data, it shouldn’t hurt.

Device Health

As described in


Introduction to File Activity Monitoring

A new approach to an old problem

One of the more pernicious problems in information security is allowing someone to perform something they are authorized to do, while catching when they do it in a potentially harmful way. For example, in most business environments it’s important to allow users broad access to sensitive information, but this exposes us to all sorts of data loss/leakage scenarios. We want to know when a sales executive crosses the line from accessing customer information as part of their job, to siphoning it for a competitor. In recent years we have adopted tools like Data Loss Prevention to help detect data leaks of defined information, and Database Activity Monitoring to expose deep database activity and potentially detect unusual activity. But despite these developments, one major blind spot remains: monitoring and protecting enterprise file repositories. Existing system and file logs rarely offer the level of detail needed to truly track activity, generally don’t correlate across multiple repository types, don’t tie users to roles/groups, and don’t support policy-based alerts. Even existing log management and Security Information and Event Management tools can’t provide this level of information. Four years ago, when I initially developed the Data Security Lifecycle, I suggested a technology called File Activity Monitoring. At the time I saw it as similar to Database Activity Monitoring, in that it would give us the same insight into file usage as DAM provides for database access. Although the technology didn’t yet exist, it seemed like a very logical extension of DLP and DAM. Over the past two years the first FAM products have entered the market, and although market demand is nascent, numerous calls with a variety of organizations show that interest and awareness are growing. FAM addresses a problem many organizations are now starting to tackle, and the time is right to dig into the technology and learn what it provides, how it works, and what features to look for. Imagine having a tool to detect when an administrator suddenly copies the entire directory containing the latest engineering plans, or when a user with rights to a file outside their business unit accesses it for the first time in 3 years. Or imagine being able to hand an auditor a list of all access, by user, to patient record files. Those are merely a few of the potential uses for FAM.

Defining FAM

We define FAM as: Products that monitor and record all activity within designated file repositories at the user level, and generate alerts on policy violations. This leads to the key defining characteristics:

  • Products are able to monitor a variety of file repositories, which include at minimum standard network file shares (SMB/CIFS). They may additionally support document management systems and other network file systems.
  • Products are able to collect all activity, including file opens, transfers, saves, deletions, and additions.
  • Activity can be recorded and centralized across multiple repositories with a single FAM installation (although multiple products may be required, depending on network topology).
  • Recorded activity is correlated to users through directory integration, and the product should understand file entitlements and user/group/role relationships.
  • Alerts can be generated based on policy violations, such as an unusual volume of activity by user or file/directory.
  • Reports can be generated on activity for compliance and other needs.
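To make those characteristics a bit more tangible, here is a minimal sketch of a file activity event being correlated to directory attributes. The field names, the toy directory data, and the share paths are assumptions for illustration only, not any vendor’s schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical directory attributes a FAM product might pull via AD/LDAP integration.
DIRECTORY = {
    "jdoe": {"groups": ["engineering", "vpn-users"], "title": "Design Engineer"},
    "svc-backup": {"groups": ["service-accounts"], "title": "Service Account"},
}

@dataclass
class FileEvent:
    """One monitored operation against a file repository."""
    timestamp: datetime
    user: str        # account as seen on the share
    operation: str   # open, save, delete, copy, ...
    repository: str  # e.g. an SMB/CIFS share
    path: str

def enrich(event):
    """Correlate a raw event with directory attributes, so policies and reports
    can reason about groups and roles rather than bare account names."""
    identity = DIRECTORY.get(event.user, {"groups": [], "title": "unknown"})
    return {
        "when": event.timestamp.isoformat(),
        "who": event.user,
        "groups": identity["groups"],
        "what": event.operation,
        "where": event.repository + event.path,
    }

print(enrich(FileEvent(datetime(2011, 3, 7, 14, 2), "jdoe", "open",
                       r"\\fs01\engineering", r"\plans\widget_v2.dwg")))
```

The point is simply that every recorded operation ends up tied to a user and their group/role context, which is what makes the entitlement analysis and policy-based alerting described above possible.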
You might think much of this should be possible with DLP, but unlike DLP, File Activity Monitoring doesn’t require content analysis (although FAM may be part of, or integrated with, a DLP solution). FAM expands the data security arsenal by allowing us to understand how users interact with files, and identify issues even when we don’t know their contents. DLP, DAM, and FAM are all highly complementary. Through the rest of this series we will dig more into the use cases, technology, and selection criteria. Note – the rest of the posts in the series will appear in our Complete Feed.


Network Security in the Age of *Any* Computing: Containing Access

In the first post of this series, we talked about the risks inherent to this concept of any computing, where those crazy users want to get at critical data at any time, from anywhere, on any device. And we all know it’s not pretty. Sure, there are things we can do at the device layer to protect the device and ensure a proper configuration. But in this series we will focus on how to architect and secure the network to protect critical data. The first aspect of that is restricting access to key portions of your network to only those folks that need it.

Segmentation is your friend

There is an old saying, “out of sight, out of mind,” which could be rephrased for information security as, “out of reach, out of BitTorrent.” By using a smart network segmentation strategy, you can keep the critical data out of the clutches of attackers. OK, that’s an overstatement, but segmentation is the first step to protecting key data. We want to make it as hard as possible for the data to be compromised, and that’s why we put up as many obstacles as possible for attackers. Unless you are being specifically targeted, simply not being the path of least resistance is a decent strategy. The fewer folks who have access to something, the less likely that access will be abused, and the more quickly and effectively we can figure out who is the bad actor in case of malfeasance. Not that we believe the PCI-DSS v2.0 standards represent even a low bar for security controls, but they do advocate and require segmentation of cardholder data. Here is the specific language:

All systems must be protected from unauthorized access from untrusted networks, whether entering the system via the Internet as e-commerce, employee Internet access through desktop browsers, employee e-mail access, dedicated connections such as business-to-business connections, via wireless networks, or via other sources. Often, seemingly insignificant paths to and from untrusted networks can provide unprotected pathways into key systems. Firewalls are a key protection mechanism for any computer network.

One architectural construct to think about segmentation is the idea of vaults, which really are just a different way of thinking about segmentation of all data – not just cardholder data. This entails classifying data sources into a few tiers of sensitivity and then designing a control set to ensure access only for those authorized. The goal behind classifying critical data sources is to ensure access is only provided to the right person, on the right device, from the right place, at the right time. Of course, that first involves defining rules for who can come in, from where, when, and on what device. And we cannot trivialize that effort, because it’s time consuming and difficult. But it needs to be done. Once the data is classified and the network is segmented – which we will discuss in more depth as we progress through this series – we need to authenticate the user. An emerging means of enforcing access by only authorized devices is to look at something like risk-based or adaptive authentication, where the authentication isn’t just about two or more factors, but is instead dynamically evaluated based on any number of data points, including who you are, what you are doing, where you are connecting from, and when you are trying to gain access. This certainly works well for ensuring only the right folks get in, but what happens once they are in?
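As a rough illustration of what “dynamically evaluated” can mean, here is a minimal risk-scoring sketch. The signals, weights, and thresholds are invented for illustration; a real adaptive authentication system would use far richer inputs and tuning.

```python
# Toy risk-based authentication: combine contextual signals into a score and
# decide whether to allow, step up authentication, or deny. Signal names,
# weights, and thresholds are illustrative assumptions.
def assess_login(user_group, device_managed, network, hour_local, resource_tier):
    score = 0
    if not device_managed:
        score += 30   # unmanaged/personal device
    if network not in ("corporate", "vpn"):
        score += 25   # public or unknown network
    if hour_local < 6 or hour_local > 22:
        score += 15   # unusual time of day
    if resource_tier == "restricted" and user_group != "finance":
        score += 40   # reaching for data outside the normal role

    if score >= 60:
        return "deny"
    if score >= 30:
        return "step-up"  # e.g. require an additional factor
    return "allow"

# A managed laptop on the corporate LAN at 10am sails through; a personal
# tablet on hotel WiFi at 2am asking for restricted data does not.
print(assess_login("engineering", True, "corporate", 10, "internal"))   # allow
print(assess_login("engineering", False, "public", 2, "restricted"))    # deny
```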
The obvious weakness of a control structure focused purely on initial authentication is that a device could be compromised after entry – and then all the network controls are irrelevant because the device already has unfettered access. A deeper look at risk-based authentication is beyond our scope for this research project, but warrants investigation as you design control structures. We also need to look very critically at how the network controls can be bypassed. If a machine is compromised after getting access, that is a problem unless you are continuously scrutinizing who has access. And yes, we’ll discuss that in the next post. You also need to worry about unauthorized physical access to your network. That could be a hijacked physical port or a rogue wireless access point. Either way, someone then gets physical access to your network and bypasses the perimeter controls.

Architectural straw man

Now let’s talk about one architectural construct in terms of three different use models for your network, and how to architect the network in three segments depending on the use case for access.

  • Corporate network: This involves someone who has physical access to your corporate network, either via a wired connection or a wireless access point.
  • External mobile devices: These devices access corporate resources via an uncontrolled network. That includes home networks, public wireless networks, cellular (3G) networks, and even partner networks. If your network security team can’t see the entirety of the ingress path, you need to consider it an external connection and device.
  • Internal guest access: These are devices that just need to access the Internet from inside one of your facilities. Typically these are smartphones used by employees, but we must also factor in a use case for businesses (retail/restaurants, healthcare facilities, etc.) to provide access as a service.

We want to provide different (and increasing) numbers of hoops for users to jump through to get access to important data. The easiest to discuss is the third case (internal guest access), because you only need to provide an egress pipe for those folks. We recommend total physical isolation for these devices. That means a totally separate (overlay) wireless network, which uses a different pipe to the Internet. Yes, that’s more expensive. But you don’t want a savvy attacker figuring out a way to jump from the egress network to the internal network. If the networks are totally separate, you eliminate that risk. The techniques to support your corporate network and external mobile devices are largely the same under the philosophy of “trust, but verify.” So we need to design the control sets to scrutinize users. The real question is how many more


Security Counter Culture

There’s nothing like a late-night phone call saying, “I think your email has been hacked,” to push a security professional over the edge. My wife called me during the RSA Conference to tell me this, because some emails she got from me were duplicates that refused to be deleted. Weirdness like that always makes me question my security, and when I found the WiFi still enabled on my phone, I had my yearly conference ‘Oh $#(!’ moment early. I consider it a BH/DefCon and RSA tradition, as it happens every year: seething paranoia. And this year the HBGary hack kept my paranoia amped up. The good news is that when I am in this state of mind I find mistakes. It makes me suspicious of my own work – I assume I screwed up – and that critical mindset helped me discover a couple of flaws. A missed setting on a router, and leaving WiFi on when I went to SF. And there was another mistake in my understanding of how a 3rd party product worked, so I needed to rethink my approach to data security there as well. Then I start thinking: if they got access to this email account, what would that enable an attacker to do? I don’t sleep for the rest of the night, thinking about different possibilities. Sleep deprivation makes it difficult to maintain this degree of focus long-term, but I always harbor the feeling that something is wrong. The bad news is that this state of mind does not go well with interpersonal relationships. Especially in the workplace. Suspicion, distrust, and a critical eye are great traits when looking at source code trying to find security flaws. They are not so great when talking to the IT team about the new system crossover they will be doing in 3 days (despite, of course, being several weeks behind on pre-migration tasks). Stressed out of their minds trying to make sure the servers won’t crash, nobody wants you to point out all the ways they failed to address security – and all the (time-consuming) remediation they really should/must perform. We take it out on those not tasked with security, because anyone who does not hold the security bar as high as we do must be an idiot. And God help those poor phone solicitors trying to sell IPS to me after RSA because they somehow managed to scan my conference badge – I now feel the need to educate them on all 99 ways their product sucks and how they don’t understand the threats. Do you have to have a crappy attitude to be effective in this job? Do we need to maintain a state of partial paranoia? I am unable to tell if I simply had this type of personality, which led me into security, or if the profession built up my “the glass is half-empty, cracked, and about to be stolen at any moment” attitude. I’d stop to smell the roses but I might suffer an allergic reaction, and I am certain those thorns would draw blood. Sometimes I feel like security professionals have become the NSA of the private sector – trust no one. We have gotten so tired of leading a charge no one follows that we have begun to shoot each other. Camaraderie from shared experiences brings us together, but a sense of distrust and disrespect causes more infighting than in any other profession I can think of. We have become a small corporate counterculture without a cool theme song.


What No One Is Saying about That Big HIPAA Fine

By now you have probably seen that the U.S. Department of Health and Human Services (HHS) fined Cignet Health a whopping $4.3M for, and I believe this is a legal term, being total egotistical assholes. (Because “willful neglect” just doesn’t have a good ring to it). This is all over the security newsfeeds, despite it having nothing to do with security. It’s so egregious I suggest that, if any vendor puts this number in their sales presentation, you should simply stand up and walk out of the room. Don’t even bother to say anything – it’s better to leave them wondering. Where do I come up with this? The fine was due to Cignet pretty much telling HHS and a federal court to f* off when asked for materials to investigate some HIPAA complaints. To quote the ThreatPost article: Following patient complaints, repeated efforts by HHS to inquire about the missing health records were ignored by Cignet, as was a subpoena granted to HHS’s Office of Civil Rights ordering Cignet to produce the records or defend itself in any way. When the health care provider was ordered by a court to respond to the requests, it disgorged not just the patient records in question, but 59 boxes of original medical records to the U.S. Department of Justice, which included the records of 11 individuals listed in the Office of Civil Rights Subpoena, 30 other individuals who had complained about not receiving their medical records from Cignet, as well as records for 4,500 other individuals whose information was not requested by OCR. No IT. No security breach. No mention of security issues whatsoever. Just big boxes of paper and a bad attitude.


On Science Projects

I think anyone who writes for a living sometimes neglects to provide the proper context before launching into some big thought. I plead guilty as charged on some aspects of the Risk Metrics Are Crap FireStarter earlier this week. As I responded to some of the comments, I used the term science project to describe some technologies like GRC, SIEM, and AppSec. Without context, some folks jumped on that. So let me explain a bit of what I mean.

Haves and Have Nots

At RSA, I was reminded of the gulf between the folks in our business who have and those who don’t. The ‘haves’ have sophisticated and complicated environments, invest in security, do risk assessment, periodically have auditors in their shorts, and are very likely to know their exposures. These tend to be large enterprise-class organizations – mostly because they can afford the requisite investment. Although you do see many smaller companies (especially if they handle highly regulated information) that do a pretty good job on security, these folks are a small minority. The ‘have nots’ are exactly what they sound like. They couldn’t care less about security, they want to write a check to make the auditor go away, and they resent any extra work they have to do. They may or may not be regulated, but it doesn’t really matter. They want to do their jobs and they don’t want to work hard at security. This tends to be the case more often at smaller companies, but we all know there are plenty of large enterprises in this bucket as well. We pundits, Twitterati, and bloggers tend to spend a lot of time with the haves. The have nots don’t know who Bruce Schneier is. They think AV keeps them secure. And they wonder why their bank account was looted by the Eastern Europeans.

Remember the Chasm

Lots of security folks never bothered to read Geoffrey Moore’s seminal book on technology adoption, Crossing the Chasm. It doesn’t help you penetrate a network or run an incident response, so it’s not interesting. Au contraire, if you wonder why some product categories go away and others become things you must buy, you need to read the book. Without going too deeply into chasm vernacular, early markets are driven by early adopters. These are the customers who understand how to use an emerging technology to solve their business problem and do much of the significant integration to get a new product to work. Sound familiar? Odds are, if you are reading our stuff, you represent folks at the early end of the adoption curve. Then there is the rest of the world. The have nots. These folks don’t want to do integration. They want products they buy to work. Just plug and play. Unless they can hit the Easy Button they aren’t interested. And since they represent the mass market (or mainstream, in Moore’s lingo), unless a product/technology matures to this point it’s unlikely to ever be a standalone, multi-billion-dollar business.

3rd Grade Science Fair

Time and again we see that this product needs tuning. Or that product requires integration. Or isn’t it great how Vendor A just opened up their API. It is if you are an early adopter, excited that you now have a project for the upcoming science fair. If you aren’t, you just shut down. You aren’t going to spend the time or the money to make something work. It’s too hard. You’ll just move on to the next issue, where you can solve a problem with a purchase order. SIEM is clearly a science project.
As with all those cool exploding volcanoes, circuit boards, and fighting Legos, value can be had from a SIEM deployment if you put in the work. And keep putting in the work, because these tools require ongoing, consistent care and feeding. Log Management, on the other hand, is brain-dead simple. Point a syslog stream somewhere, generate a report, and you are done. Where do you think most customers needing to do security management start? Right, with log management. Over time a few do make the investment to get to broader analysis (SIEM), but most don’t. And they don’t need to. Remember – even though we don’t like it and we think they are wrong – these folks don’t care about security. They care about generating a report for the auditor, and log management does that just fine. And that’s what I mean when I call something a science project. To be clear, I love the science fair and I’m sure many of you do as well. But it’s not for everyone.

Photo credit: “Science Projects: Volcanoes, Geysers, and Earthquakes” originally uploaded by Old Shoe Woman


Friday Summary: March 4, 2011

The Friday summary is our chance to talk about whatever, and this week I am going to do just that. This week’s introduction has nothing to do with security, so skip it if you are offended by such things. I am a fan of basketball – despite being too slow, too short, and too encumbered by gravity to play well. Occasionally I still follow my ‘local’ Golden State Warriors despite their playoff-less futility for something like 19 of the last 20 years. Not like I know much about how to play the game, but I like watching it when I can. Since moving to Phoenix over 8 years ago it’s tough to follow, but friends were talking last summer about the amazing rookie season performance of Stephen Curry and I was intrigued. I Googled him to find out what was going on and found all the normal Bay Area sports blogs plus a few independents – little more than random guys talking basketball-related nonsense. But one of them – feltbot.com – was different. After following the blog for a while an amazing thing happened: I noticed I could not stomach most of the mainstream media coverage of Warriors basketball. It not only changed my opinion on sports blogs, but cemented in my mind what I like about blogs in general – to the point that it’s making me rethink my own posts. The SF Bay Area has some great journalists, but it also has a number of people with great stature who lack talent, or the impetus to use their talent. These Bay Area personalities offer snapshots of local sports teams and lots of opinions, but very little analysis. They get lots of air but little substance. Feltbot – whoever he is – offers plenty of opinions, just like every other Bay Area sports blogger. And he has lots of biases, but they are in the open, such as being a Don Nelson fanboi. But his opinions are totally contrary to what I was reading and hearing on the radio. And he calls out everyone, from announcers to journalists, when he thinks they are off the mark. What got me hooked was him going into great detail on why – including lots of analysis and many specific examples to back up his assertions. You read one mainstream sports blog that says one thing, and another guy who says exactly the opposite, and then goes into great detail as to why. And over the course of a basketball season, what seemed like outlandish statements in week one were dead on target by season’s end. This blog is embarrassing many of the local media folk, and downright eviscerating a few of them – making them look like clueless hacks. I started to realize how bad most of the other Bay Area sports blogs were (are); they provide minimal coverage and really poor analysis. Over time I have come to recognize the formulaic approach of the other major media personalities. You realize that most writers are not looking to analyze players, the coach, or the game – they are just looking for an inflammatory angle. Feltbot’s stuff is so much better than the other blogs I have run across that it makes me feel cheated. It’s like reading those late-career James Patterson novels where he is only looking for an emotional hook rather than trying to tell a decent story. For me, feltbot put into focus what I like to see in blogs – good analysis. Examples that illustrate the ideas. It helps a basketball noob like me understand the game. And a little drama is a good thing to stir up debate, but in excess it’s just clumsy shtick. Sometimes it takes getting outside security to remind me what’s important, so I’ll try to keep that in mind when I blog here.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post: DAM Market Observation.
  • Mort cited for talking about cloud security at Bsides.
  • Rich and Mike covered on the Tripwire blog.
  • Rich quoted on SearchSecurity.

Favorite Securosis Posts

  • Rich: Always Assume. This is a post I did a while back on how I think about threat/risk modeling. In a post HBGary world, I think it’s worth a re-read.
  • Mike Rothman: What No One Is Saying about That Big HIPAA Fine. Sometimes you just need to scratch your head.
  • Adrian Lane: FireStarter: Risk Metrics Are Crap. Yeah, it was vague in places and intentionally incendiary, but it got the debate going. And the comments rock!

Other Securosis Posts

  • On Science Projects.
  • Random Thoughts on Securing Applications in the Cloud.
  • Network Security in the Age of Any Computing: the Risks.
  • Incite 3/2/2011: Agent Provocateur.
  • React Faster and Better: Index.
  • React Faster and Better: Piecing It Together.

Favorite Outside Posts

  • Rich: Numbers Good. Jeremiah’s been doing some awesome work on web stats for a while now, and this continues the trend.
  • Mike Rothman: Post-theft/loss Response & Recovery With Evernote. We need an IR plan for home as well. Bob does a good job of describing one way to make filing claims a lot easier.
  • Adrian Lane: Network Security Management-A Snapshot. Really nice overview by Shimmy!

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.

Top News and Posts

  • Alleged WikiLeaker could face death penalty.
  • SMS trojan author pleads guilty.
  • NIST SHA-3 Status Report.
  • Robert Graham Predicts Thunderbolt’s an Open Gateway.
  • Malware infects more than 50 Android apps.
  • Thoughts on Quitting Security.
  • Gh0stMarket operators sentenced.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Alex Hutton,


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.