
Friday Summary: March 25, 2011

I am probably in the minority, but when I buy something I think of it as mine. I paid for it so I own it. I buy a lot of stuff I am not totally happy with, but that’s the problem with being a tinkerer. Usually I think I can improve on what I purchased, or customize my purchase to my liking. This could be as simple as adding sugar to my coffee, or having a pair of pants altered, or changing the carburetor on that rusty Camaro in my backyard. More recently it’s changing game save files or backing out ‘fixes’ that break software. It’s not the way the manufacturer designed it or implemented it, but it’s the way I want it. One man’s bug is another man’s feature. But as the stuff I bought is mine – I paid for it, after all – I am free to fix or screw things up as I see fit.

Somewhere along the line, the concept of ownership was altered. We buy stuff then treat it as if it’s not ours. I am not entirely sure when this concept went mainstream, but I am willing to bet it started with software vendors – you know, the ones who write those End User License Agreements that nobody reads, because that would be a waste of time and delay installing the software they just bought. I guess this is why I am so bothered by stories like Sony suing some kid – George Hotz – for altering a PlayStation 3. Technically they are not pissed off at him for altering the function of his PlayStation – they are pissed that he taught others how to modify their consoles so they can run whatever software they want. The unstated assumption is that anyone who would do such a thing is a scoundrel and a criminal, out to pirate software and destroy hard-working companies (And all their employees! Personally!).

These PlayStations were purchased – personal property if you will – and their owners should be able to do as they see fit with their possessions. Don’t like Sony’s OS and want to run Linux? Those customers bought the PS3s (and Sony promised support, then reneged) so they should be able to run what they want without interference. It’s not that George is trying to resell the PlayStation code, or copy the PlayStation and sell a derived work. He’s not reselling Halo or an Avatar Blu-ray; he’s altering his own stuff to suit his needs, and then sharing. This is not an issue of content or intellectual property, but of personal property. Sony should be able to void his warranty, but coming after him legally is totally off-the-charts insane IMO.

Now I know Sony has better lobbyists than either George or myself, so it’s much more likely that laws – such as the Digital Millennium Copyright Act (DMCA) – reflect their interests rather than ours. I just can’t abide the notion that someone sells me a product and then demands I use it only as they see fit. Especially when they want to prohibit my enjoyment because there is a possibility someone could run pirated software. If you take my money, I am going to add hard drives, memory, or software as I like. If companies like Sony don’t like that, they should not sell the products. Cases like this call the legitimacy of the DMCA into question.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich in Macworld on private browsing.
  • Protect your privacy: online shopping. Mike’s first Macworld article.
  • Rich quoted in the New York Times on RSA.
  • A great response to Rich’s Table Stakes article. John Strand does a good job of presenting his own spin.
  • Index link to Mike & Rich’s Macworld series on privacy.
  • Adrian’s Dark Reading article on McAfee acquisition.
  • Rich quoted on RSA breach.
  • Adrian’s Dark Reading post on DB Security in the cloud.

Favorite Securosis Posts

  • Rich: Agile and Hammers – They Don’t Fix Stupid. I still don’t fully get how people glom on to something arbitrary and turn it into a religion.
  • Mike Rothman: Agile and Hammers: They Don’t Fix Stupid. Rare that Adrian wields his snark hammer. Makes a number of great points about people – not process – FAIL.
  • Gunnar Peterson: The CIO Role and Security.
  • Adrian Lane: Crisis Communications.

Other Securosis Posts

  • FAM: Additional Features.
  • McAfee Acquires Sentrigo.
  • Incite 3/23/2011: SEO Unicorns.
  • RSA Releases (Almost) More Information.
  • FAM: Core Features and Administration, Part 1.
  • Death, Taxes, and M&A.
  • How Enterprises Can Respond to the RSA/SecurID Breach.
  • Network Security in the Age of Any Computing: Index of Posts.

Favorite Outside Posts

  • Rich: Why Stuxnet Isn’t APT. Mike Cloppert is one of the few people out there talking about APT who actually knows what he’s talking about. Maybe some of those vendor marketing departments should read his stuff.
  • Mike Rothman: The MF Manifesto for Programming, MF. Back to basics, MFs. And that is one MFing charming pig.
  • Adrian Lane: A brief introduction to web “certificates”. While I wanted to pick the MF Manifesto as it made me laugh out loud, Robert Graham’s post on cryptography and succinct explanation of the Comodo hack was too good to pass up.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.

Top News and Posts

  • Dozens of exploits released for popular SCADA programs.
  • Twitter, Javascript Defeat NYT’s $40m Paywall.
  • Apple patches unused Pwn2Own bug, 55 others in Mac OS.
  • Spam Down 40 Percent in Rustock’s Absence.
  • The Challenge of Starting an Application Security Program.
  • Hackers make off with TripAdvisor’s membership list.
  • Talk of Facebook Traffic Being Detoured.
  • Firefox 4 Content Security Policy feature.
  • Firefox


McAfee Acquires Sentrigo

McAfee announced this morning its intention to acquire Sentrigo, a Database Activity Monitoring company. McAfee has had a partnership with Sentrigo for a couple of years, and both companies have cooperatively sold the Sentrigo solution and developed high-level integration with McAfee’s security management software. McAfee’s existing enterprise customer base has shown interest in Database Activity Monitoring, and DAM is no longer as much of an evangelical sale as it used to be. Sentrigo is a small firm, and integration of the two companies should go smoothly.

Despite persistent rumors of larger firms looking to buy in this space, I am surprised that McAfee finally acquired Sentrigo. McAfee, Symantec, and EMC are the names that kept popping up as interested parties, but Sentrigo wasn’t the target discussed. Still, this looks like a good fit because the core product is very strong, and it fills a need in McAfee’s product line. The aspects of Sentrigo that are a bit scruffy or lack maturity are the areas McAfee would want to tailor anyway: workflow, UI, reporting, and integration.

I have known the Sentrigo team for a long time. Not many people know that I tried to license Sentrigo’s memory scanning technology – back in 2006, while I was at IPLocks. Several customers used the IPLocks memory scanning option, but the scanning code we licensed from BMC simply wasn’t designed for security. I heard that Sentrigo had architected their solution correctly and wanted to use it. Alas, they were uninterested in cooperating with a competitor for some odd reason, but I have maintained good relations with their management team since. And I like the product because it offers a (now) unique option for scraping SQL right out of the database memory space. But there is a lot more to this acquisition than just memory scraping agents. Here are some of the key points you need to know about:

Key Points about the Acquisition

  • McAfee is acquiring a Database Activity Monitoring (DAM) technology to fill out their database security capabilities. McAfee obviously covers the endpoints, network, and content security pieces, but was missing some important pieces for datacenter application security. The acquisition advances their capabilities for database security and compliance, filling one of the key gaps.
  • Database Activity Monitoring has been a growing requirement in the market, with buying decisions driven equally by compliance requirements and response to the escalating use of SQL injection attacks. Interest in DAM was previously driven by insider threats and Sarbanes-Oxley, but market drivers are shifting to blocking external attacks and compensating controls for PCI.
  • Sentrigo will be wrapped into the Risk and Compliance business unit of McAfee, and I expect deeper integration with McAfee’s ePolicy Orchestrator.
  • The selling price has not been disclosed.
  • Sentrigo is one of the only DAM vendors to build cloud-specific products (beyond a simple virtual appliance). The real deal – not cloudwashing.

What the Acquisition Does for McAfee

  • McAfee responded to Oracle’s acquisition of Secerno, and can now offer a competitive product for activity monitoring as well as virtual patching of heterogeneous databases (e.g., Oracle, IBM).
  • While it’s not well known, Sentrigo also offers database vulnerability assessment. Preventative security checks, patch verification, and reports are critical for both security and compliance.
  • One of the reasons I like the Sentrigo technology is that it embeds into the database engine.
    For some deployment models, including virtualized environments and cloud deployments, you don’t need to worry about whether the underlying environment supports your monitoring functions. Most DAM vendors offer security sensors that move with the database in these environments, but they are embedded at the OS layer rather than the database layer. As with transparent database encryption, Sentrigo’s model is a bit easier to maintain.

What This Means for the DAM Market

Once again, we have a big name technology company investing in DAM. Despite the economic downturn, the market has continued to grow. We no longer estimate the market size, as it’s too difficult to find real numbers from the big vendors, but we know it passed $100M a while back. We are left with two major independent firms that offer DAM: Imperva and Application Security Inc. Lumigent, GreenSQL, and a couple of other firms remain on the periphery. I continue to hear acquisition interest, and several firms still need this type of technology.

Sentrigo was a late entry into the market. As with all startups, it took them a while to fill out the product line and get the basic features/functions required by enterprise customers. They have reached that point, and with the McAfee brand, there is now another serious competitor to match up against Application Security Inc., Fortinet, IBM/Guardium, Imperva, Nitro, and Oracle/Secerno.

What This Means for Users

Sentrigo’s customer base is not all that large – I estimate fewer than 200 customers worldwide, with the average installation covering 10 or so databases. I highly doubt there will be any technology disruption for existing customers. I also highly doubt this product will become shelfware in McAfee’s portfolio, as McAfee has internally recognized the need for DAM for quite a while, and has been selling the technology already. Any existing McAfee customers using alternate solutions will be pressured to switch over to Sentrigo, and I imagine they will be offered significant discounts to do so. Sentrigo’s DAM vision – for both functionality and deployment models – is quite different from its competitors’, which will make it harder for McAfee to convince customers to switch.

The huge upside is the possibility of additional resources for Sentrigo development. Slavik Markovich’s team has been the epitome of a bootstrapping start-up, running a lean organization for many years now. They deserve congratulations for making it this far on less than $10-20M in VC funds. They have been slowly and systematically adding enterprise features such as user management and reporting, broadening platform support, and finally adding vulnerability assessment scanning. The product is still a little rough around the edges, and lacks some maturity in UI and capabilities compared to Imperva, Guardium, and AppSec – those products have been fleshing out their capabilities for years longer. In a


Incite 3/23/2011: SEO Unicorns

It seems blog popularity is a double-edged sword. Yes, thousands of folks read our stuff every day. But that also means we are a target for many SEO Experts, who want to buy links from us. No, we don’t sell advertising on the site. But that doesn’t stop them from pummeling us with a bunch of requests each week. Most of the time we are pretty cordial, but not always. Which brings us to today’s story. It seems Rich was a little uppity yesterday and decided to respond to the link request with a serious dose of snark.

Rich: Our fee is $10M US. Cash. Non-sequential bills which must be hand delivered on a unicorn. And not one of those glued-on horn jobs. Must be the real thing with a documented pedigree.

I guess Rich thought that it was yet another bot sending a blind request and that his list of demands would disappear into the Intertubes, but alas, it wasn’t a bot at all. This SEO fellow and Rich then proceeded to debate the finer issues of unicorn delivery. Interestingly enough, the $10M fee didn’t seem to be an issue.

SEO Guy: Thanks for getting back. I may have some issues fulfilling your request. The $10M will not be a problem, however I don’t know if you’ve noticed, but unicorns are a heavily endangered species. Even to rent one would require resources that exceed my nearly limitless budget. Do you know how much a unicorn pilot charges by the hour?

Rich: African or European unicorn?

SEO Guy: How far do you live from Ireland?

Rich: About 7,000 miles, but my wife has unknown ancestors still living there and I have red hair. Not sure if that will get a discount.

SEO Guy: Would it be okay if the unicorn itself delivered the (what I am assuming is a golden satchel of) money instead? I know you want it hand-delivered (mind out of the gutter) and that unicorns lack hands.

Rich: Excellent point, and I see that will save on the piloting fees. Yes, but only if we can time delivery for my daughter’s birthday and you also include a frosted cupcake with a candle on it for her. I think she’d like that. You can deduct the cost of the cupcake from the $10M, if that helps… but not the cost of the candle.

So yes, as busy as we are with launching our super sekret project, polishing the CCSK training course, and all our client work, we still have time to give a hard time to a poor sap trying to buy a few links for his SEO clients. So every time I’m grumpy because QuickBooks Online is down, the EVDO service in my favorite coffee shop is crap, and I have to restructure a white paper – I can just appreciate the fact that I’m not the SEO guy. Yes, I do have to deal with asshats every day. But they are asshats of my own choosing. This guy doesn’t get to choose who he solicits, and I’m sure a debate about unicorns was the highlight of his day of drudgery. Yes, I’m a lucky guy, and sometimes I need an SEO unicorn to remind me.

-Mike

Photo credits: “Unicorns!” originally uploaded by heathervescent

Incite 4 U

Testing my own confirmation bias: There are many very big-brained folks in security. Errata’s Rob Graham is one of them. Entering a debate with Rob is kind of like fighting a lion. You know you don’t have much of a chance; you can only hope Rob gets bored with you before he mauls your arguments with well-reasoned responses. So when Rob weighed in on Risk Management and Fukushima, I was excited, because Rob put into words many of the points I’ve been trying (unsuccessfully) to make for years about risk management.
But to be clear, I want to believe Rob’s arguments, because I am no fan of risk metrics (at least the way we practice them today). His ideas on who is an expert (and how that changes), and what that expert needs to do (have the most comprehensive knowledge of all the uncertainties), really resonated with me. Maybe you can model it out, maybe you can’t. But ultimately we are playing the odds, and that’s a hard thing to do, which is why we focus so heavily on response. Now Alex Hutton doesn’t back down and has a well-reasoned response as well. Though it seems (for a change) that both Rob and Alex are talking past each other. Yes, my appreciation of Rob’s arguments could be my own biases (and limited brainpower) talking, which wouldn’t be the first time. – MR

Careful with that poison: Some days the security industry is like cross-breeding NASCAR with one of those crappy fashion/cooking/whatever reality shows. Everyone’s waiting for the crash, and when it happens they are more than happy to tell you how they would have done it better. As analysts we get used to the poison pill marketing briefs. You know, the phishing email or press release designed to knock the competition down. And there is no shortage of them filling my inbox after the RSA breach. At least NASCAR has the yellow caution flag to slow things down until they can get the mangled cars off the track. But I have yet to see one brief that shows any understanding of what happened or of customer risk/needs. So I either delete them without reading or send back a scathing response. I have yet to see one of these work with a customer/prospect, so it all comes off as little more than jealous sniping. And besides, I know RSA isn’t the first security company to be breached, just one of the first to disclose, and I doubt any of the folks sending out this poison could survive the same sort of attack. If they aren’t already pwned, that is. (No link for this one, since you are all probably getting the same emails.) – RM

No poop in the sandbox: Good article in Macworld describing the


Agile and Hammers: They Don’t Fix Stupid

I did not see the original Agile Ruined My Life post until I read Paul Krill’s An agile pioneer versus an ‘agile ruined my life’ critic response today. I wish I had, as I would have used Mr. Markham’s post as an example of the wrong way to look at Agile development in my OWASP and RSA presentations. Mr. Markham raises some very good points, but in general the post pissed me off: it reeks of irresponsibility and unwillingness to own up to failure. But rather than go off on a tirade covering the 20 reasons that post exhibits a lack of critical thinking, I’ll take the high road. Jon Kern’s quotes in the response hit the nail on the head, but did not include an adequate explanation of why, so I offer a couple of examples.

I make two points in my Agile development presentation which are relevant here. First: the scrum is not the same thing as Agile. Scrum is just a technique used to foster face-to-face communication. I like scrum and have had good success with it because a) it promotes a subtle form of peer pressure in the group, and b) developers often come up with ingenious solutions when discussing problems in an open forum. Sure, it embodies Agile’s quest for simplicity and efficiency, but that’s just facility – not the benefit. Scrum is just a technique, and some Agile techniques work in particular circumstances, while others don’t. For example, I have never gotten pair programming to work. That could be due to the way I paired people up, or the difficulty of those projects might have made pairs impractical, or perhaps the developers were just lazy (which definitely does happen).

The second point is that people break process. Mr. Markham does not accept that, but sorry, there are just not that many variables in play here. We use process to foster and encourage good behavior, to minimize poor behaviors, and to focus people on the task at hand. That does not mean process always wins. People are brilliant at avoiding responsibility and disrupting events. I couch Agile pitfalls in terms of SDL – because I am more interested in promoting secure code development – but the issues I raise cause general project failures as well. Zealots. Morons. Egoists. Unwitting newbies. People paranoid about losing their jobs. All these personality types figure into the success (or lack thereof) of Agile teams. Sometimes Agile loses to that passive-aggressive bastard at the back of the room. Maybe you need process adjustments, or perhaps better process management, or just maybe you need better people.

If you use a hammer to drive a screw into the wall, don’t be surprised when things go wrong. Use the wrong tool or technique to solve a problem, and you should expect bad things to happen. Agile techniques are geared toward reducing complexity and improving communication; improvements in those two areas mean better likelihood of success, but there’s no guarantee. Especially when communication and complexity are not your problem. Don’t blame the technique – or the process in general – if you don’t have the people to support it.


Death, Taxes, and M&A

Ben Franklin was a pretty smart dude. My favorite quote of his is: “In this world nothing is certain but death and taxes.” For a couple hundred years, that was pretty good. But at this point I’ll add mergers and acquisitions as the third certainty in this world. Maybe also that your NCAA bracket will get busted by some college you’ve never heard of (WTF VCU?).

We saw this over the weekend. AT&T figures it’s easier and cheaper to drop $39 billion buying T-Mobile than to build their own network (great analysis by GigaOm) or gain market share one customer at a time. And in security, there are always plenty of deals happening or about to happen. Remember, security isn’t a standalone market over time, so pretty much all security companies will be folded into something or other. Take, for instance, WebSense trying to sell for $1 billion. And no, I’m not going to comment on whether WBSN is worth a billion. That’s another story for another day. Or the fact that given Intel’s balance sheet, McAfee will likely start taking down bigger targets. All we can count on is that there will be more M&A. But let’s take a look at why deals tend to be the path of least resistance for most companies.

  • Outsourced R&D: Anyone who’s ever worked in a large company knows how hard it is to innovate internally. There is a lot of inertia and politics to overcome to get anything done. In many cases it’s easier to just buy some interesting technology, since the buyer has a snowball’s chance in hell of building it in-house.
  • Distribution leverage: There are clear economies of scale in most businesses. So the more stuff in a rep’s bag and the bigger their market share, the more likely they’ll be able to sell something to someone. That’s what’s driving Big IT to continue buying everything. This also drives deals like AT&T/T-Mobile, because they are buying not just the network, but also the customers.
  • Two drunks holding each other up: Yep, we also see deals involving two struggling companies, basically throwing a Hail Mary pass in hopes of surviving. That doesn’t usually work out too well.

And those are just off the top of my head. I’m sure there are another 5-10 reasonable justifications, but from an end-user standpoint let’s cover some of the planning you have to do for the inevitable M&As. We will break the world up into BD (before deal) and AD (after deal).

Before Deal:

  • Assess vendor viability: First assess all your security vendors. Rank them on a scale from low viability (likely to be acquired or go out of business) to rock solid.
  • Assess product criticality: Next look at all your security products and rate them on a scale from non-critical to “life is over if it goes down.”
  • Group into quadrants: Using vendor viability and product criticality, you can group all your products into a few buckets. I recommend 4 because it’s easy. This chart should give you a good feel for what I’m talking about. (A minimal scoring sketch appears at the end of this post.)
  • Define contingency plans: For products in the “Get Plan B now” bucket, make sure you have clear contingency plans. For the other quadrants, think about what you’d do if there was M&A activity for those offerings, but they are less urgent than having a plan for the critical & fragile items.

After Deal:

  • Call your rep: Odds are your rep will be a pretty busy guy/gal in the days after a deal is announced. And there is a high likelihood they won’t know any more than you. But get in line and hear the corporate line about how nothing will change. Yada yada yada.
    Then, depending on how much leverage you have, ask for a meeting with the buyer’s account team. And then extract either some pricing or product concessions. The first renewal right after a deal closes is the best time to act. They want to keep you (or the deal looks like crap), so squeeze and squeeze hard.
  • Call the competition: Yes, the competition will be very interested in getting back in, hoping they can use the deal’s uncertainty as a wedge. Whether you are open to swapping out the vendor or not, bring the other guys in to provide additional leverage.
  • Revisit contingency plans: You might have to pull the trigger even if you don’t want to, so it’s time to take the theoretical plan you defined before the deal and adjust it for reality now that the deal has occurred. Evaluate what it would take to switch, assess the potential disruption, and get a very clear feel for how tough it would be to move. You don’t need to share that information with vendors, but you need it.

None of this stuff is novel, but it’s usually a good reminder of the things you should do, but may not get around to. Given the number of deals we have seen already this year, and the inevitably accelerating deal flow, it’s better to be safe than sorry.
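To illustrate the quadrant exercise from the Before Deal list, here is a minimal sketch in Python that buckets products by vendor viability and product criticality. The 1-5 scale, the example entries, and every bucket name except “Get Plan B now” (which comes from the post) are hypothetical illustrations, not a prescribed methodology.

```python
# Hypothetical 1-5 ratings: viability runs from "likely acquired or fails"
# to "rock solid"; criticality runs from "non-critical" to
# "life is over if it goes down".
products = {
    "Vendor A SIEM":       {"viability": 2, "criticality": 5},
    "Vendor B firewall":   {"viability": 5, "criticality": 5},
    "Vendor C web filter": {"viability": 2, "criticality": 2},
    "Vendor D antivirus":  {"viability": 5, "criticality": 2},
}

def quadrant(viability, criticality, threshold=3):
    """Map the two ratings into one of four planning buckets."""
    fragile = viability < threshold
    critical = criticality >= threshold
    if fragile and critical:
        return "Get Plan B now"          # critical product, shaky vendor
    if fragile:
        return "Watch for deal activity" # shaky vendor, low impact
    if critical:
        return "Monitor, plan lazily"    # critical, but vendor is solid
    return "Low priority"

for name, scores in products.items():
    print(f"{name}: {quadrant(scores['viability'], scores['criticality'])}")
```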


FAM: Core Features and Administration, Part 1

Now that we understand the technical architecture, let’s look at the principal features seen across most File Activity Monitoring tools.

Entitlement (Permission/Rights) Analysis and Management

One of the most important features in most FAM products is entitlement (permission) analysis. The tool collects all the file and directory permissions for the repository, ties them back to users and groups via directory integration, and generates a variety of reports. Knowing that an IP address tried to access a file might be somewhat useful, but practical usefulness requires that policies be able to account for users, roles, and their mappings to real-world contexts such as business units. As we mentioned in the technical architecture section, all FAM products integrate with directory servers to gather user, group, and role information. This is the only way tools can gather sufficient context to support security requirements, such as tracing activity back to a real employee rather than just a username that might not indicate the person behind it. (Not that FAM is magic – if your directories don’t contain sufficient information for these mappings, you still might have a lot of work to trace back identities.)

At the most basic level a FAM tool uses this integration to perform at least some minimal analysis on users and groups. The most common is permission analysis – providing complete reports on which users and groups have rights to which directories/repositories/files. This is often a primary driver for buying the FAM tool in the first place, as such reports are often required for compliance. Some tools include more advanced analysis to identify entitlement issues – especially rights conflicts. For example, you may be able to identify which users in accounting also have engineering rights, or list users with multiple roles that violate conflict of interest policies. While useful for security, these capabilities can be crucial for finding and fixing compliance issues.

A typical rights analysis will collect existing rights, map them to users and groups, help identify excessive permissions, and identify unneeded rights. Some examples are:

  • Determine which users outside engineering have rights to engineering documents.
  • Find which users with access to healthcare records also have access to change privileges, but aren’t in an administrative group.
  • Identify all files and repositories the accounting group has access to, and then which other groups also have access to those files.
  • Identify dormant users in the directory who still have access to files.

Finally, the tool may allow you to manage permissions internally so you don’t have to manually connect to servers in order to make entitlement changes.

Secure Aggregation and Correlation

As useful as FAM is for a single repository, its real power becomes clear as you monitor larger swaths of your organization and can centrally manage permissions, activities, and policies. FAM tools use a similar architecture to Database Activity Monitoring – with multiple sensors, of different types, sending data back to the central management server. This information is normalized, stored in a secure repository, and available for a variety of analyses and reports. As a real-time tool, the information is also analyzed for policy violations and (possible) enforcement actions, which we will discuss later.
The tools don’t care if one server is a NAS, another a Windows server, and the last a supported document management system – they can review all their contents consistently. This aggregation also supports correlation – meaning you can build policies based on activities occurring across different repositories and users. For example, you can alert on unusual activity by a single user across multiple file servers, or on multiple user accounts all accessing a single file in one location. Essentially, the FAM tool gives you a big-picture view of all file activity across monitored repositories, with various ways of building alerts and analyzing the data, from a central management server. If your product supports multiple file protocols, it will present this in a consistent, activity-based format (e.g., open, delete, privilege change).

Activity Analysis

While understanding permissions and collecting activity are great, and may be all you need for a compliance project, the real power of FAM is its capability to monitor all file activity (at the repository level) in real time, and generate alerts, or block activity, based on security policies. Going back to our technical architecture: activity is collected via network monitoring, a software agent, or other application integration. The management server then analyzes this activity for policy violations/warnings such as:

  • A user accessing a repository they have access to, but have not accessed within the past 180 days.
  • A sales employee downloading more than 5 customer files in a single day.
  • Any administrator account accessing files in a sensitive repository.
  • A new user (or group) being given rights to a sensitive directory.
  • Any user account copying an entire directory from an engineering server.
  • A service account accessing files.

Some tools allow you to define policies based on a sensitivity tag for the repository and user groups (or business units), instead of having to manually build policies on a per-repository or per-directory level. This analysis doesn’t necessarily need to happen in real time – it can also be done on a scheduled or ad hoc basis to support a specific requirement, such as an auditor who wants to know who accessed a file, or as part of an incident investigation. We’ll talk more about reporting later. (A minimal sketch of evaluating one of these policies appears at the end of this post.)

Data Owner Identification

Although every file has an ‘owner’, translating that to an actual person is often a herculean process. Another primary driver of File Activity Monitoring is to help organizations identify file owners. This is typically done through a combination of privilege and activity analysis. Privileges might reveal a file owner, but activity may be more useful. You could build a report showing the users who most often access a file, then correlate that with who also has ownership permissions, and the odds are that will quickly identify the file owner. This is, of course, much simpler if the tool was already monitoring a repository and can identify who initially created the file.
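To make the policy examples above concrete, here is a minimal sketch, in Python, of how one of them (a sales employee downloading more than 5 customer files in a single day) might be evaluated against normalized activity events. The event fields, repository name, and threshold are hypothetical illustrations, not any vendor’s actual schema or API.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

# Hypothetical normalized file-activity event, roughly what a FAM
# collector might emit after directory integration resolves the user.
@dataclass
class FileEvent:
    user: str
    group: str        # e.g., "sales", resolved via directory integration
    repository: str   # e.g., a "customers" share on a monitored server
    action: str       # "open", "delete", "privilege_change", ...
    day: date

# Example policy: alert when a sales employee opens more than 5
# customer files in a single day.
MAX_CUSTOMER_FILES_PER_DAY = 5

def sales_download_violations(events):
    """Return (user, day) pairs that exceed the example threshold."""
    counts = defaultdict(int)
    for e in events:
        if e.group == "sales" and e.repository == "customers" and e.action == "open":
            counts[(e.user, e.day)] += 1
    return [key for key, n in counts.items() if n > MAX_CUSTOMER_FILES_PER_DAY]

# Usage sketch:
# for user, day in sales_download_violations(collected_events):
#     print(f"ALERT: {user} opened too many customer files on {day}")
```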


RSA Releases (Almost) More Information

As this is posting, RSA is releasing a new SecureCare note and FAQ for their clients (login required). This provides more specific, prioritized information on what mitigations they recommend SecurID clients take. To be honest, they really should just come clean at this point. With the level of detail in the support documents it’s fairly obvious what’s going on. These notes are equivalent to saying, “we can’t tell you it’s an elephant, but we can confirm that it is large, grey, and capable of crushing your skull if you lay down in front of it. Oh yeah, and it has a trunk and hates mice.”

So let’s update what we know, what we don’t, what you should do, and the open questions from our first post.

What we know

Based on the updated information… not much we didn’t know before. But I believe RSA understands the strict definition of APT and isn’t using the term to indicate a random, sophisticated attack. So we can infer who the actor is – China – but RSA isn’t saying and we don’t have confirmation. In terms of what was lost, the answer is “an elephant”, even if they don’t want to say so. This means either customer token records or something similar, and I can’t think of what else it could be. Here’s a quote from them that makes it almost obvious:

To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful attack, someone would need to have possession of all this information.

If it were a compromise of the authentication server software itself, that statement wouldn’t be accurate. Also, one of their top recommendations is to use long, complex PINs. They wouldn’t say that if the server was compromised, which means it pretty much has to be related to customer token records. This also leads us to understand the nature of a potential attack. The attacker would need to know the username, password/PIN, and probably the individual assigned token. Plus they need some time and luck. While extremely serious for high-value targets, this does limit potential exposure. This also explains their recommendations on social engineering, hardening the authentication server, setting PIN lockouts, and checking logs for ongoing bad token/authentication requests. I think his name is Babar.

What we don’t know

We don’t have any confirmation of anything at this point, which is frankly silly unless we are missing some major piece of the puzzle. Until then it’s reasonable to assume a single sophisticated attacker (with a very tasty national cuisine), and compromise of token seeds/records. This reduces the target pool and means most people should be in good shape with the practices we previously recommended (updated below). One big unknown is when this happened. That’s important, especially for high-value targets, as it could mean they have been under attack for a while, and opponents might have harvested some credentials via social engineering or other means already. We also don’t know why RSA isn’t simply telling us what they lost. With all these recommendations it’s clear that the attacker still needs to be sophisticated to pull off more attacks with the SecurID data, and needs to have that data, which means customer risk is unlikely to increase if they reveal more. This isn’t like a 0-day vulnerability, where merely knowing it’s out there is a path to exploitation.
More information now will only reduce customer risk.

What you need to do

Here are our updated recommendations. Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:

  • Don’t panic. Although we don’t know a lot more, we have a strong sense of the attacker and the vulnerability. Most of you aren’t at risk if you follow RSA’s recommendations. Many of you aren’t on the target list at all.
  • Talk to your RSA representative and pressure them for increased disclosure.
  • Read the RSA SecureCare documentation. Among other things, it provides the specific things to look for in your logs.
  • Let your users with SecurIDs know something is up and not to reveal any information about their tokens.
  • Assume SecurID is no longer effective. Review passwords/PINs tied to SecurID accounts and make sure they are strong (if possible).
  • If you change settings to use long PINs, you need to get an update script from RSA (depending on your product version) so the update pushes out properly.
  • If you are a high-value target, force a password change for any accounts with privileges that could be seriously damaging (e.g., admins).
  • Consider disabling accounts that don’t use a password or PIN.
  • Set authentication attempt lockouts (3 tries to lock an account, or similar).

The biggest changes are a little more detail on what to look for, which supports our previous assumptions. That, and my belief that their use of the term APT is accurate.

Open questions

I will add my own answers where we have them:

  • While we don’t need all the details, we do need to know something about the attacker to evaluate our risk. Can you (RSA) reveal more details? Not answered, but reading between the lines this looks like true APT.
  • How is SecurID affected, and will you be making mitigations public? Partially answered. More specific mitigations are now published, but we still don’t have full information.
  • Are all customers affected, or only certain product versions and/or configurations? Answered – see the SecureCare documentation, but it seems to be all current versions.
  • What is the potential vector of attack? Unknown, so we are still assuming it’s lost token records/seeds, which means the attacker needs to gather other information to successfully make an improper authentication request.
  • Will you, after any investigation is complete, release details so the rest of us can learn from your victimization? Answered. An RSA contact told me they have every


Network Security in the Age of *Any* Computing: Index of Posts

It’s hard to believe, but we have wrapped up the initial research on this series dealing with how network security evolves, given the need to provide access to critical information at any time, from anywhere, on any device. We call it any computing. We’ve dealt with the risks, and how enforcement and policies will change. And we’ve talked quite a bit about integrating these enforcement points into the existing network and security infrastructure. Finally, we wrapped the series yesterday with Quick Wins, about the process of selecting and implementing these technologies. So here is the index of posts. Enjoy.

  • The Risks
  • Containing Access
  • Enforcement
  • Policy Granularity
  • Integration
  • Quick Wins

If you missed any of these posts, check out our Complete Feed on the web or via RSS. Then you’ll be sure to get everything we publish. The next step is to assemble these posts, massage them a bit, have someone who knows how to write edit the whole thing, and then publish it as a white paper. That should happen over the next two weeks. Stay tuned – we’ll post the paper’s availability right here.


How Enterprises Can Respond to the RSA/SecurID Breach

We have gotten a bunch of questions about what people should do, so I thought I would expand on the advice in our last post, linked below. Since we don’t know for sure who compromised RSA, nor exactly what was taken, nor how it could be used, we can’t make an informed risk decision. If you are in a high-security/highly-targeted industry you probably need to make changes right away. If not, some basic precautions are your best bet.

Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:

  • Don’t panic. We know almost nothing at this point, and thus all we can do is speculate. Until we know the attacker, what was lost, how SecurID was compromised (assuming it was), and the potential attack vector, we can’t make an informed risk assessment.
  • Talk to your RSA representative and pressure them for this information.
  • Assume SecurID is no longer effective. Review passwords tied to SecurID accounts and make sure they are strong (if possible).
  • If you are a high-value target, force a password change for any accounts with privileges that could be overly harmful (e.g., admins).
  • Consider disabling accounts that don’t use a password or PIN.
  • Set password attempt lockouts (3 tries to lock an account, or similar); a minimal sketch of spotting repeated failures in your authentication logs follows this post.

I hope we’re wrong, but that’s the safe bet until we hear more. And remember, it isn’t like Skynet is out there compromising every SecurID-‘protected’ account in the world.
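As a minimal sketch of the lockout advice above (and of the “look in your logs” guidance in the RSA post earlier), here is a hypothetical Python example that counts failed authentication attempts per account from a log export and flags anything at or over a lockout threshold. The CSV format, column names, and result string are invented for illustration; use whatever your authentication manager actually records.

```python
import csv
from collections import Counter

LOCKOUT_THRESHOLD = 3  # e.g., 3 failed attempts before locking an account

def accounts_to_review(log_path):
    """Flag accounts with repeated failed authentication attempts.

    Assumes a hypothetical CSV export with 'user' and 'result' columns,
    where failures are recorded with a result beginning "AUTH_FAILED".
    """
    failures = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["result"].startswith("AUTH_FAILED"):
                failures[row["user"]] += 1
    return [user for user, n in failures.items() if n >= LOCKOUT_THRESHOLD]

# Usage sketch:
# for user in accounts_to_review("auth_export.csv"):
#     print(f"Review and consider locking: {user}")
```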


The Problem with Open Source in Commercial Software

One of the more interesting results from the Pwn2Own contest at CanSecWest was the exploitation of a BlackBerry using a WebKit vulnerability. RIM just learned a lesson that Apple (and others) have been struggling with for a few years now. While I don’t think open code is inherently more or less secure than proprietary code, any time you include external code in your platform you are intrinsically tied to whoever maintains that code. This is bad enough for applications and plugins like Adobe Flash and Acrobat/Reader, but it is really darn ugly for something like Java (a total mess from a security standpoint). While I don’t know if it was involved in this particular hack, one of the bigger problems with using external code is when a vulnerability is discovered and released (or even patched) before you include the patch in your own distribution. Many of the other issues around external code are easier to manage, but Apple clearly illustrates what appears to be the worst one: the delay between the initial release of patches for open projects (including WebKit, driven by Apple) and their own patches – often months later. During this window, the open source repository shows exactly what changed, and thus points directly at their own vulnerability. As Apple has shown – even with WebKit, which it drives – this is a serious problem, seriously aggravated by the wait for patch delivery.

At this point I should probably make clear that I don’t think including external code (even open source) is bad – merely that it brings this pesky security issue, which requires management. There are three ways to minimize this risk:

  • Patch early and often. Keep the window of vulnerability for your platform/application as short as possible by burning the midnight oil once a fix is public.
  • Engage deeply with the open source community your code comes from. Preferably have some of your people on the core team, which only happens if they actually contribute something of significance to the project. Then prepare to release your patch at the same time the primary update is released (don’t patch before – that might well break trust).
  • Invest in anti-exploitation technologies that hopefully mitigate any vulnerabilities, no matter the origin.

The real answer is you need to do all three. Issue timely fixes when you get caught unaware, engage deeply with the community you now rely on, and harden your platform.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.