
ESF: Endpoint Compliance Reporting

You didn’t think we could get through an 11-part series about anything without discussing compliance, did you? No matter what we do from a security context – whatever the catalyst, budget center, or end goal – we need to substantiate implemented controls. We can grind our teeth and curse the gods all we want, but security investments are contingent on some kind of compliance driver. So we need to focus on documentation and reporting for everything we do. Further, we discussed operational efficiencies in the security programs post, and the only way to get any kind of leverage from an endpoint security program is to automate the reporting.

Document what?

First we need to understand what needs to be documented from an endpoint perspective for the regulations/standards/guidance you deal with. You must be able to document the process/procedures of your endpoint program, as well as the data substantiating the controls. Process either exists or it doesn’t, so that documentation should be straightforward to produce. On the other hand, figuring out which data types corroborate which controls and apply to which standards requires a big matrix to handle all the permutations and combinations. There are two ways to do this:

Buy it – Many of the IT GRC tools out there (and don’t get us started on the value of IT GRC tools) help to manage the workflow of a compliance program. The key capability here is the built-in map, which connects technical controls to regulations, ostensibly so you don’t have to. But these tools cost money and provide limited value.

Build it – The other option involves going through your regulations and figuring out the relevant controls. This is about as fun as a root canal, but it has to be done. More likely, you can start with something your buddies, auditor, or vendors have. Vendors have excellent motivation to figure out how their products – representing a variety of security controls – map to the various regulations their customers need to address. The data is out there – you just have to assemble it.

Actually, there is a third option: just license the content from an organization like the Unified Compliance Framework folks. They license a big-ass spreadsheet with the map, and their pricing is rather economical.

Packaging

Now that you know what you need to report on, how do you do it? This question is bigger than your endpoint security program, and applies to every security program you run. We recommend you think architecturally. You’ve got certain domains of controls – think network, endpoint, data center, applications, etc. You want to put together a few things for each domain to make the auditor happy:

Control list – Go back to your control maps and make a big list of all the controls required for the auditor’s checklist (they all have checklists). Make sure the auditor buys into your list, and that you aren’t missing anything.

Logical architecture – Show graphically (a picture is worth a thousand words) how your controls are implemented. Right – every control on the list should appear on the logical architecture.

Data – You didn’t really think the auditor would just believe your architecture diagram, did you? Now you need the data from each of your systems (endpoint suite, configuration management, full disk encryption, etc.) to show that you’ve actually implemented the controls. Your vendor likely has a pre-built report for your regulation, so you shouldn’t have to do a lot of manual report generation.
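If you go the build-it route, the map ultimately reduces to a lookup structure you can generate checklists and reports from. Here’s a minimal sketch of the idea in Python – the control names and regulation sections are purely illustrative, not a real map:

```python
# Hypothetical control-to-regulation map. The sections cited are
# illustrative only -- build the real map from the actual standards
# (or license one, e.g. from the UCF).
CONTROL_MAP = {
    "full_disk_encryption": ["PCI-DSS 3.4", "HIPAA 164.312(a)(2)(iv)"],
    "anti_malware":         ["PCI-DSS 5.1", "HIPAA 164.308(a)(5)(ii)(B)"],
    "patch_management":     ["PCI-DSS 6.1"],
}

def controls_for(regulation_prefix):
    """List the controls that substantiate a given regulation."""
    return sorted(
        control
        for control, regs in CONTROL_MAP.items()
        if any(r.startswith(regulation_prefix) for r in regs)
    )

# Which endpoint controls go on the PCI assessor's checklist?
print(controls_for("PCI-DSS"))
```

The hard part is populating the map; once it exists, producing the control list for any given regulation (and tying each control to its substantiating data source) is mechanical.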
To be clear, one of the value propositions of IT GRC and other compliance automation products like log management/SIEM is to aggregate all this information (not just from the endpoint program, but from all your programs) and spit out an integrated report. For the most part, with a bit of angst in deployment, these tools can help reduce the burden of preparing for frequent audits across multiple regulations for global enterprises. The question to answer is whether the tool can pay for itself in terms of saved time and effort – is the ROI sufficient?

Dealing with deficiencies

One other thing to consider is the reality of an audit pointing out a specific deficiency in your endpoint security program. The auditor/assessor is there to find problems, and likely they will. But that doesn’t mean the auditor is right. Yes, we said it. Sometimes auditors take liberties and/or subjectively decide how to interpret a specific regulation. If there is a specific reason you decided to either bypass a control – or more likely, implement a compensating control – make your case. In the event (however unlikely) there is a legitimate deficiency, you need to fix it. Welcome, Captain Obvious! During the next audit, first go through the list of previous deficiencies and how you’ve remediated them. Make a big deal of addressing the deficiencies, which will get the audit off on the right foot.

What’s next?

We’ll cap off the Endpoint Security Fundamentals series by talking about incident response. Stay tuned for the exciting conclusion. 😉

Other posts in the Endpoint Security Fundamentals Series:
Introduction
Prioritize: Finding the Leaky Buckets
Triage: Fixing the Leaky Buckets
Controls: Update and Patch
Controls: Secure Configurations
Controls: Anti-Malware
Controls: Firewall, HIPS, and Device Control
Controls: Full Disk Encryption
Building the Endpoint Security Program


Incite 4/14/2010: Just Think

As numb as we are to most advertising (we are hit with thousands of advertising exposures every day), sometimes an ad campaign is memorable and really resonates. No, seeing Danica Patrick on a massage table doesn’t qualify. But Apple’s Think Different campaign really did. At that point, Apple was positioning to the counter-culture, looking for folks who didn’t want to conform. Those who had their own opinions, but needed a way to set them loose on the world. Of course, we all want to think we are more than just cogs in the big machine and that we do matter. So the campaign resonated.

But nowadays I’m not so worried about thinking differently – I’m worried about thinking at all. You see, we live in a world of interruption and multitasking. There is nowhere to hide any more, not even at 35,000 feet. Flying used to be my respite. 2-5 hours of solitude. You know, put in the ear buds, crank up the iPod, and tune out. Maybe I’d catch up on some writing or reading. Or even at the risk of major guilt, I’d get some mental floss (I’m plowing through the Daniel Silva series now) and crank through some fiction on the flight. Or Lord help me, sometimes I’d just sit and think. An indulgence I don’t partake in nearly enough during my standard routine.

Yet through the wonders of onboard WiFi, you can check email, surf the Web, tweet to your friends (yo, dog, I’m at 30K feet – and you are not!), or just waste time. All for $9.95 per flight. What a bargain. And if you absolutely, positively need to send that email somewhere over Topeka, then the $9.95 is money well spent. Yet in reality, I suspect absolutely, positively means when you get to your destination. What you don’t see is the opportunity cost of that $9.95. Not sure you can put a value on spending 3 hours battling spies, or catching up on some journaling, or even revisiting your plans for world domination. On my way back from RSA, I did use the onboard WiFi, and to be honest it felt like a piece of me died. I was pretty productive, but I didn’t think, and that upset me. The last bastion of solitude was gone, but certainly not forgotten.

So yes, I’m writing this from 34,701 feet somewhere above New Mexico. But my battery is about dead, and that means it’s time to indulge. There are worlds to dominate, windmills to chase, strategies to develop, and I can’t do that online. Now quiet down, I need to think… – Mike.

PS: Good luck to AndyITGuy, who is leaving his ATL digs to head to Cincy. Hopefully he’ll keep writing and plug into the security community in Southern Ohio.

Photo credits: “think___different” originally uploaded by nilson

Incite 4 U

Porky, Risk Management, Pig – Let’s all welcome Jack Jones back to bloggy land, and he restarts with a doozy – basically saying risk management tools are like putting lipstick on a pig. Being a vegetarian and thinking about actually putting lipstick on a pig, I can only think of the truism from Jules in Pulp Fiction: “we’d have to be talkin’ about one charming motherf***ing pig.” But I’ll summarize Jack’s point more succinctly: Garbage In = Garbage Out. And even worse, if the analysis and the outcomes and the quantification are lacking, then it’s worse than garbage out: it’s sewage out. But senior management wants a number when they ask about risk, and the weak security folks insist on giving it to them, even though it’s pretty much arbitrary. Off soapbox now. – MR

Craponomics – Repeat after me: it’s all about the economics. (I’m starting to wish I took one of those econ classes in school.)
According to the New York Times, lenders sort of ignore many of the signs of ID theft because they’d rather have the business. The tighter the fraud controls, the fewer people (legitimate and criminal) they can lend to, and the lower the potential profits. It’s in their interest to tolerate a certain level of fraud, even if that hurts ID theft victims. Remember, the lenders are out to protect themselves, not you. Can you say moral hazard? – RM

Partly Paranoid, with a chance of PR – From what I hear, Google is now paranoid about security. Call me a cynic, but when someone trashes your defenses, is your response to use more web-based computing products and services, like Google Chrome? I am sure they are thinking they’ll modernize their defenses with Chrome, and all those old threat vectors will be magically corrected. You know, like XSS and injection attacks. I am thinking, “Someone broke into my company, now Chrome’s source code is suspect until I can prove attackers did not gain access to the source code control system.” Give Google credit for disclosure, and odds are they will be more secure with Chrome, but that was just a stepping stone in the process. I am far more interested in the steps taken to provide redundant security measures, and perhaps some employee education on anti-phishing and security – something that helps after an employee’s browser is compromised. There will always be another browser hack, so don’t tell me the answer is Chrome. Blah. – AL

Yes, you are an addict… – It was funny to read Chris Nickerson’s post on FUDSEC about being a security addict, especially since that’s the entire premise of the Pragmatic CSO. But we look at the problem from different perspectives. Chris is right in pointing out that although we security folks tend to be powerless, that doesn’t mean we are helpless. It’s an important nuance. Personally I found his 12 steps lacking, especially compared to mine. But he also was working within the context of one blog post, and I wrote a book. All kidding aside, there are things we can control and things we can’t. Security is a hard job and


ESF: Building the Endpoint Security Program

Over the previous 8 posts in this Endpoint Security Fundamentals series, we’ve looked at the problem from the standpoint of what to do right away (Prioritize and Triage) and the Controls (update software and patch, secure configuration, anti-malware, firewall, HIPS and device control, and full disk encryption). But every experienced security professional knows a set of widgets doesn’t make a repeatable process, and it’s really the process and the people that make the endpoints secure. So let’s examine how we take these disparate controls and make them into a program.

Managing Expectations

The central piece of any security program is making sure you understand what you are committing to and over-communicating your progress. This requires a ton of meetings before, during, and after the process to keep everyone on board. More importantly, the security team needs a standard process for communicating status, surfacing issues, and ensuring there are no surprises in task completion or results. The old adage about telling them what you are going to do, doing it, and then telling them what you did, is absolutely the right way to handle communications. Forget the ending at your own peril.

Defining success

The next key aspect of the program is specifying a definition for success. Start with the key drivers for protecting the endpoints in the first place. Is it to stop the proliferation of malware? To train users? To protect sensitive data on mobile devices? To improve operational efficiency? If you are going to spend time and money, or allocate resources, you need at least one clear driver/justification. Then use those drivers to define metrics, and operationalize your process based on them. Yes, things like AV update efficiency and percentage of mobile devices encrypted are uninteresting, but you can trend off those metrics. You can also set expectations at the front end of the process about acceptable tolerances for each one. Think about the relevant incident metrics. How many incidents resulted from malware? User error? Endpoint devices? These numbers have impact – whether good or bad. And ultimately it’s what the senior folks worry about. Operational efficiency is one thing – incidents are another. These metrics become your dashboard when you are communicating to the muckety-mucks. And remember to use pie charts. We hear CFO-types like pie charts. Yes, I’m kidding.
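You don’t need fancy tooling to start trending those numbers, either. A minimal sketch, assuming you can export per-period counts from your endpoint consoles – the field names and figures below are invented for illustration:

```python
# Hypothetical monthly exports from the endpoint consoles -- field names
# and numbers are invented for illustration.
monthly = [
    {"month": "2010-01", "endpoints": 2000, "av_current": 1840,
     "fde_deployed": 610, "malware_incidents": 14},
    {"month": "2010-02", "endpoints": 2010, "av_current": 1910,
     "fde_deployed": 840, "malware_incidents": 9},
    {"month": "2010-03", "endpoints": 2025, "av_current": 1965,
     "fde_deployed": 1100, "malware_incidents": 11},
]

AV_TOLERANCE = 0.95  # the acceptable tolerance agreed on up front

for m in monthly:
    av_pct = m["av_current"] / m["endpoints"]
    fde_pct = m["fde_deployed"] / m["endpoints"]
    flag = "" if av_pct >= AV_TOLERANCE else "  <-- below tolerance"
    print(f"{m['month']}: AV current {av_pct:.1%}, "
          f"FDE deployed {fde_pct:.1%}, incidents {m['malware_incidents']}{flag}")
```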
User training

Training is the third rail of security, and needs discussion. We are fans of training. But not crappy, check-the-box-to-make-the-auditor-happy training. Think along the lines of phishing your own users, and using other social engineering tactics to show employees how exposed they are. Good training is about user engagement. Unfortunately most security awareness training sucks. Keep the goals in mind. The end user is the first line of defense (and for mobile professionals, unfortunately also the last), so we want to make sure they understand what an attack looks like and what to do if they think they might have a problem. They don’t have to develop kung fu, they just need to understand when they’ve gotten kicked in the head. For more information and ideas, check out Rich’s FireStarter from Monday.

Operational efficiencies

Certainly one of the key ways to justify the investment in any kind of program is via operational efficiencies, and in the security business that means automation whenever and wherever possible. So think about the controls we discussed through this series, and how to automate them. Here’s a brief list:

Update and Patch, Secure Configurations – Tools to automate configuration management kill these two birds with one stone. You set a policy, and can thus both enforce standard configurations and keep key software updated.

Anti-malware, FW/HIPS – As part of the endpoint suites, enforcing policies on updates, software distribution, and policy settings is fairly trivial. Here is the leverage (and the main justification) for consolidating vendors on the endpoint – just beware folks who promise integration, but fail to deliver useful synergy.

Device control, full disk encryption, application white listing – These technologies are not as integrated into the endpoint suites at this point, but as the technologies mature, markets consolidate, and vendors actually get out of their own way and integrate the stuff they buy, this will get better.

Ultimately, operational efficiencies are all about integrating management of the various technologies used to protect the endpoint.

Feedback loops

The other key aspect of the program is making sure it adapts to the dynamic nature of the attack space. Here are a few things you should be doing to keep the endpoint program current:

Test controls – We are big fans of hacking yourself, which means using hacking tools to test your own defenses. Check out tools like Metasploit and the commercial offerings, and send phishing emails to your employees trying to get them to visit fake sites – which presumably would pwn them. This is a critical aspect of the security program.

Endpoint assessment – Figure out to what degree your endpoints are vulnerable, usually by scanning devices on connect with a NAC-type product, or with a scanner. Look for patterns to identify faulty configuration, ineffective patching, or other gaps in the endpoint defenses.

Configuration updates – A couple times a year new guidance emerges (from CIS, NIST, etc.) recommending changes to standard builds. Evaluating those changes, and figuring out whether and how the builds should change, helps ensure the endpoint protection is always adapted to current attacks.

User feedback – Finally, you need to pay attention to the squeaky wheels in your organization. Not to cave to every complaint, but to factor in whether they are griping about draconian usage policies – and more importantly whether some of the controls are impeding their ability to do their jobs. That’s what we are trying to avoid.

The feedback loops will indicate when it’s time to revisit the controls, perhaps changing standard builds or considering new controls or tools. Keep in mind that without process and failsafes to keep the process relevant, all the technology in the world can’t help. We’ll wrap up


ESF: Controls: Full Disk Encryption

It happens quickly. An end user just needed to pick up something at the corner store or a big box retailer. He was in the store for perhaps 15 minutes, but that was plenty of time for a smash and grab. And then your phone rings, a laptop is gone, and it had information on about 15,000 customers. You sigh, hang up the phone, and call the general counsel – it’s disclosure time. Sound familiar? Maybe this has been you. It likely will be, unless you proactively take action to make sure that the customer data on those mobile devices cannot be accessed by whoever buys the laptop on the gray market. That’s right, you need to deploy full disk encryption (FDE) on the devices. Unless you enjoy disclosure and meeting with lawyers, that is.

Features

Ultimately, encryption isn’t very novel. But managing encryption across an enterprise is, so key management and ease of use end up being the key features that generally drive FDE. As we’ve harped throughout this series, integration of that management with the rest of the endpoint functions is critical to gaining leverage and managing all the controls implemented on the endpoints. Of course, that’s looking at the issue selfishly from the security professional’s perspective. Ultimately the success of the deployment depends on how transparent it is to users. That means it needs to fit in with the authentication techniques they already use to access their laptops. And it needs to just work. Locking a user out of their data – especially an important user at an inopportune time – will make you a pretty unpopular person. Finally, don’t forget about those backups or software updates. If your encryption breaks your backups (and you are backing up all those laptops, right?), it’s a quick way to find yourself in the unemployment line. Same goes for having to tell the CIO everyone needs to bring their laptops back to the office every Patch Tuesday to get those updates installed.

Integration with Endpoint Suites

Given the natural order of innovation and consolidation, the industry has seen much consolidation of FDE solutions by endpoint vendors. Check Point started the ball rolling by acquiring Pointsec; shortly afterwards Sophos acquired Utimaco and McAfee acquired SafeBoot, which of course gives these vendors the ability to bundle FDE with their endpoint suites. Now bundling on the purchase order is one thing, but what we are really looking for is bundling from a management standpoint. Can the encryption keys be managed by the endpoint security management console? Is your directory supported natively? Can the FDE policies be set up from the same interface you use for host firewall and HIPS policies? Unless this level of integration is available, there is little leverage in using FDE from your endpoint vendor.

Free (as in beer?)

Like all good innovations, the stand-alone companies get acquired, and then the capability tends to get integrated into the operating system – which is clearly the case with FDE. Both Microsoft BitLocker and Apple FileVault provide the capability to encrypt at the operating system level (BitLocker encrypts the full drive, while FileVault encrypts the user’s home folder). Yes, it’s free, but not really. As mentioned above, encryption isn’t really novel anymore – it’s the management of encryption that makes the difference. Neither Microsoft nor Apple currently provides adequate tools to really manage FDE across an enterprise. Which means there will remain a need for third party managed FDE for the foreseeable future, and that also means the endpoint security suite is the best place to manage it.
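In the meantime, you can at least verify basic coverage with what the OS gives you. A rough sketch that shells out to BitLocker’s manage-bde status report on Windows – treat it as illustrative, since parsing human-readable output like this is fragile and varies by OS version and language:

```python
import subprocess

def bitlocker_protection_on(drive="C:"):
    """Rough check of BitLocker status on Windows via manage-bde.

    Assumes English output containing a 'Protection Status' line, and
    requires an elevated prompt; parsing a human-readable report like
    this is fragile, so treat it as a sketch.
    """
    out = subprocess.run(
        ["manage-bde", "-status", drive],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Protection Status" in line:
            return "Protection On" in line
    return False

if __name__ == "__main__":
    status = "encrypted" if bitlocker_protection_on() else "NOT encrypted -- flag for follow-up"
    print(f"C: volume is {status}")
```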
We expect further integration of FDE into endpoint security suites, further consolidation of the independent vendors, and ultimately commoditization of the capability. So we’ll joke over beers in a few years about how you used to pay separately for full disk encryption.

Now that we’ve examined the controls we use to protect the endpoints, we need to build a systematic program to ensure these controls are deployed, enforced, and reported on. That’s our topic for the next two posts, as we build the endpoint security program and consider what kind of reporting we need to keep the auditors happy.

Other posts in the Endpoint Security Fundamentals Series:
Introduction
Prioritize: Finding the Leaky Buckets
Triage: Fixing the Leaky Buckets
Controls: Update and Patch
Controls: Secure Configurations
Controls: Anti-Malware
Controls: Firewall, HIPS, and Device Control


ESF: Controls: Firewalls, HIPS, and Device Control

Popular perception of endpoint security revolves around anti-malware. But they are called suites for a reason – other security components ship in these packages, which provide additional layers of protection for the endpoint. Here we’ll talk about firewalls, host intrusion prevention, and USB device control.

Firewalls

We know what firewalls do on the perimeter of the network: selectively block traffic that goes through gateways by port and protocol. The functionality of a host firewall on an endpoint is similar. It allows an organization to enforce a policy governing what traffic the device can accept (ingress filtering) and transmit (egress filtering). Managing the traffic to and from each endpoint serves a number of purposes, including hiding the device from reconnaissance efforts, notifying the user or administrators when applications attempt to access the Internet, and monitoring exactly what the endpoints are doing. Many of these capabilities are available separately on the corporate network, but when traveling or at home and not behind the corporate perimeter, the host firewall is the first defense against attacks. Of course, a host firewall (like everything else that runs on an endpoint) takes up resources, which can be a problem on older or undersized machines. It also bears consideration that alerts multiply, especially when you have a couple thousand endpoints forwarding them to a central console, so some kind of automated alert monitoring becomes critical. Although pretty much every vendor bundles a host firewall with their endpoint suite nowadays, the major operating systems also provide firewall options. Windows has included a firewall since XP, but keep in mind that the XP firewall does not provide egress (outbound) filtering – remedied in Windows Vista. Mac OS X 10.5 Leopard added a ‘socket’ firewall to manage application listeners (ingress), and deprecated the classic ipfw network firewall, which is still available. As with all endpoint capabilities, just having the feature isn’t enough, since the number of endpoints to be managed puts a real focus on managing the policies. This makes policy management more important than firewall engine details.

Host Intrusion Prevention Systems (HIPS)

We know what network intrusion detection/prevention products do, in terms of inspecting network traffic and looking for attacks. Similarly, host intrusion detection/prevention capabilities look for attacks by monitoring what’s happening on the endpoint. This can include application behavior, activity logs, endpoint network traffic, system file changes, Windows registry changes, processes and/or threads, memory allocation, and pretty much anything else. The art of making host intrusion prevention work is to set up the policies to prevent malware infection, without badly impacting the user experience or destroying the signal-to-noise ratio of alerts coming into the management console. Yes, this involves tuning, so you start with the product’s default settings (hopefully on a test group) and see what works and what doesn’t. You should be able to quickly optimize the policy. Given the number of applications and activities at each endpoint, you can go nuts trying to manage these policies, which highlights the importance of the standard builds (as described in Controls: Secure Configurations). Start with 3-4 different policies, and then manage others by exception.
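To make the monitoring side of HIPS concrete, here’s a stripped-down sketch of the file integrity idea: baseline the hashes of sensitive files, then flag changes. A real agent hooks the OS and watches in real time; this just snapshots and compares:

```python
import hashlib
import os

# Illustrative watch list -- a real HIPS policy covers far more.
WATCHED = [r"C:\Windows\System32\drivers\etc\hosts"]

def sha256_of(path):
    """Hash a file in chunks so large files don't blow up memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot(paths):
    """Record the current hash of each watched file that exists."""
    return {p: sha256_of(p) for p in paths if os.path.exists(p)}

def diff(baseline, current):
    """Report files whose hashes changed (or appeared) since the baseline."""
    return [p for p, h in current.items() if baseline.get(p) != h]

# Usage: take a baseline at build time, then compare on a schedule.
# baseline = snapshot(WATCHED)
# ... later ...
# for changed in diff(baseline, snapshot(WATCHED)):
#     print(f"ALERT: {changed} modified")
```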
Keep in mind that tuning the product for servers is totally different, as the policies will need to be tailored for the very different applications running on servers. Currently, all the major endpoint suites include simple HIPS capabilities. Some vendors also offer a more capable HIPS product – typically targeting server devices, which are higher profile targets and subject to different attacks.

USB Device Control

Another key attack vector for both data compromise and malware proliferation is the USB ports on endpoint devices. In the old days, you’d typically know when someone brought in a huge external drive to pilfer data. Nowadays many of us carry a 16GB+ drive at all times (yes, your smartphone is a big drive), so we’ve got to control USB ports to address this exposure. Moreover, we’ve all heard stories of social engineers dropping USB sticks in the parking lot and waiting for unsuspecting employees to pick them up and plug them in. Instant pwnage 4U! So another important aspect of protecting endpoints is defining which devices can connect to a USB port and what those devices can do. This has been a niche space, but as more disclosure is required for data loss, organizations are getting more serious about managing their USB ports. As with all other endpoint technologies, device control adds significant management overhead for keeping track of all the mobile devices, USB sticks, etc. The products in the space include management consoles to ease the burden, but managing thousands of anything is non-trivial. Right now device control is a discrete function, but we believe these niche products will also be subsumed into the endpoint suites over the next two years. In the meantime, you may be able to gain some leverage by picking a device control vendor partnered with your endpoint suite provider. Then you should at least be able to centralize the alerts, even if you don’t get deeper management integration.

Management Leverage

Though we probably sound like a broken record at this point, keep in mind that each additional security application/capability (control) implemented on the endpoint devices increases the management burden. So when evaluating technology for implementation, be sure to assess the additional management required and the level of integration with your existing endpoint management workflow.

We’ll wrap up our discussion of Endpoint Controls in the next post, as we discuss full disk encryption, which disclosure laws have shifted from a nice-to-have to something you need deployed – immediately.

Other posts in the Endpoint Security Fundamentals Series:
Introduction
Prioritize: Finding the Leaky Buckets
Triage: Fixing the Leaky Buckets
Controls: Update and Patch
Controls: Secure Configurations
Controls: Anti-Malware


ESF: Controls: Anti-Malware

As we’ve discussed throughout the Endpoint Security Fundamentals series, adequately protecting endpoint devices entails more than just an endpoint security suite. That said, we still have to defend against malware, which means we’ve got to figure out what is important in an endpoint suite and how to get the most value from the investment.

The Rise of Socially-Engineered Malware

To state the obvious, over the past few years malware has dramatically changed. Not just the techniques used, but also the volume. It’s typical for an anti-virus company to identify 1-2 million new malware samples per month. Yes, that’s a huge amount. But it gets worse: a large portion of malware today gets obfuscated within legitimate-looking software packages. A good example of this is fake anti-virus software. If one of your users happens to click on a link and ends up on a compromised site (by any means), a nice little window pops up telling them they are infected and need to download an anti-virus program to clean up the attack. Part of that is true – upon visiting the site a drive-by attack did compromise the machine. But in this case the antidote is a lot worse than the disease, because this new “anti-virus” package leaves behind a nasty trojan (typically ZeuS or Conficker). The folks at NSS Labs have dubbed this attack “socially-engineered malware,” because it hides the malware and preys upon the user’s penchant to install the compromised payload, with disastrous results. That definition is as good as any, so we’ll go with it.

Cloud and reputation

The good news is that the anti-malware companies are not sitting still. They continue to make investments in new detection techniques to try to keep pace. Some do better than others (check out NSS Labs’ comparative tests for the most objective and relevant testing – in our opinion anyway), but what is really clear is how broken the old blacklist, signature-based model has gotten. With 2 million malware samples per month, there is no way keeping a list of bad stuff on each device remains feasible. The three main techniques added over the past few years are:

Cloud-based Signatures – Since it’s not possible to keep a billion signatures in an endpoint agent, the vendors try to divide and conquer the issue. So they split the signature database between the agent and an online (cloud) repository. If an endpoint encounters a file not in its local store, it sends a signature to the cloud for checking against the full list. This has given the blacklist model some temporary legs, but it’s not a panacea, and the AV vendors know it.

Reputation – A technique pioneered by the anti-spam companies a few years ago involves inferring the intent of a site by tracking what that site does and assigning it a reputation score. If the site has a bad reputation, the endpoint agent doesn’t let the site’s files or executables run. Obviously this is highly dependent on the scale and accuracy of the reputation database. Reputation has become important for most security offerings, including perimeter and web filtering, in addition to anti-spam and endpoint security.

Integrated HIPS – Another technique in use today is host intrusion prevention. But not necessarily signature-based HIPS, which was the first generation. Today most HIPS looks more like file integrity monitoring, so the agent has a list of sensitive system files which should not be changed. When malware runs and tries to change one of these files, the agent blocks the request – detecting the attack.
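The cloud-signature split is easier to see in code. A deliberately simplified sketch, with a stub standing in for whatever online repository each vendor actually runs:

```python
import hashlib

# Tiny local cache of known-bad hashes shipped with the agent -- the
# value here is a placeholder, not a real signature.
LOCAL_BLACKLIST = {"<known-bad-sha256>"}

def cloud_lookup(file_hash):
    """Stand-in for the vendor's online repository. In a real agent this
    is an authenticated network call; here it's just a stub."""
    known_bad_in_cloud = set()  # hypothetical full signature list
    return file_hash in known_bad_in_cloud

def is_malware(path):
    """Check the local store first, then fall back to the cloud on a miss."""
    with open(path, "rb") as f:
        file_hash = hashlib.sha256(f.read()).hexdigest()
    if file_hash in LOCAL_BLACKLIST:   # fast path: agent-side signatures
        return True
    return cloud_lookup(file_hash)     # slow path: ask the full repository
```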
So today’s anti-malware agents attempt to detect malware both before execution (via reputation) and during execution (signatures and HIPS), so they can block attacks. But to be clear, the industry is always trying to catch up with the malware authors. Making things even more difficult, users have an unfortunate tendency to disregard security warnings, allow the flagged risky behavior, and then get pwned. This can be alleviated slightly with high-confidence detection (if we know it’s a virus, we don’t have to offer the user a chance to run it) or stronger administrative policies which don’t let users override the anti-malware software at all. But it’s still a fundamentally intractable problem.

Management is key

Selecting an anti-malware agent typically comes down to two factors: price and management. Price is obvious – plenty of upstarts want to take market share from Symantec and McAfee. They use price and an aggressive distribution channel to try to displace the incumbents. All the vendors also have migration tools, which dramatically lower switching costs. In terms of management, it usually comes down to personal preference, because all the tools have reasonably mature consoles. Some use open data stores, so customers can build their own reporting and visualization tools. For others, the built-in stuff is good enough. Architecturally, some consoles are more distributed than others, and so scale better to large enterprise operations. But anti-malware remains a commodity market. One aspect to consider is the size and frequency of signature and agent updates, especially for larger environments. If the anti-malware vendor sends 30MB updates 5 times a day, that will create problems in low-bandwidth environments such as South America or Africa.

Free AV: You get what you pay for…

Another aspect of anti-malware to consider is free AV, pioneered by folks like AVG and Avast, who claim up to 100 million users of their free products. To be clear, in a consumer context free AV can work fine. But it’s not a suite, so you won’t get a personal firewall or HIPS. There won’t be a cloud-based offering behind the tool, and it won’t use new techniques like reputation to defend against malware. Finally, there are no management tools, so you’ll have to manage every device individually, which stops being feasible beyond a handful of machines. For a number of use cases (like your mom’s machine), free AV should be fine. And to be clear, the entire intent of these vendors in giving away the anti-malware engine is


ESF: Controls: Secure Configurations

Now that we’ve established a process to make sure our software is sparkly new and updated, let’s focus on the configurations of the endpoint devices that connect to our networks. Silly configurations present another path of least resistance for the hackers to compromise your devices. For instance, there is no reason to run FTP on an endpoint device, and your standard configuration should factor that in.

Define Standard Builds

Initially you need to define a standard build, or more likely a few standard builds. Typically that means builds for desktops (with and without sensitive data), mobile employees, and maybe kiosks. There probably isn’t a lot of value in going broader than those 4 profiles, but that will depend on your environment. A good place to start is one of the accepted configuration benchmarks available in the public domain. Check out the Center for Internet Security, which produces configuration benchmarks for pretty much every operating system and many of the major applications. To see your tax dollars at work (if you live in the US anyway), also consult NIST, especially if you are in the government. Its SCAP configuration guides provide a similar enumeration of specific settings to lock down your machines. To be clear, we need to balance security with usability, and some of the configurations suggested in the benchmarks clearly impact usability. So it’s about figuring out what will work in your environment, documenting those configurations, getting organizational buy-in, and then implementing.

It also makes sense to put together a list of authorized software as part of the standard builds. You can have this authorized software installed as part of the endpoint build process, but it also provides an opportunity to revisit policies on applications like iTunes, QuickTime, Skype, and others which may not yield a lot of business value and have a history of vulnerability. We’re not saying these applications should not be allowed – you’ve got to figure that out in the context of your organization – but you should take the opportunity to ask the questions.

Anti-Exploitation

As you define your standard builds, at least on Windows, you should turn on anti-exploitation technologies. These technologies make it much harder to gain control of an endpoint through a known vulnerability. I’m referring to DEP (data execution prevention) and ASLR (address space layout randomization), though Apple is also implementing similar capabilities in their software. To be clear, anti-exploitation technology is not a panacea – as the winners of Pwn2Own at CanSecWest show us every year, especially for those applications that don’t support it (d’oh!) – but the technologies do help make it harder to exploit the vulnerabilities in compatible software.

Other Considerations

Running as a standard user – We’ve written a bit on the possibilities of devices running in standard user mode (as opposed to administrator mode), and you should consider this option when designing secure configurations, especially to help enforce authorized software policies.

VPN to Corporate – Given the reality that mobile users will do something silly and put your corporate data at risk, one technique to protect them is to run all their Internet traffic through the VPN to your site. Yes, it may add a bit of latency, but at least the traffic will run through the web gateway, so you can both enforce policy and audit what the user is doing. As part of your standard build, you can enforce this network setting.
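Whichever benchmark you adopt, it ultimately boils down to a checkable list of settings. A toy sketch of the idea – the baseline below is invented for illustration, where the real one would come from the CIS or NIST SCAP content for your platform:

```python
# Invented baseline for illustration -- derive the real one from the CIS
# benchmark or NIST SCAP content for your platform.
BASELINE = {
    "ftp_service_enabled": False,   # no reason to run FTP on an endpoint
    "firewall_enabled":    True,
    "autorun_enabled":     False,
}

def audit(observed, baseline=BASELINE):
    """Return the settings on a device that deviate from the baseline."""
    return {
        setting: (observed.get(setting), expected)
        for setting, expected in baseline.items()
        if observed.get(setting) != expected
    }

# e.g. gathered by your config management or NAC tool:
device = {"ftp_service_enabled": True, "firewall_enabled": True,
          "autorun_enabled": False}
for setting, (actual, expected) in audit(device).items():
    print(f"{setting}: found {actual}, baseline requires {expected}")
```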
Implementing Secure Configurations

Once you have the set of secure configurations for your device profiles, how do you start implementing them? First make sure everyone buys into the decisions and understands the ramifications of going down this path, especially if you plan to stop users from installing certain software or block other device usage patterns. Constantly asking for permission can be dangerously annoying, so choosing the right threshold for confirmations is a critical aspect of designing a policy. If the end users feel they need to go around the security team and their policies to get the job done, everyone loses.

Once the configurations are locked and loaded, you need to figure out how much work is required for implementation. Next you assess the existing endpoints against the configurations. Lots of technologies can do this, ranging from Windows management tools, to vulnerability scanners, to third party configuration management offerings. The scale and complexity of your environment should drive the selection of the tool.

Then plan to bring the non-compliant devices into the fold. Yes, you could just flip the switch and make the changes, but since many of the configuration settings will impact user experience, it makes sense to do a bit of proactive communication to the user community. Of course some folks will be unhappy, but that’s life. More importantly, this should help cut down help desk mayhem when some things (like running that web business from corporate equipment) stop working.

Actually making the changes brings us to automation. For organizations with more than a couple dozen machines, a pretty significant ROI is available from investing in some type of configuration management. Again, it doesn’t have to be the Escalade of products – you can even look at things like Group Policy Objects in Windows. The point is that making manual changes on devices is idiotic, so apply the level of automation that makes sense in your environment.

Finally, we also want to institutionalize the endpoint configurations, and that means we need to get devices built using the secure configuration. Since you probably have an operations team that builds the machines, they need to get the image and actually use it. But since you’ve gotten buy-in at all steps of this process, that shouldn’t be a big deal, right?

Next up, we’ll discuss the anti-malware space and what makes sense on our endpoints.

Other posts in the Endpoint Security Fundamentals Series:
Introduction
Prioritize: Finding the Leaky Buckets
Triage: Fixing the Leaky Buckets
Controls: Update and Patch


Incite 4/7/2010: Everybody Loves the Underdog

Come on, admit it. Unless you have Duke Blue Devil blood running through your veins (and a very expensive diploma on the wall) or had Duke in your tournament bracket with money on the line, you were pulling for the Butler Bulldogs to prevail in Monday night’s NCAA Men’s Basketball final. Of course you were – everyone loves the underdog.

If you think of all the great stories through history, the underdog has always played a major role. Think David taking down Goliath. Moses leading the Israelites out of Egypt. Pretty sure the betting line had long odds on both those scenarios. Think of our movie heroes, like Rocky, Luke Skywalker, and Harry Potter – the list goes on and on. None of them were supposed to win, and we love the fact that they did. We love the underdogs. Unfortunately reality intruded on our little dream, and on Monday Butler came up a bucket short. But you still felt good about the game and their team, right? I can’t wait for next year’s season to see whether the little team that could can show it wasn’t all just a fluke (remember George Mason from 2006?).

And we love our underdogs in technology, until they aren’t underdogs anymore. No one really felt bad when IBM got railed as mainframes gave way to PCs. Unless you worked at IBM, of course. Those damn blue shirts. And when PCs gave way to the Internet, lots of folks were happy that Microsoft lost their dominance of all things computing. How long before we start hating the Google? Or the Apple? It’ll happen, because there will be another upstart taking the high road and showing how our once precious Davids have become evil, profit-driven Goliaths. Yup, it’ll happen. It always does. Just think about it – Apple’s market cap is bigger than Wal-Mart’s. Not sure how you define underdog, but that ain’t it.

Of course, unlike Rocky and Luke Skywalker, the underdog doesn’t prevail in two hours over a Coke and popcorn. It happens over years, sometimes decades. But before you go out and get that Apple logo tattooed on your forearm to show your fanboi cred, you may want to study history a little. Or you may become as much a laughingstock as the guy who tattooed the Zune logo on his arm. I’m sure that seemed like a good idea at the time, asshat. The mighty always fall, and there is another underdog ready to take their place. If we learn anything from history, we should know the big dogs will always let us down at some point. So don’t get attached to a brand, a company, or a gadget. You’ll end up as disappointed as the guy who thought The Phantom Menace would be the New Hope of our kids’ generation. – Mike.

Photo credits: “Underdog Design” originally uploaded by ChrisM70 and “Zune Tattoo Guy” originally uploaded by Photo Giddy

Incite 4 U

What about Ritalin? – Shrdlu has some tips for those of us with an, uh, problem focusing. Yes, the nature of the security manager’s job makes this particularly acute, but in reality interruption is the way of the world. Just look at CNN or ESPN. There is so much going on I find myself rewinding to catch the headlines flashing across the bottom. Rock on, DVR – I can’t miss that headline about… well, whatever it was about. In order to restore any level of productivity, you need to take Shrdlu’s advice and delegate, while removing interruptions – like email notifications, IM, and Twitter. Sorry Tweeps, but it’s too hard to focus when you are tempted by links to blending an iPad. It may be counter-intuitive, but you do have to slow down to speed up at times.
– MR

Database security is a headless chicken – As someone who has been involved with database security for a while, it comes as no surprise that this study by the Enterprise Strategy Group shows a lack of coordination is a major issue. Anyone with even cursory experience knows that security folks tend to leave the DBAs alone, and DBAs generally prefer to work without outside influence. In reality, there are usually 4+ stakeholders – the DBA, the application owner/manager/developer, the sysadmin, security, and maybe network administration (or even backup, storage, and…). Everyone views the database differently, each has different roles, and half the time you also have outside contractors/vendors managing parts of it. No wonder DB security is a mess… it’s pretty darn hard when no one is really in charge (but we sure know who gets fired first if things turn south). – RM

Beware of surveys bearing gifts – The PR game has changed dramatically over the past decade. Now (in the security business anyway) it’s about sound bites, statistics, and exploit research. Without one of those three, the 24/7 media circus isn’t going to be interested. Kudos to Bejtlich, who called out BeyondTrust for trumping up a “survey” about the impact of running as a standard user. Now to be clear, I’m a fan of this approach, and Richard acknowledges the benefits of running as a standard user as well. I’m not a fan of doing a half-assed survey, but I guess I shouldn’t be surprised. It’s hard to get folks interested in a technology unless it’s mandated by compliance. – MR

e-Banking and the Basics – When I read Brian Krebs’ article on ‘e-Banking Guidance for Banks & Businesses’, I was happy to see someone offering pragmatic advice on how to detect and mitigate the surge of online bank fraud. What shocked me is that the majority of his advice was basic security and anti-fraud steps, and it was geared towards banks! They are not already doing all this stuff? Oh, crap! Does that mean most of these regional banks are about as sophisticated about security as an average IT shop – “not very”? WTF? You don’t monitor for abnormal activity already? You don’t have overlapping controls in


Anti-Malware Effectiveness: The Truth Is out There

One of the hardest things to do in security is to discover what really works. It’s especially hard on the endpoint, given the explosion of malware and the growth of social-engineering driven attack vectors. Organizations like ICSA Labs, av-test.org, and VirusBulletin have been testing anti-malware suites for years, though I don’t think most folks put much stock in those results. Why? Most of the tests yield similar findings, which means all the products are equally good. Or more likely, equally bad.

I know I declared the product review dead, but every so often you still see comparative reviews – such as Rob Vamosi’s recent work on endpoint security suites in PCWorld. The rankings of the 13 products tested are as follows (in order):

Top Picks: Norton Internet Security 2010, Kaspersky Internet Security 2010, AVG Internet Security 9.0
Middle Tier: Avast, BitDefender, McAfee, Panda, PC Tools, Trend Micro, and Webroot
Laggards: ESET, F-Secure, and ZoneAlarm

The PCWorld test was largely driven by a recent av-test.org study into malware detection. But can one lab produce enough information (especially in a single round of testing) to really understand which product works best? I don’t think so, because my research in this area has shown that 3 testing organizations can produce 10 different results. A case in point is the NSS Labs test from August of last year. Their rankings, by malware detection rate: Trend Micro, Kaspersky, Norton, McAfee, Norman, F-Secure, AVG, Panda, and ESET. Some similarities, but also a lot of differences.

More recently, NSS did an analysis of how well the consumer suites detected the Aurora attacks (PDF), which got so much air play in January. The results were less than stellar: only McAfee entirely stopped the original attack and a predictable variant two weeks out. ESET and Kaspersky performed admirably as well, but it’s bothersome that most of the products we use to protect our endpoints have awful track records like this.

If you look at the av-test ratings and then compare them to the NSS tests, the data shows some inconsistencies – especially with vendors like Trend Micro, which is ranked much higher by NSS but close to the bottom by av-test; and AVG, which is ranked well by av-test but not by NSS. So what’s the deal here? Your guess is as good as mine. I know the NSS guys, and they focus their tests pretty heavily on what they call “socially engineered malware” – legitimate-looking downloads with malicious code hidden in the packages. This kind of malware is much harder to detect than a standard malware sample that’s been on the WildList for a month. Reputation and advanced file integrity monitoring capabilities are critical to blocking socially engineered malware, and most folks believe these attacks will continue to proliferate over time.

Unfortunately, there isn’t enough detail about the av-test.org tests to really know what they are digging into. But my spidey sense tingles about the objectivity of their findings when you read this December report by av-test.org, commissioned by Trend. It concerns me that av-test.org had Trend close to the bottom in a number of recent tests, but changed their testing methodology a bit for this test, and shockingly: Trend came out on top. WTF? There is no attempt to reconcile the findings across different sets of av-test.org tests, but I’d guess it has something to do with green stuff changing hands.
Moving forward, it would also be great to see some of the application whitelisting products tested alongside the anti-malware suites – for detection, blocking, and usability. That would be interesting.

If I’m an end user trying to decide between these products, I’m justifiably confused. Personally, I favor the NSS tests – if only because they provide a lot more transparency about how they did their tests. The inconsistent information being published by av-test.org is a huge red flag for me. But ultimately you probably can’t trust any of these tests, so you have a choice to make. Do you care about the test scores or not? If not, then you buy based on what you would have bought anyway: management and price. It probably makes sense to disqualify the bottom performers in each of the tests, since for whatever reason the testers figured out how to beat them, which isn’t a good sign. In the end you will probably kick the tires yourself, pick a short list (2 or 3 packages), and run them side by side through a gauntlet of malware you’ve found in your organization. Or contract with a testing lab to run a test against your specific criteria. But that costs money and takes time, neither of which we have a lot of.

The Bottom Line

The truth may be out there, but Fox Mulder has a better chance of finding it than you. So we focus on the fundamentals of protecting not just the endpoints, but also the networks, servers, applications, and data. Regardless of the effectiveness of the anti-malware suites, your other defenses should help you both block and detect potential breaches.


ESF: Controls: Update and Patch

Running old software is bad. Bad like putting a new iPad in a blender. Bad because all software is vulnerable software, and with old software even unsophisticated bad guys have weaponized exploits to compromise it. So the first of the Endpoint Security Fundamentals technical controls is to make sure you run updated software. Does that mean you need to run the latest version of all your software packages? Can you hear the rejoicing across the four corners of the software ecosystem? Actually, it depends. What you do need to do is make sure your endpoint devices are patched within a reasonable timeframe. Like one minute before the weaponized exploit hits the grey market (or shows up in Metasploit).

Assess your (software) assets

Hopefully you have some kind of asset management thing which can tell you what applications run in your environment. If not, your work gets a bit harder, because the first step requires you to inventory software. No, it’s not about license enforcement – it’s about risk assessment. You need to figure out your software vendors’ track records on producing secure code, and then on patching vulnerabilities as they are discovered. You can use sites like US-CERT and Secunia, among others, to figure this out. Your anti-malware vendor also has a research site where you can look at recent attacks by application. You probably hate the word prioritize already, but that’s what we need to do (again). Based on the initial analysis, stack rank all your applications and categorize them into a few buckets:

High Risk: These applications are in use by 50M+ users, making them high-value targets for the bad guys. Frequent patches are issued. Think Microsoft stuff – all of it – plus Adobe Acrobat, Firefox, etc.

Medium Risk: Anything else that has a periodic patch cycle and is not high-risk. This should be a big bucket.

Low Risk: Those apps which aren’t used by many (security by obscurity) and tend to be pretty mature, meaning they aren’t updated frequently.

Before we move on to the updating/patching process: while you assess the software running in your environment, it makes sense to ask whether you really need all that stuff. Even low-risk applications provide attack surface for the bad guys, so eliminating software you just don’t need is a good thing for everyone. Yes, it’s hard to do, but that doesn’t mean we shouldn’t try.

Defining the Update/Patch Process

Next you need to define what your update and patching process is going to be – and yes, you’ll have three different policies, for high, medium, and low risk applications. The good news is your friends at Securosis have already documented every step of this process, in gory detail, through our Patch Management Quant research. At a very high level, the cycle is: Monitor for Release/Advisory, Evaluate, Acquire, Prioritize and Schedule, Test and Approve, Create and Test Deployment Package, Deploy, Confirm Deployment, Clean up, and Document/Update Configuration Standards. Within each phase of the cycle there are multiple steps. Not every step defined in PM Quant will make sense for your organization, so you can pick and choose what’s required. The requirement is to have a defined, documented, and operational process, and to have answered the following questions for each of your categories:

Do you update to the latest version of the application? How soon after its release?
When a patch is released, how soon should it be applied?
What level of testing is required before deployment?
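Once those questions are answered, the policy itself is just data. A sketch of what the three buckets might look like once written down – the patch windows and app assignments here are placeholders, not recommendations:

```python
# Placeholder patch windows -- set these from your own risk tolerance
# and resources, not from this example.
PATCH_POLICY = {
    "high":   {"patch_within_days": 3,  "testing": "smoke test on a pilot group"},
    "medium": {"patch_within_days": 14, "testing": "standard test cycle"},
    "low":    {"patch_within_days": 30, "testing": "next scheduled cycle"},
}

APP_RISK = {  # output of the risk assessment above (illustrative)
    "Adobe Acrobat": "high",
    "Firefox":       "high",
    "WinZip":        "medium",
}

def patch_policy_for(app):
    """Look up the policy bucket for an application (default: low risk)."""
    return PATCH_POLICY[APP_RISK.get(app, "low")]

policy = patch_policy_for("Adobe Acrobat")
print(f"Acrobat: patch within {policy['patch_within_days']} days, "
      f"testing = {policy['testing']}")
```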
In a perfect world, everything would be patched immediately and all software kept at the latest version. Unless you are talking about Microsoft Vista <grin>. But we all know the world isn’t perfect, and there are real economic and resource costs to tightening the patch window and buying software updates – and to discovering more bugs in the patches themselves. So all these factors need to be weighed when defining the process and policies. There is no right or wrong answer – it’s a matter of balancing economic reality against risk tolerance. Also keep in mind that patching remote and mobile users is a different animal, and you have to factor that into the process. Many of these folks connect infrequently and may not have access to high-bandwidth connections. Specifying a one-day patch window for installing a 400MB patch at a mobile office in the jungle may not be practical.

Tools and Automation

Lots of tools can help you automate your software updating and patching process. They range from full-fledged asset and configuration management offerings to fairly simple patching products. It’s beyond the scope of this series to really dig into the nuances of configuration/patch management, but we’ll just say here that any organization with more than a couple hundred users needs a tool. This is a topic we’ll cover in detail later this year.

The next endpoint control we’ll discuss is Secure Configurations, so stay tuned.

Other posts in the Endpoint Security Fundamentals Series:
Introduction
Prioritize: Finding the Leaky Buckets
Triage: Fixing the Leaky Buckets


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.