Tuesday, June 15, 2010

Top 5 Security Tips for Small Business

By Rich

We in the security industry tend to lump small and medium businesses together into “SMB”, but there are massive differences between a 20-person retail outlet and even a 100-person operation. These suggestions are specifically for small businesses with limited resources, based on everything we know about the latest threats and security defenses.

The following advice is not conditional – there really isn’t any safe middle ground, and these recommendations aren’t very expensive. These are designed to limit the chance you will be hit with attacks that compromise your finances or ability to continue business operations, and we’re ignoring everything else:

  1. Update all your computers to the latest operating systems and web browsers – this is Windows 7 or Mac OS X 10.6 as of this writing. On Windows, use at least Internet Explorer 8 or Firefox 3.6 (Firefox isn’t necessarily any more secure than the latest versions of IE). On Macs, use Firefox 3.6. Most small businesses struggle with keeping malware off their computers, and the latest operating systems are far more secure than earlier versions. Windows XP is nearly 10 years old at this point – odds are most of your cars are newer than that.
  2. Turn on automatic updates (Windows Update, or Software Update on Mac) and set them to check and automatically install patches daily. If this breaks software you need, find an alternative program rather than turning off updates. Keeping your system patched is your best security defense, because most attacks exploit known vulnerabilities. But since those vulnerabilities are converted to attacks within hours of becoming public (when the patch is released, if not earlier), you need to patch as quickly as possible.
  3. Use a dedicated computer for your online banking and financial software. Never check email on this system. Never use it to browse any Web site except your bank. Never install any applications other than your financial application. You can do this by setting up a non-administrative user account and then setting parental controls to restrict what Web sites it can visit. Cheap computers are $200 (for a new PC) and $700 (for a new Mac mini) and this blocks the single most common method for bad guys to steal money from small businesses, which is compromising a machine and then stealing credentials via a software key logger. Currently, the biggest source of financial losses for small business is malicious software sniffing your online bank credentials, which are then used to transfer funds directly to money mules. This is a better investment than any antivirus program.
  4. Arrange with your bank to require in-person or phone confirmation for any transfers over a certain amount, and check your account daily. Yes, “react faster” applies here as well. The sooner you learn about an attempt to move money from your account, the more likely you’ll be able to stop it. Remember that business accounts do not have the same fraud protections as consumer accounts, and if someone transfers your money out because they broke into your online banking account, it is very unlikely you will ever recover the funds.
  5. Buy backup software that supports both local and remote backups, like CrashPlan. Backup locally to hard drives, and keep at least one backup for any major systems off-site but accessible. Then subscribe to the online backup service for any critical business files. Remember that online backups are slow and take a long time to restore, which is why you want something closer to home. Joe Kissell’s Take Control of Mac OS X Backups is a good resource for developing your backup strategy, even if you are on Windows 7 (which includes some built-in backup features). Hard drives aren’t designed to last more than a few years, and all sorts of mistakes can destroy your data.

Those are my top 5, but here are a few more:

  • Turn on the firewalls on all your computers. They can’t stop all attacks, but do reduce some risks, such as if another computer on the network (which might just mean in the same coffee shop) is compromised by bad guys, or someone connects an infected computer (like a personal laptop) to the network.
  • Have employees use non-administrator accounts (standard users) if at all possible. This also helps limit the chances of those computers being exploited, and if they are, it will limit the damage.
  • If you have shared computers, use non-administrator accounts and turn on parental controls to restrict what can be installed on them. If possible, don’t even let them browse the web or check email (this really depends on the kind of business you have… if employees complain, buy an iPad or spare computer that isn’t needed for business, and isn’t tied to any other computer). Most exploits today are through email, web browsing, and infected USB devices – this helps with all three.
  • Use an email service that filters spam and viruses before they actually reach your account.
  • If you accept payments/credit cards, use a service and make sure they can document that their setup is PCI compliant, that card numbers are encrypted, and that any remote access they use for support has a unique username and password that is changed every 90 days. Put those requirements into the contract. Failing to take these precautions makes a breach much more likely.
  • Install antivirus from a major vendor (if you are on Windows). There is a reason this is last on the list – you shouldn’t even think about this before doing everything else above.

—Rich

Monday, June 14, 2010

If You Had a 3G iPad Before June 9, Get a New SIM

By Rich

If you keep up with the security news at all, you know that on June 9th the email addresses and device ICC-IDs of at least 114,000 3G iPad subscribers were exposed.

Leaving aside any of the hype around disclosure, FBI investigations, and bad PR, here are the important bits:

  1. We don’t know if bad guys got their hands on this information, but it is safest to assume they did.
  2. For most of you, having your email address potentially exposed isn’t a big deal. It might be a problem for some of the famous and .gov types on the list.
  3. The ICC-ID is the unique code assigned to the SIM card. This isn’t necessarily tied to your phone number, but…
  4. It turns out there are trivial ways to convert the ICC-ID into the IMSI here in the US according to Chris Paget (someone who knows about these things).
  5. The IMSI is the main identifier your mobile operator uses to identify your phone, and is tied to your phone number.
  6. If you know an IMSI, and you are a hacker, it greatly aids everything from location tracking to call interception. This is a non-trivial problem, especially for anyone who might be a target of an experienced attacker… like all you .gov types.
  7. You don’t make phone calls on your iPad, but any other 3G data is potentially exposed, as is your location.
  8. Everything you need to know is in this presentation from the Source Boston conference by Nick DePetrillo and Don Bailey: http://www.sourceconference.com/bos10pubs/carmen.pdf

Realistically, very few iPad 3G owners will be subject to these kinds of attacks, even if bad guys accessed the information, but that doesn’t matter. Replacing the SIM card is an easy fix, and I suggest you call AT&T up and request a new one.

—Rich

Friday, June 11, 2010

Insider Threat Alive and Well

By Mike Rothman

Is it me or has the term “insider threat” disappeared from security marketing vernacular? Clearly insiders are still doing their thing. Check out a recent example of insider fraud at Bank of America. The perpetrator was a phone technical support rep, who would steal account records when someone called for help. Awesome.

Of course, the guy got caught. Evidently trying to sell private sensitive information to an undercover FBI agent is risky. It is good to see law enforcement getting ahead of some issues, but I suspect for every one of these happy endings (since no customers actually lost anything) there are hundreds who get away with it. It’s a good idea to closely monitor your personal banking and credit accounts, and make sure you have an identity theft response plan. Unfortunately it’s not if, but when it happens to you.

Let’s put our corporate security hats back on and remember the reality of our situation. Some attacks cannot be defended against – not proactively, anyway. This crime was committed by a trusted employee with access to sensitive customer data. BofA could not do business without giving folks access to sensitive data. So locking down the data isn’t an answer. It doesn’t seem he used a USB stick or any other technical device to exfiltrate the data, so there isn’t a specific technical control that would have made a difference.

No product can defend against an insider with access and a notepad. The good news is that insiders with notepads don’t scale very well, but that gets back to risk management and spending wisely to protect the most valuable assets from the most likely attack vectors. So even though the industry isn’t really talking about insider threats much anymore (we’ve moved on to more relevant topics like cloud security), fraud from insiders is still happening and always will. Always remember there is no 100% security, so revisit that incident response plan often.

—Mike Rothman

Friday Summary: June 11, 2010

By Adrian Lane

This Monday’s FireStarter prompted a few interesting behind-the-scenes conversations with a handful of security vendors centering on product strategy in the face of the recent acquisitions in Database Activity Monitoring. The questions were mostly around the state of the database activity monitoring market, where it is going, and how the technology complements and competes with other security technologies. But what I consider a common misconception came up in all of these exchanges, having to do with the motivation behind Oracle’s and IBM’s recent acquisitions. The basic premise went something like: “Of course IBM and Oracle made investments into DAM – they are database vendors. They needed this technology to secure databases and monitor transactions. Microsoft will be next to step up to the plate and acquire one of the remaining DAM vendors.”

Hold on. Not so fast!

Oracle did not make these investments simply as a database vendor looking to secure its database. IBM is a database vendor, but that is more coincidental to the Guardium acquisition than a direct driver for their investment. Security and compliance buyers are the target here. That is a different buying center than for database software, or just about any hardware or business software purchases.

I offered the following parallel to one vendor: if these acquisitions are the database equivalent of SIEM monitoring and auditing the network, then that logic implies we should expect Cisco and Juniper to buy SIEM vendors, but they haven’t. It’s more the operations and security management companies who make these investments. The customer of DAM technologies is the operations or security buyer. That’s not the same person who evaluates and purchases database and financial applications. And it’s certainly not a database admin! The DBA is only an evaluator of efficacy and ease of use during a proof of concept.

People think that Oracle and IBM, who made splashes with the Secerno and Guardium purchases, were the first big names in this market, but that is not the case. Database tools vendor Embarcadero and security vendor Symantec both launched and folded failed DAM products long ago. Netezza, a business intelligence and data warehousing firm, and Fortinet, which describes itself as a network security company, are also in this market. Quest (DB tools), McAfee (security), and EMC (data and data center management) have all kicked the tires at one time or another because their buyers have shown interest. None of these firms are database vendors, but their customers buy technologies to help reduce management costs, facilitate compliance, and secure infrastructure.

I believe the Guardium and Secerno purchases were made for operations and security management. It made sense for IBM and Oracle to invest, but not because of their database offerings. These investments were logical because of their other products, because of their views of their role in the data center, and thanks to their respective visions for operations management. Ultimately that’s why I think McAfee and EMC need to invest in this technology, and Microsoft doesn’t.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Usually when a comment starts with “This is a terrific idea …” it gets deleted as blog spam, but not this week, as the best comment goes to DMcElligott, in response to Rich’s Draft Data Security Survey for Review.

This is a terrific idea. I am very curious about the results you see from this.

My suggestions: In the regulation questions I would include some reference to the financial regulatory agencies like FINRA, SEC, NYSE, etc. to cover the banking and financial sector better.

I would also be curious about the level of implementation and the accuracy confidence. Where a data security implementation has been completed what level of confidence do you have in the results (maybe a 1-10 rating)? And are there any user interactions for any data? I assume the confidence level feeds the willingness to interact with an end user.

Best of luck with the survey.

—Adrian Lane

Thursday, June 10, 2010

Understanding and Selecting SIEM/LM: Reporting and Forensics

By Adrian Lane

Reporting and Forensics are the principal products of a SIEM system. We have pushed, prodded, and poked at the data to get it into a manageable format, so now we need to put it to use. Reports and forensic analysis are the features most users work with on a day to day basis. Collection, normalization, correlation and all the other things we do are just to get us to the point where we can conduct forensics and report on our findings. These features play a big part in customer satisfaction, so while we’ll dig in to describe how the technology works, we will also discuss what to look for when making buying decisions.

Reporting

For those of us who have been in the industry for a long time, the term ‘reporting’ brings back bad memories. It evokes hundreds of pages of printouts on tractor feed paper, with thousands of entries, each row looking exactly the same as the last. It brings to mind hours of scanning these lines, yellow highlighter in hand, marking unusual entries. It brings to mind the tailoring of reports to include new data, excluding unneeded columns, importing files into print services, and hoping nothing got messed up which might require restarting from the beginning.

Those days are fortunately long gone, as SIEM and Log Management have evolved their capabilities to automate a lot of this work, providing graphical representations that allow viewing data in novel ways. Reporting is a key capability because this process was just plain hard work. To evaluate reporting features included in SIEM/LM, we need to understand what it is, and the stages of a reporting process. You will notice from the description above that there are several different steps to the production of reports, and depending on your role, you may see reporting as basically one of these subtasks. The term ‘reporting’ is a colloquialism used to encompass a group of activities: selecting, formatting, moving, and reviewing data are all parts of the reporting process.

So what is reporting? At its simplest, reporting is just selecting a subset of the data we previously captured for review, focused analysis, or a permanent record (‘artifact’) of activity. Its primary use is to put data into an understandable form, so we can analyze activity and substantiate controls without having to comb through lots of irrelevant stuff. The report comprises the simplified view needed to facilitate review or, as we will discuss later, forensic analysis. We also should not be constrained by the traditional definition of a report, which is a stack of papers (or in modern days a PDF). Our definition of reporting can embrace views within an interface that facilitate analysis and investigation.

The second common use is to capture and record events that demonstrate completion of an assigned task. These reports are historic records kept for verification. Trouble-ticket work orders and regulatory reports are common examples, where a report is created and ‘signed’ by both the producer of the report and an auditor. These snapshots of events may be kept within, or stored separately from, the SIEM/LM system.

There are a couple of basic aspects to reporting that we want to pay close attention to when evaluating SIEM/LM reporting capabilities:

  1. What reports are included with the standard product?
  2. How easy is it to manage and automate reports?
  3. How easy is it to create new, ad-hoc reports?
  4. What export and integration options are available?

For many standard tasks and compliance needs, pre-built reports are provided by the vendor to lower costs and speed up product deployment. At minimum, vendors provide canned reports for PCI, Sarbanes-Oxley, and HIPAA. We know that compliance is the reason many of you are reading this series, and will be the reason you invest in SIEM. Reports embody the tangible benefit to auditors, operations, and security staff. Just keep in mind that 2000 built-in reports is not necessarily better than 100, despite vendor claims. Most end users typically use 10-15 reports on an ongoing basis, and those must be automated and customized to the user’s requirements.

Most end users want to feel unique, so they like to customize the reports – even if the built-in reports are fine. But there is a real need for ad-hoc reports in forensic analysis and implementation of new rules. Most policies take time to refine, to be sure that we collect only the data we need, and that what we collect is complete and accurate. So the reporting engine needs to make this process easy, or the user experience suffers dramatically.

Finally, the data within the reports is often shared across different audiences and applications. The ability to export raw data for use with third-party reporting and analysis tools is important, and demands careful consideration during selection.
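
For illustration, here is a minimal sketch of that select-summarize-export step, using an invented event schema and an in-memory database. A real SIEM/LM product exposes this through its own interfaces rather than raw SQL, so treat the table and column names as assumptions.

```python
# Hypothetical sketch of the "select, summarize, export" work a SIEM/LM
# reporting engine automates. The schema and sample data are invented.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    ts TEXT, source TEXT, user TEXT, action TEXT, outcome TEXT)""")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?, ?)",
    [
        ("2010-06-10 01:12:00", "fw01", "mrothman", "login", "failure"),
        ("2010-06-10 01:13:00", "fw01", "mrothman", "login", "failure"),
        ("2010-06-10 09:02:00", "vpn01", "alane", "login", "success"),
    ],
)

# Summarize failed logins per user -- the sort of canned report an
# auditor might review daily.
rows = conn.execute(
    """SELECT user, COUNT(*) AS failures
       FROM events
       WHERE action = 'login' AND outcome = 'failure'
       GROUP BY user
       ORDER BY failures DESC"""
).fetchall()

# Export the result so it can be pulled into third-party analysis tools.
with open("failed_logins.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user", "failed_logins"])
    writer.writerows(rows)
```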

People say end users buy the interface and reports, and that is true for the most part. We call that broad idea _user experience_, and although many security professionals minimize the focus on reporting during the evaluation process, that can be a critical mistake. Reports are how you will show value from the SIEM/LM platform, so make sure the engine can support the information you need to show.

Forensics

It was just this past January that I read an “analyst” report on SIEM, where the author felt forensic analysis was policy driven. The report claimed that you could automate forensic analysis and do away with costly forensic investigations. Yes, you could have critical data at your fingertips by setting up policies in advance! I nearly snorted beer out my nose! Believe me: if forensic analysis was that freaking easy, we would detect events in real time and stop them from happening! If we know in advance what to look for, there is no reason to wait until afterwards to perform the analysis – instead we would alert on it. And this is really the difference between alerting on data and forensic analysis of the same data. We need to correlate data from multiple sources and have a real live human being make a judgement call. Let’s be clear: these pseudo-analyst claims and vendor promotional fluff (you know who they are) are complete BS, and do a disservice to end users by creating absurd expectations.

Now that I’m off the soapbox, let’s take a step back. Forensic analysis is conducted by trained security and network analysts to investigate an event, or more likely a sequence of events, indicating fraud or misuse. An analyst may have an idea what to look for in advance, but more often you don’t actually know what you are looking for, and you need to navigate through thousands of events to piece together what happened and understand the breadth of the damage. This involves rewriting queries over and over to drill down and look at data, using different methods of graphing and visualization before finding the proverbial needle in the haystack.

The use cases for forensic analysis are numerous, including examination of past events and data to determine what happened in your network, OS, or application. This may be to verify something that was supposed to happen actually occurred, or to better understand whether strange activity was fraud or misuse. You might need forensic analysis for simple health checks on equipment and business operations. You may need it to scan user activity to support disciplinary actions against employees. You might even need to provide data to law enforcement to pursue criminal data breaches.

Unlike correlation and alerting, where we have automated analysis of events, forensic analysis is largely manual. Fortunately we can leverage collection, normalization, and correlation – much of the data has already been collected, aggregated, and indexed within the SIEM/LM platform.

A forensic analysis usually starts with data provided by a report, an alert, or a query against the SIEM/LM repository. We start with an idea of whether we are interested in specific application traffic, strange behavior from a host, or pretty much an infinite number of things that could be suspicious. We select data with the attributes we are interested in, gathering information we need to analyze events and validate whether the initial suspicious activity is much ado about nothing, or indicates a major issue.

These queries may be as simple as “Show all failed logins for user ‘mrothman’”, or as specific as “Show events from all firewalls, between 1 and 4 am, that involved this list of users”. It is increasingly common to examine application-layer or database activity to provide context for business transactions – for example, “list all changes to the general ledger table where the user was not ‘GA_Admin’ or the application was not ‘GA_Registered_App’”.
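
To make those examples concrete, here is a minimal sketch of that kind of drill-down against a hypothetical normalized event table. The schema, device names, and user list are invented for illustration, not any vendor’s actual data model.

```python
# Minimal sketch of drill-down forensic queries against a hypothetical
# normalized event store. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    ts TEXT, device TEXT, device_type TEXT, user TEXT,
    action TEXT, outcome TEXT)""")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("2010-06-10 01:30:00", "fw-edge-1", "firewall", "mrothman", "login", "failure"),
        ("2010-06-10 02:10:00", "fw-edge-2", "firewall", "alane", "conn_allow", "success"),
        ("2010-06-10 11:00:00", "fw-edge-1", "firewall", "rich", "conn_deny", "failure"),
    ],
)

# "Show all failed logins for user 'mrothman'"
failed = conn.execute(
    "SELECT ts, device FROM events "
    "WHERE user = ? AND action = 'login' AND outcome = 'failure'",
    ("mrothman",),
).fetchall()

# "Show events from all firewalls, between 1 and 4 am, for these users"
# (hours 01-03 approximate the 1-4 am window)
watch_list = ("mrothman", "alane")
overnight = conn.execute(
    "SELECT ts, device, user, action FROM events "
    "WHERE device_type = 'firewall' "
    "AND strftime('%H', ts) BETWEEN '01' AND '03' "
    "AND user IN (?, ?)",
    watch_list,
).fetchall()

print(failed)
print(overnight)
```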

There are a couple important capabilities we need to effectively perform forensic analysis:

  1. Custom queries and views of data in the repository
  2. Access to correlated and normalized data
  3. Drill-down to view non-normalized or supplementary data
  4. Ability to reference and access older data
  5. Speed, since forensics is usually a race against time (and attackers)

Basically the most important capability is to enable a skilled analyst to follow their instincts. Forensics is all about making their job easier by facilitating access, correlation, and viewing of data. They may start with a set of anomalous communications between two devices, but end up looking at application logs and database transactions to prove a significant data breach. If queries take too long, data is manipulated, or data is not collected, the investigator’s ability to do his/her job is hindered. So the main role of SIEM/LM in forensics is to streamline the process.

To be clear, the tool only makes the process faster and more accurate. Without a strong incident response process, no tool can solve the problem. Although we all get very impressed by a zillion built-in reports and cool drill-down investigations during a vendor demo, don’t miss the forest for the trees. SIEM/Log Management platforms can only streamline a process that already exists. And if the process is bad, you’ll just execute on that bad process faster.

—Adrian Lane

Wednesday, June 09, 2010

Incite 6/9/2010: Creating Excitement

By Mike Rothman

Some businesses are great at creating excitement. Take Apple, for instance. They create demand for their new (and upgraded) products, which creates a feeding frenzy when the public can finally buy the newest shiny object. 2 million iPads in 60 days is astounding. I suspect they’ll move a bunch of iPhone 4 units on June 24 as well (I know I’ll be upgrading mine and the Boss’). They’ve created a cult around their products, and it generates unbelievable excitement whenever there is a new toy to try.

Now that is some fireworks... Last week I was in the Apple store dropping my trusty MacBook Pro off for service. The place was buzzing, and the rest of the mall was pretty much dead. This was 3 PM on a Thursday, but you’d think it was Christmas Eve from looking at the faces of the folks in the store. Everything about the Apple consumer experience is exciting. You may not like them, you may call me a fanboy, but in the end you can’t argue with the results. Excitement sells.

If you have kids, you know all about how Disney creates the same feeling of excitement. Whether it’s seeing a new movie or going to the theme parks, this is another company that does it right. We recently took the kids down to Disneyworld, and it sure didn’t seem like the economy was crap inside the park. Each day it was packed and everyone was enjoying the happiest place on Earth, including my family. One night we stayed at a Disney property. It’s not enough to send a packet of information and confirmations a few months ahead of the trip. By the time you are ready to go, the excitement has faded. So Disney sends an email reminding you of the great time you are about to have a few days before you check in. They give you lots of details about your resort, with fancy pictures of people having a great time. The message is that you will be those people in a few days. All your problems will be gone, because you are praying in the House of the Mouse. Brilliant.

I do a lot of business travel and I can tell you I’m not excited when I get to Topeka at 1am after being delayed for 3 hours at O’Hare. No one is. But it’s not like any of the business-oriented hotels do anything to engage their customers. I’m lucky if I get a snarl from the front desk attendant as I’m assigned some room near the elevator overlooking the sewage treatment facility next door. It’s a friggin’ bed and a place to shower. That’s it.

It just seems to me these big ‘hospitality’ companies could do better. They can do more to engage their customers. They can do more to create a memorable experience. I expect so little that anything they do is upside. I believe most business travelers are like me. So whatever business you are in, think about how you can surprise your customers in a positive fashion (yes, those pesky users who keep screwing everything up are your customers) and create excitement about what you are doing.

I know, we do security. It’s not very exciting when it’s going well. But wouldn’t it be great if a user was actually happy to see you, instead of thinking, “Oh, crap, here comes Dr. No again, to tell me not to surf pr0n on the corporate network.”? Think about it. And expect more from yourself and everyone else you do business with.

– Mike.

Photo credits: “Magic Music Mayhem 3 (Explored)” originally uploaded by Express Monorail


Incite 4 U

  1. Microsoft cannot fix stupid – The sage Rob Graham is at it again, weighing in on Google’s alleged dictum to eradicate Microsoft’s OS from all their desktops, because it’s too hard to secure. Rob makes a number of good points in the post, relative to how much Microsoft invests in security and the reality that Windows 7 and IE 8 are the most secure offerings out there. But ultimately it doesn’t matter because it’s human error that is responsible for most of the successful attacks. And if we block one path the attackers find another – they are good that way. So what to do? Do what we’ve always done. Try to eliminate the low hanging fruit that makes the bad guy’s job too easy, and make sure you have a good containment and response strategy for when something bad does happen. And it will, whatever OS you use. – MR

  2. Fight the good fight – Apparently “Symantec believes security firms should eradicate ‘false positives’ ”. I imagine that this would be pretty high on their list. Somewhere between “Rid the world of computer viruses” and “Wipe out all spam”. And I love their idea of monitoring social network sites such as Facebook and online fora to identify false positives, working tirelessly to eliminate the threat of, what was it again? Yeah, misdiagnosis. In fact, I want to help Symantec. I filled out my job application today because I want that job. Believe me, I could hunt Facebook, Twitter, and YouTube all day, looking for those false positives and misdiagnosis thingies. Well, until the spam bots flood these sites with false reports of false positives. Then I’d have to bring the fight to the sports page for false positive detection, or maybe check out those critical celebrity false positives. It sounds like tough work, but hey, it’s a noble cause. Keep up the good fight, guys! – AL

  3. Good intentions – I always struggle with “policy drift”: the tendency to start from a compliant state but lose that over time due to distractions, pressure, and complacency. For example, I’m pretty bad at keeping my info in our CRM tool up to date. That’s okay, because so are Mike and Adrian. As Mathias Thurman writes over at Computerworld, this can be a killer for something crucial like patch management. Mathias describes his difficulties in keeping systems up to date, especially those pesky virtual machines. The policies are there, everyone even started from a known good state, but the practical realities of running a day to day IT shop and *gasp* testing those patches throw a monkey wrench into the system. – RM

  4. Logging as infrastructure… – As Adrian and I continue plowing through the Understanding and Selecting a SIEM/Log Management series, one of the things we may not have explicitly mentioned was that data collection is really an infrastructure function, with applications running on top of it to address specific use cases. Seems everyone is still hung up on the category names, but Sam Curry on RSA’s blog gets it right. Every user (not just large enterprises) should be figuring out how to leverage the data they are collecting. Whether it’s for security, efficiency, or compliance reporting, things like forensics and correlation can be useful to pretty much any practitioner. Of course, that doesn’t make them any easier to do, but the first step on that path is to consider data collection an infrastructure function, not just a hermetically sealed security problem solved with an isolated security product. – MR

  5. Must read from Ivan – I’m skipping the usual pithy title and intro to simply point you to Ivan Arce’s response to Michal Zalewski’s recent post on software security. Ivan is flat out one of the best security writers and thinkers out there. In this post Ivan lays out a compelling review of the pitfalls of formal models in secure software engineering, but it applies equally well to general security defenses. The key line, and a major theme in one of my current presentations, is, “Michal’s first argument simply points out that devising mathematical-logical formal models to define and implement security usually goes awry in the presence of real world economic actors, and that the information security discipline would benefit more from adopting knowledge, practices and experience from other fields such as sociology and economics, rather than seeking purely technical solutions. I agree.” I prefer cognitive science to sociology since it’s a bit of a harder science, but everything in our industry is driven by how people act, and the economics that influence their behavior. – RM

  6. New Math – Does piracy occur? Yep. Does it have economic impact? Absolutely. But you have to ask yourself why someone would conduct a study like this: Piracy Cost Game Industry $41.5 Billion. Forget for a moment that the students conducting this survey failed their courses in logic, statistics, and finance, and focus on the question of why this survey was commissioned. Is it about piracy and theft? Was it so game companies know whether they need to adjust their business and pricing models to combat the problem? Is it to gauge whether they should change their protection model? The answer is “D”, none of the above. This is paid PR to influence legislators into thinking that they are going to make billions in extra tax revenue if they can legislate this bad behavior. Dangle that carrot in front of politicians so they will do your bidding. An adjustment to the law will hopefully coax some extra revenue out of a handful of thieves (er, customers) without cost to the company. All without having to change their technology, pricing, or behavior. So the politicians don’t generate 1/1000th of what they were promised because the survey is based on totally bogus numbers, but they do get to pass a law, making it a total win/win! And when said billions in revenue fails to materialize, you can blame the government! Now, where is my trillion dollars? I have a budget deficit to erase! – AL

  7. Binary as a second language – I’m sure a lot of folks working in an HP data center are feeling distinctly uncomfortable now. In fact, 9,000 of them will get a lot more uncomfortable as they are replaced with some kind of automation as HP makes a $1b investment to fully automate their data centers. It raises the question of your own value to your organization. Can you be automated? Replaced by a machine? We’d like to think not, but 9,000 folks will soon realize their assumptions were wrong. So always keep in mind that value is proven every day. The other aspect to the story is that HP is adding 6,000 sales and service reps. So it’s that time again to revisit your choice of career and make sure you are on the right path. Many data center ops folks are doing many other things. Like buying a Subway franchise. Kidding aside, HP is on the cutting edge, but the trend toward replacing ops folks isn’t going to go away. It may be time to start thinking about Plan B. – MR

—Mike Rothman

Tuesday, June 08, 2010

DB Quant: Secure Metrics, Part 4, Shield

By Adrian Lane

This portion of the Secure phase is to ‘shield’ databases from threats such as SQL injection, buffer overflows, and other common attacks. The idea is that patching will only address known weaknesses, and only after you apply the patch, but some products detect and block activity that looks unusual, providing a temporary reprieve to buy you time to patch. What we are advocating in this step is not what you will find in recommended best practices from your database vendor, but is increasingly common as an aid for database security. Shielding can be as simple as re-mapping port numbers to avoid automated probing, or as complex as virtual patching via a web application firewall or activity monitoring platform. It may include changes to the database, such as stored procedures to perform input validation, or might involve changes to the calling application. In any of these cases, the analysis of threats and countermeasures is part of the database security process and must be accounted for in your cost estimates.
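
As a small illustration of the application-side option, here is a minimal sketch of input validation plus bind variables in the calling application. The schema is invented and sqlite3 stands in for whatever database library the application actually uses.

```python
# Hypothetical sketch of shielding at the calling application: validate
# input against an expected format and use bind variables so hostile
# strings are treated as data, not SQL.
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'acme')")

ACCOUNT_ID = re.compile(r"^\d{1,9}$")  # whitelist the expected format

def lookup_account(raw_id: str):
    # Reject anything that is not a plausible account number up front.
    if not ACCOUNT_ID.match(raw_id):
        raise ValueError("invalid account id")
    # Bind variables keep "1 OR 1=1" style payloads out of the SQL parser.
    return conn.execute(
        "SELECT id, owner FROM accounts WHERE id = ?", (int(raw_id),)
    ).fetchone()

print(lookup_account("1"))      # -> (1, 'acme')
# lookup_account("1 OR 1=1")    # raises ValueError instead of leaking rows
```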

We define our Shield process as:

  1. Identify Threats
  2. Specify Countermeasures
  3. Implement
  4. Document

Identify Threats

  • Time to identify databases at risk (e.g., those vulnerable to a known attack or containing particularly sensitive information)
  • Time to review ingress/egress points and network protocols (the routes via which the database is accessible, including through applications)
  • Time to identify threats and exploitable trust relationships (e.g., SQL injection, unpatched vulnerabilities, application hijacking, etc.)

Specify Countermeasures

  • Time to identify countermeasures (i.e., what countermeasures are available, including external options or internal changes, and how they should be configured/implemented)
  • Time to develop regression test cases (shielding affects database operations and must be tested)

Implement

  • Time to adjust database configuration or functions (e.g., new triggers/stored procedures)
  • Time to adjust firewall/IPS/WAF rules
  • Time to install new security controls (e.g., WAF, VPN, etc.)
  • Time to verify changes via regression tests

Document

  • Time to document changes (firewall rules, etc.)
  • Time to record code changes in source control system

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization.
  27. DB Quant: Secure Metrics, Part 1, Patch.
  28. DB Quant: Secure Metrics, Part 2, Configure.
  29. DB Quant: Secure Metrics, Part 3, Restrict Access.

—Adrian Lane

Monday, June 07, 2010

DB Quant: Secure Metrics, Part 3, Restrict Access

By Adrian Lane

This portion of the Secure phase is reconfiguration of access control and authorization settings. Its conceptual simplicity belies the hard work involved, as it is one of the most tedious and time-consuming of all database security tasks. Merely reviewing the permissions assigned to groups and roles is hard enough, but verifying that just the right users are assigned to each and every role and group can take days or even weeks. Additionally, many DBAs do not fully appreciate the seriousness of misconfigured database authentication: subtle errors can serve as a wide-open avenue for hackers to assume DBA credentials – tricking the database into trusting them.

Automation is extremely useful in the discovery and analysis process, but when it comes down to it, a great deal of manual analysis and verification is required to complete these tasks.
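
As an example of where that automation helps, here is a minimal sketch that flags dormant accounts and unapproved privileged roles. The account data, role names, and thresholds are invented; in practice the rows would come from your DBMS catalog or audit views.

```python
# Hypothetical sketch of automating part of the access review: flag
# accounts that have not logged in recently or that hold a privileged
# role outside the approved list.
from datetime import date

APPROVED_DBAS = {"dba_smith"}
DORMANT_DAYS = 90
TODAY = date(2010, 6, 7)

accounts = [
    {"user": "dba_smith", "roles": {"DBA"},    "last_login": date(2010, 6, 1)},
    {"user": "app_svc",   "roles": {"APP_RW"}, "last_login": date(2010, 5, 20)},
    {"user": "jdoe",      "roles": {"DBA"},    "last_login": date(2010, 1, 3)},
]

for acct in accounts:
    findings = []
    if (TODAY - acct["last_login"]).days > DORMANT_DAYS:
        findings.append("dormant account")
    if "DBA" in acct["roles"] and acct["user"] not in APPROVED_DBAS:
        findings.append("unapproved DBA role")
    if findings:
        print(acct["user"], "->", ", ".join(findings))
```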

Our process is:

  1. Review Access/Authentication
  2. Determine Changes
  3. Implement
  4. Document

Review Access/Authentication

  • Time to review users and access control settings (may have been completed in the review phase)
  • Time to identify authentication method
  • Time to compare authentication method with policy (e.g., domain, database, mixed mode, etc.)

Determine Changes

  • Time to identify user permission changes
  • Time to identify group and role membership adjustments
  • Time to identify changes to password policy settings
  • Time to identify dormant or obsolete accounts

Implement

  • Time to alter authentication settings/methods
  • Time to reconfigure and remove user accounts
  • Time to implement new groups and roles, and adjust memberships
  • Time to reconfigure service accounts (e.g., generic application and DBA accounts)

Document

  • Time to document changes
  • Time to document accepted configuration variances

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization.
  27. DB Quant: Secure Metrics, Part 1, Patch.
  28. DB Quant: Secure Metrics, Part 2, Configure.

—Adrian Lane

Draft Data Security Survey for Review

By Rich

Hey everyone,

As mentioned the other day, I’m currently putting together a big data security survey to better understand what data security technologies you are using, and how effective they are.

I’ve gotten some excellent feedback in the comments (and a couple of emails), and have put together a draft survey for final review before we roll this out. A couple things to keep in mind if you have the time to take a look:

  • I plan on trimming this down more, but I wanted to err on the side of including too many questions/options rather than too little. I could really use help figuring out what to cut.
  • Everyone who contributes will be credited in the final report.
  • After a brief bit of exclusivity (45 days) for our sponsor, all the anonymized raw data will be released to the community so you can perform your own analysis. This will be in spreadsheet format, just the same as I get it from SurveyMonkey.

The draft survey is up at SurveyMonkey for review, because it is a bit too hard to replicate here on the site.

To be honest, I almost feel like I’m cheating when I develop these on the site with all the public review, since the end result is way better than what I would have come up with on my own. Hopefully giving back the raw data is enough to compensate all of you for the effort.

—Rich

DB Quant: Secure Metrics, Part 2, Configure

By Adrian Lane

The next step in our Secure phase is to securely configure the database, as well as make any needed changes to the underlying operating system. Out of the box, databases are highly insecure, requiring significant tweaking; in practice, checking and adjusting configurations is an ongoing issue. Patches, new features, new attacks, and new functions all drive the need for periodic checks, so you should be rerunning the assessment and configuration processes at least quarterly.

The majority of the costs will be time to identify issues and appropriate settings to address. Once again, your vendor may offer tools to support configuration changes and administration, which are included as a capital investment, because they are used for general administration and support.
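
Here is a minimal sketch of the core check: compare current settings against a baseline policy and report the variances. The parameter names and values are invented for illustration, not a vendor benchmark.

```python
# Hypothetical sketch of a configuration check: compare settings pulled
# from a database against a baseline policy and list the variances.
baseline = {
    "remote_os_authent": "FALSE",
    "audit_trail": "DB",
    "password_lock_time": "1",
}

current = {
    "remote_os_authent": "TRUE",   # violates policy
    "audit_trail": "DB",
    "password_lock_time": "1",
}

variances = {
    name: (current.get(name), expected)
    for name, expected in baseline.items()
    if current.get(name) != expected
}

for name, (actual, expected) in variances.items():
    print(f"{name}: found {actual!r}, policy requires {expected!r}")
```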

Our process is:

  1. Assess
  2. Prescribe
  3. Fix
  4. Rescan
  5. Document

Remember that this phase relies on the configuration assessment results from the Discovery phase, which is why we don’t include the full assessment here. For some of you, it may make sense to mix and match the process a little to better match how you actually work.

Assess

  • Time to review assessment reports per database (e.g., assessment scans from the Discovery phase)
  • Time to identify policy/standards violations and incorrect settings

Prescribe

  • Time to itemize issues (for tracking/change management purposes)
  • Time to select remediation option
  • Time to allocate resources, create work order, and create change script as needed

Fix

  • Time to reconfigure database or OS
  • Time to implement changes and reboot (if necessary)
  • Time to test change and dependent applications/systems (confirm the expected behavior of the change and its effect on other applications/systems relying on the database)

Rescan (optional)

  • Time to re-assess database configuration (rerun the scan portion of the Assessment phase to verify changes were implemented)

Document

  • Time to document changes
  • Time to document accepted configuration variances
  • Time to specify changes to configuration policies or rules (this is the appropriate time to note required changes to policy)

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization.
  27. DB Quant: Secure Metrics, Part 1, Patch.

—Adrian Lane

FireStarter: Get Ready for Oracle’s New WAF

By Adrian Lane

We have written a lot about Oracle’s acquisition of Secerno: the key points of the acquisition, the Secerno technology, and some of the business benefits Oracle gets with the Secerno purchase. We did so mainly because Database Activity Monitoring (DAM) is a technology that Rich and I are intimately familiar with, and this acquisition shakes up the entire market. But we suspect there is more. Rich and I have a feeling that this purchase signals Oracle’s mid-term security strategy, with the Secerno platform as a key component. We don’t have any inside knowledge, but there are too many signals to ignore, so we are making a prediction, and our analysis goes something like this:

Quick recap: Oracle acquired a Database Activity Monitoring vendor, and immediately marketed the product as a database firewall, rather than a Database Activity Monitoring product. What Oracle can do with this technology, in the short term, is:

  1. “White list” database queries.
  2. Provide “virtual patching” of the Oracle database.
  3. Monitor activity across most major relational database types.
  4. Tune policies based on monitored traffic.
  5. Block unwanted activity.
  6. Offer a method of analysis with few false positives.

Does any of this sound familiar?

What if I changed the phrase “white list queries” to “white list applications”? If I changed “Oracle database” to “Oracle applications”? What if I changed “block database threats” to “block application threats”?

Does this sound like a Web Application Firewall (WAF) to you?

Place Secerno in front of an application, add some capabilities to examine web app traffic, and it would not take much to create a Web Application Firewall to complement the “database firewall”. They can tackle SQL injection now, and provide very rudimentary IDS. It would be trivial for Oracle to add application white listing, HTML inspection, and XML/SOAP validation. Down the road you could throw in basic XSS protections and call it a WAF. Secerno DAM, plus WAF, plus the assessment capabilities already built into Oracle Management Packs, gives you a poor man’s version of Imperva.
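
To illustrate the general “white list queries” technique (and only the general technique; we are not claiming this is how Secerno implements it), here is a minimal sketch that fingerprints SQL statements and allows only shapes seen during a learning period:

```python
# Toy illustration of query whitelisting: normalize statements by
# replacing literals with placeholders, then allow only fingerprints
# observed during a learning period.
import re

def fingerprint(sql: str) -> str:
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals -> ?
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals -> ?
    s = re.sub(r"\s+", " ", s)       # collapse whitespace
    return s

# Fingerprints learned while monitoring normal application traffic.
allowed = {
    fingerprint("SELECT * FROM orders WHERE customer_id = 42"),
}

def permit(sql: str) -> bool:
    return fingerprint(sql) in allowed

print(permit("SELECT * FROM orders WHERE customer_id = 7"))          # True
print(permit("SELECT * FROM orders WHERE customer_id = 7 OR 1=1"))   # False: new shape
```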

Dude, you’re getting a WAF!

We won’t see much for a while yet, but when we do, it will likely begin with Oracle selling pre-tuned versions of Secerno for Oracle Applications. After a while we will see a couple new analysis options, and shortly thereafter we will be told this is not WAF, it’s better than WAF. How could these other vendors possibly know the applications as well as Oracle? How could they possibly protect them as accurately or efficiently? These WAF vendors don’t have access to the Oracle applications code, so how could they possibly deliver something as effective? We are not trying to be negative here, but we all know how Oracle markets, especially in security:

  1. Oracle is secure – you don’t need X. All vendors of X are irresponsible and beneath consideration.
  2. Oracle has purchased vendor Y in market X because Oracle cares about the security of its customers.
  3. Oracle is the leading provider of X.
  4. Buying anything other than Oracle’s X is irresponsible because other vendors use undocumented APIs and/or inferior techniques.
  5. Product X is now part of the new Oracle Suite and costs 50% more than before, but includes 100% more stuff that you don’t really need but we couldn’t sell stand-alone.

OK, so we went negative. Send your hate mail to Rich. I’ll field the hate mail from the technologists out there who are screaming mad, knowing that there is a big difference between WAF policies and traffic analysis and what Secerno does. Yes and no, but it’s irrelevant from a marketing standpoint. For those who remember Dell’s “Dude” commercials from the early 2000s, they made buying a computer easy and approachable. Oracle will do the same thing with security, making the choice simple to understand, and covering all their Oracle assets. They’d be crazy not to. Market this as a full-featured WAF, blocking malicious threats with “zero false positives”, for everything from Siebel to 11G. True or not, that’s a powerful story, and it comes from the vendor who sold you half the stuff in your data center. It will win the hearts of the security “Check the box” crowd in the short term, and may win the minds of security professionals in the long term.

Do you see it? Does it make sense? Tell me I am wrong!

—Adrian Lane

Friday, June 04, 2010

Friday Summary: June 4, 2010

By Rich

There’s nothing like a crisis to bring out the absolute stupidity in a person… especially if said individual works for a big company or government agency. This week alone we’ve had everything from the ongoing BP disaster (the one that really scares me) to the Israeli meltdown. And I’m sure Sarah Palin is in the mix there someplace.

Crisis communications is an actual field of study, with many examples of how to manage your public image even in the midst of a major meltdown. Heck, I’ve been trained on it as part of my disaster response work. But it seems that everyone from BP to Gizmodo to Facebook is reading the same (wrong) book:

  • Deny that there’s a problem.
  • When the first pictures and videos show up, state that there was a minor incident and you value your customers/the environment/the law/supporters/babies.
  • Quietly go to full lockdown and try to get government/law enforcement to keep people from finding out more.
  • When your lockdown attempts fail, go public and deny there was ever a coverup.
  • When pictures/video/news reports show everyone that this is a big fracking disaster, state that although the incident is larger than originally believed, everything is under control.
  • Launch an advertising campaign with a lot of flowers, babies, old people, and kittens. And maybe some old black and white pictures with farms, garages, or ancestors who would be the first to string you up for those immoral acts.
  • Get caught on tape or in an email/text blaming the kittens.
  • Try to cover up all the documentation of failed audits and/or lies about security and/or safety controls.
  • State that you are in full compliance with the law and take safety/security/fidelity/privacy/kittens very seriously.
  • As the incident blows completely out of control, reassure people that you are fully in control.
  • Get caught saying in private that you don’t understand what the big deal is. It isn’t as if people really need kittens.
  • Blame the opposing party/environmentalists/puppies/your business partners.
  • Lie about a bunch of crap that is really easy to catch. Deny lying, and ignore those pesky videos showing you are (still) lying.
  • State that your statements were taken out of context.
  • When asked about the context, lie.
  • Apologize. Say it will never happen again, and that you would take full responsibility, except your lawyers told you not to.
  • Repeat.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts


Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Michael O’Keefe, in response to Code Re-engineering.

Re-engineering can work, Spolsky inadvertently provides a great example of that, and proves himself wrong. I guess that’s the downside to blogs, and trying to paint things in a black or white manner. He had some good points, one was that when Netscape open sourced the code, it wasn’t working, so the project got off to a slow start. But the success of Mozilla (complete rewrite of Netscape) has since proved him wrong. Once Bill Gates realized the importance of the internet, and licensed the code from Spyglass (I think) for IE, MS started including it on every new release of Windows. In this typical fashion, they slowly whittled away at Netscape’s market share, so Netscape had to innovate. The existing code base was very difficult to work with, even the Netscape engineers admitted that. But when you’re trying to gain market share, speed counts, look at Facebook, and eBay. But eventually you have to make a change, if the code is holding you back. Look at how long it took IE to come out with tabs – ridiculous. And look at Apple’s ability to move to a BSD/Mach/Next (?) kernel. But the best example is still – Mozilla’s Firefox, still ahead of IE, in my opinion.

—Rich

Thursday, June 03, 2010

The Public/Private Pendulum Keeps Swinging

By Mike Rothman

They say the grass is always greener on the other side, and I guess for some folks it is. Most private companies (those which believe they have sustainable businesses, anyway) long for the day when they will be able to trade on the public markets. They know where the Ferrari dealer is, and seem to dismiss the angst of Sarbanes-Oxley. On the other hand, most public companies would love the freedom of not having to deal with the quarterly spin cycle and those pesky shareholders who want growth now.

Two examples in the security space show the pendulum in action this week. First is Tripwire’s IPO filing. I love S-1 filings because companies must bare their innards to sell shares to public investors. You get to see all sorts of good stuff, like the fact that Tripwire has grown their business 20-30% annually over the past few years. They’ve been cash flow positive for 6 years, and profitable for the last two (2008 & 2009), although they did show a small loss for Q1 2010.

Given the very small number of security IPOs over the past few years, it’s nice to see a company with the right financial momentum to get an IPO done. But as everyone who’s worked for a public company knows, it’s really about growth – profitable growth. Does 20-30% growth on a fairly small revenue base ($74 million in 2009) make for a compelling growth story?

And more importantly for company analysis, what is the catalyst to increase that growth rate? In the S-1, Tripwire talks about expanding product offerings, growing their customer base, selling more stuff to existing customers, international growth, government growth, and selective M&A as drivers to increase the top line. Ho-hum. From my standpoint, I don’t see anything that gets the company from 20% growth to 50% growth. But that’s just me, and I’m not a stock analyst.

Being publicly listed will enable Tripwire to do deals. They did a small deal last year to acquire SIEM/Log Management technology, but in order to grow faster they need to make some bolder acquisitions. That’s been an issue with the other public security companies that are not Symantec and McAfee – they don’t do enough deals to goose growth and make the stock interesting. With Tripwire’s 5,400 customers, you’d figure they’ll make M&A and pumping more stuff into their existing base key priorities once they get the IPO done.

On the other side of the fence, you have SonicWall, which is being taken private by Thoma Bravo Group and a Canadian pension fund. The price is $717 million, about a 28% premium. SonicWall has been public for a long time and has struggled of late. Momentum seems to be returning, but it’s not going to be a high flyer any time soon. So the idea of becoming private, where they only have to answer to their equity holders, is probably attractive.

This is more important in light of SonicWall’s new push into the enterprise. They are putting a good deal of wood behind this Project SuperMassive technology architecture, but breaking into the enterprise isn’t a one-quarter project. It requires continual investment, and public company shareholders are notoriously impatient. SonicWall was subject to all sorts of acquisition rumors before this deal, so it wouldn’t be surprising to see Thoma Bravo start folding other security assets in with SonicWall to make a subsequent public offering, a few years down the line, more exciting.

So the pendulum swings back and forth again. You don’t have to be Carnac the Magnificent to figure there will be more deals, with the big getting bigger via consolidation and technology acquisitions. You’ll also likely see some of the smaller public companies take the path of SafeNet, WatchGuard, Entrust, Aladdin, and now SonicWall, in being taken private. The only thing you won’t see is nothing. The investment bankers have to keep busy, don’t they?

—Mike Rothman

DB Quant: Secure Metrics, Part 1, Patch

By Adrian Lane

Now we move past planning & discovery, and into the actual work of securing databases. The Secure phase is where we implement many of the preventative security measures and establish the secure baseline for database operations. First up is database patching.

For patching most of the costs are time and effort to evaluate, test, and apply a patch. Fixed costs are mostly support and maintenance contracts with the database vendor, if applicable (very few patch management products work with databases, so you are usually limited to the DBMS vendor’s tools). Your vendor may offer tools to support patch rollout and administration, which are included as a capital investment cost.
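
As an illustration of the Evaluate step, here is a minimal sketch that compares each database’s reported patch level against current advisories to decide what needs attention. The versions and advisory data are invented.

```python
# Hypothetical sketch of patch evaluation: compare each database's
# reported patch level against current advisories and flag the ones
# that need work. Versions and advisories are invented.
inventory = {
    "finance-db": {"platform": "oracle", "patch_level": "10.2.0.4"},
    "hr-db":      {"platform": "mssql",  "patch_level": "9.00.4035"},
}

advisories = {
    "oracle": {"latest": "10.2.0.5", "critical": True},
    "mssql":  {"latest": "9.00.4035", "critical": False},
}

for name, db in inventory.items():
    adv = advisories[db["platform"]]
    if db["patch_level"] != adv["latest"]:
        urgency = "critical - schedule now" if adv["critical"] else "routine"
        print(f"{name}: behind ({db['patch_level']} < {adv['latest']}), {urgency}")
    else:
        print(f"{name}: up to date")
```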

Our process is:

  1. Evaluate
  2. Acquire
  3. Test & Approve
  4. Confirm & Deploy
  5. Document

Evaluate

  • Time to monitor for advisories per database type (vendor alerts and industry advisories announce patch availability)
  • Time to identify appropriate patches
  • Time to identify workarounds (identify workarounds if available, and determine whether they are appropriate)
  • Time to determine priority (e.g., is this a critical vulnerability, and if so, when should you apply the patch?)

Acquire

  • Time to acquire patch(es)
  • Costs for maintenance, support, or additional patch management tools (optional: updates to vendor maintenance contracts, if required)

Test & Approve

  • Time to create regression test cases and acceptance criteria (i.e., how will you verify the patch does not break your applications?)
  • Time to set up test environment (obtain servers, tools, and software for verification, then set up for testing)
  • Time to run tests (may require multiple cycles, depending upon test cases)
  • Time to analyze results
  • Time to create deployment packages (optional if not using stock patches; approve, label, and archive the tested patch)

Confirm & Deploy

  • Time to schedule and notify (schedule personnel and communicate downtime to users)
  • Time to install (take the DB offline, back up, patch the database, and restart)
  • Time to verify (verify the patch installed correctly and database services are available)
  • Time to clean up (remove temp files)

Document

  • Time to document (close out trouble tickets and update workflow tracking)

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization.

—Adrian Lane

White Paper Released: Endpoint Security Fundamentals

By Mike Rothman

Endpoint Security is a pretty broad topic. Most folks associate it with traditional anti-virus or even the newfangled endpoint security suites. In our opinion, looking at the issue just from the perspective of the endpoint agent is myopic. To us, endpoint security is as much a program as anything else.

In this paper we discuss endpoint security from a fundamental blocking and tackling perspective. We start with identifying the exposures and prioritizing remediation, then discuss specific security controls (both process and product), and also cover the compliance and incident response aspects.

It’s a pretty comprehensive paper, which means it’s not short. But if you are trying to understand how to comprehensively protect your endpoint devices, this paper will provide a great perspective and allow you to put all your defenses into context. We assembled this document from the Endpoint Security Fundamentals series posted to the blog in early April, all compiled together, professionally edited, and prettified.

Special thanks to Lumension Security for licensing the report.

You can download the paper directly (PDF), or visit the landing page, where you can leave comments or criticism, and track revisions.

—Mike Rothman