Top 3 Steps to Simplify DLP without Compromise

Just when I thought I was done talking about DLP, interest is increasing again. Below is an article I wrote on minimizing the complexity of a DLP deployment. It was written for the Websense customer newsletter/site, but reflects my usual independent perspective.

One of the most common obstacles to a DLP deployment is psychological, not technical. With massive amounts of content and data streaming throughout the enterprise in support of countless business processes, the idea that we can somehow wrangle this information in any meaningful way, with minimal disruption to business processes, is daunting if not nigh inconceivable. This idea is especially reinforced among security professionals still smarting from the pain of deploying and managing the constant false positives and tuning requirements of intrusion detection systems.

Since I started covering DLP technologies about seven years ago, I've talked with hundreds of people who have evaluated and/or deployed data loss prevention. Over the course of those conversations I've learned what tends to work, what doesn't, and how to reduce the potential complexity of DLP deployments. Once you break the process down, it turns out that DLP isn't nearly as difficult to manage as some other security technologies, and even very large organizations can rapidly reap the benefits of DLP without creating time-consuming management nightmares. The trick, as you'll see, is to treat your DLP deployment as an incremental process. It's like eating an elephant – you merely have to take it one bite at a time. Here are my top 3 tips, drawn from those hundreds of conversations:

1. Narrow your scope: One of the most common problems with an initial DLP deployment is starting on too wide a scale. Your scope of deployment is defined by two primary factors – how many DLP components you deploy, and how many systems/employees you monitor. A full-suite DLP solution is capable of monitoring network traffic, integrating with email, scanning stored data, and monitoring endpoints. For your initial scope, pick just one of these components to start with. I usually recommend starting with anything other than endpoints, since that leaves you fewer components to manage. Most organizations start on the network (usually with email) since it's easy to deploy in passive mode, but I now see some companies starting with scanning stored data due to regulatory requirements. In either case, stick with one component as you develop your initial policies, and then narrow the scope to a subset of your network or storage. A mid-sized organization might not need to narrow much, but a large organization should pick a subnet or a single egress point rather than trying to watch everything. Why narrow the scope? Because in the next step we deploy our policies, and starting with a single component and a limited subset of your traffic/systems provides the information you need to tune policies without being overwhelmed by incidents you feel compelled to manage.

2. Start with one policy: Once you've defined your initial scope, it's time to deploy a policy. And yes, I mean a policy, not many policies. The policy should be narrow and aligned with your data protection priorities, e.g. credit card number detection, or partial document matching on a subset of sensitive engineering plans. You aren't trying to define a perfect policy out of the box; that's why we keep the scope narrow. Once the policy is ready, launch it in monitoring mode. Over the next few days you should get a good sense of how well the policy works and how it needs tuning. Many of you are likely looking for similar kinds of information, like credit card numbers, in which case the out-of-the-box policies included with your DLP product may be sufficient with little to no tuning. (A sketch of what such a policy does internally appears below.)

3. Take the next bite: Once you are comfortable with the results you are seeing, it's time to expand your deployment scope. Most successful organizations start by expanding the coverage of the existing policy (systems scanned or network traffic), and then add DLP components to it (storage, endpoint, other network channels). Then start the process over with the next policy. This iterative approach doesn't necessarily take very long, especially if you leverage out-of-the-box policies. Unlike IDS, you gain immediate benefits without having to cover all traffic throughout your entire organization, and you get to tune your policies without being overwhelmed, while managing real incidents and exposure risks.
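To make the single-policy step concrete, here is a minimal sketch of what a credit card detection policy does under the hood: a regular expression finds candidate numbers, then a Luhn checksum filters out random digit strings to cut false positives. This is illustrative Python, not the policy syntax of any particular DLP product.

```python
import re

# Candidate PANs: 13-16 digits, optionally separated by spaces or dashes.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum - filters out digit strings that can't be card numbers."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(text: str) -> list[str]:
    """Return matches that look like real card numbers (monitoring mode)."""
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(match.group())
    return hits

# The Visa test number passes the checksum; the reference number does not.
print(scan("Order notes: card 4111 1111 1111 1111, ref 1234-5678-9012-3456"))
```

In monitoring mode you would count hits per channel for a few days, then tune the pattern (or just use a vendor's built-in policy) before even thinking about enforcement.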


Tokenization: the Business Justification

Justifying an investment in tokenization actually involves two separate steps – first justifying an investment to protect the data, and then choosing tokenization as the method. Covering all the justifications for protecting data is beyond the scope of this series, but a few common drivers are typical:

  • Compliance requirements
  • Reducing compliance costs
  • Threat protection
  • Risk mitigation

We've published a full model (and worksheet) on this problem in our paper The Business Justification for Data Security. Once you've decided to protect the data, the next step is to pick the best method. Tokenization is designed to solve a very narrow but pervasive and critical problem: protecting discrete data fields within applications, databases, storage, and across networks. The most common use for tokenization is to protect sensitive key identifiers, such as credit card numbers, Social Security Numbers, and account numbers. Less commonly, we also see tokenization used to protect full customer/employee/personal records. The difference between the two (which we'll delve into more in our architectural discussion) is that in the first case the tokenization server stores only the token and the sensitive field, while in the second case it includes additional data, such as names and addresses.

Reasons to select tokenization include:

  • Reduced compliance scope and costs: Since tokenization completely replaces a sensitive value with a random value, systems that use the token instead of the real value are often exempt from the audits/assessments that regulations require for the original sensitive data. For example, if you replace a credit card number in your application environment with tokens, the systems using the tokens may be excluded from your PCI assessment – reducing the assessment scope and cost.
  • Reduced application changes: Tokenization is often used to protect sensitive data within legacy application environments where we might previously have used encryption. Tokenization allows us to protect the sensitive value with an analogue in the exact same format, which can minimize application changes. For example, encrypting a Social Security Number involves not only managing the encryption, but changing everything from form field logic to database field format requirements. Many of these changes can be avoided with tokenization, so long as the token formats and sizes match the original data.
  • Reduced data exposure: A key advantage of tokenization is that it requires data consolidation. Sensitive values are stored only on the tokenization server(s), where they are encrypted and highly protected. This reduces exposure compared to traditional encryption deployments, where cryptographic access to sensitive data tends to show up in many locations.
  • Masking by default: Since the token value is random, it also effectively functions as a data mask. You don't need to worry about adding masking to applications, since the real value is never exposed (the exception being cases where even the token value could lead to misuse within your environment). Tokenization solutions do not offer as many formatting options to preserve value for reporting and analytics, but fully tokenized solutions provide greater security and less opportunity for data leakage or reverse engineering.

For the most part, the primary reason organizations select tokenization over alternatives is cost reduction: reduced costs for application changes, followed by reduced audit/assessment scope. We also see organizations select tokenization when they need to update security for large enterprise applications – as long as you have to make a lot of changes, you might as well reduce potential data exposure and minimize the need for encryption at the same time.

Understanding and Selecting a Tokenization Solution: Part 1, Introduction
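To make the "reduced application changes" point concrete, here is a hedged sketch of format-preserving token generation: the token keeps the original field's length and last four digits (a common convention for receipts), while the rest is random. The in-memory dictionary stands in for an encrypted, access-controlled token vault; all names here are illustrative, not any vendor's API.

```python
import secrets

def make_token(pan: str) -> str:
    """Random, format-preserving token: same length, digits only,
    preserving the last four digits for receipts and reporting."""
    random_part = "".join(secrets.choice("0123456789")
                          for _ in range(len(pan) - 4))
    return random_part + pan[-4:]

# Stand-in for the tokenization server's vault; real deployments
# encrypt stored values and tightly control access to this mapping.
vault: dict[str, str] = {}

def tokenize(pan: str) -> str:
    token = make_token(pan)
    while token in vault:        # regenerate on the (rare) collision
        token = make_token(pan)
    vault[token] = pan
    return token

token = tokenize("4111111111111111")
# Downstream systems keep working: same length, same field format.
print(token, len(token) == 16, token.endswith("1111"))
```

Because the token fits the original schema, form validation logic and database column definitions stay untouched – which is exactly the cost saving described above.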


Friday Summary: July 1, 2010

Earlier this week I was at the gym. I'd just finished a pretty tough workout and dropped down to the cafe area to grab one of those adult candy bars that tastes like cardboard and claims to give you muscles, longer life, and sexual prowess while climbing mountains. At least, that's what I think they claim, based on the pictures on the box. (And as a former mountain rescue professional, the technical logistics of the last claim aren't worth the effort and potential injuries to sensitive bits.) Anyway, there was this woman in front of me, and her ordering process went like this:

  • Ask for item.
  • Ask for about 5-6 different options on said menu item, essentially replacing all ingredients.
  • Look surprised when a number following a dollar sign appears on the little screen facing her on the cash register.
  • Reach down to gym bag. Remove purse. Reach into purse. Remove wallet. Begin scrounging through change.
  • See salad in cooler out of corner of eye. Say, "Oh! I didn't see that!"
  • Walk to cooler, leaving all stuff in front of register, with transaction in the middle.
  • Fail to see or care about line behind her.

At this point, as she was rummaging through the pre-made salads, the guy behind the register looked at me, I looked at him, and we both subconsciously communicated our resignation at the idiocy of the display in front of us. He moved over and unlocked the next register so I could buy my mountain-prowess-recovery bar, at which point the woman returned to the register and looked surprised that he was helping other (more decisive and prepared) customers.

One of my biggest pet peeves is people who lack awareness of the world around them. Which is most people, and probably explains my limited social life. But they probably hate judgmental sanctimonious jerks like me, so it all works out. Just think about how many fewer security (and other) problems we'd have in the world if people would just swivel their damn heads and consider other people before making a turn. John Lennon should write a song about that or something. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mike and Adrian on the Open Network Podcast, talking open source software vulnerabilities.
  • Rich, Martin, and Zach on the Network Security Podcast.
  • Rich quoted on DLP in eWeek.

Favorite Securosis Posts

  • Adrian Lane: IBM gets a BigFix for Tivoli Endpoint Management. Congratulations to the BigFix team!
  • Mike Rothman: IBM gets a BigFix. Normally I don't toot my own horn, but this was a good deal of analysis. Fairly balanced and sufficiently snarky…
  • David Mortman: Understanding and Selecting a Tokenization Solution: Introduction.
  • Rich: Ditto.

Other Securosis Posts

  • Understanding and Selecting SIEM/LM: Integration.
  • Know Your Adversary.
  • Tokenization: the Business Justification.
  • Understanding and Selecting SIEM/LM: Advanced Features.
  • Incite 6/30/2010: Embrace Individuality.
  • Understanding and Selecting SIEM/LM: Data Management.
  • DB Quant: Manage Metrics, Part 3, Change Management.

Favorite Outside Posts

  • Adrian Lane: Full Disclosure: Our Turn. Not only does this show just how easily this can happen – to anyone – but it underscores the difficulty for sites built from dozens of components from different vendors. The "weakest link in the chain" rule applies.
  • David Mortman: Same for me – Full Disclosure: Our Turn.
  • Rich: A great TED talk on self-deception. I really love better understanding our own biases.

Project Quant Posts

  • DB Quant: Protect Metrics, Part 2, Patch Management.
  • DB Quant: Manage Metrics, Part 1, Configuration Management.
  • DB Quant: Protection Metrics, Part 4, Web Application Firewalls.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Rich and Adrian in the Essentials Guide to Data Protection.
  • Justices Uphold Sarbanes-Oxley Act. Laughably, some parties complained SOX is not being followed by foreign companies! Heck, US companies don't follow SOX! Off-balance-sheet assets? Synthetic CDOs? Please, stop pretending.
  • Alleged Russian agents used high-tech tricks. A review of how the alleged Russian spies allegedly moved data. Interesting mix of old techniques and new technologies. Since any information can be digitized, the risk of being caught is far less, and prosecution much more difficult, if spy and spy-handler are never in the same spot together.
  • Twitter mandated to establish information security program.
  • Destination Hotels breached.
  • FBI fails to crack TrueCrypt.
  • Top applications fail to leverage Windows security protections. This is a huge deal – if the apps don't opt into anti-exploitation protections, they are essentially a dagger straight to the heart of the OS if an attacker finds a vuln.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Michael O'Keefe, in response to The Open Source Database Security Project.

Adrian – thanks for the reply. Maybe risk assessment wasn't the right word – I was thinking of some sort of market analysis to determine which open source databases to focus on. I was using selection criteria like "total number of installations" and "total size in bytes", etc, but user groups is indeed a good criterion to use, since you are targeting an audience of actual ordinary users, not mega companies like Facebook and Twitter that should be managing the security themselves. Maybe these types of distributed databases (BigTable, Cassandra) should be the focus of a separate project? A quick search of Securosis shows one mention of BigTable, so while I don't want to expand the scope of the current project, these "storage systems" do offer some interesting security problems. For example here Peter Fleischer from Google discusses the difficulty in complying with the EU Data Protection Directive: http://peterfleischer.blogspot.com/2009/04/cloud-policy-consequences-for-privacy.html


Understanding and Selecting a Tokenization Solution: Introduction

Updated: 06/30/2010

One of the most daunting tasks in information security is protecting sensitive data in (often complex and distributed) enterprise applications. Even the most hardened security professionals enter these projects with at least a modicum of trepidation. Coordinating effective information protection across application, database, storage, and server teams is challenging under the best of circumstances – and much tougher when also facing the common blend of legacy systems and conflicting business requirements. For the most part our answer to this problem has been various forms of encryption, but over the past few years we've seen increasing interest in, and adoption of, tokenization.

Encryption, implemented properly, is one of the most effective security controls available to us. It renders information unreadable except to authorized users, and protects data both in motion and at rest. But encryption isn't the only data protection option, and there are many cases where alternatives make more sense. Sometimes the right choice is to remove the data entirely. Tokenization is just such a technology: it replaces the original sensitive data with non-sensitive placeholders.

Tokenization is closely related to encryption – both mask sensitive information – but its approach to data protection is different. With encryption we protect the data by scrambling it using a process that's reversible if you have the right key: anyone with access to the key and the encrypted data can regenerate the original values. With tokenization the process is not reversible. Instead we substitute a token value that's associated with the "real" data only within a well-protected database. The token can even have the exact same format (size and structure) as the original value, helping minimize application changes. But the token is effectively random, rather than a scrambled version of the original data, so it cannot be reversed to reveal the sensitive data.

The power of tokenization is that although the token value is usable within its native application environment, it is completely useless outside it. So tokenization is ideal for protecting sensitive identifying information such as credit card numbers, Social Security Numbers, and the other personally identifiable information bad guys tend to steal and use or sell on the underground market. Unless attackers crack the tokenization server itself to obtain the original data, stolen tokens are worthless.

Interest in tokenization has accelerated because it protects data at a lower overall cost. Adding encryption to systems – especially legacy systems – introduces a burden outside the original design. Making application changes to accommodate encrypted data can dramatically increase overhead, reduce performance, and expand the responsibilities of programmers and systems management staff. In distributed application environments, the need to encrypt, decrypt, and re-encrypt data in different locations creates exposures that attackers can take advantage of: more instances where systems handle keys and data mean more opportunities for compromise. For example, one growing attack technique is memory-parsing malware: malicious software installed on servers that directly accesses memory to pull encryption keys or data from RAM, even when running without administrative privileges.

Aside from minimizing application changes, tokenization also reduces potential data exposure. When properly implemented, tokenization enables applications to use the token throughout the whole system, accessing the protected value only when absolutely necessary. You can use, store, and transact with the token without fear of exposing the sensitive data it represents. Although at times you need to pull out the real value, tokenization allows you to constrain its usage to your most secure implementations. For example, one of the most common uses for tokenization is credit card transaction systems. We'll go into more depth later, but using a token for the credit card number allows us to track transactions and records, exposing the real number only when we need to send a transaction off to the payment processor. And if the processor uses tokenization as well, we might be able to completely eliminate storing credit card numbers.

This doesn't mean tokenization is always a better choice than encryption. The two are closely related, and the trick is to determine which will work best under your particular circumstances. In this series we'll dig deep into tokenization to explain how the technology works, explore different use cases and deployment scenarios, and review selection criteria to pick the right option. We'll cover everything from tokenization services for payment processing and PCI compliance to rolling your own solution for internal applications. In our next post we'll describe the different business justifications, then follow up with a high-level description of the different tokenization models. After that we'll post on the technology details, deployment, use cases, and finally selection criteria and guidance. If you haven't figured it out by now, we'll be pulling all this together into a white paper for release later this summer.

Just keep this in mind: sometimes the best data security choice is to avoid keeping the data at all. Tokenization lets us remove sensitive data while retaining much of its value.
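As a hedged illustration of the usage pattern above – every system handles only the token, and just one tightly controlled path can exchange it for the real value – here is a minimal Python sketch. The vault dictionary and caller names are hypothetical stand-ins for an authenticated tokenization service, not any product's interface.

```python
# Hypothetical vault contents; in practice this lives on the
# tokenization server, encrypted and behind authenticated APIs.
VAULT = {"5316829702114111": "4111111111111111"}

# The only component permitted to see real card numbers.
AUTHORIZED_CALLERS = {"payment-gateway"}

def detokenize(token: str, caller: str) -> str:
    if caller not in AUTHORIZED_CALLERS:
        raise PermissionError(f"{caller} may not access protected values")
    return VAULT[token]

def record_order(token: str) -> None:
    # Order history, analytics, support tools: token only, safe to store.
    print(f"order stored with token {token}")

def settle(token: str) -> None:
    # Exposure of the real number is confined to this one step.
    pan = detokenize(token, "payment-gateway")
    print(f"sending {pan[:6]}... to the processor")

record_order("5316829702114111")
settle("5316829702114111")
```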


DB Quant: Manage Metrics, Part 3, Change Management

Believe it or not, we are down to our final metrics post! We're going to close things out today with change management – something that isn't specific to security, but comes with security implications. Our change management process is:

  • Monitor
  • Schedule and Prepare
  • Alter
  • Verify
  • Document

Monitor

  • Time to gather change requests
  • Time to evaluate each change request for security implications

Schedule and Prepare

  • Time to map request to specific actions/scripts
  • Time to update change management system
  • Time to schedule downtime/maintenance window and communicate

Alter

  • Time to implement change request

Verify

  • Time to test and verify changes

Document

  • Time to document changes
  • Time to archive scripts or backups

As with the rest of the model, these variables roll up into hours and costs; see the sketch after the post list below.

Other Posts in Project Quant for Database Security

  • An Open Metrics Model for Database Security: Project Quant for Databases
  • Database Security: Process Framework
  • Database Security: Planning
  • Database Security: Planning, Part 2
  • Database Security: Discover and Assess Databases, Apps, Data
  • Database Security: Patch
  • Database Security: Configure
  • Database Security: Restrict Access
  • Database Security: Shield
  • Database Security: Database Activity Monitoring
  • Database Security: Audit
  • Database Security: Database Activity Blocking
  • Database Security: Encryption
  • Database Security: Data Masking
  • Database Security: Web App Firewalls
  • Database Security: Configuration Management
  • Database Security: Patch Management
  • Database Security: Change Management
  • DB Quant: Planning Metrics, Part 1
  • DB Quant: Planning Metrics, Part 2
  • DB Quant: Planning Metrics, Part 3
  • DB Quant: Planning Metrics, Part 4
  • DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  • DB Quant: Discovery Metrics, Part 2, Identify Apps
  • DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  • DB Quant: Discovery Metrics, Part 4, Access and Authorization
  • DB Quant: Secure Metrics, Part 1, Patch
  • DB Quant: Secure Metrics, Part 2, Configure
  • DB Quant: Secure Metrics, Part 3, Restrict Access
  • DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring
  • DB Quant: Monitoring Metrics, Part 2, Audit
  • DB Quant: Protect Metrics, Part 1, DAM Blocking
  • DB Quant: Protect Metrics, Part 2, Encryption
  • DB Quant: Protect Metrics, Part 3, Masking
  • DB Quant: Protect Metrics, Part 4, WAF
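Since Quant metrics ultimately reduce to arithmetic – hours per variable, summed per phase, multiplied by a loaded hourly rate – here is a toy roll-up in Python. Every number is made up purely for illustration; plug in your own measurements.

```python
# Hypothetical hours per change management variable, per change cycle.
PHASES = {
    "Monitor": {
        "gather change requests": 1.0,
        "evaluate security implications": 2.0,
    },
    "Schedule and Prepare": {
        "map request to actions/scripts": 1.5,
        "update change management system": 0.5,
        "schedule window and communicate": 0.5,
    },
    "Alter": {"implement change request": 3.0},
    "Verify": {"test and verify changes": 2.0},
    "Document": {"document changes": 1.0, "archive scripts/backups": 0.5},
}

HOURLY_RATE = 85.0  # illustrative fully loaded cost per hour

total = 0.0
for phase, variables in PHASES.items():
    hours = sum(variables.values())
    total += hours
    print(f"{phase}: {hours:.1f} hours")
print(f"Total: {total:.1f} hours = ${total * HOURLY_RATE:,.2f} per change cycle")
```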


Trustwave, Acquisitions, PCI, and Navigating Conflicts of Interest

This morning Trustwave announced their acquisition of Breach Security, the web application firewall vendor. Trustwave has been on an acquisition streak for a while now, picking up companies such as Mirage (NAC), Vericept (DLP), BitArmor (encryption), and Intellitactics (log management/SIEM). Notice any trends? All these products have a strong PCI angle, none of the companies were seeing strong sales (Trustwave doesn't do acquisitions for large multiples of sales), and all were more mid-market focused. Adding a WAF to the mix makes perfect sense, especially since Trustwave also offers web application testing (both controls meet PCI requirement 6.6). Trustwave is clearly looking to become a one-stop shop for PCI compliance, especially since they hold the largest share of the PCI assessment market.

To be honest, there are concerns about Trustwave and other PCI assessment firms offering both assessment and remediation services. You know, the old fox guarding the henhouse thing. There's a reason regulations prohibit financial auditors from offering other services to their clients – the conflicts of interest are extremely difficult to eliminate or even keep under control. When the person making sure you are compliant also sells you tools to help you become compliant, we should always be skeptical. We all know how this goes down. Sales folks will do whatever it takes to hit their numbers (you know, they have BMW payments to make), and few of them have any qualms about telling a client they will be compliant if they buy both their assessment services and a nice package of security tools and implementation services. They'll use words like "partners" and "holistic" to seem all warm and fuzzy. We can't really blame Trustwave and other firms for jumping all over this opportunity. The PCI Council shows no interest in controlling conflicts of interest, and when a breach does happen, the investigation in the kangaroo court will show the company wasn't compliant anyway.

But there is also an upside. Every single client of every single PCI assessment, consulting, or product firm merely wants them to make PCI "go away", especially in the mid-market. Having firms with a complete package of services is compelling, and companies with big security product portfolios like Symantec, McAfee, and IBM aren't well positioned to provide a full PCI-related portfolio, even though they have many of the pieces. If Trustwave can pull all these acquisitions together, make them good enough, and hit the right price point, odds are they will make a killing in the market. They face three major challenges in the process:

  • Failing to properly manage the conflicts of interest could become a liability. Unhappy customers could lead to bad press and word of mouth, or even to changes in the PCI code to remove the conflicts, which they want to avoid at all costs. The actual assessors and consultants are reasonably well walled off, but Trustwave will need to aggressively manage its own sales force to avoid problems. Ideally account execs will only sell one side of the product line, which could help manage the potential issues.
  • Customers won't understand that PCI compliance isn't the same as general security. Trustwave may get the blame for non-PCI security breaches (never mind real cardholder data breaches), especially given the PCI Council's history of playing Tuesday morning QB and saying no breached organization could possibly be compliant (even if it passed its assessment).
  • Packaging all this together at the right price point for the mid-market won't be easy. The products need real integration, including a shared central management console and reporting engine. This is where the real leverage is – not merely services-based integration, which is not good enough for the mid-market.

So the Breach acquisition is a smart move for Trustwave, and might be good for the market. But as an assessor, Trustwave needs to manage their acquisition strategy carefully, in ways mere product mashup shops don't need to worry about.


FireStarter: Is Full Disk Encryption without Pre-Boot Secure?

This FireStarter is a genuine conversation starter rather than a definitive statement designed to rile everyone up. Over the past couple of months I've talked with a few organizations – some of them quite large – deploying full disk encryption for laptops but skipping the pre-boot environment.

For those of you who don't know, nearly every full drive encryption product works by first booting up a mini operating system. The user logs into this mini-OS, which then decrypts and loads the main operating system. This ensures that nothing is decrypted without the user's credentials. It can be a bit of a problem for installing software updates, because if the user isn't logged in you can't get to the operating system, and if you kick off a reboot after installing a patch, it will stall at pre-boot. But every major product has ways to manage this. Typically they allow you to set a "log in once" flag for the pre-boot environment for software updates, but there are a couple of other ways to deal with it. Based on the user discussions I've had, I consider this problem essentially solved.

Another downside is that users need to log into pre-boot before the operating system. Some organizations deploy their FDE to require two logins, but many more synchronize the user's Windows credentials to the pre-boot environment, then automatically log into Windows (or whatever OS is being protected). Both approaches seem fine to me, and one of the differentiators between encryption products is how well they handle user support, password changes, and other authentication issues in pre-boot.

But I'm now hearing of people deploying an FDE product without using pre-boot. Essentially (I think) they reverse the process I just described: the system automatically logs into the pre-boot environment, and then the user logs into Windows. I'm not talking about the tricky stuff a near-full-disk-encryption product like Credant uses, but skipping pre-boot altogether. This seems fracking insane to me. You somewhat reduce the risk of a forensic evaluation of the drive, but lose most of the benefits of FDE. In every case, the reason given is, "We don't want to confuse our users."

Am I missing something here? In my analysis this obviates most of the benefits of FDE, making it a big waste of cash. Then again, let's think about compliance. Most regulations say, "Thou shalt encrypt laptop drives." Thus this seems to tick the compliance checkbox, even if it's a bad idea from a security perspective. Also, realistically, the vast majority of lost drives don't result in the compromise of data. I'm unaware of any non-targeted breach where a lost drive resulted in losses beyond the cost of dealing with breach reporting. I'm sure there have been some, but none have crossed my desk.
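To see why skipping pre-boot weakens the model, consider a stripped-down sketch of how pre-boot authentication protects the volume key: the key-encrypting key is derived from the user's passphrase, so without user input at boot there is nothing secret to derive it from – an auto-login pre-boot has to store the credential somewhere on the disk itself. This is illustrative Python; the XOR "cipher" is a stand-in for the real key wrap (typically AES) that products actually use.

```python
import hashlib, os, secrets

def derive_kek(passphrase: str, salt: bytes) -> bytes:
    # Key-encrypting key derived from the user's pre-boot passphrase.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def xor(a: bytes, b: bytes) -> bytes:
    # Toy stand-in for a real key-wrapping cipher; illustration only.
    return bytes(x ^ y for x, y in zip(a, b))

# Provisioning: wrap the random volume key under the passphrase-derived KEK.
volume_key = secrets.token_bytes(32)   # what actually encrypts the disk
salt = os.urandom(16)
wrapped = xor(volume_key, derive_kek("correct horse battery", salt))

# Boot with pre-boot auth: the passphrase is the only way to recover the key.
assert xor(wrapped, derive_kek("correct horse battery", salt)) == volume_key

# Without pre-boot auth, the passphrase (or an equivalent secret) must be
# stored on the disk for auto-login, so anyone holding the lost laptop has
# everything needed to unwrap the volume key.
```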


Friday Summary: June 18, 2010

Dear Securosis readers,

The Friday Summary is currently unavailable. Our staff is at an offsite in an undisclosed location completing our world domination plans. We apologize for the inconvenience, and instead of our full summary of the week's events, here are a few links to keep you busy. If you need more, Mike Rothman suggests you "find your own &%^ news". Mike's attitude does not necessarily represent Securosis, even though we give him business cards. Thank you, we appreciate your support, and turn off your WiFi!

Securosis Posts

  • Doing Well by Doing Good (and Protecting the Kids).
  • Take Our Data Security Survey & Win an iPad.
  • Incite 6/16/2010: Fenced in.
  • Need to know the time? Ask the consultant.
  • Top 5 Security Tips for Small Business.
  • If You Had a 3G iPad Before June 9, Get a New SIM.
  • Insider Threat Alive and Well.

Other News

  • Unpatched Windows XP Flaw Being Exploited.
  • Zombies take over Newsweek (the brain-eating kind, not a botnet).


Take Our Data Security Survey & Win an iPad

One of the biggest problems in security is that we rarely have a good sense of which controls actually improve security outcomes. This is especially true for newer areas like data security, which is filled with tools and controls that haven't been as well tested or widely deployed as things like firewalls. Thanks to all the great feedback you sent in on our drafts, we are happy to kick off our big data security survey.

This one is a bit different than most of the others you've seen floating around, because we focus more on the (perceived) effectiveness of controls rather than on losses and incidents. We do have some incident-related questions, but only what we need to feed the effectiveness results. As with most of our surveys, we've set this one up so you can take it anonymously, and all the raw results (anonymized, in spreadsheet format) will be released after our analysis.

Since we have a sponsor for this one (Imperva), we actually have a little budget, and will be giving away a 32GB WiFi iPad to a random participant. You don't need to provide an email address to take the survey, but you do if you want the iPad. If we get a lot of respondents (say over 200) we'll cough up for more iPads, so the odds stay better than the lottery.

Click here to take the survey, and please spread the word. We designed it to take only 10-20 minutes. Even if you aren't doing a lot with data security, we need your responses to balance the results.

With our surveys we also use something called a "registration code" to keep track of where people found out about it. We use this to get a sense of which social media channels people use. If you take the survey based on this post, please use "Securosis". If you re-post this link, feel free to make up your own code and email it to us, and we will let you know how many people responded to your referral – get enough and we can give you a custom slice of the data. Thanks!

Our plan is to keep the survey open for a few weeks.


Top 5 Security Tips for Small Business

We in the security industry tend to lump small and medium businesses together as "SMB", but there are massive differences between a 20-person retail outlet and even a 100-person operation. These suggestions are specifically for small businesses with limited resources, based on everything we know about the latest threats and security defenses. The following advice is not conditional – there really isn't any safe middle ground, and these recommendations aren't very expensive. They are designed to limit the chance you will be hit with attacks that compromise your finances or your ability to continue business operations, and we're ignoring everything else:

1. Update all your computers to the latest operating systems and web browsers – as of this writing, that means Windows 7 or Mac OS X 10.6. On Windows, use at least Internet Explorer 8 or Firefox 3.6 (Firefox isn't necessarily any more secure than the latest versions of IE). On Macs, use Firefox 3.6. Most small businesses struggle with keeping malware off their computers, and the latest operating systems are far more secure than earlier versions. Windows XP is nearly 10 years old at this point – odds are most of your cars are newer than that.

2. Turn on automatic updates (Windows Update, or Software Update on the Mac) and set them to check for and automatically install patches daily. If this breaks software you need, find an alternative program rather than turning off updates. Keeping your system patched is your best security defense, because most attacks exploit known vulnerabilities. But since those vulnerabilities are converted to attacks within hours of becoming public (when the patch is released, if not earlier), you need to patch as quickly as possible.

3. Use a dedicated computer for your online banking and financial software. Never check email on this system. Never use it to browse any web site except your bank's. Never install any applications other than your financial application. You can enforce this by setting up a non-administrative user account and then using parental controls to restrict which web sites it can visit. Cheap computers run about $200 for a new PC or $700 for a new Mac mini, and this blocks the single most common way bad guys steal money from small businesses: compromising a machine and then stealing credentials via a software key logger. Currently, the biggest source of financial losses for small business is malicious software sniffing your online bank credentials, which are then used to transfer funds directly to money mules. This is a better investment than any antivirus program.

4. Arrange with your bank to require in-person or phone confirmation for any transfers over a certain amount, and check your account daily. Yes, "react faster" applies here as well. The sooner you learn about an attempt to move money out of your account, the more likely you'll be able to stop it. Remember that business accounts do not have the same fraud protections as consumer accounts; if someone transfers your money out because they broke into your online banking account, it is very unlikely you will ever recover the funds.

5. Buy backup software that supports both local and remote backups, like CrashPlan. Back up locally to hard drives, and keep at least one backup for any major systems off-site but accessible. Then subscribe to the online backup service for any critical business files. Remember that online backups are slow and can take a long time to restore, which is why you want something closer to home. Joe Kissell's Take Control of Mac OS X Backups is a good resource for developing your backup strategy, even if you are on Windows 7 (which includes some built-in backup features). Hard drives aren't designed to last more than a few years, and all sorts of mistakes can destroy your data.

Those are my top 5, but here are a few more:

  • Turn on the firewalls on all your computers. They can't stop all attacks, but they do reduce some risks – such as when another computer on the network (which might just mean in the same coffee shop) is compromised by bad guys, or someone connects an infected computer (like a personal laptop) to your network.
  • Have employees use non-administrator (standard) accounts if at all possible. This limits the chances of those computers being exploited, and if they are, limits the damage.
  • If you have shared computers, use non-administrator accounts and turn on parental controls to restrict what can be installed on them. If possible, don't even let them browse the web or check email (this really depends on the kind of business you have… if employees complain, buy an iPad or a spare computer that isn't needed for business and isn't tied to any other computer). Most exploits today arrive through email, web browsing, and infected USB devices – this helps with all three.
  • Use an email service that filters spam and viruses before they reach your account.
  • If you accept payments/credit cards, use a service, and make sure the provider can document that their setup is PCI compliant, that card numbers are encrypted, and that any remote access they use for support has a unique username and password that is changed every 90 days. Put those requirements into the contract. Failing to take these precautions makes a breach much more likely.
  • Install antivirus from a major vendor (if you are on Windows). There is a reason this is last on the list – you shouldn't even think about it before doing everything else above.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.