
Encryption: The Maginot Line of Data Security

History is a funny thing. It’s amazing that what many children see in early schooling as a boring collection of facts is neither boring nor factual. On a good day we might get some dates correct, but there isn’t a “fact” in history that isn’t open to interpretation. This is as it should be; think about all the factors that went into a major life decision- say a marriage or picking your college. Now distill everything involved in that decision into a paragraph, stick it in a drawer for a couple decades, pull it out, and see if it still matches your memories and accurately reflects the situation. If you don’t have a few decades to spare, the answer is, “it doesn’t.” The main problems with history are actually those we see in computer science- bandwidth, compression, indexing, and search. We can’t possibly collect and store all the bandwidth of human interaction, so we drop into “sampling mode” and further compress it for long-term storage. We then rely on imperfect indexing to organize the data, and flawed search protocols to find what we need. We don’t collect everything, we lose large amounts of data in compression, we index it poorly, and we rely on primitive search tools. No wonder history is open to interpretation.

Take the Maginot Line. And encryption. For those of you who aren’t military history buffs, the Maginot Line was a series of interlocking defenses, sometimes 25 kilometers deep, that the French built after WWI to keep the Germans out. In popular security culture the term is often used to describe a misguided investment- one designed to fight the last war, and easily circumvented. In marketing films of the time the Maginot Line was promoted as an invincible defense for France- a folly painfully realized when the German invasion succeeded in about six weeks. A metaphor for a failure of hubris.

Reality is, of course, open to interpretation. Another interpretation of the Maginot Line is that it completely succeeded in its defined task: preventing a frontal assault along the Franco-German border. The Maginot Line held, but the other defensive layers- the Ardennes and the French Army along the Belgian border- failed. The Maginot Line was designed for a mission it effectively met, but other flaws in France’s defense in depth led to the German occupation.

Which brings us to encryption. The first version of the PCI Data Security Standard called encryption “the ultimate data security technology”. Wrong. Encryption is a powerful technology, but probably the most misunderstood in the context of what it provides for data security. With the McAfee acquisition of SafeBoot for $350M, encryption is in the headlines again. A while ago I wrote the Three Laws of Data Encryption to help users get the most value out of encryption. I really do think of encryption as the Maginot Line of data security. It’s powerful- nigh invincible- if used correctly, but easily circumvented if your other security controls aren’t properly designed. For example, if you have a large application connected to a large database full of encrypted credit card numbers, and that application is subject to SQL injection, odds are your encryption is worthless (the short sketch below illustrates why). Laptop encryption protects you from stolen laptops, but is useless against malicious software running in the context of the user. As I keep walking through the Data Security Lifecycle you’ll see a lot of posts on encryption; it’s a fundamental technology for protecting content.
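
To illustrate the SQL injection point, here is a minimal sketch- not from the original post- using Python, SQLite, and the cryptography library (all my own choices; the table, column, and function names are hypothetical). The application holds the decryption key, so any query an attacker smuggles through the application returns plaintext, no matter how well the column is encrypted at rest.

```python
# Hypothetical demo: column-level encryption doesn't help when the application
# itself is subject to SQL injection, because the app holds the key and will
# happily decrypt whatever rows the injected query returns.
import sqlite3
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()
f = Fernet(key)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cards (customer TEXT, card_number BLOB)")
for customer, pan in [("alice", "4111111111111111"), ("bob", "5500000000000004")]:
    db.execute("INSERT INTO cards VALUES (?, ?)", (customer, f.encrypt(pan.encode())))

def lookup_card(customer_name):
    # Vulnerable: the query is built by string concatenation instead of bind variables.
    query = f"SELECT card_number FROM cards WHERE customer = '{customer_name}'"
    rows = db.execute(query).fetchall()
    # The application legitimately decrypts results, so the attacker gets plaintext.
    return [f.decrypt(row[0]).decode() for row in rows]

# Normal use returns one card; the injected input dumps every card in the table.
print(lookup_card("alice"))
print(lookup_card("' OR '1'='1"))
```

The fix (parameterized queries) is beside the point here; the point is that the encryption at rest never entered into it.
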
But when big companies start throwing around hundreds of millions of dollars, I think it’s an opportune time to step back and remind ourselves of the problem we’re trying to solve, and how the different parts of the solution fit together. If we want a real-world example, we need look no further than TJX. Rumor has it that cardholder data was encrypted, but the attackers sniffed an unencrypted portion of the communications to perform transactions. The encryption worked perfectly, but the breach still succeeded.


Some Answers for Jeremiah: Website Vulnerabilities

Jeremiah posted these questions on dealing with website vulnerabilities. Here are my quick answers (I have to run- sorry for the lack of links, but you can Google the examples). The setup: let’s assume a company is informed of a SQLi or XSS vulnerability in their website (I know, shocker), either privately or via public disclosure on sla.ckers.org, and that the vulnerability potentially places private personal information (PPI) or intellectual property at risk of compromise.

1) Is the company “legally” obligated to fix the issue or can they just accept the risk? Think SOX, GLBA, HIPAA, PCI-DSS, etc.

Definitely no for intellectual property. Definitely no for SOX- SOX says you’re free to make as many dumb mistakes and lose as much money as you want, as long as you report it accurately. Other laws are a toss-up, but generally there is no obligation unless there is evidence that a breach occurred. For PCI-DSS you have to remediate or document compensating controls for any network vulnerabilities at the time of your audit (and this expands to applications with 1.1), but there is no definitive requirement for immediate remediation. California AB 1950 is the big question mark in this area, and I’m unsure of its enforcement mechanisms. The regulations are very unclear and unhelpful here, and it’s quite likely a company can accept the risk. But if a breach occurs, they may be held negligent. Take a look at the PetCo case, where the FTC mandated a security program after a breach, and Microsoft/MSN. Those companies were held liable for losing customer data, but not because of any of the usual regulations. There is almost no case law that I’m aware of.

2) What if repairs require a significant time/money investment? Is there a resolution grace period, does the company have to install compensating controls, or must they shut down the website while repairs are made?

No. Most regulations only require breach notification or remediation of flaws discovered through auditing. Reasonable person theory probably applies if there is a breach with losses and it goes to court. I’ve read all of the regulations- none mention a specific time period.

3) Should an incident occur exploiting the aforementioned vulnerability, does the company bear any additional legal liability?

They may carry liability due to negligence. See the cases I mentioned above.

4) If the company’s website is PCI-DSS certified, is the website still considered certified after the point of disclosure, given what the web application security sections dictate?

Unknown, because there are no public cases that I can find. I believe you remain certified until the next audit. In the case of CardSystems, they were PCI certified when the breach occurred, and were immediately re-audited and de-certified following public disclosure of the breach. That’s one problem with PCI-DSS- it’s very audit-reliant, and changes between audits don’t directly affect certification.

5) Does the QSA or ASV who certified the website potentially risk any PCI Council disciplinary action for certifying a non-compliant website? What happens if this becomes a pattern?

No known cases of disciplinary action, but an audit insider might know of one. Disciplinary action will most likely only take place if the audit failed to follow best practices and a large breach occurs, or if there is (as you mention) a pattern. None of this is formalized to my knowledge.

I’ve spent a lot of time researching and discussing all the various data protection and breach disclosure regulations. Organizations generally only face potential liability if they either falsify documentation for auditing or certification, or suffer a breach and are later shown to be negligent. I am unaware of legal enforcement mechanisms for a known vulnerability without a confirmed unauthorized disclosure of information. This is an inherent risk of audit-based approaches to data protection.


Understanding and Selecting a DLP Solution: Part 5, Data-In-Use (Endpoint) Technical Architecture

Welcome to Part 5 of our series on DLP/CMF/CMP; look here for: Part 1, Part 2, Part 3, and Part 4. I like to describe the evolution of the DLP/CMF market as a series of questions a CEO/CIO asks the CISO/SGIC (Security Guy In Charge). It runs something like this:

1. Hey, are we leaking any of this sensitive data out over the Internet? (Network Monitoring)
2. Oh. Wow. Can you stop that? (Network Filtering)
3. Where did all of that come from in the first place? (Content Discovery)

This is pretty much how the market evolved in terms of product capabilities, and it often represents how users deploy the products- monitoring, filtering, then discovery. But there’s another question that typically comes next:

4. Hey, what about our laptops when people are at home, and those USB things?

DLP usually starts on the network because that’s the most cost-effective way to get the broadest coverage. Network monitoring is non-intrusive (unless you have to crack SSL) and offers visibility into any system on the network, managed or unmanaged, server or workstation. Filtering is more difficult, but again fairly straightforward on the network (especially for email), and covers all systems connected to the network. But it’s clear this isn’t a complete solution; it doesn’t protect data when someone walks out the door with it on a laptop, and can’t even prevent people from copying data to portable storage like USB drives. To move from a “leak prevention” solution to a “content protection” solution, products need to expand not only to stored data, but to the endpoints where data is used.

Note: although there have been large advancements in endpoint DLP, I still don’t recommend endpoint-only solutions for most users. As we’ll discuss, they normally require you to compromise on the number and types of policies that can be enforced, offer limited email integration, and offer no protection for unmanaged systems. Long term you’ll need both network and endpoint capabilities, and most of the leading network solutions are adding (or already offer) at least some endpoint protection.

Adding an endpoint agent to a DLP solution not only gives you the ability to discover stored content, but to potentially protect systems no longer on the network, or even protect data as it’s being actively used. While extremely powerful, this has been very problematic to implement. Agents need to perform within the resource constraints of a standard desktop while maintaining content awareness. This can be a problem if you have large policies such as “protect all 10 million credit card numbers from our database”, as opposed to something simpler like “protect any credit card number”- which will give you a false positive every time an employee visits Amazon.com.

Existing products vary widely in functionality, but we can break out three key capabilities:

1. Monitoring and enforcement within the network stack: This allows enforcement of network rules without a network appliance. It should be able to enforce the same rules as if the system were on the managed network, as well as separate rules designed only for enforcement on unmanaged networks.
2. Monitoring and enforcement within the system kernel: By plugging directly into the operating system kernel you can monitor user activity, such as cutting and pasting sensitive content. This also allows you to potentially detect (and enforce) policy violations when the user takes sensitive content and attempts to hide it from detection, perhaps by encrypting it or modifying source documents.
3. Monitoring and enforcement within the file system: This allows monitoring and enforcement of where data is stored. For example, you could restrict transfer of sensitive content to unencrypted USB devices.

I’ve simplified the options, and most early products focus on 1 and 3; this solves the portable storage problem and protects devices on unmanaged networks. System/kernel integration is much more complex, and there are a variety of approaches to gaining this functionality. Over time, I think this will evolve into a few key use cases:

  • Enforcing network rules off the managed network, or modifying rules for more-hostile networks.
  • Restricting sensitive content from portable storage, including USB drives, CD/DVD drives, home storage, and devices like smartphones and PDAs.
  • Restricting cut and paste of sensitive content.
  • Restricting which applications are allowed to use sensitive content- e.g., only allowing encryption with an approved enterprise solution, not tools downloaded online that don’t allow enterprise data recovery.
  • Integration with Enterprise Digital Rights Management to automatically apply access controls to documents based on the included content.
  • Auditing use of sensitive content for compliance reporting.

Outside of content analysis and technical integration, an endpoint DLP tool should also have the following capabilities:

  • Be centrally managed by the same DLP management server that controls data-in-motion and data-at-rest (network and discovery).
  • Policy creation and management should be fully integrated with other DLP policies in a single interface.
  • Incidents should be reported to, and managed by, the central management server.
  • Rules (policies) should adjust based on where the endpoint is located (on or off the network). If the endpoint is on the managed network with gateway DLP, redundant local rules should be ignored to improve performance.
  • Agent deployment should integrate with existing enterprise software deployment tools.
  • Policy updates should offer options for secure management via the DLP management server, or via existing enterprise software update tools.
  • The endpoint DLP agent should use the same content analysis techniques as the network servers/appliances.

In short, you ideally want an endpoint DLP solution with all the content analysis techniques offered by the rest of the product line, fully integrated into the management server, with consistent policies and workflow. Realistically, the performance and storage limitations of the endpoint will restrict the types of content analysis supported and the number and type of policies that can be enforced locally. For some enterprises this might not matter, depending on the kinds of policies you’d like to enforce, but in many cases you’ll need to make serious tradeoffs when designing data-in-use policies. Endpoint enforcement is the least mature capability in the DLP/CMF/CMP market, but it’s an essential part
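
To make the location-aware policy idea concrete, here is a minimal sketch of how an endpoint agent might decide which rules to enforce. Everything in it- the class, the rule fields, the "managed network" flag- is hypothetical and not drawn from any specific product.

```python
# Hypothetical sketch of location-aware endpoint DLP rule selection.
# Rule names and fields are invented for illustration; real agents are far more complex.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    scope: str                  # "managed", "unmanaged", or "any"
    enforced_by_gateway: bool   # True if the network DLP gateway already enforces it

def active_rules(rules, on_managed_network: bool):
    """Return the rules this endpoint should enforce right now."""
    selected = []
    for rule in rules:
        # Skip rules scoped to the other network type.
        if rule.scope == "managed" and not on_managed_network:
            continue
        if rule.scope == "unmanaged" and on_managed_network:
            continue
        # On the managed network, skip rules the gateway already covers,
        # so the agent doesn't burn CPU duplicating enforcement.
        if on_managed_network and rule.enforced_by_gateway:
            continue
        selected.append(rule)
    return selected

rules = [
    Rule("block-ccn-over-http", scope="any", enforced_by_gateway=True),
    Rule("block-ccn-to-usb", scope="any", enforced_by_gateway=False),
    Rule("block-webmail-uploads", scope="unmanaged", enforced_by_gateway=False),
]

print([r.name for r in active_rules(rules, on_managed_network=True)])   # USB rule only
print([r.name for r in active_rules(rules, on_managed_network=False)])  # all three rules
```

The design point is simply that the same policy set travels with the endpoint, but what gets enforced locally changes with where the machine is sitting.
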


Retailers B*tch Slap PCI Security Standards Council, If You Believe Them

From Bill Brenner at TechTarget (who never calls anymore now that I’m independent- where’s the love?). From the letter, written by NRF Chief Information Officer David Hogan:

“All of us – merchants, banks, credit card companies and our customers – want to eliminate credit card fraud. But if the goal is to make credit card data less vulnerable, the ultimate solution is to stop requiring merchants to store card data in the first place. With this letter, we are officially putting the credit card industry on notice. Instead of making the industry jump through hoops to create an impenetrable fortress, retailers want to eliminate the incentive for hackers to break into their systems in the first place.”

The letter notes that credit card companies typically require retailers to store credit card numbers anywhere from one year to 18 months to satisfy card company retrieval requests. According to the NRF, retailers should have a choice as to whether or not they want to store credit card numbers at all.

This is an exceptionally good idea. I’ve been covering PCI since the start and never realized that one of the reasons retailers were keeping card numbers was the credit card companies themselves. That said, I’m not fully convinced they really mean it. I’ve worked with hundreds of retailers of all sizes over the years, and many keep card numbers for reasons other than the credit card company requirements. Most of their systems are built around using card numbers as customer identifiers, and removing them is a monumental task (one that some forward-looking retailers are actually starting). Retailers often use card numbers to validate purchases and perform refunds. Not that they have to, but I wonder how many are really willing to make this change?

I’ve long thought that the PCI program was designed more to reduce the risks of the credit card companies than to protect consumers. There are many other ways we could improve credit card security aside from PCI, such as greater use of smart cards and PIN-based transactions. Fortunately, even badly motivated actions can have positive effects, and I think PCI is clearly improving retail security. PCI, and credit card company practices, really push as much liability onto the retailers and issuing banks as possible. Retailers are challenging them on multiple fronts, especially transaction fees.

This is the kind of challenge I like to see- eliminating stored card numbers removes a huge risk (but not all risk, since the bad guys can still attack on a transaction basis), would reduce compliance costs, and would simplify infrastructures. We traditionally talk about four ways to respond to risk- transfer, avoid, accept, mitigate. As a martial artist, I have to admit I prefer avoiding a punch to blocking it, getting hit, or having someone else take it on the chin for me.


Slashdot Bias And Much Ado About Nothing (PGP Encryption Issue)

I’m sitting here working out of the library (it’s closer to the bars for happy hour) when a headline on Slashdot catches my eye: Undocumented Bypass in PGP Whole Disk Encryption.

“PGP Corporation’s widely adopted Whole Disk Encryption product apparently has an encryption bypass feature that allows an encrypted drive to be accessed without the boot-up passphrase challenge dialog, leaving data in a vulnerable state if the drive is stolen when the bypass feature is enabled. The feature is also apparently not in the documentation that ships with the PGP product, nor the publicly available documentation on their website, but only mentioned briefly in the customer knowledge base. Jon Callas, CTO and CSO of PGP Corp., responded that this feature was required by unnamed customers and that competing products have similar functionality.”

OMG!!!! WTF!!!! Evil backdoors in PGP!!!! Say it ain’t so!!!!

Oh, wait a moment. It’s just the temporary bypass feature that every single enterprise-class whole disk encryption product on the market supports. I love Slashdot- it’s one of the only sources I read religiously- but on occasion the hype/bias gets to me a little. The CTO of PGP responded well, and I’ll add my outsider’s support.

Full disk encryption is a must-have for laptops, but it does come with a bit of a cost. When you encrypt the system, the entire OS is encrypted, and you need a thin operating system to boot when you turn on the PC, have the user authenticate, then decrypt and load the primary operating system. This works pretty well, except it interferes with some management tasks like restoring backups and remote updates. Thus all the encryption companies have a feature that allows you to turn off authentication for a single boot. When you need to install an update and reboot, the user logs the system in, updates are pushed down and installed, the system reboots without the user logging in, and the bypass flag is cleared for the next boot. Otherwise the user would have to sit in front of their machine and enter their password on every reboot cycle. Sure, that would be more secure, but much less manageable- and the risk of data leaking at just the right moment is pretty small.

A few vendors, notably Credant, don’t encrypt the entire drive, which sidesteps this problem, but I don’t consider this issue significant enough to discount whole disk encryption solutions like PGP, Check Point/Pointsec, Utimaco, etc. This isn’t a back door or a poorly thought out design feature- it’s a reasonable trade-off of risk to solve a well-known management problem. PGP kind of pisses me off sometimes, but I have to support them on this one. Here’s PGP’s documentation.

In short, yes- it’s a security risk, but it’s a manageable risk and not significant enough to warrant the hype, especially since you can disable (or simply not use) the feature in high-security situations.
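
For readers who haven’t run into this feature, here is a rough conceptual sketch of the one-time bypass pattern described above. This is not PGP’s implementation- the names, the storage model, and the flow are all invented for illustration.

```python
# Conceptual sketch of a one-time pre-boot authentication bypass.
# NOT PGP's implementation; all names and the storage model are invented.

config = {"bypass_next_boot": False}   # in reality, stored where pre-boot code can read it

def admin_enable_bypass():
    """Called by the patch/management tooling right before a maintenance reboot."""
    config["bypass_next_boot"] = True

def preboot_unlock(prompt_for_passphrase, unlock_volume):
    """Runs in the thin pre-boot environment before the primary OS loads."""
    if config["bypass_next_boot"]:
        config["bypass_next_boot"] = False     # single use: clear the flag immediately
        return unlock_volume(passphrase=None)  # unlock with a protected key, no prompt
    return unlock_volume(passphrase=prompt_for_passphrase())

# Simulated usage: one unattended reboot for patching, then back to normal prompts.
def fake_unlock(passphrase):
    return "unlocked (no prompt)" if passphrase is None else "unlocked (prompted)"

admin_enable_bypass()
print(preboot_unlock(lambda: "hunter2", fake_unlock))  # maintenance boot: no prompt
print(preboot_unlock(lambda: "hunter2", fake_unlock))  # next boot: prompt is back
```

The security trade-off is exactly as described in the post: between setting the flag and the next boot the machine is unprotected if stolen, which is why the window should be kept as short as possible.
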


Data Security Lifecycle- Technologies, Part 1

A week or so ago I published the Data Security Lifecycle, and so far the feedback has been very positive. The lifecycle is a high-level list of controls, but now we need to dig into the technologies that support those controls. The Data Security Lifecycle is designed to be useful today while still being visionary- it’s important to keep in mind that not all of these technologies are at the same maturity level. Most data security technologies are only in an adolescent stage of development- they provide real value, but are not necessarily mature. Some technologies, especially Enterprise DRM, aren’t yet suitable for widespread deployment and work best for smaller teams or business units. Others, like logical controls, are barely productized, if at all. As we go through these tools, I will try to clearly address the maturity level and deployment suitability of each one. Over time I’ll be digging into each of these technologies, as I’ve started doing with DLP, and will be able to discuss some of the more detailed implementation and maturity issues. In today’s post we’ll focus on the first two stages- Create and Store. Since we’ll be delving into each technology in more detail down the road, these posts will just give a high-level overview. There are also technologies used for data security, such as data-in-motion encryption and enterprise key management, that fall outside the lifecycle and will be covered separately.

Create

  • Classify: Eventually, in this stage the content-aware combination of DLP/CMF/CMP and Enterprise DRM will classify content at the time of creation and apply rights based on enterprise policies. Today, classification at the time of creation is a manual process. Structured data is somewhat classified based on where it’s stored in the database, but since this isn’t a content-aware decision and still relies on manual controls, there’s no real technology to implement. In both cases I expect technology advancements over the next 1-3 years to provide classification-on-creation capabilities.
  • Assign Rights: Currently a manual process, but implemented through two technologies:
    Label Security: A feature of some database management systems that adds a label to a database row, column, or table, classifying the content in that object. The DBMS can then implement access and logical controls based on the data label.
    Enterprise Digital Rights Management (EDRM): Content is encrypted, and access and use rights are controlled by metadata embedded with the content. The EDRM market has been somewhat self-limiting due to the complexity of enterprise integration and of assigning and managing rights. Eventually it will combine with CMF/CMP (notice I dropped DLP on purpose here) for content- and policy-based rights assignment.

Store

  • Access Controls: One of the most fundamental data security technologies, built into every file and management system, and one of the most poorly used.
    DBMS Access Controls: Access controls within a database management system, including proper use of views vs. direct table access. Use of these controls is often complicated by connection pooling, which tends to anonymize the user between the application and the database.
    Administrator Separation of Duties: Newer technologies implemented in databases to limit database administrator access. On Oracle this is called Database Vault, and on IBM DB2 I believe you use the Security Administrator role and Label-Based Access Controls.
    File System Access Controls: Normal file access controls, applied at the file or repository level. Eventually I expect to see tools to help manage these more centrally.
    Document Management System Access Controls: For content in a document management system (e.g., Documentum, SharePoint), the access controls built into the management system.
  • Encryption: The most overhyped technology for protecting data, but still the most important. More often than not encryption is used incorrectly and doesn’t provide the expected level of security, but that’s fodder for a future discussion.
    Field-Level Encryption: Encrypting fields within a database, normally at the column level. Can take 2-3 years to implement in large, legacy systems. A feature of all DBMSs, but many people look to third-party solutions that are more manageable. Long term this will just be a feature of the DBMS, with third-party management tools, but that’s still a few years out.
    Application-Level Encryption: Encrypting a piece of data at the application when it’s collected. Better security than encrypting at the database level, but it needs to be coded into the application. Can create complexities when the encrypted data is needed outside the application, e.g., for batch jobs or other back-end processing. Tools do exist to encrypt at the application layer using keys available to other applications and systems, but that market is still very young.
    File/Media Encryption: In the context of databases, this is encryption of the database files or the media they’re stored on. It only protects data from physical theft and certain kinds of system-level intrusions. Can be very effective when used in combination with Database Activity Monitoring.
    Media Encryption: Encryption of an entire hard drive, CD/DVD, USB stick, tape, or other media. Encrypting the entire hard drive is particularly useful for protecting laptops.
    File Encryption: Encryption of individual files and/or directories on a system, using software on that system, typically managed on a system-by-system basis by users.
    Distributed Encryption: Distributed encryption consists of two parts- a central policy server for key management and access control lists, and distributed agents on the systems that hold the data. When a user attempts to access a file, the agent on the local system checks with the server and retrieves the keys if access is approved (in reverse, it can encrypt data using individual or group keys assigned by the server). Distributed encryption provides file-level granularity while maintaining central control and easing management difficulties.
  • Rights Management: The enforcement of rights assigned during the Create stage.
    Row-Level Security: Non-label-based row-level access controls. Capable of deeper logic than label security.
    Label Security: Described under Create.
    Enterprise DRM: Described under Create.
  • Content Discovery: Content-aware scanning of files, databases, and other storage repositories to identify sensitive content and take protective actions based on enterprise policies.
    Database Content Discovery: Use of a database-specific tool to scan
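
To make the distributed encryption model above a bit more concrete, here is a minimal sketch- my own construction, not tied to any product- of a central policy server handing keys to local agents only when an access control list allows it. The class names, the ACL format, and the use of Python’s cryptography library are all assumptions for illustration.

```python
# Hypothetical sketch of the "distributed encryption" model: a central policy
# server holds keys and an ACL; local agents fetch keys only for authorized users.
from cryptography.fernet import Fernet  # pip install cryptography

class PolicyServer:
    def __init__(self):
        self._keys = {"finance-docs": Fernet.generate_key()}
        self._acl = {"finance-docs": {"alice", "bob"}}   # who may use each key

    def get_key(self, user, key_id):
        if user not in self._acl.get(key_id, set()):
            raise PermissionError(f"{user} is not authorized for {key_id}")
        return self._keys[key_id]

class Agent:
    """Runs on the endpoint; encrypts/decrypts files with keys fetched centrally."""
    def __init__(self, user, server):
        self.user, self.server = user, server

    def encrypt(self, key_id, plaintext: bytes) -> bytes:
        return Fernet(self.server.get_key(self.user, key_id)).encrypt(plaintext)

    def decrypt(self, key_id, ciphertext: bytes) -> bytes:
        return Fernet(self.server.get_key(self.user, key_id)).decrypt(ciphertext)

server = PolicyServer()
blob = Agent("alice", server).encrypt("finance-docs", b"Q3 forecast")
print(Agent("bob", server).decrypt("finance-docs", blob))   # authorized: plaintext
try:
    Agent("mallory", server).decrypt("finance-docs", blob)   # unauthorized
except PermissionError as err:
    print(err)
```

The point of the pattern is in the last two calls: the file-level granularity lives on the endpoint, but the authorization decision stays central.
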


Network Security Podcast, Episode 79: SCADA!

Martin and I finally recorded our first podcast in the wee hours of the afternoon, improving both our coherence and my ability to have a beer. There were a few technical difficulties, so the quality is a little off, and we’re still figuring out how to record with high quality across state lines. This week we focused on the FUD and reality around the recent video released by DHS showing a power generator frying due to a remote cyberattack. Martin also added a new regular segment, PCI is a TLA, in honor of his new job as a PCI auditor.

Show Notes:
  • Microsoft’s Stealth Update (Brian Krebs’ Security Fix)
  • Rich: Lessons on Software Updates: Microsoft and Apple Both Muck it Up
  • Interview with a convicted hacker: Robert Moore tells how he broke into routers and stole VoIP service.
  • FUD and SCADA, or Oh FUD
  • DevCentral: Sometimes, even the experts are wrong. (M: I think he means me.)
  • Rich: Yes, Hackers can take down the power grid. Maybe.
  • Schneier: Staged attack causes generator to self-destruct
  • Gap loses 800,000 records

PCI is a TLA:
  • PCI Security Standards Council
  • PCI DSS Compliance Demystified
  • PCI Standards Group on Yahoo
  • TrustWave

Tonight’s Music: On a Podcast by Cruisebox
Network Security Podcast, Episode 79
Time: 46:30


Understanding and Selecting a DLP Solution: Part 4, Data-At-Rest Technical Architecture

Welcome to Part 4 of our series on Data Loss Prevention/Content Monitoring and Filtering solutions. If you’re new to the series, you should check out Part 1, Part 2, and Part 3 first. I apologize for getting distracted with some other priorities (especially the Data Security Lifecycle); I just realized it’s been about two weeks since my last DLP post in this series. Time to stick the nose to the grindstone (I grew up in a tough suburb) and crank the rest of this guide out.

Last time we covered the technical architectures for detecting policy violations for data moving across the network in communications traffic, including email, instant messaging, web traffic, and so on. Today we’re going to dig into an often overlooked, but just as valuable, feature of most major DLP products- content discovery.

As I’ve previously discussed, the most important component of a DLP/CMF solution is its content awareness. Once you have a good content analysis engine, the potential applications increase dramatically. While catching leaks on the fly is fairly powerful, it’s only one small part of the problem. Many customers are finding that it’s just as valuable, if not more valuable, to figure out where all that data is stored in the first place. Sure, enterprise search tools might be able to help with this, but they really aren’t tuned well for this specific problem. Enterprise data classification tools can also help, but based on discussions with a number of clients, they don’t tend to work well for finding specific policy violations. Thus we see many clients opting to use the content discovery features of their DLP product.

Author’s Note: It’s the addition of robust content discovery that I consider the dividing line between a Data Loss Prevention solution and a Content Monitoring and Filtering solution. DLP is more network focused, while CMF begins the expansion to robust content protection. I use the name DLP extensively since it’s the industry standard, but over time we’ll see this migrate to CMF, and eventually to Content Monitoring and Protection, as I discussed in this post.

The biggest advantage of content discovery in a DLP/CMF tool is that it allows you to take a single policy and apply it across data no matter where it’s stored, how it’s shared, or how it’s used. For example, you can define a policy that requires credit card numbers to only be emailed when encrypted, never be shared via HTTP or HTTPS, only be stored on approved servers, and only be stored on workstations/laptops by employees on the accounting team. All of this is done in a single policy on the DLP/CMF management server.

We can break discovery out into three major modes:

  • Endpoint Discovery: scanning workstations and laptops for content.
  • Storage Discovery: scanning mass storage, including file servers, SAN, and NAS.
  • Server Discovery: application-specific scanning of stored data in email servers, document management systems, and databases (not currently a feature of most DLP products, but beginning to appear in some Database Activity Monitoring products).

These modes perform their analysis using three technologies:

  • Remote Scanning: a connection is made to the server or device using a file sharing or application protocol, and scanning is performed remotely. This is essentially mounting a remote drive and scanning it from a scanning server that takes policies from, and sends results to, the central policy server. For some vendors this is an appliance, for others it’s a server, and for smaller deployments it’s integrated into the central management server.
  • Agent-Based Scanning: an agent is installed on the system/server to be scanned, and scanning is performed locally. Agents are platform specific and use local CPU cycles, but can potentially perform significantly faster than remote scanning, especially for large repositories. For endpoints, this should be a feature of the same agent used for enforcing data-in-use controls.
  • Temporal-Agent Scanning: rather than deploying a full-time agent, a memory-resident agent is installed, performs a scan, then exits without leaving anything running or stored on the local system. This offers the performance of agent-based scanning in situations where you don’t want a full-time agent running.

Any of these technologies can work for any of the modes, and enterprises will typically deploy a mix depending on policy and infrastructure requirements. We currently see some technology limitations of each approach that affect deployment:

  • Remote scanning can significantly increase network traffic and has performance limitations based on network bandwidth and target and scanner network performance. Some solutions can only scan gigabytes per day per server (sometimes hundreds of GB, but below TB/day) based on these practical limitations, which may not be sufficient for very large storage.
  • Agents, temporal or permanent, are limited by the processing power and memory on the target system, which often translates into restrictions on the number of policies that can be enforced and the types of content analysis that can be used. For example, most endpoint agents are not capable of enforcing large data sets for partial document matching or database fingerprinting. This is especially true of endpoint agents, which are the most limited.
  • Agents don’t support all platforms.

Once a policy violation is discovered, the discovery solution can take a variety of actions:

  • Alert/Report: create an incident in the central management server, just like a network violation.
  • Warn: notify the user via email that they may be in violation of policy.
  • Quarantine/Notify: move the file to the central management server, leaving a .txt file with instructions on how to request recovery of the file.
  • Quarantine/Encrypt: encrypt the file in place, usually leaving a plain text file on how to request decryption.
  • Quarantine/Access Control: change the access controls to restrict access to the file.
  • Remove/Delete: either transfer the file to the central server without notification, or just delete it.

The combination of different deployment architectures, discovery techniques, and enforcement options provides a powerful set of tools for protecting data-at-rest and supporting compliance initiatives. For example, we’re starting to see increasing deployments of CMF to support PCI compliance- more for the ability to ensure (and
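
As a toy illustration of the remote scanning mode, here is a sketch that walks a mounted file share and flags files containing credit-card-shaped numbers. The mount path, the regex, and the "report to the management server" stub are all my own simplifications; real products use far more sophisticated content analysis than a regular expression.

```python
# Toy remote-scanning sketch: walk a mounted share and flag files that appear to
# contain credit card numbers. Real DLP discovery uses much richer content analysis
# (fingerprinting, partial document matching) and reports to a management server.
import os
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # crude PAN-shaped match
SHARE_ROOT = "/mnt/finance-share"                       # hypothetical mounted share

def report_violation(path, match):
    # Stand-in for sending an incident to the central policy/management server.
    print(f"VIOLATION {path}: ...{match[-8:]}")

def scan_share(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue   # unreadable file: skip it; a real scanner would log this
            for match in CARD_PATTERN.findall(text):
                report_violation(path, match)

if __name__ == "__main__":
    scan_share(SHARE_ROOT)
```

Even in this toy form you can see where the bandwidth limitation mentioned above comes from: every byte of every file has to cross the network to the scanner.
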


Home Security Tip: Nuke It From Orbit

I say we take off and nuke the entire site from orbit. It’s the only way to be sure. -Ripley (Sigourney Weaver) in Aliens

While working at home has some definite advantages, like the Executive Washroom, Executive Kitchen, and Executive HDTV, all this working at home alone can get a little isolating. I realized the other month that I spend more hours every day with my cats than with any other human being, including my wife. Thus I tend to work out of the local coffee shop a day or two a week. Nice place, free WiFi (that I help secure on occasion), and a friendly staff.

Today I was talking with one of the employees about her home computer. A while ago I referred her to AVG Free antivirus and had her turn on her Windows firewall. AVG quickly found all sorts of nasties- including, as she put it, “47 things in that quarantine thing called Trojans. What’s that?”

Uh oh. That’s bad. I warned her that her system, even with AV on it, was probably so compromised that it would be nearly impossible to recover. She asked me how much it would cost to go over and fix it, and I didn’t have the heart to tell her. Truth is, as most of you professional IT types know, it might be impossible to clean out all the traces of malware from a system compromised like that. I’m damn good at this kind of stuff, yet if it were my computer I’d just nuke it from orbit- wipe the system and start from scratch. While I have pretty good backups, this can be a bit of a problem for friends and family. Here’s how I go about it on a home system for friends and family:

1. Copy off all important files to an external drive- USB or hard drive, depending on how much they have.
2. Wipe the system and reinstall Windows from behind a firewall (a home wireless router is usually good enough; a cable or DSL modem isn’t).
3. Install all the Windows updates. Read a book or two, especially if you need to install Service Pack 2 on XP.
4. Install Office (hey, maybe try OpenOffice) and any other applications.
5. Double check that you have SP2, IE7, and the latest Firefox installed.
6. Install any free security software you want, and enable the Microsoft Malicious Software Removal Tool and the Windows firewall. See Security Mike for more, even though he hasn’t shown me his stuff yet.
7. Set up their email and such.
8. Take the drive with all their data on it and scan it from another computer- say, a Mac with ClamAV installed (a rough sketch of this step follows the post). I usually scan with two different AV engines, and even then I might warn them not to recover those files.
9. Restore their files.

This isn’t perfect, but I haven’t had anyone get re-infected yet using this process. Some of the really nasty stuff will hide in data files, but if you hold onto the files for a few weeks, at least one AV engine will usually catch it. It’s a risk analysis; if they don’t need the files, I recommend they trash them. If they really need the stuff, we can restore it as carefully as possible and keep an eye on things. If it’s a REALLY bad infection I’ll take the files on my Mac, convert them to plain text or a different file format, then restore them. You do the best you can, and can always nuke it again if needed.

In her case, I also recommended she change any bank account passwords and her credit card numbers. It’s the only way to be sure…
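
For the scanning step above, here is roughly what I mean by checking the backup drive from another machine. This is a hedged sketch assuming ClamAV is installed; the mount point is made up, and you would run a second AV engine separately.

```python
# Rough sketch of the "scan the backup drive from another computer" step.
# Assumes ClamAV is installed and the backup drive is mounted at the path below.
import subprocess

BACKUP_DRIVE = "/Volumes/FriendBackup"   # hypothetical mount point on the Mac

# -r: recurse into directories; --infected: only print files that match signatures
result = subprocess.run(
    ["clamscan", "-r", "--infected", BACKUP_DRIVE],
    capture_output=True,
    text=True,
)
print(result.stdout)

# clamscan exits 0 if clean, 1 if infected files were found, 2 on errors
if result.returncode == 1:
    print("Infected files found- think hard before restoring these.")
```
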


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.