Understanding and Selecting a DLP Solution: Part 4, Data-At-Rest Technical Architecture

Welcome to part 4 of our series on Data Loss Prevention/Content Monitoring and Filtering solutions. If you’re new to the series, you should check out Part 1, Part 2, and Part 3 first. I apologize for getting distracted with some other priorities (especially the Data Security Lifecycle); I just realized it’s been about two weeks since my last DLP post in this series. Time to stick the nose to the grindstone (I grew up in a tough suburb) and crank the rest of this guide out.

Last time we covered the technical architectures for detecting policy violations for data moving across the network in communications traffic, including email, instant messaging, web traffic, and so on. Today we’re going to dig into an often overlooked, but just as valuable, feature of most major DLP products: content discovery.

As I’ve previously discussed, the most important component of a DLP/CMF solution is its content awareness. Once you have a good content analysis engine, the potential applications increase dramatically. While catching leaks on the fly is fairly powerful, it’s only one small part of the problem. Many customers are finding that it’s just as valuable, if not more valuable, to figure out where all that data is stored in the first place. Sure, enterprise search tools might be able to help with this, but they really aren’t tuned well for this specific problem. Enterprise data classification tools can also help, but based on discussions with a number of clients they don’t tend to work well for finding specific policy violations. Thus we see many clients opting to use the content discovery features of their DLP product.

Author’s Note: It’s the addition of robust content discovery that I consider the dividing line between a Data Loss Prevention solution and a Content Monitoring and Filtering solution. DLP is more network focused, while CMF begins the expansion to robust content protection. I use the name DLP extensively since it’s the industry standard, but over time we’ll see this migrate to CMF, and eventually to Content Monitoring and Protection, as I discussed in this post.

The biggest advantage of content discovery in a DLP/CMF tool is that it allows you to take a single policy and apply it across data no matter where it’s stored, how it’s shared, or how it’s used. For example, you can define a policy that requires credit card numbers to only be emailed when encrypted, never be shared via HTTP or HTTPS, only be stored on approved servers, and only be stored on workstations/laptops by employees on the accounting team. All of this is done in a single policy on the DLP/CMF management server.

We can break discovery out into three major modes:

  • Endpoint Discovery: scanning workstations and laptops for content.
  • Storage Discovery: scanning mass storage, including file servers, SAN, and NAS.
  • Server Discovery: application-specific scanning of stored data in email servers, document management systems, and databases (not currently a feature of most DLP products, but beginning to appear in some Database Activity Monitoring products).

These modes perform their analysis using three technologies:

  • Remote Scanning: a connection is made to the server or device using a file sharing or application protocol, and scanning is performed remotely. This is essentially mounting a remote drive and scanning it from a scanning server that takes policies from, and sends results to, the central policy server. For some vendors this is an appliance, for others it’s a server, and for smaller deployments it’s integrated into the central management server.
  • Agent-Based Scanning: an agent is installed on the system/server to be scanned and scanning is performed locally. Agents are platform specific and use local CPU cycles, but can potentially perform significantly faster than remote scanning, especially for large repositories. For endpoints, this should be a feature of the same agent used for enforcing Data-In-Use controls.
  • Temporal-Agent Scanning: rather than deploying a full-time agent, a memory-resident agent is installed, performs a scan, then exits without leaving anything running or stored on the local system. This offers the performance of agent-based scanning in situations where you don’t want a full-time agent running.

Any of these technologies can work for any of the modes, and enterprises will typically deploy a mix depending on policy and infrastructure requirements. We currently see some technology limitations of each approach that affect deployment:

  • Remote scanning can significantly increase network traffic, and its performance is limited by network bandwidth and the network performance of both the target and the scanner. Some solutions can only scan gigabytes per day per server (sometimes hundreds of GB, but below a TB/day) based on these practical limitations, which may not be sufficient for very large storage.
  • Agents, temporal or permanent, are limited by processing power and memory on the target system, which often translates to restrictions on the number of policies that can be enforced and the types of content analysis that can be used. For example, most endpoint agents are not capable of enforcing large data sets of partial document matching or database fingerprinting. This is especially true of endpoint agents, which are more limited.
  • Agents don’t support all platforms.

Once a policy violation is discovered, the discovery solution can take a variety of actions:

  • Alert/Report: create an incident in the central management server, just like a network violation.
  • Warn: notify the user via email that they may be in violation of policy.
  • Quarantine/Notify: move the file to the central management server and leave a .txt file with instructions on how to request recovery of the file.
  • Quarantine/Encrypt: encrypt the file in place, usually leaving a plain text file on how to request decryption.
  • Quarantine/Access Control: change the access controls to restrict access to the file.
  • Remove/Delete: either transfer the file to the central server without notification, or just delete it.

The combination of different deployment architectures, discovery techniques, and enforcement options creates a powerful toolset for protecting data-at-rest and supporting compliance initiatives. For example, we’re starting to see increasing deployments of CMF to support PCI compliance- more for the ability to ensure (and
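None of this maps to any specific vendor’s interface, but to make the “single policy” idea concrete, here’s a minimal, purely hypothetical sketch: one credit card policy defined once, plus a toy remote-scanning pass over a mounted file share. The policy fields, paths, and matching logic are all invented for illustration; real products use their own policy formats and analysis techniques far beyond a simple regular expression.

```python
# Hypothetical sketch only -- not any vendor's actual policy format or API.
# Illustrates one content policy defined centrally and applied to discovery.
import os
import re

CREDIT_CARD_POLICY = {
    "name": "PCI - credit card numbers",
    "pattern": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": "allow_if_encrypted",        # network enforcement settings
    "http_https": "block",
    "approved_servers": {"fileserver-01", "fileserver-02"},
    "endpoint_groups": {"accounting"},
    "discovery_action": "quarantine_notify",
}

def luhn_ok(digits: str) -> bool:
    """Basic Luhn checksum to cut down on false positives."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_mounted_share(mount_point: str, policy: dict) -> None:
    """Toy 'remote scanning' pass over a mounted file share.

    A real product would stream results back to the central management
    server and apply the configured enforcement action; here we just report.
    """
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read(1_000_000)  # cap the read per file
            except OSError:
                continue
            for match in policy["pattern"].finditer(text):
                digits = re.sub(r"\D", "", match.group())
                if luhn_ok(digits):
                    print(f"{policy['name']}: possible violation in {path}")
                    break

if __name__ == "__main__":
    scan_mounted_share("/mnt/fileserver-01", CREDIT_CARD_POLICY)
```

A real deployment would obviously layer on the heavier analysis techniques discussed earlier (partial document matching, database fingerprinting) and report incidents to the central management server rather than printing them, but the point is the same: the policy lives in one place, and every discovery mode enforces it.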


Home Security Tip: Nuke It From Orbit

I say we take off and nuke the entire site from orbit. It’s the only way to be sure. -Ripley (Sigourney Weaver) in Aliens

While working at home has some definite advantages, like the Executive Washroom, Executive Kitchen, and Executive HDTV, all this working at home alone can get a little isolating. I realized the other month that I spend more hours every day with my cats than with any other human being, including my wife. Thus I tend to work out of the local coffee shop a day or two a week. Nice place, free WiFi (that I help secure on occasion), and a friendly staff.

Today I was talking with one of the employees about her home computer. A while ago I referred her to AVG Free antivirus and had her turn on her Windows firewall. AVG quickly found all sorts of nasties- including, as she put it, “47 things in that quarantine thing called Trojans. What’s that?”

Uh oh. That’s bad. I warned her that her system, even with AV on it, was probably so compromised that it would be nearly impossible to recover. She asked me how much it would cost to go over and fix it, and I didn’t have the heart to tell her. Truth is, as most of you professional IT types know, it might be impossible to clean out all the traces of malware from a system compromised like that. I’m damn good at this kind of stuff, yet if it were my computer I’d just nuke it from orbit- wipe the system and start from scratch. While I have pretty good backups, this can be a bit of a problem for friends and family. Here’s how I go about it on a home system for friends and family:

  • Copy off all important files to an external drive- USB or hard drive, depending on how much they have.
  • Wipe the system and reinstall Windows from behind a firewall (a home wireless router is usually good enough, a cable or DSL modem isn’t).
  • Install all the Windows updates. Read a book or two, especially if you need to install Service Pack 2 on XP.
  • Install Office (hey, maybe try OpenOffice) and any other applications.
  • Double check that you have SP2, IE7, and the latest Firefox installed.
  • Install any free security software you want, and enable the Microsoft Malicious Software Removal Tool and Windows firewall. See Security Mike for more, even though he hasn’t shown me his stuff yet.
  • Set up their email and such.
  • Take the drive with all their data on it, and scan it from another computer. Say a Mac with ClamAV installed? I usually scan with two different AV engines, and even then I might warn them not to recover those files.
  • Restore their files.

This isn’t perfect, but I haven’t had anyone get re-infected yet using this process. Some of the really nasty stuff will hide in data files, but especially if you hold onto the files for a few weeks, at least one AV engine will usually catch it. It’s a risk analysis; if they don’t need the files I recommend they trash them. If they really need the stuff we can restore it as carefully as possible and keep an eye on things. If it’s a REALLY bad infection I’ll take the files on my Mac, convert them to plain text or a different file format, then restore them. You do the best you can, and can always nuke it again if needed.

In her case, I also recommended she change any bank account passwords and her credit card numbers. It’s the only way to be sure…
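If you want to script the “scan it from another computer” step, here’s a minimal sketch that just shells out to ClamAV’s clamscan over the backup drive. The mount path is only an example, and as noted above you’d still want to repeat the scan with a second AV engine before restoring anything.

```python
# Minimal sketch: run ClamAV's clamscan recursively over a backup drive and
# list anything it flags. Assumes clamscan is installed and on the PATH;
# the mount point below is just an example.
import subprocess
import sys

BACKUP_MOUNT = "/Volumes/FriendBackup"  # example path for a USB drive on a Mac

def scan_backup(mount_point: str) -> int:
    # -r: recurse into directories, -i: only print infected files
    result = subprocess.run(
        ["clamscan", "-r", "-i", mount_point],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # clamscan exit codes: 0 = clean, 1 = infections found, 2 = errors
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_backup(BACKUP_MOUNT))
```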


Movement In The DLP Market?

Rumor is that a major deal in the DLP market might drop soon. As in an acquisition. Being just a rumor, I’ll keep the names to myself for now, but it’s an interesting development. One that will probably stir the market and maybe get things moving, even if the acquisition itself fails.


Woops- Comments Should Really Be Open Now

A while back I opened up the comments so you didn’t have to register, but somewhere along the line that setting was reset. They should be open now, and I’ll keep them open until the spam or trolls force me to change things.


Lessons On Software Updates: Microsoft and Apple Both Muck It Up

I know this is going to sound intensely weird, or somewhat disturbing, but I’m fascinated by how we treat software as a product. It’s kind of a mashup between content like movies and music, which we sort of purchase but are really just licensing to use, and “hard” products like TVs, hammers, and decorative toilet paper dispensers. Most software companies just sell us a license to use their product, with all sorts of onerous (and potentially unenforceable) restrictions in what we politely refer to as “End User License Agreements”, or EULAs. We only call them that because “Non-Consensual Ass Fuck” doesn’t have as legitimate a ring to it.

But there’s a HUGE difference between software and media. Media is passive- we read it, watch it, and listen to it, but it doesn’t affect anything else it touches. A bad book doesn’t screw up your library, and a bad CD doesn’t ruin your CD player. Software, on the other hand, deeply affects our work and personal lives. We install software on systems running other software, and one bad error in one little program can ruin our entire system, corrupt data in other applications, or even damage hardware.

Because software is so different from other products, it exists, in essence, in a state of perpetual recall. A sizable portion of the technology industry is dedicated to pushing updates to our software. In some cases these updates change functionality, adding new features. In other cases these updates fix security or other product flaws. For a media file it would be like buying the original Star Wars on DVD, then later updating it with all the improvements Lucas made, like emasculating Han and having Greedo shoot first. For physical products it would be like plugging my DeWalt compound miter saw into the wall to add a variable speed feature, or to extend the length of the finger guard. This is an intensely new way of buying, selling, and owning products. One I’m not convinced we fully understand the implications of yet.

Let’s turn back to software, keeping in mind that many products today, from MP3 players to phones, now ship with updateable software. As I mentioned before, we tend to lump updates into two categories:

  • Functionality changes: adding or changing features.
  • Fixes: repairing security or functionality flaws.

Ideally these updates benefit the customer by improving the product, but in some cases the update goes in entirely the opposite direction. Vendors can even use updates to deliberately remove functionality you paid for. Take a look at the Pioneer Inno; its FM feature for listening to XM radio through your car stereo was completely removed during a software update (Pioneer forgot to get FCC approval). We thus have two situations we’ve never really encountered before in the world of buying and selling stuff:

  • Updates can change how a product you paid for works.
  • Updates can change how other products you paid for, on the same system, work.

This is a powerful change to the concepts of product ownership and customer relations, and it comes with certain responsibilities. Over the past few weeks we’ve seen two of the biggest technology names in the world totally muck it up: Microsoft and Apple.

One of the cardinal rules of software updates is that you never force an update. The change you’re pushing might change vital functionality, and, to be honest, it isn’t your right to change my system. That’s called cybercrime. It appears Microsoft messed up and pushed out a “stealth” update for the Windows Update feature of Windows XP. This update installed itself even if you told Windows not to install updates. Worse yet, it essentially ruined the Windows Repair function of the system. Press aside, Microsoft probably opened themselves up for some lawsuits.

Another rule (probably more of a best practice) is that you should separate security and functionality changes in updates. This is something Microsoft generally does well these days (except for Service Packs) and Apple does extremely poorly. Security and other flaw updates should be separate from functionality updates because while a user may not want to be hacked, they might not want to change how their product works to be safe. This would be like turning in your car for a recall on a defective airbag and having the speedometer changed from miles to kilometers as a “bonus”. Apple updated the iPhone with critical security updates, but those updates are bundled with serious functionality changes. Thus if I don’t want a little Starbucks logo to appear on my phone every time I walk past one, I have to leave myself vulnerable to attack. Nice one, Apple.

I really do think we’re redefining the concept of ownership, and the privacy advocate in me is worried things are swinging in the wrong direction. Device manufacturers are practically engaged in an all-out war with their own customers, and most of it is driven by the content protection requirements of the media industry. Here are a few recommendations when dealing with software updates:

  • All updates should be optional.
  • Don’t bundle security updates with functionality updates.
  • Don’t break unrelated applications.
  • If you’re an application, don’t change the underlying platform.
  • Clearly notify customers what features/functions will change with the update.

Or to be a little clearer- don’t force updates, don’t take away functions, tell people what you’re doing, and don’t break anything else.
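To make those recommendations a little more concrete, here’s a purely hypothetical sketch of what an update manifest that follows them might look like. No vendor actually ships this format, and the product names and fields are invented; it just encodes the rules: security and feature changes shipped separately, nothing forced, and every change disclosed.

```python
# Hypothetical update manifest -- not any real vendor's format. It encodes the
# recommendations above: security and feature updates are separate, nothing is
# installed without consent, and the changes are disclosed to the user.
from dataclasses import dataclass, field

@dataclass
class UpdateManifest:
    product: str
    version: str
    update_type: str              # "security" or "feature", never both
    requires_user_consent: bool   # no silent or forced installs
    changes: list = field(default_factory=list)  # disclosed up front

security_fix = UpdateManifest(
    product="ExamplePhone OS",
    version="1.1.1-security",
    update_type="security",
    requires_user_consent=True,
    changes=["Fixes a remote code execution flaw in the browser"],
)

feature_update = UpdateManifest(
    product="ExamplePhone OS",
    version="1.1.1-features",
    update_type="feature",
    requires_user_consent=True,
    changes=["Adds a music-store locator to the home screen"],
)
```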


Yes, Hackers Can Take Down The Power Grid. Maybe.

I didn’t plan on writing about the DHS blowing up a power generator on CNN, but I’m in my hotel room in Vegas waiting for a conference call and it’s all over the darn TV. Martin and Amrit also talked about it, and I hate to be late to a party. That little video has started an uproar. Based on the press coverage you’ve got raving paranoids on one side, and those in absolute denial on the other. We’re already seeing accusations that it was all just staged to get some funding.

I’ve written about SCADA (the systems used to control power grids and other real-world infrastructure like manufacturing systems) for a while now. I’ve written about it here on the blog, and authored two research notes with my past employer that didn’t make me too popular in certain circles. I’ve talked with a ton of people on these issues, researched the standards and technologies, and my conclusion is that some of our networks are definitely vulnerable. The problem isn’t so bad we should panic, but we definitely need to increase the resources used to defend the power grid and other critical infrastructure.

SCADA stands for Supervisory Control And Data Acquisition. These are the systems used to supervise physical things, like power switches or those fascinating mechanical doohickies you always see on the Discovery Channel making other doohickies (or beer bottles). They’ve been around for a very long time and run on technologies that have nothing to do with the Internet. At least they used to.

Over the last decade or so, especially the past five years, we’ve seen some changes in these process control networks. The first shift was starting to use commodity hardware and software, the same technology you use at work and home, instead of the proprietary SCADA stuff. Some of these things were O L D old, inefficient, and took special skill to maintain. It’s a lot more efficient for a vendor to just build on the technology we all use every day: running special software on regular hardware and operating systems. Sounds great, except as anyone reading this blog knows there are plenty of vulnerabilities in all that regular hardware and software. Sure, there were probably vulnerabilities in SCADA stuff (we know for a fact there were), but it’s not like every pimply faced teenage hacker in the world knew about them. A lot of new SCADA controllers and servers run on Microsoft Windows. Nothing against Microsoft, but Windows isn’t exactly known as a vulnerability-free platform. Worse yet, some of these systems are so specialized that you’re not allowed to patch them- the vendor has to handle any software updates themselves, and they’re not always the most timely of folks. Thus we are now running our power plants and beer bottling facilities on the same software all the little script kiddies can slice through, and we can’t even patch the darn things. I can probably live without power, but definitely not the beer. I brew at home, but that takes weeks to months before you can drink it, and our stash definitely won’t last that long. Especially without any TV.

Back to SCADA. Most of these networks were historically isolated- they were around long before the Internet and didn’t connect to it. At least before trend number two, called “convergence”. As utilities and manufacturing moved onto commodity hardware and software, they also started using more and more IT to run the business side of things. And the engineers running the electric lifeblood of our nation want to check email just as often as the rest of us. And they have a computer sitting in front of them all day. Is anyone surprised they started combining the business side of the network with the process control side? Aside from keeping engineers happy with chain letters and bad jokes, the power companies could start pulling billing and performance information right from the process control side to the business side. They merged the networks. Not everyone, but far more companies than you probably think.

I know what you’re all thinking right now, because this is Securosis, and we’re all somewhat paranoid and cynical. We’re now running everything on standard platforms, on standard networks, with bored engineers surfing porn and reading junk email on the overnight shift. Yeah, that’s what I thought, and it’s why I wrote the research. This isn’t fantasy; we have a number of real-world cases where this broke real-world things. During the Slammer virus a safety system at a nuclear power plant went down. Trains in Sydney stopped running due to the Sasser virus. Blaster was a contributing factor to the big Northeast power outage a few years ago because it bogged down the systems the engineers used to communicate with each other and monitor systems (rumor has it). I once had a private meeting in a foreign country where it was admitted that hackers had gained access to the train control system on multiple occasions and could control the trains.

Thus our infrastructure is vulnerable in three ways:

  • A worm, virus, or other flaw saturating network traffic and breaking the communications between the SCADA systems.
  • A worm, virus, or other attack that takes down SCADA systems by crashing or exploiting common, non-SCADA, parts of the system.
  • Direct attack on the SCADA systems, using the Internet as a vector.

Some of these networks are now so messed up that you can’t even run a vulnerability scan on them without crashing things. Bad stuff, but all hope isn’t lost. Not everyone connects their systems together like this. Some organizations use air gaps (totally separate, isolated networks), virtual air gaps (connected, but an isolated one-way connection), or air-locks (a term I created to describe two separate networks with a very controlled, secure system in the middle to exchange information both ways, not network traffic). NERC, the industry body for the power networks, created a pretty good standard (CIP, Critical Infrastructure Protection) for securing these
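The air-lock idea is the one that trips people up, so here’s a rough conceptual sketch of what that controlled system in the middle might do: pull a small whitelist of files dropped by the process-control side and republish them to the business side, without ever routing network traffic between the two. This is an illustration of the pattern only, not how any particular utility implements it; the paths, file types, and size limit are all invented.

```python
# Conceptual sketch of the "air-lock" pattern described above: an intermediary
# host moves whitelisted files between two otherwise separate networks. No
# traffic is routed between them. Paths and formats are invented examples.
import shutil
import time
from pathlib import Path

INBOX = Path("/airlock/from_scada")     # written to by the control-side network
OUTBOX = Path("/airlock/to_business")   # read by the business-side network
ALLOWED_SUFFIXES = {".csv", ".txt"}     # e.g., billing and performance exports
MAX_SIZE = 10 * 1024 * 1024             # reject anything suspiciously large

def transfer_once() -> None:
    for item in INBOX.iterdir():
        if not item.is_file():
            continue
        if item.suffix.lower() not in ALLOWED_SUFFIXES:
            item.unlink()               # drop anything outside the whitelist
            continue
        if item.stat().st_size > MAX_SIZE:
            item.unlink()
            continue
        shutil.move(str(item), str(OUTBOX / item.name))

if __name__ == "__main__":
    INBOX.mkdir(parents=True, exist_ok=True)
    OUTBOX.mkdir(parents=True, exist_ok=True)
    while True:                         # poll on a schedule instead of bridging
        transfer_once()
        time.sleep(60)
```

A real air-lock would also inspect and validate file contents, but the design point is the same: information crosses, connectivity doesn’t.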


The Internet Isn’t Still Running Because Bad Guys Don’t Want To Burn Their Houses Down

Richard Bejtlich, commenting on a Marcus Ranum article, said:

“Continuing to function” is an interesting concept. The reason the “Internet” hasn’t been destroyed by terrorists, organized crime, or others is that doing so would cut off a major communication and funding resource. Criminals and other adversaries have a distinct interest in keeping computing infrastructure working just well enough to exploit it.

I have to disagree here. While there are a lot of smart bad guys just out for a little profit, there are plenty of malicious psychos looking to cause damage. When I did physical security and worked as a paramedic there was a distinct difference between profit-driven crime and ego-driven crime, even in the same criminal act. Ego crimes, ranging from vandalism to spousal abuse, originate in flaws of character where logic and self-preservation don’t necessarily play a role. Or sometimes they’re just fueled by testostahol, the powerful substance created when alcohol and testosterone mix in a juvenile male’s bloodstream. There are plenty of people who would bring the Internet down either to show they could, or to damage society out of some twisted internal motivation. The root DNS servers are constantly under attack, and not just because someone thinks they can make a buck doing it.

Marcus said:

Will the future be more secure? It’ll be just as insecure as it possibly can, while still continuing to function. Just like it is today.

Not because the bad guys want it that way, but because once crime crosses the threshold where society can’t function at some arbitrary level of efficiency or safety, the populace and governments wake up and take action to preserve our quality of life. There really isn’t much motivation to invest in security that’s more than “good enough” to keep things running. We all have acceptable losses and only act when those are exceeded.


Metasploit Is Ready For Your iPhone Exploits

H D Moore got an iPhone. This is both good news and bad news for Apple. The bad news is that once some remote vulnerabilities appear (including client-side vulns) and get coded into exploits, the Metasploit Framework is ready for them with some iPhone-specific payloads. Let the iPhone pwnage begin.

The good news is that I think this will help keep the iPhone more secure. There will be clear motivation to keep this thing patched, and researchers and Apple’s own developers can more easily demonstrate the exploitability of any particular vulnerabilities.

And the really good news is you can update your iPhone. Easily. This is a first for the mobile phone market and a clear security advantage. Even if Apple makes mistakes (which they have and will), they can fix them far more easily than other mobile phone manufacturers.


Heading to Vegas for SANS

I get in early Wednesday morning and head home Friday. If you want to meet up, drop me a line at rmogull@securosis.com.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.