
Understanding and Selecting DSP: Core Components

Those of you familiar with DAM already know that over the last four years DAM solutions have been bundled with assessment and auditing capabilities. Over the last two years we have seen near universal inclusion of discovery and rights management capabilities. DAM is the centerpiece of a database security strategy, but as a technology it is just one of a growing number of important database security tools. We have already defined Database Security Platform, so now let’s spend a moment looking at the key components, how we got here, and where the technology and market are headed. We feel this will fully illustrate the need for the name change.

Database Security Platform Origins

The situation is a bit complicated, so we include a diagram that maps out the evolution. Database Activity Monitoring originated from leveraging core database auditing features, but quickly evolved to include supporting event collection capabilities:

  • Database Auditing using native audit capabilities.
  • Database Activity Monitoring using network sniffing to capture activity.
  • Database Activity Monitoring with server agents to capture activity.

So you either used native auditing, a network sniffer, or a local agent to track database activity. Native auditing had significant limitations – particularly performance – so we considered the DAM market distinct from native capabilities. Due to customer needs, most products combined network monitoring and agents into single products – perhaps with additional collection capabilities, such as memory scanning. The majority of deployments were to satisfy compliance or audit requirements, followed by security.

There was also a range of distinct database security tools, generally sold standalone:

  • Data Masking to generate test data from production data, and to protect sensitive information while retaining important data size and structural characteristics.
  • Database Assessment (sometimes called Database Vulnerability Assessment) to assess database configurations for security vulnerabilities and general configuration policy compliance.
  • User Rights Management to evaluate user and group entitlements, identify conflicts and policy violations, and otherwise help manage user rights.
  • File Activity Monitoring to monitor (and sometimes filter) non-database file activity.

Other technologies have started appearing as additional features in some DAM products:

  • Content Discovery and Filtering to identify sensitive data within databases and even filter query results.
  • Database Firewalls, which are essentially DAM products placed inline and set to filter attack traffic, not merely monitor activity.

The following graph shows where we are today. As the diagram shows, many of these products and features have converged onto single platforms. There are now products on the market which contain all these features, plus additional capabilities. Clearly the term “Database Activity Monitoring” only covers a subset of what these tools offer, so we needed a new name to better reflect their capabilities. As we looked deeper we realized how unusual standalone DAM products were (and still are). It gradually became clear that we were watching the creation of a platform, rather than the development of a single-purpose product. We believe the majority of database security capabilities will be delivered either as a feature of a database management system, or in these security products.
We have decided to call them Database Security Platforms, as that best reflects the current state of the market and how we see it evolving. Some of these products include non-database features designed for data center security – particularly File Activity Monitoring and combined DAM/Web Application Firewalls. We wouldn’t be surprised to see this evolve into a more generic data center security play, but it’s far too early to see that as a market of its own.

Market and Product Evolution

We already see products differentiating based on user requirements. Even when feature parity is almost complete between products, we sometimes see vendors shifting them between different market sectors. We see three primary use cases, and we expect products to differentiate along these lines over time:

  • Application and Database Security: These products focus more on integrating with Web Application Firewalls and other application security tools. They place a higher priority on vulnerability and exploit detection and blocking, and sell more directly to security, application, and database teams.
  • Data and Data Center Security: These products take a more data-centric view of security. Their capabilities will expand more into File Activity Monitoring, and they will focus more on detecting and blocking security incidents. They sell to security, database, and data center teams.
  • Audit and Compliance: Products that focus more on meeting audit requirements – and so emphasize monitoring capabilities, user rights management, and data masking.

While there is considerable feature overlap today, we expect differentiation to increase as vendors pursue these different market segments and buying centers. Even today we see some products evolving primarily in one of these directions, which is often reflected in their sales teams and strategies.

This should give you a good idea of how we got here from the humble days of DAM, and why this is more than just a rebranding exercise. We don’t know of any DAM-only tools left on the market, so that name clearly no longer fits. As a user and/or buyer you will also want to know which combination of features to look for, and how they indicate the future direction of a product. Without revisiting the lessons learned from other security platforms, suffice it to say that you will want a sense of which paths the vendor is heading down before locking yourself into a product that might not meet your needs in 3-5 years.


Webcast Wednesday 22nd: Tokenization Scope Reduction

Just a quick announcement that this Wednesday I will be doing a webcast on how to reduce PCI-DSS scope and audit costs with tokenization. This will cover the meaty part of our Tokenization Guidance paper from last year. In the past I have talked about issues with the PCI Council’s Tokenization supplement; now I will dig into how tokenization affects credit card processing systems, and how supplementary systems can fall out of scope. The webcast will start at 11am PST and run for an hour. You can sign up at the sponsor’s web site.


Malware Analysis Quant: Documenting Metrics (and survey is still going)

Just a little President’s Day update on the Malware Analysis Quant project. At the end of last month we packaged up all the process descriptions into a spiffy paper, which you can download and check out. We have been cranking away at the second phase of the research, and the first step of that is the survey. Here is a direct survey link, and we would love your input. Even if you don’t do in-depth malware analysis every day, your input is instructive, as we try to figure out how many folks actually do this work, and how many rely on their vendors to take care of it. Finally, we have also started to document the metrics that will comprise the cost model which is the heart of every Quant project. Here are links to the metrics posts, which we include both in the Heavy feed and on the Project Quant blog:

  • Metrics – Confirm Infection
  • Metrics – Build Testbed
  • Metrics – Static Analysis
  • Metrics – Dynamic Analysis
  • Metrics – The Malware Profile

One last note: as with all of our projects, our research methodology is dynamic. That means posting something on our blog is just the beginning. So if you read something you don’t agree with, let us know, and work with us to refine the research. Leave a comment on the blog, or if for some reason you can’t do that, drop us an email.


RSA Conference 2012 Guide: Email & Web Security

For a little bonus on a Sunday afternoon, let’s dig into the next section of the RSA Guide, Email and Web Security, which remains a pretty hot area. This shouldn’t be surprising, since these devices tend to be one of the only defenses against typical attacks like phishing and drive-by downloads. We’ve decided to no longer call this market ‘content security’; that was a terrible name. Email and Web Security speaks to both the threat models and the deployment architectures of what started as the ‘email security gateway’ market. These devices screen email and web traffic moving in and out of your company at the application layer. The goal is to prevent unwanted garbage like malware from coming into your network, as well as to detect unwanted activity like employees clogging up the network with HiDef downloads of ‘Game of Thrones’. These gateways have evolved to include all sorts of network and content analysis tools for a variety of traffic types (not just restricted to web traffic). Some of the vendors are starting to resemble UTM gateways, placing 50 features all on the same box, and letting the user decide what they want from the security feature buffet. Most vendors offer a hybrid model of SaaS and in-house appliances for flexible deployments while keeping costs down. This is a fully mature and saturated market, with the leading vendors on a very even footing. There are several quality products out there, each with a specific strength in its technology, deployment, or pricing model. There are quite a few areas of interest at the show for web gateway security:

VPN Security and the Cloud

Remember how VPN support was a major requirement for every email security appliance? Yeah, well, it’s back. And it’s new and cloudified! Most companies provide their workforce with secure VPN connections to work from home or on the road. And most companies find themselves supporting more remote users more often than ever, which we touched on in the Endpoint Security section. As demand grows, so too does the need for better, faster VPN services. Leveraging cloud services, these gateways route users through a cloud portal, where user identification and content screening occur, before passing user requests into your network. The advantages are scalable cloud bandwidth, better connectivity, and security screening before stuff hits your network.

More (poor man’s) DLP

Yes, these secure web offerings provide Data Loss Prevention ‘lite’. In most cases, it’s just the subset of DLP needed to detect data exfiltration. And regular expression checking for outbound documents and web requests is good enough to address the majority of content leakage problems, so this works well enough for most customers, which makes it one of the core features every vendor must have (we include a small sketch of this at the end of this post). It’s difficult for any one vendor to differentiate their offering by having DLP-lite, but they’ll have trouble competing in the marketplace without it. It’s an effective tool for select data security problems.

Global Threat Intelligence

Global threat intelligence involves a security vendor collecting attack data from all their customers, isolating new attacks that impact a handful, and automatically applying security responses to their other client installations. When implemented correctly, it’s effective at slowing down the propagation of threats across many sites.
The idea has been around for a couple of years, originating in the anti-spam business, but has begun to show genuine value for some firewall, web content, and DAST (dynamic application security testing) products. Alas, like many features, some offerings are little more than marketing ‘check the box’ functionality, while others actually collect data from all their clients and promptly distribute anonymized intelligence back to the rest of their customers to ensure they don’t get hammered. It’s difficult to discern one from the other, so you’ll need to dig into the product capabilities. Though it should be fun on the show floor to force an SE or other sales hack to try to explain exactly how the intelligence network works.

Anti-malware

Malware is the new ‘bad actor’. It’s the 2012 version of the Trojan Horse; something of a catch-all for viruses, botnets, targeted phishing attacks, keystroke loggers, and marketing spyware. It infects servers and endpoints by any and all avenues available. And just as the term malware covers a lot of different threats, vendor solutions are equally vague. Do they detect botnet command and control, do they provide your firewall with updated ‘global intelligence’, or do they detect phishing email? Whatever the term really means, you’re going to hear a lot about anti-malware and why you must stop it. Though we do see innovation in network-based malware detection, which we covered in the Network Security section.

New Anti-Spam. Same as the old Anti-Spam

We thought we were long past the anti-spam discussion – isn’t that problem solved already? Apparently not. Spam still exists, that’s for sure, but any given vendor’s effectiveness varies from 98% to 99.9% in any given week. Just ask them. Being firm believers in Mr. Market, clearly there is enough of an opportunity to displace incumbents, as we’ve seen a couple of new vendors emerge to provide new solutions, and established vendors blend their detection techniques to improve effectiveness. There is a lot of money spent specifically on spam protection, and it’s a visceral issue that remains high profile when it breaks, so it’s easy to get budget for. Couple that with some public breaches from targeted phishing attacks or malware infections through email (see above), and anti-spam takes on a new focus. Again. We don’t think this is going to alter anyone’s buying decisions, but we wanted to make sure you knew what the fuss was about, so you aren’t surprised when you feel like you’ve stepped into RSA 2005, with folks spouting about new anti-spam solutions.
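To make the ‘poor man’s DLP’ point above concrete, here is a minimal, generic sketch of the kind of regular expression checking these gateways apply to outbound content. The patterns and the Luhn checksum are illustrative assumptions for the example, not any specific vendor’s implementation:

```python
import re

# Illustrative patterns - real gateways ship much larger, tuned policy libraries.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_ok(digits):
    """Basic Luhn checksum to cut down on false positives for card numbers."""
    nums = [int(d) for d in digits][::-1]
    total = sum(nums[0::2]) + sum(sum(divmod(2 * d, 10)) for d in nums[1::2])
    return total % 10 == 0

def leaks(text):
    """Return (type, match) findings in an outbound message or document."""
    findings = [("ssn", m.group()) for m in SSN.finditer(text)]
    for m in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            findings.append(("credit_card", m.group()))
    return findings

print(leaks("Invoice for card 4111 1111 1111 1111, contact 123-45-6789"))
```

Simple as it is, this subset of DLP – pattern matching plus a sanity check on the matches – is roughly what most gateway vendors mean when they claim DLP-lite.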


RSA Conference Guide 2012: Endpoint Security

Ah, the endpoint. Do you remember the good old days when endpoint devices were laptops? That made things pretty simple, but alas, times have changed and the endpoint devices you are tasked to protect have changed as well. That means it’s not just PC-type devices you have to worry about – it’s all varieties of smartphones and, in some industries, other devices including point of sale terminals, kiosks, control systems, etc. Basically anything with an operating system can be hacked, so you need to worry about it. Good times.

BYOD everywhere

You’ll hear a lot about “consumerization” at RSAC 2012. Most of the vendors will focus on smartphones, as they are the clear and present danger. These devices aren’t going away, so everybody will be talking about mobile device management. But as in other early markets, there is plenty of talk but little reality to back it up. You should use the venue to figure out what you really need to worry about, and for this technology that’s really the deployment model. It comes down to a few questions:

  • Can you use the enterprise console from your smartphone vendor? Amazingly enough, the smartphone vendors have decent controls to manage their devices. And if you live in a homogeneous world this is a logical choice. But if you live in a heterogeneous world (or can’t kill all those BlackBerries in one fell swoop), a vendor console won’t cut it.
  • Does your IT management vendor have an offering? Some of the big stack IT security/management folks have figured out that MDM is kind of important, so they offer solutions that plug into the stuff you already use. Then you can tackle the best of breed vs. big stack discussion, but this is increasingly a reasonable alternative.
  • What about those other tools? If you struck out with the first two questions you should look at one of the start-up vendors who make their trade on heterogeneous environments. But don’t just look for MDM – focus on what else those folks are working on. Maybe it’s better malware checking. Perhaps it’s integration with network controls (to restrict devices to certain network segments). If you find a standalone product, it is likely to be acquired during your depreciation cycle, so be sure there is enough added value to warrant the tool standing alone for a while.

Another topic to grill vendors on is how they work with the “walled garden” of iOS (Apple mobile devices). Vendors have limited access into iOS, so look for innovation above and beyond what you can get with Apple’s console. Finally, check out our research on Bridging the Mobile Security Gap (Staring Down Network Anarchy, The Need for Context, and Operational Consistency), as that research deals with many of these consumerization & BYOD issues, especially around integrating with the network.

The Biggest AV Loser

Last year’s annual drops of the latest and greatest in endpoint protection suites were all about sucking less. And taking up less real estate and compute power on the endpoint devices. Given the compliance regimes many of you live under, getting rid of endpoint protection isn’t an option, so less suckage means less heartburn for you. At least you can look at the bright side, right? In terms of technology evolution there won’t be much spoken about at the RSA Conference. You’ll see vendors still worshipping the Cloud Messiah, as they try to leverage their libraries of a billion AV signatures in the cloud.
That isn’t very interesting, but check into how they leverage file ‘reputation’ to track which files look like malware, and your options to block them. The AV vendors actually have been hard at work bolstering this file analysis capability, so have them run you through their cloud architectures to learn more. It’s still early in terms of effectiveness, but the technology is promising. You will also see adjunct endpoint malware detection technologies positioned to address the shortcomings of current endpoint protection. You know, basically everything. The technology (such as Sourcefire’s FireAMP) is positioned much like the cloud file analysis technology discussed above, so the big vendors will say they do this too – but be wary of them selling futures. There are differences, though – particularly in terms of tracking proliferation and getting better visibility into what the malware is doing. You can learn a lot more about this malware analysis process by checking out our Quant research, which goes into gory detail on the process and provides some context for how the tools fit into it.


RSA Conference Guide 2012: Application Security

Building security in? Bolting it on? If you develop in-house applications, it’s likely both. Application security will be a key theme of the show. But the preponderance of application security tools will block, scan, mask, shield, ‘reperimeterize’, reconfigure, or reset connections from the outside. Bolt-on is the dominant application security model for the foreseeable future. The good news is that you may not be the one managing it, as there is a whole bunch of new cloud security services and technologies available. Security as a service, anyone? Here’s what we expect to see at this year’s RSA Conference.

SECaaS

Security as a Service, or ‘SECaaS’: basically using ‘the cloud’ to deliver security services. No, it’s not a new concept, but it is a new label to capture the new variations on this theme. What’s new is that some of the new services are not just SaaS, but delivered for PaaS or IaaS protection as well. And the technologies have progressed well beyond anti-spam and web-site scanning. During the show you will see a lot of ‘cloudwashing’ – where the vendor replaces ‘network’ with ‘cloud’ in their marketing collateral, and suddenly they are a cloud provider – which makes it tough to know who’s legit. Fortunately you will also see several vendors who genuinely redesigned products to be delivered as a service from the cloud and/or into cloud environments. Offerings include web application firewalls available from IaaS vendors, code scanning in the cloud, DNS redirectors for web app request and content scanning, and threat intelligence based signature generation – just to name a few. The new cloud service models offer greater simplicity as well as cost reduction, so we are betting these new services will be popular with customers. They’ll certainly be a hit on the show floor.

Securing Applications at Scale

Large enterprises and governments trying to secure thousands of off-the-shelf and homegrown applications live with this problem every day. Limited resources are the key issue – it’s a bit like weathering a poop storm with a paper hat: not enough protection, and the limited resources you have are not suited to the job. It’s hard to be sympathetic, as most of these organizations created their own headaches – remember when you thought it was a good idea to put a web interface on those legacy applications? Yeah, that’s what I’m talking about. Now you have billions of lines of code, designed to be buried deep within your private token ring, providing content to people outside your company. Part of the reason application security moves at a snail’s pace is the sheer scope of the problem. It’s not that companies don’t know their applications – especially web applications – are insecure; it’s that the time and money required to address all the problems are overwhelming. A continuing theme we are seeing is how to deal with application security at scale. It’s both an admission that we’re not fixing everything, and an examination of how to best utilize resources to secure applications. Risk analysis, identifying cross-domain threats, encapsulation, reperimeterization, and multi-dimensional prioritization of bug fixes are all strategies. There’s no single product at the show that embodies this, but we suggest it as a topic of discussion when you chat with folks. Many vendors will be talking about the problem and how their product fits within a specific strategic approach to addressing the issue.

Code Analysis? Meh. DAST? Yeah.
The merits of ‘building security in’ are widely touted but adoption remains sporadic. Awareness, the scale of the issue, and cultural impediments all keep tools that help build secure code a small portion of the overall application security market. Regardless, we expect to hear lots of talk about code analysis and white box testing. These products offer genuine value, and several major firms made significant investments in the technology last year. While the hype will be in favor of white box code analysis, the development community remains divided. No one is arguing the value of white box testing, but adoption is slower than we expected. Very large software development firms with lots of money implement a little of each secure code development technique in their arsenal, including white box as a core element, basically because they can. The rest of the market? Not so much. Small firms focus on one or two areas during the design, development, or testing phase. Maybe. And that usually means fuzzing and Dynamic Application Security Testing (DAST). Whether it’s developer culture, or mindset, or how security integrates with development tools, or just the way customers want to solve security issues – the preference is for semi-black-box web scanning products.

Big Data, Little App Security

You’re going to hear a lot about big data and big data security issues at the conference. Big Data definitely needs to be on the buzzword bingo card. And 99 out of 100 vendors who tell you they have a big data security solution are lying. The market is still determining what the realistic threats are and how to combat them. But we know application security will be a bolt-on affair for a long time, because:

  • Big data application development has huge support and is growing rapidly.
  • A vanishingly low percentage of developer resources are going into designing secure applications for big data.
  • SQL injection, command injection, and XSS are commonly found on most of the front-end platforms that support NoSQL development. Some of them did not even have legitimate access controls until recently!

Yes, jump into your time machine and set the clock for 10 years ago. Make no mistake – firms are pumping huge amounts of data into production non-relational databases without much more than firewalls and SSL protecting them. So if you have some architects playing around with these technologies (and you do), work on identifying some alternatives to secure them at the show.
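To illustrate why that last bullet about injection matters, here is a minimal, generic sketch – using an in-memory SQLite table with made-up names – of the string-concatenation mistake behind most injection findings on those front ends, and the parameterized alternative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: attacker-controlled input is concatenated into the statement,
# so the WHERE clause collapses and every row comes back.
vulnerable = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(vulnerable).fetchall())   # -> [('alice', 'admin')]

# Safer: the driver binds the value as data, not SQL, so nothing matches.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # -> []
```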


OS X 10.8 Gatekeeper in Depth

As you can tell from my TidBITS review of Gatekeeper, I think this is an important advancement in consumer security. There are a lot of in-depth technical aspects that didn’t fit in that article, so here’s an additional Q&A for those of you with a security background who care about these sorts of things. I’m skipping the content from the TidBITS article, so you might want to read that first.

Will Gatekeeper really make a difference?

I think so. Right now the majority of the small population of malware we see for Macs is downloaded trojans and tools like Mac Defender that download through the browser. While there are plenty of ways to circumvent Gatekeeper, most of them are the sorts of things that will raise even uneducated users’ hackles. Gatekeeper attacks the economics of widespread malware. It conveys herd immunity. If most users use it (and as the default, that’s extremely likely) it will hammer the profitability of phishing-based trojans. To attackers going after individual users, Gatekeeper is barely a speed bump. But in terms of the entire malware ecosystem, it’s much more effective – more like tire-slashing spikes.

How does Gatekeeper work?

Gatekeeper is an extension of the quarantine features first implemented in Mac OS X 10.5. When you download files using certain applications, a “quarantine bit” is set (more on that in a second). In OS X 10.5-10.7, when you open a file Launch Services looks for that attribute. If it’s set, it informs the user that the program was downloaded from the Internet and asks if they still want to run it. Users click through everything, so that doesn’t accomplish much. In 10.6 and 10.7 it also checks the file for any malware before running, using a short list that Apple now updates daily (as needed). If malware is detected it won’t let you open the file. If the application was code signed, the file’s digital certificate is also checked and used to validate integrity. This prevents tampered applications from running.

In Mac OS X 10.8 (Mountain Lion), Gatekeeper runs all those checks and also validates the source of the download. I believe this is done using digital certificates, rather than another extended attribute. If the file is from an approved source (the Mac App Store or a recognized Developer ID) then it’s allowed to run. Gatekeeper also checks developer certificates against a blacklist. So here is the list of checks:

  • Is the quarantine attribute set?
  • Is the file from an approved source (per the user’s settings)?
  • Is the digital certificate on the blacklist?
  • Has the signed application been tampered with?
  • Does the application contain a known malware signature?

If it passes those checks, it can run.

What is the quarantine bit?

The quarantine bit is an extended file attribute set by certain applications on downloaded files. Launch Services checks it when running an application. When you approve an application (first launch) the attribute is removed, so you are never bothered again for that version. This is why some application updates trigger quarantine and others don’t… the bit is set by the downloading application, not the operating system.

What applications set the quarantine bit?

Most common applications, like Safari, Firefox, and Mail.app, plus the really big list in /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/Exceptions.plist, plus any applications where developers implement it as part of their download features. In other words, most things a consumer will use to download files off the Internet.
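If you want to see the quarantine attribute for yourself, here is a minimal sketch – assuming a Mac with the bundled xattr command-line tool on the path, and a file path you supply – that reports whether a downloaded file still carries it:

```python
import subprocess
import sys

def quarantine_attr(path):
    """Return the com.apple.quarantine value for path, or None if it is not set."""
    result = subprocess.run(
        ["xattr", "-p", "com.apple.quarantine", path],
        capture_output=True, text=True,
    )
    # xattr exits non-zero when the attribute is missing.
    return result.stdout.strip() if result.returncode == 0 else None

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: quarantine_check.py <file>")
    value = quarantine_attr(sys.argv[1])
    if value is None:
        print("No quarantine attribute - Launch Services will not re-check this file.")
    else:
        print("Quarantined:", value)  # flags, timestamp, and the downloading agent
```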
But clearly they won’t catch everything, so there are still applications that can download files and avoid Gatekeeper. System utilities like curl aren’t protected.

What apps aren’t protected?

  • Anything already on your system is grandfathered in.
  • Files transferred or installed using fixed media like DVDs, USB drives, and other portable media.
  • Files downloaded by applications that don’t set the quarantine bit.
  • Scripts and other code that isn’t executable.

So will this protect me from Flash and Java malware?

Nope. Although they are somewhat sandboxed in browsers (which varies widely by browser), applets and other code run just fine in their container, and aren’t affected or protected. Now we just need Adobe to sandbox Flash like they did on Windows.

What is the Developer ID?

This is a new digital certificate issued by Apple for code signing. It is integrated into Xcode. Any developer in the Mac App Developer Program can obtain one for free. Apple does not review apps signed with a Developer ID, but if they find a developer doing things they shouldn’t they can revoke that certificate. These are signed by an Apple subroot that is separate from the Mac App Store subroot.

How are Developer ID certificates revoked?

Mountain Lion includes a blacklist that Apple updates every 24 hours.

If a malicious application is found and Apple revokes the certificate, will it still run?

Yes, if it has already run once and had the quarantine bit cleared. Apple does not remove the app from your system, although they said they can use Software Update to clean any widespread malware, as they did with Mac Defender.

What about a malicious application in the Mac App Store?

Apple will remove the application from the App Store. This does not remove it from your system, and it would also need to be cleaned with a software update. If we start seeing a lot of these kinds of problems, I expect this mechanism to change.

Does this mean all Mac applications require code signing?

No, but code signing is required for all App Store and Developer ID applications. Starting in Lion, Apple includes extensive support for code signing and sandboxing. Developers can break out and sign different components of their applications and implement pretty robust sandboxing. While I expect most developers to stick with basic signing, the tools are there for building some pretty robust applications (as they are on Windows – Microsoft is pretty solid here as well, although few developers take advantage of it).

What role does sandboxing play?

All Mac App Store applications must implement sandboxing by March 1st, long before Mountain Lion is released. Sandbox entitlements are


Friday Summary: February 17, 2012

I managed to take a couple days off last week, and got out of town. I went camping with a group of friends, all from very different backgrounds, with totally unrelated day jobs – but we all love camping in the desert. Whenever we’re BSing by the campfire, they ask me about current events in security. There’s almost always a current data breach, ‘Anonymous’ attack, or whatever. This group is decidedly non-technical and does not closely follow the events I do. This trip the question on their minds was “What’s the big deal with SOPA?”

Staying away from the hyperbole and accusations on both sides, I explained that the bill would have given content creators the ability to shut down web sites without due process if they suspected they hosted or distributed pirated content. I went into some of the background around issues of content piracy; sharing of intellectual property; and how digital media, rights management, and parody make the entire discussion even more cloudy. I was surprised that this group – on average a decade older than myself – reacted more negatively to SOPA than I did. One of them had heard about the campaign contributions and was pissed. “Politicians on the take, acting on behalf of greedy corporations!” was the general sentiment. “My sons share music with me all the time – and I am always both happy and surprised when they take an interest in my music, and buy songs from iTunes after hearing it at my place.” And, “Who the hell pirates movies when you can stream them from Netflix for a couple bucks a month?”

I love getting non-security people’s reactions to security events. It was a very striking reaction from a group I would not have expected to get all that riled up about it. The response to SOPA has been interesting because it crosses political and generational lines. And I find it incredibly ironic that the first thing both sides state is that they are against piracy – but they cannot agree on what constitutes piracy vs. fair use. One of my favorite slogans from the whole SOPA debate was It’s No Longer OK To Not Know How The Internet Works, accusing the backers of the legislation of being completely ignorant of a pervasive technology that has already changed the lives of most people. And even people who I do not consider technically sophisticated seem to “get it”, as we saw with the groundswell of support. I am willing to bet that continuing advances in technology will make it harder and harder for organizations like the RIAA to harass their customers. Maybe invest some of that money in a new business model? I know, that’s crazy talk! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s OWASP presentation is live.
  • Adrian’s Dark Reading post on The Financial Industry’s Effect On Database Security.
  • Rich’s TidBITS posts: Mac OS X 10.8 Mountain Lion Stalks iOS & Gatekeeper Slams the Door on Mac Malware Epidemics.

Favorite Securosis Posts

  • Mike Rothman: RSAG 2012: Application Security. Love Adrian’s summary of what you’ll see at the RSA Conference around AppSec. Especially since we get to see SECaaS in print.
  • Adrian Lane: OS X 10.8 Gatekeeper in Depth. Real. Practical. Security.

Other Securosis Posts

  • RSA Conference 2012 Guide: Key Themes.
  • RSA Conference 2012 Guide: Network Security.
  • Incite 2/15/2012: Brushfire.
  • Friday Summary: February 10, 2012.
  • [New White Paper] Network-Based Malware Detection: Filling the Gaps of AV.
  • Implementing and Managing a Data Loss Prevention (DLP) Solution: Index of Posts.
  • Implementing DLP: Starting Your Integration.
  • Implementing DLP: Deploying Network DLP.
  • Implementing DLP: Deploying Storage and Endpoint.

Favorite Outside Posts

  • Mike Rothman: The Sad and Ironic Competition Within the Draft “Expert” Community. Whether you are a football fan or not, read this post and tell me there aren’t similarities in every industry. There are experts, and more who think they are experts, and then lots of other jackasses who think breaking folks down is the best way to make themselves look good. They are wrong…
  • Adrian Lane: Printing Drones. I can think of several good uses – and a couple dozen evil ones – for something like this. Control and power will be a bit tricky, but the potential for amusement is staggering!

Project Quant Posts

  • Malware Analysis Quant: Metrics – Build Testbed.
  • Malware Analysis Quant: Metrics – Confirm Infection.
  • Malware Analysis Quant: Monitoring for Reinfection.
  • Malware Analysis Quant: Remediate.
  • Malware Analysis Quant: Find Infected Devices.
  • Malware Analysis Quant: Defining Rules.
  • Malware Analysis Quant: The Malware Profile.

Research Reports and Presentations

  • Network-Based Malware Detection: Filling the Gaps of AV.
  • Tokenization Guidance Analysis: Jan 2012.
  • Applied Network Security Analysis: Moving from Data to Information.
  • Tokenization Guidance.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.

Top News and Posts

  • Flash Player Security Update via Krebs, and a Java Security Update.
  • Gatekeeper for Mountain Lion.
  • Vote for Web Hacking Top Ten.
  • Not so random numbers lead to bad keys? Who knew?
  • Paget Demos Wireless Credit Card Theft.
  • Carrier IQ Concerns.

Blog Comment of the Week

No comments this week. Starting to think our comments feature is broken. Oh, wait, it is!


Implementing DLP: Deploying Storage and Endpoint

Storage deployment

From a technical perspective, deploying storage DLP is even easier than the most basic network DLP. You can simply point it at an open file share, load up the proper access rights, and start analyzing. The problem most people run into is figuring out which servers to target, which access rights to use, and whether the network and storage repository can handle the overhead.

Remote scanning

All storage DLP solutions support remotely scanning a repository by connecting to an open file share. To run, they need to connect to a share on the server to scan (even if it’s only an administrator-only share). But straightforward or not, there are three issues people commonly encounter:

  • Sometimes it’s difficult to figure out where all the servers are and what file shares are exposed. To resolve this you can use a variety of network scanning tools if you don’t have a good inventory to start with.
  • After you find the repositories you need to gain access rights. And those rights need to be privileged enough to view all files on the server. This is a business process issue, not a technical problem, but most organizations need to do a little legwork to track down at least a few server owners.
  • Depending on your network architecture you may need to position DLP servers closer to the file repositories. This is very similar to a hierarchical network deployment, but here we position closer to the storage to reduce network impact or work around internal network restrictions (not that everyone segregates their internal network, even though that single security step is one of the most powerful tools in our arsenal). For very large repositories which you don’t want to install a server agent on, you might even need to connect the DLP server to the same switch. We have even heard of organizations adding a second network interface on a private segment to support particularly intense scanning.

All of this is configured in the DLP management console, where you specify the servers to scan, enter the credentials, assign policies, and determine scan frequency and schedule.

Server agents

Server agents support higher performance without network impact, because the analysis is done right on the storage repository, with only results pushed back to the DLP server. This assumes you can install the agent and the server has the processing power and memory to support the analysis. Some agents also provide additional context you can’t get from remote scanning. Installing the server agent is no more difficult than installing any other software, but as we have mentioned (multiple times) you need to test to understand compatibility and performance impact. Then you configure the agent to connect to the production DLP server. Unless you run into connection issues due to your network architecture, you then move over to the DLP management console to tune the configuration. The main things to set are scan frequency, policies, and performance throttles. Agents rarely run all the time – you choose a schedule, similar to antivirus, to reduce overhead and scan during slower hours. Depending on the product, some agents require a constant connection to the DLP server. They may compress data and send it to the server for analysis rather than checking everything locally. This is very product-specific, so work with your vendor to figure out which option works best for you – especially if their server agent’s internal analysis capabilities are limited compared to the DLP server’s.
As an example, some document and database matching policies impose high memory requirements which are infeasible on a storage server, but may be acceptable on the shiny new DLP server.

Document management system/NAS integration

Certain document management systems and Network Attached Storage products expose plugin architectures or other mechanisms that allow the DLP tool to connect directly, rather than relying on an open file share. This method may provide additional context and information, as with a server agent. This is extremely dependent on which products you use, so we can’t provide much guidance beyond “do what the manual says”.

Database scanning

If your product supports database scanning you will usually make a connection to the database using an ODBC agent and then configure what to scan. As with storage DLP, deployment of database DLP may require extensive business process work: to find the servers, get permission, and obtain credentials. Once you start scanning, it is extremely unlikely you will be able to scan all database records. DLP tools tend to focus on scanning the table structure and table names to pick out high-risk areas such as credit card fields, and then they scan a certain number of rows to see what kind of data is in the fields. So the process becomes:

  • Identify the target database.
  • Obtain credentials and make an ODBC connection.
  • Scan attribute names (field/column names).
  • (Optional) Define which fields to scan/monitor.
  • Analyze the first n rows of identified fields.

(We sketch this sampling approach in code at the end of this post.) We only scan a certain number of rows because the focus isn’t on comprehensive realtime monitoring – that’s what Database Activity Monitoring is for – and to avoid unacceptable performance impact. But scanning a small number of rows should be enough to identify which tables hold sensitive data, which is hard to do manually.

Endpoint deployment

Endpoints are, by far, the most variable component of Data Loss Prevention. There are massive differences between the various products on the market, and far tighter performance constraints, to fit on general-purpose workstations and laptops rather than dedicated servers. Fortunately, as widely as the features and functions vary, the deployment process is consistent.

  • Test, then test more: I realize I have told you to test your endpoint agents at least three times by now, but this is the single most common problem people encounter. If you haven’t already, make sure you test your agents on a variety of real-world systems in your environment to make sure performance is acceptable.
  • Create a deployment package or enable in your EPP tool: The best way to deploy the DLP agent is to use whatever software distribution
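Here is the database sampling sketch promised above. It is a minimal illustration using the third-party pyodbc library; the connection string, column-name patterns, and 100-row sample size are assumptions for the example, not how any particular DLP product behaves:

```python
import re
import pyodbc  # third-party ODBC bridge; any DB-API driver works similarly

# Illustrative values - a real deployment gets these from the DLP console.
CONN_STR = "DSN=finance_db;UID=dlp_scan;PWD=example"
SUSPICIOUS_NAMES = re.compile(r"(card|ccn|pan|ssn|account)", re.IGNORECASE)
CARD_PATTERN = re.compile(r"\b(?:\d[ -]*?){13,16}\b")
SAMPLE_ROWS = 100  # analyze only the first n rows, not the whole table

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()

# Step 1: scan attribute (column) names in the catalog for high-risk candidates.
candidates = [
    (row.table_name, row.column_name)
    for row in cursor.columns()
    if SUSPICIOUS_NAMES.search(row.column_name or "")
]

# Step 2: sample the first n rows of each candidate field to see what is really there.
# Table and column names come from the database catalog, not user input.
for table, column in candidates:
    cursor.execute(f"SELECT {column} FROM {table}")
    sample = cursor.fetchmany(SAMPLE_ROWS)
    hits = sum(1 for (value,) in sample if value and CARD_PATTERN.search(str(value)))
    if hits:
        print(f"{table}.{column}: {hits}/{len(sample)} sampled rows look like card numbers")
```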


RSA Conference 2012 Guide: Network Security

Yesterday we posted the key themes we expect to see at the upcoming RSA Conference. Now we’ll start digging into our main coverage areas, beginning with network security.

Firewalls are (still) dead! Long live the perimeter security gateway!

Shockingly enough, similar to the past three years at RSAC, you’ll hear a lot about next generation firewalls (NGFW). And you should, as port- and protocol-based firewall rules will soon go the way of the dodo bird. If by soon we mean 5+ years, anyway – corporate inertia remains a hard game to predict. The reality is that you need to start moving toward deeper inspection of both ingress and egress traffic through your network, and the NGFW is the way to do that. The good news is that every (and we mean every) vendor in the network security space will be showing a NGFW at the show. Some are less NG than a bolted-on IPS to do the application layer inspection, but at the end of the day they can all claim to meet the NGFW market requirements – as defined by the name-brand analysts, anyway. Which basically means these devices are less firewalls and more perimeter security gateways. So we will see two general positioning tactics from the vendors:

  • Firewall-centric vendors: These folks will mount a full frontal assault on the IPS business. They’ll talk about how there is no reason to have a stand-alone IPS anymore, and how the NGFW now does everything the IPS does and more. The real question for you is whether you are ready for the forklift that moving to a consolidated perimeter security platform requires.
  • IPS vendors: IPS vendors have to protect their existing revenue streams, so they will be talking about how the NGFW is the ultimate goal, but it’s more about how you get there. They’ll be talking about migration and co-existence and all those other good things that made customers feel good about dropping a million bucks on an IPS 18 months ago. But no one will be talking about how the IPS or yesterday’s ports & protocols firewall remains the cornerstone of the perimeter security strategy. That sacred cow is slain, so now it’s all about how you get there.

Which means you’ll be hearing a different tune from many of the UTM vendors. Those same brand-name analysts always dictated that UTM only met small company needs and didn’t have a place in an enterprise network. Of course that wasn’t exactly true, but the UTM vendors have stopped fighting it. Now they just magically call their UTM a NGFW. It actually makes sense (from their perspective), as they understand that an application-aware firewall is just a traditional firewall with an IPS bolted on for application classification. Is that a ‘NGFW’? No, because it still runs on firewall blocking rules based on ports and protocols (as opposed to applications), but it’s not like RSA attendees (or most mid-market customers) are going to know the difference.

Control (or lack thereof)

Another batch of hyperbole you’ll hear at the conference is about control. This actually plays into a deeply felt desire on the part of all security professionals, who don’t really control much of anything on a daily basis. So you want to buy devices that provide control over your environment. But this is really just a different way of pushing you toward the NGFW, to gain ‘control’ over the applications your dimwit end users run. But control tends to put the cart ahead of the horse. The greatest impact of the NGFW is not in setting application-aware policies. Not at first.
The first huge value of a NGFW is gaining visibility into what is going on in your environment. Basically, you probably have no idea what apps are being used by whom and when. The NGFW will show you that, and then (only then) are you in a position to start trying to control your environment through application-centric policies. While you are checking out the show floor, remember that embracing application awareness on your perimeter is about more than just controlling the traffic. It all starts with figuring out what is really happening on your network.

Network-based Malware Detection gains momentum

Traditional endpoint AV doesn’t work. That public service message has been brought to you by your friend Captain Obvious. But even though blacklists and signatures don’t work anymore, there are certain indicators of malware that can be tracked. Unfortunately that requires you to actually execute the malware to see what it does. Basically it’s a sandbox. It’s not really efficient to put a sandbox on every endpoint (though the endpoint protection vendors will try), so this capability is moving to the perimeter. Thus a hot category you’ll see at RSA is “network-based malware detection” gear. These devices sit on the perimeter and watch all the files passing through to figure out which of them look bad, and then either alert or block. They also track command and control traffic on egress links to see which devices have already been compromised, and trigger your incident response process. Of course these monitors aren’t a panacea for catching all malware entering your network, but you can stop the low-hanging fruit before it makes its way onto your network. There are two main approaches to NBMD, which are described ad nauseam in our recently published paper, so we won’t get into that here. But suffice it to say, we believe this technology is important, and until it gets fully integrated into the perimeter security gateway, it’s a class of device you should be checking out while you are at the show.

Big security flexes its muscle

Another major theme related to network security we expect to see at the show is Big Security flexing its muscles. Given the need for highly specialized chips to do application-aware traffic inspection, and the need to see a ton of traffic to do this network-based malware detection and reputation analysis, network security is no longer really a place for start-ups (and no, Palo Alto is no


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.