Friday Summary: February 3, 2012

Since Rich is vacationing (sorry, working hard) at a security conference in Mexico, I figured I would write this week’s Friday Summary. I am pretty jazzed about some upcoming white papers I’ll be writing on securing data and applications at scale, understanding and selecting masking technologies, and why log management is not dead! And I am having a good time researching and writing the DAM 2.0 DSP series as well. I originally intended to write about our research agenda but changed my mind. Frankly, I have spring fever. Spring fever, you ask, in the first week of February? Yep. It’s 74 degrees here and sunny. WTF? Punxsutawney Phil weighed in with his opinion, and after burning his retinas, it looks like we are going to have another six weeks of winter. I sure hope so! Another six weeks of this type of weather would be awesome. I have been on the phone with dozens of people around the country, from Boston to San Diego, and they are all experiencing fantastic weather. Even Gunnar reports highs of 48 degrees in Minnesota. I guess the cold air jet stream has been staying north of the border. For me this means my peach trees are blooming. Blooming! On freakin’ January 30th! See for yourself:

And I know some of you may not care, but the warm weather means my backyard garden is almost complete. Following up on my post last October, in just a couple short months the Vegetable Fortress is built! Overbuilt? Beauty is in the eye of the beholder. I may put some solar-powered laser turrets on it. You never know when Al-Qaeda might train gophers with TIG welders to attack my squash. And if the DHS threat level spikes I will have a detachment of Araucana commando chickens to beat back the attack. The price of vegetables is eternal vigilance – and $3.95 for GMO-free seeds. Now call in sick and go outside to enjoy the nice weather! You’ll be glad you did. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Our Research Page with every freakin’ white paper we’ve done in the last three years.
• Rich, Adrian, and Shimmy discuss NoSQL Security with Couchbase and Mongo founders.

Other Securosis Posts
• Bridging the Mobile Security Gap: Operational Consistency.
• Malware Analysis Quant: Take the Survey (and win fancy prizes!)
• Incite 2/1/2012: Bored to Tears.
• Implementing DLP: Integration, Part 1.
• Understanding and Selecting Database Security Platforms.
• Bridging the Mobile Security Gap: The Need for Context.
• Implementing and Managing a Data Loss Prevention (DLP) Solution: Index of Posts.
• Implementing DLP: Final Deployment Preparations.
• Malware Analysis Quant: Phase 1 – The Process [Check out the paper!]

Favorite Outside Posts
• Mike Rothman: Mr. Waledac: The Peter North of Spamming. Krebs could have written this post in Swahili and it would still be my favorite outside link. Anyone who can pull off a Peter North mention in the title of a post gets my weekly vote. And it’s even a good post! Krebs digs into the intrigue of the Russian spam mafia.
• David Mortman: BSides/RSA Conference Dust Up. And the resolution. Beneficial discussion.
• Rich: Firewalls and SSL: More Profitable than Facebook. Gunnar’s got a great point: firewalls, AV, and SSL sell – and very little money gets spent on innovative products.
• Adrian Lane: Fascinating look at Netflix’s Ephemeral Volatile Caching in the cloud. Not security related, but a good presentation of what’s possible with cloud content distribution.

Project Quant Posts
• Malware Analysis Quant: Monitoring for Reinfection.
• Malware Analysis Quant: Remediate.
• Malware Analysis Quant: Find Infected Devices.
• Malware Analysis Quant: Defining Rules.
• Malware Analysis Quant: The Malware Profile.
• Malware Analysis Quant: Dynamic Analysis.
• Malware Analysis Quant: Static Analysis.
• Malware Analysis Quant: Build Testbed.

Research Reports and Presentations
• Tokenization Guidance Analysis: Jan 2012.
• Applied Network Security Analysis: Moving from Data to Information.
• Tokenization Guidance.
• Security Management 2.0: Time to Replace Your SIEM?
• Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
• Tokenization vs. Encryption: Options for Compliance.
• Security Benchmarking: Going Beyond Metrics.

Top News and Posts
• Paget Demonstrates Wireless Credit Card Theft.
• Carrier IQ Concerns.
• WPA2 Vulnerability Analysis.
• Symantec patches pcAnywhere, says it’s safe.
• Secure Virtual Storage – the AWS way. Missed this in last week’s summary.
• Low Orbit Ion Cannon DDoS Analysis. Not new, but newsworthy.
• Android Malware Infection. Android can be a more powerful platform because you can run more powerful apps on it. This is made possible by a lax security model. That’s the tradeoff.
• Google to Censor Blogger Blogs on a ‘Per Country Basis’. The tradeoff is either Google blogs get banned on a per-country basis or Google bans select blogs. Revenue trumps ethics every time.

Blog Comment of the Week
None this week.


Bridging the Mobile Security Gap: Operational Consistency

We started the Bridging the Mobile Security Gap series by accepting that we can’t control the devices that show up on our networks any more. We followed up with a diatribe on the need for context to build and enforce policies which ensure that (only) the right users get to the right stuff at the right times. To wrap up the series we need to dig deeper into enforcement, because as we all know the chain is only as strong as its weakest link.

There are various places where mobile device security policies can be enforced – including on the devices themselves (via mobile device management) and on the network (firewall/VPN, IPS, network access control, etc.). There is no one right or wrong place to enforce policies. In fact the best answer is often “all of the above”. The more places you can enforce policy, the more likely your defenses will succeed at blocking attacks. Of course complexity is the obvious downside to multiple enforcement points, and complexity has a strong negative correlation with operational consistency. You need to make sure your enforcement points work together. Why? Let’s run through a few scenarios where policies are not aligned. Yeah, they do not end well. You can implement a policy forcing devices to connect through the corporate VPN to receive the protection of the enterprise network – but that only works if the VPN recognizes the device and puts it in the right trust zone, with access to what the user needs. When that doesn’t happen correctly, the user is out of business – or a risk. Likewise, preventing misconfigured smartphones from accessing the network reflects good security enforcement, right? Sure, unless it belongs to the CEO, who is trying to access a letter of understanding about an acquisition – even worse if you have no way to override the control. Exceptions are part of the game of managing security, so you need the ability to adapt as needed. Both those scenarios result in users being unable to access what they need, which means a bad day for you. This is why neither MDM nor any kind of network-based control can operate in a vacuum. You can take a number of steps to attain operational consistency.

Coexistence

The first stop on our path to policy consistency is just making the enforcement points coexist. Do enough to make sure no tool is working contrary to the others. Unfortunately this is largely a manual process. Whenever changes are made or new policies implemented, your administrators need to run through the impact of these changes. All of them. Well, all the practical ones anyway. It’s a lot of work, but necessary, given how important mobile devices have become to business productivity. Remember the good old days, when you did a similar dance when changing firewall rules? Some folks waited for the help desk to light up, and then they knew something was broken. We don’t recommend that approach. To avoid that problem vendors started offering built-in policy checkers, and third-party firewall management tools emerged to perform these functions at higher scale and on multiple firewalls. Unfortunately those tools don’t support mobile devices (or the relevant network controls) today, so for now you are on your own – something like the sketch below, done by hand. That can be problematic, since you know (even if you don’t want to admit it) that it’s difficult to maintain operational discipline – particularly in the face of the number of changes made, exceptions managed, and other fires to fight. It’s not where you want to be, but coexistence is the start.
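To make the manual cross-check concrete, here is a minimal sketch of what “making enforcement points coexist” amounts to. Everything in it is hypothetical: the policy fields, device groups, and the MDM/NAC split are assumptions for illustration, not any vendor’s actual export format.

```python
# Hypothetical policy exports from an MDM tool and a NAC tool.
# The goal is to flag device groups where one tool's policy
# quietly undermines the other's.

MDM_POLICIES = {
    # device group -> posture the MDM enforces on the device
    "exec-smartphones": {"passcode": True, "vpn_required": True},
    "contractor-tablets": {"passcode": True, "vpn_required": False},
}

NAC_POLICIES = {
    # device group -> what the network side does with the device
    "exec-smartphones": {"zone": "corp", "vpn_terminated": False},
    "contractor-tablets": {"zone": "guest", "vpn_terminated": False},
}

def find_conflicts(mdm, nac):
    """Return (group, issue) pairs where the policies don't line up."""
    conflicts = []
    for group, posture in mdm.items():
        net = nac.get(group)
        if net is None:
            conflicts.append((group, "managed by MDM but unknown to NAC"))
            continue
        # Device forced onto the VPN, but the network never terminates it:
        # the user is "out of business -- or a risk".
        if posture["vpn_required"] and not net["vpn_terminated"]:
            conflicts.append((group, "MDM requires VPN; NAC never terminates it"))
    return conflicts

for group, issue in find_conflicts(MDM_POLICIES, NAC_POLICIES):
    print(f"{group}: {issue}")
```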
Integration at the console

The next step is console integration. In this scenario alerts funnel from one management console to the other. Integration at least gives administrators a coordinated view of what’s happening. It may even be possible to click on one console and have that link to a specific event or device in the other. Very fancy, and downright useful from an operational standpoint. The less integration your admins need to perform in their own heads, the more productive they will be. Of course this requires cooperation between vendors, and these kinds of relationships are not commonplace. But they will be – enterprise customers will demand them. Another benefit of this initial integration is more effective compliance reporting. Vendors map from a data source to the compliance report and pump the data in. That’s pretty helpful too – you know how painful getting ready for an audit remains, especially when you need to manage 5-10 different data sources to show the auditor that you know what you’re doing. Of course this is less than full integration – you still need to deal with multiple consoles to make policy changes, and the logic to ensure a policy in one tool doesn’t adversely impact another tool is missing. But it’s progress.

True integration

What you really want is the ability to manage a single policy, implemented across different devices and network controls. How cool would that be? But don’t hold your breath waiting. Like most other non-standards-based integration, we will see integration initially forced by huge customers. Some Fortune 50 company using a device-centric management product will want to implement network controls. They will call everyone together, write down on a whiteboard how much they spend with each company, and make it very clear that integration will happen, and soon. It’s the proverbial offer they can’t refuse, and they usually don’t. Over time integration gives way to consolidation, and we expect MDM to be integrated into the larger IT device management stack, eventually working with network controls that way. Obviously that’s a few years down the road, but it’s the way these things work out. It’s not a matter of if but a matter of when. But without a crystal ball there isn’t much to do about that, so the best bet is to make decisions based on available integration today, and be ready to adapt for tomorrow.

Losing device specificity

We used to think of mobile devices as only laptops, but the pendulum has swung back the other way, to focus


Malware Analysis Quant: Take the Survey (and win fancy prizes!)

One of the coolest things about how we work at Securosis is our Totally Transparent Research approach. We always post our work to the blog first and let you folks have at it. In many cases it gets poked and prodded, ridiculed, and broken down. It’s certainly tough on the ego, but in the end it makes the work better. So we are now asking for more help as we enter Phase 2 of our Malware Analysis Quant research. As we described over the weekend, Phase 1 resulted in a nice (not so) little paper breaking down the process map for studying malware infections. Now we have to match up theory against reality. And thus the MAQ survey.

As with all our surveys, we have set it up so you can take it anonymously, and all the raw results (anonymized, in spreadsheet format) will be released after our analysis. By the way, unlike other folks posting surveys, we don’t know the answers before we post the survey. Click here to take the survey, and please spread the word. We know from our last few surveys that we need to be considerate of the time you are taking to help, so we kept this one pretty short. We would be surprised if it takes you more than 10-15 minutes.

We understand filling out surveys is a pain in the behind, so we are providing an incentive: we will give three $100 Amazon gift cards to lucky participants. You don’t need to provide an email address to take the survey, but you do to be entered into the drawing. We are also tracking where we get our responses from, so if you take the survey in response to this post, please use “Securosis” as your source code. If you repost the link you can make up your own code and email it to us. We’ll let you know how many people responded to your referral. If you generate sufficient response we will be happy to send you your keycode’s slice of the data.

Thanks again for your help. We’ll keep the survey open at least two weeks and then begin analysis. Again, here is the link: http://www.surveymonkey.com/s/MalwareAnalysisQuant-Survey

Photo credit: “Survey says…” originally uploaded by hfabulous


Incite 2/1/2012: Bored to Tears

It’s unbelievable how different growing up today is. When I was in elementary school in the late 70s, Pong was state of the art and a handheld Coleco football game would keep a little kid occupied for hours. When they came up with the Head to Head innovation, two kids would be occupied for hours. That was definitely a different type of Occupy movement. We also didn’t have 300 channels on the boob tube. We had 5 channels, and the highlight of the year was Monster Week. At least for me. Most days I jumped on my bike to go play with my friends. Sometimes we played football. Okay – a lot of days we’d play football. It was easy – you didn’t need much equipment or a special field or anything. Just an even number of kids. I’m not sure what little girls did back in the day, since it was just me and my younger brother, but I’m sure it was similarly unsophisticated. We just played.

Why am I getting nostalgic? Basically because I’m frustrated. Today kids don’t play. They need to be entertained. The thing that makes me cringe most is when one of my kids tells me they are bored. Bored? This usually happens after I tell them 5 hours on the iPod touch is enough over the weekend. Or that the 3 hours they watched crappy TV in the morning is more than enough. I tell them to get a book and read. I tell them to play a game. Maybe use some of the thousands of dollars of toys in the basement. Perhaps even build something with Lincoln Logs. Or break out one of the 25 different Lego contraptions we have. Mostly I tell them to get out of my hair, since I’m doing important stuff. Like reading about the Super Bowl on my iPad. But I digress.

Whatever happened to the 5 Best Toys of All Time? I’d add a football to that list and be good. That was my childhood in a nutshell. No more. Our kids’ minds are numbed with constant stimulation, which isn’t surprising considering that many of us are similarly numb, and it’s not helping us find happiness. Rich sent around this article over the weekend, and it’s right. We seem to have forgotten what it’s like to interact with folks, unless it’s via Words with Friends. Sometimes you need to slow down to speed up in the long run. I know you can’t stop ‘progress’. But you don’t need to just accept it either. After XX1 realized I wasn’t going to cave and let her play on the computer, she spent a few hours writing letters to her camp friends. She painstakingly colored the envelopes, and I think she even wrote English. But what she wrote isn’t the point. It’s that there was no battery, power cord, or other electronics involved. No ads were flying at her head either. Amazingly enough, she overcame her boredom and was even a little disappointed when everyone had to get ready for bed. It was a small victory, but I’ll take it. They don’t come along too often, since my kids are always right. Just ask them.

-Mike

Photo credits: “Mattel & Coleco H2H classics” originally uploaded by Vic DeLeon

Heavy Research

After a bit of a blogging hiatus we are back at it. The Heavy Research feed is hopping, so here are a couple links to our latest stuff. Please check them out and (as always) let us know what you think via comments.

• Implementing and Managing a Data Loss Prevention (DLP) Solution: Index of Posts: Rich will be updating this post with the latest in his ongoing series on DLP.
• Understanding and Selecting Database Security Platforms: Rich and Adrian are updating their landmark DAM research from a few years ago. As with many things, what used to be a single-purpose capability (DAM) is now a database security platform. Follow along as they explore exactly what that means.
• Bridging the Mobile Security Gap: The Need for Context: Got rid of those smartphones yet? No? Then you should be checking out this series on how to provision layered controls to maintain order, in light of the onslaught of all sorts of new devices.
• Malware Analysis Quant: Phase 1 – The Process: We have finished up Phase 1 of Malware Analysis Quant, and packaged up the process map and descriptions into a paper. Check it out, but please understand the process will continue to evolve as we keep digging into the research. We will launch the survey this week, so keep an eye out.

You can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Incite 4 U

Privacy and Google: Google’s new privacy policy has been making waves the last few days. For me it’s not so much about the policy – I’m nonplussed about that. Sure, I don’t like Google’s non-anonymity posture. On the other hand it’s much easier to understand Google’s consolidated policy on privacy and the intentions behind it – for that they should be commended. Essentially it comes down to “use our stuff and we’ll use your data”, which is clear enough and completely unsurprising in light of their business model. Understand that an encrypted search provides only an illusion of privacy: nobody on the network you traverse should be able to see the query, but that does not mean your activity is not logged and indexed by Google (or your other search provider). Good or bad – you be the judge. The real question is what are you going to do about it? For me this is an important “rubber meets the road” milestone. And that’s too bad, because I like using Google’s search engine – it is clearly the best. Gmail is free and it works – but I don’t have an easy way to encrypt email running through my Gmail account so Google can’t read it. Which means I have to get off my lazy butt and stop using these tools, or accept that Google owns an online identity


Implementing DLP: Integration, Part 1

At this point all planning should be complete. You have determined your incident handling process, started (or finished) cleaning up directory servers, defined your initial data protection priorities, figured out which high-level implementation process to start with, mapped out the environment so you know where to integrate, and performed initial testing and perhaps a proof of concept. Now it’s time to integrate the DLP tool into your environment. You won’t be turning on any policies yet – the initial focus is on integrating the technical components and preparing to flip the switch.

Define a Deployment Architecture

Earlier you determined your deployment priorities and mapped out your environment. Now you will use them to define your deployment architecture.

DLP Component Overview

We have covered the DLP components a bit as we went along, but it’s important to know all the technical pieces you can integrate, depending on your deployment priorities. This is just a high-level overview; we go into much more detail in our Understanding and Selecting a Data Loss Prevention Solution paper. This list includes many different possible components, but that doesn’t mean you need to buy a lot of different boxes. Small and mid-sized organizations might be able to get everything except the endpoint agents on a single appliance or server.

Network DLP consists of three major components and a few smaller optional ones:

• Network monitor or bridge/proxy – this is typically an appliance or dedicated server placed inline or passively off a SPAN or mirror port. It’s the core component for network monitoring.
• Mail Transport Agent – few DLP tools integrate directly into a mail server; instead they insert their own MTA as a hop in the email chain.
• Web gateway integration – many web gateways support the ICAP protocol, which DLP tools use to integrate and analyze proxy traffic. This enables more effective blocking and provides the ability to monitor SSL encrypted traffic if the gateway includes SSL intercept capabilities.
• Other proxy integration – the only other proxies we see with any regularity are for instant messaging portals, which can also be integrated with your DLP tool to support monitoring of encrypted communications and blocking before data leaves the organization.
• Email server integration – the email server is often separate from the MTA, and internal communications may never pass through the MTA, which only has access to mail going to or coming from the Internet. Integrating directly into the mail server (message store) allows monitoring of internal communications. This feature is surprisingly uncommon.

Storage DLP includes four possible components:

• Remote/network file scanner – the easiest way to scan storage is to connect to a file share over the network and scan remotely. This component can be positioned close to the file repository to increase performance and reduce network saturation.
• Storage server agent – depending on the storage server, local monitoring software may be available. This reduces network overhead, runs faster, and often provides additional metadata, but may affect local performance because it uses CPU cycles on the storage server.
• Document management system integration or agent – document management systems combine file storage with an application layer and may support direct integration or the addition of a software agent on the server/device. This provides better performance and additional context, because the DLP tool gains access to management system metadata.
• Database connection – a few DLP tools support ODBC connections to scan inside databases for sensitive content.

Endpoint DLP primarily relies on software agents, although you can also scan endpoint storage using administrative file shares and the same remote scanning techniques used for file repositories. There is huge variation in the types of policies and activities which can be monitored by endpoint agents, so it’s critical to understand what your tool offers.

There are a few other components which aren’t directly involved with monitoring or blocking but affect integration planning:

• Directory server agent/connection – required to correlate user activity with user accounts.
• DHCP server agent/connection – to associate an assigned IP address with a user, which is required for accurate identification of users when observing network traffic. This must work directly with your directory server integration because the DHCP servers themselves are generally blind to user accounts (see the sketch below).
• SIEM connection – while DLP tools include their own alerting and workflow engines, some organizations want to push incidents to their Security Information and Event Management tools.

In our next post I will provide a chart that maps priorities directly to technical components.
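To illustrate why the directory and DHCP connections matter, here is a minimal sketch of the two-step attribution they enable. The data structures and names are invented for the example; real integrations pull this from the DHCP server and directory in whatever form those systems expose.

```python
# A network DLP sensor only sees a source IP address. Attributing an
# incident to a person takes two lookups: IP -> machine (DHCP), then
# machine -> logged-in user (directory). All values here are made up.

DHCP_LEASES = {
    # ip -> hostname, as reported by the DHCP server agent/connection
    "10.1.4.23": "WKS-ACCT-07",
    "10.1.9.102": "LT-SALES-31",
}

DIRECTORY_SESSIONS = {
    # hostname -> current user, from the directory server connection
    "WKS-ACCT-07": "jsmith",
    "LT-SALES-31": "mjones",
}

def attribute_incident(src_ip):
    """Map a DLP alert's source IP back to a user account, if possible."""
    host = DHCP_LEASES.get(src_ip)
    if host is None:
        return None  # no active lease: escalate for manual investigation
    return DIRECTORY_SESSIONS.get(host)

print(attribute_incident("10.1.4.23"))  # -> jsmith
print(attribute_incident("10.9.9.9"))   # -> None
```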


Understanding and Selecting Database Security Platforms

We love the Totally Transparent Research process. Times like this – when we hit upon new trends, discover unexpected customer use cases, or spot something going on behind the scenes – are when our open model really shows its value. We started a Database Activity Monitoring 2.0 series last October and suddenly halted it, because our research showed that platform evolution has changed from convergence to independent visions of database security, with customer requirements splintering. These changes are so significant that we need to discuss them publicly, so you can understand why we are suddenly making a significant departure from the way we have described a solution we have been talking about for the past 6+ years. Especially since Rich, back in his Gartner days, coined the term “Database Activity Monitoring” in the first place. What’s going on behind the scenes should help you understand how these fundamental changes alter the technical makeup of products and require new vocabulary to describe what we see.

With that, welcome to the reboot of DAM 2.0. We renamed this series Understanding and Selecting Database Security Platforms to reflect massive changes in products and the market. We will fully explain why as we progress through this series, but for now suffice it to say that the market has simply expanded beyond the bounds of the Database Activity Monitoring definition. DAM is now only a subset of the Database Security Platform market. For once this isn’t some analyst firm making up a new term to snag some headlines – as we go through the functions and features you’ll see that real products on the market today go far beyond mere monitoring. The technology trends, different bundles of security products, and use cases we will present are best reflected by the term “Database Security Platform”, which most accurately describes the state of the market today.

This series will consist of six distinct parts, some of which appeared in our original Database Activity Monitoring paper:

• Defining DSP: Our longstanding definition for DAM is broad enough to include many of the changes, but will be slightly updated to incorporate the addition of new data collection and analysis options. Ultimately the core definition does not change much, as we took into account two anticipated trends when we initially created it, but a couple of subtle changes encompass a lot more real estate in the data center.
• Available Features: Different products enter the DSP market from different angles, so we think it best to list all the possible major features. We will break these out into core components vs. additional features to help focus on the important ones.
• Data Collection: For several years the minimum feature set for DAM has included database queries, database events, configuration data, audit trails, and permission management. The continuing progression of new data and event sources, from both relational and non-relational data sources, extends the reach of the security platform to include many new application types. We will discuss the implications in detail.
• Policy Enforcement: The addition of hybrid data and database security protection bundled into a single product. Masking, redaction, dynamically altered query results, and even tokenization build on existing blocking and connection reset options to offer better granularity of security controls (a toy illustration of dynamic masking follows this post). We will discuss the technologies and how they are bundled to solve different problems.
• Platforms: The platform bundles, and these different combinations of capabilities, best demonstrate the change from DAM to DSP. There are bundles that focus on data security, compliance policy administration, application security, and database operations. We will spend time discussing these different visions and how they are being positioned for customers.
• Use Cases & Market Drivers: What companies are looking to secure mirrors adoption of new platforms, such as collaboration platforms (SharePoint), cloud resources, and unstructured data repositories. Compliance, operations management, performance monitoring, and data security requirements follow the adoption of these new platforms, which has driven the adaptation and evolution of DAM into DSP. We will examine these use cases and how DSP platforms are positioned to address demand.

A huge proportion of the original paper was influenced by the user and vendor communities (I can confirm this – I commented on every post during development, a year before I joined Securosis – Adrian). As with that first version, we strongly encourage user and vendor participation during this series. It does change the resulting paper, for the better, and really helps the community understand what’s great and what needs improvement. All pertinent comments will be open for public review, including any discussion on Twitter, which we will reflect here. We think you will enjoy this series, so we look forward to your participation! Next up: Defining DSP!
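Since “dynamically altered query results” can sound abstract, here is a minimal sketch of the idea. The roles, column names, and masking rule are assumptions for illustration only, not how any particular DSP product implements it.

```python
import re

# Toy dynamic masking: something sitting between application and
# database redacts sensitive values in result rows based on the
# caller's role, so unprivileged users never see full card numbers.

CC_PATTERN = re.compile(r"\b(\d{4})\d{8}(\d{4})\b")  # naive 16-digit match

def mask_row(row, role):
    """Return the row untouched for privileged roles, masked otherwise."""
    if role == "dba_audit":
        return row
    return {
        col: CC_PATTERN.sub(r"\1********\2", val) if isinstance(val, str) else val
        for col, val in row.items()
    }

row = {"name": "Pat Smith", "card": "4111111111111111"}
print(mask_row(row, "support"))    # card -> 4111********1111
print(mask_row(row, "dba_audit"))  # unchanged for the audit role
```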


Bridging the Mobile Security Gap: The Need for Context

As we discussed in the first post of this series, consumerization and mobility will remain macro drivers of security for the foreseeable future, and force us to stare down network anarchy. We can certainly go back into the security playbook and deal with an onslaught of unwieldy devices by implementing some kind of agentry on the devices to provide a measure of control. But the results of this device-centric approach have been mixed. And that’s being kind.

On the other hand, from a network security standpoint a device is a device is a device. Whether it’s a desktop sitting in a call center, a laptop in an airline club, or a smartphone traipsing around town, the goal of a network security professional is the same. Our network security charter is always to make sure those devices access the right stuff at the right time, and don’t have access to anything else. So we enforce segmented networks to restrict devices to certain trusted network zones. Remember: segmentation is your friend – and that model holds, up to a point. But here’s the rub in dealing with those pesky smartphones: the folks using these devices actually want to do stuff. You know, productive stuff – which requires access to data. The nerve of those folks. So just parking these devices in a proverbial Siberia on your network falls apart. Instead we have to figure out how to recognize these devices, make sure each device is properly configured, and then restrict it to only what the user legitimately needs to access.

But controlling the devices is only the first layer of the onion, and as you peel back layers your eyes start to tear. Are you crying? We won’t tell. The next layer is the user. Who has this device? Do they need access to sensitive stuff? Is it a guest who wants Internet access? Is it a contractor whose access should expire after a certain number of days? Is it a finance team member who needs to use a tablet app on a warehouse floor? Is it the CEO, who basically does whatever he or she wants? Depending on the answer you would enforce a very different network security policy. For lack of a better term, let’s call this context, and be clear that the idea of a generic network security policy no longer provides adequate granularity of protection as we move to this concept of any computing. It’s not enough to know which device the user uses – it gets down to who the user is and what they are entitled to access. Unfortunately that’s not something you can enforce exclusively on the device, because the device doesn’t: 1) know about the access policies within your enterprise, 2) have visibility into the network to figure out what the device is accessing, or 3) have the ability to interoperate with network security devices to enforce policies.

The good news is that we have seen this before, and as good security historians we can draw parallels with how we initially embraced VPNs. But there is a big difference from the past, when we could just install a VPN agent that downloaded a VPN access policy which worked with the perimeter VPN device. With smartphones we get extremely limited access to the mobile operating systems. These new operating systems were built with security much more strongly in mind – including protection from us – so mobile security agents don’t have nearly as deep access into what other apps are doing; that’s largely blocked by the sandbox model embraced by mobile operating systems. Simply put, the device doesn’t see enough to be able to enforce access policies without some deep, non-public access to the operating system.
But even that is generally not the stickiest issue with supporting these devices. You cannot count on being able to install mobile security agents on mobile devices, particularly because many organizations support a BYOD (bring your own device) policy, and users may not accept security agents on their own devices. Of course, you can declare they can’t access the network, which quickly becomes a Mexican stand-off. Isn’t there another way, which doesn’t require agents, to implement at least basic control over which mobile devices gain access and what they can reach? In fact there is. You should be looking for a network security device that can:

• Identify a mobile device and enforce device configuration policies.
• Have some idea of who the user is, and understand the access rights of the user + device combination. For example, the CFO may be able to get to everything from their protected laptop, but be restricted when using an app on their smartphone (see the sketch below).
• Support the segmentation approach of the enterprise network – identifying users and devices is neat but academic until it enables you to restrict them to specific network segments.

And we cannot forget: we must be able to do most of this without an agent on the smartphone. To bridge this mobile security gap, those are the criteria we need to satisfy. In the next post we will wrap up this series by dealing with some of the additional risk and operational issues of having multiple enforcement points to provide this kind of access control.
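As a minimal sketch of what a user + device access decision might look like, consider the toy function below. The roles, device states, and zone names are invented for illustration; a real network security device would draw this context from the directory, device posture checks, and its own segmentation model.

```python
# Context-based network policy: the decision hinges on the user AND
# the device, not either alone. All roles/zones here are hypothetical.

def assign_zone(user_role, device_type, device_managed):
    """Pick a network segment from the user + device combination."""
    if user_role == "guest":
        return "internet-only"
    if not device_managed:
        # Known user on an unmanaged BYOD device: limited access
        return "byod-restricted"
    if user_role == "cfo":
        # Same person, different device, different policy
        return "finance-full" if device_type == "laptop" else "finance-limited"
    return "corp-standard"

print(assign_zone("cfo", "laptop", True))        # finance-full
print(assign_zone("cfo", "smartphone", True))    # finance-limited
print(assign_zone("employee", "tablet", False))  # byod-restricted
```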


Implementing DLP: Final Deployment Preparations

Map Your Environment

No matter which DLP process you select, before you can begin the actual implementation you need to map out your network, storage infrastructure, and/or endpoints. You will use the map to determine where to push out the DLP components.

Network: You don’t need a complete and detailed topographical map of your network, but you do need to identify a few key components:

• All egress points. These are where you will connect DLP monitors to a SPAN or mirror port, or install DLP inline.
• Email servers and MTAs (Mail Transport Agents). Most DLP tools include their own MTA which you simply add as a hop in your mail chain, so you need to understand that chain.
• Web proxies/gateways. If you plan on sniffing at the web gateway you’ll need to know where these are and how they are configured. DLP typically uses the ICAP protocol to integrate. Also, if your web proxy doesn’t intercept SSL, buy a different proxy – monitoring web traffic without SSL is nearly worthless these days.
• Any other proxies you might integrate with, such as instant messaging gateways.

Storage: Put together a list of all storage repositories you want to scan. The list should include the operating system type, file shares / connection types, owners, and login credentials for remote scanning. If you plan to install agents, test compatibility on test/development systems.

Endpoints: This one can be more time consuming. You need to compile a list of endpoint architectures and deployments – preferably from whatever endpoint management tool you already use for things like configuration and software updates. Mapping machine groups to user and business groups makes it easier to deploy endpoint DLP by business unit. You need system configuration information for compatibility and testing. As an example, as of this writing no DLP tool supports Macs, so you might have to rely on network DLP or exposing local file shares to monitor and scan them.

You don’t need to map out every piece of every component unless you’re doing your entire DLP deployment at once. Focus on the locations and infrastructure needed to support the project priorities you established earlier.

Test and Proof of Concept

Many of you performed extensive testing or a full proof of concept during the selection process, but even if you did, it’s still important to push down a layer deeper now that you have more detailed deployment requirements and priorities. Include the following in your testing:

For all architectures: Test a variety of policies that resemble the kinds you expect to deploy, even if you start with dummy data. This is very important for testing performance – there are massive differences between using something like a regular expression to look for credit card numbers vs. database matching against hashes of 10 million real credit card numbers (see the sketch below). Also test mixes of policies to see how your tool supports multiple policies simultaneously, and to verify which policies each component supports – for example, endpoint DLP is generally far more limited in the types and sizes of policies it supports. If you have completed directory server integration, test it to ensure policy violations tie back to real users. Finally, practice with the user interface and workflow before you start trying to investigate live incidents.

Network: Integrate out-of-band and confirm your DLP tool is watching the right ports and protocols, and can keep up with traffic. Test integration – including email, web gateways, and any other proxies. Even if you plan to deploy inline (common in SMB), start by testing out-of-band.

Storage: If you plan to use any agents on servers, or integration with NAS or a document management system, test them in a lab environment first for performance impact. If you will use network scanning, test for performance and network impact.

Endpoint: Endpoints often require the most testing, due to the diversity of configurations in most organizations, the more limited resources available to the DLP engine, and all the normal complexities of mucking with users’ workstations. The focus here is on performance and compatibility, along with confirming which content analysis techniques really work on endpoints (the typical sales exec is often a bit obtuse about this). If you will use policies that change based on which network the endpoint is on, test that as well.

Finally, if you are deploying multiple DLP components – such as multiple network monitors and endpoint agents – it’s wise to verify they can all communicate. We have talked with some organizations that found limitations here and had to adjust their architectures.
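To make the regex vs. database matching contrast concrete, here is a toy comparison. The card numbers and the tiny hash set are fabricated stand-ins; a real deployment would load millions of hashes from the DLP server, which is exactly why performance testing matters.

```python
import hashlib
import re

# Two content analysis techniques: a regex that flags anything shaped
# like a card number vs. exact matching against hashes of known values.

CC_REGEX = re.compile(r"\b\d{13,16}\b")  # naive: any 13-16 digit run

# Stand-in for ~10M hashes of real card numbers held by the DLP server.
KNOWN_HASHES = {
    hashlib.sha256(n.encode()).hexdigest()
    for n in ("4111111111111111", "5500005555555559")
}

def regex_hits(text):
    """Cheap but noisy: matches real cards and random digit runs alike."""
    return CC_REGEX.findall(text)

def database_hits(text):
    """Precise but heavier: only numbers in the known set match."""
    return [n for n in CC_REGEX.findall(text)
            if hashlib.sha256(n.encode()).hexdigest() in KNOWN_HASHES]

sample = "order 9999888877776666 paid with card 4111111111111111"
print(regex_hits(sample))     # both digit runs flagged
print(database_hits(sample))  # only the known card number
```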


Malware Analysis Quant: Phase 1 – The Process [Check out the paper!]

We are well aware that the Quant research can be overwhelming. 70+ pages of process, metrics, and survey data is a lot to get through. So we have broken the Malware Analysis Quant project up into two phases. The first phase focuses on defining and describing the underlying process. In the second phase we get into metrics and run the survey to figure out who is actually doing which aspects of the process. In the end we will still produce the big paper in all its glory, but we figured an interim deliverable at the end of Phase 1 would make a lot of sense. So that’s what we have done.

Download paper: Malware Analysis Quant: Phase 1 – The Process (PDF)

You will see that we have updated the process map once again, to account for the fact that some organizations find infected devices and just remediate them. They don’t analyze the malware, or even see whether other devices have been infected. We don’t get it either, but it happens, so we need to reflect the possibility in the process map.

Again, we want to thank Sourcefire for sponsoring this Quant project.


Friday Summary: January 27, 2012

This is the Securosis Friday Summary. For those of you who don’t know, this is where Rich and I vent. When I started working with Rich I used to loathe writing this intro; now it’s therapeutic. It gives me a chance to talk about whatever is on my mind that I think people might find interesting. Sure, most Friday posts talk about security, but not always. If such things bother you – as one reader mentioned last week – search within the page for ‘Summary’ to avoid our ramblings.

Security Burnout? Breach Apathy? Repetitive task depression? Been there, done that, got the T-shirt to prove it? If you have been in security long enough, you will go through some security-industry-induced negative mental states. It happens to everyone on the security treadmill – it’s the security professionals’ version of the marathon runners’ wall. A tired, disinterested, day-to-day grind of SOSDD. I know I’ve had it – twice in fact. As an IT admin reviewing the same log files over and over again, and also from writing about security breaches caused by the same old SQL injection attacks.

Rich, James Arlen, and I got into a conversation about this over dinner the other night. Rich and I have achieved a quiet inner peace with the ups and downs of security, mainly because our work lets us do more of what we like and less of the daily grind that folks in IT security deal with every day. Usually during my career, with vacations frowned upon for startup executives, conferences were a source of inspiration. Actually, they still are. Presentations like Errata Security’s malicious iPhone and Jackpotting Automated Tellers can renew my interest and fascination with the profession. I go back to work with new energy and new ideas on what I can do to make things better. Somewhere down the line, though, reality always settles back in. As with life in general, I try not to get too worked up about this profession, but to find the pieces that fascinate me and delve into those technologies, leaving the rest of the stuff behind.

On Monday during the RSA Security Conference, Mike, Rich, David Mortman, and I will be helping with the ‘e10+’ event. The idea of this session is to provide advanced discussions for security pros who have been in the field over 10 years. We talk about some of the complex organizational problems security folks deal with, and share different strategies for addressing problems. Of course there is no shortage of interesting problems, and there are some heavily experienced – and opinionated – people in the room, so the discussion gets lively. It’s not on the agenda, but it dawned on me that dealing with security burnout – both causes and reactions – would actually be a good topic for that event. How to put the fun back in security. I hope our talks will do just that. Rich has some great ideas on consumerization and risk (yeah, I know – who thought risk could be interesting?) that I expect to spark some lively debate. Usually during RSA I am too busy worrying about my presentation or meeting with people to see much new stuff, but this year I am looking forward to the event. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Rich, Adrian, and Shimmy discuss NoSQL Security with Couchbase and Mongo founders.
• Adrian, Jamie, and Rich on the NetSec Podcast.

Other Securosis Posts
• Our Research Page with every freakin’ white paper we’ve done in the last three years.
• Implementing DLP: Getting Started.
• Incite 1/25/2011: Prized Possessions.
• Bridging the Mobile Security Gap: Staring down Network Anarchy (new series).
• Implementing and Managing a DLP Solution.
• The 2012 Disaster Recovery Breakfast.
• Baby Steps toward the New School.

Favorite Outside Posts
• Mike Rothman: Executive could learn a lot from Supernanny. Kevin hits it on the head here, just as Wendy did last week. Without even enforcement of the rules, you’re lost. Unless you are Steven Seagal (and you’re not), no one is Above the Law.
• Dave Lewis: How to close your Google account. Lots of blowback due to Google’s new privacy policy – here’s how you can protest.
• Adrian Lane: Implementation of MITM Attack on HDCP-Secured Links. Fascinating examination of an HDMI encryption attack – in real time – for fair use. It’s a bit on the technical side but does get to the heart of why DRM and closed systems stifle innovation.
• Rich: Pete Lindstrom’s take on recent SCADA vulnerability disclosures. I disagree with Pete a lot. It’s hit absurd levels in the past on a mailing list we are both on. And while I don’t agree with his characterizations of vulnerability research justifications, I do agree that for some things – especially SCADA – we need to think differently about disclosure.
• David Mortman: Google+ Failed Because of Real Names.

Project Quant Posts
• Malware Analysis Quant: Monitoring for Reinfection.
• Malware Analysis Quant: Remediate.
• Malware Analysis Quant: Find Infected Devices.
• Malware Analysis Quant: Defining Rules.
• Malware Analysis Quant: The Malware Profile.
• Malware Analysis Quant: Dynamic Analysis.
• Malware Analysis Quant: Static Analysis.
• Malware Analysis Quant: Build Testbed.

Research Reports and Presentations
• Tokenization Guidance Analysis: Jan 2012.
• Applied Network Security Analysis: Moving from Data to Information.
• Tokenization Guidance.
• Security Management 2.0: Time to Replace Your SIEM?
• Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
• Tokenization vs. Encryption: Options for Compliance.
• Security Benchmarking: Going Beyond Metrics.

And in case you missed it: Our Research Page with every freakin’ white paper we’ve done in the last three years.

Top News and Posts
• Kill pcAnywhere Right Now!
• We the People: Populist Protest Kills SOPA (Again).
• The spam tag cloud: Keeping you up to date on what’s important in life!
• Trojan trouble-ticket system. Say what you will about malware authors, but they’re usually highly adept at software development tools and techniques.
• Defacement frenzy, via our friends at LiquidMatrix.
• O2 leaking mobile numbers to web sites.
• Symantec acquires LiveOffice.
• Norton Source Code Stolen in 2006.

Blog Comment of the Week
No comments this week. We need to start writing better posts!


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

“Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.”

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.