Friday Summary: September 2, 2011

I was reading Martin McKeay’s post Fighting a Bad Habit. Martin makes a dozen or so points in the post – and shares some career angst – but there is a key theme that really resonates with me. Most technology lifers I know have their sense of self-worth tied up in what they are able to contribute professionally. Without the feeling of building, contributing, or making things better, the job is not satisfying. In college a close friend taught me what his father taught him: any successful career should include three facets. You should do research to stay ahead in your field. You should practice your craft to keep current. And you should teach those around you what you know. I find these things make me happy and make me feel productive. I have been fortunate that over my career there has been balance among these three areas. The struggle is that the balance is seldom found within a single job. Usually any job role is dominated by one of the three, so I choose another role or job that allows me to move on to the next leg of the stool. I do know I am happiest when I get to do all three, but windows of time when they are in balance are vanishingly small.

Another point of interest for me in Martin’s post was the recurring theme that – as security experts – we need to get outside the ‘security echo chamber’: the 6,000 or so dedicated security practitioners around the world who know their stuff, and struggle in futility to raise the awareness of those around them to security issues. And the 600 or so experts among them are seldom interested in the mundane – only the cutting edge, which keeps them even further from the realm of the average IT practitioner. It has become clear to me over the last year that this is a self-generated problem, and a byproduct of being in an industry few noticed until recently. We are simply tired of having the same conversations. For example, I have been talking about information-centric security since 1997. I have been actively writing about database security for a decade. On those subjects it feels as if every sentence I write, I have written before. Every thought is a variation on a long-running theme. Frankly, it’s tiring. It’s even worse when I watch Rich as he struggles with waning passion for DLP. I won’t mince words – I’ll come out and say it: Rich knows more about DLP than anyone I have ever met. Even the CTOs of the vendor companies – while they have a little more technical depth on their particular products – lack Rich’s understanding of all the available products, deployment models, market conditions, buying centers, and business problems DLP can realistically solve. And we have a heck of a time getting him to talk about it, because he has been talking about it for 8 years, over and over again.

The problem is that what is old hat for us is just becoming mainstream – DAM, DLP, and the rest of the security toolbox. So when Martin or Rich or I complain about having the same conversations over and over, well, tough. Suck it up. There are a lot of people out there who are not part of the security echo chamber, who want to learn and understand. It’s not sexy and it ain’t getting you a speaking slot at DefCon, but it’s beneficial to a much larger IT audience. I guess this is that third facet of a successful career: teach. It’s the education leg of our jobs, and it needs to be done. With this blog – and Martin’s – we have the ability to teach others what we know, to a depth not possible with Twitter and Facebook.
Learning you have an impact on a larger audience is – and should be – a reward in and of itself. On to the Summary:

Favorite Securosis Posts

  • Adrian Lane: Detecting and Preventing Data Migrations to the Cloud.
  • Mike Rothman: Fact-Based Network Security: Compliance Benefits. Theory is good. Applying theory to practice is better. That’s why I like this series – an application of many of the metrics concepts we’ve been talking about for years. Check out all the posts.
  • Rich: Since our posting is a bit low this week, I dug into the archives for The Data Breach Triangle. Mostly since Ed Bellis cursed me out for it earlier this week and I never learned why.

Other Securosis Posts

  • Security Management 2.0: Platform Evaluation, Part 1.
  • Incite 8/31/2011: The Glamorous Life.
  • Detecting and Preventing Data Migrations to the Cloud.
  • Fact-Based Network Security: Operationalizing the Facts.
  • The Mobile App Sec Triathlon.
  • Friday Summary (Not Too Morbid Edition): August 26, 2011.

Favorite Outside Posts

  • Mike Rothman: Preparing to Fire an Executive. Ultra-VC Ben Horowitz provides a guide to getting rid of a bad fit. These principles apply whether you’ve got to take out the CEO or a security admin. If you manage people, read this post. Now.
  • Adrian and Gunnar: Those Who Can’t Do, Audit. Mary Ann calls out SASO – er, OK, she called out Veracode. And Chris Wysopal fired back with Musings on Custer’s Last Stand. Not taking a side here, as both are about 80% right in what they are saying, but this back and forth is a fascinating read. On a different note, check out MAD’s book recommendations – they rock!
  • Rich: Veracode defends themselves from an Oracle war of words. I’m with Chris on this one… Oracle has yet to build the track record to support this sort of statement. Other companies have.

Project Quant Posts

  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.

Research Reports and Presentations

  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.

Top News and Posts

  • Mac


Making Bets

Being knee-deep in a bunch of research projects doesn’t give me enough time to comment on the variety of interesting posts I see each week. Of course we try to highlight them both in the Incite (with some commentary) and in the Friday Summary. But some posts deserve better, more detailed treatment. We haven’t done an analysis, but I’d guess we find a pretty high percentage of what Richard Bejtlich writes interesting. Here’s a little hint: it’s because he’s a big-brained dude. Early this week he posted a Security Effectiveness Model to document some of his ideas on threat-centric vs. vulnerability-centric security. I’d post the chart here, but without Richard’s explanations it wouldn’t make much sense. So check out the post. I’ll wait.

When I took a step back, Richard’s labels didn’t mean much to me. But there is an important realization in that Venn diagram: Richard presents a taxonomy for understanding the impact of the bets we make every day. No, I’m not talking about heading off to Vegas on a bender that leaves you… well, I digress. But the reality is that security people make bets every day. Lots of them. We bet on what’s interesting to the attackers. We bet on what defenses will protect those interesting assets. We bet on how stupid our employees are (they remain the weakest link). We also bet on how little we can do to make the auditors go away, since they don’t understand what we are trying to do anyway. And you thought security was fundamentally different from trading on Wall Street?

Here’s the deal. A lot of those bets are wrong, and Richard’s chart shows why. With limited resources we have to make difficult choices. So we start by guessing what will be interesting to attackers (Richard’s Defensive Plan). Then we try to protect those things (Live Defenses). Ultimately we won’t know everything that’s interesting to attackers (Threat Actions). We do know we can’t protect everything, so some of the stuff we think is important will go unprotected. Oh well. Even better, we won’t be entirely right about what the attackers want, nor about which defenses will work. So some of the stuff we think is important isn’t, and some of our defenses protect things that aren’t important. As in advertising, a portion of our security spend is wasted – we just don’t know which portion. Oh well. We’ll also miss some of the things the attacker thinks are important. That makes it pretty easy for them, eh? Oh well. And what about when we are right? When we think something will be a target, the attackers actually want it, and we have it defended? Well, we can still lose – a persistent attacker will still get their way, regardless of what we do. Isn’t this fun?

But the reason I so closely agree with most of what Richard writes is pretty simple. We both recognize the ultimate end result, which he summed up pretty crisply on Twitter (there are some benefits to a 140-character limit): “‘Managing risk,’ ‘keeping the bad guys out,’ ‘preventing compromise,’ are all failed concepts. How fast can you detect and correct failures?” And (http://twitter.com/taosecurity/status/108527362597060608): “The success of a security program then ultimately rests w/ the ability to detect & respond to failures as quickly & efficiently as possible.” React Faster and Better, anyone?
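For the set-theory inclined, here is a minimal sketch of those bets as overlapping sets – one circle each in a Venn diagram like Richard’s. The asset names are invented for illustration; the overlaps (and gaps) are the point:

```python
# A minimal sketch of the "security bets" as set overlaps.
# All asset names are hypothetical.

we_think_matters = {"customer-db", "source-code", "email", "print-server"}
we_defend        = {"customer-db", "email", "print-server"}          # Live Defenses
attackers_want   = {"customer-db", "source-code", "hr-records"}      # Threat Actions

# Defended and actually targeted -- a persistent attacker may still win here:
contested   = we_defend & attackers_want
# Targeted but undefended -- easy pickings:
easy_wins   = attackers_want - we_defend
# Defended but never targeted -- the wasted portion of the security spend:
wasted      = we_defend - attackers_want
# Targeted, and we never even guessed it mattered:
blind_spots = attackers_want - we_think_matters

print("Contested:  ", contested)    # {'customer-db'}
print("Easy wins:  ", easy_wins)    # {'source-code', 'hr-records'}
print("Wasted:     ", wasted)       # {'email', 'print-server'}
print("Blind spots:", blind_spots)  # {'hr-records'}
```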


Security Management 2.0: Platform Evaluation, Part 2

In the second half of Platform Evaluation for Security Management 2.0, we’ll cover evaluating other SIEM solutions. At this point in the process you have documented your requirements and rationally evaluated your current SIEM platform to determine what’s working and what’s not. This step is critical, because a thorough understanding of your existing platform’s strengths and weaknesses is the yardstick against which all other options will be measured. As you evaluate new platforms, you can objectively figure out whether it’s time to move on and select another platform. Again, at this point no decision has been made. You are doing your homework – no more, no less. We’ll walk you through the process of evaluating other security management platforms, in the context of your key requirements and your incumbent’s deficiencies.

There are two major difficulties during this phase of the process. First, you need to get close to some of the other SIEM solutions in order to dig in and determine what the other SIEM providers legitimately deliver, and what is marketing fluff. Second, you’re not exactly comparing apples to apples. Some new platforms offer advantages because they use different data models and deployment options, which demands careful analysis of how a new tool can and should fit into your IT environment and corporate culture. Accept that some capabilities require you to push into new areas that are likely outside your comfort zone. Let’s discuss the common user complaints – and associated solutions – which highlight differences in function, architecture, and deployment. The most common complaints we hear include: the SIEM does not scale well enough, we need more and better data, the product needs to be easier to use while providing more value, and we need to react faster given the types of attacks happening today.

Scale: With the ever-growing number of events to monitor, it’s simply not enough to buy bigger and/or more boxes to handle the exponential growth in event processing. Some SIEM vendors tried segregating reporting and alerting from collection and storage to offload processing requirements, which enables tuning each server to its particular role. This was followed by deployment models where log management collected the data (to meet scalability needs) and delivered a heavily filtered event stream to the SIEM, reducing the load the SIEM needed to handle. But this is a stopgap. New platforms address many of the architectural scaling issues with purpose-built data stores that provide fully distributed processing. These platforms can flexibly divide event processing/correlation, reporting, and forensic analysis. For more information on SIEM scaling architectures, consult our Understanding and Selecting a SIEM/Log Management report.

Data: Most platforms continue to collect data from an increasing number of devices, but many fail in two areas. First, they have failed to climb out of the network and server realm to monitor applications in more depth. Second, many platforms suffer from over-normalization – literally normalizing the value right out of collected data. For many platforms, normalization is critical to address scalability concerns. This, coupled with poorly executed correlation and enrichment, produces data of limited value for analysis and reporting – which defeats the purpose.
For example, if you need detailed information for business analytics, you’ll need new agents on business systems – collecting application, file system, and database information that is not included in syslog. Oh, the horror – but going beyond syslog is no longer optional. The format of this data is non-standard, the important aspects of an application event or SQL query are not easily extracted, and both vary by application. At times you might feed these events through a typical data normalization routine and see nothing out of the ordinary. But if you examine the original transaction and dig into the actual query, you might find SQL injection – as the sketch at the end of this post illustrates. Better data means both broader data collection options and more effective processing of the collected data.

Easier: This encompasses several aspects: automation of common tasks, (real) centralized management, better visualization, and analytics. Rules that ship out of the box have traditionally been immature (and mostly useless), as they were written by tech companies with little understanding of your particular requirements. Automated reporting and alerting features got a black eye because they returned minimally useful information, requiring extensive human intervention to comb through thousands of false positives. The tool was supposed to help – not create even more work. Between better data collection, more advanced analytics engines, and easier policy customization, the automation capabilities of SIEM platforms have evolved quickly. Centralized management is not just a reporting dashboard across several products – we call that integration on the screen. To us, centralized management means both reporting events and the ability to distribute rules from a central policy manager, tuning them on an enterprise basis. This is something most products cannot do, but it is very important in distributed environments where you want to push processing closer to the point of attack. Useful visualization – not just shiny pie charts, but real graphical representations of trends, meaningful to the business – can help make decisions easier.

Speed: Collect, move the data to a central location, aggregate, normalize, correlate, and then process – that is a somewhat antiquated SIEM model. Welcome to 2002. Newer SIEMs inspect events and perform some pre-processing prior to storage to enable near-real-time analysis, as well as performing post-correlation analysis. These actions are computationally expensive, so recognize that these advancements are predicated on an advanced product architecture and an appropriate deployment model. As mentioned in the data section, this requires SIEM functions (analysis, correlation, etc.) to be pushed closer to the collector nodes – and in some cases even into the data collection agent.

Between your requirements and the SIEM advances you need to focus on, you are ready to evaluate other platforms. Here’s a roadmap: Familiarize yourself with SIEM vendors: Compare and contrast their capabilities in the context of your requirements. There are a couple dozen vendors, but it’s fairly easy to eliminate many by reviewing their product marketing materials. You use a “magic
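Here is the sketch promised above: a minimal, hypothetical illustration of how over-normalization can hide SQL injection. The event format, field names, and detection pattern are all invented for illustration – real agents, normalizers, and rules are far more involved – but the failure mode is the same: the normalized record looks routine, and only the raw query carries the evidence.

```python
# A minimal sketch of the over-normalization problem.
# The event format and fields are hypothetical.

import re

RAW_EVENT = {
    "source": "app-db-01",
    "user": "webapp",
    "action": "SELECT",
    "query": "SELECT * FROM orders WHERE id = '1' OR '1'='1' --",
}

def normalize(event):
    """Typical normalization: keep only the common-denominator fields.
    The raw query -- and the injection evidence it contains -- is discarded."""
    return {
        "source": event["source"],
        "user": event["user"],
        "action": event["action"],
    }

def looks_like_sqli(query):
    """Crude check for classic injection patterns (tautologies, comments)."""
    return bool(re.search(r"('\s*OR\s*'1'\s*=\s*'1'|--|;\s*DROP\s)", query, re.I))

normalized = normalize(RAW_EVENT)
print("Normalized event:", normalized)                    # nothing out of the ordinary
print("Raw query suspicious?", looks_like_sqli(RAW_EVENT["query"]))  # True
```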


Security Management 2.0: Platform Evaluation, Part 1

To understand the importance of picking a platform, as opposed to a product, when discussing Security Management 2.0, let’s draw a quick contrast between what we see when talking to customers of Log Management and customers of SIEM. Most of the Log Management customers we speak with are relatively happy with their products. They chose a log-centric offering based on limited use cases – typically compliance-driven, requiring only basic log collection and reporting. These products keep day-to-day management overhead low, and if they support the occasional forensic audit, customers are generally happy. Log Management is an important – albeit basic – business tool. Think of it like buying a can opener: it needs to perform a basic function and should always perform as expected. Customers don’t want their can opener to sharpen knives, tell time, or let the cat out – they just want to open cans. It’s not that hard. Log Management benefits from its functional simplicity – and even more from relatively modest expectations.

Contrast that against conversations we have with SIEM customers. They have been at it for 5 years (maybe more), and as a result the scope of their installations is massive – in terms of both infrastructure and investment. They grumble about the massive growth in event collection driven by all these new devices. They need to collect nearly every event type, and often believe they need real-time response. The product had better be fast and provide detailed forensic audits. They depend on the compliance reports for their non-technical audience, along with detailed operational reports for IT. SIEM customers fight a daily yin vs. yang battle between automation and generic results; between efficiency and speed; between easy and useful. It’s like a can opener attached to an entire machine shop, so everything is a lot more complicated. You can open a can, but first you have to fabricate it from sheet metal.

We use this analogy because it’s important to understand that there are a lot of moving parts in security management, and setting appropriate expectations is probably more important than any specific technical feature or function. So your evaluation of whether to move to a new platform needs to stay laser-focused on the core requirements for success. In fact, the key to the entire decision-making process is understanding your requirements, as we outlined in the last post. We keep harping on this because it’s the single biggest determinant of the success of your project.

When it comes to evaluating your current platform, you need to think about the issue from two perspectives, so we will break this discussion into two posts. First is the formal evaluation of how well your platform addresses your current and foreseeable requirements. This is necessary to quantify the critical features you depend on, as well as to identify significant deficiencies. A side benefit is that you will be much better informed if you do decide to look for a replacement. Second, we will look at some of the evolving use cases and the impact of newer platforms on operations and deployment – both good and bad. Just because another vendor offers more features and performance does not mean it’s worth replacing your SIEM. The grass is not always greener on the other side. The former is critical for the decision process later in this series; the latter is critical for understanding the ramifications of replacement.
The first step in the evaluation process is to use the catalog of requirements you have built to critically assess how well the current SIEM platform meets your needs. This means spelling out each business function, how critical it is, and whether the current platform gets it done. You’ll need to discuss these questions with stakeholders from operations, security, compliance, and any other organizations that participate in the management of the SIEM or take advantage of it. You cannot make this decision in a vacuum, and lining up support early in the process will pay dividends later on. Trust us on that one. Operations will be the best judge of whether the platform is easy to maintain and how straightforward it is to implement new policies. Security will have the best understanding of whether forensic auditing is adequate, and compliance teams are the best source of information on the suitability of reports for preparing for an audit. Each audience provides a unique perspective on the criticality of each function, and the effectiveness of the current platform.

In some cases you will find that the incumbent platform flat-out does not fill a requirement – that makes the analysis pretty easy. In other cases the system works perfectly, but is a nightmare in terms of maintenance and care & feeding for any system or rule changes. In most cases you will find that performance is less than ideal, but it’s not clear what that really means, because the system could always be faster when investigating a possible breach. It may turn out the SIEM functions as desired, but simply lacks the capacity to keep up with all the events you need to collect, or takes too long to generate actionable reports. Act like a detective, collecting these tidbits of information, no matter how small, to build the story of the existing SIEM platform in your environment. This information will come into play later when you weigh options, and we recommend using a format that makes it easy to compare and contrast issues.

We offer the following table as an example of one method of tracking requirements, based on the minimum attributes you should consider. Security, compliance, management, integration, reporting, analysis, performance, scalability, correlation, and forensic analysis are all areas you need to evaluate in terms of your revised requirements. Prioritization of existing and desired features helps streamline the analysis. We reiterate the importance of staying focused on critical items, to avoid “shiny object syndrome” driving you to select the pretty new thing while ignoring a cheap dull old saw that gets the work done. As we mentioned, evaluating
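By way of illustration, a hypothetical slice of such a requirements scorecard might look like this – the rows, priorities, and assessments below are invented, and yours will differ:

```
Requirement             Priority   Current Platform
----------------------  ---------  --------------------------------
Compliance reporting    Critical   Meets needs; reports need tuning
Real-time correlation   High       Significant gaps at peak load
Forensic analysis       High       Adequate, but searches are slow
Scalability             High       At collection capacity today
Ease of management      Medium     High care & feeding overhead
```

Keep the ratings short and brutally honest – the goal is a side-by-side comparison later in the process, not a marketing document.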


Incite 8/31/2011: The Glamorous Life

It was a Sunday like too many other Sundays. Get up, take the kids to Sunday school, grab lunch with friends, then take the kids to the pool. Head home, shower up, kiss the Boss and kids goodbye, and head off to the airport. Again. Another week, another business trip. It’s a glamorous life.

I pass through security and suffer the indignity of having some (pleasant enough) guy grope me, because I won’t pass through an X-ray machine, because the asshats at TSA don’t understand the radiation impact. Maybe it makes other folks feel safe, but it’s just annoying to people aware of how ridiculous airport security theater really is. Man, how glamorous is that experience? When I arrive at my destination (at 1am ET), I get on a tram with all the other East Coast drones and wait in line to get my rental car. The pleasant 24-year-old trying to climb the corporate ladder by dealing with grumps like me reminds me why I shouldn’t depend on my AmEx premium rental car insurance. I not-so-politely decline. She doesn’t want an explanation of why she is wrong, and I don’t offer it. Glamor, baby, yeah!

I get to the hotel, which is comfortable enough. I plan to sleep in a bit (since I’m now on the West Coast), and at 5am realize the hotel is literally right next to mass transit. Every 5 minutes a train passes by. Awesome. I’m glad my body thinks it’s 8am, or I’d probably be a bit upset. And the incredible breakfast buffet is perfect. Lukewarm hard-boiled eggs for protein. And a variety of crap cereals. At least they have a waffle maker. So much for my Primal breakfast. With this much glamor, I’m surprised I don’t see Trump at the buffet.

But then my strategy day starts, and now I remember why I do this. We have a great meeting, with candid discussions, intellectual banter, and lots of brainstorming. I like to think we made some progress on my client’s strategic priorities. Or I could be breathing my own exhaust. Either way, it’s all good. I find a great salad bar for dinner and listen to the Giants’ pre-season game on my way back to the hotel. Sirius in the rental car for the win.

When I wake up the next morning, it’s different. Thankfully the breakfast buffet isn’t open yet. I head to the airport. Again. It takes me a little while to find a gas station to fill up the car. Oh well, it doesn’t matter – I’m going home. I pass through security without a grope, get an upgrade, and settle in. As we take off, I am struck by the beauty of our world. The sun poking through the clouds as we climb. The view of endless clouds that makes it look like we are in a dream. The view of mountains thousands of feet below. Gorgeous. So maybe it’s not a glamorous life, but it is beautiful. And it’s mine. For that I’m grateful.

-Mike

Photo credits: “Line for security checkpoint at Hartsfield-Jackson Airport in Atlanta” originally uploaded by Rusty Tanton

Incite 4 U

Painting the Shack gray: If you know Dave Shackleford, it’s actually kind of surprising to see Dave discuss the lack of Black or White in the security world. He’s not your typical shades-of-gray type of guy. Dave will go to the wall to defend what he believes, and frequently does. A lot of the time, he’s right. In this post he makes a great point, which I paraphrase as: everyone has their own truth. There are very few absolutes in security or life. What is awesome for you may totally suck for me. But what separates highly functioning folks from assholes is the ability to agree to disagree.
Unfortunately a lot of folks fall into the asshole camp, because they can’t appreciate that someone else’s opinion may be right given their different circumstances. I guess you need to be wrong fairly frequently (as I have been throughout my career) to learn to appreciate the opinions of other folks, even if you think they are wrong. – MR

Betting on the wrong cryptohorse: I will be the first to admit that I never went to business school, although I did manage IT at one. So I probably missed all those important MBA lessons like how to properly teamify or synergistically integrate holistic accounting process management. Instead I stick to simple rules like, “Don’t make it hard for people to give you money,” and “Don’t build a business that completely relies on another company that might change its mind.” For example, there are a few companies building out encryption solutions that are mostly focused on protecting data going into Salesforce.com. Seems like the sort of thing Salesforce themselves might want to offer someday, especially since data protection is one of the bigger inhibitors of their enterprise customer acquisition process. So we shouldn’t be surprised that they bought Navajo Systems. Great for Navajo, not so much for everyone else. Sure, there are other places they can encrypt, but that was the biggest chunk of the market and it won’t be around much longer. On that note, I need to get back to coding our brand new application. Don’t worry, it only runs on the HP TouchPad – I’m sure that’s a safe bet. – RM

Cutting off their oxygen: Brian Krebs’ blog remains a favorite of mine, and his recent posts on Fake AV and the Pharma Wars read like old-fashioned gangsters-vs.-police movies. Fake AV is finally being slowed by very traditional law enforcement methods, as Ed Bott pointed out in his analysis of MacDefender trends. Identifying the payment processors and halting payments to the criminal organizations, as well as arresting some of the people directly responsible, actually works. Who knew? The criminals are using fake charities to funnel money to politicians in order to protect their illegal businesses. Imagine that! We know defenses and education to help secure the general public


Fact-Based Network Security: Compliance Benefits

As we discussed in the last post, beyond the operational value of fact-based network security, compliance efforts can benefit greatly from gathering data and being able to visualize and report on it. Why? Because compliance is all about substantiating your control set to meet the spirit of whatever regulatory regime you answer to. Let’s run through a simple example.

During a PCI assessment, the trusty assessor shows up with his/her chart of requirements. Requirement 1 reads “Install and maintain a firewall configuration to protect cardholder data.” You have two choices at this point. The first is to tell the auditor you have this, and hope they believe you. Yeah, probably not a recipe for success. Or you could consult your network security fact base and pull a report on network topology, which shows your critical data stores (based on assessments of their relative value), the firewalls in place to protect them, and the flow of traffic through the network to the critical assets/business systems. Next the auditor needs to understand the configuration of the devices, to make sure unauthorized protocols are not allowed through the firewalls to expose cardholder data. Luckily, the management system also captures firewall configurations on an ongoing basis. So you have current data on how each device is configured, and can show that the protocols in question are blocked. You can also explicitly show which IP addresses and/or devices can traverse the device, using which protocols or applications (in the case of a new, fancy application-aware firewall). You close out this requirement by showing some of the event logs from the device, which demonstrate what was blocked by the firewall and why. The auditor may actually smile at this point, will likely check the box in the chart, and should move on to the next requirement.

Prior to implementing your fact-based network security process, you spent a few days updating the topology maps (damn Visio), massaging the configuration files to highlight the relevant configuration entries (using a high-tech highlighter), and finally going through a zillion log events to find a few examples to prove the policies are operational. Your tool doesn’t make audit prep as easy as pressing a button, but it’s a lot closer than working without tools.

Going where the money is

To be clear, compliance is a necessary evil in today’s security world. Many of the projects we need to undertake have at least tangential compliance impact. Given the direct cost of failing an audit, the potential need to disclose an issue to customers and/or shareholders, and applicable fines, most large organizations have a pot of money to make the compliance issue go away. Smart security folks still think Security First, which means you continue to focus on implementing the right controls to protect the information that matters to you. But success still hinges on your ability to show how the project can impact compliance, either by addressing audit deficiencies or by making the compliance process more efficient, thus saving money. It’s probably not a bad idea to keep time records detailing how long it takes your organization to prepare for a specific audit without some level of automation. The numbers will likely be pretty shocking. In many cases, the real costs in time and resources will pay for the tools to implement a fact-based network security process.
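To make the configuration-evidence idea concrete, here is a minimal sketch of an automated check of firewall rules against policy. The rule format, names, and policy below are all hypothetical – real firewall management tools have their own export formats and APIs – but the logic is the point: the evidence is generated from captured facts, not assembled by hand.

```python
# A minimal sketch of checking captured firewall rules against policy.
# Rule format and policy are hypothetical.

ALLOWED = {("tcp", 443), ("tcp", 22)}   # services permitted to reach cardholder systems

# Hypothetical parsed firewall rules: (action, protocol, port, destination)
RULES = [
    ("permit", "tcp", 443, "cardholder-db"),
    ("permit", "tcp", 23,  "cardholder-db"),   # telnet -- should not be here
    ("deny",   "ip",  0,   "any"),
]

def audit(rules, allowed):
    """Flag permit rules into the cardholder environment that policy doesn't allow."""
    findings = []
    for action, proto, port, dest in rules:
        if action == "permit" and dest == "cardholder-db" and (proto, port) not in allowed:
            findings.append(f"Unauthorized service exposed: {proto}/{port} -> {dest}")
    return findings

for finding in audit(RULES, ALLOWED):
    print(finding)   # evidence for the assessor, generated on demand
```

Run continuously against captured configurations, this kind of check produces exactly the artifacts the assessor asks for.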
As we wrap up our blog series in the next post, we’ll take this from theory to practice, running through a scenario to show how this kind of approach would impact your operational security.


The Mobile App Sec Triathlon

A quick announcement for those of you interested in mobile application security: our very own Gunnar Peterson is putting on a three-day class with Ken van Wyk this coming November. The Mobile App Sec Triathlon will provide a cross-platform look at mobile application security issues, and spotlight critical areas of concern. The last two legs of the Triathlon cover specific areas of Android and iOS security that are commonly targeted by attackers. You’ll be learning from some of the best – Ken is well known for his work in secure coding, and Gunnar is one of the world’s best at identity management. Classes will be held at the eBay/PayPal campus in San Jose, California. Much more information is on the web site, including a picture of Gunnar with his ‘serious security’ face, so check it out. If you have specific questions or want to make sure specific topics are covered during the presentation, email info@mobileappsectriathlon.com.


Detecting and Preventing Data Migrations to the Cloud

One of the most common modern problems facing organizations is managing data migrating to the cloud. The very self-service nature that makes cloud computing so appealing also makes unapproved data transfers and leakage possible. Any employee with a credit card can subscribe to a cloud service and launch instances, deliver or consume applications, and store data on the public Internet. Many organizations report that individuals or business units have moved (often sensitive) data to cloud services without approval from, or even notification to, IT or security. Aside from traditional data security controls such as access controls and encryption, there are two other steps to help manage unapproved data moving to cloud services:

  • Monitor for large internal data migrations with Database Activity Monitoring (DAM) and File Activity Monitoring (FAM).
  • Monitor for data moving to the cloud with URL filters and Data Loss Prevention.

Internal Data Migrations

Before data can move to the cloud it needs to be pulled from its existing repository. Database Activity Monitoring can detect when an administrator or other user pulls a large data set or replicates a database. File Activity Monitoring provides similar protection for file repositories such as file shares. These tools can provide early warning of large data movements. Even if the data never leaves your internal environment, this is the kind of activity that shouldn’t occur without approval. These tools can also be deployed within the cloud (public and/or private, depending on architecture), so they can also help with inter-cloud migrations.

Movement to the Cloud

While DAM and FAM indicate internal movement of data, a combination of URL filtering (web content security gateways) and Data Loss Prevention (DLP) can detect data moving from the enterprise into the cloud. URL filtering allows you to monitor (and prevent) users connecting to cloud services. The administrative interfaces for these services typically use different addresses than the consumer side, so you can distinguish between someone accessing an admin console to spin up a new cloud-based application and a user accessing an application already hosted with the provider. Look for a tool that offers a list of cloud services and keeps it up to date, as opposed to one where you need to create a custom category and manage the destination addresses yourself. Also look for a tool that distinguishes between different users and groups, so you can allow access for different employee populations.

For more granularity, use Data Loss Prevention. DLP tools look at the actual data/content being transmitted, not just the destination. They can generate alerts (or block) based on the classification of the data. For example, you might allow corporate private data to go to an approved cloud service, but block the same content from migrating to an unapproved service. As with URL filtering, look for a tool that is aware of the destination address and comes with pre-built categories. Since all DLP tools are aware of users and groups, that should come by default.

This combination isn’t perfect, and there are plenty of scenarios where these tools might miss activity, but that is a whole lot better than completely ignoring the problem. Unless someone is deliberately trying to circumvent security, these steps should capture most unapproved data migrations.
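A minimal sketch of how the URL-filtering and DLP combination fits together, under heavy assumptions: the service list, classification pattern, and policy below are all invented, and real gateways and DLP products maintain these categories and content analyzers for you.

```python
# A minimal sketch of combining destination awareness (URL filtering)
# with content classification (DLP). All names and patterns are hypothetical.

import re

CLOUD_SERVICES = {
    "console.aws.example.com":    "admin",     # admin console, vs.
    "files.dropbox.example.com":  "storage",   # consumer-facing storage
}

APPROVED_FOR_PRIVATE_DATA = {"console.aws.example.com"}

PRIVATE_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g., US SSN-like strings

def check_upload(destination, content):
    """Decide allow/block based on destination category and content classification."""
    if destination not in CLOUD_SERVICES:
        return "allow"  # not a known cloud service; the URL filter passes it
    is_private = any(p.search(content) for p in PRIVATE_PATTERNS)
    if is_private and destination not in APPROVED_FOR_PRIVATE_DATA:
        return "block: private data to unapproved cloud service"
    return "allow"

print(check_upload("files.dropbox.example.com", "customer SSN 123-45-6789"))
# -> block: private data to unapproved cloud service
```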


Fact-Based Network Security: Operationalizing the Facts

In the last post, we talked about outcomes important to the business, and the types of security metrics that can inform decisions to achieve those outcomes. Most organizations do pretty well with the initial gathering of this data. You know, when the reports are new and the pie charts are shiny. Then the reality – the amount of work and commitment required to implement a consistent measurement and metrics process – sets in. That is when most organizations lose interest and the metrics program falls by the wayside. Of course, if there is a clear and tangible connection between gathering data and doing your job better, you make the commitment and stick with it. So it’s critical, especially in the early phases of a fact-based network security process, to get a quick win and capitalize on that momentum to cement the organization’s commitment to this model. We’ll discuss that aspect later in the series.

But consistency is only one part of implementing this fact-based network security process. In order to get a quick win and warrant ongoing commitment, you need to make sense of the data. This issue has plagued technologies such as SIEM and Log Management for years – having data does not mean you have useful and valuable information. We want to base decisions on facts, not faith. To do that, you need to make gathering security metrics an ongoing and repeatable process, and ensure you can interpret the data efficiently. The keys to both are automation and visualization.

Automating Data Collection

Now that you know what kind of data you are looking for, can you collect it? In most cases the answer is yes. From that epiphany, the focus turns to systematically collecting the types of data we discussed in the last post. Data sources like device configuration, vulnerability, change information, and network traffic can be collected systematically in a leveraged fashion. There is usually a question of how deeply to collect data – whether you need to climb the proverbial stack to gather application and database events/logs/transactions, etc. In general, we Securosis folk advocate collecting more data rather than less. Not all of it may be useful now (or ever), but once you miss the opportunity to capture data you don’t get it back. It’s gone. Of course, which data sources to leverage depends on the problems you are trying to solve. Remember, data does not equal information, and as much as we’d like to push you to capture everything, we know it’s not feasible. So balance data breadth and fidelity against cost and storage realities. Only you can decide how much data is enough to answer the questions that drive your prioritization. We tend to see most organizations focus on network, security, and server logs/events – at least initially – mostly because that information is plentiful and largely useful for pinpointing attacks and substantiating controls. It’s beyond the scope of this paper to discuss the specifics of different platforms for collecting and analyzing this data, but you should already know the answer is not Excel. There is just too much data to collect and parse, so at minimum you need some kind of platform to automate this process.

Visualization

Next we come up against that seemingly intractable issue of making sense of the data you’ve collected. Here we see (almost every day) that a picture really is worth thousands of words (or a stream of thousands of log events).
In practice, pinpointing anomalies and other suspicious areas which demand attention is much easier visually – so dashboards, charts, and reports become a key part of operationalizing metrics. Right – those cool graphics available in most security management tools are more than eye candy. Who knew? So which dashboards do you need? How many? What should they look like? Of course it depends on which questions you are trying to answer. At the end of this series we will walk through a scenario to describe (at a high level, of course) the types of visualizations that become critical to detecting an issue, isolating its root cause, and figuring out how to remediate it. But regardless of how you choose to visualize the data you collect, you need a process of constant iteration and improvement. It’s that commitment thing again. In a dynamic world, things constantly change. That means your alerting thresholds, dashboards, and other decision-making tools must evolve accordingly. Don’t say we didn’t warn you.

Making Decisions

As we continue through our fact-based network security process, you now have a visual mechanism for pinpointing potential issues. But if your environment is like others we have seen, you’ll have all sorts of options for what to do about them. We come full circle, back to defining what is important to your organization. Some tools can track asset value and build visuals based on those values. Understand that value in this context is basically a subjective guess at what something is worth – someone could arbitrarily decide that a print server is as important as your general ledger system. Maybe it is, but this gets back to the concept of “relative value” from earlier in the series. This relative understanding of an asset/business system’s value yields a key answer for how you should prioritize your activities – a simple sketch of the idea follows this post. If the visualization shows something of significant value at risk, fix it. Really. We know that sounds too simple, and may even be so obvious it’s insulting. We mean no offense, but most organizations have no idea what is important to them. They collect very little data, and thus have little understanding of what is really exposed or potentially under attack. So they have no choice but to fly blind and address whatever issue is next on the list, over and over again. As we have discussed, that doesn’t work out very well, so we need a commitment to collecting and then visualizing data, in order to
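Here is that sketch: a minimal, hypothetical illustration of prioritizing by relative value. The asset names, values, and exposure scores are invented; in practice they come from your own (admittedly subjective) asset valuations and your monitoring data.

```python
# A minimal sketch of relative-value prioritization.
# Asset names, values, and exposure scores are hypothetical.

ASSETS = {
    # asset: (relative business value 1-10, current exposure 0.0-1.0)
    "general-ledger":  (10, 0.7),
    "customer-portal": (8,  0.9),
    "print-server":    (2,  0.9),   # quite exposed, but relatively unimportant
}

def prioritize(assets):
    """Rank remediation work by value-at-risk: relative value x exposure."""
    return sorted(assets, key=lambda a: assets[a][0] * assets[a][1], reverse=True)

for asset in prioritize(ASSETS):
    value, exposure = ASSETS[asset]
    print(f"{asset}: value={value} exposure={exposure:.1f} risk={value * exposure:.1f}")
```

Note how the exposed-but-unimportant print server falls to the bottom of the list, which is exactly the point of weighting by relative value rather than chasing whatever alert arrives next.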


Friday Summary (Not Too Morbid Edition): August 26, 2011

Last Thursday I thought I was dying. Not a joke. Not an exaggeration. As in “approaching room temperature”. I was just outside D.C., having breakfast with Mike before going to teach the CCSK instructors class. In the middle of a sentence I felt… something. Starting from my chest, I felt a rush to my head. An incredibly intense feeling, on the edge of losing consciousness. Literally out of nowhere, while sitting. I paused, told Mike I felt dizzy, and then the second wave hit. I said, “I think I’m going down”, told him to call 9-1-1, and had what we in the medical profession call “a feeling of impending doom”. I thought I was having either an AMI (acute myocardial infarction – a heart attack, not the cloud thing) or a stroke. I’ve been through a lot over the years and nothing, nothing, has ever hit me like that. The next thoughts in my head were what I know my last thoughts on this planet will be. I never want to experience them again.

Seconds after this hit I checked my pulse, since that feeling was like what many patients with an uncontrolled, rapid heart rate describe. But mine was only up slightly. It tapered off enough that I didn’t think I was going to crash right then and there. Fortunately Mike is a bit… inexperienced… and instead of calling 9-1-1 with his cell phone he got up to tell the restaurant. I stopped him; the feeling relented a bit more, and I asked if there was a hospital close by (Mike lived in that area for 15 years). There was one down the road and he took me there. (Never do that. Call the ambulance – we medical folks are freaking idiots.)

I spent the next 29 hours in the hospital being tested and monitored. Other than a slightly elevated heart rate, everything was normal. CT scan of the head, EKG, blood work to rule out a pulmonary embolus (a common traveling thing), echocardiogram, chest x-ray, and more. I ate what I was told was a grilled cheese sandwich. Assuming that was true, I’m certain it was microwaved and the toast marks airbrushed. Once they knew I wasn’t going to die they let me loose, and I flew home (a day late). I won’t lie – I was pretty shaken up. Worse than when I fell 30 feet rock climbing and punctured my lung. Worse than skiing through avalanche terrain, or the time my doctor called to ask “are you close to the hospital?” after a wicked infection. Especially with my rescue and extreme sports background, I’ve been in a lot of life-risking situations, but I never before thought “this is it”.

Tuesday I went to the doctor, and after a detailed history and review of the reports she thinks it was an esophageal spasm. The nerves in your thorax aren’t always very discriminating. They are like old Ethernet cables, prone to interference and crosstalk. A spasm in the wrong spot will trigger something that is essentially indistinguishable (to your brain) from a heart attack. I’ve been having some reflux lately from all the road food, so it makes sense. There are more tests on the way, but it seems you are all stuck with me for much, much longer. All that testing was like the best physical ever, and I’m in killer good shape. But I am going to chill a bit for the next few weeks, which was in the works anyway. False positives suck. Now I know why you all hate IDS.

Update: I was talking with our pediatrician, and he went through the same thing once. He asked, “Can I ask you a personal question?” “Sure,” I replied. “So what was running through your head when it happened?” I said, “I can’t believe I won’t be there for my girls”.
“Oh good,” he went, “I’ve never talked to anyone else who went through it, but I was trying to figure out if I had enough life insurance for my family.” And a coworker of my wife’s mentioned she had the same thing, and called her kids to say goodbye. To be honest, now I don’t feel so bad. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted on dangers to law enforcement from recent hack. My Spanish is good, no?
  • Adrian’s DR article on Fraud Detection and DAM.
  • Rich, Zach, and Martin on the Network Security Podcast.

Favorite Securosis Posts

  • Adrian Lane: Cloud Security Q&A from the Field.
  • Mike Rothman: Spotting That DAM(n) Fake. Grumpy Adrian is a wonder to behold. And he is definitely grumpy in this post.
  • David Mortman: Spotting That DAM(n) Fake.
  • Rich: Beware Anti-Malware Snake Oil.

Other Securosis Posts

  • Security Management 2.0: Revisiting Requirements.
  • Fact-based Network Security: Outcomes and Operational Data.
  • Incite 8/24/2011: Living Binary.
  • Security Management 2.0: Platform Evolution.

Favorite Outside Posts

  • Adrian Lane: Visa Kills PCI Assessments and Wants Your Processor to Support EMV. This is the carrot I mentioned, which Visa is offering to encourage adoption. As Branden points out, most merchants take more than Visa, but I expect MC to follow suit.
  • Mike Rothman: National Archives Secret Question Fail. H/T to the guys at 37Signals for pointing out this security FAIL.
  • David Mortman: Soft switching might not scale, but we need it.
  • Rich: Wim Remes petitioning to get on the ISC2 ballot. Although I burned someone’s certificate on stage at DefCon, the organization could do some good if they changed direction. (No, I don’t have a CISSP… as a DefCon goon I’m not sure how to answer that whole “Do you associate with hackers?” question.)

Research Reports and Presentations

  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.

Top News and Posts

  • Chinese Military


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.