NSO Quant: The End is Near!

As mentioned last week, we've pulled the NSO Quant posts out of the main feed because the volume was too heavy, so I have been cross-linking to let those of you who don't follow that feed know when new material appears over there. Well, at long last, I have finished all the metrics posts. The final post is… (drum roll, please): NSO Quant: Health Metrics – Device Health.

I've also put together a comprehensive index post, basically because I needed a single location to find all the work that went into the NSO Quant process. Check it out – it's actually kind of scary to see how much work went into this series. 47 posts. Oy!

Finally, I'm in the process of assembling the final NSO Quant report, which means I'm analyzing the survey data right now. If you want a chance at the iPad, you need to fill out the survey (you must complete the entire survey to be eligible) by tomorrow at 5pm ET. We'll keep the survey open beyond that, but the iPad will be gone.

Given the size of the main document – 60+ pages – I will likely split the actual metrics model into a stand-alone spreadsheet, so that and the final report should be posted within two weeks.


Friday Summary: September 24, 2010

We are wrapping up a pretty difficult summer here at Securosis. You have probably noticed from the blog volume that we have been swamped with research projects. Rich, Mike, and I have barely spoken with one another over the last couple months because we are heads-down, researching and writing as fast as we can. No time for movies, parties, or vacation travel. These Quant projects we have been working on make us feel like we have been buried in sand. I have been this busy several times during my career, but I can't say I have ever been busier. I don't think that would be possible, as there are not enough hours in the day! Mike's been hiding at undisclosed coffee shops to the point that his family had his face put on a milk carton. Rich has taken multitasking to a new level by blogging in the shower with his iPad. Me? I hope to see the shower before the end of the month.

I must say, despite the workload, projects like Tokenization and PCI Encryption have been fun. There is light at the end of the proverbial tunnel, and we will even start taking briefings again in a couple weeks. But what really keeps me going is having work to do. If I even think about complaining about the workload, something in the back of my brain reminds me that it is very good to be busy. It beats the alternative.

By the time this post goes live I will be taking part of the day off to help friends load all their personal belongings into a truck. After 26 years with the same employer, one of my friends here in Phoenix was laid off. He and his wife, like many of the people I know in Arizona, are losing their home. 22 years of accumulated stuff to pack… whatever is left from the various garage sales and giveaways. This will be the second friend I have helped move in the last year, and I expect it will happen a couple more times before this economic depression ends. As depressing as that may sound, after 14 months of haggling with the bank, I think they are just relieved to be done with it and move on. They now feel relief from the pressure, and in some ways are looking forward to the next phase of their life. And the possibility of employment. Spirits are high enough that we'll actually throw a little party and celebrate what's to come. Here's to being busy!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Seven Features To Look For In Database Assessment Tools.
  • Mike's presentation on Endpoint Security Fundamentals.
  • Adrian's Dark Reading post: Protegrity Gets Aggressive.
  • Adrian quoted in TechTarget. And I'll probably catch hell for this.

Favorite Securosis Posts

  • Rich: Monitoring up the Stack: Threats. Knowing what to monitor, and how to pull value from it, is a heck of a lot tougher than merely collecting data. Mike and Adrian are digging in and showing us how to focus.
  • Mike Rothman: Monitoring up the Stack: Threats. This blog series is getting going and it's going to be cool. Getting visibility beyond just the network/systems is critical.
  • David Mortman: Monitoring up the Stack: Threats.
  • Adrian Lane: FireStarter: It's Time to Talk about APT.

Other Securosis Posts

  • Government Pipe Dreams.
  • NSO Quant: Clarifying Metrics (and some more links).
  • Monitoring up the Stack: File Integrity Monitoring.
  • Incite 9/22/2010: The Place That Time Forgot.
  • New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution.
  • NSO Quant: Manage Process Metrics, Part 1.
  • Understanding and Selecting an Enterprise Firewall: Selection Process.
  • Upcoming Webinar: Selecting SIEM.
Favorite Outside Posts

  • Rich: 2010 Website Security Statistics Report. Once again, Jeremiah provides some absolutely amazing numbers on the state of web site security. He pulled together stats from over 2,000 web sites across 350 organizations to give us all excellent benchmarks for things like numbers and types of vulnerabilities (by vertical) and time to remediate. Truly excellent, unbiased work.
  • Mike Rothman: Do you actually care about privacy? Lots of us say we do. Seth Godin figures we are more worried about being surprised. It makes you think.
  • Chris Pepper: evercookie: doggedly persistent cookies. By the guy who XSSed MySpace!
  • David Mortman: Cyber Weapons.
  • Adrian Lane: Titanic Secret Revealed. A serious case of focusing on the wrong threat!
  • Chris Pepper: Little Bobby Tables moves to Sweden.

Project Quant Posts

  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations

  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Twitter Worm Outbreak. The most interesting security event of the week.
  • New Security Microchip Vuln.
  • Mac OS (iOS and OS X) Security Updates.
  • New Autofill Hack Variant.
  • VMware Security Hardening Guide (PDF).
  • evercookie. Many of you probably saw the re-tweet stream this week. Yes, this looks nasty and a pain in the ass to remove. Maybe I need to move all my browsing to temporary partitions.
  • More Conjecture on Stuxnet Malware, and some alternate opinions. And some funny quotes on Schneier's blog.
  • My relentless pursuit of the guy who robbed me. Cranky amateur cyber-sleuth FTW!
  • DRG SSH Username and Password Authentication Tag Clouds. Nice rendering of human nature (call it laziness or stupidity, as you prefer).

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to ds, in response to FireStarter: It's Time to Talk about APT:

I think you are oversimplifying the situation regarding the reasons for classifying information. It is well known that information has value, and sometimes that value diminishes if others are aware you know it. Consider the historical case of the Japanese codes in WWII. If the


NSO Quant: Clarifying Metrics (and some more links)

We had a great comment from Dan on one of the metrics posts, and it merits an answer with some explanation, because in the barrage of posts the intended audience can certainly get lost. Here is Dan's comment:

Who is the intended audience for these metrics? Kind of see this as part of the job, and not sure what the value is. To me the metrics that are critical around process are do the amount of changes align with the number of authorized requests. Do the configurations adhere to current policy requirements, etc… Just thinking about presenting to the CIO that I spent 3 hours getting consensus, 2 hours on prioritizing and not seeing how that gets much traction.

One of the pillars of my philosophy on metrics is that there are really three sets of metrics network security teams need to worry about. The first is what Dan is talking about: the numbers you need to substantiate what you are doing for audit purposes. Those are key issues, and things you have to be able to prove.

The second bucket is numbers that are important to senior management. That tends to focus on incidents and spending: how many incidents happen, how they are trending, and how long it takes to deal with each situation. On the spending side, senior folks want to know about security spend as a percentage of IT spend and of total revenue, as well as how that compares to peers.

Then there is the third bucket: the operational metrics we use to improve and streamline our processes. It's the old saw about how you can't manage what you don't measure – the metrics defined within NSO Quant represent pretty much everything we can measure. That doesn't mean you should measure everything, but the idea of this project is to decompose the processes as much as possible to provide a basis for measurement. Again, not all companies do all the process steps. Actually, most companies don't do much from a process standpoint – besides fight fires all day.

Gathering this kind of data requires a significant amount of effort and will not be for everyone. But if you are trying to understand operationally how much time you spend on things, and then use that data to trend and improve your operations, you can get payback – the sketch at the end of this post shows the sort of simple aggregation involved. Or if you want to use the metrics to determine whether it even makes sense for you to be performing these functions (as opposed to outsourcing), you need to gather the data. But clearly the CIO and other C-level folks aren't going to be terribly interested in the amount of time it takes you to monitor sources for IDS/IPS signature updates. They care about outcomes, and most of the time you spend with them needs to be focused on getting buy-in and updating status on commitments you've already made.

Hopefully that clarifies things a bit. Now that I'm off the soapbox, let me point to a few more NSO Quant metrics posts that went up over the past few days. We're at the end of the process, so there are two more posts I'll link to Monday, and then we'll be packaging the research into a pretty and comprehensive document.

  • NSO Quant: Manage Metrics – Signature Management
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS
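To make that third bucket concrete, here is a minimal sketch of the kind of operational aggregation involved – nothing more than summing time spent per process step, per week, so you can watch the trend. The CSV layout, column names, and file name are hypothetical, for illustration only; they are not part of the NSO Quant model.

```python
# Illustrative only -- a toy aggregator for operational time metrics.
# Assumes a hypothetical CSV time log with columns: step, week, minutes.
import csv
from collections import defaultdict

def time_per_step(path):
    """Sum minutes spent on each process step, per week, to spot trends."""
    totals = defaultdict(lambda: defaultdict(float))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["step"]][row["week"]] += float(row["minutes"])
    return totals

if __name__ == "__main__":
    # "nso_time_log.csv" is a made-up file name for this example.
    for step, weeks in time_per_step("nso_time_log.csv").items():
        trend = ", ".join("%s: %.0f min" % wk for wk in sorted(weeks.items()))
        print("%s -> %s" % (step, trend))
```

Feed it a few months of data and the payback question becomes simple arithmetic: is the time spent on a step shrinking, and is doing it in-house cheaper than outsourcing the function?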


Government Pipe Dreams

General Keith Alexander heads the U.S. Cyber Command and is the Director of the NSA. In prepared testimony today he said the government should set up a secure zone for itself and critical infrastructure, walled off from the rest of the Internet:

"You could come up with what I would call a secure zone, a protected zone, that you want government and critical infrastructure to work in that part," Alexander said. "At some point it's going to be on the table. The question is how are we going to do it."

Alexander said setting up such a network would be technically straightforward, but difficult to sell to the businesses involved. Explaining the measure to the public would also be a challenge, he added.

I don't think explaining it to the public would be too tough, but practically speaking this one is a non-starter. Even if you build it, it will only be marginally more secure than the current Internet. Here's why.

The U.S. government already runs its own private networks for managing classified information. For information of a certain classification, the networks and systems involved are completely segregated from the Internet. No playing Farmville on a SIPRNet-connected system. Extending this to the private sector is essentially a non-starter, at least without heavy regulation and a ton of cash.

Most of our critical infrastructure, such as power generation/transmission and financial services, also used to run on private networks. But – often against the advice of us security folks – various business pressures led to connecting these to Internet-facing systems, creating a heck of a mess. When you are allowed to check your email on the same system you use to control electricity, it's hard not to get hacked. When you put Internet-facing web applications on top of back-end financial servers, it's hard to keep the bad guys from stealing your cash.

Backing out of our current situation could probably only happen with onerous legislation and government funding. And even then, training the workforces of those organizations not to screw it up and reconnect everything to the Internet again would probably be an even tougher job. Gotta check that Facebook and email at work.

If they pull it off, more power to them. From a security perspective, isolating the network could reduce some of our risk, but I can't really imagine the disaster we'd have to experience before we could align public and private interests behind such a monumental change.


Incite 9/22/2010: The Place That Time Forgot

I don't give a crap about my hair. Yeah, it's gray. But I have it, so I guess that's something. It grows fast and looks the same, no matter what I do to it. I went through a period maybe 10 years ago where I got my hair styled, but besides ending up a bit lighter in the wallet (both from a $45 cut and all the product they pushed on me), there wasn't much impact. I did get to listen to some cool music and see good looking stylists wearing skimpy outfits with lots of tattoos and piercings. But at the end of the day, my hair looked the same. And the Boss seems to still like me regardless of what my hair looks like, though I found cutting it too short doesn't go over very well.

So when I moved down to the ATL, a friend recommended I check out an old time barber shop in downtown Alpharetta. I went in and thought I had stepped into a time machine. Seems the only change to the place over the past 30 years was a new boom box to blast country music. They probably got it 15 years ago. Aside from that, it's like time forgot this place. They give Double Bubble to the kids. The chairs are probably as old as I am. And the two barbers, Richard and Sonny, come in every day and do their job. It's actually cool to see.

The shop is open 6am-6pm Monday thru Friday and 6am-2pm on Saturday. Each of them travels at least 30 minutes a day to get to the shop. They both have farms out in the country. So that's what these guys do. They cut hair, for the young and for the old. For the infirm, and it seems, for everyone else. They greet you with a nice hello, and also remind you to "Come back soon" when you leave. Sometimes we talk about the weather. Sometimes we talk about what projects they have going on at the farm. Sometimes we don't talk at all. Which is fine by me, since it's hard to hear with a clipper buzzing in my ear. When they are done trimming my mane to 3/4" on top and 1/2" on the sides, they bust out the hot shaving cream and straight razor to shave my neck. It's a great experience.

And these guys seem happy. They aren't striving for more. They aren't multi-tasking. They don't write a blog or constantly check their Twitter feed. They don't even have a mailing list. They cut hair. If you come back, that's great. If not, oh well.

I'd love to take my boy there, but it wouldn't go over too well. The shop we take him to has video games and movies to occupy the ADD kids for the 10 minutes they take to get their haircuts. No video games, no haircut. Such is my reality.

Sure, the economy goes up and then it goes down. But everyone needs a haircut every couple weeks. Anyhow, I figure these guys will end up OK. I think Richard owns the building and the land where the shop is. It's in the middle of old town Alpharetta, and I'm sure the developers have been chasing him for years to sell out so they can build another strip mall. So at some point, when they decide they are done cutting hair, he'll be able to buy a new tractor (actually, probably a hundred of them) and spend all day at the farm. I hope that isn't anytime soon. I enjoy my visits to the place that time forgot. Even the country music blaring from the old boom box…

– Mike.

Photo credits: "Rand Barber Shop II" originally uploaded by sandman

Recent Securosis Posts

Yeah, we are back to full productivity and then some. Over the next few weeks, we'll be separating the posts relating to our research projects from the main feed.
We'll do a lot of cross-linking, so you'll know what we are working on and be able to follow the projects that interest you, but we think over 20 technically deep posts is probably a bit much for a week. It's a lot for me, and following all this stuff is my job.

We also want to thank IT Knowledge Exchange, which listed our little blog as one of their 10 Favorite Information Security Blogs. We're in some pretty good company, except that Amrit guy. Does he even still have a blog?

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls
  • New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution
  • FireStarter: It's Time to Talk about APT
  • Friday Summary: September 17, 2010
  • White Paper Released: Data Encryption 101 for PCI
  • DLP Selection Process: Infrastructure Integration Requirements; Protection Requirements; Defining the Content
  • Monitoring up the Stack: Threats; Introduction
  • Understanding and Selecting an Enterprise Firewall: Advanced Features, Part 1; Advanced Features, Part 2; To UTM or Not to UTM?; Selection Process

NSO Quant Posts

  • Manage Metrics – Signature Management
  • Manage Metrics – Document Policies & Rules
  • Manage Metrics – Define/Update Policies & Rules
  • Manage Metrics – Policy Review
  • Monitor Metrics – Validate and Escalate
  • Monitor Metrics – Analyze
  • Monitor Metrics – Collect and Store

LiquidMatrix Security Briefing: September 20; September 21

Incite 4 U

  • What's my risk again? – Interesting comments from Intel's CISO at the recent Forrester security conference regarding risk. Or more to the point, the misrepresentation of risk, either toward the positive or the negative. I figured he'd be pushing some ePO-based risk dashboard or something, but it wasn't that at all. He talked about psychology and economics, and it sure sounded like he was channeling Rich, at least from the coverage. Our pal Alex Hutton loves to pontificate about the need to objectively quantify risk, and we've certainly had our discussions (yes, I'm being kind) about how effectively you can model risk. But the point is not necessarily to get a number, but


Monitoring up the Stack: File Integrity Monitoring

We kick off our discussion of additional monitoring technologies with a high-level overview of file integrity monitoring. As the name implies, file integrity monitoring detects changes to files – whether text, configuration data, programs, code libraries, critical system files, or even the Windows registry. Files are a common medium for delivering viruses and malware, and detecting changes to key files can provide an indication of machine compromise.

File integrity monitoring works by analyzing changes to individual files. Any time a file is changed, added, or deleted, it's compared against a set of policies that govern file use, as well as signatures that indicate intrusion. Policies can be as simple as a list of operations that are not allowed on a specific file, or can include more specific comparisons of the contents and the user who made the change. When a policy is violated, an alert is generated.

Changes are detected by examining file attributes: specifically name, date of creation, time last modified, ownership, byte count, a hash to detect tampering, permissions, and type. Most file integrity monitors can also 'diff' the contents of the file, comparing before and after contents to identify exactly what changed (for text-based files, anyway). All these comparisons are made against a stored reference set of attributes – a baseline – that designates what state the file should be in and what to do in case a change is detected; optionally the file contents themselves can be stored as part of the baseline for comparison.

File integrity monitoring can be periodic, at intervals from minutes to every few days. Some solutions offer real-time threat detection that performs the inspection as files are accessed. The monitoring can be performed remotely – accessing the system with user credentials and instructing the operating system to periodically collect relevant information – or an agent can be installed on the target system that performs the data collection locally and returns the data upstream to the monitoring server.

As you can imagine, even a small company changes files a lot, so there is a lot to look at. And there are lots of files on lots of machines – as in tens of thousands. Vendors of integrity monitoring products provide a basic list of critical files and policies, but you need to configure the monitoring service to protect the rest of your environment. Keep in mind that some attacks are not fully defined by a policy, and verification/investigation of suspicious activity must be performed manually.

Administrators need to balance performance against coverage, and policy precision against adaptability. Specify too many policies and track too many files, and the monitoring software consumes tremendous resources. File modification policies designed for maximum coverage generate many false-positive alerts that must be manually reviewed. Rules must strike a balance between catching specific attacks and detecting broader classes of threats.

These challenges are mitigated in several ways. First, monitoring is limited to just those files that contain sensitive information or are critical to the operation of the system or application. Second, policies carry different criticality levels, so changes to key infrastructure or matches against known attack signatures get the highest priority. The vendor supplies rules for known threats and for compliance mandates such as PCI-DSS. Suspicious events that indicate an attack or policy violation are the next priority.
Finally, permitted changes to critical files are logged for manual review at a lower priority, to help reduce the administrative burden.

File integrity monitoring has been around since the mid-90s, and has proven very effective for detecting malware and system compromise. Changes to Windows registry files and open source libraries are common hacks, and very difficult to detect manually. While file monitoring does not help with many of the web and browser attacks that use injection or alter programs in memory, it does detect many types of persistent threats, and therefore is a very logical extension of existing monitoring infrastructure.
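To make the baseline-and-compare mechanism concrete, here is a minimal sketch. It covers only a few of the attributes discussed above (byte count, last-modified time, and a SHA-256 hash to detect tampering); real products also track ownership, permissions, and registry keys, apply prioritized policies, and protect the baseline itself from tampering. The file paths are illustrative.

```python
# Minimal file integrity monitoring sketch -- illustrative only.
import hashlib
import os

def snapshot(paths):
    """Record byte count, last-modified time, and a SHA-256 hash per file."""
    state = {}
    for path in paths:
        info = os.stat(path)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        state[path] = (info.st_size, info.st_mtime, digest)
    return state

def compare(baseline, current):
    """Yield (event, path) for files changed, added, or removed since baseline."""
    for path in baseline.keys() | current.keys():
        if path not in current:
            yield "DELETED", path
        elif path not in baseline:
            yield "ADDED", path
        elif baseline[path] != current[path]:
            yield "MODIFIED", path

if __name__ == "__main__":
    watched = ["/etc/passwd", "/etc/ssh/sshd_config"]  # illustrative targets
    baseline = snapshot(watched)  # persist this once, somewhere tamper-proof
    # ... later, periodically or on file-change events ...
    for event, path in compare(baseline, snapshot(watched)):
        print(event, path)  # a real tool would alert according to policy priority
```

Everything else a commercial product adds – agents versus remote collection, content diffs, policy priorities – builds on this same snapshot-and-compare loop.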


NSO Quant: Manage Process Metrics, Part 1

We realized last week that we may have hit the saturation point for activity on the blog. Right now we have three ongoing blog series plus NSO Quant. Each series posts a few times a week, and Quant can add up to 10 posts on top of that. It's too much for us to keep up with, so I can't even imagine someone who actually has to do something with their days. So we have moved the Quant posts out of the main blog feed. Every other day, I'll do a quick post linking to any activity in the project, which is rapidly coming to a close.

On Monday we posted the first three metrics posts for the Manage process. This is the part where we define the policies and rules that run our firewalls and IDS/IPS devices. Again, this project is driven by feedback from the community. We appreciate your participation and hope you'll check out the metrics posts and tell us whether we are on target. Here are the first three posts:

  • NSO Quant: Manage Metrics – Policy Review
  • NSO Quant: Manage Metrics – Define/Update Policies and Rules
  • NSO Quant: Manage Metrics – Document Policies and Rules

Over the rest of the day, we'll hit metrics for the signature management processes (for IDS/IPS), and then move into the operational phases of managing network security devices.


New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution

Around the beginning of the year Adrian and I released our big database encryption paper: Understanding and Selecting a Database Encryption or Tokenization Solution. We realized pretty quickly there was no way we could do justice to tokenization in that paper, so we are now excited to release Understanding and Selecting a Tokenization Solution.

In this paper we dig in and cover all the major ins and outs of tokenization: how it works, why you might want to use it, architectural and integration options, and key selection criteria. We also include descriptions of three major use cases… with pretty architectural diagrams. This was a fun project – the more we dug in, the more we learned about the inner workings of these systems and how they affect customers. We were shocked that such a seemingly simple technology requires so many design tradeoffs, and by the different approaches taken by each vendor.

In support of the paper we are also giving a webcast with the sponsor/licensee, RSA. The webcast is September 28th at 1pm ET, and you can register. The content was developed independently of sponsorship, using our Totally Transparent Research process. You can download the PDF directly here, and the paper is also available (without registration) at RSA. Since they were so nice as to help feed my kid without mucking with the content, please pay them a visit to learn more about their offerings.
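For readers new to the technology, here is a toy sketch of the core idea the paper explores: swapping a sensitive value for a random surrogate and keeping the mapping in a vault. This is our illustration, not an excerpt from the paper – a production system encrypts and access-controls the vault, guarantees token uniqueness at scale, and supports formats beyond the digits-plus-last-four shown here.

```python
# Toy token vault -- illustrative only, not production code.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, pan: str) -> str:
        """Return the existing token for this PAN, or mint a random one that
        preserves length and the last four digits for display purposes."""
        if pan in self._value_to_token:
            return self._value_to_token[pan]
        digits = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
        token = digits + pan[-4:]
        self._token_to_value[token] = pan
        self._value_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        """Recover the original value; only trusted systems should call this."""
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token, vault.detokenize(token) == "4111111111111111")
```

The point the paper makes in depth: a stolen token is useless on its own, because the relationship to the real value exists only inside the vault, which shrinks the set of systems that must protect the original data.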


FireStarter: It’s Time to Talk about APT

There's a lot of hype in the press (and in vendor pitches) about APT – the Advanced Persistent Threat. Very little of it is informed, and many parties within the security industry are quickly trying to co-opt the term to advance various personal and corporate agendas. In the process they've bent, manipulated, and largely tarnished what had been a specific description of a class of attacker.

I've generally tried to limit how much I talk about it – mostly restricting myself to the occasional Summary/Incite comment, or this post when APT first hit the hype stage, and a short post with some high-level controls. I self-censor because I recognize that the information I have on APT all comes either second-hand, or from sources who are severely restricted in what they can share with me. Why? Because I don't have a security clearance.

There are groups, primarily within the government and its contractors, with extensive knowledge of APT methods and activities. A lot of it is within the DoD, but some law enforcement agencies are involved as well. These folks seem to know exactly what's going on, including which businesses within private industry are being attacked, the technical exploit details, what information is being stolen, and how it's exfiltrated from organizations. All of which seems to be classified.

I've had two calls over the last couple weeks that illustrate this. In the first, a large organization was asking me for advice on some data protection technologies. Within about 2 minutes I said, "If you are responding to APT, we need to move the conversation in X direction." Which is exactly where we went, and without going into details, they had essentially been told they were compromised, and had received a list, from "law enforcement", of what they needed to protect. The second conversation was with someone involved in APT analysis, informing me of a new technique that technically wasn't classified… yet. Needless to say, the information wasn't being shared outside the classified community (e.g., not even with the product vendors involved), and even the bit shared with me was extremely generic.

So we have a situation where many of the targets of these attacks (private enterprises) are not provided detailed information by those with the most knowledge of the attack actors, techniques, and incidents. This is untenable – the fundamental failure to share information increases the risk to every organization without sufficient clearances to work directly with classified material. I've been told that in some cases larger organizations do get a little information pertinent to them, but the majority of activity is still classified and therefore not accessible to the organizations that need it.

While it's reasonable to keep the details of specific attacks against specific targets quiet, we need much more public discussion of the attack techniques and possible defenses. Where's all the "public/private" partnership goodwill we always hear about in political speeches and watered-down policy and strategy documents? From what I can tell there are only two well-informed sources saying anything about APT: Mandiant (who investigates and responds to many incidents, and I believe still holds clearances), and Richard Bejtlich (who, you will notice, tends to mostly restrict himself to comments on others' posts, probably due to his own corporate/government restrictions). This secrecy isn't good for the industry, and, in the end, it isn't good for the government.
It doesn't allow the targets (many of you) to make informed risk decisions, because you don't have the full picture of what's really happening.

I have some ideas on how those in the know can better share information with those who need to know, but for this FireStarter I'd like to get your opinions. Keep in mind that we should focus on practical suggestions that account for the nuances of the defense/intelligence culture, and be realistic about its restrictions. As much as I'd like the feds to go all New School and make breach details and APT techniques public, I suspect something more moderate – perhaps covering generic attack methods and potential defenses – is more viable.

But make no mistake: as much hype as there is around APT, there are real attacks occurring daily, against targets I've been told "would surprise you". And as much as I wish I knew more, the truth is that those of you working for potential targets need the information, not just some blowhard analysts.

UPDATE: Richard Bejtlich also highly recommends Mike Cloppert as a good source on this topic.


Monitoring up the Stack: Threats

In our introductory post we discussed how customers are looking to derive additional value from their SIEM and log management investments by collecting additional data types – climbing the stack. Part of the dissatisfaction we hear from customers is the challenge of turning collected data into actionable information for operational efficiency and compliance requirements. This challenge is compounded by the clear shift toward application-oriented attacks. For the most part, our detection only pays attention to the network and servers, while the attackers are flying above that. It's like repeatedly missing the bad guys because they are flying at 45,000 feet while you cannot get above 20,000 feet. You aren't looking where the attacks are actually happening, which obviously presents problems. SIEM can fly at 45,000 feet and monitor application components for attacks, but it will take work to get there. Given the evolution of the attack space, we don't believe keeping monitoring focused on infrastructure remains an option, even over the middle term.

What kind of application threats are we talking about? It's not brain surgery, and you've seen all of these examples before, but they warrant another mention because we continue to miss opportunities to detect these attacks. For example:

  • Email: You click a link in a 'joke-of-the-day' email your spouse forwarded, which installs malware on your system and then tries to infect every machine on your corporate network. A number of devices get compromised and become latent zombies waiting to blast your network and others.
  • Databases: Your database vendor offers a new data replication feature to address failover requirements for your financial applications, but it installs with public credentials. Any hacker can now replicate your database, without logging in, just by issuing a database command. Total awesomeness!
  • Web Browsers: Your marketing team launches a new campaign, but the third party content provider's site was hacked. As your customers visit your site, they are unknowingly attacked using cross-site request forgery, and then download malware. The customers' credentials and browsing history leak to Eastern Europe, and fraudulent transactions get submitted from customer machines without their knowledge. Yes, that's a happy day for your customers – and also for you, since you cannot just blame the third party content provider. It's your problem.
  • Web Applications: Your web application development team, in a hurry to launch a new feature on time, fails to validate some incoming parameters. Hackers exploit the database through a common SQL injection vulnerability to add new administrative users, copy sensitive data, and alter database configuration – all through normal SQL queries. As simple as this attack is, a typical SIEM won't catch it, because all the requests look normal and are authorized. It's an application failure that causes a security failure.
  • Ad-hoc applications: The video game your kid installed on your laptop has a keystroke logger that records your activity and periodically sends an encrypted copy to the hackers who bought the exploit. They replay your last session, logging into your corporate VPN remotely to extract files and data under your credentials. So it's fun when the corporate investigators show up in your office to ask why you sent the formula for your company's most important product to China.
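To illustrate why the SQL injection example above gets past network-level tools – and what statement-level inspection buys you – here is a minimal sketch of the kind of pattern matching a database activity monitor might apply to a query stream. The patterns and sample queries are purely illustrative; real products combine signatures like these with behavioral profiling of users and applications.

```python
# Illustrative only: naive statement-level screening of SQL activity.
import re

SUSPICIOUS = [
    r"\bUNION\s+SELECT\b",   # classic injection probe
    r"\bOR\s+1\s*=\s*1\b",   # tautology used to bypass WHERE clauses
    r";\s*DROP\s+TABLE\b",   # stacked-query tampering
    r"\bGRANT\s+ALL\b",      # unexpected privilege escalation
]

def flag(statements):
    """Yield (statement, pattern) for each statement matching a signature."""
    for sql in statements:
        for pattern in SUSPICIOUS:
            if re.search(pattern, sql, re.IGNORECASE):
                yield sql, pattern
                break

queries = [  # a hypothetical slice of an application's query log
    "SELECT name FROM users WHERE id = 42",
    "SELECT name FROM users WHERE id = 42 OR 1=1",
]
for sql, pattern in flag(queries):
    print("ALERT:", sql, "| matched:", pattern)
```

Note that both statements above are syntactically valid and authorized, which is exactly why a router, an IPS, or a vanilla SIEM rule waves them through – the inspection has to understand the application and database layer to tell them apart.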
The power of distributed multi-app systems to deliver services quickly and inexpensively cannot be denied, which means we security folks will not be able to stop the trend – no matter what the risk. But we have both a capability and a responsibility to ensure these services are delivered as securely as possible, and to watch for bad behavior.

Many of the events we discussed are not logged by traditional network security tools, and to casual inspection the transactions look legitimate. Logic flaws, architectural flaws, and misused privileges look like normal operation to a router or an IPS. Browser exploits and SQL injection are difficult to detect without understanding the application functionality. More problematic is that damage from these exploits occurs quickly, requiring a shift from after-the-fact forensic analysis to real-time monitoring, to give you a chance to interrupt the attack. Yes, we're really reiterating that application threats are likely to get "under the radar" and past network-level tools. Customers complain that the SIEM techniques they have are too slow to keep up with remote multi-stage attacks, code substitution, and the like; ill-suited to stopping SQL injection, rogue applications, and data leakage; or simply ineffective against cross-site scripting and hijacked privileges – we keep hearing that current tools have no chance against these new attacks.

We believe the answer involves broader monitoring capabilities at the application layer, and related technologies. But reality dictates that the tools and techniques used for application monitoring do not always fit SIEM architectures. Unfortunately this means some of the existing technologies you own – and more importantly the way you've deployed them – may not fit this new reality. We believe all organizations need to continue broadening how they monitor their IT resources, and incorporate technologies designed to look at the application layer, providing detection of application attacks in near real time. But to be clear, adoption is still very early and the tools are largely immature.

The following is an overview of the technologies designed to monitor at the application layer; these are what we will focus on in this series:

  • File Integrity Monitoring: Real-time verification of applications, libraries, and patches on a given platform. Designed to detect replacement of files and executables, code injection, and the introduction of new and unapproved applications.
  • Identity Monitoring: Designed to identify users and user activity across multiple applications, or when using generic group or service accounts. Employs a combination of location, credential, activity, and data comparisons to 'de-anonymize' user identity.
  • Database Monitoring: Designed to detect abnormal operation, statements, or user behavior, including both end users and database administrators. Monitoring systems review database activity for SQL injection, code injection, escalation of privilege, data theft, account hijacking, and misuse.
  • Application Monitoring: Protects applications, web applications, and web-based clients from man-in-the-middle attacks, cross-site scripting (XSS), cross-site request forgery (CSRF), SQL


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.