Monitoring up the Stack: Platform Considerations

So far in the Monitoring up the Stack series we have focused on a number of additional data types and analysis techniques that extend security monitoring to gain a deeper and better perspective on what’s happening. The added value is all good, but we all know there is no free lunch, so now let’s look at some of the problems, challenges, and extra work that come along with deeper monitoring goodness. We know most of you who have labored with scalability and configuration challenges with your SIEM product were waiting for the proverbial other shoe to drop. Each new data type and the associated analysis impact the platform, so in this post we will discuss some of these considerations and think a bit about how to work around the potential issues.

To be fair, it’s not all bad news. Some additional data sources are already integrated with the SIEM (as in the case of identity and database activity monitoring), minimizing deployment concerns. However, most options for application, database, user, and file monitoring are not offered as fully integrated features. Monitoring products sometimes need to be set up in parallel – yep, that means another product to deploy, configure, and manage. You’ll configure the separate monitor to feed some combination of events, configuration details, and/or alerts to the SIEM platform – but the integration likely stops there. And each type of monitoring we have discussed has its own idiosyncrasies and/or special deployment requirements, so the blade cuts both ways. Adding hard-to-get data and real-time analysis for these additional data sources comes at a cost. But what fun would it be if everything were standardized and worked out of the box? So you know what you’re getting yourself into, the following is a checklist of platform issues to consider when adding these additional data types to your monitoring capabilities.

Scalability: When adding monitoring capabilities, integrated or standalone, you need additional processing power. SIEM solutions offer distributed models to leverage multi-tier or multi-platform deployments, which may provide the horsepower to process additional data types. You may need to reconfigure your collection and/or analysis architecture to redistribute compute power for these added capabilities. Alternatively, many application and/or database monitoring approaches utilize software agents on the target platform. In some cases this is to access data otherwise not available, or to remove network latency from analysis response times, as well as to distribute the processing load across the organization. Of course there is a downside to agents: overhead and memory consumption could impact the target platform, along with the normal installation & management headaches. The point is that you need to be aware of the extra work being performed and where, and you will need to absorb that requirement on the target platforms or add horsepower to the SIEM system. Regardless of the deployment model you choose, you will need additional storage to accommodate the extra data collected. You may already be monitoring some application events through syslog, but transaction history can increase event volume per application by an order of magnitude. All monitoring platforms can be set to filter out events by policy, but filtering too much defeats the purpose of monitoring these other sources in the first place.

Integration: There are three principal integration points to consider. The first is how to get data into the SIEM and integrated with other event types; the second is how to configure the monitors regarding what to look for; and the third is how the monitors themselves are managed. Fully integrated SIEM systems account for both policy management and normalization/correlation of events. While you may need to alter some of your correlation rules and reports to take advantage of these new data types, it can all be performed from a single management console. Standalone monitoring systems can easily be configured to send events, configuration settings, and alerts directly to a SIEM, or drop the data into files for batch processing. SIEM platforms are adept at handling data from heterogeneous sources, so you just change the correlation, event filtering, and data retention rules to account for the additional data.

The second – and most challenging – part of integration is sharing policies & reports between the two systems (SIEM and standalone monitor). Keep in mind that things like configuration analysis, behavioral monitoring, and file integrity monitoring all work by comparing current results against reference values. Unlike hard-coded attribute comparisons in most SIEM platforms, these reference values change over time (by definition). Policies need to be flexible enough to handle these dynamic values, so if your SIEM platform can’t, you’ll need to use the monitoring platform’s interface for policies, reporting, and data management. We see that with most of the Database Activity Monitoring platforms, where the SIEM is not flexible enough to alert properly, so customers need to maintain separate rule bases in the two products. This disconnection requires manual verification that settings remain consistent between the two platforms whenever a rule changes on either side. Some monitoring tools have import and export features so you can create a master policy set for all servers, and provide policy reports that detail which rules are active for audit purposes.

The third point to consider is that most monitoring systems leverage smart agents, with agent deployment and maintenance managed from the console. Most SIEM platforms leverage a web-based management platform which facilitates management from a central location, or even the merging of consoles. But many standalone monitoring systems for content, file integrity, and web application monitoring are Windows-specific applications that can’t easily be merged, and must be managed as standalone applications.

Analysis: Each new data type needs its own set of analysis policies, alerting rules, dashboards, and reports. This is really where the bulk of the effort is spent, to make these broader data sources available and effective. It’s not just that we have new types of data being collected – the flexibility of flat-file event storage used within SIEM products adapts readily enough – but that monitoring tools should leverage more than merely attribute analysis.
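To make the contrast between a hard-coded attribute comparison and a dynamically learned reference value concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from any particular SIEM or DAM product; the class, thresholds, and example values are illustrative assumptions only.

```python
# Hypothetical sketch: a policy whose reference value is re-learned over time,
# in contrast to a hard-coded attribute comparison (e.g. "port == 1433").
from statistics import mean, stdev

class DynamicBaselinePolicy:
    """Alert when an observed value drifts too far from a learned baseline."""

    def __init__(self, window=100, tolerance=3.0):
        self.history = []           # recent observations: the "reference values"
        self.window = window        # how many observations define "normal"
        self.tolerance = tolerance  # allowed deviation, in standard deviations

    def evaluate(self, observed):
        if len(self.history) < 10:          # not enough history: learn, don't alert
            self.history.append(observed)
            return None
        baseline, spread = mean(self.history), (stdev(self.history) or 1.0)
        self.history = (self.history + [observed])[-self.window:]
        if abs(observed - baseline) > self.tolerance * spread:
            return {"alert": "deviation_from_baseline",
                    "observed": observed, "baseline": round(baseline, 2)}
        return None

# Example: rows returned per query by a single database account.
policy = DynamicBaselinePolicy()
for rows in [40, 55, 38, 60, 45, 52, 47, 41, 58, 49, 50000]:
    alert = policy.evaluate(rows)
    if alert:
        print(alert)   # only the last, anomalous value triggers an alert
```

The hard-coded equivalent would simply compare each event against a fixed threshold. The point of the sketch is that the reference value here is recomputed from recent history, which is exactly what makes sharing such policies between a standalone monitor and a SIEM awkward.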


New Blog Series: Incident Response Fundamentals

Our “beat our readers into a content coma” plan is working perfectly. Just when you thought you had enough of NSO Quant, Enterprise Firewall, Monitoring up the Stack, and DLP (just in the last month) – we will be starting another series Monday. Rich and I will begin the “Incident Response Fundamentals: Understanding Threats Before, During, and After the Attack” series. React Faster is something I’ve been talking about for years (literally), and Rich improved it by integrating the importance of incident response into the mix. Now we are going to bring all those aspects together into a very focused view on how you can keep pace with the rapidly evolving attack space.

The general thesis of the series is: Organizations need to embrace a pervasive monitoring approach to track attacks before, during, and after the threat. Far too many organizations do not capture the proper data at the network layer to detect attacks, find the root cause and remediate, or perform a detailed forensic analysis after the fact. This impairs their ability to protect their environments and ensure they don’t suffer similar breaches over and over again.

We will not only talk about monitoring (as much as Adrian loves that), but also about an incident response plan and what to do before the attack, once you think something is going down, and (from a forensics standpoint) after the fact. We’ll also do a little bit of visioning and take a cut at what network security will look like in 5 years. Overall it will be a great research project, and we think the output will be very valuable to practitioners. Which is why we do this stuff.


Dead or Alive: Pen Testing

Remember the dead or alive game Howard Stern used to do? I think it was Stern. Not sure if he’s still doing it, because I’m too cheap to subscribe to Sirius for the total of 5 minutes I spend in the car driving between coffee shops. Pen testing has been under fire lately. Ranum has been talking for years about how pen testing sucks. Brian Chess also called pen testing dead at the end of 2008. It’s almost two years later and the death of pen testing has been greatly exaggerated. Pen testing is not dead. Not by a long shot. But it is changing. And we have plenty of folks weighing in on how this evolution is taking place.

First off is the mouth from the South, Dave Maynor. OK, one of the mouths from the South, because I suspect I am another. Dave made some waves regarding whether to use 0-day exploits in a pen test, and then had to respond when everyone started calling him names. Here’s the thing. Dave is right. The bad guys don’t take an oath when they graduate from bad guy school that they won’t use 0-days. They can and do, and you need to know how you’ll respond. Whether it’s part of a pen test or an incident response exercise doesn’t matter to me. But if you think you don’t need to understand how you’ll respond under fire, you are wrong.

Second, I got to attend a great session by Dave Kennedy and Eric Smith at BSides Atlanta about strategic pen testing. It was presented from the viewpoint of the pen tester, but you can apply a lot of those lessons to how a practitioner runs a pen test in their organization. First off, a pen test is about learning where you can be exploited. If you think it’s about checking a box (for an audit) or making yourself and your team look good, you’ve missed the point. These guys will break your stuff. The question is what can you learn, and how will that change your defensive strategies? The pen testers need to operate in a reasonable semblance of a real world scenario. Obviously you don’t want them taking down your production network. But you can’t put them in a box either. The point is to learn, and unless their charter is broad enough to make a difference, you are again wasting your time.

Finally, I’ll point to a presentation by Josh Abraham, talking about his “Goal Oriented Pentesting” (PDF) approach. It’s good stuff. Stuff you should know, but probably don’t do.

What do all these things have in common? They talk about the need for pen testing to evolve. But by no means are they talking about its death. Listen – at the end of the day, whether you are surprised by what an attacker does to your network is your business. I still believe pen testing can provide insights you can’t get any other way. I think those insights are critical to understanding your security posture. Those enlightened organizations which don’t pen test do so at their own risk. And the rest of us should thank them – they are the slow gazelles and the lions are hungry.


Incite 10/13/2010: The Rise of the Cons

No, we aren’t going to talk about jailbreaks or other penal system trials and tribulations. This one is about how the conference circuit is evolving in a really positive way. Most folks attend the big security shows – you know, RSA and BlackHat and maybe some others. Most folks also hate these shows. I hear a lot of complaints about weak content and vendor whoring putting a damper on the experience. Of course, since my ilk and I tend to speak at most of these shows, we can only point the finger at ourselves. Personally, unless I’m speaking I tend to skip all but the biggest shows, which I attend for networking purposes. But that’s just me.

Nature hates a vacuum, though, and the vacuum of user-oriented conferences is being filled by the BSides movement and a number of regional hacker cons. If the conference you are attending doesn’t do it for you, get some smart folks together (who are there anyway) and put on an unconference of your own. That’s the general concept for BSides. I attended BSides ATL last week, and it was a really great experience. First, shout-outs need to go to the driving forces bringing BSides to ATL: Eric Smith (@infosecmafia), Nick Owen (@wikidsystems), Marisa Fagan (@dewzi), and MC Petermann (@petermannmc). I know there were tons of other folks who put a lot of blood, sweat, and tears into making BSides ATL happen, so no offense to anyone I didn’t mention – I can’t thank them all enough.

Why is this working? Because it’s about community. I’ve been in Atlanta for over 6 years now, and there isn’t really a cohesive security community. The ISSA meetings are a joke, unless you like vendors to hump your leg for 2 hours every month. We tried to get a CitySec group meeting going (and all three of us who attended enjoyed the beer that I bought), but that fizzled. A new Cloud Security Alliance chapter is forming in the ATL, and we are seeing a lot of activity for the NAISG in town as well. Yes, there are other organizations, but it’s generally a small group of folks getting together in an ad hoc fashion.

What’s been missing is a more technically oriented conference, where smart folks from the Southeast can get together and share what we are seeing. That happened in spades at BSides ATL. Whether it was talking Google and Bing hacking with Rob Ragan, exfiltration with Dave Shackleford and Rick Hayes, pen testing with Eric Smith and Dave Kennedy, or having Chris Nickerson show how to bring entire companies down (think attacking robots!) – it was just a flood of information. Good information. And those were just the sessions I attended. There were a bunch of others I had to miss. The conference organizers even let me play and talk about what I think will happen in 2011. The short answer is I have no idea. But you already knew that. Yet I did get to use a picture of a guinea pig BBQ, which has to set some low bar for depravity.

I’m probably going to get in trouble by talking up BSides, because we Securosis folks do a lot of work with the RSA Conference. Next year we’ll be leading the E10 (CISO-focused) event on Monday at RSA, and Rich is in London and will be in China this month speaking at RSA’s global events. But the writing is on the wall. Content is king, and right now there is a lot of great content being driven through the regional BSides conferences and the other hacker cons.

While I’m talking conferences, I should also mention what seemed to be a rousing success for Hoff and friends at the inaugural HacKid conference in Boston last weekend.
It’s such a great concept – teaching kids about security, self-defense, and other important topics. I can’t wait to get this going in ATL. And with that, just remember – if you don’t take care of your customers, someone else will. Mr. Market told me. – Mike.

Photo credits: “Pug Shot” originally uploaded by Jerry Reynolds

Recent Securosis Posts

  • IT Debt: Real or FUD?
  • FireStarter: Consumer Internet Penalty Box
  • Friday Summary: October 8, 2010
  • Monitoring up the Stack: User Activity Monitoring
  • Monitoring up the Stack: Identity Monitoring

I should also highlight an article on Application Monitoring in Dark Reading that features the Monitoring up the Stack research Adrian and Gunnar are working on right now. I know lots of folks have a hard enough time monitoring their network and security devices, but the application is where the action is, so ignore it at your own peril.

Incite 4 U

Time for the heavy artillery. What heavy artillery? – Greg Shipley makes the point we’ve all come to grips with. We are outgunned. The bad guys have better tools and more motivation, and all we can do is watch it happen and clean up the mess afterwards. This statement kind of says it all: “Recent events suggest that we are at a tipping point, and the need to reassess and adapt has never been greater. That starts with facing some hard truths and a willingness to change the status quo.” Right. So all is not lost, but we need to start thinking differently. But what does that mean? According to Shipley, it’s focusing on the database and maybe things like application white listing. Best of all is the idea to “stop rewarding ineffectiveness and start rewarding innovation.” Bravo. But how do you do that when the checkbox says you need AV? So basically we are in a quandary, but you already knew that. What to do? Basically what we’ve been saying for years. React Faster (and Better), focus on the fundamentals, and if you are targeted, just understand you can’t stop them. And manage expectations accordingly. He closes the article with “If we remain bound to our relentless commitment to mediocrity, we will be worse off moving ahead. We can and must do better. It’s time to change our way of thinking.” Right. – MR

Instructive memory – Ever had


FireStarter: Consumer Internet Penalty Box

A few weeks back, the fine folks at Microsoft used a healthcare analogy to describe a possible solution to the Internet’s bot infestation. Scott Charney suggested that every PC should have a health certificate which would provide access to the Internet. No health certificate, no access. Kind of like a penalty box for consumer Internet users. It’s an interesting idea, and clearly we need some kind of solution to the reality that Aunt Bessie has no idea her machine has been pwned and is blasting spam and launching DDoS attacks. Unfortunately it won’t work, unless mandated by some kind of regulation. It’s really an economic thing.

Comcast will proactively send a message to devices on their network exhibiting bad behavior, telling the owners they are likely compromised. They call it their Bot Alert program. Then they point to a nice web page where the consumer can get answers. The consumer is then expected to address the issue. If they can’t (or don’t), Comcast will continue to notify the customer until they do. Here’s the rub: if the consumer knew what they were doing in the first place, they wouldn’t have gotten pwned.

You can’t blame Comcast (or any other ISP) for drawing a line in the sand. They charge maybe $40 a month for Internet service. The minute a customer picks up the phone and calls for help, they lose money for that month. There is no financial incentive for them to try to fix the compromised device. Sure, a bot does bad things. But bad enough to spend staff time trying to fix every one of them? The constant notifications will definitely push a customer to call and force Comcast to help them address the issue. I guess that worked OK in their pilot test, but we’ll see how well it scales as they roll it out nationwide. And Comcast seems to be out in front on this issue. I’m not familiar with any similar initiatives from the other major ISPs. So let’s tip our hat to Comcast for at least trying to do something.

But is it the right approach? Do we just accept the fact that a percentage of consumer devices will be pwned and will exhibit bad behavior? Is it a cost of doing business for the ISPs? Is there some other kind of technical, procedural, or cultural answer? I wish I knew. What do you folks think? Can this health certificate thing work? Am I just stuck in a cycle of cynicism that prevents me from seeing any solution to this problem? Or do we just make sure our families aren’t the path of least resistance and forget the rest?


IT Debt: Real or FUD?

I just ran across Slashdot’s mention of the Measuring and Monitoring Technical Debt study funded by a research grant. Their basic conclusion is that a failure to modernize software is a form of debt obligation, and companies ultimately must pay off that debt moving forward. And until the modernization process happens, software degrades towards obsolescence or failure. From Andy Kyte at Gartner: “The issue is not just that maintenance keeps on getting deferred, it is the lack of an application inventory and the absence of a structured review process for the application portfolio. This means the IT management team is simply never aware of the true scale of the problem,” Mr. Kyte said. “This problem, hidden from sight, is getting bigger every year and more difficult to deal with every year.”

I am on the fence on the research position – apparently others are as well – and I disagree with many of the assertions, because the cost of inaction needs to be weighed against the cost of overhauls. The cost of migration is significant. Retraining users. Retraining IT. New software and maintenance contracts. The necessary testing environments and regression tests. The custom code that needs to be developed in order to work with the software packages. Third party consulting agreements. New workflow and management system integration. Fixing bugs introduced with the new code. And so on.

In 2008, 60% of the clients of my former firm were running on Oracle & IBM versions that were 10 years old – or older. They stayed on those versions because the databases and applications worked. The business functions operated exactly as they needed them to – after 2-5 years of tweaking to get them exactly right. A new migration was considered to be another 2-5 year process. So many firms selected bolt-on, perimeter-based security products, because there was no way to build security into a platform in pure maintenance mode. And they were fine with that, as the business application was designed to a specification that did not account for changes to the security landscape, and depended on network and platform isolation. But the primary system function it was designed for worked, so overhaul was a non-starter.

Yes, the cost of new features and bug fixes on very old software, if needed, was steep. But that’s just it … there were very few features and bug fixes needed. The specifications for business processing were static. Configuration and maintenance costs were at a bare minimum. The biggest reason “the bulk of the budget cut has fallen disproportionately on maintenance activities” was that these firms were not paying for new software and maintenance contracts! Added complexity would have come with new software, rather than keeping the status quo. The biggest motivator to upgrade was that older hardware/OS platforms were either too slow or began failing. A dozen or so financial firms I spoke with performed this cost analysis and felt that every day they did not upgrade saved them money. It was only in segments that required rapid changes to meet changing markets – retail and shipping come to mind – that corporations benefitted from modernization and new functionality to improve customer experience.

I’ll be interested to see if this study sways IT organizations to modernize. The “deferred maintenance” message may resonate with some firms, but calling older software a liability is pure FUD.
What I hope the study does is prompt firms to compare their current maintenance costs against upgrades and new maintenance – the only meaningful comparison is one performed within a customer’s own environment. That way they can intelligently plan upgrades when appropriate, and be aware of the costs in advance. You can bet every sales organization in the country will be delivering a copy of this research paper to their customers in order to poke and prod them into spending more money.
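To illustrate the kind of comparison being suggested, here is a minimal sketch in Python. Every figure and cost category is a hypothetical placeholder; the point is only that the stay-versus-migrate math has to be run against numbers from your own environment.

```python
# Hypothetical sketch: compare the cost of staying on an old platform in pure
# maintenance mode against migrating to a new one. All figures are made up.

def stay_cost(annual_maintenance, annual_bugfix, years):
    """Ongoing cost of keeping the current platform as-is."""
    return (annual_maintenance + annual_bugfix) * years

def migrate_cost(migration_project, retraining, annual_licenses,
                 annual_maintenance, years):
    """One-time migration costs plus ongoing costs of the new platform."""
    return migration_project + retraining + (annual_licenses + annual_maintenance) * years

years = 5
staying = stay_cost(annual_maintenance=150_000, annual_bugfix=50_000, years=years)
moving = migrate_cost(migration_project=900_000, retraining=120_000,
                      annual_licenses=200_000, annual_maintenance=80_000, years=years)

print(f"Stay on the current platform for {years} years:   ${staying:,}")
print(f"Migrate and run the new platform for {years} years: ${moving:,}")
print("Cheaper option:", "stay" if staying < moving else "migrate")
```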


Monitoring up the Stack: User Activity Monitoring

The previous Monitoring up the Stack post examined Identity Monitoring, which is a set of processes to monitor events around provisioning and managing accounts. The Identity Monitor is typically blind to one very important aspect of accounts: how they are used at runtime. So you know who the user is, but not what they are doing. User Activity Monitoring addresses this gap by reporting not on how accounts were created and updated in the directory, but on user actions on systems and applications, and linking them to assigned roles.

Implementing User Activity Monitoring

User Activity Monitors can be deployed to monitor access patterns and system usage. The collected data regarding how the system is being used, and by whom, is then sent to the SIEM/Log Management system. This gives the SIEM/Log Management system data that is particularly helpful for attribution purposes. Implementing User Activity Monitoring rests on four key decisions. First, what constitutes a user? Next, what activities are worth monitoring? Third, what does typical activity look like, and how do we define policies to scope acceptable use? And finally, where and how should the monitor be deployed?

The question about what constitutes a user seems simple, and on one level it is. Most likely a user is an account in the corporate or customer directory, such as Active Directory or LDAP. But sometimes there are accounts for non-human system users, such as service accounts and machine accounts. In many systems service accounts, machine accounts, and other forms of automated batch processing can do just as much damage as any other account/function. After all, these features were programmed and configured by humans, and are subject to misuse like any other accounts, so they are likely worth monitoring as well.

Drilling down further into users, how are they identified? To start with, there is probably a username. But remember the data that the User Activity Monitor sends to the SIEM/Log Management system will be used after the fact. What user data will help a security analyst understand the user’s actions and whether they were malicious or harmful? Several data elements are useful for building a meaningful user record:

  • Username: The basic identifier for a user in the system, including the namespace or other protocol-specific data.
  • Identity Provider: The name of the directory or database that authenticated the user.
  • Group/Role Membership: Any group or role information assigned to the user account, or other data used for authorization purposes.
  • Attributes: Was the user account assigned any privileges or capabilities? Are there time-of-day or location attributes that are important for verifying user authenticity?
  • Authentication Information: If available, information about how the user was authenticated can be helpful. Was the user dialed in from a remote location? Did they log in from the office? When did they log in? And so on.

A log entry that reads “user=rajpatel;” is far less useful than one that contains “user=rajpatel; identityprovider=ExternalCORPLDAP; Group=Admin; Authenticated=OTP”. The more detailed the information around the user and their credential, the more precision the analyst has to work with. Usually this data is easy to get at runtime – it is available in security tokens such as SAML and Kerberos – but the monitor must be configured to collect it. Now that we see how to identify a user, what activities are of interest to the SIEM/Log Management system?
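Before turning to activities, here is a minimal, hypothetical sketch of a collector assembling the enriched user record described above and forwarding it to a SIEM over syslog. The field names and the JSON-over-syslog transport are assumptions for illustration; any real product defines its own schema and transport.

```python
# Hypothetical sketch: build an enriched user event and forward it to a
# SIEM/Log Management collector over syslog. Field names are illustrative.
import json
import logging
import logging.handlers

def enriched_event(username, identity_provider, groups, auth_method, action):
    """Combine identity context with the observed action in a single record."""
    return {
        "user": username,
        "identity_provider": identity_provider,  # directory that authenticated the user
        "groups": groups,                        # roles used for authorization decisions
        "auth_method": auth_method,              # e.g. password, OTP, certificate
        "action": action,                        # what the user actually did
    }

# Send events to a syslog listener (here, UDP to localhost) that the SIEM reads.
logger = logging.getLogger("user_activity")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))

event = enriched_event(
    username="rajpatel",
    identity_provider="ExternalCORPLDAP",
    groups=["Admin"],
    auth_method="OTP",
    action="config_change",
)
logger.info(json.dumps(event))
```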
The types of activities mentioned in other Monitoring up the Stack posts can all be enriched through the user data model described above; in addition there are some user-specific events worth tracking, including:

  • User Session Activities: events that create, use, and terminate sessions, such as login and logout events.
  • Security Token Activities: events that issue, validate, exchange, and terminate security tokens.
  • System Activities: events based around system exceptions, startups, shutdowns, and availability issues.
  • Platform Activities: events from specific ports or interfaces, such as USB drive access.
  • Inter-Application Activities: events performed by more than one application on behalf of the user, all linked to the same business function.

Now that we know what kind of events we are looking for, what do we want to do with them? If we are monitoring, we need to specify policies to define appropriate use, and what should be done when an event – or in some cases a series of events – occurs. Policy setup and administration is a giant hurdle with SIEM systems today, and adding user activity monitoring – or any other form of monitoring – will require the same time to set up and adjust over time. Based on an event type listed above, you select the behavior type you want to monitor and define what users can & cannot do. User monitoring systems, at minimum, offer attribute-based analysis. More advanced systems offer heuristics and behavioral analysis; these provide flexibility in how users are monitored, and reduce false positives as the analysis adapts to user actions over time.

The final step is deployment of the User Activity Monitor, and the logical place to start is the Identity repository, because repositories can write auditable log events when they issue, validate, and terminate sessions and security tokens; thus the Identity repository can report to the SIEM/Log Management system on which users were issued which sessions and tokens. This location can be made more valuable by adding User Activity Monitors closer to the monitored resources, such as Web Application Firewalls and Web Access Managers. These systems can enhance visibility beyond simply what tokens and sessions were issued (from the Identity repository), adding information on how they were used and what the user accessed.

Correlation: Putting the Data to Work

With monitors situated to report on User Activity, the next step is to use the data. The data and event models described above provide an enriched model that enables the analyst to trace events back upstream. For example, the analyst can set up rules that identify known good and bad behavior patterns to reflect authorized usage and potentially malicious patterns. Authorized usage patterns generally reflect the use case flows that users follow. In most cases these do
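As a rough sketch of the kind of known-good behavior pattern just described, the following hypothetical Python example learns which source locations and login hours are normal for a user, then flags sessions that fall outside that pattern. Real user activity monitors use far richer behavioral models; the class, fields, and sample values here are illustrative assumptions.

```python
# Hypothetical sketch: a toy user-activity rule that learns which locations
# and login hours are typical for each user, and flags sessions outside
# that pattern. Real products use much richer behavioral models.
from collections import defaultdict

class LoginBaseline:
    def __init__(self):
        self.profiles = defaultdict(lambda: {"locations": set(), "hours": set()})

    def observe(self, user, location, hour):
        """Record a login we have decided to treat as normal."""
        self.profiles[user]["locations"].add(location)
        self.profiles[user]["hours"].add(hour)

    def check(self, user, location, hour):
        """Return the reasons this login looks unusual (empty list = normal)."""
        profile = self.profiles[user]
        reasons = []
        if profile["locations"] and location not in profile["locations"]:
            reasons.append(f"new location: {location}")
        if profile["hours"] and hour not in profile["hours"]:
            reasons.append(f"unusual hour: {hour}:00")
        return reasons

baseline = LoginBaseline()
for hour in (8, 9, 10, 17):                          # typical office-hours logins
    baseline.observe("rajpatel", "office-vpn", hour)

print(baseline.check("rajpatel", "office-vpn", 9))      # []  -> looks normal
print(baseline.check("rajpatel", "unknown-proxy", 3))   # two reasons -> worth an alert
```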


Friday Summary: October 8, 2010

Chris Pepper was kind enough to forward this interview with James Gosling on the Basement Coders blog earlier in the week. I seldom laugh out loud when reading blogs, but his “Java, Just Free It” & “Set Java Free” t-shirts that were pissing off Oracle got me going. And the “Google is kind of a funny company because a lot of them have this peace love and happiness version of evil” quote had me rolling on the floor. In fact I found the entire article entertaining, so I recommend reading it all the way through if you have a chance. James Gosling is an interesting guy, and for someone I have never met, he has had more impact on my career than any other person on the planet.

Around Christmas 1995 I downloaded the Java white paper. At the time I was a porting engineer for Oracle, so my job was to get Oracle and Oracle apps to run on different flavors of Unix. The paper hit me like a ton of bricks. It was the first time I had seen a really good object model, one which could allow good object oriented techniques. But most importantly for a porting engineer, Java code could run anywhere without the need to be ported. The writing was on the wall that my particular skill set would be decreasing in value every day from then on. As soon as I could, I downloaded the JDK and started programming in Java. At the first Java One developers conference in 1996, seeing the ‘Green Project’ handheld Gosling described in the interview, I was beyond sold. I was more excited about the possibilities in computer science than ever before. I scripted my Oracle porting job, literally, in Perl and Expect scripts, to free up more time to program Java. I spent my days not-so-clandestinely programming whatever Java projects interested me. Within months I left Oracle just so I could go somewhere, anywhere, and program Java. The startup I landed at happened to be a security start-up. But that white paper was the major catalyst in my career and pretty much shaped my professional direction for the next 10 years.

And so it is again – Gosling’s views on NoSQL actually got me to go back and reconsider some of my negative opinions on the movement. I am still not sold, but there are a handful of people I have so much respect for that their vision is enough to prompt me to reinvestigate my beliefs. I hope Mr. Gosling gets another chance to research new technologies … the last time he set the industry on its ear. – Adrian

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading article on Data Security: You’re Doing It Wrong.
  • Rich gets snarky with the Scwartz PR folks when they profile him.
  • Mike’s Endpoint Security Fundamentals: Part 3.

Favorite Securosis Posts

  • Mike Rothman: Index of NSO Quant Posts. Yeah, pimping out my own research again. But NSOQ was a monumental amount of work, and this provides quick links to all of it.
  • Adrian Lane: Monitoring up the Stack: Identity Monitoring. Gunnar has an excellent grasp of Identity Monitoring, and it shows in this post.
  • Gunnar Peterson: Monitoring up the Stack: Identity Monitoring.
  • Rich: This week’s Incite. In which Mike admits to thousands of people it’s his birthday this week!

Other Securosis Posts

  • Monitoring up the Stack: Identity Monitoring.
  • Incite 10/6/2010: The Answer is 42.
  • Monitoring up the Stack: App Monitoring, Part 2.

Favorite Outside Posts

  • Mike Rothman: Why Wesabe Lost to Mint. Not security related, but important nonetheless. The one that makes things easier on the user wins. Sound familiar, Dr. No? If users have to work too hard, they’ll find ways around your controls. Count on it.
  • Adrian Lane: AT&T, Voice Encryption and Trust.
  • Rich: Verizon releases their big PCI compliance report. Seriously good – this actually ties compliance to breaches.
  • Gunnar Peterson: OAuth Bearer Tokens are a Terrible Idea. This is a sad story, because OAuth gained a ton of traction in version 1.0 (many major sites like Twitter & Netflix are using it), and then in the process of moving OAuth to a full-blown IETF standard the primary security protections were dropped!

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.

Research Reports and Presentations

  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.

Top News and Posts

  • Dennis’s awesome article on Rethinking Stuxnet.
  • FBI Caught Spying. Then they want their toy back? Dumbasses.
  • Record Breaking Patch Tuesday.
  • eBanking Security Guarantees for Gov Institutions. Things are getting bad!
  • LinkedIn Drive-by Malware Attack.


Monitoring up the Stack: Identity Monitoring

As we continue up the Monitoring stack, we get to Identity Monitoring, which is a distinct set of concerns from User Activity Monitoring (the subject of the next post). In Identity Monitoring, the SIEM/Log Management system gains visibility into the provisioning and Identity Management processes that enterprises use to identify, store, and process user accounts to prepare the user to use the system. Contrast that with User Activity Monitoring, where SIEM/Log Management systems focus on monitoring how the user interacts with the system at runtime, looking for examples of bad behavior.

As an example, do you remember when you got your driver’s license? All the processes you went through at the DMV – getting your picture taken, verifying your address, and taking the driving tests – are related to provisioning an account and getting credentials created; that’s Identity Management. When you are asked to provide your driver’s license, say when checking in at a hotel, or by a police officer for driving too fast – that’s User Activity Monitoring. Identity Monitoring is an important first step because we need to associate a user’s identity with network events and system usage in order to perform User Activity Monitoring. Each requires a different type of monitoring and a different type of report; today we tackle Identity Management (and no, we won’t make you wait in line like the DMV).

To enable Identity Monitoring, the SIEM/Log Management project inventories the relevant Identity Management processes (such as Provisioning), data stores (such as Active Directory and LDAP), and technologies (such as Identity Management suites). The inventory should include the Identity repositories that store accounts used for access to the business’s critical assets. In the old days it was as simple as going to RACF and examining the user accounts and rules for who was allowed to access what. Nowadays there can be many repositories that store and manage account credentials, so inventorying the critical account stores is the first step.

Process

The next step is to identify the Identity Management processes that govern the Identity repositories. How did the accounts get into LDAP or Active Directory? Who signs off on them? Who updates them? There are many facets to consider in the Identity Management lifecycle. The basic Identity Management process includes the following steps:

  • Provisioning: account creation and registration
  • Propagating: synchronizing or replicating the account to the account directory or database
  • Access: accessing the account at runtime
  • Maintenance: changing account data
  • End of Life: deleting and disabling accounts

The Identity Monitoring system should verify events at each process step, record the events, and write the audit log messages in a way that they can be correlated for security incident response and compliance purposes. This links each event to the account(s) that initiated and authorized the action. For example, who authorized the accounts that were provisioned? Which manager(s) authorized the account updates? As we saw in the recent Societe Generale case, Jerome Kerviel (the trader who lost billions of the bank’s money) was originally an IT employee who moved over to the trading desk. When he made the move from IT to trading, his account retained his IT privileges and gained new trading privileges. These snowballing entitlements enabled him to both execute trades and remove logs to hide evidence.
It seems likely there was a process mishap in the account update and maintenance rules that allowed this to happen, and it shows how important the identity management processes are to access control. In complex systems, the Identity Management process is often automated using an Identity Management suite. These suites generate reports for Compliance and Security purposes, and those reports can be published to the SIEM/Log Management system for analysis. Whether automated with a big name suite or not, it’s important to start Identity Monitoring by understanding the lifecycle that governs the account data for the accounts in your critical systems. To fully close the loop, some processes also reconcile the changes with change requests (and authorizations) to ensure every change is requested and authorized.

Data

In addition to identifying the Identity repositories and the management processes around them, the data itself is useful to inform the auditable messages that are published to the SIEM/Log Management systems. The data aspects for collection typically include the following:

  • User Subject (or entity), which could be a person, an organization, or a host or application.
  • Resource Object, which could be a database, a URL, a component, a queue, or a Web Service.
  • Attributes, such as Roles, Groups, and other information that is used to make authorization decisions.

The identity data should be monitored to record any lifecycle events such as Create, Read, Update, Delete, and Usage events. This is important to give the SIEM/Log Management system an end-to-end view of both the account lifecycle and the account data.

Challenges

One challenge in Identity Monitoring is that the systems to be monitored (such as authentication systems) sport byzantine protocols and are not easy to get data and reports out of. This may require some extra spelunking to find the optimal protocol for communicating with the Identity repository. The good news is this is a one-time effort during implementation – these protocols do not change frequently. Another challenge is the accuracy of associating the user identity with the activity that a SIEM collects. Simply matching user ID to IP or MAC address is limited, so heuristic and deterministic algorithms are used to help associate users with events. The association can be performed by the collector, but more commonly this feature is integrated within the SIEM engine as a log/event enrichment activity. The de-anonymization occurs as data is normalized and stored with the events. Federated identity systems that separate authentication, authorization, and attribution create additional challenges, because the end-to-end view of the account in both the Identity Provider and the Relying Party is not usually easy to attain. Granted, this is the point of Federation, which resolves the relationship at runtime, but it’s worth pointing out the difficulty this presents to end-to-end
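To make the earlier point about linking lifecycle events to the accounts that initiated and authorized them concrete, here is a minimal, hypothetical sketch of the kind of audit record an Identity Monitor could publish to the SIEM for correlation. The field names and lifecycle labels are illustrative assumptions, not the output of any particular Identity Management suite.

```python
# Hypothetical sketch: an identity lifecycle audit record that ties each
# account change to the accounts that initiated and authorized it, so the
# SIEM can later answer "who approved this entitlement?"
import json
from datetime import datetime, timezone

LIFECYCLE_EVENTS = {"provision", "propagate", "access", "maintain", "end_of_life"}

def identity_audit_event(event, subject, initiated_by, authorized_by,
                         repository, change=None):
    if event not in LIFECYCLE_EVENTS:
        raise ValueError(f"unknown lifecycle event: {event}")
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,                  # which lifecycle step this is
        "subject": subject,              # the account being changed
        "initiated_by": initiated_by,    # who made the change
        "authorized_by": authorized_by,  # who signed off on it
        "repository": repository,        # e.g. Active Directory, LDAP, RACF
        "change": change or {},          # what actually changed
    }

# A Kerviel-style maintenance event: new entitlements added, nothing removed.
record = identity_audit_event(
    event="maintain",
    subject="jkerviel",
    initiated_by="hr_feed",
    authorized_by="desk_manager",
    repository="corp-ldap",
    change={"roles_added": ["trading"], "roles_removed": []},
)
print(json.dumps(record, indent=2))
```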


Incite 10/6/2010: The Answer is 42

One of my favorite passages in literature is when Douglas Adams proclaims the Ultimate Answer to the Ultimate Question of Life, The Universe, and Everything to be 42 in Hitchhiker’s Guide to the Galaxy. Of course, we don’t know the Ultimate Question. Details. This week I plan to discover he was right as I finish my 42nd year on the planet. That seems old. It’s a big number. But I don’t feel old. In fact, I feel like a big kid. Sometimes I look at my own kids and my house and snicker a bit. Can you believe they’ve entrusted any responsibility to me? These kids think I actually know something? Ha, that’s a laugher…

Since I’m trying not to look forward and plan, I figure I should look backward and try to appreciate the journey. As I look back, I can kind of break things up into a couple different phases. My childhood was marked by anger. Yeah, I know you are shocked. But I took everything bad that happened personally, and as a result, I was a pretty angry kid. College was a blur. I know I drank a lot of beer. I think I studied a bit.

When I graduated I entered the unbreakable phase. Right, like the Oracle database. I could do little wrong. I had a pretty quick progression through the corporate ranks. In hindsight it was too quick. I didn’t screw anything up, so I felt invincible. I also didn’t learn a hell of a lot, but thought I did. Sound familiar? Then I started a software company in 1998 to chase the Internet bubble IPO money. I learned pretty quickly that I wasn’t invincible, as I heard the sound of $30 million of someone else’s money being flushed down the toilet. Crash. Big time.

Then I entered the striving stage throughout my 30’s. Striving for more and never being satisfied. From there I proceeded to jump from job to job every 15 months, chasing some shiny object and trying to catch the brass ring. Again, that didn’t work out too well and I found myself getting angry again. Then I started Incite and was a lot happier. I managed to remember what I liked to do and then start to address some of my deeply buried issues. No, I’m not going to bare my soul like Bill Brenner, but we all have demons to face and at that point I started facing my own. I took a detour back into the vendor world for 15 months, and then sold Rich and Adrian a bill of goods to let me hang my shingle at Securosis.

10 months in, I’m having the time of my life. I’m thinking this is the contented phase. I’ve been working hard, at everything. Physically, I’m in the best shape I’ve been in since my early 20’s. Mentally I’m making progress, working to accept what’s happening and stop looking forward at the expense of being present. I’m happy with what I do and what I have. My family loves me and I love them. What else does a guy need?

I’m still fighting demons, and I probably always will. The hope is that my epic battles will be fewer and farther between over time. I’m still screwing things up, and I’ll probably always do that too. That’s an entrepreneur’s curse. I’m also learning new things almost every day, and when that stops it’s time to move on to the Great Unknown. As I look back, I figured out what my Ultimate Question is: “When do you realize it’s a game and you should enjoy the ride, both the ups and the downs?” Right. For me, the answer is 42. – Mike.
Photo credits: “42” originally uploaded by cszar

Recent Securosis Posts

  • Friday Summary: September 30, 2010
  • Monitoring up the Stack: DAM, Part 2
  • App Monitoring, Part 1
  • App Monitoring, Part 2
  • Understanding and Selecting a DLP Solution
  • A Wee Bit on DLP SaaS
  • “DLP Light” and DLP Features

NSO Quant Posts

  • The End is Near!
  • Comprehensive Index of Posts

Incite 4 U

Get on the (security incident) cycle – Good summary here by Lenny Zeltser covering a presentation from our hero Richard Bejtlich about how he’s built the Incident Response team at GE to deal with things like well-funded patient attackers (note I didn’t use the a(blank)t acronym). Of course there will always be failures, but the question is about organizational commitment to detecting adversaries and putting the right capabilities in place to protect your organization. And to look at security as a process and – dare I say it – a lifecycle. That means you need to focus on all aspects – before, during, and after the attack. Amazingly enough, Rich and I are starting another blog series on exactly this topic in about a week. – MR

Save the children… with robots – The state of technology education in this country is simply embarrassing. Everyone talks about how kids use a mouse before they can read, but how many of them understand how a computer works? You’d think today’s teenagers would know a hard drive from RAM, but not if they rely on their (standard) school to teach them. However, they are pretty good at putting cats in PowerPoints. Our friend Chris Hoff is trying to change this with a hacking conference dedicated to kids… called, appropriately enough, HacKid. It’s an amazing idea, with everything from Lego robots to online safety covered, and if you have kids of the right age, or just want to support it, I highly recommend attending or getting involved. – RM

No trust for you! – Despite being a big fan of monitoring technologies, I thought the Trust No One, Monitor Everything position was a bit over the top. The “monitor everything” approach fails for exactly the same reasons “encrypt everything” fails: a single technology cannot solve every problem. Monitoring is just another security tool, and before you try to saw wood with a hammer, remember attacks that bypass WAF, IDS, App Monitoring, and DAM are well documented. Don’t


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.