Ticker Symbol: Hack – *Updated*

There is a ticker symbol HACK that tracks a group of publicly traded "Cyber Security" firms. Given how hot everything 'Cyber' is, HACK may do just fine – who knows? But perhaps one for breached companies (BRCH?) would be better. For you security geeks out there who love to talk about the cost of breaches, let's take a look at the stock prices of several big-name firms that have been breached, measured from the date of the breach through now:

  • Sony (breached 11/24/14): up 28.3%, vs. 2.2% for the S&P 500
  • Home Depot (breached 9/9/14): up 31.3%, vs. 6.4% for the S&P 500
  • Target (breached 12/19/13): up 23.8%, vs. 16.9% for the S&P 500
  • Heartland (breached 1/20/09): up 250.1%, vs. 162.7% for the S&P 500
  • Apple (breached 9/2/14): up 28%, vs. 6% for the S&P 500

This is a small sample of companies, but each of their stocks has substantially outperformed the S&P 500 (which has been on a tear in the last year or so) from the time of its breach through now. "How long until activist investors like Icahn pound the table demanding more dividends, stock buybacks, and would it kill you to have a breach?" Food for thought.
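For anyone who wants to sanity check the comparison, it is just total price return from the breach date to today, computed for the stock and for the S&P 500 over the same window. A minimal sketch, with placeholder prices rather than real quotes:

```python
# Sketch: compare a stock's return since its breach date with the S&P 500
# over the same window. Price values below are placeholders for illustration.

def pct_return(price_then: float, price_now: float) -> float:
    """Total price return, as a percentage, between two observations."""
    return (price_now - price_then) / price_then * 100

# Hypothetical example: (price at breach date, price now) for the stock and the index
stock_then, stock_now = 50.00, 64.00        # placeholder prices
sp500_then, sp500_now = 1800.00, 2070.00    # placeholder index levels

stock_ret = pct_return(stock_then, stock_now)
index_ret = pct_return(sp500_then, sp500_now)
print(f"Stock: {stock_ret:.1f}%  S&P 500: {index_ret:.1f}%  Outperformance: {stock_ret - index_ret:.1f} pts")
```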


Ticker Symbol: HACK

I think the financial equivalent of jumping the shark is Wall Street creating an ETF based on your theme. If so, cybersecurity has arrived. The ISE Cyber Security Index provides a benchmark for investors interested in tracking companies actively involved in providing technology and services that are designed to protect data, networks, hardware, software, and other cyber-dependent devices from unauthorized access and attacks. The index includes twenty-nine constituent companies, including VASCO Data Security International Inc. (ticker: VDSI), Palo Alto Networks Inc. (ticker: PANW), Symantec Corp. (ticker: SYMC), Juniper Networks Inc. (ticker: JNPR), FireEye Inc. (ticker: FEYE), and Splunk Inc. (ticker: SPLK).

Before you invest your life savings in ETFs, listen to Vanguard founder Jack Bogle: "The ETF is like the famous Purdey shotgun that's made over in England. It's great for big game hunting, and it's great for suicide."

Two interesting things to look at in ETFs are fees and weighting. The fees on this puppy look to be 0.75% – outlandishly high. For comparison, Vanguard's Dividend Growth ETF has a 0.1% fee. It is true that foreign ETFs carry higher fees (to access foreign markets), but I do not know why HACK should have such a high fee – the shares it lists are liquid and widely traded. Foreign issues themselves do not seem to dictate such a lavish expense ratio.

As of October 30, 2014, the Underlying Index had 30 constituents, 6 of which were foreign companies, and the three largest stocks and their weightings in the Underlying Index were VASCO Data Security International, Inc. (8.57%), Imperva, Inc. (6.08%), and Palo Alto Networks, Inc. (5.49%). I cannot tell exactly how the ETF is weighted, but if it follows the weighting on ISE then investors will wind up almost 10% into Vasco. The largest members of the index, per ISE, are:

  • Vasco: 9.17%
  • Imperva: 7.57%
  • Qualys: 5.48%
  • Palo Alto: 5.35%
  • Splunk: 5.18%
  • Infoblox: 5.04%

That is nearly 40% in the top six holdings – pretty concentrated. The old school way to index is to weight by market capitalization, but that has been shown to be imperfect because size alone does not determine quality. The preferred weighting for the last few years (since Rob Arnott's work) has been by value, which bases the percentage of each holding on value metrics like P/E. There is considerable evidence that this works much better than market cap. But we still have a problem: many tech companies, especially new ones, have no earnings! From reverse engineering the index membership, it looks like they are using Price/Sales for weighting. For example: Vasco has a Price/Sales ratio of 6.1, Palo Alto has a P/S ratio of 13.5, and Vasco has about twice the weighting of Palo Alto because it is about twice as cheap on a Price/Sales basis. This is probably not the best way to do it, but it is probably the best available way: market cap is flawed and would miss all the upstarts, and the lack of earnings makes value metrics a non-starter. The weightings appear roughly right per Price/Sales, but I could not get the numbers to work precisely. It is possible they are using an additional weighting factor such as Relative Strength.

Needless to say, this is all in the spirit of "As the Infosec Industry Turns…" and not financial advice of any kind. This is not a recommendation to buy, sell, or hold any of the issues mentioned. In the meantime remember the fees, and this from Jack Bogle: "Performance comes and goes but cost goes on forever."

HACK SEC filing
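To make the reverse engineering concrete, here is a minimal sketch of how an inverse Price/Sales weighting scheme works: each holding gets a weight proportional to 1 / (P/S), so a stock that is twice as cheap on sales gets roughly twice the weight. Only the Vasco and Palo Alto ratios come from the post; the rest are made-up placeholders, and this illustrates the general scheme rather than the index's actual methodology.

```python
# Sketch: weight holdings by inverse Price/Sales, so cheaper-on-sales names
# get proportionally larger weights. Only the Vasco and Palo Alto ratios come
# from the post; the others are placeholders for illustration.

price_to_sales = {
    "VDSI (Vasco)": 6.1,       # from the post
    "PANW (Palo Alto)": 13.5,  # from the post
    "IMPV (Imperva)": 7.0,     # placeholder
    "QLYS (Qualys)": 9.0,      # placeholder
}

inverse = {ticker: 1.0 / ps for ticker, ps in price_to_sales.items()}
total = sum(inverse.values())
weights = {ticker: inv / total for ticker, inv in inverse.items()}

for ticker, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{ticker:20s} {w:6.2%}")
# Vasco ends up with roughly twice Palo Alto's weight, matching the post's observation.
```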


The future of security is embedded

I do not think Mike's and Rich's points are at odds at all. Mike's post lays out what in my view is infosec's Achilles heel: lack of strategic alignment with the business. There are very few things that basically everyone in infosec agrees on, but a near universal one is that you can, should, and will never show a Return on Security Investment. "The business" is just supposed to accept this, apparently, and keep increasing the budget year after year; the People's Republic of Information Security shall remain unsullied by such things as profit and loss, and breeze merrily along. Of course in the very next breath, after waving away the petty request for return on investment, infosec teams routinely complain that "the business" doesn't get security. I humbly suggest that while that may be true, security doesn't actually get "the business" either.

Rich's approach to this issue is quite pragmatic: close collaboration. Business is already driven by externalities – infosec is not unique in this regard, although security does have different drivers (though they get more closely aligned every day). I like Rich's approach, but I would take it one step further. Andy Jaquith tweeted the other day on OWASP that 'ITsec guys need to "embed" into existing OSS projects not make new ones' – this applies to security teams in spades. Embed security in the business. We are so used to trotting out things like breach statistics, but what is "the business" supposed to get from these meaningless out-of-context numbers? They look at the world in terms of transaction volume, throughput, customer retention, cash flows, ARPU, and other business-relevant metrics. Every industry has its own key metrics – does your security team know yours?

When General David Petraeus took over US forces in Iraq, one of the first things he changed was how to measure success. The previous command used classic military metrics – how many American soldiers were killed and how many enemy combatants we killed. Petraeus changed the measurement, and thus the mindset. He used metrics like how many little old ladies can get safely to the market in Baghdad to buy oranges. This is a totally different way of measuring success and failure.

There are precedents for this kind of mind shift in certain industry segments. Banks have sophisticated fraud detection teams and schemes. They are able to map events and compare fraud rates against total transactions and customer interactions. That is a simple way to communicate fraud control program effectiveness to "the business", once you stop looking at security as something separate and see it as part of the whole.

The practical point here is to very clearly understand your business' competitive advantage. There is no generic answer – business imperatives and competitive differentiators vary from one business to the next. That is a major reason there is no magic set of security metrics that broadly addresses the whole industry. You need to know what your moat is, and to organize metrics and processes around it. If you are Walmart you care about anything that drives up cost, because a big part of your moat is being the low-cost provider. If you are an ecommerce company then availability is a big moat. You can bet that Amazon can tell you very precisely what 5 minutes of downtime or an extra second to load a page costs in real dollars.
All communication with "the business" should be within this context – then we can map our own internal infosec issues, such as attacker innovation and operational efficiency, onto a framework that is much more tractable for productive collaboration with "the business." One more thing: there is no security. There is just "the business" – with everyone sharing the same mission and working together as best we can.


Counterpoint: KNOX vs. AZA throwdown

Adrian makes a number of excellent points. Enterprises need better usability and management for mobile devices, but co-mingling these goals complicates solutions. Adrian contrasted two approaches, AZA and KNOX, which I also want to discuss. Let me start by saying I think we are in the first or second inning for mobile. I do not expect today's architectural choices to stick for 10+ years. I think we will see substantial evolution, root and branch, for a while.

Here is a good example of a mobile project: The Wall St. Journal just published their 1,000th edition on iPad. It is a great example of a mobile app – it works in both offline and online modes, is easy to navigate, and is packed with information (okay – just ignore the editorial page) – a great success. The way they started the project is instructive:

Three and a half years ago, The Wall Street Journal locked six people in a windowless room and threw down a secret challenge: Build us an iPad app. You have six weeks. And so we did. We started with a blank slate – no one had ever seen a tablet news app before.

This is not uncommon for mobile projects. A few takeaways: We are learning our lessons as we go. There is an architectural vision, but it evolves quickly and adapts – and did I mention we are learning as we go? Evolution today is less about enterprise-level grand architecture (we already have those, called iOS and Android, themselves evolving while we scramble to keep up) – it is incremental improvement. Looking at AZA vs. KNOX from ground level, I see attractive projects for the enterprise, with AZA more focused on the here and now. KNOX seems to be shooting for DOD today, and the enterprise down the road.

This all reminds me of how Intel does R&D. They roll out platforms with a tick/tock pattern. Ticks are whole new platforms and tocks are incremental improvements. To me AZA looks like a classic tock: it cleans up some things for developers, improves capabilities of existing systems, and connects some dots. KNOX is a tick: it is a new ballgame, new management, and a new way to write apps. That doesn't mean KNOX cannot succeed, but would the WSJ start a new project by learning a new soup-to-nuts architecture just to handle security requirements (remember: you need to launch in six weeks)? I know we as security people wish they would, but how likely is that in the near term, really? The positive way to look at this choice is that, for a change, we have two interesting options.

I may be overly pessimistic. It is certainly possible that soup-to-nuts security models – encompassing hardware, MAC, apps, and platforms – will rule from here on out. There is no doubt plenty of room for improvement. But the phrase I keep hearing on mobile software projects is MVP: Minimum Viable Product. KNOX doesn't fit that approach – at least not for most projects today. I can see why Samsung wants to build a platform – they do not want to be just another commoditized Android hardware manufacturer, undifferentiated from HTC or Googorola. But there is more to it than tech platforms – what do customers want? There is at least one very good potential customer for KNOX, with DOD-type requirements. But will it scale to banks? Will KNOX scale to healthcare, manufacturing, and ecommerce? That is an open question, and app developers in those sectors will determine the winner(s).


Let’s Get Physical—Road Rules Edition

It's a new year, so let's get physical and personal. I wondered what people do about physical security specifically – how do you protect your laptop while on business travel? Hotels, airports, cars, etc. We have all seen that "road rules" can be pretty different, so what precautions do you take to ensure your laptop and devices return home safely? Do you always carry your laptop? Carry a lock? Have ways to hide it? It seems like there are no real 100% answers or 'best' practices – just least-bad practices – and the answers I hear are an interesting mix of personal and technology options. I asked a number of folks, and here is what they said. (Please comment with your own "least bad" approach.)

  • "I usually carry my laptop. But have left it in an in-room safe and locked in my bag. I don't leave anything out and visible."
  • Hotels: "On rare occasions that I leave it in my room, I leave the Do Not Disturb sign up. Disinformation FTW. I think the real answer is try to travel with tablets and not laptops if possible."
  • "I try to avoid traveling with a laptop anymore, although I still need it for conferences usually."
  • "I use the do not disturb trick, and I never use the hotel safe since that's the first place they'll look. I bury my laptop in my clothes bag when I leave it. With an 11-inch MacBook Air, that's easy. But the truth is it is disposable for me. I'd be out the money, but being well encrypted I don't worry about data loss. And everything is synced anyway." I have only been able to do this for the past few years thanks to Dropbox and a few other things. (This one is from our very own Rich Mogull.)
  • "I rarely travel with a laptop, and keep a short lock on iOS devices."
  • On TSA: "I don't care about TSA – nothing to hide. But I do shut down if I'm someplace where I worry about cold boot (China)."

What are your road rules?


Monitoring up the Stack: User Activity Monitoring

The previous Monitoring up the Stack post examined Identity Monitoring, which is a set of processes to monitor events around provisioning and managing accounts. The Identity Monitor is typically blind to one very important aspect of accounts: how they are used at runtime. So you know who the user is, but not what they are doing. User Activity Monitoring addresses this gap by reporting not on how accounts were created and updated in the directory, but on user actions on systems and applications, linked back to assigned roles.

Implementing User Activity Monitoring

User Activity Monitors can be deployed to monitor access patterns and system usage. The collected data regarding how the system is being used, and by whom, is then sent to the SIEM/Log Management system. This gives the SIEM/Log Management system data that is particularly helpful for attribution purposes. Implementing User Activity Monitoring rests on four key decisions. First, what constitutes a user? Next, what activities are worth monitoring? Third, what does typical activity look like, and how do we define policies to scope acceptable use? And finally, where and how should the monitor be deployed?

The question of what constitutes a user seems simple, and on one level it is. Most likely a user is an account in the corporate or customer directory, such as Active Directory or LDAP. But sometimes there are accounts for non-human system users, such as service accounts and machine accounts. In many systems service accounts, machine accounts, and other forms of automated batch processing can do just as much damage as any other account/function. After all, these features were programmed and configured by humans, and are subject to misuse like any other accounts, so they are likely worth monitoring as well.

Drilling down further into users, how are they identified? To start with, there is probably a username. But remember that the data the User Activity Monitor sends to the SIEM/Log Management system will be used after the fact. What user data will help a security analyst understand the user's actions and whether they were malicious or harmful? Several data elements are useful for building a meaningful user record:

  • Username: The basic identifier for a user in the system, including the namespace or other protocol-specific data.
  • Identity Provider: The name of the directory or database that authenticated the user.
  • Group/Role Membership: Any group or role information assigned to the user account, or other data used for authorization purposes.
  • Attributes: Was the user account assigned any privileges or capabilities? Are there time of day or location attributes that are important for verifying user authenticity?
  • Authentication Information: If available, information about how the user was authenticated can be helpful. Was the user dialed in from a remote location? Did they log in from the office? When did they log in? And so on.

A log entry that reads "user=rajpatel;" is far less useful than one that contains "user=rajpatel; identityprovider=ExternalCORPLDAP; Group=Admin; Authenticated=OTP". The more detailed the information about the user and their credential, the more precision the analyst has to work with. Usually this data is easy to get at runtime – it is available in security tokens such as SAML and Kerberos – but the monitor must be configured to collect it.
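As an illustration of the enriched record described above, here is a minimal sketch of how a monitor might assemble a user activity event before shipping it to the SIEM/Log Management system. The field names and the key=value output are illustrative assumptions, not a specific product's schema.

```python
# Sketch: assemble an enriched user activity record like the one described above.
# Field names and formatting are illustrative assumptions, not a product schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserActivityEvent:
    username: str                  # basic identifier, e.g. "rajpatel"
    identity_provider: str         # directory or database that authenticated the user
    groups: list = field(default_factory=list)   # group/role membership
    auth_method: str = "unknown"   # how the user authenticated (e.g. OTP)
    action: str = ""               # what the user did (login, logout, read, ...)
    source_ip: str = ""            # where the action came from
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_log_line(self) -> str:
        """Render a key=value log line a SIEM/Log Management collector can parse."""
        return (f"time={self.timestamp} user={self.username} "
                f"identityprovider={self.identity_provider} "
                f"group={','.join(self.groups)} authenticated={self.auth_method} "
                f"action={self.action} src={self.source_ip}")

# Example: the kind of entry the post contrasts with a bare "user=rajpatel;"
event = UserActivityEvent(
    username="rajpatel",
    identity_provider="ExternalCORPLDAP",
    groups=["Admin"],
    auth_method="OTP",
    action="login",
    source_ip="10.1.2.3",
)
print(event.to_log_line())
```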
Now that we see how to identify a user, what activities are of interest to the SIEM/Log Management system? The types of activities mentioned in other Monitoring up the Stack posts can all be enriched through the user data model described above; in addition there are some user-specific events worth tracking, including:

  • User Session Activities: events that create, use, and terminate sessions, such as login and logout events.
  • Security Token Activities: events that issue, validate, exchange, and terminate security tokens.
  • System Activities: events based around system exceptions, startups, shutdowns, and availability issues.
  • Platform Activities: events from specific ports or interfaces, such as USB drive access.
  • Inter-Application Activities: events performed by more than one application on behalf of the user, all linked to the same business function.

Now that we know what kind of events we are looking for, what do we want to do with them? If we are monitoring, we need to specify policies that define appropriate use, and what should be done when an event – or in some cases a series of events – occurs. Policy setup and administration is a giant hurdle with SIEM systems today, and adding user activity monitoring – or any other form of monitoring – requires the same effort to set up and adjust over time. Based on an event type listed above, you select the behavior you want to monitor and define what users can and cannot do. User monitoring systems, at minimum, offer attribute-based analysis. More advanced systems offer heuristics and behavioral analysis; these provide flexibility in how users are monitored, and reduce false positives as the analysis adapts to user actions over time.

The final step is deployment of the User Activity Monitor. The logical place to start is the Identity repository, because repositories can write auditable log events when they issue, validate, and terminate sessions and security tokens; thus the Identity repository can report to the SIEM/Log Management system on which users were issued which sessions and tokens. This location can be made more valuable by adding User Activity Monitors closer to the monitored resources, such as Web Application Firewalls and Web Access Managers. These systems can enhance visibility beyond simply which tokens and sessions were issued (from the Identity repository), adding information on how they were used and what the user accessed.

Correlation: Putting the Data to Work

With monitors situated to report on User Activity, the next step is to use the data. The data and event models described above provide an enriched model that enables the analyst to trace events back upstream. For example, the analyst can set up rules that identify known good and bad behavior patterns to reflect authorized usage and potentially malicious patterns. Authorized usage patterns generally reflect the use case flows that users follow. In most cases these do


Monitoring up the Stack: Identity Monitoring

As we continue up the Monitoring stack, we get to Identity Monitoring, which is a distinct set of concerns from User Activity Monitoring (the subject of the next post). In Identity Monitoring, the SIEM/Log Management systems gain visibility into the provisioning and Identity Management processes that enterprises use to identify, store, and process user accounts to prepare the user to use the system. Contrast that with User Activity Monitoring, where SIEM/Log Management systems focus on how the user interacts with the system at runtime and look for examples of bad behavior.

As an example, do you remember when you got your driver's license? All the processes you went through at the DMV – getting your picture taken, verifying your address, and taking the driving tests – are related to provisioning an account and getting credentials created; that's Identity Management. When you are asked to provide your driver's license, say when checking in at a hotel, or by a police officer for driving too fast, that's User Activity Monitoring. Identity Monitoring is an important first step because we need to associate a user's identity with network events and system usage in order to perform User Activity Monitoring. Each requires a different type of monitoring and a different type of report; today we tackle Identity Management (and no, we won't make you wait in line like the DMV).

To enable Identity Monitoring, the SIEM/Log Management project inventories the relevant Identity Management processes (such as Provisioning), data stores (such as Active Directory and LDAP), and technologies (such as Identity Management suites). The inventory should include the Identity repositories that store accounts used for access to the business' critical assets. In the old days it was as simple as going to RACF and examining the user accounts and rules for who was allowed to access what. Nowadays there can be many repositories that store and manage account credentials, so inventorying the critical account stores is the first step.

Process

The next step is to identify the Identity Management processes that govern the Identity repositories. How did the accounts get into LDAP or Active Directory? Who signs off on them? Who updates them? There are many facets to consider in the Identity Management lifecycle. The basic Identity Management process includes the following steps:

  • Provisioning: account creation and registration
  • Propagating: synchronizing or replicating the account to the account directory or database
  • Access: accessing the account at runtime
  • Maintenance: changing account data
  • End of Life: deleting and disabling accounts

The Identity Monitoring system should verify events at each process step, record the events, and write the audit log messages in a way that they can be correlated for security incident response and compliance purposes. This links the event to the account(s) that initiated and authorized the action. For example, who authorized the accounts that were provisioned? What manager(s) authorized the account updates?
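To make that concrete, here is a minimal sketch of the kind of auditable lifecycle event an Identity Monitor might record and forward to the SIEM/Log Management system, linking an account change to the actor who initiated it and the approver who authorized it. The field names and values are illustrative assumptions, not a particular product's format.

```python
# Sketch: an identity lifecycle audit event that ties an account change to the
# actor who made it and the manager who approved it. Field names are
# illustrative assumptions, not a specific product's schema.
import json
from datetime import datetime, timezone

def lifecycle_event(step: str, account: str, initiated_by: str,
                    authorized_by: str, repository: str, details: dict) -> str:
    """Build a JSON audit record for one step of the identity lifecycle
    (Provisioning, Propagating, Access, Maintenance, or End of Life)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "lifecycle_step": step,
        "account": account,
        "initiated_by": initiated_by,
        "authorized_by": authorized_by,
        "repository": repository,
        "details": details,
    }
    return json.dumps(record)

# Example: an account moves from IT to a trading role, but old entitlements
# are (incorrectly) left in place -- exactly the situation the monitor should surface.
print(lifecycle_event(
    step="Maintenance",
    account="jkerviel",
    initiated_by="hr_feed",
    authorized_by="manager_trading_desk",
    repository="CorpActiveDirectory",
    details={"added_roles": ["trader"], "removed_roles": []},
))
```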
As we saw in the recent Societe Generale case, Jerome Kerviel (the trader who lost billions of the bank's money) was originally an IT employee who moved over to the trading desk. When he made the move from IT to trading, his account retained his IT privileges and gained new trading privileges. These snowballing entitlements enabled him to both execute trades and remove logs to hide evidence. It seems likely there was a process mishap in the account update and maintenance rules that allowed this to happen, and it shows how important the Identity Management processes are to access control.

In complex systems, the Identity Management process is often automated using an Identity Management suite. These suites generate reports for compliance and security purposes, and those reports can be published to the SIEM/Log Management system for analysis. Whether automated with a big name suite or not, it's important to start Identity Monitoring by understanding the lifecycle that governs the account data for the accounts in your critical systems. To fully close the loop, some processes also reconcile the changes with change requests (and authorizations) to ensure every change is requested and authorized.

Data

In addition to identifying the Identity repositories and the management processes around them, the data itself is useful to inform the auditable messages that are published to the SIEM/Log Management systems. The data aspects for collection typically include the following:

  • User Subject (or entity), which could be a person, an organization, or a host or application.
  • Resource Object, which could be a database, a URL, a component, a queue, or a Web Service.
  • Attributes, such as Roles, Groups, and other information used to make authorization decisions.

The identity data should be monitored to record any lifecycle events, such as Create, Read, Update, Delete, and Usage events. This is important to give the SIEM/Log Management system an end-to-end view of both the account lifecycle and the account data.

Challenges

One challenge in Identity Monitoring is that the systems to be monitored (such as authentication systems) sport byzantine protocols and are not easy to get data and reports out of. This may require some extra spelunking to find the optimal protocol for communicating with the Identity repository. The good news is this is a one-time effort during implementation – these protocols do not change frequently. Another challenge is the accuracy of associating the user identity with the activity a SIEM collects. Simply matching user ID to IP or MAC address is limited, so heuristic and deterministic algorithms are used to help associate users with events. The association can be performed by the collector, but more commonly this feature is integrated within the SIEM engine as a log/event enrichment activity. The de-anonymization occurs as data is normalized and stored with the events. Federated identity systems, which separate authentication, authorization, and attribution, create additional challenges, because the end-to-end view of the account in both the Identity Provider and the Relying Party is not usually easy to attain. Granted, this is the point of Federation, which resolves the relationship at runtime, but it's worth pointing out the difficulty this presents to end-to-end


Monitoring up the Stack: App Monitoring, Part 2

In the last post on application monitoring, we looked at why applications are an essential "context provider" and interesting data source for SIEM/Log Management analysis. In this post, we'll examine how to get started with the application monitoring process, and how to integrate that data into your existing SIEM/Log Management environment.

Getting Started with Application Monitoring

As with any new IT effort, it's important to remember that it's People, Process, and Technology – in that order. If your organization has a Build Security In software security regime in place, you can leverage those resources and tools for building visibility in. If not, application monitoring provides a good entree into the software security process, so here are some basics to get started with Application Monitoring. Application Monitors can be deployed as off-the-shelf products (like WAFs), or delivered as custom code. However they are delivered, the design of the Application Monitor must address these issues:

  • Location: Where application monitors may be deployed; what subjects, objects, and events are to be monitored.
  • Audit Log Messages: How the Audit Log Observers collect and report events; these messages must be useful to the human(!) analysts who use them for incident response, event management, and compliance.
  • Publishing: The way the Audit Log Observer publishes data to a SIEM/Log Manager must be robust and implement secure messaging, to provide the analyst with high-quality data to review and to avoid creating YAV (Yet Another Vulnerability).
  • Systems Management: Making sure the monitoring system itself is working and can respond to faults.

Process

The process of integrating your application monitoring data into the SIEM/Log Management platform has two parts. First, identify where and what type of Application Monitor to deploy. Similar to the discovery activity required for any data security initiative, you need to figure out what needs to be monitored before you can do anything else. Second, select the way to communicate from the Application Monitor to the SIEM/Log Management platform. This involves tackling data formats and protocols, especially for homegrown applications where the communication infrastructure may not exist.

The most useful Application Monitor provides a source of event data not available elsewhere. Identify key interfaces to high priority assets such as message queues, mainframes, directories, and databases. For those interfaces, the Application Monitor should give visibility into the message exchanges to and from the interfaces, session data, and the relevant metadata and policy information that guides its use. For applications that pass user content, the interception of messages and files provides the visibility you need. In terms of form factor for Application Monitor deployment (in specialized hardware, in the application itself, or in an Access Manager), performance and manageability are key aspects, but less important than what subjects, objects, and events the Application Monitor can access to collect and verify data. Typically the customer of the Application Monitor is a security incident responder, an auditor, or other operations staff. The Application Monitor domain model described below provides guidance on how to communicate in a way that enables this customer to rely on the information found in the log in a timely way.
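Before getting into the domain model, here is a minimal sketch of the publishing side: an application-level observer that formats an audit record and ships it to a SIEM/Log Management collector over syslog. The record fields, collector address, and transport choice are illustrative assumptions rather than a prescribed format.

```python
# Sketch: an application-side audit log observer that publishes structured
# events to a SIEM/Log Management collector via syslog. The record fields and
# collector address are illustrative assumptions, not a required format.
import json
import logging
import logging.handlers
from datetime import datetime, timezone

# Remote syslog is one simple, widely supported transport to a log collector.
# In production you would want reliable, secure delivery (e.g. TLS syslog).
handler = logging.handlers.SysLogHandler(address=("siem-collector.example.com", 514))
logger = logging.getLogger("app.audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def publish_audit_event(event_type: str, user: str, outcome: str, **details) -> None:
    """Format an audit record as JSON and publish it to the collector."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "SessionCreated", "InputValidationFailure"
        "user": user,
        "outcome": outcome,         # e.g. "success", "denied", "error"
        "details": details,
    }
    logger.info(json.dumps(record))

# Example: record a session creation event observed by the application
publish_audit_event("SessionCreated", user="rajpatel", outcome="success",
                    source_ip="10.1.2.3", app="order-entry")
```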
Application Monitor Domain Model

The Application Monitor model is fairly simple to understand. The core parts of the Application Monitor include:

  • Observer: A component that listens for events.
  • Event Model: Describes the set of events the Observer listens for, such as Session Created and User Account Created.
  • Audit Log Record Format: The data model for messages the Observer writes to the SIEM/Log Manager, based on event type.
  • Audit Log Publisher: The message exchange patterns, such as publish and subscribe, used to communicate the Audit Log Records to the SIEM/Log Manager.

These areas should be specified in some detail with the development and operations teams to make sure there is no confusion during the build process (building visibility in), but the same information is needed when selecting off-the-shelf monitoring products. For the Event Model and Audit Log Record, there are several standard log/event formats which can be leveraged, including CEE (from Mitre and ArcSight), XDAS (from Open Group), and PCI DSS (from you-know-who). CEE and XDAS give general purpose frameworks for the types of events the Observer should listen for and which data should be recorded; the PCI DSS standard is more specific to credit card processing. All these models are worth reviewing to find the most cost-effective way to integrate monitoring into your applications, and to make sure you aren't reinventing the wheel.

Tailor the standards to your specific deployment to avoid the "drinking from the firehose" effect, where the speed and volume of incoming data make the signal-to-noise ratio unacceptable. As we like to say at Securosis: just because you can doesn't mean you should. Or think about phasing in the application monitoring process, where you collect the most critical data initially and then expand the scope of monitoring over time to gain a broader view of application activity.

The Event Model and Audit Records should collect and report on the areas described in the previous post (Access Control, Threats, Compliance, and Fraud). However, if your application is smart enough to detect malice or misuse, why wouldn't you just block it in the application anyway? Ay, there's the rub. The role of the monitor is to collect and report, not to block. This gets into a philosophical discussion beyond the scope of this research, but for now suffice it to say that figuring out if and what to block is a key next step beyond monitoring. The Event Model and Audit Records collected should be configurable (not hard-coded) in a rule or other configuration engine. This enables the security team to flexibly turn logging, data gathering, and other actions up and down as needed, without recompiling and redeploying the application.

The two main areas the standards do not address are the Observer and the Audit Log Publisher. The optimal placement of the Observer is often a choke point with visibility into a boundary's inputs and outputs (for example, crossing technical boundaries like Java to .NET, or from the web to a mainframe). Choke points


Monitoring up the Stack: Application Monitoring, Part 1

As we continue to investigate additional data sources to make our monitoring more effective, let's now turn our attention to applications. At first glance, many security practitioners may think applications have little to offer SIEM and Log Management systems. After all, applications are built on mountains of custom code, and security and development teams often lack a shared collaborative approach for software security. However, application monitoring for security should not be dismissed out of hand. Closed-minded security folks miss the fact that applications offer an opportunity to resolve some of the key challenges to monitoring. How? It comes back to a key point we've been making through this series: the need for context. If knowing that Node A talked to Node B helps pinpoint a potential attack, then network monitoring is fine. But both monitoring and forensics efforts can leverage information about what transaction executed, who signed off on it, who initiated it, and what the result was – and you need to tie into the application to get that context.

In real estate, it's all about location, location, location. By climbing the stack and monitoring the application, you collect data located closer to the core enterprise assets like transactions, business logic, rules, and policies. This proximity to valuable assets makes the application an ideal place to see and report on what is happening at the level of user and system behavior, which can (and does) establish patterns of good and bad behavior that can provide additional indications of attacks. The location of the application monitor is critical for tracking both authorized users and threats, as Adrian pointed out in his post on Threat Monitoring:

This challenge is compounded by the clear focus on application-oriented attacks. For the most part, our detection only pays attention to the network and servers, while the attackers are flying above that. It's kind of like repeatedly missing the bad guys because they are flying at 45,000 feet, but you cannot get above 20,000 feet. You aren't looking where the attacks are actually happening, which obviously presents problems.

Effective monitoring requires access to the app, the data, and the system's identity layers. They are the core assets of interest for both legitimate users and attackers trying to compromise your data. So how can we get there? We can look to software security efforts for some clues. The discipline of software engineering has made major strides in building security into applications over the last ten years. From static analysis, to threat modeling, to defensive programming, to black box scanners, to stronger identity standards like SAML, we have seen the software engineering community make real progress on improving overall application security. From the current paradigm of building security in, the logical next step is building visibility in: instrumenting applications with monitoring capabilities that collect and report on application use and abuse. Application Monitoring delivers several essential layers of visibility to SIEM and Log Management:
  • Access control: Access control protects applications (including web applications) from unauthorized usage. But the access control container itself is often attacked via methods such as Cross-Site Request Forgery (CSRF) and spoofing. Security architects rely heavily on access control infrastructure to enforce security at runtime, and this data should be pumped into the SIEM/Log Management platform to monitor and report on its efficacy.
  • Threat monitoring: Attackers specialize in crafting unpredictable SQL, LDAP, and other commands that are injected into servers and clients to troll through databases and other precious resources. The attacks are often not obviously attacks until they are received and processed by the application – after all, "DROP TABLE" is a valid string. The Build Security In school has led software engineers to build input validation, exception management, data encoding, and data escaping routines into applications to protect against injection attacks, but it's crucial to collect and report on a possible attack, even as the application is working to limit its impact. Yes, it's best to repel the attack from within the application, but you also need to know about it, both to provide a warning to more closely monitor other applications, and in case the application is successfully compromised – the logs must be securely stored elsewhere, so even in the event of a complete application compromise, the alert is still received. (A minimal sketch of this idea appears at the end of this post.)
  • Transaction monitoring: Applications are increasingly built in tiers, components, and services, where the application is composed dynamically at runtime. So the transaction messages' state is assembled from a series of references and remote calls, which obviously can't be monitored from an infrastructure view. The solution is to trigger an alert within the SIEM/Log Management platform when the application hits a crucial limit or other indication of malfeasance; then, by collecting critical information about the transaction record and history, the time required to investigate potential issues can be reduced.
  • Fraud detection: In some systems, particularly financial systems, the application monitoring practice includes velocity checks and throttles to record behaviors that indicate the likelihood of fraud. In more sophisticated systems, the monitors are active participants (not strictly monitors) that change the data and behavior of the system, for example by automatically flagging accounts as untrustworthy and sending alerts to the fraud group to start an investigation based on monitored behavior.

Application monitoring represents a logical progression from "build security in" practices. For security teams actively involved in building in security, the organizational contacts, domain knowledge, and tooling should already be in place to execute on an effective application monitoring regime. In organizations where this model is still in its early days, building visibility in through application monitoring can be an effective first step, but more work is required to set up the people, process, and technologies that will work in the environment. In the next post, we'll dig deeper into how to get started with this application monitoring process, and how to integrate the data into your existing SIEM/Log Management environment.
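As promised above, here is a minimal sketch of the threat monitoring idea: the application validates input, rejects what it can, and still emits an audit event so the SIEM/Log Management platform hears about the attempt even if the application is later compromised. The validation pattern, field names, and logging target are illustrative assumptions only.

```python
# Sketch: validate input inside the application, reject suspicious values, and
# still report the attempt to an external log so the SIEM hears about it even
# if the application is later compromised. Patterns and fields are illustrative.
import json
import logging
import re
from datetime import datetime, timezone

# In practice this handler would point at a remote, securely stored log target.
security_log = logging.getLogger("app.security")
security_log.setLevel(logging.WARNING)
security_log.addHandler(logging.StreamHandler())

# Crude illustrative signatures; real input validation is allow-list based.
SUSPICIOUS = re.compile(r"(drop\s+table|--|<script)", re.IGNORECASE)

def handle_search(user: str, query: str) -> str:
    """Process a search query, reporting (not just blocking) suspicious input."""
    if SUSPICIOUS.search(query):
        security_log.warning(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": "InputValidationFailure",
            "user": user,
            "input": query[:200],   # truncate to keep the log record bounded
        }))
        return "rejected"
    # ... normal processing of a validated query would happen here ...
    return "ok"

print(handle_search("rajpatel", "DROP TABLE users; --"))  # rejected, and reported
print(handle_search("rajpatel", "quarterly revenue"))     # ok
```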


Identity and Access Management Commoditization: a Tale of Two Cities

Identity and access management are generally 1) staffed out of the same IT department, 2) sold in vendor suites, and 3) covered by the same analysts. So this naturally lumps them together in people's minds. However, their capabilities are quite different. Even though identity and access management capabilities are frequently bought as a package, what identity management and access management offer an enterprise are quite distinct. More importantly, successfully implementing and operating these tools requires different organizational models. Yesterday, Adrian discussed commoditization vs. innovation, where commoditization means more features, lower prices, and wider availability. Today I would like to explore where we are seeing commoditization and innovation play out in the identity management and access management spaces.

Identity Management: Give Me Commoditization, but Not Yet

Identity management tools have been widely deployed for the last 5 years and are characterized in many respects as business process workflow tools with integration into somewhat arcane enterprise user repositories such as LDAP, HR, ERP, and CRM systems. So it is reasonable to expect that over time we will see commoditization (more features and lower prices), but so far this has not happened. Many IDM systems still charge per user account, which can appear cheap – especially if the initial deployment is a small pilot project – but grow to a large line item over time. In IDM we have most of the necessary conditions to drive features up and prices down, but there are three reasons this has not happened yet. First, there is a small vendor community – it is not quite a duopoly, but the IDM vendors can be counted on one hand – and the area has not attracted open source on any large scale. Next, there is a suite effect, where the IDM products that offer features such as provisioning are also tied to other products like entitlements, role management, and so on. Last and most important, the main customers who drove initial investment in IDM systems were not feature-hungry IT but compliance-craving auditors. Compliance reports around provisioning and user account management drove the initial large-scale investments – especially in large regulated enterprises. Those initial projects are both costly and complex to replace, and more importantly their customers are not banging down vendor doors for new features.

Access Management – Identity Innovation

The access management story is quite different. The space's recent history is characterized by web application Single Sign On products like SiteMinder and Tivoli WebSEAL. But unlike IDM the story did not end there. Thanks to widespread innovation in the identity field, as well as standards like SAML, OpenID, OAuth, Information Cards, XACML, and WS-Security, we see considerable innovation and many sophisticated implementations. These can be seen in access management efforts that extend the enterprise – such as federated identity products enabling B2B attribute exchange, Single Sign On, and other use cases – as well as web-facing access management products that scale up to millions of users and support web applications, web APIs, web services, and cloud services. Access management exhibits some of the same "suite effect" as identity management, where incumbent vendors are less motivated to innovate, but at the same time the access management tools are tied to systems that are often direct revenue generators such as ecommerce.
This is critical for large enterprises and the mid-market, and companies have shown no qualms about "doing whatever it takes" when moving away from incumbent suite vendors to best of breed, in order to enable their particular usage models.

Summary

We have not seen commoditization in either identity management or access management. For the former, large enterprises and compliance concerns combine to make it a lower priority. In the case of access management, identity standards that enable new ways of doing business for critical applications like ecommerce have been the primary driver, but as the mid-market adopts these categories beyond basic Active Directory installs – if and when they do – we should see some price pressure.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.