This post will discuss the common security domains of enterprise applications, areas where generalized security tools lack the depth to address application- and database-specific issues, and some advice on how to fill the gaps. But first I want to announce that Onapsis has asked to license the content of this research series. As always, we are pleased when people like what we write well enough to get behind our work, and encourage our Totally Transparent Research style. With that, on with today’s post!
Enterprise applications typically address a specific business function: supply chain management, customer relations management, inventory management, general ledger, business performance management, and so on. They may support thousands of users and tie into many other application platforms, but these are specialized applications with very high complexity. It takes years of study to understand the nuances of these systems: the functional components that comprise an application, how they are configured, and what a transaction looks like to that application. Security tools often specialize as well, focusing on a specific type of analysis – such as malware detection – and applying it in particular scenarios such as network flow data, log files, or binary files. They are generally designed to address threats across IT infrastructure at large; very few move up the (OSI) stack to look at generic presentation or application layer threats. And fewer still actually have any knowledge of specific application functions, enough to understand a complex platform like Oracle’s PeopleSoft or SAP’s ERP systems.
Security vendors pay lip service to understanding the application layer, but their competence typically ends at the network service port. Generic events and configuration data outside applications may be covered; internals generally are not. Let’s dig into specific examples:
Understanding Application Usage
The biggest gap and most pressing need is that most monitoring systems do not understand enterprise applications. To continuously monitor enterprise applications you need to collect the appropriate data and then make sense of it. This is a huge problem because data collection points vary by application, and each platform speaks a slightly different ‘language’. For example, platforms like SAP speak in codes. To monitor SAP you need to understand SAP operation codes such as T-codes, and there are a lot of different codes. Second, you need to know where to collect these requests – application and database log files generally do not provide the necessary information. As another example, most Oracle applications rely heavily on stored procedures to efficiently process data within the database. Monitoring tools may see a procedure name and a set of variables in the user request, but unless you know what operation that procedure performs, you have no idea what is happening. Again, you need to monitor the connection between the application platform and the database because audit logs do not provide a complete picture of events; then you need to figure out what the query, code, or procedure request means.
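The idea of translating codes into meaning can be sketched simply. Here is a minimal example of decoding captured SAP T-codes into business actions; the four T-codes shown are a tiny illustrative subset of the full catalog, and the category names are our own invention:

```python
# Minimal sketch: translate SAP transaction codes (T-codes) captured from
# monitored traffic into named business actions, so an alert can report
# "Enter Vendor Invoice" instead of an opaque code. The catalog below is a
# tiny illustrative subset; a real monitor needs the full T-code catalog.
TCODE_CATALOG = {
    "VA01": ("Create Sales Order", "order_entry"),
    "FB60": ("Enter Vendor Invoice", "payables"),
    "SU01": ("User Maintenance", "administration"),
    "SE38": ("ABAP Editor", "development"),
}

# Categories to focus analysis on -- e.g. order entry, where insider
# fraud is most prevalent. These groupings are hypothetical.
SENSITIVE_CATEGORIES = {"order_entry", "payables", "administration"}

def classify(tcode):
    """Return (description, category, sensitive?) for a captured T-code."""
    desc, category = TCODE_CATALOG.get(tcode, ("Unknown transaction", "unknown"))
    return desc, category, category in SENSITIVE_CATEGORIES

print(classify("FB60"))  # ('Enter Vendor Invoice', 'payables', True)
```

Once requests are classified this way, collection and analysis can be narrowed to the sensitive categories rather than the full firehose of traffic.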
Vendors who claim “deep packet inspection” for application security skirt the need to understand how the application actually works. Many use metadata (including time of day, user, application, and geolocation) collected from the network, possibly in conjunction with something like an SAP code, to evaluate user requests. They essentially monitor daily traffic to develop an understanding of ‘normal’, then attempt to detect fraud or inappropriate access without understanding the task being requested. This is certainly helpful for compliance and change management use cases, but not particularly effective for fraud or misuse detection, and it tends to generate false positive alerts. Products designed to monitor applications and databases actually understand their targeted application, and provide much more precise detection and enforcement. Building application-specific monitoring tools is difficult and specialized work, but when you understand the application request you can focus your analysis on specific actions – order entry, for example – where insider fraud is most prevalent. This speeds up detection, lessens the burden of data collection, and makes security operations teams’ jobs easier.
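The metadata-baseline approach described above reduces to something like the following sketch: learn which combinations of user, hour of day, and source network are ‘normal’, then flag anything unseen. The field names are hypothetical; note that this flags novelty, not intent – it cannot distinguish a legitimate new task from misuse, which is exactly why it produces false positives:

```python
from collections import defaultdict

# Sketch of metadata baselining: record (hour, source) pairs per user
# during a learning window, then flag requests that fall outside the
# learned set. No knowledge of the application task is involved.
class MetadataBaseline:
    def __init__(self):
        self.seen = defaultdict(set)  # user -> set of (hour, source)

    def learn(self, user, hour, source):
        self.seen[user].add((hour, source))

    def is_anomalous(self, user, hour, source):
        # Anything not observed during learning is 'abnormal' -- including
        # perfectly legitimate new behavior, hence the false positives.
        return (hour, source) not in self.seen[user]

baseline = MetadataBaseline()
baseline.learn("jsmith", 9, "corp-lan")               # learning window
print(baseline.is_anomalous("jsmith", 9, "corp-lan"))  # False: matches baseline
print(baseline.is_anomalous("jsmith", 3, "vpn"))       # True: never seen before
```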
Throughout this research we use the term ‘database’ a lot. Databases provide the core storage, search, and data management features for applications. Every enterprise application relies on a database of some sort. In fact databases are complex applications themselves. To address enterprise application security and compliance you must address many issues and requirements for both the databases and the application platforms.
We seldom see two instances of the same application deployed the same way. They are tailored to each company’s needs, with configuration and user provisioning to support specific requirements. This complicates configuration and vulnerability scanning considerably. What’s more, application and database assessment scans are very different from typical OS and network assessments, requiring different evaluation criteria. The differences lie in both how information is collected, and the depth and breadth of the rule set. All assessment products examine software revision levels, but generic assessment tools stop at listing vulnerabilities and known issues based exclusively on software versions. Understanding an application’s real issues requires a deeper look. For example, test and sample applications often introduce back doors into applications, which attackers then exploit. Software revision level cannot tell you what risks vulnerable modules pose; only a thorough analysis of the full software manifest can do that. Separation of duties between application, database, and IT administrators cannot be determined by scanning a network port or even hooking into LDAP – it requires interrogation of applications and persistent data storage. Network configuration deficiencies, weak passwords, and public accounts are all easily spotted by traditional scanners – provided they have a suitable policy to check – but scanners do not discover data ownership rights, user roles, whether auditing is enabled, unsafe file access rights, or dozens of other well-known issues.
Data collection is the other major difference. Most assessment products offer a basic network port scanner – for cases where agents are inappropriate – to interrogate the application. This provides a quick, non-invasive way to discover basic patch information. Application assessment scanners go further, looking for application-specific settings both on disk and within the database. These scans may be initiated by an agent on the application platform, or from a remote host over SSL/TLS. We call them “credentialed scans” because they require access to the file system, the database, or both – and gathering a complete picture of configuration settings requires collecting from both. This enables application assessment tools to fully address vendor best practices, industry best practices, and any ad hoc security or compliance rules the enterprise wants to validate. Generic assessment tools cover about one-third of the total picture; application and database assessment scanners get 70-100%, depending on how they collect data and the policy set.
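The evaluation step of a credentialed scan can be sketched as a rules engine run over settings gathered from both the file system and the database. The setting names, values, and rules below are hypothetical stand-ins; real policies come from vendor and industry best-practice guides:

```python
# Sketch of a credentialed assessment pass: settings collected from both
# the database catalog and the file system are evaluated against policy
# rules. All names and thresholds here are illustrative, not a real policy.
def run_assessment(settings, rules):
    """Return the names of rules that fail for the collected settings."""
    return [name for name, check in rules if not check(settings)]

collected = {
    "db.audit_trail": "NONE",         # pulled from the database
    "db.sample_schemas": True,        # test/sample apps left installed
    "fs.config_world_readable": True, # pulled from the file system
}

rules = [
    ("auditing enabled",        lambda s: s["db.audit_trail"] != "NONE"),
    ("sample schemas removed",  lambda s: not s["db.sample_schemas"]),
    ("config files restricted", lambda s: not s["fs.config_world_readable"]),
]

print(run_assessment(collected, rules))
# ['auditing enabled', 'sample schemas removed', 'config files restricted']
```

The point of the sketch: none of these three findings is visible from a network port scan or a software version number – each requires credentialed access to the application’s persistent state.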
Application Patch Cycles
If you have an iPhone – or any Apple product, really – you will notice an update to one or more apps every single day. Enterprise applications are the opposite, which is unfortunate because their need is greater and the stakes are much higher. Many of you reading this know your enterprise applications run three to six months behind on security patches. If you are running big Oracle databases or SAP, odds are you are closer to 12 months behind. It’s not that IT is ignoring the problem, or fails to understand that these patches address critical security issues – it’s that the likelihood, and financial impact, of crashing the application is so well understood. Security patches rushed out the door have a bad habit of doing just that. It costs a lot of money to recover from such a failure, and all other IT work stops until the system is back online. The likelihood of an attacker breaching the system is not nearly so clear, and any estimate of potential damage is at best a guess, so a security risk analysis cannot drive organizations to patch quickly. Instead IT does what it has always done: iteratively test the patch installer, then the applications, on a series of test and pre-production systems, until they are satisfied they can safely roll the patch into production.
There are many potential workarounds for this problem, but the traditional approaches are all flawed. Feature removal, reduced Internet connectivity, blocking, manual process intervention, and prayer are all approaches we have heard. The good news is that some firms are speeding up the patching process by leveraging disruptive trends in IT: virtualization and the cloud. Some use “canary testing”, where the load balancer splits production traffic between patched and unpatched servers, with full switchover after the patch is vetted live. Others leverage the cloud or virtualization to spin up two sets of production servers, one patched and one unpatched, and quickly roll back to the unpatched systems in case of failure. These new approaches are not yet widely embraced.
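The canary idea boils down to a weighted traffic split. This sketch shows the routing decision only; the server names and percentages are illustrative, and a real deployment would do this in the load balancer while monitoring error rates on the patched pool:

```python
import random

# Sketch of canary testing: route a small fraction of production requests
# to patched servers, then shift the weight to 100% once the patch is
# vetted live -- or back to 0% to roll back on failure. Server names and
# weights are hypothetical.
def pick_server(canary_weight, patched="app-patched", unpatched="app-v1"):
    """Send canary_weight fraction of traffic to the patched pool."""
    return patched if random.random() < canary_weight else unpatched

canary_weight = 0.05   # start with 5% of live traffic on the patched pool
# ... watch error rates; roll back by setting the weight to 0.0 ...
canary_weight = 1.0    # full switchover once the patch is vetted live
print(pick_server(canary_weight))  # 'app-patched'
```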
Application and Database Logs
As mentioned earlier, database and application logs are typically not designed for security – they are primarily intended to help IT personnel understand performance issues and errors. They often omit important events, including administrative activity, or provide only a subset of the data, such as before-and-after values for a transaction. They often lack filtering options to gather just the information you need – perhaps specific to a user or a transaction type – so you may be drinking from a proverbial firehose. In many cases the log file format can be set to syslog, so SIEM and log management systems can collect the data, but those systems often lack understanding of application-specific event data. The real issue, however, is performance: application logging typically increases platform overhead by 10-20%, and native database logging by 20-40%. This is simply a non-starter for many companies.
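The gap between collecting a syslog line and understanding it looks like this in miniature. A SIEM can parse the syslog envelope, but the application-specific payload needs its own parser before the event means anything; the log line, the `sapaudit` source name, and the key=value payload format here are all hypothetical:

```python
import re

# Sketch: parse the syslog envelope (priority, timestamp, host, app), then
# decode a hypothetical key=value application audit payload that a generic
# SIEM parser would leave as an opaque message string.
SYSLOG = re.compile(
    r"^<(?P<pri>\d+)>(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<app>\S+): (?P<msg>.*)$"
)

def parse_event(line):
    m = SYSLOG.match(line)
    if not m:
        return None
    # Application payload: key=value pairs carrying the actual event.
    payload = dict(kv.split("=", 1) for kv in m.group("msg").split())
    return m.group("app"), payload

line = "<134>Mar  4 10:12:01 erp01 sapaudit: tcode=FB60 user=jsmith client=800"
print(parse_event(line))
# ('sapaudit', {'tcode': 'FB60', 'user': 'jsmith', 'client': '800'})
```

Even parsed, the event is only useful if something downstream knows what `tcode=FB60` means – which brings us back to the application-knowledge gap above.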
If you are running SAP or Oracle enterprise applications, we can be confident you have many security tools at your disposal. Most vendors offer a combination of basic logging, identity, and encryption services to go along with published security best practices. But even vendor tools fail to address some of these deficiencies. In many cases the provided solutions were never designed for security at all, being intended to highlight errors or performance issues. To effectively monitor, assess, and audit enterprise applications you will likely need to either build your own tools or leverage third-party products to supplement what you already have. Platform vendors know how to collect the correct information from their platforms, but gear their solutions to experts on those systems: system administrators. Auditors, security professionals, and even IT administrators often lack the technical depth to leverage these tools. And as we mentioned earlier, the “best practices” vendors provide leave out a lot of helpful information, and do not recommend tools or services unavailable from the vendor.
Our next post will discuss how to assemble a complete program.