This post discusses technical facets of RASP products: how the technology works, how it integrates into an application environment, and the advantages and disadvantages of each approach. We will also spend some time on which application platforms are supported today, as this is one area where every provider is limited and working to expand, so it will impact your selection process. Finally, we will consider a couple of aspects of RASP technology which we expect to evolve over the next couple of years.
Integration
RASP works at the application layer, so each product needs to integrate with applications somehow. To monitor application requests and make sense of them, a RASP solution must have access to incoming calls. There are several methods for monitoring either application usage (calls) or execution (runtime); each is deployed slightly differently and gathers a slightly different picture of how the application functions. Solutions are either installed into the code production path or monitor execution at runtime. To block malicious requests and protect applications, a RASP solution must be inline.
- Servlet Filters & Plugins: Some RASP platforms are implemented as web server plug-ins or Java Servlets, typically installed into either Apache Tomcat or Microsoft .NET to process inbound HTTP requests. Plugins filter requests before they reach application code, applying detection rules to each inbound request received. Requests that match known attack signatures are blocked. This is a relatively simple approach for retrofitting protection into the application environment, and can be effective at blocking malicious requests, but it doesn’t offer the in-depth application mapping possible with other types of integration.
- Library/JVM Replacement: Some RASP products are installed by replacing the standard application libraries, JAR files, or even the Java Virtual Machine. This method basically hijacks calls to the underlying platform, whether library calls or the operating system. The RASP platform passively ‘sees’ application calls to supporting functions, applying rules as requests are intercepted. Under this model the RASP tool has a comprehensive view of application code paths and system calls, and can even learn state machine or sequence behaviors. The deeper analysis provides context, allowing for more granular detection rules.
- Virtualization or Replication: This integration effectively creates a replica of an application, usually as either a virtualized container or a cloud instance, and instruments application behavior at runtime. By monitoring – and essentially learning – application code pathways, all dynamic or non-static code is mimicked in the cloud. Learning and detection take place in this copy. As with replacement, application paths, request structure, parameters, and I/O behaviors can be ‘learned’. Once learning is complete rules are applied to application requests, and malicious or malformed requests are blocked.
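The plug-in filtering model described in the first bullet above can be sketched in a few lines of Python. This is an illustrative sketch only; the signature patterns and function names are invented for the example, not taken from any product:

```python
import re

# Illustrative attack-signature rules; real products ship far larger rule sets.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),   # SQL injection probe
    re.compile(r"(?i)<script[\s>]"),     # reflected XSS attempt
    re.compile(r"\.\./"),                # path traversal
]

def filter_request(path, query_string):
    """Return True if the request should be blocked.

    Mirrors the plug-in model: each inbound request is matched against
    known-bad patterns before any application code runs.
    """
    payload = path + "?" + query_string
    return any(sig.search(payload) for sig in SIGNATURES)

# A blocked request never reaches the application handler.
print(filter_request("/search", "q=1 UNION SELECT password FROM users"))  # True
print(filter_request("/search", "q=laptops"))                             # False
```

This also illustrates the limitation noted above: the filter sees only the HTTP request, with no visibility into what the application does with it.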
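The library-replacement model amounts to wrapping platform calls so the RASP layer inspects arguments before the real function runs. A minimal sketch with a toy rule and invented names:

```python
import functools

# Toy stand-in for a platform or library function the application calls.
def run_query(sql):
    return f"executed: {sql}"

def instrument(fn, rules):
    """Wrap a platform call so the RASP layer sees every invocation,
    applying detection rules to the arguments before delegating."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        for rule in rules:
            if rule(args, kwargs):
                raise PermissionError("blocked by RASP rule")
        return fn(*args, **kwargs)
    return wrapper

# Replace the original call, as a library-substitution RASP would.
run_query = instrument(run_query, [lambda args, kwargs: "--" in args[0]])

print(run_query("SELECT * FROM items"))   # passes through to the real call
# run_query("SELECT * FROM items --")     # would raise PermissionError
```

Because every call to the underlying platform flows through the wrapper, this model sees code paths and system interactions that an HTTP-level filter never does.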
Language Support
The biggest divide between RASP providers today is their platform support. For each vendor we spoke with during our research, language support was a large part of their product roadmap. Most provide full support for Java; beyond that support is hit and miss. .NET support is increasingly common. Some vendors support Python, PHP, Node.js, and Ruby as well. If your application doesn’t run on Java you will need to discuss platform support with vendors. Within the next year or two we expect this issue to largely go away, but for now it is a key decision factor.
Deployment Models
Most RASP products are deployed as software, within an application software stack. These products work equally well on-premises and in cloud environments. Some solutions operate fully in a cloud replica of the application, as in the virtualization and replication models mentioned above. Still others leverage a cloud component, essentially sending data from an application instance to a cloud service for request filtering. What generally doesn't happen is dropping an appliance into a rack, or spinning up a virtual machine and re-routing network traffic.
Detection Rules
During our interviews with vendors it became clear that most are still focused on negative security: detecting known malicious behavior patterns. These vendors research and develop attack signatures for customers. Each signature explicitly describes one attack, such as SQL injection or a buffer overflow. For example, most products include policies focused on the OWASP Top Ten critical web application vulnerabilities, commonly with multiple policies to detect variations of each threat vector, which makes the rules harder for attackers to evade. Many platforms also include rules for specific Common Vulnerabilities and Exposures (CVEs), providing the RASP platform with signatures to block known exploits.
Active vs. Passive Learning
Most RASP platforms learn about the application they are protecting. In some cases this helps to refine detection rules, adapting generic rules to match specific application requests. In other cases this adds fraud detection capabilities, as the RASP learns to ‘understand’ application state or recognize an appropriate set of steps within the application. Understanding state is a prerequisite for detecting business logic attacks and multi-part transactions. Other RASP vendors are just starting to leverage a positive (whitelisting) security model. These RASP solutions learn how API calls are exercised or what certain lines of code should look like, and block unknown patterns.
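The positive (whitelisting) model reduces to membership in a learned set of allowed request shapes. A toy sketch, assuming a training phase has already produced the whitelist; all endpoint and parameter names are illustrative:

```python
# Learned during a training phase: the set of API call shapes the
# application actually uses (method, endpoint, and parameter names).
ALLOWED = {
    ("GET",  "/items",  frozenset({"page"})),
    ("POST", "/orders", frozenset({"item_id", "qty"})),
}

def check(method, path, params):
    """Positive security model: anything not seen in training is blocked."""
    return (method, path, frozenset(params)) in ALLOWED

print(check("GET", "/items", ["page"]))           # True: known pattern
print(check("GET", "/items", ["page", "debug"]))  # False: unexpected parameter
```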
To do more than filter known attacks, a RASP tool needs to build a baseline of application behaviors which reflects how the application is supposed to work. There are two approaches: passive and active learning. A passive approach builds a behavioral profile as users exercise the application. By monitoring application requests over time, cataloging each request, linking the progression of requests to understand valid sequences of events, and logging request parameters, a RASP system can recognize normal usage. The other baselining approach is similar to what Dynamic Application Security Testing (DAST) platforms use: by crawling through all available code paths, the scope of application features can be mapped. By generating traffic to exercise new code as it is deployed, application code paths can be synthetically enumerated to produce a complete mapping predictably and more quickly.
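Passive learning can be sketched as a transition table built from observed sessions: once training ends, request sequences never seen before are flagged rather than learned. The structure and path names below are illustrative only:

```python
from collections import defaultdict

class PassiveBaseline:
    """Builds a behavioral profile from observed traffic: which request
    may legitimately follow which within a session."""

    def __init__(self):
        self.transitions = defaultdict(set)

    def observe(self, session):
        # Record each consecutive pair of requests as a valid transition.
        for prev, nxt in zip(session, session[1:]):
            self.transitions[prev].add(nxt)

    def is_normal(self, prev, nxt):
        # After training, anything outside the learned profile is suspect.
        return nxt in self.transitions[prev]

baseline = PassiveBaseline()
baseline.observe(["/login", "/cart", "/checkout"])
baseline.observe(["/login", "/browse", "/cart"])

print(baseline.is_normal("/login", "/cart"))  # True: seen in training
print(baseline.is_normal("/cart", "/login"))  # False: never observed
```

An active (DAST-style) learner would fill the same table by crawling the application itself instead of waiting for user traffic.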
Note that RASP’s positive security capabilities are nascent. We see threat intelligence and machine learning as natural fits for RASP, but these capabilities have not yet fully arrived. Compared to competing platforms, RASP products lack maturity and functionality. But RASP is still relatively new, and we expect the gaps to close over time. On the bright side, RASP addresses application security use cases which competing technologies cannot.
We have done our best to provide a detailed look at RASP technology, both to help you understand how it works and to differentiate it from other security products which sound similar. If you have questions, or some aspect of this technology is confusing, please comment below, and we will work to address your questions. A wide variety of platforms – including cloud WAF, signal intelligence, attribute-based fraud detection, malware detection, and network oriented intelligence services – all market value propositions which overlap with RASP. But unless the product can work in the application layer, it’s not RASP.
Next we will discuss emerging use cases, and why firms are looking for alternatives to what they have today.
Reader interactions
9 Replies to “Understanding and Selecting RASP: Technology Overview”
Thanks everyone for the comments. This was a difficult post to write, as concise explanations that capture unique properties are difficult. The feedback will help close the gaps. I have a few comments and questions:
@Arshan – Your first point: ‘impedance mismatch’ captures the issue, but what do we _call_ the positive response to it? I used to simply bundle this under ‘contextual analysis’, but what you describe is subtly different. I do not believe it falls under input validation, and de-obfuscation is an unwieldy term in this case. Is it external entity resolution?
I’ve mentioned contextual analysis in a couple places in the paper but I see I need to flesh that portion out a bit more as well.
@Raphael – I’ll add instrumentation to the list and cover the limitations of servlet filters. I do, however, disagree on virtualization; the implementations of this approach may be limited, but the ability to monitor all memory structures, API calls, and system calls does not limit access to ‘lower layers’. That said, this approach is only used by one vendor I am aware of, so it’s not fully fleshed out.
And good catch on versions! Most points I touch on in the series but based on the way you’re asking the questions I will flesh these out as well.
@Mike – Coverage is a good point, and I think what you are saying is that you’re covering the app in many places, and able to apply rules depending upon where in the code the sensor is placed. I should expand the section on contextual analysis to address this coverage. Also, several have mentioned performance, and my plan was to cover it in the buyer’s guide. I talk about scalability models, but I’ve yet to see published performance numbers, or even criteria upon which to base performance. And no offense, but I don’t trust vendor-published numbers – something about me working for Oracle for a number of years, fairness, apples-to-apples comparisons, yadda-yadda. I’ll circle back with the handful of customers I’ve spoken with to see if I can gather enough information to create some form of yardstick for evaluation, but it’s not clear to me what that entails at this time.
1. Integration – really glad to see that you are breaking down different methods.
a. Servlet filters only give access to HTTP (front door)
b. Replacement only gives access to platform calls (floor)
c. Virtualization only gives access to lower layer calls (floor)
d. ?? it is possible to hook the backend (back door)
Would like to see instrumentation added to the list! We think the most powerful approach to integration is runtime instrumentation of the entire application stack (runtime, server, frameworks, libraries, and custom code) with sensors. Instrumentation covers the whole house, and provides access to basically everything. Huge information advantage.
2. Language support – would also love to see you go beyond simple language checkbox. Supporting a language is just step one. The hard thing is to verify that your solution works on all the possible combinations of runtime platform (all vendors and versions), application server (all versions), and application frameworks (many hundreds of combinations).
3. Language support – Also, do you support APIs and web services? There are numerous challenges in handling XML and REST requests. Increasingly, modern development, even for normal websites, means Angular in the browser and REST APIs on the backend.
4. Detection Rules – I’d like to distinguish “signatures” from “negative security”. Just as it’s possible to have positive signatures, it’s also possible to do “negative security” without signatures. I think the key characteristic to look for here is how easily and powerfully the RASP allows you to model the expected or prohibited application behavior. A simple abstract definition of the prohibited behavior is not a signature (to me) and can be quite effective. Modeling expected (positive) behavior is naturally stronger, but cannot be done in the abstract – it almost always requires a specific application.
5. Learning – the missing approach is that organizations can specify the way applications are supposed to work. Passive and active learning are always noisy – which leads to broken applications and the need for experts.
Some additional topics to add…
6. Installation – how many pieces are there? Do you have to train RASP to the application?
7. Performance – this is critical to an effective RASP solution. Is your vendor open about their performance numbers, backed with some science?
8. How is blocking actually implemented? Implemented wrong, blocking an attack with RASP can break applications or violate integrity constraints by breaking transactions.
9. How is policy controlled across the entire application portfolio?
10. How is attack data shared with SIEM?
11. Scalability—Does the solution require a centralized decision-making component, or is all the work performed within the application itself (i.e., is there a bottleneck)?
Another interesting topic is how RASP agents and consoles are updated. This applies to the software, rules, and databases. How quickly can new rules be pushed out across the application portfolio?
Hey Adrian, great post – looking forward to the rest of the series!
Regarding signatures, they are useful to fingerprint scanners and common attack tools, but IMMUNIO uses dedicated algorithms for each attack type we protect against, and these algorithms are much tougher to bypass than signatures. In our case, we know what the SQL statement emitted from a line of code should look like. An injection would alter that.
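The idea of knowing what a statement "should look like" can be sketched by reducing SQL to a structural skeleton, so that changed user input still matches but an injected clause does not. This is a toy illustration of the general approach, not IMMUNIO's actual algorithm:

```python
import re

def shape(sql):
    """Reduce a SQL statement to its structural skeleton by normalizing
    string and numeric literals. Different user input yields the same
    skeleton; an injected clause changes it."""
    sql = re.sub(r"'[^']*'", "?", sql)   # normalize string literals
    sql = re.sub(r"\b\d+\b", "?", sql)   # normalize numeric literals
    return sql

# The skeleton expected for this line of code, captured at runtime.
expected = shape("SELECT * FROM users WHERE name = 'alice'")

print(shape("SELECT * FROM users WHERE name = 'bob'") == expected)          # True
print(shape("SELECT * FROM users WHERE name = '' OR '1'='1'") == expected)  # False
```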
As @TJ points out, using negative signatures doesn’t buy much beyond traditional WAF technology – you still need to know the attack vector in order to create a signature – you’re still playing catchup with the attackers.
We discuss how IMMUNIO compares to WAFs at https://www.immun.io/runtime-application-self-protection
The real promise of RASP technology is coverage. Within the application you have much more context about what the application is doing, which allows protection from a broad range of attacks with almost no integration effort. IMMUNIO has deep integration from the raw request right down to the OS and DB layer. Stopping XSS and SQLi is awesome, but when you also cut off remote command execution, arbitrary file access, credential stuffing, brute force, session stealing, etc. you make it seriously difficult for an attacker.
Coverage applies to platform support as well. Most organizations we talk to at IMMUNIO want to use the same tech across their portfolio. It’s rare these days to find anyone who’s 100% Java. We’re seeing Python, Ruby, and NodeJS all over the place.
Of course all this protection doesn’t come for free. As @Raphael mentions, it would be great to get an idea how vendors think about performance. We spend a lot of effort at IMMUNIO measuring and improving performance. It’s the number 2 priority after Don’t break the customer’s app.
We think WAFs were a good first-generation technology for stopping known attack patterns (signature-based). These days, however, with web applications a major attack vector, a different technology stack (microservices, WebSockets, etc.), and more sophisticated attackers and attacker tooling, RASP will emerge as a clearly better alternative: more effective, more accurate, harder to evade, and easier to deploy and manage.
@TJ, @Adrian—RE: signaturing, I’d say your comments are almost correct.
As a RASP vendor, we use the best possible techniques to detect and prevent attacks — and those techniques vary for every type of attack. Some of our rules use pattern matching as a part of the evaluation process. Many don’t.
Consider XML external entity injection (XXE). Sure, we could scan the input and try to pattern-match for external entities in a user’s submitted XML. However, it’s much more accurate to simply put sensors into the code that resolves external entities, something that never happens normally. Before Contrast, this type of sensor-based application self-protection was impossible.
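The sensor-at-the-sink idea described here can be sketched generically: place a hook on the code path that resolves external entities, and treat any call to it as an attack. The function names below are invented for illustration; this is not Contrast's implementation:

```python
# Toy stand-in for a platform's entity-resolution routine.
def resolve_entity(system_id):
    return f"<contents of {system_id}>"

events = []

def sensored_resolve_entity(system_id):
    """Sensor placed on the code path that resolves external entities.
    Normal documents never reach this path, so any call is suspect."""
    events.append(system_id)  # report the attack
    raise PermissionError(f"external entity resolution blocked: {system_id}")

# The RASP layer swaps the sensor in for the real routine.
resolve_entity = sensored_resolve_entity

try:
    resolve_entity("file:///etc/passwd")
except PermissionError as err:
    print(err)  # the attack is detected and stopped at the exact sink
```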
Many times, signaturing is the right choice. For instance, the Mark of the Beast flaw (CVE-2010-4476) which instantly takes down legacy Java applications, is incredibly easy to signature. We use signaturing to see if a user attempted to perform a Mark of the Beast attack, and sensors to make sure the attack isn’t successful.
Most of the concern around signaturing for attacks results from two unfortunate truths about previous-generation technology.
First—the dreaded impedance mismatch. The “thing” that’s looking for attacks is often far away, architecturally, from where the exploit is activated. So far away, in fact, that various encoding, canonicalization, and other language-specific issues can prevent the attack detector from seeing the attack in the same light as the vulnerable application. Because Contrast is checking from inside the application, on the same exact object in memory that would be used in the exploit, an impedance mismatch is practically impossible. The same isn’t true for a web application firewall, which is written in a different language than the app and architecturally separated from it by multiple protocol layers.
Second—the lack of context. Until now, there wasn’t a way to safely handle the “in between” data — the data that’s not clearly an attack, but could be under the right circumstances. Perimeter-oriented technologies, like WAFs, have to decide how dangerous the data is before handing it to the app — which is impossible to do correctly, because they have no idea how the app will handle that data. Contrast has sensors in the app, its libraries, and the server it’s running on — we can make sure the suspicious data isn’t used unsafely, and let our users know if it was.
It’s true that WAFs have a ten year head start on signaturing maturity. My counterpoint is that given our context in the application, this gap won’t be difficult to bridge.
@Joe, @Adrian—those are indeed two of my goals as a RASP vendor.
1) We’re just a part of your application, so yes, we’re much easier to package and deploy. We can go in the cloud, a data center, a Docker container, Droplet, EC2 node, Raspberry Pi—anywhere.
It was frustrating to us early in the life of Contrast Security that we couldn’t run with our WAF in DEV and QA environments—so there were always surprises (RE: problems) when we got to PROD. Contrast can be part of your app from the beginning, so you gain assurance your app will “just work” in production like it did in earlier environments.
2) Just like APM tools put application developers on the “front line” of application operations, RASP can put app developers on the “front line” of application security.
@Joe – thanks. Yes, I think on the operations side this will make things easier. I talk more about that in the upcoming post on Use Cases. And the ability to aid developers by pinpointing which modules—in some cases which lines of code—are vulnerable is a big win. So yes, I think RASP appeals to the developers and (agile) operations teams and less to traditional security buyers. For these folks, why would they look beyond the investments they already have? I don’t think security sees the need for APIs and runtime security analytics.
@TJ – Signatures for SQLi will be just as effective/ineffective with custom apps as that part remains consistent, but less effective for other threat vectors. I’ve not pen tested a RASP so I don’t know if WAF evasion techniques will work with RASP – they may. I think RASP, not working at the HTTP layer, has a small advantage given all requests are fully unpacked. I also think the positive security approaches RASP vendors use will be effective.
All that said I can’t say these will be more or less effective than WAF on the whole, especially given WAF has a ten year head start in maturity. The real advantages are about application instrumentation, automation through APIs, better coupling with development defect tracking.
-Adrian
Hey Adrian, really great post. The best I’ve seen on RASP. As for the direction of the technology: negative security models (signatures) in the WAF world have been proven ineffective when it comes to custom apps. Where do you see the difference with RASP?
Great post, Adrian. I’m looking forward to this mini-series on RASP.
RASP appears to be very ‘portable’, since it becomes part of the running application server and doesn’t require other operational changes. Do you think this deployment model will make adoption of operational security easier?
Secondly, RASP has visibility into the runtime data flow – a key difference when compared to other security technologies. Do you think this pushes the ownership of application protection technologies from the security team -> development team?