Wow. It’s 2008. How did that happen?!?
When I was younger I couldn’t wait for the future. What geek can? We all grew up on entirely too much science fiction, far more of which is now reality than I expected (other than the space program; hello? NASA? Anyone home?). Now that I’m getting older I realize that while the future is great in concept, the reality is that eventually I won’t be around for it anymore. Every year is a smaller fraction of life, and thus every year passes relatively more quickly.
Aw hell, I’m far too young to be thinking about garbage like this.
As 2007 closed many of us pundit types devoted our time to looking at current trends and predicting the next few years. If you’ve been following me and Hoff at all, you also know some of us are always thinking about how we can do security differently. Not that we’re doing it “wrong” today, but if you don’t look for what’s next you’ll always be playing from behind.
One big trend I’ve been seeing is the shift towards anti-exploitation technologies. For those who don’t know, anti-exploitation is where you build in defenses to operating systems and platforms so that when there is a vulnerability (and there will be a vulnerability), it is difficult or impossible to exploit. Java was my first introduction to the concept at the application level (sandboxing), and Vista at the operating system level.
There’s no single anti-exploitation technology, but a bunch of techniques and features that work together to make exploitation more difficult. Things like ASLR (address space layout randomization), sandboxing, and DEP (data execution prevention).
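To make that a little more concrete, here’s a minimal sketch of my own (not from any particular product) showing what ASLR looks like from a program’s point of view. Compile it on a modern OS, ideally as a position-independent executable (e.g. gcc -fPIE -pie on Linux), and run it a few times:

```c
/* aslr_demo.c - print a few addresses and compare across runs.
 * With ASLR enabled, the stack and heap locations (and, for a PIE
 * binary, the code segment too) change every execution; with ASLR
 * disabled they stay put. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int   stack_var;               /* lives on the stack */
    void *heap_var = malloc(16);   /* lives on the heap  */

    printf("stack: %p\n", (void *)&stack_var);
    printf("heap : %p\n", heap_var);
    printf("code : %p\n", (void *)main);  /* randomized only for PIE builds */

    free(heap_var);
    return 0;
}
```

Run it two or three times; if the addresses move between runs, the attacker can no longer hard-code where to jump, which is the whole point.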
Most of the anti-exploitation focus today is on operating systems, but conceptually it can be applied anywhere. One of my big concepts in Application and Database Monitoring and Protection (ADMP) is building anti-exploitation into business and (especially) web applications. I’ve even converted from credit monitoring to credit protection (via Debix) for anti-exploitation against identity theft.
There was a lot of focus in 2007 on vulnerability scanning and secure coding. While important, those alone can never solve the problem. The bad guys will always find some vulnerabilities before we do. Our programmers will always make exploitable mistakes, no matter how much we hammer them with training and the code with tools.
When designing security controls we must assume vulnerabilities will exist and we won’t always identify and mitigate them before they are discovered by attackers.
Not that anti-exploitation is some mystical perfect remedy; it too will fail, but the goal is for it to fail slowly enough that we are able to discover, detect, and mitigate vulnerabilities before they are exploited.
You’ll be hearing a lot more about anti-exploitation at all levels of the industry over the next few years, especially as we start seeing it outside of operating systems. It’s the one thing that gets me jazzed that we might be able to get a leg up on the attackers.
7 Replies to “It’s Time To Move Past Vulnerability Scanning To Anti-Exploitation”
[…] F5 to provide specific vulnerability data from a web application to the F5 WAF. Fortify is moving down the anti-exploitation path with real-time blocking (and other actions) directly on the web application server. Imperva is […]
@dre- you also win the award for the best comment of the year. You should go paste that on your blog, really good stuff.
@dre- I think it wasn’t proven enough in ’07: few developers use ASLR and DEP in applications, and some operating system vendors, like Apple, haven’t implemented them properly. Never mind that we don’t see it in web apps yet.
I don’t see this as being in conflict with software assurance and secure coding; they play together, but we need to recognize that just as anti-exploitation is not perfect, secure coding alone will never solve the problem.
We need both.
Most people refer to “anti-exploitation” as “exploitation countermeasures”.
In TCSEC, this is called “trusted path”, which is most commonly known to the world as “sandboxing”. Almost every security professional is familiar with the Java sandbox.
Is it defense-in-depth? I don’t think that’s a valid question. Functionality can be layered with DiD, yes. But ASLR can be used all by itself with no firewalls, anti-virus, HIPS, HIMS, forensic agents, etc. HIPS sometimes is ASLR, but unfortunately often uses user hooking. Real ASLR can’t be turned off.
I think that RBAC and MAC authorization frameworks are a part of this “functionality / exploitation countermeasure” near-future we speak of (instead of the usual DAC controls). SELinux (only really useful if compiled into the kernel like GRSecurity with LKMs off) has some pretty powerful mojo in this particular area. Not that PaX/GRSecurity don’t… I mean… PaX is an ASLR implementation for Linux that is built into the kernel. Why doesn’t every distro include this? I feel the same way about DieHard-Software.Org
There are still plenty of machines that are incapable of NOEXEC in hardware because they lack NX or XD-bit support. Namely, the “Big Intel Screwup” where Intel made Centrino laptops that do not have an XD-bit. Thanks to the Uninformed Journal we now have open-source software such as the ImmunitySec Debugger, which includes the ability to break software and hardware DEP. Safe exception handlers (SEH) are no different. Even Vista and Visual Studio 2005 have some issues with their exploitation countermeasures… but hey, it’s better than being on the Mac OS X platform, which has no known exploitation countermeasures that work. Apple’s ASLR is a marketing checkbox followed by the world’s worst broken implementation ever.
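To illustrate what hardware DEP/NX actually enforces, here is a minimal Linux/x86-64 sketch (illustrative only, not taken from any of the tools above). It drops a one-byte “ret” stub into a page mapped without PROT_EXEC and tries to call it; on a CPU with the NX/XD bit and an OS that uses it, the call faults, while on XD-less parts (or with DEP disabled) it quietly “succeeds”:

```c
/* nx_demo.c - a minimal sketch of what hardware DEP/NX enforces.
 * We place a tiny x86-64 stub (just a RET instruction) in a page
 * mapped WITHOUT PROT_EXEC and try to call it. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    unsigned char ret_stub[] = { 0xc3 };   /* x86-64 "ret" */

    /* Readable and writable, but deliberately NOT executable. */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(page, ret_stub, sizeof ret_stub);

    void (*fn)(void) = (void (*)(void))page;
    puts("calling into a non-executable page...");
    fn();                                   /* expected: SIGSEGV under DEP/NX */
    puts("no fault - this CPU/OS is not enforcing no-execute");
    return 0;
}
```

If it dies with a segmentation fault on the call, that crash is the countermeasure doing its job: injected data never gets to run as code.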
So, you have Apple and Intel to thank for dwarfing exploitation countermeasures over the past 12 years. You have sysadmins and OS distributors to thank for doing really stupid things, namely:
1) Not using chroot (see the sketch after this list)
2) Turning SELinux off
3) Using DAC models instead of MAC or RBAC
4) Allowing log files to be read/write instead of append-only
5) Running as root, ever, for anything
6) Not checking valid PKI (aka not MD5 aka not SHA) signatures based on RSA, DSA, or El Gamal for all installed software
7) Not running GRSecurity and PaX
8) Allowing Linus to conquer SELinux with LSM
9) SELinux as LKM by default and not taking into account kernel protection issues
10) Not restricting system calls with things like systrace
11) Mounting important directories without noexec,nosuid,nodev
12) Installing/maintaining any Operating Systems that aren’t Linux, NetBSD, OpenBSD, Windows 2003 Server (64-bit Edition only) or Windows Vista
13) Compiling code or installing a compiler that isn’t the best GCC or Visual Studio with SSP/ProPolice, LibSafe, et al
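As promised above, here is a minimal sketch of items 1 and 5 (confining a process with chroot and permanently dropping root), assuming a pre-created empty jail directory and an unprivileged “nobody”-style account, both of which are illustrative:

```c
/* jail_demo.c - confine a process to a chroot jail and drop root
 * privileges before doing any real work.  Paths and IDs are
 * illustrative; a real daemon would look them up at runtime. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    const char *jail = "/var/empty";   /* hypothetical, pre-created jail dir */
    uid_t unpriv_uid = 65534;          /* e.g. "nobody" - illustrative       */
    gid_t unpriv_gid = 65534;

    /* chroot() requires root, so do it first... */
    if (chroot(jail) != 0 || chdir("/") != 0) {
        perror("chroot/chdir");
        return 1;
    }

    /* ...then permanently give up root: group first, then user. */
    if (setgid(unpriv_gid) != 0 || setuid(unpriv_uid) != 0) {
        perror("setgid/setuid");
        return 1;
    }

    /* From here on, a compromise of this process is confined to the
     * jail and to an unprivileged account. */
    printf("running as uid %d inside %s\n", (int)getuid(), jail);
    return 0;
}
```

Run it as root (chroot() is a privileged call, so it will fail otherwise); once setuid() succeeds there is no way back to root, which is exactly the point.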
This stuff has been around for over 12 years. It’s just taking its sweet time to get integrated. Think of all the money you’ve wasted on firewalls, AV, and IDS. Now start to re-think DiD.
I’d agree that anti-exploitation techniques are a good idea (following the defence-in-depth principle) and that increased focus on them is worthwhile, but I think that focus also has to remain on secure coding.
A couple of reasons for this. Firstly, the software industry as a whole hasn’t (in my opinion) really got the message yet. What we’ve started seeing over the last year or two is a shift away from operating system attacks (as Microsoft has improved their software security practices) and towards attacks on user-space applications. However, I don’t think that’s played out yet, and increasing numbers of application-level attacks will keep focus on software vendors’ security practices…
The other reason is that almost all anti-exploitation techniques rely on software… if that software isn’t securely coded then those controls can be worthless (or indeed an active hazard, as people assume that they’re providing protection which they are not). Ultimately, until most software vendors are using secure coding techniques/practices as a matter of course, I think the focus needs to remain there.
Is “anti-exploitation” not just an alternative name for “defense in depth”?
TCSEC (Orange Book) differentiates what you call “secure coding” vs. “anti-exploitation” as “assurance” vs. “functionality”. The concepts aren’t new; they are just new to some people. Especially people who have been sold on the firewall, AV, IDS, IPS, UTM, NAC upgrade-path / hamster-wheels of pain.
However, I disagree with your vision of 2008. Functionality such as ASLR was introduced/tested/proven in 2007. In 2008, we’ll bring in equivalent ideas for software assurance. Instead of secure coding practices, we’ll have software assurance measurements (e.g. five-star rating systems based on code coverage), audit standards for software assurance (e.g. PA-DSS), and evolutions to PABP, the Microsoft SDL, OWASP CLASP, Cigital’s “Touchpoints”, etc. My CPSL and “why crawling / pen-testing doesn’t matter” threads are my way of shooting the starter-gun for this software assurance race.