Thanks to an anonymous reader, we may have some additional information on how the Heartland breach occurred. Keep in mind that this isn’t fully validated information, but it does correlate with other information we’ve received, including public statements by Heartland officials.
On Monday we correlated the Heartland breach with a joint FBI/USSS bulletin that contained some in-depth details on the probable attack methodology. In public statements (and private rumors) it’s come out that Heartland was likely breached via a regular corporate system, and that hole was then leveraged to cross over to the better-protected transaction network.
According to our source, this is exactly what happened. SQL injection was used to compromise a system outside the transaction processing network segment. The attackers used that toehold to start compromising vulnerable systems, including workstations. One of these internal workstations was connected by VPN to the transaction processing datacenter, which gave the attackers access to the sensitive information. These details were provided in a private meeting held by Heartland in Florida to discuss the breach with other members of the payment industry.
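We don’t know what the vulnerable application actually looked like, but the root cause of nearly every SQL injection is the same: queries built by gluing untrusted input into SQL text. Here’s a minimal sketch of the bad pattern versus the parameterized fix – Python and sqlite3 are used purely for illustration, and the table and values are hypothetical:

```python
import sqlite3

# Stand-in database purely for illustration; any DB-API connection works the same way.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE merchants (name TEXT)")
cur.execute("INSERT INTO merchants VALUES ('example-merchant')")

user_input = "x' OR '1'='1"  # attacker-controlled value from a web form

# VULNERABLE: the input is spliced into the SQL text, so the attacker rewrites
# the query -- and on some platforms can stack commands or reach procedures
# like xp_cmdshell.
cur.execute("SELECT * FROM merchants WHERE name = '%s'" % user_input)
print("vulnerable query returned:", cur.fetchall())   # returns every row

# SAFER: the input is passed as a bound parameter, so the engine treats it
# strictly as data, never as SQL syntax.
cur.execute("SELECT * FROM merchants WHERE name = ?", (user_input,))
print("parameterized query returned:", cur.fetchall())  # returns nothing
```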
As with the SQL injection itself, we’ve seen these kinds of VPN problems before. The first NAC products I ever saw were for remote access – to help reduce the number of worms/viruses coming in from remote systems.
I’m not going to claim there’s an easy fix (okay, there is, patch your friggin’ systems), but here are the lessons we can learn from this breach:
- The PCI assessment likely focused on the transaction systems, network, and datacenter. With so many potential remote access paths, we can’t rely on external hardening alone to prevent breaches. For the record, I also consider this one of the top SCADA problems.
- Patch and vulnerability management is key – for the bad guys to exploit the VPN-connected system, something had to be vulnerable (the exception being social engineering a system ‘owner’ into installing the malware manually).
- We can’t slack on vulnerability management – time after time this turns out to be the way the bad guys take control once they’ve busted through the front door with SQL injection. You need an ongoing, continuous patch and vulnerability management program. This is in every freaking security checklist out there, and is more important than firewalls, application security, or pretty much anything else.
- The bad guys will take the time to map out your network. Once they start owning systems, unless your transaction processing is absolutely isolated, odds are they’ll find a way to cross network lines.
- Don’t assume non-sensitive systems aren’t targets, especially if they are externally accessible.
Okay – when you get down to it, all five of those points are practically the same thing.
Here’s what I’d recommend:
- Vulnerability scan everything. I mean everything – your entire public and private IP space (a rough sketch of what that sweep might look like follows this list).
- Focus on security patch management – seriously, do we need any more evidence that this is the single most important IT security function?
- Minimize sensitive data use and apply heavy egress filtering on the transaction network, including some form of DLP. Egress filter any remote access too, since that basically blows holes through any perimeter you might think you have.
- Someone will SQL inject any public-facing system, and some of the internal ones. You’d better be testing and securing any low-value, public-facing system, since the bad guys will use that to get inside and go after the high-value ones. Vulnerability assessments are more than merely checking patch levels.
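To make that first recommendation concrete, here’s a deliberately simplified sketch of what “scan everything” could look like – it just walks a list of address blocks and hands each one to nmap for service discovery. The ranges and output directory are made up, and in practice you’d feed the results into whatever vulnerability management tooling you already run (Qualys, Nessus, etc.).

```python
import subprocess
from pathlib import Path

# Hypothetical address space -- replace with your real public and private ranges,
# including the lab, guest, and VPN segments everyone forgets about.
RANGES = [
    "203.0.113.0/24",   # example "public" block
    "10.10.0.0/16",     # example internal block
    "192.168.0.0/16",   # example office/lab block
]

OUTPUT_DIR = Path("scan-results")
OUTPUT_DIR.mkdir(exist_ok=True)

for cidr in RANGES:
    out_file = OUTPUT_DIR / (cidr.replace("/", "_") + ".xml")
    # Service/version detection against every responding host in the range,
    # written out as XML so the results can be diffed run over run.
    subprocess.run(
        ["nmap", "-sV", "--open", "-oX", str(out_file), cidr],
        check=True,
    )
    print(f"finished {cidr} -> {out_file}")
```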
10 Replies to “New Details, and Lessons, on Heartland Breach”
Patch management is so critically important. But Rich, as you know from your work on Project Quant, many, many companies are still not taking it seriously…
Sad, and it will have very serious consequences…
This reminds me of the attitude in many IT departments that client firewalls are unnecessary, as is encryption for non-portable equipment. We need to start understanding that all systems should be treated as hostile; unfortunately, this includes corporate-owned (or pwned?) equipment inside the perimeter. Until we have this shift in thinking, we will keep reading about these events.
Chet Wisniewski
http://www.sophos.com
I believe it is certainly possible a database server on the internal, trusted network was used by a web server in a DMZ. Or maybe the whole thing was an internal app.
I mean, it’s expensive to keep adding interfaces to your ASAs (or firewall du jour) just to segment a few systems. At least, that’s how pretty much everyone looks at it. Many shops keep externally-facing systems in the DMZ, and databases with everything else in the internal network. If you’re lucky, you get some VLANs to add some separation…
And you’re right, Rich, SQLi should be scanned for regardless. I agree, and was just kinda continuing the thought with my own musings. 🙂 I don’t think many people realize they should/could scan purchased products in addition to home-grown ones. Shit, hackers do it all the freakin’ time!
This is how they did it:
1. SQL inject an external-facing system on the non-transaction-processing network. Gain a command shell due to a poorly configured SQL Server.
2. With the command shell, install tools to begin attacks on other internal systems. Likely possible because the DB server instance was allowed to run under admin or another privileged account (see the configuration check sketched after these steps).
3. Begin pwning internal systems. Accidentally manage to pwn a vulnerable workstation with an active VPN to the transaction processing network.
4. Profit.
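If that chain is accurate, the two enabling conditions – a command shell reachable through the database (think xp_cmdshell) and an over-privileged database login – are both things you can audit in a few lines. Here’s a rough sketch against SQL Server using pyodbc; the connection string is hypothetical, and this is illustrative only, not a substitute for a real database assessment:

```python
import pyodbc  # assumes an ODBC driver for SQL Server is installed

# Hypothetical connection string -- point it at the instance you want to audit.
CONN_STR = (
    "DRIVER={SQL Server};SERVER=db01.internal.example.com;"
    "DATABASE=master;Trusted_Connection=yes;"
)

conn = pyodbc.connect(CONN_STR)
cur = conn.cursor()

# 1. Is xp_cmdshell enabled? It almost never should be on a production box.
cur.execute(
    "SELECT value_in_use FROM sys.configurations WHERE name = 'xp_cmdshell'"
)
enabled = cur.fetchone()[0]
print("xp_cmdshell enabled:", bool(enabled))

# 2. Which logins hold sysadmin? If an application's login shows up here,
#    a successful injection through that app owns the whole instance.
#    (The OS account the service itself runs under is a separate check
#    you'd make on the host.)
cur.execute(
    "SELECT name FROM sys.server_principals "
    "WHERE IS_SRVROLEMEMBER('sysadmin', name) = 1"
)
for (login,) in cur.fetchall():
    print("sysadmin login:", login)
```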
“SQL injection was used to compromise a system outside the transaction processing network segment”
I have the same question as one of the previous commenters: SQLi implies an externally facing system, i.e. a web server connected to a database server. Getting from there to the real inside is [supposed to be] tricky, AND getting from the inside to the payment network is tricky [but it was rumored how this one was done]. However, there was no rumor on SQLi -> internal. Please don’t tell me that said SQL server was ALREADY on the real inside and the DMZ web server communicated with it directly….
Please somebody leak it 🙂
Off-the-shelf app or not, getting a shell from SQL injection should be something that’s tested for on any external-facing web app. I don’t mean to sound preachy or anything, but that’s a very well known flaw that’s been used since at least 2004. I know, easy for me to say since I don’t have to actually do this stuff for a living anymore.
The wireless was the first round of attacks (TJX), SQL injection was this round. Same people, different times.
Yes – DLP would. Network DLP would have monitored all outbound traffic and examined any non-encrypted files for sensitive info. It can’t always catch leaks – if the bad guys had used even simple ROT13, DLP would have missed it – but in this case it seems it could have caught it.
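For anyone wondering what that kind of inspection actually involves: at its simplest, network DLP pattern-matches reassembled outbound traffic for strings that look like card numbers, then validates the candidates before alerting or blocking. Here’s a toy sketch of just the detection step (real products do far more – registered data matching, proximity rules, protocol awareness, and so on):

```python
import re

# Candidate PANs: 13-16 digits, optionally separated by spaces or dashes.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    """Standard Luhn checksum, used to weed out random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(payload):
    """Return substrings of an outbound payload that look like valid PANs."""
    hits = []
    for match in CARD_CANDIDATE.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

# 4111 1111 1111 1111 is the classic Visa test number and passes the Luhn check.
print(find_card_numbers("track data: 4111 1111 1111 1111 exp 01/11"))
```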
No shit on the indictment bit – and it annoys me that they would share the info about the breach with a select group of other payment companies, but not with the outside world so the rest of us can protect against the attack.
Some lingering thoughts of my own:
– Keep in mind companies run many systems and applications, not all of which are built in-house. It is a quick reaction to think the victim’s developers should code better, but what if it was a purchased ticket system or inventory system with SQL injection flaws, connected to a SQL server that had certain credentials cached or revealed admin accounts/password patterns? Or one that then serves up bad pages to the office users, who almost certainly use IE… add complexity ad nauseam. But it brings up the question of who should be scanning what systems. Do your internal security teams scan purchased products in addition to internally developed apps? That’s a big deal for most.
– What happened to the reports of breaking in through weak wireless networks? Maybe that’s how they got a non-transaction system…?
– Would DLP actually find things like this? I’m still not sold, but would be *far* more inclined to agree with the more general concept of egress filtering and even inspection. Maybe that’s network-based DLP and I’m just not aware of it.
– I still think it sucks that we need an unsealed indictment to get folks to share the damned information.
Great advice. Remember, folks, that vulnerability scanning is more than just running Qualys or Nessus – you need web app scanning tools and database scanning tools as well, to look for issues in those layers too. Similarly, you want to be looking for more than just vulns per se, but also for services and tools you don’t need (case in point: the xp_cmdshell stored procedure).
It’s interesting to observe that this vector has been, or should have been, well known for quite a while. The oldest publicly available presentation I can find for using a SQL server to execute attacks against non-SQL servers is circa 2005. That PPT clearly outlined how to use SQL injection to upload useful binaries into BLOBs on the SQL server and execute them from the database engine (a shell).
I suspect, though, that even though the info on how to execute that type of attack has been public for quite a while, very few DBAs and sysadmins know it exists.
—Mike
Some enterprises in this cluster of attacks also chose HIPS for their internal primary systems as an additional defense.
Obviously, we can’t talk about specific details, but some HIPS systems seem to be relevant both to the specific form of SQL injection used in this attack *and* to the internal systems that handle the most sensitive data.
Kevin
P.S.: I hope the DLP cynics will now finally recant.