Thanks to a missing arrival I’m blogging live from the “Analyst Hamster Maze” at Symposium in Orlando. That’s how we refer to the One-on-One area in the Swan hotel; there’s really no other way to describe about 100 temporary booths in a big conference room filled with poorly fed and watered analysts. If you’ve never been to a Gartner conference, any paying attendee can sign up for a 30-minute face-to-face analyst meeting for Q&A on pretty much anything. I like to call it “Stump the Analyst”, and it’s a good way for us to interact with a lot of end users. (You vendors need to stop abusing the system with veiled briefings and inane “face time”.) It does, however, get pretty brutal by day 5.

My first meeting today was pretty interesting. The discussion started with SAP security and ended with SCADA. For those who don’t know, SCADA (Supervisory Control and Data Acquisition) is the umbrella acronym for the process control systems that connect the digital and physical worlds in utilities and manufacturing. These are large-scale systems and networks for controlling everything from a manufacturing floor, to a power network, to a sewage system.

SCADA is kind of interesting. These are systems that do things, from making your Cheerios to keeping your electricity running. When SCADA goes down it’s pretty serious, and when an outsider gets in (very rare, but there are some cases) they can do some really nasty sh*t. We’re talking critical infrastructure here. SANS seems to be focusing a lot on SCADA these days, either out of goodwill or (more likely) because it’s hot enough that they can make some money on it.

I started writing about SCADA around 5 years ago, and my earlier work sort of martyred me in the SCADA security world (or at least with the few people who read the research). These days I’m feeling a bit vindicated as the industry shifts a tad towards my positions.

What’s the debate? There has been a trend for a while to move process control networks onto TCP/IP-based (in other words, Internet-compatible) networks and standard (as in Windows and UNIX) systems. SCADA developed long before our modern computing infrastructure, and until the past 5-10 years most systems ran on proprietary protocols, networks, and applications. It’s only natural to want to leverage existing infrastructure, technology advancements, standardization, and skill sets by moving to fairly universal platforms.

The problem is that the very proprietary nature of SCADA was an excellent security control: few outsiders understood it, and it wasn’t accessible from the Internet. You know, that big global network, home to the script kiddies of the world. To exacerbate the problem, many companies started converging their business networks with their process control (SCADA) networks. Now their engineers could control the power grid from the same PCs they read email and browsed porn on.

It was early in the trend, and my advice was to plan carefully going forward and keep these things separate, often at additional cost, before we created an insurmountable problem with our critical infrastructure. I saw three distinct problems emerging as we moved to TCP/IP and standard platforms:

1. Network failure due to denial of service: if the network is saturated with attack traffic, as we saw with the SQL Slammer and Blaster worms, you can’t communicate with the SCADA systems. Even if they aren’t directly affected (and most have failsafes to keep them from running amok), you still can’t monitor and adjust for either efficiency or safety.
The failsafe might shut down that boiler before it explodes, but there are probably some serious costs in a situation like that. I’ve heard rumors that Blaster interfered with communications during the big Northeast power outage; it didn’t infect SCADA systems, but it sure messed up all those engineer PCs and the email between power stations.

2. Exploitation of the standard platform: if your switching substation, MRI, or chemical mixer runs on Windows or Unix, it’s subject to infection and exploitation through standard attacks, viruses, and worms. Even locked down, we’ve seen plenty of vulnerabilities in the core of every operating system that are remotely exploitable over standard ports by mass infections. Mix your email and web servers onto the same network as these things, even with some firewalls in the mix, and you’re asking for trouble.

3. Direct exploitation: plenty of hackers would love to pwn a chemical plant. I know of one case, outside the US, where external attackers controlled a commuter train system on two separate occasions. Maybe it was just a game of Railroad Tycoon gone bad, a la “War Games”, but I’d rather not have to worry about these kinds of things. Standard networks, platforms, and Internet connectivity sure make this a lot easier.

So where are we today? One way to solve the problem is to completely isolate the networks. More realistically, we can use virtual air gaps, and what I call virtual air locks, to safely exchange information between process control and business networks. Imagine an isolated server between two firewalls, running TCP/IP on one side and maybe something like IPX (a different network protocol) on the other, with only a single, non-standard port for exchanging information. The odds of traversing to the process control network are pretty darn slim. (For more details check out the Gartner research I wrote on this; I don’t want to violate my employer’s copyright. There’s also a rough sketch of the idea at the end of this post.)

Thanks to what I hear were some close calls, the industry is taking security a heck of a lot more seriously. The power industry consortium (NERC) recently issued security guidelines that redefine the security perimeter to include everything connected to the SCADA side, and the federal regulatory body, FERC, requires conformance with the NERC standards. Thus if you converge your process control and business networks, you have to secure and audit the business side as tightly as the process control side (a much tougher standard). The business costs could be extreme.

The result? It’s quite possibly now cheaper to isolate the networks and secure the heck out of them.
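If the “virtual air lock” sounds abstract, here’s a minimal sketch of the pattern. To be clear, this is not the Gartner design; the hostnames, the port number, and the READ-only message format are all made up for illustration, and the process control side is shown as plain TCP rather than IPX to keep it short. The point is the shape of the thing: an isolated relay that listens on a single non-standard port facing the business network, accepts exactly one tiny, fixed-format, read-only request, and passes nothing else through.

```python
# Hypothetical "virtual air lock" relay -- a sketch, not the Gartner design.
# It runs on an isolated host between two firewalls. The business-side
# firewall only permits traffic to BUSINESS_PORT on this host; the process
# control firewall only permits this host to reach the historian. The names,
# port, and message format below are illustrative assumptions.

import re
import socket
import socketserver

BUSINESS_PORT = 4099                      # the single, non-standard listening port
HISTORIAN_ADDR = ("pcn-historian", 9100)  # process control side (assumed plain TCP here)

# Only a fixed-format, read-only request gets through, e.g. b"READ TANK1.LEVEL\n"
REQUEST_RE = re.compile(rb"^READ [A-Z0-9_.]{1,32}\n$")

class AirLockHandler(socketserver.StreamRequestHandler):
    def handle(self):
        request = self.rfile.readline(64)
        if not REQUEST_RE.match(request):
            self.wfile.write(b"DENIED\n")   # anything else is dropped, never relayed
            return
        try:
            # Relay the sanitized request to the process control side, return the answer.
            with socket.create_connection(HISTORIAN_ADDR, timeout=5) as pcn:
                pcn.sendall(request)
                reply = pcn.recv(256)
        except OSError:
            reply = b"UNAVAILABLE\n"        # fail closed; never expose PCN-side errors
        self.wfile.write(reply)

if __name__ == "__main__":
    # Bind only to the business-facing interface in real life; 0.0.0.0 keeps the sketch simple.
    with socketserver.ThreadingTCPServer(("0.0.0.0", BUSINESS_PORT), AirLockHandler) as srv:
        srv.serve_forever()
```

The code is the least interesting part; the two firewalls do the heavy lifting. The business network can only ever reach this one host on this one port, and the process control network only ever hears from this one host, so even a thoroughly owned business PC has a very narrow path to the control systems.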