I have been debating writing anything on the spate of publicly reported defense contractor breaches. It’s always risky to talk about breaches when you don’t have any direct knowledge about what’s going on. And, to be honest, unless your job is reporting the news it smells a bit like chasing a hearse.
But I have been reading the stories, and even talking to some reporters (to give them background info – not pretending I have direct knowledge). The more I read, and the more I research, the more I think the generally accepted take on the story is a little off.
The storyline appears to be that RSA was breached, SecurID token seeds were likely lost, and those were successfully used to attack three major defense contractors. The stories also use the generic term “hackers” instead of directly naming any particular attacker.
I read the situation somewhat differently:
- I do believe RSA was breached and seeds lost, which could allow that attacker to compromise SecurID if they also know the customer, the serial number of the token, the PIN, the username, and the time sync of the server (see the sketch after this list). Hard, but not impossible. This is based on the information RSA has released to their customers (the public pieces – again, I don’t have access to NDA info).
- In the initial release RSA stated this was an APT attack. Some people believe that simply means the attacker was sophisticated, but the stricter definition refers to one particular country. I believe Art Coviello was using the strict definition of APT, as that’s the definition used by the defense and intelligence industries which constitute a large part of RSA’s customer base.
- By all reports, SecurIDs were involved in the defense contractor attacks, but Lockheed in particular stated the attack wasn’t successful and no information was lost. If we tie this back to RSA’s advice to customers (update PINs, monitor SecurID logs for specific activity, and watch for phishing) it is entirely reasonable to surmise that Lockheed detected the attack and stopped it before it got far, or even anywhere at all. Several pieces need to come together to compromise SecurID, even if you have the customer seeds.
- The reports of remote access being cut off seem accurate, and are consistent with detecting an attack and shutting down that vector. I’d do the same thing – if I saw a concerted attack against my remote access by a sophisticated attacker I would immediately shut it down until I could eliminate that as a possible entry point.
- Only the party that breached RSA could initiate these attacks. Countries aren’t in the habit of sharing that kind of intel with random hackers, criminals, or even allies.
- These breach disclosures have a political component, especially in combination with Google revealing that they stopped additional attacks emanating from China. These cyberattacks are a complex geopolitical issue we have discussed before. The US administration just released an international strategy for cybersecurity. I don’t think these breaches would have been public 3 years ago, and we can’t ignore the political side when reading the reports. Billions – many billions – are in play.
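To make the “several pieces” point concrete, here is a minimal, hypothetical sketch of a time-based token scheme in Python. This is not RSA’s actual SecurID algorithm (the real tokencode derivation is proprietary); the function names and the use of HMAC-SHA256 here are my own illustrative assumptions. It only shows why stolen seeds alone aren’t enough: the attacker still has to map a seed to a specific user’s token, learn that user’s PIN and username, and stay in sync with the authentication server’s clock.

```python
import hashlib
import hmac
import time

# Simplified illustration only -- NOT RSA's actual SecurID algorithm.

def tokencode(seed: bytes, timestamp: int, interval: int = 60) -> str:
    """Derive a 6-digit code from a per-token seed and the current time window."""
    window = timestamp // interval
    digest = hmac.new(seed, str(window).encode(), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def passcode(pin: str, seed: bytes, timestamp: int) -> str:
    """What the user actually submits: PIN combined with the current tokencode."""
    return pin + tokencode(seed, timestamp)

# An attacker holding a pile of stolen seeds still needs to know which seed
# maps to which user's token (the serial number), that user's PIN and
# username, and the server's clock state before any of this is usable.
if __name__ == "__main__":
    seed = b"per-token-secret"          # the piece allegedly stolen from RSA
    print(passcode("1234", seed, int(time.time())))
```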
In summary: I do believe SecurID is involved, I don’t think the attacks were successful, and it’s only prudent to yank remote access and swap out tokens. Politics are also heavily in play and the US government is deeply involved, which affects everything we are hearing, from everybody.
If you are an RSA customer you need to ask yourself whether you are a target for international espionage. All SecurID customers should change out PINs, instruct employees never to give out information about their tokens, and start looking hard at logs. If you think you’re on the target list, look harder. And call your RSA rep.
But the macro point to me is whether we just crossed a line. As I wrote a couple months ago, I believe security is a self-correcting system. We are never too secure because that’s more friction than people will accept. But we are never too insecure (for long at least) because society stops functioning. If we look at these incidents in the context of the recent Mac Defender hype, financial attacks, and Anonymous/Lulz events, it’s time to ask whether the pain is exceeding our thresholds.
I don’t know the answer, and I don’t think any of us can fully predict either the timing or what happens next. But I can promise you that it doesn’t translate directly into increased security budgets and freedom for us security folks to do whatever we want. Life is never so simple.
2 Replies to “A Different Take on the Defense Contractor/RSA Breach Miasma”
I’d like to offer two different perspectives. From an infosec perspective, we should realize that RSA was the vector, a means to an end so that an adversary could then exploit high-value targets that rely on SecurID. The defense contractor disclosures point to a well-funded nation-state attacker, since defense secrets were the goal, not monetizing stolen information assets such as personal info, credit card numbers, etc.
My second perspective can be found on my blog, which covers Psychological Operations (PSYOP) – the psychological effect of a cyber attack. I compare it to a burglary.
As for the administration’s efforts, thus far the legislative route and the ‘strategy’ route strike me as tilting at windmills.
Rich,
If the speculation about seed records is right, I think that the RSA breach is somewhat like (for illustration purposes, anyway) a CA losing the private key for an issuing authority. In the CA case, we would expect that they would revoke the certificates/keys of the issuing authority and take on the pain of issuing new certificates to their customers to restore trust (trust == value) in their product and brand, and because if they didn’t, the browser vendors would remove them from their trusted root lists to protect their own customers. The main difference is that RSA doesn’t have that counterbalance, because end users don’t have a choice about which TFA system they use.
However, we’ve seen via Stuxnet’s code signing that sometimes revoking a certificate isn’t as easy as it seems, certificates don’t get revoked when they should be, and we’re forced to limp along less secure than we were before the incident and hope nothing else bad happens.
A recent Network World article sums up the main lesson for organizations: don’t trust your security vendors. While that’s sound advice for security in general, a lot of business folks will get tired of paying for more analyst time to watch a system that’s lost trust (== value), regardless of how likely they are to be a target for foreign intelligence. Absent some motivating factor for RSA to restore that trust (i.e. decisive, immediate, and catastrophic loss of business), customers may be forced to correct the problem for themselves over time by replacing SecurID with something more secure at a comparable price, or a cheaper solution with comparable security.
Or, they could just accept the temporarily increased risk until all of their tokens are replaced in their normal lifecycle, and hope nothing else bad happens.