This Summary is a short rant on how most firms appear baffled about how to handle mobile and cloud computing. Companies tend to view cloud and mobile computing as wonderful new advancements, but without thinking critically about how customers want to use these technologies; instead they project their own desires onto the technology. I imagine early automobiles were similarly saddled with legacy holdovers from horse-drawn carriages, even though they were something genuinely new. We are in that rough transition period, where people are still adjusting to these technologies and thinking of them in old and outmoded terms.

My current beef is with web sites that block users who appear to be coming from cloud services. Right – why on earth would legitimate users come from a cloud? At least that appears to be their train of thought. How many of you ran into a problem buying stuff with PayPal when connected through a cloud provider like Amazon, Rackspace, or Azure? PayPal was blocking all traffic from cloud provider IP addresses. Many sites simply block all traffic from cloud service providers. I assume it’s because they think no legitimate user would do this – only hackers. But some apps route traffic through their cloud services, and some users leverage Amazon as a web proxy for security and privacy. Chris Hoff predicted in The Frogs Who Desired a King that attackers could leverage cloud computing and stolen credit cards to turn “The mechanical Turk into a maniacal jerk”, but there are far more legitimate users doing this than malicious ones. Forbidding legitimate mobile apps and users from leveraging cloud proxies is essentially saying, “You are strange and scary, so go away.”
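
To be concrete about the mechanism: below is a minimal sketch, in Python, of the kind of blanket filter described above. The ranges and the `is_blocked` helper are hypothetical, not any site's actual rule set; the point is simply that a CIDR check cannot tell a legitimate app or proxy user apart from an attacker who happens to rent the same infrastructure.

```python
import ipaddress

# Hypothetical example ranges; real providers publish much larger, constantly changing lists.
CLOUD_RANGES = [
    ipaddress.ip_network("54.239.0.0/17"),  # an AWS-style range
    ipaddress.ip_network("23.96.0.0/13"),   # an Azure-style range
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client address falls inside any listed cloud range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in network for network in CLOUD_RANGES)

if __name__ == "__main__":
    # A user proxying through a cloud instance gets the same treatment as a bot.
    print(is_blocked("54.239.12.34"))   # True  - blocked, legitimate or not
    print(is_blocked("198.51.100.7"))   # False - allowed
```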

Have you noticed how many web sites, if they discover you are using a mobile device, screw up their web pages? And I mean totally hose things up. The San Jose Mercury News is one example: after a 30-second promotional iPad “BANG – Get our iPad app NOW” page, you get locked into an infinite ‘django-SJMercury’ refresh loop and can never reach the actual site. The San Francisco Chronicle is no better: every page transition gives you two full pages of white space sandwiching their “Get our App” banner, and somewhere after that you find the real site. That is, if you actually scroll past their pages of white space instead of giving up and going elsewhere. Clearly they don’t use these platforms to view their own sites. Two media publications that cover Silicon Valley appear incapable of grasping media advances that came out of their own back yard. I won’t even go into how crappy some of their apps are (looking at you, Wall Street Journal) at leveraging the advantages of the new medium, but I do need to ask: why do major web sites think you can only use an app on a mobile device or a browser on a PC?
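
For what it's worth, a refresh loop like that usually comes from naive device detection. The sketch below is purely hypothetical (I have no idea what the Mercury News actually does); it just shows how a user-agent redirect applied to every page, including the promo page itself, keeps a mobile reader from ever reaching the content.

```python
def handle_request(path: str, user_agent: str) -> str:
    """Toy request handler: redirect mobile user agents to an app promo page."""
    is_mobile = "iPad" in user_agent or "iPhone" in user_agent
    if is_mobile and path != "/get-our-app":
        return "redirect:/get-our-app"
    if is_mobile and path == "/get-our-app":
        # The promo page "helpfully" refreshes back to the requested article,
        # which the rule above immediately bounces back here. Infinite loop.
        return "redirect:/article"
    return "200 OK: article content"

if __name__ == "__main__":
    path, ua = "/article", "Mozilla/5.0 (iPad; CPU OS 5_1 like Mac OS X)"
    for _ in range(4):  # a real browser would keep going until the user gives up
        response = handle_request(path, ua)
        print(path, "->", response)
        if not response.startswith("redirect:"):
            break
        path = response.split(":", 1)[1]
```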

Finally, I have a modicum of sympathy for Mat Honan after attackers wiped out his data, and I understand my rant in this week’s Incite rubbed some people the wrong way. I still think he should have taken more personal responsibility and done less blame-casting. Chris Hoff seems to see eye to eye with me on this, and he did a much better job of describing how the real issues in this case were obfuscated by rhetoric and attention seeking. I’d go one step further and say that cloud and mobile computing demonstrate the futility of passwords. We have reached a point where we need to evolve past this primitive form of authentication for mobile and cloud computing, and the early attempts, a password plus a mobile device, are no better. If this incident was not proof enough that passwords need to die, wait until Near Field Payments from mobile apps hit: cloned and stolen phones will be the new cash machines for hackers.

I could go on, but I am betting you have already noticed, or soon will, how poorly firms cope with cloud and mobile technologies. Their bumbling does more harm than good.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

  • Adrian Lane: The TdF edition of the Friday Summary. Just because that would be friggin’ awesome!
  • Mike Rothman: Friday Summary, TdF Edition. It’s not a job, it’s an adventure, as Rich experienced on his Tour de France trip. But the message about never getting complacent and working hard, even when no one is looking, really resonated with me.
  • Rich: Mike slams the media. My bet is this is everyone’s favorite this week. And not only because we barely posted anything else.

Other Securosis Posts

Favorite Outside Posts

  • Adrian Lane: Software Runs the World. I’m not certain we can say software is totally to blame for the Knight Capital issue, but this is a thought-provoking piece in mainstream media. I am certain those of you who have read Daemon are not impressed.
  • Mike Rothman: NinjaTel, the hacker cellphone network. Don’t think you can build your own cellular network? Think again – here’s how the Ninjas did it for Defcon. Do these folks have day jobs?
  • Rich: How the world’s largest spam botnet was brought down. I love these success stories, especially when so many people keep claiming we are failing.

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Ahmed Masud, in response to Pragmatic WAF Management: the WAF Management Process.

First of all, very cool! I think the fundamental principles of configuring WAFs are very important, as is understanding how to run any security device. There is an issue with the way current WAFs are built, though, that may not be covered by your axioms.

I think the causes you give for the failure of WAFs are valid but paint an incomplete picture. In particular, I humbly disagree with the root cause you conclude with, namely that WAFs fail due to operational failure rather than technology failure. Let me try to explain:

You pointed out that “WAF is not a set and forget product, but for compliance purposes it is often used that way – resulting in mediocre protection.” However, you don’t examine why a WAF is not a set-and-forget technology, or, for that matter, why WAFs require so much manual intervention, which in turn leads to operational bottlenecks.

The basic reason is that the current approach to web-application security attempts something that is not possible: it amounts to trying to violate the Halting Problem (strictly speaking, Rice’s Theorem, but the analogy works).

WAFs generally use some form of regular language to describe the patterns they are trying to hunt down, whereas attacks on web applications are typically at least as complex as non-contextual (context-free) languages, and often outright Turing machine deltas. What I mean by a Turing machine delta is that the attack itself introduces either a new state or a hidden condition that generates a new sentential form inside the web app, something a regular-language parser would never be able to figure out. And pretty much all WAFs use some form of pattern matching that can be mathematically reduced to a group of regular languages, with a hardcoded non-contextual overlay that selects between them.

It can also be mathematically demonstrated that no matter how well an individual WAF administrator understands spoofing, fraud, non-repudiation, denial of service attacks, and application misuse, she cannot encode that knowledge into a tool that is fundamentally incapable of capturing it due to its theoretical design limitations. So in this case it’s a violation of Rice’s Theorem.

Of course, if we say that all attacks are app input-based, that all the input falls within regular-language scope, and that the WAF administrator completely understands the hidden Turing machines of her web applications, then we can successfully use a WAF. The problem is that even the web-app developers cannot claim to know whether they have hidden (inadvertent, weird, or malicious) Turing machines inside their code, because 1) they don’t understand covert channels (the basis for most SQL injection, for example) and 2) they don’t control anything about the foundation of their app stack (say, a buffer overflow in an extension library used by a PHP script) … But let’s leave that alone for now.

All current-generation WAF engines can demonstrably be reduced to a collection of regular-language groups, which can be subclassed as a non-contextual language: namely, a set of regular expressions with a selection operation over them forming the group. So the WAF administrator needs to (1) keep up to date with all known attacks, albeit with help from the WAF vendors through automation, and constantly update the WAF with new patterns for them, and (2) hope that none of the attacks fall outside the WAF’s schema.

Given this, it’s fairly trivial to show that there exists at least one attack (sentential form) that falls outside the detection scope of a WAF; in other words, something a WAF will misclassify.
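
A minimal sketch of Ahmed's point, using a hypothetical two-signature rule set rather than any real WAF's: when the detection logic is just regular expressions with a selection over them, it catches the exact patterns it was given, while a lightly obfuscated variant of the same SQL injection falls outside that regular-language schema and is misclassified as clean.

```python
import re

# Hypothetical signature set - the kind of regex group described above.
SIGNATURES = [
    re.compile(r"union\s+select", re.IGNORECASE),
    re.compile(r"or\s+1\s*=\s*1", re.IGNORECASE),
]

def waf_allows(parameter_value: str) -> bool:
    """Return True if no signature matches, i.e. the WAF passes the input through."""
    return not any(sig.search(parameter_value) for sig in SIGNATURES)

if __name__ == "__main__":
    print(waf_allows("id=1 UNION SELECT password FROM users"))     # False - caught
    # Inline comments and quoting carry the same attack past the signatures;
    # the regexes have no model of how the database will actually parse this.
    print(waf_allows("id=1 UNION/**/SELECT password FROM users"))  # True  - missed
    print(waf_allows("id=1 OR '1'='1'"))                           # True  - missed
```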

Not so trivial is the proof of the general case, but it is provable that any subset of a Turing detection engine has an asymptotic limit of around 1/sqrt(2) as the upper bound on successfully classifying whether an input forms an attack on an app, provided we fully know all of the machines within the target web apps and that each machine’s language is fully qualified into just two categories. This asymptote is a calculable statistical consequence of Rice’s Theorem, a number corroborated by 20 years of data on antivirus programs trying to detect computer viruses: their success rate has hit a practical maximum of about 68%.

So I would submit to you that the reason regular WAFs don’t work is not just operational flaws; the approach itself has a fundamental design flaw. A WAF is trying to qualify a hidden machine (the one that makes up the bugs inside the app) by examining input (formally, a sentential form for one of multiple machines in your app, both good and bad) and determining whether that sentential form creates an undesired state transition, or a previously unprogrammed state-machine substrate which is its own catalyst for an undesired state within the app, or on which another sentential form can act maliciously to yield the same result.

With this fundamental issue of trying to violate the defining principles of computer science, all we can do is create a perfect operational approach to asymptotic failure, which as I said earlier can be demonstrated to be ~30%. The 30% (= 1 – 0.707) comes from the RMS value of normalized detection of malicious sentential forms in general, assuming no malicious intent by the programmer.
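
Taking Ahmed's 1/sqrt(2) asymptote as given (it is his claim, not something derived here), the arithmetic behind the figures he quotes, and the ~68% antivirus comparison above, is simply:

```python
import math

detection_ceiling = 1 / math.sqrt(2)   # ~0.707: the claimed upper limit on detection
failure_floor = 1 - detection_ceiling  # ~0.293: the "~30%" failure rate in the comment

print(f"claimed detection ceiling: {detection_ceiling:.1%}")  # 70.7%
print(f"claimed failure floor:     {failure_floor:.1%}")      # 29.3%
```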

So to summarize, it’s not just operational flaws, it’s fundamental flaws within the current design of WAFs.

Best regards, Ahmed

Trustifier
