Tuesday, July 09, 2013

Kudos: Microsoft’s App Store Security Policy

By Rich

Today on the Microsoft Security Response Center Blog:

Under the policy, developers will have a maximum of 180 days to submit an updated app for security vulnerabilities that are not under active attack and are rated Critical or Important according to the Microsoft Security Response Center rating system. The updated app must be submitted to the store within 180 days of the first report that reproduces the issue. Microsoft reserves the right to take swift action in all cases, which may include immediate removal of the app from the store, and will exercise its discretion on a case-by-case basis.

But the best part:

If you have discovered a vulnerability in a store application and have unsuccessfully attempted to work with the developer to address it, you can request assistance by contacting secure@microsoft.com.

Clear, concise, and puts users first. My understanding is that Apple is also pretty tight on suspending vulnerable apps, but they don’t have it formalized into visible policy, with a single contact point. If anyone knows Google’s policy (formal or otherwise), please drop it in the comments, but that is clearly a different ecosystem.


Tuesday, June 04, 2013

New Google disclosure policy is quite good

By Rich

Google has stated they will now disclose vulnerability details in 7 days under certain circumstances:

Based on our experience, however, we believe that more urgent action – within 7 days – is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.

Gunter Ollmann, among others, doesn’t like this:

The presence of 0-day vulnerability exploitation is often a real and considerable threat to the Internet – particularly when very popular consumer-level software is the target. I think the stance of Chris Evans and Drew Hintz over at Google on a 60-day turnaround of vulnerability fixes from discovery, and a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality.

As part of responsible disclosure, I have always thought that disclosing actively exploited vulnerabilities immediately is warranted. There are exceptions, but users need to know when they are at risk.

The downside is that if the attack is limited in nature, revealing vulnerability details exposes a wider user base.

It’s a no-win situation, but I almost always err toward giving people the ability to defend themselves. Keep in mind that this applies only to active, critical exploitation – not to unexploited new vulnerabilities. Disclosing those without giving the vendor time to fix them only hurts users.


Tuesday, March 26, 2013

How Cloud Computing (Sometimes) Changes Disclosure

By Rich

When writing about the flaw in Apple’s account recovery process last week, something set my spidey sense tingling. The situation seemed different from other, similar ones, even though exploitation was blocked quickly and the flaw was fixed within about 8 hours.

At first I wanted to blame The Verge for reporting on an unpatched flaw without getting a response from Apple. But I can’t, because the flaw was already public on the Internet, they didn’t link to it directly, and users were at active risk.

Then I realized that this is the nature of cloud attacks and disclosures in general. With old-style software vulnerabilities, when a flaw is disclosed, by whatever means, attackers still need to find and target victims. Sometimes this is very easy and sometimes it’s hard, but the attacks are distributed by nature.

With a cloud provider flaw, especially one in a SaaS provider, the nature of the target and flaw gives attackers a centralized target. Full disclosure can be riskier for users of the service, depending on the research or effort required to attack it. All users are immediately at risk, all exploits are effectively 0-days, and users may have no defensive recourse.

This places new responsibilities on both cloud providers and security researchers. I suspect we will see this play out in some interesting ways over the next few years.

To be clear, I am not saying all cloud-based vulnerabilities are equivalent, nor am I making a blanket statement on the morality of disclosure. I am just saying this smells different, and that’s worth thinking about.


Wednesday, February 27, 2013

Bit9 Details Breach

By Rich

Bit9 released more details of how they were hacked.

The level of detail is excellent, and there seems to be minimal or no spin. There are a couple of additional details it would be valuable to see (the specifics of the SQL injection, and how the user accounts were compromised), but overall the post is clear, with a ton of specifics on some of what they are finding.

More security vendors should be open and disclose with at least this level of detail – especially since we know many of you cover up incidents. When we are eventually breached, I will strive to disclose all the technical details.

I gave Bit9 some crap when the breach first happened (due to some of their earlier marketing), but I can’t fault how they are now opening up.


Monday, January 21, 2013

Don’t respond to a breach like this

By Rich

A student who legitimately reported a security breach was expelled from college for checking to see whether the hole was fixed.

(From the original article):

Ahmed Al-Khabaz, a 20-year-old computer science student at Dawson and a member of the school’s software development club, was working on a mobile app to allow students easier access to their college account when he and a colleague discovered what he describes as “sloppy coding” in the widely used Omnivox software which would allow “anyone with a basic knowledge of computers to gain access to the personal information of any student in the system, including social insurance number, home address and phone number, class schedule, basically all the information the college has on a student.”

Two days later, Mr. Al-Khabaz decided to run a software program called Acunetix, designed to test for vulnerabilities in websites, to ensure that the issues he and Mija had identified had been corrected. A few minutes later, the phone rang in the home he shares with his parents.

It was the president of the SaaS company, who forced him to sign an NDA under threat of reporting him to law enforcement. He was then expelled.

Reactions like this have a chilling effect. They motivate discoverers not to report flaws at all, to release them publicly, or to sell or give them to someone who will use them maliciously. None of those outcomes is good. Even if it pisses you off, even if you think a line was crossed, if someone finds a flaw and tries to work with you to protect customers and users rather than exploiting it maliciously, you need to engage with them positively. No matter how much it hurts.

Because you sure as heck don’t want to end up on the pointy end of an article like this.


Friday, July 10, 2009

Pure Extortion

By Rich

Threatpost has an interesting article up on the latest disclosure slime-fest (originally from Educated Guesswork). It seems VoIPShield decided vendors should pay them for vulnerabilities – or else.

While I personally think security researchers should disclose vulnerabilities to the affected vendors, I understand that some choose to keep their findings to themselves. Others choose to disclose everything no matter what, and while I vehemently disagree with that approach, I at least understand the reasoning behind it. And at times, under reasonable disclosure, researchers should publicly disclose vulnerability details when a vendor’s unresponsiveness is placing customers at risk.

But VoIPShield? Oh my:

“I wanted to inform you that VoIPshield is making significant changes to its Vulnerabilities Disclosure Policy to VoIP products vendors. Effective immediately, we will no longer make voluntary disclosures of vulnerabilities to Avaya or any other vendor. Instead, the results of the vulnerability research performed by VoIPshield Labs, including technical descriptions, exploit code and other elements necessary to recreate and test the vulnerabilities in your lab, is available to be licensed from VoIPshield for use by Avaya on an annual subscription basis.

Later this month we plan to make this content available to the entire industry through an on-line subscription service, the working name of which is VoIPshield “V-Portal” Vulnerability Information Database. There will be four levels of access (casual observer; security professional; security products vendor; and VoIP products vendor), each with successively more detailed information about the vulnerabilities. The first level of access (summary vulnerability information, similar to what’s on our website presently) will be free. The other levels will be available for an annual subscription fee. Access to each level of content will be to qualified users only, and requests for subscription will be rigorously screened.

If you require vendor payment for vulnerability details, but will release those details to others, that’s extortion. VoIPShield is saying, “We’ve found something bad, but you only get to see it if you pay us – of course so does anyone else who pays.”

Guess what guys – you aren’t outsourced QA. You made the decision to research vulnerabilities in particular vendors’ products, and you made the decision to place those companies’ customers at risk by releasing information to parties other than the appropriate vendor. This is nothing more than blackmail. Is vulnerability research valuable? Heck yes, but you can’t force someone to pay you for it and still be considered ethical.

If you demand vendor payment for vulnerability details but never release them to anyone else, that might be a little low, but it isn’t completely unethical. Demanding payment and releasing details to anyone other than the vendor? Any idiot knows what that’s called.
