Securosis

Research

Tuesday Patchapalooza

“Wait, didn’t I effing just patch that?” That was my initial reaction this morning when I read about another Adobe Flash security update. Having just updated my systems Sunday, I was about to ignore the alerts until I saw the headline from Threatpost: Deja Vu: Another Adobe Flash Player Security Update Released:

Adobe released its regularly scheduled security updates today, including another set of fixes for its ubiquitous Flash Player, less than a week after an emergency patch took care of two zero-day vulnerabilities being exploited in the wild. … The vulnerabilities were rated most severe on Windows, and Adobe recommends those users update to version 11.6.602.168, while Mac OS X users should update to 11.6.602.167.

But that’s not all: Microsoft’s Patch Tuesday bundle included 57 fixes, and in case you missed it, there was another Java update last week, with one more on the way. I want to make a few points. The most obvious one is that there are a great many new critical security patches, most of which are being actively exploited. Even if you patched a few hours ago you should consider updating. Again. Java, Flash, and your MS platforms.

As we spiral in on ever-shorter patch cycles, is it time to admit that this is simply the way it is going to be, and that software is a best-effort work in progress? If so, we should expect to patch every week. What do shorter patch cycles mean for regression testing? Is that model even possible in today’s hailstorm of functional and security patches? Platforms like the Oracle relational database still lag 18 to 24 months behind. It is deep-seated tradition not to patch until things are fully tested, because the applications and databases are mission critical and customers cannot afford downtime or loss of functionality if the patch breaks something critical. Companies remain entrenched in the mindset that back-office applications are not as susceptible to 0-day attacks, so the status quo must be maintained.
When Rich wrote his benchmark research paper on quantifying patch management costs, one of his goals was to provide IT managers with the tools necessary to understand the expense of patching – in time, money, and manpower. But tools in cloud and virtual environments automate many of the manual parts and make patch processes easier. And some systems are not fully under the control of IT. It is time to re-examine patch strategies, and the systemic tradeoffs between fast and slow patching cycles.
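To make that tradeoff concrete, here is a minimal sketch (in Python) of the check every IT shop implicitly runs on Patch Tuesday: compare what's installed against the advisory minimums. The Flash version numbers are the ones from the Adobe bulletin above; the inventory format is hypothetical – substitute whatever your asset management tool actually reports.

```python
# Sketch: flag installed software older than the advisory minimums.
# The Flash versions are from the Adobe bulletin discussed above; the
# inventory records are a hypothetical stand-in for an asset database.

def parse_version(v):
    """Turn a dotted version string like '11.6.602.168' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_patch(installed, required):
    """True if the installed version is older than the advisory minimum."""
    return parse_version(installed) < parse_version(required)

# Advisory minimums, keyed by (product, platform)
ADVISORY = {
    ("flash", "windows"): "11.6.602.168",
    ("flash", "macosx"): "11.6.602.167",
}

def audit(inventory):
    """Return (host, product, platform) for every system that needs an update."""
    return [
        (host, product, platform)
        for host, product, platform, installed in inventory
        if (product, platform) in ADVISORY
        and needs_patch(installed, ADVISORY[(product, platform)])
    ]
```

Trivial for one product on one box; the cost the paper quantifies comes from running this across dozens of products, thousands of hosts, and a new advisory every week.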


RSA Conference Guide 2013: Endpoint Security

The more things change, the more they stay the same. Endpoint security remains predominately focused on dealing with malware, and the bundling continues unabated. Now we increasingly see endpoint systems management capabilities integrated with endpoint protection, since it finally became clear that an unpatched or poorly configured device may be more of a problem than fighting off a malware attack. And as we discuss below, mobile device management (MDM) is next in the bundling parade. But first things first: advanced malware remains the topic of the day, and vendors will have a lot to say about it at RSAC 2013.

AV Adjunctivitus

Last year we talked about the Biggest AV Loser, and there is some truth to that. But it seems most companies have reconciled themselves to the fact that they still need an endpoint protection suite to get the compliance checkbox. Endpoint protection vendors, of course, haven’t given up, and continue to add incremental capabilities to deal with advanced attacks. But the innovation is happening outside endpoint protection. IP reputation is yesterday’s news. As we discussed in our Evolving Endpoint Malware Detection research last year, it’s no longer about what the malware file looks like – it’s now all about what it does. We call this behavioral context, and we will see a few technologies addressing it at the RSA Conference. Some integrate at the kernel level to detect bad behavior, some replace key applications (such as the browser) to isolate activity, and others use very cool virtualization technology to keep everything separate. Regardless of how the primary technology works, the secondary bits provide a glimmer of hope that someday we might be able to stop advanced malware. Not that you can really stop it, but we need something better than trying to get a file signature for a polymorphic attack. Also pay attention to proliferation analysis to deal with the increasing amount of VM-aware malware.
Attackers know that all these network-based sandboxes (network-based malware detection) use virtual machines to explode the malware and determine whether it’s bad. So they do a quick check, and when the malware is executed in a VM it does nothing. Quite spiffy. A file that won’t trigger in the sandbox is likely to wreak havoc once it makes its way onto a real device. At that point you can flag the file as bad, but it might already be running rampant through your environment. It would be great to know where that file came from and where it’s been, with a list of devices that might be compromised. Yup, that’s what proliferation analysis does, and it’s another adjunct we expect to become more popular over the next few years.

Mobile. Still management, not security

BYOD will be hot hot hot again at this year’s RSA Conference, as we discussed in Key Themes. But we don’t yet see much malware on these devices. Sure, if someone jailbreaks their device all bets are off. And Google still has a lot of work to do to provide a more structured app environment. But with mobile devices the real security problem is still management. It’s about making sure the configurations are solid, only authorized applications are loaded, and the device can be wiped if necessary. So you will see a lot of MDM (mobile device management) at the show. In fact, a handful of independent companies are growing like weeds, because any company with more than a dozen or so folks has a mobile management problem. But you will also see all the big endpoint security vendors talking about their MDM solutions. Like full disk encryption a few years ago, MDM is being acquired and integrated into endpoint protection suites at a furious clip. Eventually you won’t need to buy a separate MDM solution – it will just be built in. But ‘eventually’ means years, not months. Current bundled endpoint/MDM solutions are less robust than standalone offerings.
But as consolidation continues the gap will shrink, until MDM is eventually just a negotiating point in endpoint protection renewal discussions. We will also see increasing containerization of corporate data. Pretty much all organizations have given up on trying to stop important data from making its way onto mobile devices, so they are putting the data in walled gardens instead. These containers can be wiped quickly and easily, and allow only approved applications to run within the container with access to the important data. Yes, it effectively dumbs down mobile devices, but most IT shops are willing to make that compromise rather than give up control over all the data.

The Increasingly Serious “AV Sucks” Perception Battle

We would be the last guys to say endpoint security suites provide adequate protection against modern threats. But statements that they provide no value aren’t true either. It all depends on the adversary, the attack vector, the monitoring infrastructure in place to react faster and better, and most importantly on complementary controls. Recently SYMC took a head shot when the NYT threw them under the bus over the NYT breach. A few days later Bit9 realized that Karma is a Bit9h, when they apparently forgot to run their own software on internal devices and were breached. I guess what they say about the shoemaker’s children is correct. It will be interesting to see how much the endpoint protection behemoths continue their idiotic APT defense positioning. As we have said over and over, that kind of FUD may sell some product, but it is a short-sighted way to manage customer expectations. Customers will get hit, and then be pissed when they realize their endpoint protection vendor sold them a bill of goods. To be fair, endpoint protection folks have added a number of new capabilities to more effectively leverage the cloud, the breadth of their customer bases, and their research capabilities, and to improve detection – as discussed above.
But that doesn’t really matter if a customer isn’t using the latest and greatest versions of the software, or if they don’t have sufficient additional controls in place. Nor will it convince customers who already believe endpoint tools are inherently weak. They can ask Microsoft about that – most folks


Incite 2/13/2013: Baby(sitter) on Board

The Boss and I don’t get out to see movies too often. At least for the last 12 years or so. It was hard to justify paying a babysitter for two extra hours so we could go see a movie. Quick dinner? Sure. Party with friends, absolutely. But a movie, not so much. We’d wait until Grandma came to visit, and then we’d do things like see movies and have date nights. But I’m happy to say that’s changing. You see, XX1 is now 12, which means she can babysit for the twins. We sent her to a day-long class on babysitting, where she learned some dispute resolution skills, some minor first aid, and the importance of calling an adult quickly if something goes south. We let her go on her maiden voyage New Year’s Eve. We went to a party about 10 minutes from the house. Worst case we could get home quickly. But no worries – everything went well. Our next outing was a quick dinner with some friends very close to the house. Again, no incidents at all. We were ready to make the next jump. That’s right, time for movie night! We have the typical discussions with XX1 about her job responsibilities. She is constantly negotiating for more pay (wonder where she got that?), but she is unbelievably responsible. We set a time when we want the twins in bed, and she sends us a text when they are in bed. The twins respect her authority when she’s in the babysitting mode, and she takes it seriously. It’s pretty impressive. Best of all, the twins get excited when XX1 is babysitting. Maybe it’s because they can watch bad TV all night. Or bang away on their iTouches. But more likely it’s because they feel safe and can hang out and have a good time with their siblings. For those of you (like me), who grew up in a constant state of battle with your siblings, it’s kind of novel. We usually have to set up an Aerobed over the weekend, so all three kids can pile into the same room for a sleepover. They enjoy spending time together. Go figure. 
Sure, it’s great to be able to go out and not worry about paying a babysitter some ungodly amount, which compounds the ungodly amount you need to pay to enjoy Hollywood’s finest nowadays. But it’s even better to know that our kids will only grow closer through the rest of their lives. As my brother says, “You can pick your friends, but you can’t pick your family!” I’m just glad my kids seem to be okay with the family they have. –Mike

Photo credits: Bad babysitter originally uploaded by PungoM

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Network-based Threat Intelligence:
- Following the Trail of Bits
- Understanding the Kill Chain

Understanding Identity Management for Cloud Services:
- Architecture and Design
- Integration

Newly Published Papers:
- Building an Early Warning System
- Implementing and Managing Patch and Configuration Management
- Defending Against Denial of Service Attacks
- Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
- Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

We are all next: I may have been a little harsh in my post on the Bit9 hack, Karma is a Bit9h, but the key point is that all security vendors need to consider themselves high-value targets. I wouldn’t be surprised if a lot more get compromised and (attempt to) cover it up. There isn’t any schadenfreude here – I derive no pleasure from someone being hacked, no matter how snarky I seem sometimes. I also assume that it is only a matter of time until I get hacked, so I try to avoid discussing these issues from a false position of superiority. Wendy Nather provides an excellent reminder that defense is damn hard, with too many variables for anyone to completely control.
In her words: “So if you’re one of the ones scolding a breach victim, you’re just displaying your own ignorance of the reality of security in front of those who know better. Think about that for a while, before you’re tempted to pile on.” Amen to that. – RM

Swing and a miss: Managing database accounts to deny attackers easy access is a hassle – as pointed out by Paul Roberts in his post on Building and Maintaining Database Access Control Permissions. But the ‘headaches’ are not just due to default packages and allowing public access – those issues are actually fairly easy to detect and fix before putting a database server into production. More serious are user permissions within enterprise applications, which have thousands of users assigned multiple roles. In these cases finding an over-subscribed user is like finding the proverbial “needle in a haystack”. Generic “service accounts” shared by multiple users make it much harder to detect misuse – and, if misuse is spotted, to figure out who the real perpetrator is. Perhaps the most difficult problem is segregation of database administrative duties, where common tasks should be split up at the expense of making administrators’ jobs more complex, annoying, and time-consuming. Admins are the ones who set these roles up, and they don’t want to make their daily work harder. Validating good security requires someone with access and knowhow. Database operations are more difficult than database setup, which is why monitoring and periodic assessments are necessary to ensure security. – AL

First things first: Wim Remes wrote an interesting post about getting value from a SIEM investment, Your network may not be what it SIEMs. Wim’s point is that you can get value from a SIEM, even if the deployment is horribly delayed and over budget (as so many are), but without a few key things in place at the start you would just be wasting your time. You need to know what’s important in your environment and


Cycling, Baseball, and Known Unknowns

This morning, not even thinking about security, I popped off a tweet about cycling. I have been annoyed lately, as I keep hearing people write off cycling while ignoring the fact that, despite all its flaws, cycling has a far more rigorous testing regimen than most other professional sports – especially American football and baseball (although baseball is taking some decent baby steps). Then I realized this does tie to security, especially in our current age of selective information sharing. The perception is that cycling has more cheating because more cheaters are caught. Even in Lance’s day, when you really did have to cheat to compete, there was more testing than in many of today’s pro sports. Anyone with half a brain knows that cheating via drugs is rampant in under-monitored sports, but we like to pretend those sports are cleaner because players aren’t getting caught and going on Oprah. That is willful blindness. We often face the same issue in security, especially in data security. We don’t share much of the information we need to make appropriate risk decisions. We frequently don’t monitor what we need to in order to really understand the scope of our problems. Sometimes it’s willful; sometimes it is simply cost and complexity. Sometimes it’s even zero-risk bias: we can’t use DLP because it would miss things, even though it would find more than we see today. But when it comes to information sharing, I think security, especially over the past year or so, has started to move much more in the direction of addressing the known unknowns. Actually, not just security, but the rest of the businesses and organizations we work for. This is definitely happening in certain verticals, and is trickling down from there. It’s even happening in government, in a big way, and we may see some of the structural changes necessary for us to move into serious information sharing (more on that later). Admitting the problem is the first step.
Collecting the data is the second, and implementing change is the third. For the first time in a long time I am hopeful that we are finally, seriously, headed down this long path.


Directly Asking the Security Data

We have long been fans of network forensics tools, which provide a deeper and more granular ability to analyze what’s happening on the network. But most of these tools are still beyond the reach (in terms of both resources and expertise) of the mass market at this point. Rocky D of Visible Risk tackles the question, “I’m collecting packets, so what now?” in his Getting Started with Network Forensics Tools post. With these tools we can now ask questions directly of the data, rather than being limited to pre-defined questions based on inferences from subsets of the data. The blinders are off. To us, the tools themselves aren’t the value proposition – the data itself, and the innovation in analytical techniques, are the real benefit to the organization. It always gets back to the security data. Any filtered and/or normalized view of the data (or metadata, as the case may be) is inherently limited, because it’s hard to go back and ask the questions you didn’t know to ask at the beginning of the investigation or query. When investigating a security issue you often don’t know what to ask ahead of time. But that pretty much breaks the model of SIEM (and most security, by the way), because you need to define the patterns you are looking for. Of course we know attackers are unpredictable by nature, so it is getting harder and harder to isolate attacks based on what we know attacks look like. When used properly, network forensics tools can fundamentally change your security organization from the broken alert-driven model into a more effective data-driven analytic model. It’s hard not to agree with this position, but the details remain squishy. Conceptually we buy this analytics-centric view of the world, where you pump a bunch of security data through a magic machine that finds patterns you didn’t know were there – the challenge is to interpret what those patterns really mean in the context of your problem.
And that’s not something that will be automated any time soon, if ever. But unless you have the data the whole discussion is moot anyway. So start collecting packets now, and figure out what to do with them later.
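To put a little meat on “ask questions directly of the data”, here is a toy sketch in Python, with made-up flow records standing in for captured packets: store everything, then run arbitrary after-the-fact predicates, including questions nobody thought to alert on when the data was collected.

```python
# Toy model of a network forensics store: keep the raw records, answer
# arbitrary questions later. Records are (timestamp, src, dst, dst_port,
# byte_count); the sample data is invented for illustration.

FLOWS = [
    (1360700000, "10.0.0.5", "203.0.113.9", 443, 90210),
    (1360700050, "10.0.0.7", "10.0.0.8", 445, 1200),
    (1360700110, "10.0.0.5", "198.51.100.2", 6667, 300),
]

def query(flows, predicate):
    """Run an arbitrary, after-the-fact predicate over everything collected."""
    return [f for f in flows if predicate(f)]

# A question nobody wrote a SIEM rule for in advance:
# which internal hosts talked to IRC (port 6667)?
irc_talkers = {src for _, src, _, port, _ in query(FLOWS, lambda f: f[3] == 6667)}

# Another question, invented mid-investigation: any suspiciously large transfers?
big_transfers = query(FLOWS, lambda f: f[4] > 10_000)
```

A real forensics platform does this against full packet capture at scale, but the model is the same: the query comes after the data, not before it.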


RSA Conference Guide 2013: Cloud Security

2012 was a tremendous year for cloud computing and cloud security, and we don’t expect any slowdown in 2013. The best part is watching the discussion slowly march past the hype and into the operational realities of securing the cloud. It is still early days, but things are moving along steadily as adoption rates continue to chug along. On the downside, this steady movement is a total buzzkill when it comes to our tendency toward pithy deconstruction. Much of what you see on the show floor (and in all marketing materials for the next couple quarters) represents mere incremental advancement of the trends we identified last year. Cloudwashing is alive and well, the New Kids on the Cloud Security Block are still chugging along patiently waiting for the market to pop (though their investors may not be so patient), data security is still a problem for cloud computing, and ops is handling more security than you realize. What is old is new again. Again.

SECaaS: Good for More Than Cheap Laughs

We realize we sometimes push the edge of acceptable language during our presentations and blog posts, but nothing seems to garner a laugh better this year than saying ‘SECaaS’. The thing is, Security as a Service is maturing faster than security for cloud services, with some very interesting offerings hitting the market. Some security operations, including inbound email security, web filtering, and WAF, demonstrate clear advantages when implemented outside your perimeter and managed by someone else. You can provide better protection for mobile users and applications, reduce overhead, and keep the easily identified crud from ever hitting your network by embracing SECaaS. One of the most interesting aspects of SECaaS (we know, so juvenile!) is the far-reaching collection of security data across different organizations, and the ability to feed it into Big Data Analytics.
Now that we’ve attained our goal of writing Big Data Analytics at least a few times each day, this isn’t all smoke and mirrors – especially for threat intelligence. Pretty much every anti-malware tool worth a darn today relies on cloud-based information sharing and analysis of some sort, along with most of the monitoring and blocking tools with cloud components. We will also touch on this tomorrow for endpoint security. We all know the limitations of sitting around and only getting to see what’s on your own network, but cloud providers can pull data from their entire customer base, so they get a chance to recognize the important bits and react faster. Admittedly, a few neighbors need to get shot before you can figure out who pulled the trigger and what the bullet looked like, but as long as it’s not you, the herd benefits, right? Other areas, such as network monitoring (including forensics), configuration management, and key management, all demonstrate creative uses for the cloud. The trick when looking at SECaaS providers is to focus on a few key characteristics to see if they are really cloud-based, and if they provide benefits over more traditional options. The first acid test is whether they are truly architected for multi-tenancy and security. Throwing some virtual appliances into a few colocation data centers and billing the service monthly isn’t quite good enough to make our personal SECaaS list. Also make sure you understand how they leverage the cloud to benefit you, the customer. Some things don’t make sense to move to the cloud – for example certain aspects of DLP work in the cloud but many others don’t. Will moving a particular function to the cloud make your life easier without reducing security? Skip the marketing folks and sales droids (wearing suits) and find the most anti-social-looking guy or girl you can in a scruffy logo shirt. That’s usually a developer or engineer – ask them what the service does and how it works. 
SecDevOps or SecByeBye

DevOps refers to the operational model of increasing communication and agility between operations and development, to improve overall responsiveness and technology velocity. It relies heavily on cloud computing, agile/iterative development processes, automation, and team structures to reduce the friction normally associated with creating, managing, and updating software applications (internal or external). DevOps is growing quickly, especially in organizations leveraging cloud computing. It is the reason, for example, that many self-service private clouds start as tools for developers. DevOps is more than just another overhyped management trend. Cloud computing, especially IaaS and PaaS with APIs to manage infrastructure, draws DevOps like a moth to a flame. Developers no longer need to ask IT ops to provision a server for a new project, which is irresistible to many of them. If it reduces developer and operations overhead, what’s not to love? Oh, right. Security. Security has a reputation for slowing things down, and while at times that is the right approach, it is often the wrong one. For example, it just doesn’t work well if security has to manually update the firewall for every cloud instance a dev spins up for external testing. Fortunately DevOps also brings some security advantages, such as extensive use of automated configuration scripts, and pre-set platforms and applications that can start from a secure state. But what does this all have to do with the RSA Conference? If you are evaluating cloud security, keep an eye out for security options that tie into agile DevOps approaches. These products will typically consume, and even deliver, APIs for automation and scripting. They rely on security policies more than manual operations. Frequently they tie directly into the leading cloud platforms, such as your private cloud or something up on Amazon, Rackspace, Microsoft Azure, or HP.
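For a sense of what “security policies more than manual operations” looks like, here is a minimal sketch (the policy table, tag names, and instance format are all invented for illustration) of deriving firewall rules from an instance’s role tag at provisioning time, instead of manually updating the firewall for every instance a dev spins up:

```python
# Sketch: compute inbound firewall rules from an instance's role tag at
# provisioning time. POLICY, the tag names, and the instance dict shape
# are hypothetical; a real setup would push these rules through the cloud
# provider's security group API.

POLICY = {
    "web": [("tcp", 80, "0.0.0.0/0"), ("tcp", 443, "0.0.0.0/0")],
    "db": [("tcp", 5432, "10.0.0.0/8")],  # reachable only from inside
}

def rules_for_instance(instance):
    """Look up the inbound rules a new instance should get from its role tag."""
    role = instance.get("tags", {}).get("role")
    if role not in POLICY:
        return []  # fail closed: unknown roles get no inbound access
    return POLICY[role]
```

The point is the workflow: the rule set is computed from policy when the instance appears, via API, with no ticket in the loop, and anything that doesn’t match a known role fails closed.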
When looking at security tools for cloud computing, definitely talk DevOps with reps on the show floor to see whether the tool is as agile as what it’s protecting. Otherwise it’s deader than a red shirt on Walking Dead. (We like to mix analogies.) And don’t forget to register for the Disaster Recovery Breakfast if you’ll be at the show on Thursday morning. Where else can you kick your hangover, start a new one, and talk shop with good folks in a hype-free zone? Nowhere, so make sure you join us…


Macworld: The Everyday Agony of Passwords

My very first Macworld op-ed: It’s hard to imagine an idea more inane than passwords. That we protect many of the most important aspects of our lives with little more than a short string of text is an extreme absurdity. These collections of–admit it–eight characters are the gateways to everything from our bank accounts and medical records to our family photos to the most sensitive thoughts we’ve ever let slip via keyboard. To say merely that I loathe passwords would be to lump them with the myriad other things in this world deserving of a good loathing–whereas passwords deserve their own very special throne of infamy. And the worst part of it all? There isn’t a single, viable alternative. This piece is oriented toward consumers, but the enterprise issues are extremely similar. I really don’t see any alternatives that work at scale, especially because most employees are ‘consumers’ (what a crappy word). CAC cards for gov/DoD are the closest exception I can find, and that’s a pretty specific audience (admittedly a large one).
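For anyone who wants the arithmetic behind the loathing, here is a quick back-of-the-envelope sketch. It makes no assumptions about cracking speed; it just shows how small an eight-character keyspace is, and how much smaller it gets with a lazier character set.

```python
# Keyspace of an n-character password drawn uniformly from an alphabet,
# expressed in bits. No cracking-rate assumptions are baked in.
import math

def keyspace_bits(alphabet_size, length):
    """Entropy in bits of a uniformly random password."""
    return length * math.log2(alphabet_size)

mixed_case_digits = keyspace_bits(62, 8)  # a-z, A-Z, 0-9: ~47.6 bits
lowercase_only = keyspace_bits(26, 8)     # ~37.6 bits
```

Eight truly random mixed-case alphanumeric characters buy you less than 48 bits, and real-world passwords are nowhere near random, which is rather the op-ed’s point.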


RSA Conference Guide 2013: Identity and Access Management

Usually at security events like the RSA Conference there isn’t much buzz about Identity and Access Management. Actually, identity is rarely thought of as a security technology; instead it is largely lumped in with general IT operational stuff. But 2013 feels different. Over the past year our not-so-friendly hacktivists (Anonymous) embarrassed dozens of companies by exposing private data, including account details and password information. Aside from this much more visible threat and consequence, the drive toward mobility and cloud computing/SaaS at best disrupts, and at worst totally breaks, traditional identity management concepts. These larger trends have forced companies to re-examine their IAM strategies. At the same time we see new technologies emerging, promising to turn IAM on its ear. We will see several new (start-up) IAM vendors at this year’s show, offering solutions to these issues. We consider this a very positive development – the big lumbering companies that have largely dominated IAM over the past 5 years haven’t kept pace with these technical innovations.

IDaaS = IAM 2.0

The most interesting of the shiny new objects you will see at RSAC is identity-as-a-service (IDaaS), which extends traditional in-house identity services to external cloud providers and mobile devices. These platforms propagate and/or federate identity outside your company, providing the glue to seamlessly link your internal authoritative source with different cloud providers – which generally offer a proprietary way to manage identity within their environments. Several vendors offer provisioning capabilities as well, linking internal authorization sources such as HR systems with cloud applications, and helping map permissions across multiple external applications. It may look like we are bolting a new set of capabilities onto our old directory services, but it is actually the other way around. IDaaS really is IAM 2.0.
It’s what IAM should have looked like if it had originally been architected for open networks, rather than the client-server model hidden behind a network firewall. But be warned: the name-brand directory services and authorization management vendors you are familiar with will be telling the same story as the new upstart IDaaS players. You know how this works. If you can’t innovate at the same pace, write a data sheet saying you do. It’s another kind of “cloud washing” – we could call it Identity Washing. Both camps talk about top threats to identity, directory integration, SSO, strong authentication, and the mobile identity problem. But they offer very different visions and technologies, and each actually solves distinctly different problems. Where they overlap, it is because the traditional vendor is reselling or repackaging someone else’s IDaaS under the covers. Don’t be fooled by the posturing. Despite sales droid protestations about simple and easy integration between the old world and this new stuff, there is a great deal of complexity hiding behind the scenes. You need a strong understanding of how federation, single sign-on, provisioning, and application integration are implemented to know whether these products can work for you. The real story is how IDaaS vendors leverage standards such as SAML, OAuth, XACML, and SCIM to extend capabilities outside the enterprise, so that is what you should focus on. Unfortunately managing your internal LDAP servers will continue to suck, but IDaaS is likely the easier of the two to integrate and manage with this new generation of cloud and mobile infrastructure. Extending what you have to the cloud is likely easier than managing what you have in house today.

Death to Passwords

Another new theme at RSAC will be how passwords have failed us and what we should do about it. Mat Honan said we should Kill The Password.
Our own Gunnar Peterson says Infosec Slowly Puts Down Its Password Crystal Meth Pipe. And I’m sure Sony and Gawker are thinking the same thing. But what does this mean, exactly? Over time it means we will pass cryptographic tokens around to assert identity. In practice you will still have a password to (at least partially) authenticate yourself to a PC or other device you use. But once you have authenticated to your device, behind the scenes an identity service will generate tokens on your behalf when you want access to something. Passwords will not be passed, shared, or stored, except within a local system. Cryptographic tokens will supplant passwords, and will transparently be sent on your behalf to the applications you use. Instead of trusting a password entered by you (or, perhaps, not by you), applications will establish trust with the identity providers which generate your tokens, and then verify each token’s authenticity as needed. These tokens, based on some type of standard technology (SAML, Kerberos, or OAuth, perhaps), will include enough information to validate the user’s identity and assert the user’s right to access specific resources. Better still, tokens will only be valid for a limited time. That way, even if a hacker steals and cracks a password file from an application or service provider, its data will be stale and useless before it can be deciphered. The “Death to Passwords” movement represents a seismic shift in the way we handle identity, and seriously impacts organizations extending identity services to customers. There will be competing solutions offered at the RSA show to deal with password breaches – most notably RSA’s own password splitting capability, which is a better way to store passwords rather than a radical replacement for the existing system. Regardless, the clock is ticking.
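To make the token idea concrete, here is a minimal sketch of a time-limited, signed assertion using an HMAC. To be clear: real deployments use SAML, Kerberos, or OAuth as noted above; the shared key, claim names, and wire format here are invented purely for illustration.

```python
# Sketch of a time-limited, signed identity assertion. Real systems use
# SAML, Kerberos, or OAuth; this hand-rolled format is for illustration only.
import hashlib
import hmac
import json
import time

SECRET = b"shared-with-identity-provider"  # hypothetical provider key

def issue_token(user, ttl_seconds, now=None):
    """Identity provider side: sign a claim that expires after ttl_seconds."""
    now = time.time() if now is None else now
    payload = json.dumps({"user": user, "exp": now + ttl_seconds}, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload, sig

def verify_token(payload, sig, now=None):
    """Application side: check the signature and the expiry; no password involved."""
    now = time.time() if now is None else now
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # forged or tampered
    claims = json.loads(payload)
    if claims["exp"] < now:
        return None  # stale token: useless even if stolen
    return claims["user"]
```

Note what the application never sees: a password. It only checks that the identity provider’s signature is valid and the expiry hasn’t passed, which is why a stolen token goes stale in a way a stolen password file never does.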
Passwords’ deficiencies and limitations have been thoroughly exposed, and there will be many discussions on the show floor as attendees try to figure out the best way to handle authentication moving forward. And don’t forget to register for the Disaster Recovery Breakfast if you’ll be at the show on Thursday morning. Where else can you kick your hangover, start a new one, and talk shop with good folks in a hype-free zone? Nowhere, so make sure you join us…


Saving Them from Themselves

The early stages of the Internet felt a bit like the free love era, in that people could pretty much do what they wanted, even if it was bad for them. I remember having many conversations with telecom carriers about the issues of consumers doing stupid things, getting their devices pwned, and then wreaking havoc on other consumers on the same network. For years the carriers stuck their heads in the sand, basically offering endpoint protection suites for free and throwing bandwidth at the problem. But that seems to be changing. I know a few large-scale ISPs who put compromised devices in the penalty box, preventing them from doing much of anything until the device is fixed. This is an expensive proposition for an ISP. You, like me, probably end up doing a decent amount of tech support for less sophisticated family members, and you know how miserable it is to actually remediate a pwned machine. But as operating systems have gotten much better at protecting themselves, attackers increasingly target applications. And that means attacking browsers (and other high-profile apps such as Adobe Reader and Java) where they are weakest: the plug-in architecture. So kudos to Mozilla, which has started blocking plug-ins by default. It will now be up to the user to enable plug-ins such as Java, Adobe, and Silverlight, according to Mozilla director of security assurance Michael Coates, who announced the new functionality yesterday in a blog post. Mozilla’s Click to Play feature will be the tool for that: “Previously Firefox would automatically load any plugin requested by a website. Leveraging Click to Play, Firefox will only load plugins when a user takes the action of clicking to make a particular plugin play, or the user has previously configured Click To Play to always run plugins on the particular website,” he wrote.
Of course users will still be able to get around it (like the new Gatekeeper feature in Mac OS X), but they will need to make a specific decision to activate the plug-in. It’s a kind of default-deny approach to plug-ins, which is a start. More importantly, it’s an indication that application software makers are willing to adversely affect the user experience to reduce attack surface. Which is good news from where I sit.


LinkedIn Endorsements Are Social Engineering

Today I popped off a quick tweet after yet another email from LinkedIn: Please please please… … stop endorsing me. Seriously. I barely use LinkedIn. For me it is little more than a contact manager, and it lost most of its other value long ago. Perhaps that’s my own bias, but there it is. As for endorsements… this is LinkedIn deliberately social engineering us. Reciprocity is one of the most common human behaviors used for social engineering, because it is one of the most fundamental behaviors in building a social society. From Wikipedia: With reciprocity, a small favor can produce a sense of obligation to a larger return favor. This feeling of obligation allows an action to be reciprocated with another action. Because there is a sense of future obligation with reciprocity it can help to develop and continue relationships with people. Reciprocity works because from a young age people are taught to return favors and to disregard this teaching will lead to the social stigma of being an ingrate. This is used very frequently in various scams. LinkedIn uses endorsements and reciprocity to draw people into logging into the service. You feel you need to return the endorsement, you log in, endorse, and then maybe endorse someone else, spreading it like Chinese malware. If LinkedIn weren’t so obnoxious about the notification emails I wouldn’t consider this such a blatant attempt at manipulation. But the constant nags are crafted to elicit a specific return behavior. In other words, clear-cut social engineering.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.