The First Phishing Email I Almost Fell For

Like many of you, I get a ton of spam and phishing email to my various accounts. Since my email address is very public, I get a little more than most people. It's bad enough that I use three layers of spam/virus filtering – one cloud based filter (Postini, which will probably change soon), one on-premise UTM (Astaro), and SpamSieve on my Mac – and still have some messages slip through. If something gets past all of that, I take some additional precautions on my desktop to (hopefully) help against targeted malware. Despite all that, I assume that someday I'll be compromised, and it will probably be ugly.

This morning I got the first phishing email in a very long time that almost tricked me into clicking. It came from "Administrator" at one of my hosts and read:

Attention! On October 22, 2009 server upgrade will take place. Due to this the system may be offline for approximately half an hour. The changes will concern security, reliability and performance of mail service and the system as a whole. For compatibility of your browsers and mail clients with upgraded server software you should run SSl certificates update procedure. This procedure is quite simple. All you have to do is just to click the link provided, to save the patch file and then to run it from your computer location. That's all. http://updates.[cut for safety] Thank you in advance for your attention to this matter and sorry for possible inconveniences. System Administrator

Three things tipped me off. First, that system is a private one administered by a friend. While he does send updates like this out, he always signs them with his name. Second, the URL clearly doesn't really point to that domain (but you have to read the entire thing to notice). And finally, it leads to an Active Server Pages (.asp) URL, which that administrator never uses since our system is *nix based. But it was early in the morning, I hadn't had coffee yet, and we often need to update our SSL certificates after a system upgrade on this server, so I still almost clicked on it.

According to Twitter this is a Zbot generated message:

SecBarbie: RT @mikkohypponen ZBot malware being spammed out right now in emails starting "On October 22, 2009 server upgrade will take place" Ignore it.

Thanks Erin!

It's interesting that despite multiple obvious markers that this was malicious, and despite my being very attuned to these sorts of things, I still almost clicked on it. It just goes to show how easy it is to screw up and make a mistake, even when you're a paranoid freak who really shouldn't be let out of the house.
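As an aside, the "read the entire URL" check that saved me is easy to automate. Here is a minimal sketch in Python – the domains are hypothetical stand-ins, and real phishing detection obviously needs far more than this one test:

```python
from urllib.parse import urlparse

def suspicious_link(url: str, expected_domain: str) -> bool:
    """Flag a link whose host is neither the expected domain nor a subdomain of it."""
    host = urlparse(url).hostname or ""
    return not (host == expected_domain or host.endswith("." + expected_domain))

# Hypothetical examples: a lookalike host fails, the real host passes.
print(suspicious_link("http://updates.example-mail.com.evil.ru/patch.asp", "example-mail.com"))  # True
print(suspicious_link("https://mail.example-mail.com/webmail", "example-mail.com"))              # False
```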


Friday Summary – October 16, 2009

All last week I was out of the office on vacation down in Puerto Vallarta. It was a trip my wife and I won in a raffle at the Phoenix Zoo, which was pretty darn cool. I managed to unplug far more than I can usually get away with these days. I had to bring the laptop due to an ongoing client project, but nothing hit and I never had to open it up. I did keep up with email, and that's where things got interesting.

Before heading down I added the international plan to my iPhone for about $7, which would bring my per-minute costs in Mexico down from $1 per minute to around $0.69 a minute. Since we talked less than 21 minutes total on the phone down there, we lose. For data, I signed up for the 20 MB plan at a wonderfully fair $25. You don't want to know what a 50 MB plan costs. Since I've done these sorts of things before (like the Moscow trip where I could never bring myself to look at the bill), I made sure I reset the usage counter on the iPhone so I could carefully track how much I used.

The numbers were pretty interesting – checking my email ran from about 500 KB to 1 MB per check. I have a bunch of email accounts, and might have cut that down if I had disabled all but my primary accounts. I tried to check email only about 2-3 times a day, responding only to the critical messages (1-4 a day). That ate through the bandwidth so quickly I couldn't even conceive of checking the news, using Maps, or taking nearly any other online action. In 4 days I ran through about 14 MB, leaving me a bit of headroom on the last day to occupy myself at the airport.

To put things in perspective, a satellite phone (which you can rent for trips – you don't have to buy) is only $1 per minute, although the data is severely restricted (on Iridium, unless you go for a pricey BGAN terminal). Since I was paying $3/minute on my Russia trip, next time I go out there I'll be renting the sat phone. So for those of you who travel internationally and want to stay in touch... good luck.

-rich

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Adrian's Dark Reading post on Getting Around Vertical Database Security.

Favorite Securosis Posts
• Rich, Mort, and Adrian: Which Bits Are the Right Bits? We all independently picked this one, which either means it's really good, or everything else we did this week sucked.
• Meier: It Isn't Risk Management If You Can't Lose.

Other Securosis Posts
• Where Art Thou, Security Logging?
• IDM: Reality Sets In
• Barracuda Networks Acquires Purewire
• Microsoft Security Updates for October 2009
• Personal Information Dump

Favorite Outside Posts
• Rich: Michael Howard's post on the SMBv2 bug and the Microsoft SDL. This kind of analysis is invaluable.
• Adrian: Well, the entire Protect the Data series, really.
• Mortman: Think, over at the New School blog.
• Meier: Security Intelligence: Attacking the Kill Chain. (Part three of a series on security principles in network defense.)

Top News and Posts
• Mozilla Launches Plugin Checker. This is great, but needs to be automatic for Flash/QuickTime.
• Adobe recommends turning off JavaScript in Acrobat/Reader due to a major vulnerability. I recommend uninstalling Acrobat/Reader, since it's probably the biggest single source of cross-platform 0-days.
• Air New Zealand describes the reason for its outage. Not directly security, but a good lesson anyway. I've lost content in the past due to these kinds of assumptions.
• Google to send information about hacked web sites to owners.
• Details of Wal-Mart's major security breach.
• Greg Young on enterprise UTM and Unicorns.
• Microsoft fixes Windows 7 (and other) bugs.
• Delta being sued over email hack.
• 29 bugs fixed by Adobe.
• California county hoarding data.
• Mozilla Plugin Check.
• Paychoice data breach.

Blog Comment of the Week

This week's best comment comes from Rob, in response to Which Bits Are the Right Bits?:

Perhaps it is not well understood that audit logs are generally not immutable. There may also be low awareness of the value of immutable logs: 1) to protect against anti-forensics tools; 2) in proving compliance due diligence; and 3) in providing a deterrent against insider threats.


It Isn’t Risk Management If You Can’t Lose

I was reviewing the recent Health and Human Services guidance on medical data breach notifications, and it's clear that HHS either was bought off or doesn't understand the fundamentals of risk assessment. Having a little bit of inside experience with HHS, my vote is for willful ignorance. Basically, HHS provides some good security guidance, then totally guts it. Here's a bit from the source article with the background:

The American Recovery and Reinvestment Act of 2009 (ARRA) required HHS to issue a rule on breach notification. In its interim final rule, HHS established a harm standard: a breach does not occur unless the access, use, or disclosure poses "a significant risk of financial, reputational, or other harm to the individual." In the event of a breach, HHS' rule requires covered entities to perform a risk assessment to determine if the harm standard is met. If they decide that the risk of harm to the individual is not significant, the covered entities never have to tell their patients that their sensitive health information was breached.

You have to love a situation where the entity performing the risk assessment on behalf of a different party (the patients) is always negatively impacted by disclosure, and never impacted by secrecy. In other words, the group that would be harmed by protecting you gets to decide your risk. Yeah, that will work.

This is like the credit rating agencies, many aspects of fraud and financial services, and more than a few breach notification laws. The entities involved face different sources of potential losses, but the entity performing the assessment has an inherent bias to mis-assess (usually by under-assessing) the risk faced by the target. Now, if everyone involved were altruistic and unbiased, this would all work like a charm. Hell, even Star Trek doesn't portray human behavior as that perfect.


IDM: Reality Sets In

IDM fascinates me, if only because it is such an important base for a good security program. Despite this, many organizations (even ones with cutting edge technology) haven't really focused on solving the issues around managing users' identity. This is, no doubt, partly because IDM is hard in the real world. Businesses can have hundreds if not thousands of applications (GM purportedly had over 15,000 apps at one point), and each application itself can have hundreds or thousands of roles within it. Combine this with multiple methods of authentication and authorization, and you have a major problem on your hands, which makes digging into the morass challenging to say the least. I also suspect IDM gets ignored because it doesn't involve playing with fun toys, so it doesn't get appropriate attention from the technophiles.

Don't get me wrong – there are some great technologies out there to help solve the problem, but no matter what tools you have at your disposal, IDM is fundamentally not a technology problem but a process issue. I cannot possibly emphasize this enough. In the industry we love to say that security is about People, Process, and Technology. Well, IDM is pretty much all about Process, with People and Technology supporting it. Process is an area many security folks have trouble with, perhaps due to lack of experience. This is why I generally recommend that security be part of designing the IDM processes, policies, and procedures – but that the actual day to day work be handled by the IT operations teams, who have the experience and discipline to make it work properly.

DS had a great comment on my last post, which is well worth reading in its entirety, but today there is one part I'd like to highlight, because it nicely shows the general process that should be followed regardless of organization size:

While certainly not exhaustive, the above simple facts can help build a closed loop process. When someone changes roles, IT gets notified. How?
1. A request is placed by a manager or employee to gain access to a system.
2. If it is an employee request, the manager must(?) approve.
3. If approved as "in job scope" by the manager, the system owner approves.
4. IT (or the system owner, in the decentralized case) provisions the necessary access.
5. The requester is notified.

Five steps, not terribly complicated and easy to do – essentially what happens when someone gets hired. For termination, all you really need are steps 1, 2, and 5, but in reverse. This process can even work in large decentralized organizations, provided you can figure out (a) the notification/request process for access changes and (b) a workflow process for driving through the above cycle.

(a) is where the InfoSec team has to get outside the IT department and talk to the business. This is huge. I've talked in the past about the need for IT to understand the business, and IDM is a great example of why. This isn't directly about business goals or profit/loss margins, but rather about understanding how the business operates on a day to day basis. Don't assume that IT knows what applications are being used – in many organizations IT only provides the servers, and sometimes only the servers for the basic infrastructure. So sit down with the various business units and find out what applications/services are being used, what process they are using today to provision users, who is handling that process, and what changes (if any) they'd like to see.
This is an opportunity to figure out which applications/services need to be part of your IDM initiative (driven by compliance, audit, corporate mandate, etc.) and which ones currently aren't relevant. It has the added benefit of discovering where data is flowing, which is key not only to compliance mandates under HIPAA, SOX, and the European Data Directive (to name a few), but also incredibly handy when electronic discovery is necessary. Once all this data has been gathered, you can evaluate the various technologies available and see if they can help. This could be anything from a web app to manage change requests, to workflow software (see below), to a full-scale automated access provisioning and de-provisioning system driven by the approval process.

Once you've solved (a), (b) is comparatively straightforward, and another place where technology can make life easier. The best part is that your organization likely has something like this deployed for other reasons, so the additional costs should be relatively low. Once your company/department/university/etc. grows to a decent size and/or starts to decentralize, manually following the process becomes more and more cumbersome, especially as the number of supported applications goes up. A high rate of job role changes within the organization has a similar effect. So some sort of software that automatically notifies employees when they have tasks will greatly streamline the process and help people get the access they need much more quickly. Workflow software is also a great source of performance metrics, and can help provide the necessary logs when dealing with audit or compliance issues.

As I mentioned above, the business reality for many organizations is far from pristine or clear, so in my next post I'll explore those issues in more depth. For now, suffice it to say that until you address them, the above process will work best for a small company with fewer apps and authentication methods. If you are involved in a larger, more complex organization, all is not lost. In that case, I highly recommend that you not try to fix everything at once, but start with one group or sub-group within the organization and roll out there first. Once you've worked out the kinks, you can roll in more groups over time.
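To make the five-step loop described above concrete, here is a minimal sketch of the request/approval flow in Python. Every name in it is hypothetical – in practice the approval and notification steps would hook into your directory, ticketing, or workflow systems rather than function calls and print statements:

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    requester: str          # step 1: employee or manager places the request
    system: str
    approvals: list = field(default_factory=list)
    provisioned: bool = False

def notify(person: str, message: str) -> None:
    # Stand-in for whatever email/ticketing notification you actually use.
    print(f"[notify {person}] {message}")

def process(req: AccessRequest, manager: str, system_owner: str) -> None:
    req.approvals.append(manager)       # step 2: manager approves the request
    req.approvals.append(system_owner)  # step 3: system owner confirms "in job scope"
    req.provisioned = True              # step 4: IT (or system owner) provisions access
    notify(req.requester,               # step 5: requester is notified
           f"access to {req.system} granted, approved by {', '.join(req.approvals)}")

process(AccessRequest("akurtz", "payroll-app"), manager="dmort", system_owner="itops")
```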


Where Art Thou, Security Logging?

Today you'd be hard pressed to find a decent sized network that doesn't have some implementation of Security Event Management (SEM). It's just a fact of modern regulation that a centralized system to collect all that logolicious information makes sense (and may be mandatory). Part of the challenge in architecting and managing these systems is securely collecting the information and subsequently verifying its authenticity.

Almost every network-aware product you might buy today has a logging capability, generally based on syslog – RFC 3164. Unfortunately, as defined, syslog doesn't provide much security. In fact, if you need a good laugh I suggest reading section 6 of the RFC. You'll know you're in the right place when you start to digest information about odors, moths, and spiders. It becomes apparent very quickly, when reading subparagraphs 6.1 through 6.10, that the considerations outlined are there mostly to tip you off that the authors already knew syslog provides minimal security – so don't complain to them.

At this point most sane people question using such a protocol at all, because surely there must be something better, right? Yes and no. First let me clarify: I didn't set out to create an exhaustive comparison of [enter your favorite alternative to syslog here] for this writeup. Sure, RFC 5424 obsoletes the originally discussed RFC 3164, and yes, RFC 5425 addresses using TLS as a transport to secure syslog. Or maybe it would be better to configure BEEP on your routers. And let's not forget the many proprietary and open source agents you can install on your servers and workstations. I freely admit there is some great reading to be done on event logging technology. The point, though, is that because there is considerable immaturity and many options to choose from, most environments fall back to the path of least resistance: good ol' syslog over UDP.

Unfortunately, I've never been asked by a client how to do logging right. As long as events are streaming to the SEM and showing up on the glass in the NOC/SOC, it's not a question that comes up. It may not even be a big deal right now, but I'd be willing to bet you'll see more on the topic as audits become more exacting. Shouldn't the integrity of that data rest on something a little more robust than the unreliable, unauthenticated, repudiable, and completely insecure protocol you probably have in production? You don't have to thank me later, but I'd start thinking about it now.
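To illustrate just how low that bar is, here is all it takes to emit classic RFC 3164-style syslog over UDP with Python's standard library. The collector address is a hypothetical stand-in. Nothing here is authenticated, encrypted, or even guaranteed to arrive:

```python
import logging
import logging.handlers

# Plain syslog over UDP/514 -- the path of least resistance discussed above.
# Anyone on the network path can read, drop, replay, or forge these messages.
handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
logger = logging.getLogger("webapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.warning("failed login for user admin from 10.0.0.5")
```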


Which Bits Are the Right Bits?

(The following post covers some rather esoteric bits of security philosophy, or what Rich has affectionately called "Security Jazz" in the past. Unless you are into obscure data-centric security minutiae, you will probably not be interested.)

Richard Bejtlich tweeted and posted on data integrity:

The trustworthiness of a digital asset is limited by the owner's capability to detect incidents compromising the integrity of that asset.

This statement is absolutely correct, and a really important point that is often overlooked. The problem is that most technologies which produce digital assets do not build in tamper detection, giving owners no way to detect integrity violations. And far too often people confuse an interested party with the owner of a digital asset, as there can be many copies, each in the possession of a different person or group. It's not that we can't provide validation, because technology exists to provide assurance and authenticity.

Let's look at an example: who owns syslog data? Is it the IT administrator? The security professional? An auditor? In my opinion, none of them do. The OS owns the syslog, as it created the content. Much as you may think you own 'your' credit card number, you don't – it is something the issuing bank created and owns. They are the custodians of that number, and change it when they choose to. syslog has no way to verify the contents of the log it creates over time. We take it on faith that it is unlikely a log file was corrupted or altered. If we need to verify integrity in the future, too bad: if you did not build in safeguards and a method for validating integrity when you created the data, it's too late. The trustworthiness of the digital asset is limited by the owner's capability to detect a compromise, and for many digital assets, like syslog, that capability is nil.

For most digital assets, it is sufficient that we use them every day, as this provides enough confidence in their integrity. Encryption keys are a useful example. If the keys are corrupted, especially in a public-key situation, either the encryption or decryption operations fail. We may keep a backup somewhere safe to compare our working copy against, and while that can be effective in the most common problem situations, it's only relevant for certain (common) use cases. Digital assets also have a challenge physical objects lack: generations. Even if we can verify a particular iteration of a digital object, there can be infinite copies, so we need to be able to verify that the most current iteration is in use. For digital assets like encryption keys, account numbers, access tokens, and digital representations of self, the owner has a strong vested interest in not sharing the asset, keeping it safe, and possibly even keeping redundant copies against future emergencies or for verification.

There are several technologies that can prove integrity; they are just not used much. I posted a comment on Richard's blog to this effect:

The trustworthiness of a digital asset is limited more by the trustworthiness of the owner than by tamper detection. An owner who desires privacy and data integrity has the means to protect digital assets.

Richard's premise is an important one, as we very seldom build in safeguards to validate ownership, state, authenticity, or integrity. Non-repudiation tools and digital escrow services are nearly non-existent. There simply is not enough motivation to implement the tools we have which can provide assurance.
Gunnar Peterson blogged on this subject earlier this week as well, taking a slightly more applied look at the problem. His statement that these issues are outside the purview of DLP is absolutely correct. DLP is an outside-in model. This discussion has more to do with Digital Rights Management, which is an inside-out model. The owner must attest to integrity, and while a third party proxy such as a DLP service could be entrusted with object escrow and integrity certification, it would require an alteration of DLP's "discover and protect" model. DRM is designed to be part of the application that creates the digital object, and while it is not often discussed, digital object ownership is part of that responsibility. Attestation of ownership is not possible without some form of integrity and state checking. I have seen select DRM systems that were interested in high integrity, but none were commercially viable. Which answers Gunnar's question:

Our ability using today's technologies to deliver vastly improved audit logging is, I believe, a worthwhile and achievable goal. But it's fair to ask – why hasn't it happened yet?

There has been no financial incentive to do so. We have had excellent immutable log technologies for years, but they are only used in limited cases. Web application audit trails are an interesting application of this technology, and easy to build, but there is no compelling business problem motivating people to spend money retrofitting what they have. I would like to see this type of feature built into financial transactions for consumer protection, where we really need to protect consumers from shoddy corporate record-keeping and failed banking institutions.
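As a taste of how little code a tamper-evident log actually requires, here is a minimal sketch of one such technique: an HMAC chain, where each entry's tag covers both the entry and the previous tag, so altering, deleting, or reordering any record invalidates everything after it. The key and log entries are hypothetical, and keeping the key away from an attacker is the genuinely hard part this sketch waves away:

```python
import hmac
import hashlib

KEY = b"hypothetical-key-kept-off-the-log-host"  # key management is the hard part

def append(chain: list, message: str) -> None:
    # Each tag binds this entry to the previous tag, forming a chain.
    prev_tag = chain[-1][1] if chain else b"\x00" * 32
    tag = hmac.new(KEY, prev_tag + message.encode(), hashlib.sha256).digest()
    chain.append((message, tag))

def verify(chain: list) -> bool:
    prev_tag = b"\x00" * 32
    for message, tag in chain:
        expected = hmac.new(KEY, prev_tag + message.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        prev_tag = tag
    return True

log: list = []
append(log, "sshd: accepted login for rich")
append(log, "sudo: rich viewed /var/log/secure")
print(verify(log))                                        # True
log[0] = ("sshd: accepted login for mallory", log[0][1])
print(verify(log))                                        # False: the altered entry breaks the chain
```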


Microsoft Security Updates for October 2009

We don't normally cover Patch Tuesday unless there is something unusual, and the October 2009 advance notification appears to be just that. It lists patches for 13 different security bulletins, covering what looks like 30 separate security problems. Eight of the bulletins are for critical vulnerabilities with the possibility of remote code execution. The majority of the patches are for Windows itself, with a couple for SQL Server, Office, and Forefront, but it looks like just about every production version of Windows is affected. Given the scope of this security patch and the seriousness of the bugs, it looks like IT departments are going to be working overtime for a while.

Details of each of the vulnerabilities will be released later today, and I will update this post with specific points of interest as I find them. I am assuming that at least one of the patches is in response to the Server Message Block vulnerability discovered back in August. IIS is not listed as one of the affected products, but odds are the underlying OS will be, and folks will be restarting app servers either way. I am still trying to determine the issue with SQL Server. More to come...

Update: Microsoft has updated the bulletin and included the security advisory links and some details on the threats. The SQL Server vulnerability is not within the core database engine, but in the GDI+ ActiveX library used by the report print control – and it's in SQL Server 2005, not 2000:

When SQL Server Reporting Services is installed, the affected installations of SQL Server software may host the RSClientPrint ActiveX control. This ActiveX control distributes a copy of gdiplus.dll containing the affected code. Customers are only impacted when the RSClientPrint ActiveX control is installed on Microsoft Windows 2000 operating systems. If the RSClientPrint ActiveX control is installed on any other operating system, the system version of GDI+ will be used and the corresponding operating system update will protect them.

The GDI+ vulnerability pretty much allows an attacker to take down any Microsoft platform or function that uses the GDI+ DLL – which is basically anything that uses images for forms, which is just about everything. My earlier comment that IIS was not listed was true, but there is in fact a bug linked to IIS: version 5.0 of the FTP service is vulnerable to remote code execution. Some of the exploits have workarounds and can be mitigated through firewall and web application firewall settings; however, given the number and severity of the issues, we recommend patching as soon as possible.


Barracuda Networks Acquires Purewire

Today Barracuda Networks announced their acquisition of Purewire. Barracuda has an incredibly broad product suite – AV, WAF, anti-spam, anti-malware, SSL gateways, and so on – but they are behind their competition in web filtering and seriously lacking in solutions delivered as SaaS. The Purewire product set closes Barracuda's biggest product gap, giving them URL filtering and some basic content inspection. Most importantly, it can be delivered as SaaS. This matters for two reasons. First, Barracuda has been losing market share to email and web security vendors with comprehensive SaaS product lines. SaaS offers flexible deployment and extends the usable lifespan of existing appliance/software security investments. Second, SaaS can be sold 'up-market' or 'down-market', as pricing is simply adjusted for the desired capacity. This will keep the handful of Barracuda enterprise customers happy, and provide SME customers the ability to add capacity as needed – hopefully keeping them from bolting to other providers.

I have never had my hands on the Purewire product, so I have little knowledge of its internal workings or competitive differentiators. I have only spoken with a couple of customers, but they seemed to be satisfied with the web filtering capabilities. No wholehearted endorsements, but I did not hear any complaints either. There is nothing wrong with a lack of passion here – often the best that can be said for a web filtering product is that it does its job and goes unnoticed.

Based on recent press releases and joint customer announcements, I was expecting Proofpoint to be the acquirer. Regardless, this is a better fit for both companies, given Proofpoint's significant overlap with Purewire, and Barracuda's greater need for the technology. It has been a long time coming, but Barracuda is finally turning around and showing dedication to a service based delivery model. Remember, it was only two years ago that Barracuda bet on Web Application Firewalls with its acquisition of NetContinuum. That bet did not pay off particularly well, as the WAF market never blossomed as predicted, and it further entrenched Barracuda as a box shop. This is a move in the right direction.


Personal Information Dump

Interesting story of a San Francisco commercial landlord who found 46 boxes of personal information and financial data for thousands of people, left behind by a failed title company.

The boxes were the detritus of what was until last year a thriving business, Financial Title. Then the economy tanked, and the company folded up its locations all across California, including the one Tookoian rented to it. "They basically abruptly closed shop," he said as he walked past the company's logo still affixed to a white wall. "Turned the lights off, closed the door and walked away."

Despite all of the data breaches and crazy stuff we see in the data security profession, I am still shocked at this type of carelessness. I expect to see prosecutors go after the owners of the company for failure to exercise their custodial responsibilities for these records.

Ridout says the Federal Trade Commission has implemented new rules requiring businesses to properly dispose of sensitive personal information. So far, an Illinois mortgage company was fined $50,000 for throwing personal records in a dumpster. But fines like that are rare.

And after his good deed of having the records destroyed, the landlord still had to pay the bill. Perhaps the FTC will set an example in this case.


Friday Summary – October 9, 2009

A lot of "not" this week. I was not at SecTor, although I understand it was a good time. I am not going to Oracle Open World. I should be going, but too many projects are either beginning or remain unfinished for me to travel to the Bay Area, visit old friends, and find a good bar to hang out at. That is lots of fun I will not be having. I will not be going to Atlanta in November, as the TechTarget event for data security has been knocked off the calendar. And I am not taking a free Mexican holiday in Puerta de Cancun or wherever Rich is enjoying himself. Oh well – the weather has been awesome in Phoenix.

With the posts for Dark Reading this week I spent a bunch of time rummaging around for old database versions and looking through notes for database audit performance testing. Some of the old Oracle 7.3 tests, with nearly 50% transactional degradation, still seem unreal, but I guess it should not be surprising that auditing features in older databases were a problem. They were not designed to audit transactions like we do today. They were designed to capture a sample of activity so administrators could understand how people were using the database. Performance and resource allocation were the end goals. Once a sample was collected, auditing was turned off. Security was not really a consideration, and no thought was given to compliance. Yet the order of use and priority has been turned upside down, as these features now fill a critical compliance need – but they require careful deployment.

While I was at RSA this year, one database vendor pointed out some of the security vendors citing this 50% penalty as what you could expect today. Bollocks! Database security and compliance vendors who do not use native database auditing would like you to embrace this performance myth. They have a competitive offering to sell, so the more people fear performance degradation, the better their odds of selling you an alternative to accomplish this task.

I hear DBAs complain a lot about using native auditing features, because it used to be a huge performance problem, and DBAs would get complaints from database and application users. Auditing produces a lot of data, and something has to be done with that data. It needs to be parsed for significant events, reported on, acted upon, erased or backed up, or some combination thereof. In the past, database administrators performed these functions manually, or wrote scripts to partially automate the responsibility – and rewrote them any time something within IT changed. As a form of self-preservation, DBAs generally do not like accepting this responsibility. And I admit it takes a little time to get auditing set up right, and you may even discover some settings are counter-intuitive. However, auditing is a powerful tool and it should not be dismissed out of hand. It is not my first choice for database security – no way, no how! But for compliance reporting and control validation, especially for SOX, it's really effective. Plus, much of this burden can be removed by using third party vendors to handle the setup, data extraction, cleanup, and reporting. Anyway, enough about database auditing.
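For the curious, here is roughly what "getting it set up right" can look like – a hedged sketch using the cx_Oracle driver. The connection details and table are made up, it assumes the AUDIT_TRAIL initialization parameter is already set to DB, and the exact AUDIT options you need depend on your compliance requirements:

```python
import cx_Oracle  # requires the Oracle client libraries

# Hypothetical connection details.
conn = cx_Oracle.connect("audit_admin", "********", "dbhost/orcl")
cur = conn.cursor()

# Capture logons, plus changes to a sensitive table. BY ACCESS writes one
# audit record per statement rather than one per session.
cur.execute("AUDIT SESSION")
cur.execute("AUDIT INSERT, UPDATE, DELETE ON finance.gl_entries BY ACCESS")

# Pull the last day of audit records for review, reporting, or archival.
cur.execute("""
    SELECT username, action_name, timestamp
      FROM dba_audit_trail
     WHERE timestamp > SYSDATE - 1
""")
for username, action, ts in cur:
    print(username, action, ts)
```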
David Meier & David Mortman: Visa’s Data Field Encryption. Favorite Outside Posts Rich: Coconut Television. “No tequila yet, but we will see how the night goes.” Adrian & Mortman: JJ on SecTor’s Wall of Shame. Meier: Comcast pop-ups alert customers to PC infections. It may be effective, but why are you inspecting my traffic? How do I opt out? Top News and Posts Bloggers who review products must disclose compensation. But nothing says you need to disclose compensation for not writing about a product (wink-wink). Payola may be illegal, but hush money is bueno! Statistics from the Hotmail Phishing Scam. This closely mimics some of the weak password detection and dictionary attack work I conducted. You will notice any dictionary attack must be altered for regional preferences. Express Scripts notifying 700,000 in Pharma data breach. Bank fraud Malware that rewrites your bank statement. PayPal Pissed! Why the FBI Director does not bank online. Botnet research conducted by University of California at Santa Barbara. Full research paper forthcoming. AVG launches new AV suite while Microsoft is breathing down their necks. Hundreds arrested in Phishing scam where as much as $1M US was stolen. What I found most interesting about this is MSNBC and Fox News only mention ‘overseas’ participants, while small investigative papers like the Sacramento Bee and others gave details and noted the cooperation of Egyptian authorities. I guess ‘fair and balanced’ does not necessarily mean ‘complete and accurate’. McAfee and Verizon partnership. Passwords for Gmail, Yahoo and Hotmail accounts leaked. What’s wrong with a wall of sheep? Kidding. People who don’t understand security grasping at straws. Malware Flea Market. Blog Comment of the Week This week’s best comment comes from Adam in response to Mortman’s Online Fraud Report: It’s sort of hard to answer without knowing more about what data he has, but what I’d like is raw data, anonymized to the extent needed, and shared in both data and analyzed forms, so other people can apply their own analysis to the data. Share:
