Securosis

Research

McAfee Acquires MX Logic

During the week of Black Hat/Defcon, McAfee acquired MX Logic for about $140M plus incentives, adding email security and web filtering services to their product line. I had kind of forgotten about McAfee and email security, and not just because of the conferences. Seriously, they were almost an afterthought in this space. Despite their anti-virus being widely used in mail security products, and their vast customer base, their own email and web products have not been dominant. Because they're one of the biggest security firms in the industry, it's difficult to discount their presence, but honestly, I thought McAfee would have made an acquisition last year because their email security offering was seriously lacking. In the same vein, MX Logic is not the first name that comes to mind for email security either, but not because of product quality issues: they simply focus on reselling through managed service providers and have not gotten the same degree of attention as many of the other vendors.

So what's good about this? Going back to my post on acquisitions and strategy, this purchase is strategic in that it solidifies and modernizes McAfee's position in email and web filtering SaaS, and it also opens up new relationships with the MSPs. The acquisition gives McAfee a more enticing SaaS offering to complement their appliances, and should bundle more naturally with other web services and content filtering, reducing head-to-head competitive issues. The more I think about it, the more it looks like the managed service provider relationships are a big piece of the puzzle. McAfee just added 1,800 new channel partners, and has the opportunity to leverage those partners' relationships into new accounts, since MSPs tend to hold sway over their customers' buying decisions. And unlike Tumbleweed, which was purchased for a similar amount ($143M) on falling revenues and with no recognizable SaaS offering, this appears to be a much more compelling purchase that fits on several different levels.

I estimated McAfee's revenue attributable to email security was in the $55M range for 2008. That was a guess on my part because I have trouble deciphering balance sheets, but it was backed up by another analyst as well as a former McAfee employee who said I was in the ballpark. If we add another $30M to $35M (optimistically) of revenue to that total, it puts McAfee a lot closer to the leaders in the space in terms of revenue and functionality. We can hypothesize about whether Websense or Proofpoint would have made a better choice, as both offer what I consider more mature and higher-quality products, but their higher revenue and larger installed bases would have cost significantly more, while overlapping more with what McAfee already has in place. This acquisition accomplishes some of the same goals for less money.

All in all, this is a good deal for existing McAfee customers, it fills in a big missing piece of their SaaS puzzle, and I am betting it will help foster revenue growth in excess of the purchase price.

The Securosis Intern and Contributing Analyst Programs

Update: Based on questions over email, this is only part time and we expect you to have another job, and we are looking for 1-2 people to test the idea out. Also, if you are on the Contributing Analyst track, we'll focus more on research and writing, and you won't be asked to do much of the normal intern-level stuff.

Over the years we've met a heck of a lot of smart people, many of whom we'd like to work with, but we haven't really had a good mechanism to pull off direct collaboration under the Securosis umbrella. Like pretty much any self-funded services company on the face of the planet, we need to be super careful about managing growth to limit overhead. We've also been dropping some activities over here that aren't at the top of the to-do list, which is just as dangerous as bloated overhead. Right before Black Hat I tweeted that we were thinking of starting an intern program, and I received a bigger response than expected. Some of these people are far too qualified for an "intern" title. It also got us thinking that there might be some creative ways to pull other people in, without too much overhead or unrealistic commitments on either side. Being community and social media junkies, we also thought we'd like to incorporate some of those ideas into whatever we come up with. Thus we're officially announcing our intern and Contributing Analyst programs. Here's what we are thinking, and we are open to other ideas:

The intern program is for anyone with a good security background who's also interested in learning what it's like to be an analyst. We'll ask for some cheap labor (writing projects, site maintenance, other general help) and in exchange we'll bring you in, show you the analyst side, and give you access to our resources. We'll pay for certain scut work, but it won't be a lot. Floggings will be kept to a minimum, unless you are into that sort of thing.

The Contributing Analyst positions are for experienced industry analysts, or others capable of contributing high-quality research and analysis. We will ask you to blog occasionally and bring you in on specific projects. We will also support you if you bring in your own projects. In exchange, we will pay you the same rates we pay ourselves on projects, including some of the research products we are planning on producing.

In both cases you will be part of the Securosis team: participating in briefings, using our resources, and so on. We realize there might be the occasional conflict of interest, depending on your current employer. Anyone in either program will be restricted from writing about anything that promotes, or potentially promotes, their current employer, and will be excluded from briefings and proprietary materials from competitors. You'll have to be firewalled off from any conflicts. Also, any potential conflicts will be disclosed in your site bio and in any publications. You'll have to sign a contract agreeing to all this. You'll get a Securosis email address, direct blog access, internal collaboration server access, business cards, editorial support, and use of anything else we have, like our SurveyMonkey account.

We are really persnickety about how we write and the quality of our work. Anything you publish under our name will have to be approved by a full-time analyst and go through an editorial process that may be considered brutal, if not outright sadistic. We'll train interns up, but any Contributing Analyst will be expected to write at the same level we do, and will be reviewed too. Unless you are already an established industry analyst (or have that experience), we will have you start in the intern program for a minimum of 3 months. This is so we can feel each other out and make sure it's going to work. Anyone in either program can eventually become a full-timer, if the workload and quality support it. We don't plan on "dictating" to people. We want to give you freedom to explore different research projects and new ideas. We're totally up for helping implement (and even funding) good ideas as long as they support our no-bull, totally transparent research philosophy.

Basically, we want to expand the community of people we work with directly, even if it's not a traditional employee/employer relationship. Eventually we'd love to have a network of contributors of different types, and this is only a first step. There are perspectives out there that no full-time analyst will ever get, by the nature of the job, and this might be a way to expand that window. We also think we can support some new, interesting kinds of research that might be difficult to perform someplace else. Think of us as a platform, especially since we don't feel compelled to directly monetize everything we do.

If you are interested, please email us at info@securosis.com. We'll need a resume, a bio, which program you are interested in, and why. We'll have an interview process that will require some writing, presenting, and an interview or two. We only plan on taking a couple of people at a time, since it can take a lot of time to get someone up and running, but we'll stack rank and fill in as we have the capacity to support people.

Mini Black Hat/Defcon 17 recap

At Black Hat/Defcon, Rich and I are always convinced we are going to be completely hacked if we use any connection anywhere in Las Vegas. Heck, I am pretty sure someone was fuzzing my BlackBerry even though I had Bluetooth, WiFi, and every other function locked down. It's too freakin' dangerous, and as we were too busy to get back to the hotel for the EVDO card, neither Rich nor I posted anything last week during the conference. So it's time for a mini BH/Defcon recap.

As always, Bruce Schneier gave a thought-provoking presentation on how the brain conceptualizes security, and Dan Kaminsky clearly did a monstrous amount of research for his presentation on certificate issuance and trust. Given my suspicion that my phone might have been hacked, I probably should have attended more of the presentations on mobile security. But when it comes down to it, I'm glad I went over and saw "Clobbering the Cloud" by the team at Sensepost. I thought their presentation was the best all week, as it went over some very basic and practical attacks against Amazon EC2, covering both the system itself and its trust relationships. Those of you who were in the room for the first 15 minutes and then left missed the best part, where Haroon Meer demonstrated how to put a rogue machine up and escalate its popularity. They went over many different ways to identify vulnerabilities, fake out the payment system, escalate visibility/popularity, and abuse the identity tokens tied to the virtual machines. In the latter case, it looks like you could use this exploit to run machines without getting charged, or possibly copy someone else's machine and run it as a fake version. I think I am going to start reading their blog on a more regular basis.

Honorable mention goes to Rsnake and Jabra's presentation on how browsers leak data. A lot of the examples are leaks I assumed were possible, but it is nonetheless shocking to see your worst fears regarding browser privacy demonstrated right in front of your eyes. Detecting if your browser is in a VM, and if so, which one. Reverse engineering Tor traffic. Using leaked data to compromise your online account(s) and leave landmines waiting for your return. Following that up with a more targeted attack. It showed not only specific exploits, but how, when bundled together, they add up to a very powerful way to completely hack someone. I felt bad because there were only 45 or so people in the hall; I guess the Matasano team was supposed to present but canceled at the last minute. Anyway, if they post the presentation on the Black Hat site, watch it. It should dispel any illusions you had about your privacy and, should someone have an interest in compromising your computer, your security.

Last year I thought it really rocked, but this year I was a little disappointed in some of the presentations I saw at Defcon. The mobile hacking presentations had some interesting content, and I laughed my ass off with the Def Jam 2 Security Fail panel (Rsnake, Mycurial, Dave Mortman, Larry Pesce, Dave Maynor, Rich Mogull, and Proxy-Squirrel). Other than that, the content was kind of flat. I will assume a lot of the great presentations were the ones I did not select … or were on the second day … or maybe I was hung over. Who knows. I might have seen a couple more if I could have moved around the hallways, but human gridlock and the Defcon Goon who did his Howie Long impersonation on me prevented that from happening. I am going to stick around for both days next year.

All in all I had a great time. I got to catch up with 50+ friends, and meet people whose blogs I have been reading for a long time, like Dave Lewis and Paul Asadoorian. How cool is that?! Oh, and I hate graffiti, but I have to give it up for whoever wrote 'Epic Fail' on Charo's picture in the garage elevator at the Riviera. I laughed halfway to the airport.

Friday Summary – July 24, 2009

“Hi, my name is Adrian, and, uh … I am a technologist” … Yep. I am. I like technology. Addicted to it, in fact. I am on ‘Hack A Day’ almost once a day. I want to go buy a PC and over-clock it, and I don’t even use PCs any more. I can get distracted by an interesting new technology or tool faster than a kid at Toys R Us. I have had a heck of a time finishing the database encryption paper because I have this horrible habit of dropping right down into the weeds. Let’s look at a code sample! What does the API look like? What algorithms can I choose from? How fast is the response in key creation? Can I force a synch across key servers manually, or is that purely a scheduled job? How much of the API does each of the database vendors support? Yippee! Down the rabbit hole I go …

Then Rich slaps me upside the head and I get back to strategy and use cases. Focus on the customer problem. The strategy behind deployment is far more important to the IT and security management audiences than subtleties of implementation, and that should be the case. All of the smaller items are interesting, and may be an indicator of the quality of a product, but they are not a good indicator of whether a product suits a customer’s needs. I’ll head to the technologists anonymous meeting next week, just as soon as I wrap up the recommendations section of this paper.

But the character flaw remains. In college, studying software, I was not confident I really understood how computers worked until I went down into the weeds, or in this case, into the hardware. Once I designed and built a processor, I understood how all the pieces fit together and was far more confident in making software design trade-offs. It’s why I find articles like this analysis of the iPhone 3GS design so informative: it shows how all of the pieces are designed and work together, and now I know why certain applications perform the way they do, and why some features kill battery life. I just gotta know how all the pieces fit together!

I think Rich has his addiction under control. He volunteers to do a presentation at Defcon/Black Hat each year, and after a few weeks of frenzied soldering, gets it out of his system. Then he’s good for the remainder of the year. I think that is what he is doing right now: breadboard and soldering iron out, making some device perform in a way nature probably did not intend. Last year it was a lamp that hacked your home network. God only knows what he is doing to the vacuum cleaner this year!

A couple notes: We are having to manually approve most comments due to the flood of message spam. If you don’t see your comment, don’t fret, we will usually open it up within the hour. And we are looking for some intern help here at Securosis. There is a long list of dubious qualities we are looking for. Basically we need some help with admin and site work, and in exchange we will teach you the analyst game and get you involved with writing and other projects. And since our office is more or less virtual, it really does not matter where you live. And if you can write well enough, you can help me finish this damned paper and write the occasional blog post or two. We are going to start looking seriously after Black Hat, but not before, so get in contact with us next month if you are interested. We’re also thinking we might do this in a social media/community kind of way, and have some cool ideas on making this more than the usual slave labor internship.
As both Rich and I will be at Black Hat/Defcon next week, there will not be a Friday Summary, but we will return to our regularly scheduled programming on the 7th of August. We will be blogging live and I assume we’ll even get a couple of podcasts in. Hope to see you at BH and the Disaster Recovery Breakfast at Cafe Lago!

Hey, I geek out more than once a year! I use microcontrollers in my friggen Halloween decorations for Pete’s sake! -rich

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich and Martin in Episode 159 of the Network Security Podcast.
  • Rich wrote an article on iPhone 3GS security over at TidBITS.

Favorite Securosis Posts
  • Rich: Adrian’s post on the FTC’s Red Flag rules.
  • Adrian: Amazon’s SimpleDB looks like it is going to be a very solid, handy development tool.

Other Securosis Posts
  • Electron Fraud, Central American Style

Project Quant Posts
  • Project Quant: Partial Draft Report

Favorite Outside Posts
  • Adrian: Jack Daniel’s pragmatic view on risk and security.
  • Rich: Techdulla with a short post that makes a very good point. I have a friend in exactly the same situation. Their CIO has no idea what’s going on, but spends a lot of time speaking at vendor conferences.

Top News and Posts
  • Get ready for Badge Hacking!
  • RSnake and Inferno release two new browser hacks.
  • I want to be a cyber-warrior, I want to live a life of dang-er-ior, or something like that.
  • A great interview with our friend Stepto on gaming safety.
  • The Pwnie award nominations are up.
  • The dhcpclient vulnerability is very serious, and you shouldn’t read this post.
  • There is another serious unpatched Adobe Flash/PDF vulnerability.
  • George Hulme with some sanity checking on malware numbers.
  • Medical breach reports flooding California.

Blog Comment of the Week
This week’s best comment comes from Bernhard in response to the Project Quant: Create and Test Deployment Package post: I guess I’m mosty relying on the vendor’s packaging, being it opatch, yum, or msi. So, I’m mostly not repackaging things, and the tool

Sorry, Data Labeling is *Not* the Same as DRM/ERM

First, a bit of a caveat: Andrew Jaquith of Forrester is an excellent analyst and someone I know and respect. This is a criticism of a single piece of his research, and nothing more.

Over at the Forrester Security Blog today, Andrew posted a change of policy on their use of two important data security terms. In short, they will now be using the term Data Labeling instead of Enterprise Rights Management:

So, here’s what Forrester will do in our future coverage. The ERM (enterprise rights management) acronym will vanish, except as a “bridge” term to jog memories. In the future, we will practice “truth in labeling” and call this ERM thing data labeling.

Unfortunately, this is a factually incorrect change, since data labeling already exists. I agree with Andrew that ERM is a terrible term, in large part because I’ve covered Enterprise Risk Management and know there are about a dozen different uses for that acronym. Personally, I refuse to use ERM in this context, and use the term Enterprise DRM (Digital Rights Management). Enterprise Rights Management is a term created to distinguish consumer DRM from enterprise DRM, in no small part because nearly everyone hates consumer DRM. The problem is that data labeling is also a specific technology with an established definition, and one we’ve actively criticized in the past. Andrew refers back to the Orange Book. Here’s what the Orange Book says about data labeling:

“Access control labels must be associated with objects. In order to control access to information stored in a computer, according to the rules of a mandatory security policy, it must be possible to mark every object with a label that reliably identifies the object’s sensitivity level (e.g., classification), and/or the modes of access accorded those subjects who may potentially access the object.”

Sounds just like what ERM is doing, no? No: the difference is under the covers. Data labeling refers to tags or metadata attached to structured or unstructured data to define a classification level. Labels don’t normally include specific handling controls, since those are handled at a layer above the label itself (depending on the implementation). DRM is the process of encrypting data, then applying usage rights that are embedded in the encrypted object. For example, you encrypt a file and define who can view it, print it, email it, or whatever. Any application with access to decrypt the file is designed to respect and enforce those policies… as opposed to regular encryption, which includes no usage rights, and where anyone with the key can read the file. This also shows the problem with consumer DRM and why it always breaks: in an enterprise we have more control over locking down the operating environment, but in the consumer world the protected file is always in a hostile environment. Since you have to have the key to decrypt the file, the key and the data are both potentially exposed.

Labeling and DRM may work together, but they are distinct technologies. You can label an individual record/row in a database, but you can’t apply DRM rights to it (I suppose you could, but it’s completely impractical and there isn’t a single tool on the market for it). You can apply DRM rights to a file without ever applying a classification level.

I asked Andrew about this over Twitter, and our conversation went like this (Andrew’s post is first):

@rmogull Really? Do you think “ERM” is actually a useful name for that category? Want to discuss alternatives?
@arj I use “Enterprise DRM” I also hate ERM and refuse to use it.
@rmogull Makes sense. Want to send me an e-mail (or do a blog post) critiquing the post? I’m a pretty good sport.

I think we are on the same page now, and I thank Andrew for bringing this up and being willing to take some gentle lumps.
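
To make the distinction concrete, here is a small, purely illustrative sketch. The class names and policy fields are mine, not from any product; it just shows where the control information lives in each model:

```python
# Illustrative only: toy data structures, not any vendor's implementation.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LabeledRecord:
    # Data labeling: the record carries a classification tag; any handling
    # rules live in a layer above the label (an access or DLP policy).
    data: Dict[str, str]
    classification: str  # e.g. "Confidential"

@dataclass
class RightsProtectedFile:
    # Enterprise DRM: the content is encrypted, and the usage rights travel
    # inside the protected object. Only trusted applications that can decrypt
    # the file are expected to enforce those rights.
    ciphertext: bytes
    rights: Dict[str, bool] = field(default_factory=lambda: {
        "view": True, "print": False, "email": False})

record = LabeledRecord({"name": "Alice", "ssn": "123-45-6789"}, "Confidential")
doc = RightsProtectedFile(ciphertext=b"...encrypted bytes...")
# The label says *what* the data is; the DRM object says *what you may do* with it.
```

The label leaves enforcement to whatever layer reads it, while the DRM object carries its usage rights inside the protected blob, which is exactly why the two shouldn't share a name.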

Amazon’s SimpleDB

I have always felt the punctuated equilibrium of database technology is really slow, with long periods between the popularity of simple relational ‘desktop’ databases (Access, Paradox, dBase III+, etc.) and ‘enterprise’ platforms (DB2, Oracle, SQL Server, etc.). But for the first time in my career, I am beginning to believe we are seeing a genuine movement away from relational database technology altogether. I don’t really study trends in relational database management platforms like I did a decade or so ago, so perhaps I have been slightly ignorant of the progression, but I am somewhat surprised by the rapidity with which programmers and product developers are moving away from relational DB platforms and going to simple indexed flat files for data storage.

Application developers need data storage and persistence as much as ever, but it seems simpler is better. Yes, they still use tables, and they may use indices, but complex relational schemata, foreign keys, stored procedures, normalization, and triggers seem to be unwanted and irrelevant. Advanced relational technologies are being ignored, especially by web application developers, both because they want to manage these functions within the application they know (as opposed to the database they don’t), and because it makes for a cleaner design and implementation of the application. What has surprised me is the adoption of indexed flat files for data storage in lieu of any relational engine at all. Flat files offer a lot of flexibility, can deal with bulk data insertions very quickly, and depending upon how they are implemented may offer extraordinary query response. It’s not like ISAM and its variants ever went away; they remain popular in everything from mainframes to control systems. We moved from basic flat files to relational platforms because they offered more efficient storage, but that requirement is long dead. We have stuck with relational platforms because they offer data integrity and transactional consistency lacking in the simpler data storage platforms, as well as excellent lookup speed on reasonably static data sets, and they provide a big advantage with pre-compiled, execution-ready stored procedure code. However, when the primary requirement is quick collection and scanning of bulk data, you don’t really care about those features so much. This is one of the reasons many security product vendors moved to indexed flat files for data storage, as they offer faster uploads, dynamic structure, and correlation capabilities, but that is a discussion for another post.

I have been doing some research into ‘cloud’ service & security technologies of late, and a few months ago I was reminded of Amazon Web Services’ offering, Amazon SimpleDB. It’s a database, but in the classic sense: what databases were like prior to the relational model we have been using for the last 25 years. Basically it is a flat file, with each entry having attached name/value attribute pairs. Sounds simple because it is. It’s a bucket to dump data in, and you have the flexibility to introduce as much or as little virtual structure into it as you care to. It has a query interface, with many of the same query constructs that most SQL dialects offer. It appears to have been quietly launched in 2007, and I am guessing it was built by Amazon to solve their own internal data storage needs. In May of this year they augmented the query engine to support comparison operators such as ‘contains’ and several features for managing result sets.

At this point, the product seems to have reached a state where it offers enough functionality to support most web application developers. You will be giving up a lot of (undesired?) functionality, but if you just want a simple bucket to dump data into with complete flexibility, this is a logical option. I am a believer that ‘cheaper, faster, easier’ always wins. Amazon’s SimpleDB fits that model. It’s feasible that this technology could snatch away the low end of the database market that is not interested in relational functions.
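
To give a feel for the name/value model, here is a minimal sketch using the boto Python library's SimpleDB support. The domain, item, and attribute names are invented for illustration, and the calls are written from memory, so treat it as an approximation rather than reference code:

```python
# Rough sketch of SimpleDB's name/value model, using the boto library.
# Domain, item, and attribute names here are invented for illustration.
import boto

conn = boto.connect_sdb()                # reads AWS credentials from the environment
domain = conn.create_domain('weblogs')   # a "domain" is just a bucket of items

# Each item is a set of name/value attribute pairs: no schema, no foreign keys.
domain.put_attributes('event-0001', {
    'timestamp': '2009-07-20T18:42:00Z',
    'src_ip':    '10.1.2.3',
    'action':    'login_failure',
})

# Query with a SQL-ish select; the only structure is whatever you chose to put in.
for item in domain.select("select * from `weblogs` where action = 'login_failure'"):
    print(item.name, dict(item))
```

The point is what's missing: no schema definition, no joins, no stored procedures, just items, attributes, and a select call.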

Premature Cyberjaculation: Security, Skepticism, and the Press

Over the past few weeks we’ve seen two more security stories get completely blown out of proportion in the press. The first was, of course, the DDoS attacks that were improperly attributed by most commentators to North Korea. The second, no surprise, was the Great Twitter Hack of 2009, which might also be referred to as the Great Cloud Security Collapse. In both cases the stories were not only blown completely out of proportion, but many of the articles devoted more space to hyperbole and innuendo than facts. In the meantime, we had a series of unpatched vulnerabilities being exploited in Internet Explorer and Firefox, placing users at very real risk of becoming victims.

Electron Fraud, Central American Style

When I was a kid, the catchphrase “Computers don’t lie” was very common, implying that machines were unbiased and accurate, in order to engender faith in the results they produced. Maybe that’s why I am in security: I found the concept very strange. Machines, and certainly computers, do pretty much exactly what we tell them to do, and implicit trust is misguided. As their inner workings are rarely transparent, they are perfectly suited to hiding all sorts of shenanigans, especially when under the control of power-hungry despots.

It is being reported that Honduran law enforcement has seized a number of computers that contain certified results for an election that never took place. It appears that former President Manuel Zelaya attempted to rig the vote on constitutional reform, and might have succeeded if he had not been booted prior to the vote. I cannot vouch for the quality of the translated versions, but here is an excerpt:

The National Direction of Criminal Investigation confiscated computers in the Presidential House in which were registered the supposed results of the referendum on the reform of the Constitution that was planned by former President Manuel Zelaya on last June 28, the day that he was ousted. “This group of some 45 computers, by the appearance that present, they would be used for the launch of the supposed final results of the quarter ballot box”, he explained. The computers belonged to the project ‘Learns’ of the Honduran Counsel of Science and Technology directed towards rural schools. All of the computers had been lettered with the name of the department for the one that would transmit the information accompanied by a document with the headline: “Leaf of test”, that contained all the data of the centers of voting.

From the translated articles, it’s not clear to me whether these computers were going to be used in the polling places and would submit the pre-loaded results, or whether they were going to mimic the on-site computers and upload fraudulent data. You can do pretty much anything you want when you have full access to the computer. Had this effort been followed through, it would have been difficult to detect, and the results would have been considered legitimate unless proven otherwise.

FTC Requirements for Customer Data

There was an article in Sunday’s Arizona Republic regarding the Federal Trade Commission’s requirements for any company handling sensitive customer information. Technically this law went into effect back in January 2008, but it was not enforced due to lack of awareness. Now that the FTC has completed their education and awareness program, and enforcement will begin August 1st of this year, it’s time to begin discussing these guidelines. This means that any business that collects, stores, or uses sensitive customer data needs a plan to protect data use and storage.

The FTC requirements are presented in two broad categories. The first part spells out what companies can do to detect and spot fraud associated with identity theft. The Red Flags Rule spells out the four required components:
  • Document specific ‘red flags’ that indicate fraud for your type of business.
  • Document how your organization will go about detecting those indicators.
  • Develop guidelines on how to respond when they are encountered.
  • Periodically review the process and indicators for effectiveness and changes to business processes.

The second part is about protecting personal information and safeguarding customer data. It’s pretty straightforward: know what you have, keep only what you need, protect it, periodically dispose of data you don’t need, and have a plan in case of breach. And, of course, document these points so the FTC knows you are in compliance. None of this is really ground-breaking, but it is a solid generalized approach that will at least get businesses thinking about the problem. It’s also broadly applied to all companies, which is a big change from what we have today.

After reviewing the overall program, there are several things I like about the way the FTC has handled this effort. It was smart to cover not just data theft, but how to spot fraudulent activity as part of normal business operations. I like that the recommendations are flexible: the FTC did not mandate products or process, only that you document. I like the fact that they were pretty clear on who this applies to and who it does not. I like the way that reducing sensitive data retention is shown as a natural way to simplify requirements for many companies. Finally, providing simple educational materials, such as this simplified training video, is a great way to get companies jump-started, and gives them some material to train their own people. Most organizations are going to be besieged by vendors with products that ‘solve’ this problem, and to them I can only say ‘Caveat emptor’.

What I am most interested in is the fraud detection side: both what the red flags are for various business verticals, and how and where they are detected. I say that for several reasons, but specifically because the people who know how to detect fraud within the organization are going to have a hard time putting it into a checklist and training others. For example, most accountants I know still use Microsoft Excel to detect fraud on balance sheets! Basically they import the balance sheet and run a bunch of macros to see if there is anything ‘hinky’ going on. There is no science to it, but practical experience tells them when something is wrong. Hopefully we will see people share their experiences and checklists with the community at large. I think this is a good basic step forward to protect customers and make companies aware of their custodial responsibility to protect customer data.
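
To make that spreadsheet intuition a little more concrete, here is a purely illustrative sketch (my example, not something the article or the FTC prescribes) of one common screening technique, a Benford's Law first-digit test, which flags sets of figures whose leading digits stray from the expected distribution:

```python
# Toy example: flag a set of ledger amounts whose leading-digit distribution
# strays from Benford's Law. A screening heuristic, not proof of fraud.
import math
from collections import Counter

def benford_deviation(amounts):
    """Mean absolute deviation between observed and expected
    leading-digit frequencies for digits 1-9."""
    digits = [int(str(abs(a)).lstrip('0.')[0]) for a in amounts if a]
    counts = Counter(digits)
    total = len(digits)
    deviation = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1.0 / d)
        observed = counts.get(d, 0) / float(total)
        deviation += abs(observed - expected)
    return deviation / 9

ledger = [1023.50, 1187.00, 2045.99, 310.25, 1150.75, 980.00, 1270.40]
score = benford_deviation(ledger)
# Threshold is arbitrary here; in practice you would tune it to your data.
print("flag for review" if score > 0.05 else "looks ordinary", round(score, 3))
```

A high score doesn't prove fraud; it just tells the reviewer where to look first, which is about all the Excel macros do as well.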

Friday Summary – July 17, 2009

I apologize to those of you reading this on Saturday morning: with the stress of completing some major projects before Black Hat, I forgot that to push the Summary out Friday morning, we have to finish it off Thursday night. So much for the best-laid plans and all.

The good news is that we have a lot going on at Black Hat. Adrian and I will both be there, and we’re running another Disaster Recovery Breakfast, this time with our friends over at Threatpost. I’m moderating the VC panel at Black Hat on Wednesday, and will be on the Defcon Security Jam 2: The Fails Keep on Coming panel. This is, by far, my favorite panel. Mostly because of the on-stage beverages provided. Since I goon for the events (that means work), Adrian will be handling most of our professional meetings for those of you who are calling to set them up. To be honest, Black Hat really isn’t the best place for these unless you catch us the first day (for reasons you can probably figure out yourself). This is the one conference a year when we try to spend as much of our time as possible in talks absorbing information. There is some excellent research on this year’s agenda, and if you have the opportunity to go I highly recommend it. I think it’s critical for any security professional to keep at least half an eye on what’s going on over on the offensive side. Without understanding where the threats are shifting, we’ll always be behind the game.

I’ve been overly addicted to the Tour de France for the past two weeks, and it’s fascinating to watch the tactical responsiveness of the more experienced riders as they intuitively assess, dismiss, or respond to the threats around them. While the riders don’t always make large moves, the best of them sense what might happen around the next turn and position themselves to take full advantage of any opportunities, or head off attacks (yes, they’re called attacks) before they pose a risk. Not to over-extend another sports analogy, but by learning what’s happening on the offensive side, we can better position ourselves to head off threats before they overly impact our organizations. And seriously, it’s a great race this year with all sorts of drama, so I highly recommend you catch it. Especially starting next Tuesday when they really hit the mountains and start splitting up the pack.

-Rich

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Martin interviews Steve Ocepek on this week’s Network Security Podcast (plus we cover a few major news stories).
  • Rich is quoted in a Dark Reading article on implementing least privileges.
  • Rich is quoted alongside former Gartner co-worker Jeff Wheatman on database privileges over at Channel Insider.
  • John Sawyer refers to our Database Activity Monitoring paper in another Dark Reading article.

Favorite Securosis Posts
  • Rich: Adrian’s Technology vs. Practicality really hit home. I miss liking stuff.
  • Adrian: Database Encryption, Part 6: Use Cases. Someone has already told us privately that one of the use cases exactly described their needs, and they are off and implementing.

Other Securosis Posts
  • Oracle Critical Patch Update, July 2009
  • Microsoft Patched; Firefox’s Turn
  • Second Unpatched Microsoft Flaw Being Exploited
  • Subscribe to the Friday Summary Mailing List
  • Pure Extortion

Project Quant Posts
We’re getting near the end of phase 1 and here’s the work in progress:
  • Project Quant: Partial Draft Report

Favorite Outside Posts
  • Adrian: Amrit Williams’ North Korea Cyber Scape Goat of the World. The graphic is priceless!
  • Rich: David and Alex over at the New School preview their Black Hat talk.

Top News and Posts
  • Critical JavaScript Vulnerability in Firefox 3.5.
  • Microsoft Windows and Internet Explorer security issues patched.
  • Oracle CPU for July 2009.
  • Goldman Trading Code Leaked.
  • Mike Andrews has a nice analysis on Google Web “OS”.
  • Twitter Hack makes headlines.
  • LexisNexis breached by the mob?
  • Vulnerability scanning the clouds.
  • State department worker sentenced for snooping passports.
  • Casino sign failure (pretty amusing).
  • PayPal reports security blog to the FBI for a phishing screenshot.
  • A school sues a bank over theft due to hacked computer. This is a tough one; the school was hacked and proper credentials stolen, but according to their contract those transfers shouldn’t have been allowed even from the authenticated system/account.
  • Nmap 5 released; Ed’s review.

Blog Comment of the Week
This week’s best comment comes from SmithWill in response to Technology vs. Practicality: Be weary of the CTO/car fanatic. Over-built engines=over instrumented, expensive networks. But they’re smoking fast!


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.