
Quantify Me: Friday Summary: February 15, 2013

Rich here. There are very few aspects of my life I don’t track, tag, analyze, and test. You could say I’m part of the “Quantified Self” movement if it weren’t for the fact that the only movement I like to participate in involves sitting down, usually with a magazine or newspaper. I track all my movements during the day with a Jawbone Up (when it isn’t broken). I track my workouts with a Garmin 910XT, which looks like a watch designed by a Russian gangster, but is really a fitness computer that collects my heart rate, GPS coordinates, foot-pod accelerometer data, and bike data; it can even tell me which swimming stroke I’m using, for how long, and how far, in my feeble attempts to avoid drowning. My bike trainer uses a Kurt Kinetic InRide power meter for those days my heart rate is lying to me about how hard I’m pushing. I track my sleep with a Zeo, test my blood with WellnessFX, and screen my genes with 23andMe. I correlate most of my fitness data in TrainingPeaks, which uses math and data to track my fitness level and overall training stress, and to optimize my workouts using whichever data collection device du jour I have with me. My swim coach (when I use him) uses video and an endless pool to slowly move me from “avoiding drowning in a forward direction” to “something that almost resembles swimming”. My bike is custom fit based on video, my riding style, and power output and balance measurements; the next one will probably be calibrated from computerized real-time analysis and those dot trackers used for motion capture films. Every morning I track my weight with a WiFi-enabled scale that automatically connects to TrainingPeaks to track trends. I can access nearly all this data from my phone, and I am probably forgetting things.

Some days I wonder if this all makes a difference, especially when I think back to my hand-written running and lifting logs, and the early days using a basic heart rate monitor with no data recording. Or the earlier days when I’d just run for running’s sake, without so much as headphones on. But when I sit back and crunch the numbers, I do find tidbits that affect the quality of my life and training.

I have learned that I tend to average three deep sleep cycles a night, but one is usually between 6-8 am, which is when I almost always wake up. Days I sleep in a bit and get that extra cycle correlate with a significant upswing in how well I feel, and in my work productivity. When the kids are older I will most definitely adjust my schedule – getting that sleep even 1-2 days a week makes a big difference. I am somewhat biphasic, and if I’m up in the middle of the night for an hour or so I still feel good as long as I get that morning rest. With a new baby coming, I will really get to test this out.

I am naturally a sprinter. I knew this based on my athletic history, but genetics confirms it. I was insanely fast when I competed in martial arts, but always had stamina issues (keep the jokes to yourself). As I have moved into endurance sports this has been a challenge, but I can now tune my training to hit specific goals with great success and very little wasted effort. I have learned that although I can take a ton of high-intensity training punishment, if I am otherwise stressed in life at the same time I get particular complications.

I am in the midst of tweaking my diet to fit my lifestyle and health goals. I have a genetic disposition to heart disease, and my numbers prove it, but I have managed to make major strides through diet.
Without being able to make these changes and then test the results, I would be flying blind. Instead, I’m learning exactly what works for me. This helped me lose 10 pounds in less than a month with only minimal diet changes, for example, and drop my cholesterol by 40 points.

Not all of the data I collect is overly useful. I’m still seeing where steps-per-day fits in, but I think that is more a daily motivator to keep me moving. The genetic testing with 23andMe was interesting, but we’ll see whether it affects any future health decisions. Perhaps if I need to go on statins someday, since I don’t carry a genetic sensitivity that can really cause problems. It’s obsessive (but not as obsessive as my friend Chris Hoff), but it does provide incredible control over my own health.

Life is complex, and no single diet or fitness regimen works the same for everyone. From how I work out, to how I sleep, to what I eat, I am learning insanely valuable lessons that I then get to test and validate. I can’t emphasize enough how much more effective this is than the guesswork I had to live with before these tools became available. I plan on living a long time, and being insanely active until the bitter end. I’m in my 40s, and can no longer do whatever I want and rely on youth to clean up my mistakes.

Data is awesome. Measure, analyze, correct, repeat. Without that cycle you are flying in the dark, and this is as true for security (or anything else, really) as it is for health.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich’s password rant at Macworld.

Favorite Securosis Posts

  • Mike Rothman: RSA Conference Guide 2013: Cloud Security. Rich did a good job highlighting one of the major hype engines we’ll see at the RSA Conference. And he got to write SECaaS. Win!
  • Adrian Lane: LinkedIn Endorsements Are Social Engineering. As LinkedIn looks desperately for ways to be more than just contact management, Rich nails the latest attempt.
  • David Mortman: Directly Asking the Security Data.
  • Rich: The Increasing Irrelevance of Vulnerability Disclosure. Yep.

Other Securosis Posts

  • RSA Conference Guide 2013: Application Security.
  • I’m losing track – is this ANOTHER Adobe 0-day?


I’m losing track—is this ANOTHER Adobe 0-day?

As reported by Tom’s Guide, FireEye says it has discovered a PDF 0-day that is currently being exploited in the wild:

According to the report, this exploit drops two DLLs upon successful exploitation, one of which displays a fake error message and opens a decoy PDF document. The second DLL drops the callback component, which talks to a remote domain.

“We have already submitted the sample to the Adobe security team,” the firm stated on Wednesday in a blog post. “Before we get confirmation from Adobe and a mitigation plan is available, we suggest that you not open any unknown PDF files. We will continue our research and continue to share more information.”

And note that this is not just a Windows issue – Linux and OS X versions are also susceptible. So avoid opening unknown PDF files – that is the recommended workaround – while you wait for a patch. No kidding! Personally I just disabled Adobe Reader on my machine, and I’ll consider re-enabling it at some point in the future. Some of you don’t have this option, so use caution.


RSA Conference Guide 2013: Application Security

So what hot trends in application security will you see at the RSA Conference? Mostly the same as last year’s trends – lots of things are changing in security, but not much on the appsec front. Application security is a bit like security seasoning: companies add a sprinkle of threat modeling here, a dash of static analysis there, marinate for a bit with some dynamic app testing (DAST), and serve it all up on a bed of WAF. The good news is that we see some growth in security adoption in every phase of application development (design, implementation, testing, deployment, developer education), with the biggest gains in WAF and DAST. Additionally, according to many studies – including the SANS application security practices survey – more than two-thirds of software development teams have an application security program in place.

The Big Money Game

With WhiteHat Security closing a $31M funding round, and Veracode racking up $30M themselves in 2012, there won’t be any shortage of RSA Conference party dollars for application security. Neither of these companies is early stage, and the amount of capital raised indicates they need fuel to accelerate expansion. In all seriousness, the investment sharks smell the chum and are making their kills. When markets start to get hot you typically see companies in adjacent markets reposition and extend into the hot areas. That means you should expect to see new players, expanded offerings from old players, and (as in all these RSA Guide sections) no lack of marketing to fan the hype flames (or at least smoke). But before you jump in, understand the differences and what you really need from these services. The structure of your development and security teams, the kinds of applications you work with, your development workflow, and even your reliance on external developers will all affect which direction you head in. Then, when you start talking to company reps on the show floor, dig into their methodology, their technology, and the actual people they use behind any automated tools to reduce false positives. See if you can get a complete sample assessment report from a real scan, preferably provided by a real user, because that gives you a much better sense of what you can expect. And don’t forget to get your invite to the party.

Risk(ish) Quantification(y)

One of the new developments in the field of application security is trying out new metrics to better resonate with the keymasters of the moneybags. Application security vendors pump out reports saying your new code still has security bugs and you’re sitting on a mountain of “technical debt”, which basically quantifies how much crappy old code you don’t have time or resources to fix. Vendors know that Deming’s principles, the threat of a data breach, compliance requirements, and rampant fraud have not been enough to whip companies into action. The conversation has shifted to Technical Debt, Cyber Insurance, Factor Analysis of Information Risk (FAIR), the Zombie Apocalypse, and navel gazing at how well we report breach statistics. The common thread through all of these is providing a basis to quantify and evaluate risk/reward tradeoffs in application security. Of course it’s not just vendors – security and development teams also use this approach to get management buy-in and better resource allocation for security. The application security industry as a whole is trying to get smarter and more effective in how it communicates (and basically sells) the application security problem.
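To make the quantification trend concrete, here is a minimal, hypothetical sketch of the arithmetic behind a “technical debt” number – the severity weights, hours, and hourly rate below are invented for illustration, not drawn from any vendor’s model:

```python
# Toy "technical debt" calculator: price each open finding by severity
# and an estimated remediation effort. All numbers are illustrative.
HOURLY_RATE = 120  # assumed loaded cost of a developer hour

REMEDIATION_HOURS = {"critical": 16, "high": 8, "medium": 3, "low": 1}

def technical_debt(findings):
    """findings: list of (severity, count) tuples from a scanner report."""
    return sum(REMEDIATION_HOURS[sev] * count * HOURLY_RATE
               for sev, count in findings)

# Example: a backlog typical of a legacy app that was never assessed.
backlog = [("critical", 4), ("high", 23), ("medium", 110), ("low", 400)]
print(f"Estimated security debt: ${technical_debt(backlog):,}")
```

Crude as it is, a dollar figure travels a lot further with the people holding the moneybags than a raw count of findings ever will.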
Companies are not just buying application security technologies ad hoc – they are looking to apply limited resources to the problem more effectively. Sure, you will continue to hear the same statistics and all about the urgency of fixing the same OWASP Top 10 threats, but the conversation has changed from “The End is Nigh” to “Risk Adjusted Application Security”. That’s a positive development.

(Please Don’t Ask Us About) API Security

Just like last year, people are starting to talk about “Big Data Security,” which really means securing a NoSQL cluster against attack. What they are not talking about is securing the applications sitting in front of the big data cluster. That could be Ruby, Java, JSON, Node.js, or any one of the other languages used to front big data. Perhaps you have heard that Java had a couple of security holes. Don’t think for a minute these other platforms are going to be more secure than Java. And as application development steams merrily on, with each project leveraging new tools to make coding faster and easier, little (okay – no) regard is being paid to the security of these platforms. Adoption of RESTful APIs makes integration faster and easier, but unless carefully implemented they pose serious security risks (a minimal sketch of what “carefully implemented” means appears at the end of this post). Re-architecture and re-design efforts to make applications more secure are an anomaly, not a trend. This is a serious problem that won’t have big hype behind it at RSA because there is no product to solve it. We all know how hard it is to burn booth real estate on things that don’t end up on a PO. So you’ll hear how insecure Platform X is, and be pushed to buy an anti-malware/anti-virus solution to detect the attack once your application has been hacked. So much for “building security in”.

And don’t forget to register for the Disaster Recovery Breakfast if you’ll be at the show on Thursday morning. Where else can you kick your hangover, start a new one, and talk shop with good folks in a hype-free zone? Nowhere, so make sure you join us…
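As promised above, here is a minimal sketch of the kind of defensive handling RESTful endpoints too often skip – the schema, field names, and rules are hypothetical, purely to illustrate whitelist-style input validation before any business logic runs:

```python
import json

# Whitelist schema: field -> validator. Anything missing or extra is rejected.
SCHEMA = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "action":  lambda v: v in {"read", "update"},
}

def parse_request(body: bytes):
    """Parse and validate a JSON request body before touching business logic."""
    try:
        payload = json.loads(body)
    except (ValueError, UnicodeDecodeError):
        raise ValueError("malformed JSON")
    if not isinstance(payload, dict) or set(payload) != set(SCHEMA):
        raise ValueError("unexpected or missing fields")
    for field, valid in SCHEMA.items():
        if not valid(payload[field]):
            raise ValueError(f"invalid value for {field}")
    return payload

print(parse_request(b'{"user_id": 42, "action": "read"}'))
```

Nothing exotic – just the sort of boring, explicit checking that rarely happens when each project is racing to make coding faster and easier.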


Don’t Bring BS to a Data Fight

Thanks to a heads-up from our Frozen Tundra correspondent, Jamie Arlen, I got to read this really awesome response by Elon Musk of Tesla refuting the findings of a NYT car reviewer, A Most Peculiar Test Drive:

After a negative experience several years ago with Top Gear, a popular automotive show, where they pretended that our car ran out of energy and had to be pushed back to the garage, we always carefully data log media drives. While the vast majority of journalists are honest, some believe the facts shouldn’t get in the way of a salacious story. The logs show again that our Model S never had a chance with John Broder.

Logs? Oh crap. You think the reviewer realized Tesla would be logging everything? Uh, probably not. Then Musk goes through all the negative claims and pretty much shows the reviewer to be either not very bright (to drive past a charging station when the car clearly said it needed a charge) or deliberately trying to prove his point, regardless of the facts. I should probably just use Jamie’s words, as they are much better than mine. So courtesy of Jamie Arlen:

It’s one of those William Gibson moments. You know, where “the future is here, it’s just not evenly distributed yet.” As more “things in the world” get smart and connected, Moore’s Law type interactions occur. The technology necessary to keep a Tesla car running and optimized requires significant monitoring and logging of all control systems, which has an unpleasant side effect for the reviewer. The kicker (for me) in all of this is the example that the NYT writer makes of himself: Sorry dude, the nerds have in fact inherited the earth – if you want to play a game with someone who excels in the world of high-performance cars and orbital launch systems simultaneously, you need to be at least as smart as your opponent. Mr. Broder – you’ve cast yourself as Vizzini and yes, Elon does make a dashing Dread Pirate Roberts.

Vizzini. Well played, Mr. Arlen. Well played. But Jamie’s point is right on the money – these sophisticated vehicle control systems may be intended to make sure the systems are running as they should, but clearly a lot can be done with the data after something happens. How about placing a car at the scene of a crime? Yeah, the possibilities are endless, but I’ll leave those discussions to Captain Privacy. I’m just happy data won over opinion in this case.

UPDATE: It looks like we will get to have a little he said/she said drama here, as Rebecca Greenfield tells Broder’s side of the story in this Atlantic Wire post. As you can imagine, the truth probably lies somewhere in the middle.
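As a toy illustration of why logs win these arguments – the numbers and format are invented, obviously not Tesla’s actual telemetry:

```python
# Claimed vs. logged: when both sides tell a story, only one has data.
claimed_speed_mph = 54  # what the reviewer said he drove
telemetry = [  # (minute, mph) samples from a hypothetical vehicle log
    (0, 60), (10, 65), (20, 62), (30, 68), (40, 64),
]

avg_logged = sum(mph for _, mph in telemetry) / len(telemetry)
print(f"Claimed: {claimed_speed_mph} mph, logged average: {avg_logged:.1f} mph")
if avg_logged > claimed_speed_mph:
    print("The logs disagree with the story.")
```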


ECC Certificates About More Than Speed

Major Update: I got a core fact incorrect, in a big way. Thanks to @ivanristic for catching it. It’s an obvious error and I wasn’t thinking things through. ECC is used at a different point than RC4 in establishing a connection, so this doesn’t necessarily affect the use of RC4. David Mortman seems to think it may be more about mobile support and speeding up SSL/TLS on smaller devices. My apologies, and I will leave the initial post up as a record of my error.

In a rambling press release that buries far too much interesting stuff, Symantec announced the release of both ECC and DSA digital certificates for SSL/TLS. On the surface this looks like merely an attempt to speed things up with ECC and hit government requirements with DSA, but that’s not the entire story. As some of you might remember, a total d*ck of a patent troll operating under the name of TQP Development has been suing everyone they can get their hands on for using the RC4 cipher in TLS/SSL. We know of small businesses, not merely big guys, getting hit with these suits. This matters because RC4 was the best way to get around certain attacks against SSL/TLS. Which brings us back to ECC. I wouldn’t bet my personal fortune on it, but I suspect this move is about avoiding both the security and legal issues in question. Pretty interesting, but I suppose the Symantec lawyers wouldn’t let them put that in a release.
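Incidentally, if you want to see which cipher a given server actually negotiates with your client – RC4 or otherwise – Python’s standard ssl module will tell you. A minimal sketch (the hostname is just a placeholder):

```python
import socket
import ssl

def negotiated_cipher(host: str, port: int = 443):
    """Connect to a TLS server and report the cipher suite it selects."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.cipher()  # (cipher_name, protocol_version, secret_bits)

print(negotiated_cipher("www.example.com"))
# e.g. ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128) -- no RC4 in sight
```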


Tuesday Patchapalooza

“Wait, didn’t I effing just patch that?” That was my initial reaction this morning when I read about another Adobe Flash security update. Having just updated my systems Sunday, I was about to ignore the alerts until I saw the headline from Threatpost: Deja Vu: Another Adobe Flash Player Security Update Released:

Adobe released its regularly scheduled security updates today, including another set of fixes for its ubiquitous Flash Player, less than a week after an emergency patch took care of two zero-day vulnerabilities being exploited in the wild. … The vulnerabilities were rated most severe on Windows, and Adobe recommends those users update to version 11.6.602.168, while Mac OS X users should update to 11.6.602.167.

But that’s not all: Microsoft’s Patch Tuesday bundle included 57 fixes, and in case you missed it, there was another Java update last week, with one more on the way.

I want to make a few points. The most obvious is that there are a great many new critical security patches, many of which address vulnerabilities that are being actively exploited. Even if you patched a few hours ago you should consider updating. Again. Java, Flash, and your MS platforms.

As we spiral in on what seem to be ever-shorter patch cycles, is it time to admit that this is simply the way it is going to be, and that software is a best-effort work in progress? If so, we should expect to patch every week. What do shorter patch cycles mean for regression testing? Is that model even possible in today’s functional and security patch hailstorm? Patching for platforms like the Oracle relational database still lags 18 to 24 months. It’s deep-seated tradition that we don’t patch until things are fully tested, because the applications and databases are mission critical and customers cannot afford downtime or loss of functionality if a patch breaks something critical. Companies remain entrenched in the mindset that back-office applications are not as susceptible to 0-day attacks, so things must remain at the status quo.

When Rich wrote his benchmark research paper on quantifying patch management costs, one of his goals was to provide IT managers with the tools necessary to understand the expense of patching – in time, money, and manpower. But tools in cloud and virtual environments now automate many of the manual parts and make patch processes easier. And some systems are not fully under the control of IT. It is time to re-examine patch strategies, and the systemic tradeoffs between fast and slow patching cycles.
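Back-of-the-envelope arithmetic is enough to see the shape of that tradeoff. This is not Rich’s model – just a hypothetical sketch with invented numbers:

```python
# Rough annual patching cost: cycles per year x per-cycle labor.
# All figures are assumptions for illustration only.
def annual_cost(cycles_per_year, systems, hours_per_system, hourly_rate=90):
    return cycles_per_year * systems * hours_per_system * hourly_rate

servers = 500
weekly = annual_cost(52, servers, hours_per_system=0.25)   # automated, light testing
quarterly = annual_cost(4, servers, hours_per_system=6.0)  # heavy regression testing

print(f"Weekly light-touch patching:        ${weekly:,.0f}/yr")
print(f"Quarterly full-regression patching: ${quarterly:,.0f}/yr")
```

The point isn’t these particular numbers; it’s that until you run your own, “we can’t patch faster” is an assumption rather than a fact.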


RSA Conference Guide 2013: Endpoint Security

The more things change, the more they stay the same. Endpoint security remains predominantly focused on dealing with malware, and the bundling continues unabated. Now we increasingly see endpoint systems management capabilities integrated with endpoint protection, since it finally became clear that an unpatched or poorly configured device may be more of a problem than fighting off a malware attack. And as we discuss below, mobile device management (MDM) is next on the bundling parade. But first things first: advanced malware remains the topic of the day, and vendors will have a lot to say about it at RSAC 2013.

AV Adjunctivitus

Last year we talked about the Biggest AV Loser, and there is some truth to that. But it seems most companies have reconciled themselves to the fact that they still need an endpoint protection suite to get the compliance checkbox. Endpoint protection vendors, of course, haven’t given up, and continue to add incremental capabilities to deal with advanced attacks. But the innovation is happening outside endpoint protection. IP reputation is yesterday’s news. As we discussed in our Evolving Endpoint Malware Detection research last year, it’s no longer about what the malware file looks like, but all about what it does. We call this behavioral context, and we will see a few technologies addressing it at the RSA Conference. Some integrate at the kernel level to detect bad behavior, some replace key applications (such as the browser) to isolate activity, and others use very cool virtualization technology to keep everything separate. Regardless of how the primary technology works, these secondary bits provide a glimmer of hope that someday we might be able to stop advanced malware. Not that you can really stop it, but we need something better than trying to get a file signature for a polymorphic attack.

Also pay attention to proliferation analysis to deal with the increasing amount of VM-aware malware. Attackers know that all these network-based sandboxes (network-based malware detection) use virtual machines to explode the malware and determine whether it’s bad. So the malware does a quick check, and when it is executed in a VM it does nothing. Quite spiffy. But a file that won’t trigger in the sandbox is likely to wreak havoc once it makes its way onto a real device. At that point you can flag the file as bad, but it might already be running rampant through your environment. It would be great to know where that file came from and where it’s been, with a list of devices that might be compromised. Yup, that’s what proliferation analysis does (a toy sketch of the idea appears at the end of this post), and it’s another adjunct we expect to become more popular over the next few years.

Mobile. Still management, not security

BYOD will be hot hot hot again at this year’s RSA Conference, as we discussed in Key Themes. But we don’t yet see much malware on these devices. Sure, if someone jailbreaks their device all bets are off. And Google still has a lot of work to do to provide a more structured app environment. But with mobile devices the real security problem is still management. It’s about making sure the configurations are solid, only authorized applications are loaded, and the device can be wiped if necessary. So you will see a lot of MDM (mobile device management) at the show. In fact, there are a handful of independent companies growing like weeds, because any company with more than a dozen or so folks has a mobile management problem. But you will also see all the big endpoint security vendors talking about their MDM solutions.
Like full disk encryption a few years ago, MDM is being acquired and integrated into endpoint protection suites at a furious clip. Eventually you won’t need to buy a separate MDM solution – it will just be built in. But ‘eventually’ means years, not months. Current bundled endpoint/MDM solutions are less robust than standalone solutions. But as consolidation continues the gap will shrink, until MDM is eventually just a negotiating point in endpoint protection renewal discussions.

We will also see increasing containerization of corporate data. Pretty much all organizations have given up on trying to stop important data making its way onto mobile devices, so they are putting the data in walled gardens instead. These containers can be wiped quickly and easily, and allow only approved applications to run within the container with access to the important data. Yes, it effectively dumbs down mobile devices, but most IT shops are willing to make that compromise rather than give up control over all the data.

The Increasingly Serious “AV Sucks” Perception Battle

We would be the last guys to say endpoint security suites provide adequate protection against modern threats. But statements that they provide no value aren’t true either. It all depends on the adversary, the attack vector, the monitoring infrastructure in place to react faster and better, and most importantly on complementary controls. Recently SYMC took a head shot when the NYT threw them under the bus over the NYT’s own breach. A few days later Bit9 realized that Karma is a Bit9h, when they apparently forgot to run their own software on internal devices and got breached. I guess what they say about the shoemaker’s children is correct.

It will be interesting to see how much the endpoint protection behemoths continue their idiotic APT defense positioning. As we have said over and over, that kind of FUD may sell some product, but it is a short-sighted way to manage customer expectations. Customers will get hit, and then be pissed when they realize their endpoint protection vendor sold them a bill of goods. To be fair, endpoint protection folks have added a number of new capabilities to more effectively leverage the cloud, the breadth of their customer bases, and their research capabilities, and to improve detection – as discussed above. But that doesn’t really matter if a customer isn’t using the latest and greatest versions of the software, or doesn’t have sufficient additional controls in place. Nor will it convince customers who already believe endpoint tools are inherently weak. They can ask Microsoft about that.
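Here is the toy proliferation analysis sketch promised earlier. Conceptually it is little more than an index of file sightings by device, so when a hash is later flagged as malicious you can immediately enumerate the likely exposure. Hashes and device names below are invented:

```python
from collections import defaultdict

# sightings: file hash -> set of devices that have seen the file
sightings = defaultdict(set)

def record(file_hash: str, device: str):
    """Called whenever an endpoint agent observes a new file."""
    sightings[file_hash].add(device)

# Normal operation: files land on devices long before anyone flags them.
record("a3f1...", "laptop-017")
record("a3f1...", "server-db-02")
record("9c2e...", "laptop-233")

# Later, the sandbox (or an analyst) flags a hash as malicious.
def exposure(file_hash: str):
    return sorted(sightings.get(file_hash, set()))

print(exposure("a3f1..."))  # ['laptop-017', 'server-db-02'] -- go check these
```

The hard parts in a real product are scale and data collection, not the lookup – which is exactly why it works as an adjunct rather than a standalone tool.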


Incite 2/13/2013: Baby(sitter) on Board

The Boss and I don’t get out to see movies too often. At least not for the last 12 years or so. It was hard to justify paying a babysitter for two extra hours so we could go see a movie. Quick dinner? Sure. Party with friends? Absolutely. But a movie, not so much. We’d wait until Grandma came to visit, and then we’d do things like see movies and have date nights. But I’m happy to say that’s changing. You see, XX1 is now 12, which means she can babysit for the twins. We sent her to a day-long class on babysitting, where she learned some dispute resolution skills, some minor first aid, and the importance of calling an adult quickly if something goes south.

We let her go on her maiden voyage New Year’s Eve. We went to a party about 10 minutes from the house. Worst case, we could get home quickly. But no worries – everything went well. Our next outing was a quick dinner with some friends very close to the house. Again, no incidents at all. We were ready to make the next jump. That’s right, time for movie night! We have the typical discussions with XX1 about her job responsibilities. She is constantly negotiating for more pay (wonder where she got that?), but she is unbelievably responsible. We set a time when we want the twins in bed, and she sends us a text when they are in bed. The twins respect her authority when she’s in babysitting mode, and she takes it seriously. It’s pretty impressive.

Best of all, the twins get excited when XX1 is babysitting. Maybe it’s because they can watch bad TV all night. Or bang away on their iTouches. But more likely it’s because they feel safe and can hang out and have a good time with their siblings. For those of you (like me) who grew up in a constant state of battle with your siblings, it’s kind of novel. We usually have to set up an Aerobed over the weekend so all three kids can pile into the same room for a sleepover. They enjoy spending time together. Go figure.

Sure, it’s great to be able to go out and not worry about paying a babysitter some ungodly amount, which compounds the ungodly amount you need to pay to enjoy Hollywood’s finest nowadays. But it’s even better to know that our kids will only grow closer through the rest of their lives. As my brother says, “You can pick your friends, but you can’t pick your family!” I’m just glad my kids seem to be okay with the family they have.

–Mike

Photo credits: Bad babysitter originally uploaded by PungoM

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Network-based Threat Intelligence: Following the Trail of Bits
  • Network-based Threat Intelligence: Understanding the Kill Chain
  • Understanding Identity Management for Cloud Services: Architecture and Design
  • Understanding Identity Management for Cloud Services: Integration

Newly Published Papers

  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

We are all next: I may have been a little harsh in my post on the Bit9 hack, Karma is a Bit9h, but the key point is that all security vendors need to consider themselves high-value targets. I wouldn’t be surprised if a lot more get compromised and (attempt to) cover it up.
There isn’t any schadenfreude here – I derive no pleasure from someone being hacked, no matter how snarky I seem sometimes. I also assume it is only a matter of time until I get hacked, so I try to avoid discussing these issues from a false position of superiority. Wendy Nather provides an excellent reminder that defense is damn hard, with too many variables for anyone to completely control. In her words: “So if you’re one of the ones scolding a breach victim, you’re just displaying your own ignorance of the reality of security in front of those who know better. Think about that for a while, before you’re tempted to pile on.” Amen to that. – RM

Swing and a miss: Managing database accounts to deny attackers easy access is a hassle – as pointed out by Paul Roberts in his post on Building and Maintaining Database Access Control Permissions. But the ‘headaches’ are not just due to default packages and allowing public access – those issues are actually fairly easy to detect and fix before putting a database server into production. More serious are user permissions within enterprise applications which have thousands of users assigned multiple roles. In these cases finding an over-subscribed user is like finding the proverbial needle in a haystack (a sketch of the hunt appears at the end of this post). The use of generic ‘service accounts’ shared by multiple users makes it much harder to detect misuse – and, if misuse is spotted, to figure out who the real perpetrator is. Perhaps the most difficult problem is segregation of database administrative duties, where common tasks should be split up, at the expense of making administrators’ jobs far more complex – annoying and time-consuming. Admins are the ones who set these roles up, and they don’t want to make their daily work harder. Validating good security requires someone with access and know-how. Database operations are more difficult than database setup, which is why monitoring and periodic assessments are necessary to ensure security. – AL

First things first: Wim Remes wrote an interesting post about getting value from a SIEM investment, Your network may not be what it SIEMs. Wim’s point is that you can get value from a SIEM, even if the deployment is horribly delayed and over budget (as so many are), but without a few key things in place initially you would just be wasting your time. You need to know what’s important in your environment.
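And here is the sketch promised in the database permissions item above – a toy hunt for over-subscribed users, flagging anyone whose accumulated roles form a combination that violates segregation of duties. Users, roles, and combinations are all invented for illustration:

```python
# user -> assigned roles, as you might extract from an enterprise app.
assignments = {
    "alice": {"ap_clerk", "report_viewer"},
    "bob":   {"ap_clerk", "ap_approver"},          # can create AND approve payments
    "carol": {"dba", "report_viewer", "auditor"},  # admin plus audit access
}

# Toxic role combinations that violate segregation of duties.
TOXIC = [
    {"ap_clerk", "ap_approver"},
    {"dba", "auditor"},
]

for user, roles in assignments.items():
    for combo in TOXIC:
        if combo <= roles:  # set subset test: user holds the whole combination
            print(f"{user}: over-subscribed ({', '.join(sorted(combo))})")
```

With three users this is trivial; with thousands of users and hundreds of roles, the hard work is building the assignments data and the toxic-combination list in the first place.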


Cycling, Baseball, and Known Unknowns

This morning, not even thinking about security, I popped off a tweet on cycling. I have been annoyed lately, as I keep hearing people write off cycling while ignoring the fact that, despite all its flaws, cycling has a far more rigorous testing regimen than most other professional sports – especially American football and baseball (although baseball is taking some decent baby steps). Then I realized this does tie to security, especially in our current age of selective information sharing.

The perception is that cycling has more cheating because more cheaters are caught. Even in Lance’s day, when you really did have to cheat to compete, there was more testing than in many of today’s pro sports. Anyone with half a brain knows that cheating via drugs is rampant in under-monitored sports, but we like to pretend they are cleaner because players aren’t getting caught and going on Oprah. That is willful blindness.

We often face the same issue in security, especially in data security. We don’t share much of the information we need to make appropriate risk decisions. We frequently don’t monitor what we need to in order to really understand the scope of our problems. Sometimes it’s willful; sometimes it is simply cost and complexity. Sometimes it’s even zero-risk bias: we can’t use DLP because it would miss things, even though it would find more than we see today.

But when it comes to information sharing, I think security, especially over the past year or so, has started to move much more in the direction of addressing the known unknowns. Actually, not just security, but the rest of the businesses and organizations we work for. This is definitely happening in certain verticals, and is trickling down from there. It’s even happening in government, in a big way, and we may see some of the necessary structural changes for us to move into serious information sharing (more on that later).

Admitting the problem is the first step. Collecting the data is the second, and implementing change is the third. For the first time in a long time I am hopeful that we are finally, seriously, headed down this long path.


Directly Asking the Security Data

We have long been fans of network forensics tools, which provide a deeper and more granular ability to analyze what’s happening on the network. But most of these tools are still beyond the reach (in terms of both resources and expertise) of the mass market at this point. Rocky D of Visible Risk tackles the question, “I’m collecting packets, so what now?” in his Getting Started with Network Forensics Tools post:

With these tools we can now ask questions directly of the data and not be limited to or rely on pre-defined questions that are based on an inference of subsets of data. The blinders are off. To us, the tools themselves aren’t the value proposition – the data itself and the innovation in analytical techniques is the real benefit to the organization.

It always gets back to the security data, because any filtered and/or normalized view of the data (or metadata, as the case may be) is inherently limited – it’s hard to go back and ask the questions you didn’t know to ask at the beginning of the investigation or query. When investigating a security issue, you often don’t know what to ask ahead of time. But that pretty much breaks the model of SIEM (and most security, by the way), because you need to define the patterns you are looking for up front. Of course we know attackers are unpredictable by nature, so it is getting harder and harder to isolate attacks based on what we know attacks look like.

When used properly, network forensic tools can fundamentally change your security organization from the broken alert-driven model into a more effective data-driven analytic model.

It’s hard not to agree with this position, but the details remain squishy. Conceptually we buy this analytics-centric view of the world, where you pump a bunch of security data through a magic machine that finds patterns you didn’t know were there – the challenge is to interpret what those patterns really mean in the context of your problem. And that’s not something that will be automated any time soon, if ever. But unless you have the data, the whole discussion is moot anyway. So start collecting packets now, and figure out what to do with them later.
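A minimal sketch of the collect-now, ask-later model, assuming a capture file on disk and the scapy packet library – the filename and the specific question are hypothetical; the point is that the question can be defined long after collection:

```python
from scapy.all import rdpcap, TCP, IP  # pip install scapy

packets = rdpcap("capture.pcap")  # hypothetical capture collected weeks ago

# A question nobody thought to ask at collection time: which internal
# hosts connected out to TCP port 4444?
suspects = {
    pkt[IP].src
    for pkt in packets
    if IP in pkt and TCP in pkt and pkt[TCP].dport == 4444
}
print(sorted(suspects))
```

A SIEM that only stored the alerts its rules matched at the time could never answer this; the raw packets can.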

