2011 Research Agenda: the Practical Bits

I always find it a bit of a challenge to fully plan out my research agenda for the coming year. Partly it's because I'm easily distracted, and partly it's my recognition that a lot of moving cogs will draw me in different directions over the coming year. This is best illustrated by the detritus of some blog series that never quite made it over the finish line. But you can't research without a plan, and the following themes encompass the areas I'm focusing on now and plan to continue through the year. I know I won't be able to cover everything in the depth I'd like, so I could use feedback on what you folks find interesting. This list is as much about the areas I find compelling from a pure research standpoint as what I might write about. This post is about the more pragmatic focus areas, and the next post will delve into more forward-looking research.

Information-Centric (Data) Security for the Cloud

I'm spending a lot more time on cloud computing security than I ever imagined. I've always been focused on information-centric (data) security, and the combination of cloud computing adoption, APT-style threats, the consumerization of IT, and compliance is finally driving real interest in and adoption of data security. Data security consistently rates as a top concern – security or otherwise – when adopting cloud computing. This is driven in large part by the natural fear of giving up physical control of information assets, even if the data ends up being more secure than it was internally. As you'll see at the end of this post, I plan on splitting my coverage into two pieces: what you can do today, and what to watch for in the future. For this agenda item I'll focus on practical architectures and techniques for securing data in various cloud models using existing tools and technologies. I'm considering writing two papers in the first half of the year, and it looks like I will be co-branding them with the Cloud Security Alliance:

• Assessing Data Risk for the Cloud: A cloud and data specific risk management framework and worksheet.
• Data Security for Cloud Computing: A dive into specific architectures and technologies.

I will also continue my work with the CSA, and am thinking about writing something up on cloud computing security for SMB, because we see pretty high adoption there.

Pragmatic Data Security

I've been writing about data security, and specifically pragmatic data security, since I started Securosis. This year I plan to compile everything I've learned into a paper and framework, plus issue a bunch of additional research delving into the nuts and bolts of what you need to do. For example, it's time to finally write up my DLP implementation and management recommendations, to go with Understanding and Selecting. The odds are high I will write up File Activity Monitoring, because I believe it's at an early stage and could bring some impressive benefits – especially for larger organizations. (FAM is coming out both stand-alone and with DLP.) It's also time to cover Enterprise DRM, although I may handle that more through articles (I have one coming up with Information Security Magazine) and posts. I also plan to run year two of the Data Security Survey so we can start comparing year-over-year results. Finally, I'd like to complete a couple more Quick Wins papers, again sticking with the simple and practical side of what you can do with all the shiny toys that never quite work out like you hoped.

Small Is Sexy

Despite all the time we spend talking about enterprise security needs, the reality is that the vast majority of people responsible for implementing infosec in the world work in small and mid-sized organizations. Odds are it's a part-time responsibility – or at most one or two people who spend a ton of time dealing with compliance. More often than not this is what I see even in organizations of 4,000-5,000 employees. A security person (who may not even be a full-time security professional) operating in these environments needs far different information than large enterprise folks. As an analyst it's very difficult to provide definitive answers in written form to the big company folks, when I know I can never account for their operational complexities in a generic, mass-market report. Aside from the Super Sekret Squirrel project for S


React Faster and Better: Incident Response Gaps

In our introduction to this series we mentioned that the current practice of incident response isn't up to dealing with the compromises and penetrations we see today. It isn't that the incident response process itself is broken – how companies implement response is the problem. Today's incident responders are challenged on multiple fronts. First, the depth and complexity of attacks are significantly more advanced than commonly discussed. We can't even say this is a recent trend – advanced attacks have existed for many years – but we do see them affecting a wider range of organizations, with a higher degree of specificity and targeting than ever before. It's no longer merely the defense industry and large financial institutions that need to worry about determined persistent attackers. In the midst of this onslaught, the businesses we protect are using a wider range of technology – including consumer tools – in far more distributed environments. Finally, responders face the double-edged sword of a plethora of tools: some highly effective, others contributing to information overload.

Before we dig into the gaps we need to provide a bit of context. First, keep in mind that we are focusing on larger organizations with dedicated incident response resources. Practically speaking, this probably means at least a few thousand employees and a dedicated IT security staff. Smaller organizations should still glean insight from this series, but probably don't have the resources to implement the recommendations. Second, these issues and recommendations are based on discussions with real incident response teams. Not everyone has the same issues – especially across large organizations – nor the same strengths. So don't get upset when we start pointing out problems or making recommendations that don't apply to you – as with any research, we generalize to address a broad audience.

Across the organizations we talk with, some common incident response gaps emerge:

• Too much reliance on prevention at the expense of monitoring and response. We still find even large organizations that rely too heavily on their defensive security tools rather than balancing prevention with monitoring and detection. This imbalance of resources leads to gaps in the monitoring and alerting infrastructure, with inadequate resources for response. All organizations are eventually breached, and targeted organizations always have some kind of attacker presence. Always.
• Too much of the wrong kinds of information too early in the process. While you do need extensive auditing, logging, and monitoring data, you can't use every feed and alert to kick off your process or in the initial investigation. And to expect that you can correlate all of these disparate data sources as an ongoing practice is ludicrous. Effective prioritization and filtering is key.
• Too little of the right kinds of information too early (or late) in the process. You shouldn't have to jump straight from an alert into manually crawling log files. By the same token, after you've handled the initial incident you shouldn't need to rely exclusively on SIEM for your forensics investigation and root cause analysis. This again goes back to filtering and prioritization, along with sufficient collection. It also requires two levels of collection for your key device types: first, what you can collect continuously; second, the much more detailed information you need to pinpoint root cause or perform post-mortem analysis.
• Poor alert filtering and prioritization. We constantly talk about false positives because those are the most visible, but the problem is less that an alert triggered, and more determining its importance in context. This ties directly to the previous two gaps, and requires finding the right balance between alerting, continuing collection of information for initial response, and gathering more granular information for after-action investigation.
• Poorly structured escalation options. One of the most important concepts in incident response is the capability to smoothly escalate incidents to the right resources. Your incident response process and organization must take this into account. You just can't effectively escalate with a flat response structure; tiering based on multiple factors such as geography and expertise is key (a rough sketch of what that can look like follows at the end of this post). And this process must be determined well in advance of any incident. Escalation failure during response is a serious problem.
• Response whack-a-mole. Responding without the necessary insight and intelligence leads to an ongoing battle where the organization is always one step behind the attacker. While you can't wait for full forensic investigations before clamping down on an incident to contain the damage, you need enough information to make informed and coordinated decisions that really stop the attack – not merely a symptom. So balancing hair-trigger response against analysis paralysis is critical to minimize damage and potential data loss.

Your goal in incident response is to detect and contain attacks as quickly as possible – limiting the damage by constraining the window within which the attacker operates. To pull this off you need an effective process with graceful escalation to the right resources, to collect the right amount of the right kinds of information to streamline your process, to do ongoing analysis to identify problems earlier, and to coordinate your response to kill the threat instead of just a symptom. But all too often we see flat response structures, too much of the wrong information early in the process, too little of the right information late in the process, and a lack of coordination and focus that allows the bad guys to operate with near impunity once they establish their first beachhead. And let's be clear: they have a beachhead. Whether you know about it is another matter.

In our next couple of posts Mike will start talking about what information to collect and how to define and manage your triggers for alerts. Then I'll close out by talking about escalation, investigations, and intelligently kicking the bad guys out.
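To make the tiering idea concrete, here is a minimal sketch of priority-based escalation routing. It's Python, and the tier names, scoring weights, and factors are hypothetical illustrations rather than a recommended model – every organization slices geography and expertise differently:

```python
# Minimal sketch of tiered alert escalation -- illustrative only.
# Tier names, weights, and thresholds are hypothetical; a real program
# defines these well in advance of any incident.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str            # e.g. "dlp", "ids", "netflow"
    severity: int          # 1 (low) to 5 (critical)
    asset_criticality: int # 1 to 5, pulled from the asset inventory
    region: str            # e.g. "emea", "apac", "amer"

def priority(alert: Alert) -> int:
    # Context matters more than the raw trigger: weight the alert by
    # what it fired on, not just the fact that it fired.
    return alert.severity * alert.asset_criticality

def escalation_tier(alert: Alert) -> str:
    p = priority(alert)
    if p >= 20:
        return "tier3-forensics"        # deep-dive specialists
    if p >= 10:
        return f"tier2-{alert.region}"  # regional responders
    return "tier1-triage"               # initial filtering and validation

alert = Alert(source="dlp", severity=4, asset_criticality=5, region="emea")
print(priority(alert), escalation_tier(alert))  # -> 20 tier3-forensics
```

The point isn't the specific math – it's that the routing decision is written down and agreed on before the incident, not improvised during one.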


Friday Summary: December 17, 2010

I think we can firmly declare December 2010 the Month of Pwnage. Between WikiLeaks, Gawker, McDonald's, and Anonymous DDoS attacks, I'm not sure infosec has been in the news this much since the early days of big data breaches. Heck, I haven't been in the news this much since I got involved with the Kaminsky DNS thing. To be honest, it's a little refreshing to have a string of big stories that don't involve Albert Gonzalez.

But here's the thing I find so fascinating: in a very real sense, most of these high-profile incidents are meaningless compared to the real compromises occurring daily out there. Our large enterprise clients are continuously compromised and mostly focused on minimizing the damage. While everyone worries about Gawker passwords, local bad guys are following delivery trucks and stealing gifts off doorsteps – our local police nailed someone who hit a dozen houses and 50 gifts, and Pepper also had a couple incidents. I can no longer tell someone my profession without hearing a personal – generally recent – story of credit card or bank fraud. Heck, this week my bank teller described how a debit card she cut up months earlier was used for online purchases. But I guess none of that is nearly as interesting as Gizmodo and Lifehacker account compromises. Or DDoS attacks that don't cause any real damage. And even that story became pretty darn funny when they tried to attack Amazon… which is sort of like trying to deflect the course of the Sun with a flock of highly motivated carrier pigeons.

I love my job. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Rich quoted in the Wall Street Journal.
• Rich also quoted by the AP on the Gawker hack… which made it into a couple hundred publications. For the record, I wasn't trying to downplay the severity to Gawker, but to contrast vandalism-style attacks (however severe) against financially motivated ones. Some of the context was lost, and I can't blame the journalist.
• Network Security Podcast, Episode 225.
• Mike quoted in Weighing Optimism vs. Pragmatism.
• Dark Reading on Gawker Goof.

Favorite Securosis Posts

• David Mortman: Market Maturity and Security Competitive Advantage.
• Mike Rothman: Get over it. If we spent half the time we spend bitching actually doing stuff, a lot more would get done. Rich has it exactly right in this one.
• Adrian Lane: Market Maturity and Security Competitive Advantage. Not sure the title captures the essence, but an important lesson in how the security industry is shaped.
• Rich: Sigh. Everyone stole my fave (Market Maturity). I guess we should have written more this week.

Other Securosis Posts

• React Faster and Better: Incident Response Gaps.
• Infrastructure Security Research Agenda 2011 – Part 4: Egress and Endpoints.
• Infrastructure Security Research Agenda 2011 – Part 3: Vaulting and Assurance.
• Incite 12/15/2010: It's not a sprint….
• Infrastructure Security Research Agenda 2011 – Part 2: Posturing and Reacting Faster/Better.
• Quick Wins with DLP Webinar.

Favorite Outside Posts

• Rich: The Real Lessons Of Gawker's Security Mess. Daniel nails it with some hype-free, useful, in-depth coverage. Some serious pwnage here.
• Adrian Lane: DO NOT poke the bear. And the beauty is that it ends with 1.
• David Mortman: The Flawed Legal Architecture of the Certificate Authority Trust Model.
• Mike Rothman: Can't measure love. xkcd via Chandler. We can't measure everything, but we can measure some things. And that's key to remember for 2011 planning.
• Pepper: Avast! Beware 'pirates'!. I just wish 'Avast' could be the most 'pirated' software of all time, because the name is just too perfect.

Research Reports and Presentations

• The Securosis 2010 Data Security Survey.
• Monitoring up the Stack: Adding Value to SIEM.
• Network Security Operations Quant Metrics Model.
• Network Security Operations Quant Report.
• Understanding and Selecting a DLP Solution.
• Understanding and Selecting an Enterprise Firewall.
• Understanding and Selecting a Tokenization Solution.

Top News and Posts

• Major Ad Networks Found Serving Malicious Ads.
• Backscatter X-Ray Machines Easily Fooled (PDF).
• Back door in HP network storage solution – Update.
• Mozilla Adding Web Applications to the Security Bug Bounty Program.
• Dancing Snowman storms its way across Facebook.
• OpenBSD has FBI backdoor, claims contractor. Most likely a hoax.
• Your email deserves due process.
• Over 500 patches for SAP.
• HeapLocker Tool Protects Against Heap-Spray Attacks.
• Twitter Spam Results from Gawker Leak.
• Gawker Password Pwnage.
• Microsoft to address IE, Stuxnet flaws.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Marisa, in response to Get over it.

Only my dad calls it The BayThreat, Rich. :p Gal Shpantzer also had a great talk at DojoCon this weekend about "Security Outliers", using analogies from other health and safety industries to tackle the subjects of infosec education and adoption. Seems like there is hope out there, and when the security industry is as old as sterilization practices in hospitals we'll be seeing more trickle-down adoption.


Quick Wins with DLP Webinar

Back in April I published a slightly different take on DLP: Low Hanging Fruit: Quick Wins with Data Loss Prevention. It was all about getting immediate value out of DLP while setting yourself up for a full deployment. On Wednesday at 11:30am EST I'll be giving a free presentation on that material. If you're interested, you can register at the Business of Information Security site.


Get over It

Over the weekend I glanced at Twitter and saw a bit of hand-wringing inspired by something going on at (I think) the BayThreat in California. This is something that's been popping up quite a bit on Twitter and in blog posts for a while now. The core of the comments centered on the problem of educating the unwashed security masses, combined with the problems induced by a compliance mentality, and the general "they don't understand" and "security is failing" memes. (Keep in mind I'm referring to a bunch of comments over a period of time, and not pointing fingers, because I'm over-generalizing.)

My response? You can probably figure it out from the title of this post. I long ago stopped worrying about the big picture. I accepted that some people understand security, some don't, and we all suffer from déformation professionnelle (a cognitive bias: losing the broader perspective due to our occupation). In any risk management profession it's hard to temper our daily exposure to the worst of the worst with the attitudes and actions of those with other priorities. I went through a lot of similar hand-wringing first in my physical security days, and then with my rescue work. Ask any cop or firefighter and you'll see the same tendencies.

We need to keep in mind that others won't always share our priorities, no matter how much we explain them, and no matter how well we "speak the language of business". The reality is that unless someone suffers noticeable pain or massive fear, human nature will limit how they prioritize risk. And even when they do get hit, the changes in thought from the experience fade over time. Our job is to keep slogging through: doing our best to educate as we optimize the resources at our disposal, and staying prepared to swoop in when something bad happens and clean up the mess. Which we will then probably be blamed for.

Thankless? Only if you want to look at it that way. Does it mean we should give up? No, but don't expect human nature to change either. If you can't accept this, all you will do is burn yourself out until you end up as an alcoholic passed out behind a dumpster, naked, with your keys up your a**.

Fight the good fight. But only if you can still sleep well at night.


My 2011 Security Predictions

1. Someone will predict a big cyberattack someplace, which may or may not happen.
2. Someone will predict a big SCADA attack/failure someplace, which probably won't happen, but I suppose it's still possible.
3. Someone will predict that Apple will do something big that enterprises won't adopt. Then they will.
4. Someone will predict some tech will die, which is usually when a lot of people will buy it.
5. Most people will renew every security product currently in their environment, no matter how well it works (or doesn't).
6. Someone will predict that this time it's really the year mobile attacks happen, stealing everyone's money and nekked photos off their phones. It probably won't happen, and if it does the press headlines will all say 'iPhone' even if it only affects Motorola StarTACs.
7. Vendors will scare customers into thinking 20 new regulations are right around the corner – all of which require their products.
8. There will be a lot of predictions with the words "social networking", "2.0", "consumerization", "Justin Bieber", and whatever else is trending on Twitter the day they write the predictions.
9. Any time there's a major global event or disaster, I will receive at least 8 press releases from vendors claiming bad guys are using it for spam/phishing.
10. Some botnet will be the biggest.

And a bonus:

11. The Securosis Disaster Recovery Breakfast at RSA will totally rock.

Did I miss anything?

Update – 12. Someone will predict cloud computing will cause/fix all these other problems. (via @pwrcycle)


What Amazon AWS’s PCI Compliance Means to You

This morning Amazon announced that Amazon Web Services achieved PCI-DSS 2.0 Validated Service Provider compliance. This is both a very big deal, and no big deal at all. Here's why:

• This certification means the AWS services within scope (EC2, EBS, S3, and VPC – most of the important bits) passed an annual assessment by a QSA and undergo quarterly scans by an ASV. Amazon's infrastructure is now certified to support payment system applications and services (anything that takes a credit card).
• This is a big deal because there is no longer any question (until something changes) that you are allowed to deploy a payment system/application on AWS.
• Just because AWS is certified doesn't mean you are. You still need to deploy a PCI compliant application/service, and anything on AWS is still within your assessment scope. But any assessment you pay for will be limited to your installation – the back-end AWS components are covered by Amazon's assessment, and your assessor won't need to pound through all of Amazon to certify your environment deployed on AWS. Chris Hoff presciently wrote about this the night before Amazon's announcement.
• Anything on your side that's in scope (containing PAN data) is still in scope and needs to be assessed, but there is no longer any question that you can deploy into AWS (another big deal).
• The "big whoop" part? As we said, your systems are still in scope even if you deploy on AWS, and still need to be assessed (and compliant).
• The open question? PCI-DSS 2.0 doesn't address multi-tenancy concerns (which Amazon actually notes in their release). This is a huge political battleground behind the scenes (ask anyone in the virtualization SIG), and just because AWS is certified as a service provider doesn't mean all cloud IaaS providers will be, nor that there won't be a multi-tenancy failure on AWS leading to exposure of cardholder data.
• Compliance (still) != security.

For a practical example: you can store PAN data on S3, but it still needs to be encrypted in accordance with PCI-DSS requirements. Amazon doesn't do this for you – it's something you need to implement yourself, including key management, rotation, logging, etc. If you deploy a server instance in EC2 it still needs to undergo ASV scans and meet all other requirements, and will be assessed by your QSA (if in scope).

What this certification really does is eliminate any doubt that you are allowed to deploy an in-scope PCI system on AWS, and reduce your assessment scope to only your bits on AWS, not the entire service. That's a big deal, but your organization's overall assessment scope isn't necessarily reduced, as it might be when you move to something like a tokenization service and reduce your handling of PAN data.
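As an illustration of that S3 point, here is a minimal client-side encryption sketch in Python using the boto3 and cryptography libraries. The bucket and object names are hypothetical, and real PCI-DSS key management, rotation, and audit logging are deliberately out of scope:

```python
# Minimal sketch: encrypt PAN data client-side before storing it in S3.
# NOT a complete PCI-DSS control -- key management, rotation, and audit
# logging (all required by PCI-DSS) are omitted for brevity.
import boto3
from cryptography.fernet import Fernet

# In practice the key comes from a managed key store; never hardcode it
# or generate it ad hoc like this.
key = Fernet.generate_key()
cipher = Fernet(key)

pan = b"4111111111111111"         # example test PAN
ciphertext = cipher.encrypt(pan)  # authenticated symmetric encryption

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-payment-bucket",  # hypothetical bucket name
    Key="cardholder/record-0001",
    Body=ciphertext,
)

# Later: fetch and decrypt -- Amazon only ever sees ciphertext.
obj = s3.get_object(Bucket="example-payment-bucket", Key="cardholder/record-0001")
assert cipher.decrypt(obj["Body"].read()) == pan
```

The design point is that the data is opaque before it ever leaves your side, so the storage service never handles cleartext PAN data.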


What Quantum Mechanics Teaches Us about Data Leaks

Thanks to some dude who looks like a James Bond villain and rents rack space in a nuclear-bomb-resistant underground cavern, combined with a foreign nation running the equivalent of a Hoover mated with a Xerox over the entire country, "data leaks" are back in the headlines. While most of us intuitively understand that preventing leaks completely is impossible, you wouldn't know it from listening to various politicians/executives/pundits. We tend to intuitively understand the impossibility, but we don't often dig into why – especially when it comes to technology.

Lately I've been playing with aspects of quantum mechanics as metaphors for information-centric (data) security. When we start looking at problems like protecting data in the highly distributed and abstracted environments enabled by virtualization, decentralization, and cloud computing, they are eerily reminiscent of the transition from the standard physics models (which date back to Newton) to the quantum world that came with the atomic age.

My favorite new way to explain the impossibility of preventing data leaks is quantum tunneling. Quantum tunneling is one of those insane aspects of quantum mechanics that defies our normal way of thinking about things. Essentially it tells us that elementary particles (like electrons) have a chance of moving across any physical barrier, regardless of size – even if the barrier clearly requires more energy to pass than the particle possesses. This isn't just a theory – it's essential to the functioning of real-world devices like scanning tunneling microscopes, and it explains radioactive particle decay.

Quantum tunneling is due to the wave-particle duality of these elementary particles. Without going too deeply into it: these particles exhibit aspects of both particles and waves. One consequence is that we can't ever really put our finger on both the absolute position and momentum of a particle, which means particles live in a world defined by probabilities. Although the probability of a particle passing the barrier is low, it's within the realm of the possible, and thus with enough particles and time it's inevitable that some of them will cross the barrier.

Data loss is very similar conceptually. In our case we don't have particles, we have datums (a datum, for our purposes, being the smallest unit of data with value). Instead of physical barriers we have security controls. For a datum the probabilities involve location and movement; for security controls, effectiveness. Combine these and we learn that for any datum, there is a probability of it escaping any security control. The total function covers all the copies of that datum (the data) and the combined effectiveness of all the security controls across the various exit vectors. This is a simplification of the larger model, but I'll save that for a future geekout (yes, I even made up some equations – a rough sketch follows at the end of this post). Since no set of security controls is ever 100% effective for all vectors, it's impossible to prevent data leaks. Datum tunneling.

But this same metaphor also provides some answers. First of all, the fewer copies of the datum (the less data) and the fewer the vectors, the lower the probability of tunneling. The larger the data set (a collection of different datums), the lower the probability of tunneling if you use the right control set. In other words, it's a lot easier to get a single credit card number out the door despite DLP, but DLP can be very effective against larger data sets – if it's well positioned to block the right vectors. We're basically increasing the 'mass' of what we're trying to protect. In a different case, such as a movie file, the individual datum has more 'mass' and is thus easier to protect.

Distill this down and we get back to standard security principles: How much are we trying to protect? How accessible is it? What are the ways to access and distribute/exfiltrate it? I like thinking in terms of these probabilities to remind us that perfect protection is an impossibility, while still highlighting where to focus efforts to reduce overall risk.


Friday Summary: December 3, 2010

What a week. Last Monday and Tuesday I was out meeting with clients and prospects, and was totally psyched about all the cool opportunities coming up. I was a bit ragged on Wednesday, but figured it was lack of sleep. Nope. It was the flu. The big FLU, not its little cousin the cold. I was laid up in bed for 4 days, alternating between shivering and sweating. I missed our annual Turkey Trot 10K, Thanksgiving, and a charity dinner at our zoo I'd been looking forward to all year. Then a bronchial infection set in, resulting in a chest X-ray and my taking (at one point) five different meds. Today (Thursday) is probably my first almost-normal day of work since this started. Those of you in startups know the joy of losing time you didn't plan to lose.

But all is not lost. We are in the midst of some great projects we'll be able to provide more detail on in the coming months. We are partnering with the Cloud Security Alliance on a couple things, and finally building out our first product. I'm actually getting to do some application design work again, and I forgot how much I miss it. I really enjoy research, but even though the writing and presenting portion is a creative act, it isn't the same as building something. Not that I'm doing much of the coding. No one needs a new "Hello World" web app, no matter how cleverly I can use the <BLINK> tag.

On a different note, we are starting (yes, already) to put together our 2011 Guide to RSA. We think we have the trends we will cover nailed, but if you have something you'd like in the guide, please let us know. And don't forget to reserve Thursday morning for the Disaster Recovery Breakfast.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Adrian on Database Password Crackers.
• Rich quoted in SC: WikiLeaks prompts U.S. government to assess security. No easy tech answers for leaks, folks.
• Mike on consumerization of IT security issues at Threatpost.
• Rich wasn't on the Network Security Podcast, but you should listen anyway.

Favorite Securosis Posts

• Adrian Lane: Criminal Key Management Fail. No Sleep Till…
• David Mortman: Are You off the Grid?
• Mike Rothman: Are You off the Grid? You've got no privacy. Get over it. Again.
• Rich: Counterpoint: Availability Is Job #1. Actually, read the comments. Awesome thread.

Other Securosis Posts

• I can haz ur email list.
• Incite 12/1/10: Pay It Forward.
• Holiday Shopping and Security Theater.
• Grovel for Budget Time.
• Ranum's Right, for the Wrong Reasons.
• Incident Response Fundamentals: Phasing It in.
• Incite 11/24/2010: Fan Appreciation.
• I Am T-Comply.
• Meatspace Phishing Encounter.
• Availability and Assumptions.

Favorite Outside Posts

• Adrian Lane: Security Offense vs. Defense. It's a week old, but I thought this post really hit the mark.
• David Mortman: Software [In]security: Cyber Warmongering and Influence Peddling.
• Mike Rothman: And Beyond…. We all owe a debt of gratitude to RSnake as he rides off into the sunset. To pursue, of all things, happiness. Imagine that.
• Rich: More than just numbers. Jack Jones highlights why no matter what your risk approach – quantitative or qualitative – you need to be very careful in how you interpret your results.
• Mike Rothman: Palo Alto Networks Initiates Search for Top Executive. Rarely do you see a white-hot private startup take out the CEO publicly over differences in "management philosophy." The board room conversations must have been ugly.
• Chris Pepper: Modern Espionage and Sabotage.

Project Quant Posts

• NSO Quant: Index of Posts.

Research Reports and Presentations

• The Securosis 2010 Data Security Survey.
• Monitoring up the Stack: Adding Value to SIEM.
• Network Security Operations Quant: Metrics Model.
• Network Security Operations Quant Report.
• Understanding and Selecting a DLP Solution.
• White Paper: Understanding and Selecting an Enterprise Firewall.
• Understanding and Selecting a Tokenization Solution.

Top News and Posts

• Chrome Gets a Sandbox for the Holidays.
• WordPress Fixes Vuln.
• RSnake's 1000th post.
• Top Web Hacking Techniques Contest. Some great links!
• Robert Graham and the TSA. Kinda fun following his rants about the TSA.
• User Profiles Security Issue on Twitter.
• Ford employee stole $50M worth of secrets.
• Armitage UI for Metasploit.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week is a bit different – we had a ton of amazing comments on Firestarter: A Is Not for Availability and Counterpoint: Availability Is Job #1. Far too many to choose just one, so this is a group award that goes to: Somebloke, Mark Wallace, endo, Steve, Paul, ds, Dean, Matt Franz, Lubinski, LonerVamp, Franc, mokum von Amsterdam, TL, sark, and Andrew Yeomans. And of course, Adrian, Mike, Gunnar, and Mortman.


Meatspace Phishing Encounter

I had an insanely early flight this morning for some client work in the Bay Area, so last night I hopped out to fill up on gas and grab some pizza for family movie night (The Muppets Take Manhattan, in case you were wondering). I'm at the gas station when the guy at the pump next to me asks if I ever shop at Target. This is the sort of question that raises my wariness under most circumstances, and since we were, at that moment, about 100 meters from said Target, this line of conversation was clearly headed someplace interesting. My curiosity piqued, I said, "Yes."

My pump-mate then proceeded with his pitch: "We're just trying to get some cash to find a place to stay tonight. I have this $50 gift card that I'll sell you for $40…"

"No thanks."

I realize it's been over two decades since I lived in New Jersey (the part that likes to say they're from New York), but some instincts never die. Anyone reading this blog knows that said gift card was, shall we say, certified pre-owned. The odds of there being $0.01 left on it, never mind $50, were significantly lower than those of my baby's diaper not requiring a full hazmat response. Or it was totally fake.

This isn't that significant an event. Most of you encounter this sort of stuff every couple years or so, at a minimum. I even once fell for an artful scam when I was traveling abroad, although my paranoia did manage to constrain the damage. But I do find the parallels with online scams interesting. Unlike my overseas adventure, this dude was clearly not the most trustworthy person on the face of the planet. That's one nice thing about online – even with bad grammar, no one knows you smell like a wet dog on a three-week bender, and look like Lindsay Lohan after a weekend drug vacation with Charlie Sheen. And this dude had to run from location to location, because sitting still for very long would result in a call to law enforcement. Never mind that each contact is a one-off, costing time and gas. Perhaps it's an effective scam, but certainly not an efficient one.

Anyway, it's been a long time since someone tried to defraud me face to face, so it was kind of refreshing.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and to provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.