Securosis Research

Where Are We? Nowhereville.

It’s been about 11 months since the first time I ever spoke with Joshua Corman. He had this idea for a Rugged Software movement and wanted some feedback. After he filled me in on the concept, I told him I thought it was a good idea, and that I was in. A few weeks later the Rugged Manifesto was published. There was a flurry of blog posts and a bunch of email discussions, which ended in February of this year. Since then, I have heard … crickets. New stuff on RuggedSoftware.org? No. OWASP? Nada. Twitter? Presentations? Chat groups? Pretty much not a damned thing. So what’s up, guys? Where is the movement? What problems have been solved? Don’t ask what is missing from software security; ask what’s missing from Rugged! Josh, David, and Jeff … I am calling you out!

When the Agile Manifesto was originally published, there were a lot of frustrated software engineers who had specific problems, gripes, and issues they wanted to address. They did not necessarily have the right answers, nor did they know what tools and techniques would work (either for themselves or for others), but they identified specific problems to address (lack of planning, fear, outside influencers, periodic validation, people’s inability to estimate, etc.), and had a bunch of stuff in their bag of tricks (pair programming, task cards, stories, test-driven development, etc.).

The Rugged Software movement has some of the same ingredients: a bunch of frustrated security professionals who want code to be secure. Many very public security failures illustrate the need. And we have the same piss-poor failure analysis and finger-pointing that looks very familiar as well. But I don’t think we have adequately identified the problems that contribute to insecure code, and we definitely don’t have a bag of tricks ready to go. If you look closely at Agile techniques, most are actually process changes, without much to do with actual code. The processes were designed to address issues of complexity and lack of metrics, and to minimize negative human interaction. Do we have a similar set of guidance for Rugged? Nope.

What’s missing? At this stage, at least three things:

  • Clear, concise, no-BS descriptions of the problems that lead to insecure code.
  • Some simple techniques, tricks, and ideas to help people get around these problems.
  • Some people to help get this rolling.

I am not trying to shove all this onto the backs of the three gents who started this movement. They need help. I want to help. And I know that there are many security pros and coders who will help as well. And I really don’t want to see another year of inactivity on this (sorry) ‘non-movement’. Ultimately I think Rugged is the right idea. But just like 11 months ago, it is a concept without direction. We don’t need a complete roadmap – early extreme programming sure wasn’t complete – but we do need to start moving the effort forward with some basic ways to solve problems. I push. You push back. Or not. Your call.


Friday Summary: December 10, 2010

The Securosis team is here in San Francisco, meeting with vendors and presenting at the TechTarget Data Protection event. Weather has been reasonable and the food was awesome. But it’s been going non-stop since something like 3:00am (What is it now? 11:01pm?), so this summary will be a short one. I got to talk to a lot of people today. The common questions I get when meeting with vendors are, “What are you seeing? What are you hearing? What are the new technologies?” I had to stop and think about that last one for a minute. I am not really seeing any new technologies or innovation. I am seeing lots of platforms consolidating multiple technologies under a single umbrella. I am seeing configuration and vulnerability assessment vendors redefine their spaces, seeing web application security vendors bundle their products in different ways, hearing more interest in how to develop secure code, witnessing DAM go from misunderstood platform to well-regarded feature, hearing lots of interest in taking advantage of fast and cheap cloud resources, and getting even more questions about what cloud security actually means. But new technologies? Not really. Yet this is one of the most interesting times in security that I have seen in the 15 years since I started working in the field. APT, Stuxnet, skimming, money mules, spam kings, and hacktivism all together make for fascinating reading. And there are tons of really good software conferences around the world with lots of great presentations. But not a lot of problems that we don’t have some solutions for. Have we reached a point where the flood of innovation has created enough tools, and now we just need to use them properly?

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian’s paper on Oracle database security (PDF; registration required).
  • Adrian’s article on Database Password Crackers.
  • Rich speaking at the Cloud Security Alliance Congress next week.
I’m co-presenting with Hoff again, and premiering my new Quantum Datum pitch on information-centric security for cloud computing. Haven’t been this excited to present new content in a long time.

Favorite Securosis Posts
  • David Mortman: Where Are We? Nowhereville.
  • Mike Rothman: My 2011 Security Predictions. Rich is so funny. Especially lampooning this ridiculous season of predictions.
  • Adrian Lane: What Amazon AWS’s PCI Compliance Means to You.
  • Rich Mogull: Edge Tokenization.

Other Securosis Posts
  • Adrian Speaking at NRF in January.
  • Infrastructure Security Research Agenda 2011 – Part 1: Positivity.
  • Incite 12/8/2010: the Nutcracker.
  • RIP Marty Martian.
  • React Faster and Better: Introduction.
  • Incident Response Fundamentals: Index of Posts.
  • What Quantum Mechanics Teaches Us about Data Leaks.

Favorite Outside Posts
  • Mike Rothman: Shearing Firesheep with the cloud. Great step by step tutorial for building an OpenVPN server in Amazon AWS. Literally anyone can do this. Even me!
  • David Mortman: Unpeeling the mystique of tamper-indicating seals. Coincidentally, Defcon now has a tamper-evident seal contest…
  • Adrian Lane: Comment Induced Follow-up post. The comments are the post.
  • Rich: One Click Application Security – How did we get here?

Project Quant Posts
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.

Top News and Posts
  • Basics of History Sniffing.
  • JavaSnoop Analysis Tool Released.
  • Amazon Receipt Generator Scam.
  • Cloud Vulnerability Scanner Launched.
  • Cloud WAF launched.
Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Daniel, in response to My 2011 Security Predictions: “The Hoffachino becomes an official Starbucks drink and secures their public wireless by it’s pure awesomeness.” Yeah, they are that awesome!


Edge Tokenization

A couple months ago Akamai announced Edge Tokenization, a service to tokenize credit card numbers for online payments. The technology is not Akamai’s – it belongs to CyberSource, a Visa-owned payment processing company. I have been holding off on this post for a couple months in order to get a full briefing from CyberSource, but that is not currently happening, and this application of tokenization technology is worth talking about, so it’s time to forge ahead. I preface this by stating that I don’t write much about specific vendor announcements – I prefer to comment on trends within a specific industry. That’s largely because most product announcements are about smaller iterative improvements or full-blown puffy marketing doublespeak. To avoid being accused of being in somebody’s pocket, I avoid product announcements, except the rare cases that are important enough to demand discussion. A new deployment model for payment processing and tokenization qualifies. So what the heck is edge tokenization? Just what it sounds like: tokenization embedded in Akamai’s distributed edge platform. As we defined in our series on tokenization a few months ago, edge tokenization is functionally exactly the same as any other token server. It substitutes sensitive credit card/PAN data with a token as a reference to the original value. What’s different in this model is that it’s basically offloading the payment service to the Akamai infrastructure, which intercepts the credit card number before the merchant can receive it. The PAN and the rest of the payment data are passed to one of several payment gateways. At least theoretically it is – I have not verified which processors or how they are selected. CyberSource software issues the tokens during Akamai’s payment transaction with the buyer, and sends the token to the merchant as confirmation of approved payment. 
One of the challenges of tokenization is to enable the merchant to have full control over the user experience – whether point-of-sale or web – while removing their systems from the scope of a PCI audit. But from a security standpoint, removing the merchant is ideal. Edge tokenization allows the merchant to have control over the online shopping experience, but be completely out of the picture when handling the credit card. Without more information I cannot tell whether the merchant is more or less removed from the sensitive aspects than with any other token service, but it looks like fewer merchant systems should be exposed. No service is ever simply ‘drop-in’, despite vendor assurances, so there will be some integration work to perform. But from Akamai’s descriptions it looks like the adaptations are no different than what you would do to accept tokens directly. This is one of several reasons I want to drill into the technology, but that will have to wait until I get more information from CyberSource. This announcement is important because it’s one of the few tokenization models that completely removes the merchant from processing the credit card. They only get a token on the back end as a transactional reference, and Akamai’s service takes care of clearing the payment and any needed remediation. Depending on how the integration is performed, this form of tokenization should also reduce PCI scope (just like those from NuBridges, Protegrity, RSA, and Voltage). Additionally, it’s built into the web infrastructure, instead of the merchant site. This gives merchants another option in case they are unhappy with the price, performance, or integration requirements of their existing payment processor’s tokenization offering (or lack thereof). And you would be surprised how often tokenization latency is the number one concern of merchants – rather than security. Imagine that!
Finally, the architecture is inherently scalable, suitable for firms with multiple sites, and compatible with disaster recovery and failover. From what I understand, tokens are single-use random numbers created on a per-merchant basis, so token generation should be very simple and fast. I do have a bit of an ethical dilemma talking about this service, as Visa owns CyberSource. Creating a security standard for merchants to comply with, and then selling them a service to make them compliant, seems a bit dodgy to me. Sure, it’s great revenue if you can get it, but merchants are paying Visa – indirectly – to handle Visa’s risk, under Visa’s terms. This is our common refrain about PCI here at Securosis. But I guess this is the way things go. Trustwave offering tools to solve PCI checklist items that Trustwave QSAs review is not much different, and the PCI Council does not seem to consider that a conflict of interest. I doubt CyberSource’s Visa connection will raise concern either. In the big picture the goal is better security in order to reduce fraud, and for merchants it’s less risk and less cost – edge tokenization does both. I’ll update this post as I learn more.
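To make the single-use, per-merchant token model concrete, here is a minimal sketch. This is an illustration only – not CyberSource’s actual implementation – and the class and method names are invented:

```python
import secrets

class TokenVault:
    """Illustrative single-use token vault: maps a random token to a PAN
    within a per-merchant namespace. A real service adds hardened storage,
    format preservation, audit trails, and the payment-gateway plumbing."""

    def __init__(self):
        self._vault = {}  # (merchant_id, token) -> PAN

    def tokenize(self, merchant_id: str, pan: str) -> str:
        # A fresh random value per transaction: single-use by construction,
        # so generation is just randomness plus a lookup-table insert.
        token = secrets.token_hex(8)
        self._vault[(merchant_id, token)] = pan
        return token

    def detokenize(self, merchant_id: str, token: str) -> str:
        # Only the payment processor ever calls this; the merchant keeps
        # the token purely as a transaction reference.
        return self._vault.pop((merchant_id, token))

vault = TokenVault()
t = vault.tokenize("merchant-42", "4111111111111111")
assert t != "4111111111111111"                         # merchant never holds the PAN
assert vault.detokenize("merchant-42", t) == "4111111111111111"
```

Because the tokens are random and single-use, there is no cryptographic relationship to reverse and no key management on the merchant side – which is why generation can be fast enough that latency, not security, becomes the merchant’s main worry.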


Speaking at NRF in January

I am presenting at the National Retail Federation’s 100th annual convention in January 2011. I’ll be talking about the past, present, and future of data security, and how new threats and technologies affect payment card security. I am co-presenting with Peter Engert, who is in charge of payment card acceptance at Rooms To Go furniture, and Robert McMillon of RSA. Robert works with RSA’s tokenization product and manages the First Data/RSA partnership. We’ll each give a short slide presentation on what we are seeing in the industry, then we’ll spend the latter half of the session answering questions on any payment security issues you have. The bad news is that the presentation is on Sunday at 10:00 AM, on the first full day of the conference. The good news is that both my co-presenters are very sharp guys, and I expect this to be a very entertaining session. If you are not attending the conference, I’ll be around Sunday night and Monday morning, so shoot me an email if you are in town and want to chat! I look forward to seeing you.


Are You off the Grid?

I got email from friends this week about a web site that creeped them out. It’s called Spokeo, and it provides a Google-like search on personal information. Rather than creeped out, I was fascinated. Not to look for other people, but to see what the search found for me. I hate mentioning it as I am not endorsing the web site or service, but I can’t help my fascination at seeing what personal data has been collected and aggregated on me. I actually have a larger Internet footprint than I expected! This tool is kinda like Firesheep for personal information: the data is already out there, this site just shoves in your face how easy it is for anyone to collect basic stuff about you. But the friends who directed me to the site were genuinely worried that criminals would use the site to locate single women in their late 70s in order to create a robbery target list. Seriously … that explicit. I told them they needed counseling as they probably had ‘mommy’ issues. I find this ridiculous because in Arizona we have ‘Sun City’ – the age-restricted community where everyone seems to be over 70, with some of the lowest crime rates in the county. I make a big deal about personal data because I believe no good deed goes unpunished. Shared personal information will sooner or later be used against you. My personal phobia is that an insurance company will write an automated crawler for personal data, consider something I do ‘risky’, and quadruple my rate for fun. Yeah, I probably need counseling as well. The paranoid part of me wanted to know how much more I had exposed myself. I looked myself up in various states, with and without my middle name. In most cases it’s easy to see where the data came from. Facebook. LinkedIn. Yelp. Some information has to be public because of government regulations. Sometimes it looks like data collected from other people’s contact lists that I never authorized, which is why I found old phone numbers from decades past.
In some cases I couldn’t tell – I looked on all of the social media I use and couldn’t find a reference. I knew I would eventually see a tool like this; it just took a decade or so to arrive. What made me laugh is that my years of paranoia have paid off, which shows up in how much they get wrong. Whenever I sign up for anything online I always use make-believe data: age, race, contact information, etc. Sure, some digital profiles are work-related and so can’t be totally fake, but it’s kinda fun to see that I am a late-40s Hispanic woman to much of the digital world. Still, private as I am, I lost the bet with my wife, who has less public data out there. She is virtually invisible online. “Ha! Take that, Mr. Privacy Expert!” was her comment.


Holiday Shopping and Security Theater

This is usually the time of year I write a how-to article on safe seasonal shopping. Some of it is the usual generic advice – use a credit card, don’t click email links, use merchants you trust, etc. – but I like to include specific advice for new seasonal threats. Wading into the deluge of threat warnings about Black Friday shopping schemes this year, I found mostly noise. There are plenty of real attacks consumers should be worried about, but many aren’t worth the attention. And every article seems to have a particular agenda. For example, I have a hard time believing SMS banking scams are a real threat to holiday shoppers, in the same way I can’t imagine someone falling for a Nigerian banking scam or turning off their refrigerator because of a crank call. Some are so narrowly targeted that the news is only interesting to the most dedicated security researchers. Other attacks combine good old-fashioned fraud with a few Search Engine Optimization shenanigans to game the system, causing a lot of people grief, but they persist until law enforcement makes them a priority to investigate. The dozens of articles out there all seemed to feed the security theater, making it much harder to know what’s a real threat and what’s not. I don’t know if Bruce Schneier coined the term Security Theater, but he’s certainly the first person I heard use the expression. Over the years I thought I knew exactly what he meant: pretending to do something about security while not really doing much of anything. But every couple years I find a new wrinkle in the concept, and now the term embraces several variants.
To my mind there are at least four additional variations on this theme, all quasi-political: Grandstanding: For the pure selfish desire to be front and center in a discussion, and a relevant force in the industry, talking about security topics in overheated terms such as ‘Cyber-War’, taking the popular side on a one-sided issue like spam, or stating “X technology is dead!” Voyeuristic Groupies: The audience for security theater. If you have ever been to Washington DC and watched the lawyers and lobbyists huddle around politicians and policy makers for the sheer enjoyment of watching partisan politics as if it were Shakespearean theater, you know what I am talking about. The audience for security theater is simply fascinated by the hacks and clever ways in which hardware, software, and people are subverted. They love security rock stars. Hacking news may not contain much actionable information, but this audience feeds on the drama. Red Herring: Cry loudly about one problem, while studiously avoiding equally troubling issues. A little security theater redirects the spotlight away from the real problem. Like how to protect oneself from Firesheep, when the real problem is security irresponsibility and sloppy web site coding practices, which are much harder to tackle. Or focusing attention on ATM skimmer fraud becoming more of a problem while releasing very little information on the rates of compromised point-of-sale computers that serve credit card readers. Both are serious security problems – and I am guessing that they cause equal financial losses – but we have published numbers in one instance and not for the other. I understand why: one makes the bank or merchant look like the victim, but the other makes them look too cheap/lazy/incompetent to provide security. Reverse Scamming: The ATM skimming article referenced above states that there are technologies that solve these problems, such as ‘Chip-and-PIN’ systems. 
The theoretical argument is that this system is better because it uses two-factor authentication (knowing your PIN and having the card with the chip in it), but in practice these systems have been hacked with great success. Look no further than European ATM fraud rates if you have any doubt. If you are a vendor of such technologies, it’s sure great to have people think you can solve the problem, and maybe even get it adopted as a standard. What better way to fill the company coffers? One thing we know for sure is that online fraud rates are on the rise, and both companies and individuals are targets. What we don’t have this year is one or two popular attack types to warn users about – rather, we are seeing every known type. And this is further clouded by seeing more ‘spin’ on security news than I have ever seen before. So this year’s advice is simple: use your head, and use your credit card. Hopefully that will keep you out of trouble, or at least reduce your liability if you do find any.


Ranum’s Right, for the Wrong Reasons

Information Security Magazine’s November issue is available. In it is an interesting rehash of the security monoculture debate between Bruce Schneier and Marcus Ranum some 8 years ago. Basically the hypothesis was that if all your software is provided by one vendor, a single security vulnerability means everyone is vulnerable. The result is a worldwide cascade of failures. The term “domino effect” was thrown around to describe what would happen. I remember reading that debate when it first came out, but the most interesting aspect of this discussion is actually how much the threat landscape has changed in 8 years. Much of the argument was based on a firm with a culture of insecurity. Who knew Microsoft would take security seriously, and dramatically improve their products? Who knew that corporate espionage would be a bigger threat than DDoS? And that whole Apple thing … total surprise. All in all I tend to agree with Ranum’s position, but not because of the shaky points he raised. It’s not because everyone patches at different rates, or that some systems are “loosely coupled” or in “walled gardens”, or even that the organism analogies suck. It’s because of two things: Resiliency – Marcus’s point about the first part of the scenario – hacked systems every week for the last 15 years – is spot on. But the Internet continues to rumble along, warts and all. I don’t think this has much to do with differences in the way servers are managed; it’s that companies are a lot better at disaster recovery than they are at security. Recover from tape, patch, and move on. We know how to do this. We got hacked, we fixed the immediate problem, and we moved on. Vulnerabilities – Even if we had very small communities of software developers, is there any reason whatsoever to believe security would be better? Just because we don’t have write-once, exploit-everywhere malware, it does not mean that all the smaller vendors would not have been hacked.
Just because Microsoft was a large target does not mean Adobe was any more secure. Marcus has published research on how people studiously avoid accepting blame for stupid decisions and are likely to repeat them. Even without a monoculture, classes of vulnerabilities like buffer overflow, SQL injection, and DoS are common to all software. And classes of people persist as well. It would take hackers more time and effort for every system they attack in a diversified model, but they would still be able to hack them. And since the goal is usually stealthy theft of data, the probability of detecting compromise also falls. We did see millions of web sites, applications, and databases compromised over the last 8 years. And we know many more were never made public. And we have no way to calculate the cost in terms of lost productivity, or the damage due to corporate espionage. But recent APT attacks using unpublished Microsoft 0-day attacks, such as the recent Stuxnet attack, show it does not matter whether it’s mainstream software from a single large vendor, or obscure SCADA software nobody’s ever heard of. Every piece of software I have ever encountered has had security bugs. Monoculture or otherwise, we’ll see lots of vulnerable software. I could offer an organism-based analogy, or a parable about genetics and software development, but that would probably just annoy Marcus more than I already have.


Availability and Assumptions

Skipped out of town for a much needed vacation Friday, and spent the weekend in a very remote section of desert. I spent my time hiking to the top of several peaks and overlooking vast areas of uninhabited country. I rode quads, wandered around a perfectly intact 100 year old mine shaft, did some target practice with a new rifle, built giant bonfires, and sat around BSing with friends. A total departure from everyday life. So I was in a semi-euphoric state, and trying to ease my way back into work. I was not planning on delving into complex security philosophy and splitting semantic hairs. But here I am … talking about Quantum Datum. Rich’s Monday FireStarter is a departure from the norm for security. The resultant comments, not so much. Cloud, SaaS, and other distributed resource usage models are eviscerating perimeter-based security models. For a lot of you who read this blog that’s a somewhat tired topic, but what to do about it is not. You need to view Rich’s comments from a data perspective. If the goal is to secure data, and the data must be self-defending because it can’t trust the infrastructure, what we do today breaks. As is his habit, Gunnar Peterson succinctly captured the essence of the friction between IT & Security in response to Mike’s “Availability Is Job #1” post: “I agree that availability is job 1, its just not security’s job. We have built approx zero systems that have traditional cia, time to move on.” And we fall back into the same mindset, as we don’t have a mental picture of what Rich is talking about. The closest implementations we have are DLP and DRM, and they are still off the mark. I look at traditional C-I-A as a set of goals for security in general, and attribution as a tool – much in the way encryption and access control are tools. Rereading Rich’s post, I think I missed some of the subtleties.
Rich is describing traits that self-defending data must possess, and attribution is the most difficult to construct because it defines specific use cases. Being so entrenched in our current way of thinking limits our ability to even discuss this topic in a meaningful way, because we have to unlearn certain rules and definitions. Is availability job 1? Maybe. If you’re a public library. If you’re the Central Intelligence Agency, no way. Most data will fall somewhere between these two extremes, and should have restrictions on how it is available. So the question becomes: when is data available? Attribution helps us determine what’s allowed, or when data is available, and under what circumstances. But we build IT systems with the concept that the more people can access and use data, the more value it has. Rich is right: treating all data like it should be available is a broken model. Time to learn a new C-I-A.


Friday Summary: November 19, 2010

I got distracted by email. The Friday Summary was going to be about columnar databases. I think. Maybe it’s the flu I have had all week, or my memory is going, or just perhaps the subject was not all that interesting to begin with. But the email that distracted me was kind of funny and kinda sad. A former friend and co-worker contacted me for the first time in something like 10 years. Out of the blue. The gist of the email was that he was being harassed by someone with threatening emails. After a while he started to worry and wondered if the mystery harasser was serious. So he contacted the police and forwarded the information to the FBI. No response. He met with the police, and they had no interest in further investigation unless there was something more substantive. You know, like a chalk outline. In frustration he reached out to me to see if he could discover the sender. Now I am not exactly a forensics expert, but I can read email headers and run reverse DNS lookups and whois. And in about three minutes I walked this person through the email header and showed the originating accounts, domains, and servers. Easy. Now I must assume that if you know about email header information and don’t want to be traced, with a little effort you could easily cover your tracks. Temp Gmail or Yahoo accounts? Cloud or hijacked servers, or even a public library computer, to hide your tracks? No? How about using your freakin’ Blackberry with your real email account, but just changing the user name? Yeah, that’s the ticket! I am occasionally happy that there are stupid people on the planet. Oh, and since you asked for it (and you know who you are), here’s the Monkey Dance: (-shuffle-shuffle-spin-shuffle-backflip). The video is too embarrassing to post. Yeah, you can make us dance for a 99-cent Kindle subscription. You ought to see what we do for an $8k retainer!
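For the curious, the three-minute walkthrough looks something like this. A sketch using Python’s standard library on a made-up message – the accounts, domains, and IP addresses are all invented for illustration:

```python
import re
from email import message_from_string

# A fabricated raw message with two Received hops.
RAW = """\
Received: from mail.example.net (mail.example.net [203.0.113.7])
\tby mx.recipient.org with ESMTP; Fri, 19 Nov 2010 10:00:00 -0700
Received: from sender-laptop (198.51.100.23)
\tby mail.example.net; Fri, 19 Nov 2010 09:59:58 -0700
From: "Totally Anonymous" <realaccount@example.net>
Subject: you know what you did

hi
"""

msg = message_from_string(RAW)
# Each relay PREPENDS its own Received header, so the last one in the
# list is the hop closest to the true origin.
hops = msg.get_all("Received")
ips = [re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", h).group(1) for h in hops]
print("originating IP:", ips[-1])  # prints: originating IP: 198.51.100.23
# From here: socket.gethostbyaddr(ips[-1]) for a reverse DNS lookup, and
# a whois query on the network block to identify the sending account/domain.
```

Received headers can of course be forged by the sender’s own relays, but everything below the first trustworthy hop is exactly what you read off in a case like this one.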
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Someone seems to think we’re one of the top 5 security influencers. Rich thinks Rothman must have paid them.
  • Rich’s presentation at the Cloud Security Congress mentioned in this SearchSecurity article.
  • Adrian’s comments on a database security survey.

Favorite Securosis Posts
  • Mike Rothman: Datum Entanglement. Rich’s big thoughts on where information-centric security needs to go. At least the start of those big thoughts…
  • Rich: Rethinking Security.
  • Adrian Lane: Datum Entanglement. Geek out! Le Geek, C’est Chic.

Other Securosis Posts
  • Incite 11/17/2010: Hitting for Average.
  • What You Need to Know about DLP for PCI 2.0.
  • React Faster and Better: Mop up, Analyze, and QA.

Favorite Outside Posts
  • Mike Rothman: 2011: The Death of Security As We Know IT or Operationalizing Security. From Amrit: “Security must be operationalized, it must become part of the lifecycle of everything IT.” Yeah, man.
  • Rich: Brian Krebs on the foolishness of counting vulnerabilities.
  • Adrian Lane: Amrit’s Operationalizing Security. Because, in its current position, security can only say “No”.
  • Gunnar Peterson: Challenge of Sandboxing by Scott Stender.

Project Quant Posts
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.

Research Reports and Presentations
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.

Top News and Posts
  • Adobe Releases Reader X with Sandbox.
  • FreeBSD Sendmail Problem; update: The Problem Is with Gmail.
  • Lawmakers take away TSA’s fringe benefits.
  • Drive-by Downloads Still Running Wild.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity.
This week’s best comment goes to Ian Krieger, in response to Datum Entanglement. Whilst it is a really stupidly-complex [sic] introduction it gets you in the right frame of mind, that is the complexities in securing data (yes I’m talking the plural here) when you have the ability to copy, or extract, it. Looking forward to the next pieces and see where your presentation goes.


MS Atlanta: Protection Is Not Security

Microsoft has announced the beta release of something called Microsoft Codename “Atlanta”, which is being described as a “Cloud-Based SQL Server Monitoring tool”. Atlanta is deployed as an agent that embeds into SQL Server 2008 databases and sends telemetry information back to the Microsoft ‘cloud’ on your behalf. This data is analyzed and compared against a set of configuration policies, generating alerts when Microsoft discovers database misconfiguration. How does it do this? It looks at configuration data and some runtime system statistics. The policies seem geared toward helping DBAs with advanced SQL features such as mirroring, clustering, and virtual deployments. It looks at version and patch information, and it collects some telemetry data to assist with root cause analysis for performance issues and failures. And finally, the service gets information into Microsoft’s hands faster, in an automated fashion, so support can respond more quickly to requests. The model is a little different than most cloud offerings, as it’s not the infrastructure that’s being pushed to the cloud, but rather the management features. Analysis does not appear to occur in real time, but this limitation may be lifted in the production product. If you are like me, you might have gotten excited for a minute thinking that Microsoft had finally released a vulnerability assessment tool for SQL Server databases, but alas, “Atlanta” does not appear to be a vulnerability assessment tool at all. In fact, it does not appear to have general configuration policies for security either. Like most System Center products, “Data Protection” for SQL Server actually means integrity and reliability, not privacy and security. If you have ever read the “How to protect Microsoft SQL Server” white paper, you know exactly what I mean. So if you were thinking you could get protection and configuration management for security and compliance, you will have to look elsewhere.
The good news is that I don’t see any serious downside or imminent security concern with Atlanta. The data sent to the cloud does not present a privacy or security risk, and the agent does not appear to provide any command and control interface, so it is less likely to be exploitable. Small IT teams could benefit from automated tips on how the database should be set up, so that’s a good thing. As the feature set grows you will need to pay close attention to changes in agent functionality and to what data is being transferred. If this evolves to start pushing database contents around like Data Protection Manager, a serious security review will be warranted.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.