Friday Summary: June 25, 2010

Thursday was totally shot. I wasted the entire day standing around – eight hours and twenty-nine minutes standing in line. I got in line at 5:50 AM and did not get back in my car until 3:00. Yep, it was Apple iPhone day, and I did not have a reservation. If you like people-watching, this is about as much fun as you will ever have. There were some 700 people at the mall by 6:30 AM. Close to me in line were two women with infants, and they were there all day. There were small children with their grandparents. The guy next to me had a shattered foot from a recent car accident. There were people calling their bosses, not to call in sick, but to tell them they were taking the day off to buy iPhones. These people were freakin’ dedicated.

I have not stood in line for any length of time since 1983, trying to get a good seat for Return of the Jedi. I have not stood in line without knowing whether I would get what I was there for since the Tutankhamun exhibit in, what, 1979? This is not something I do, but I wanted the phone. And actually I did not want the ‘phone’, but everything else. I wanted a (mostly) secure way to get email on the road. I wanted a mobile device to surf the web. I wanted a way to find Thai food. I wanted a better camera. I wanted a way to get directions when traveling. I wanted to have music with me. I wanted to access files in Dropbox whenever and wherever. And the BlackBerry did none of these things well, if at all. Plus, as a device, the BlackBerry is a poorly-engineered turd in comparison. I was just done with it, and I wanted the iPhone, and I wanted it before Black Hat. So there I stood, for eight and a half hours, holding a place in line for a guy with a broken foot so he could sit on the mall couch.

I have to say the Apple employees were great. Every 30 minutes they brought us water and Starbucks coffee. Every 15 minutes they brought snacks. They sent employees into the line to chat. They brought sample phones and sat with us, in line, to demo the differences. They thanked us for sticking it out. They asked us if we needed anything, held places in line, and brought food. They took care of every part of the transaction, including dealing with AT&T and their inability to process credit cards without dialing up Equifax. Great products and great service … it’s like I was transported back in time to an age when those things mattered.

All in all I am glad I waited it out and got my phone. The camera is amazing. The display is crystal clear. The phone does not have the hideous ‘pops’ in audio that blow my ears out, or randomly shut off for 20 seconds like the BlackBerry. And the FaceTime feature works really well, for what it’s worth. Would I do it again? Would I stand there for 8.5 hours? Ask me in another 25 years. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Chris Pepper gives us The Sun Also Sets. Time to kick Oracle a little. What a bloody fiasco!
• Adrian’s Dark Reading post on Open Source Database Security Issues.
• Rich on the Network Security Podcast, number 202.

Favorite Securosis Posts

• Rich: Understanding and Selecting SIEM/LM: Deployment Models. Adrian and Mike do a great job of diagramming out the different deployment models. Really clear.
• Mike Rothman: The Open Source Database Security Project. Adrian needs to flex his database security kung-fu, and we aren’t going to get in his way. Help him out – it’s a great project.
• Adrian Lane: Trustwave Acquires Breach. I have not seen anyone openly discuss the apparent conflicts of interest, nor how this changes PCI compliance, the way Rich has captured it.

Other Securosis Posts

• Understanding and Selecting a Tokenization Solution: Introduction.
• Are Secure Web Apps Possible?
• Incite 6/23/2010: Competitive Fire.
• FireStarter: Is Full Disk Encryption without Pre-Boot Secure?
• Return of the Security Start-up?
• Friday Summary: June 18, 2010.
• Doing Well by Doing Good (and Protecting the Kids).

Favorite Outside Posts

• Rich: Why the Disclosure Debate Doesn’t Matter. Dennis nails it. Bad guys don’t give a rat’s ass what we think of disclosure; they still have plenty to own us with.
• Mike Rothman: Security Intelligence: Defining APT Campaigns. Good analysis from Mike Cloppert of what’s involved in detecting a multi-faceted, complex intrusion. If you have a great forensics person who is good at this, pay them more. Those skills are gold.
• Adrian Lane: Anti-WAF Software Only Security Zealotry. Only because Jeremiah wrote this before I did.

Project Quant Posts

• DB Quant: Manage Metrics, Part 1, Configuration Management.
• DB Quant: Protect Metrics, Part 4, Web Application Firewalls.
• DB Quant: Protect Metrics, Part 3, Masking.
• DB Quant: Protect Metrics, Part 2, Encryption.
• DB Quant: Protect Metrics, Part 1, DAM Blocking.
• NSO Quant: Manage IDS/IPS Process Map.
• DB Quant: Monitoring Metrics, Part 2, Audit.
• DB Quant: Monitoring Metrics, Part 1, DAM.
• NSO Quant: Manage Firewall Process Map.
• DB Quant: Secure Metrics, Part 4, Shield.
• DB Quant: Secure Metrics, Part 3, Restrict Access.
• DB Quant: Secure Metrics, Part 2, Configure.

Research Reports and Presentations

• White Paper: Endpoint Security Fundamentals.
• Understanding and Selecting a Database Encryption or Tokenization Solution.
• Low Hanging Fruit: Quick Wins with Data Loss Prevention.
• Report: Database Assessment.

Top News and Posts

• Firefox & Opera updates.
• Improving HTTPS Side Channel Attacks.
• Google wins Viacom suit.
• MS plans 10 new patches. SharePoint and IE are the big ones.
• Cyber Thieves Rob Treasury Credit Union.
• Ukrainian arrested in India on TJX data-theft charges. These incidents go on for years, not days or even months.
• iPhone PIN code worthless. Rich published on this a long time ago, but automounting on Ubuntu is new and disturbing. Previously people believed you had to jailbreak


DB Quant: Manage Metrics, Part 3, Change Management

Believe it or not, we are down to our final metrics post! We’re going to close things out today with change management – something that isn’t specific to security, but comes with security implications. Our change management process is:

• Monitor
• Schedule and Prepare
• Alter
• Verify
• Document

Monitor

• Time to gather change requests
• Time to evaluate each change request for security implications

Schedule and Prepare

• Time to map request to specific actions/scripts
• Time to update change management system
• Time to schedule downtime/maintenance window and communicate

Alter

• Time to implement change request

Verify

• Time to test and verify changes

Document

• Time to document changes
• Time to archive scripts or backups

Other Posts in Project Quant for Database Security

• An Open Metrics Model for Database Security: Project Quant for Databases
• Database Security: Process Framework
• Database Security: Planning
• Database Security: Planning, Part 2
• Database Security: Discover and Assess Databases, Apps, Data
• Database Security: Patch
• Database Security: Configure
• Database Security: Restrict Access
• Database Security: Shield
• Database Security: Database Activity Monitoring
• Database Security: Audit
• Database Security: Database Activity Blocking
• Database Security: Encryption
• Database Security: Data Masking
• Database Security: Web App Firewalls
• Database Security: Configuration Management
• Database Security: Patch Management
• Database Security: Change Management
• DB Quant: Planning Metrics, Part 1
• DB Quant: Planning Metrics, Part 2
• DB Quant: Planning Metrics, Part 3
• DB Quant: Planning Metrics, Part 4
• DB Quant: Discovery Metrics, Part 1, Enumerate Databases
• DB Quant: Discovery Metrics, Part 2, Identify Apps
• DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
• DB Quant: Discovery Metrics, Part 4, Access and Authorization
• DB Quant: Secure Metrics, Part 1, Patch
• DB Quant: Secure Metrics, Part 2, Configure
• DB Quant: Secure Metrics, Part 3, Restrict Access
• DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring
• DB Quant: Monitoring Metrics, Part 2, Audit
• DB Quant: Protect Metrics, Part 1, DAM Blocking
• DB Quant: Protect Metrics, Part 2, Encryption
• DB Quant: Protect Metrics, Part 3, Masking
• DB Quant: Protect Metrics, Part 4, WAF
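The reason for capturing these time variables is to roll them up into a cost per change. Here is a minimal sketch of that roll-up in Python; the phase and variable names mirror the metrics above, while the hours and the loaded hourly rate are hypothetical placeholders rather than Securosis figures.

```python
# Minimal roll-up of the change management metrics above.
# Hours and hourly rate are illustrative placeholders, not Securosis data.

HOURLY_RATE = 85.0  # assumed fully loaded cost per staff hour

# Hours spent on each time variable, grouped by process phase
change_mgmt_hours = {
    "Monitor": {
        "gather change requests": 1.5,
        "evaluate security implications": 2.0,
    },
    "Schedule and Prepare": {
        "map request to actions/scripts": 1.0,
        "update change management system": 0.5,
        "schedule maintenance window and communicate": 0.5,
    },
    "Alter": {"implement change request": 3.0},
    "Verify": {"test and verify changes": 2.0},
    "Document": {"document changes": 1.0, "archive scripts or backups": 0.5},
}

def phase_cost(variables, rate=HOURLY_RATE):
    """Cost of one phase: sum of its time variables multiplied by the loaded rate."""
    return sum(variables.values()) * rate

total = 0.0
for phase, variables in change_mgmt_hours.items():
    cost = phase_cost(variables)
    total += cost
    print(f"{phase}: ${cost:,.2f}")
print(f"Total cost per change: ${total:,.2f}")
```

Swap in your own measured hours and loaded rate; the structure of the roll-up is the only part that matters here.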


Are Secure Web Apps Possible?

We security folks are a tough crowd, and we have trouble understanding why stuff that is obvious to us isn’t so obvious to everyone else. We wonder why app developers can’t understand how to develop a secure application. Why can’t they grok SDL or run a damn scanner against the application before it goes live? Q/A? Ha. Obviously that’s for losers. And those sentiments aren’t totally misplaced. There is a tremendous amount of apathy regarding software security, and the incentives for developers to do it right just aren’t there. But it’s not all the developers’ fault, because for the most part secure coding is a dream. Yeah, maybe that’s harsh, and I’m sure the tool vendors will be hanging me in effigy soon enough, but that’s how it seems to me. Says the guy who hasn’t developed real code for 18+ years and leaves the application security research to folks (like Adrian) who are qualified to have an opinion. But not being qualified never stopped me from having an opinion before.

I come to this conclusion after spending some time trying to digest a post by Errata Security’s Rob Graham on the AT&T iPad hack. Rob goes through quite a few application security no-nos, quoting chapter and verse, pointing them out in this rather simple attack. This specific attack vector doesn’t appear in the OWASP Top 10 list, nor should it. But it underscores the difficulty of really securing an application, and the need to not just run a scanner against the code, but to really exercise the business logic before turning the app loose on the world. Rob’s post talks about information leakage, security via obscurity, the blurring line between internal and external, and other ways to make an application do unintended things, usually ending in some kind of successful attack.

So does that mean we give up, which seemed to be one of the messages from the Gartner show this week (hat tip to Ed at Securitycurve)? Not so much, but we have to continue aggressively managing expectations. If you have smart guys like Rob, RSnake, or Jeremiah beat the crap out of your application, they will find problems. Then you’ll have an opportunity to fix them before the launch. In a perfect world, this is exactly what you would do, but it certainly isn’t the cheapest or fastest option. On the other hand, you can run a scanner against the code and eliminate much of the lowest-hanging fruit that the script kiddies would target. That’s certainly an option, but the key to this approach is to make sure everyone knows a talented attacker specifically targeting your stuff will win. So when an attack not explicitly mentioned in your threat model (like the AT&T/iPad attack) happens, you will have to deal with it. And if you have some buddies in the FBI, maybe you can even get the hacker arrested on drug charges… Or you could do nothing like most of the world, and act surprised when a 12-year-old in Estonia sells your customer data on a grey-market website.

To think we can really develop secure web applications is probably a pipe dream – depending on our definition of ‘secure’, obviously. But we certainly can make our apps more secure, and outrun our slower competitors, if not the bear. Most of the time that’s enough.
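For a sense of what “exercising the business logic” looks like in practice, here is a minimal sketch of the kind of enumeration check that generic scanners rarely catch – the class of flaw behind the AT&T/iPad leak. It walks object IDs near your own with a single authenticated test account and flags any response that returns data you don’t own. The URL, header, and response marker are hypothetical placeholders, not a real API.

```python
# Hypothetical business-logic check: can an authenticated user pull other
# users' records just by walking nearby object IDs? All names below are
# placeholders for your own application, not a real service.
import requests

BASE_URL = "https://app.example.com/accounts"
session = requests.Session()
session.headers["Authorization"] = "Bearer TEST-ACCOUNT-TOKEN"  # test credentials only

my_account_id = 1000
leaks = []
for candidate in range(my_account_id - 50, my_account_id + 51):
    if candidate == my_account_id:
        continue  # our own record is supposed to come back
    resp = session.get(f"{BASE_URL}/{candidate}", timeout=10)
    # Any 200 response carrying account details for an ID we don't own is a leak.
    if resp.status_code == 200 and "account_number" in resp.text:
        leaks.append(candidate)

print("IDs returning data that is not ours:", leaks or "none")
```

Run this only against applications you own or are authorized to test; the point is that this check lives in the threat model and the test plan, not in a scanner signature.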


Incite 6/23/2010: Competitive Fire

I’ve always been pretty competitive. Back in high school my friends and I would boast about who would have more of this or that, who would steal the other’s wife, and so on. Yes, it was silly high school ego run rampant, but I thought life was a zero-sum game back then. Win/win was not in my vocabulary. I win, you lose, that’s it.

I carried that competitive spirit into the first 15 years or so of my working career. At META, it was about my service selling more than yours, about being able to stake out overlapping coverage areas and winning the research battle. In the start-up world, it was about raising the money and beating the other companies with similar stories and models. Then in a variety of vendor gigs, each in very competitive market spaces, it was about competing and winning and having a better story and giving the sales team better tools to win more deals. Nothing was ever good enough – not at work, not at home, and not in my own head. Yeah, I was frackin’ miserable. And I made most of the people around me miserable as well.

When I was told my services were no longer needed at CipherTrust, I saw it as an opportunity to go in a different direction. To focus on helping folks do better, as opposed to winning whatever ‘needed’ to be won. It wasn’t exactly a conscious decision, but I knew I needed a change in focus and attitude. For the most part, it worked. I was much happier, I was doing better, and I was less grumpy. Then I stepped back into corporate life, but to be honest, my heart wasn’t in it. I didn’t care if we lost a specific deal, because we should be able to get into a lot of deals and statistically we’d be OK. Of course, I had to mask that indifference, but ultimately, for a lot of reasons, it didn’t make sense for me to continue in that role. So I left and got back to where I could help folks, and not worry about winning.

But you can’t entirely escape competition. Now I play softball on Sundays with a bunch of old guys like me. But some of them still have that competitive fire burning, and to be honest it gets annoying. When someone boots a ground ball or lines out with runners on, these guys get all pissed off. We lost a one-run game last Sunday, after coming back from 3 runs down in the last inning. I was happy with that effort – we didn’t give up. Others were pissed. Personally, I play softball because it’s fun. I get outside, I run around, I get my couple of at-bats and make a few plays in the field. But when guys get all uppity about not winning or someone making a mistake, it’s demotivating to me. I’ve got to find a way to tune out the negativity and still have fun playing. Or I’ll need to stop, which is the wrong answer. But I am working too hard at being positive (which is not my default mode) to hang around with negative people.

Yes, I like to win. But I don’t need to win anymore. And I’m a lot happier now because of it. But that’s just me. – Mike.

Photo credits: “win win” uploaded to Flickr by TheTruthAbout…

Recent Securosis Posts

• Understanding and Selecting SIEM/LM: Deployment Models.
• Trustwave, Acquisitions, PCI, and Navigating Conflicts of Interest.
• FireStarter: Is Full Disk Encryption without Pre-Boot Secure?
• Return of the Security Start-up?
• Doing Well by Doing Good (and Protecting the Kids).
• Take Our Data Security Survey & Win an iPad.

Incite 4 U

Different NAC strokes for different folks – A few weeks ago, Joel Snyder talked about what went wrong with NAC. It was a good analysis of the market issues. Joel’s conclusion is that there isn’t really a standard set of NAC features, but rather a number of different breeds. Which basically means there is no market – not a consistent one, anyway. No wonder the category has struggled – nobody can agree on what problem the technology is supposed to solve. Joel also points out some of the political issues of deploying a solution that spans network, endpoint, and security teams. This week NetworkWorld published Joel’s review. He does like some of the products (those based on 802.1X, like Avenda, Enterasys, and Juniper), and has issues with some of the others (ForeScout and Trustwave). But ultimately the review highlights the reality of the market, which is that there isn’t one. – MR

DRM dreams – Designing DRM systems in 1996, I had big hopes that digital lockers would be a popular choice to secure content for people to share on the Internet. I thought everyone from banking systems to media distribution could benefit. By 1998 that dream faded, as nobody was really interested in secure content storage or delivery. But it turns out someone has the same dreams I did: hackers embrace DRM as a way to hide pirated content, as reported on Yahoo! News. Basically pirated video is wrapped up in a protective blanket of encryption, which can then be moved and stored freely, without detection by content analysis tools. Porn, pirated movies, and whatever else can be distributed without fear of being inspected and discovered. And this model works really freakin’ well when the buyer and seller want to keep their activity a secret. Hollywood may have complained bitterly about pirated DVDs, but this particular delivery model will be near impossible to stop. No, Cyber-nanny will not cut it. There are only a handful of ways to catch and prosecute this type of crime. Law enforcement will have to figure out how to police the exchange of decryption keys for money. – AL

Disclosure is religion – I’ve been known to write and talk about the disclosure debate, but I’m starting to wonder if it’s


The Open Source Database Security Project

I am thinking about writing a guide to secure open source databases, including verification queries. Do you all think that would be useful?

For the most part, when I write about database security, I write about generic approaches that apply to all database platforms. I think this is helpful for database managers, as well as security and IT professionals who have projects that span multiple database types. When writing the Database Security Fundamentals series, my goal was to provide a universal checklist of the database security basics that anyone with basic DBA skills could accomplish in a week. DBAs who work in large enterprises may have established guidelines, but small and medium-sized firms generally don’t, and I wanted the series to provide awareness of what to look for and what to do. I also find that mainstream Oracle DBAs tune out because I don’t provide specific queries or discuss native features. The downside is that the series covers what to do, but not how to do it. By taking a more abstract look at the problems to be solved across security and compliance, I cannot provide specific details that will help with Oracle, Sybase, Teradata, PostgreSQL, or others – there are simply too many policies for too many platforms for me to cover sufficiently. Most DBAs know how to write the queries to fulfill the policies I outlined. For the non-DBA security or IT professional, I recognize that what I wrote leaves a gap between what you should do and how to do it. To close this gap you have a couple of options:

• Acquire tools like DAM, encryption, and assessment from commercial vendors
• Participate on database chat boards and ask questions
• RTFM
• Make friends with a good DBA

Yes, there are free tools out there for assessment, auditing, and monitoring. They provide limited value, and that may be sufficient for you. I find the free assessment tools pretty bad, because they usually only work for one database and their policies are miserably out of date. Further, if you try to get assessment from a commercial vendor, they don’t cover open source databases like Derby, PostgreSQL, MySQL, and Open Ingres. These platforms are totally underserved by the security community, but most have very large installed user bases. You have to dig for information, and cobble together stuff for anything that is not a large commercial platform.

So here is what I am thinking: through the remainder of the year I am going to write a security guide to open source databases. I will create an overview for each of the platforms (PostgreSQL, Derby, Ingres, and MySQL), and cover the basics for passwords, communications security, encryption options, and so forth, including specific assessment policies and rules for baselining the databases. Every week I’ll provide a couple of new rules for one platform, and I will write some specific assessment policies as well. This is going to take a little resourcefulness on my part, as I am not even sure my test server boots at this point, and I have never used Derby, but what the heck – I think it will be fun. We will post the assessment rules much like Rich and Chris did for the ipfw Firewall Rule Set.

So what do you think? Should I include other databases? Should I include under-served but non-open-source databases such as MS Access and Teradata? Anyone out there want to volunteer to test scripts (because frankly I suck at query execution plans and optimization nowadays)? Let me know, because I have been kicking this idea around for a while, but it’s not fully fleshed out, and I would appreciate your input.
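To give a flavor of what the platform-specific assessment rules might look like, here is a minimal sketch of two baseline checks against PostgreSQL, assuming the psycopg2 driver and an account with read access to the system catalogs. The connection details are placeholders, and these two rules are illustrative examples, not entries from the eventual guide.

```python
# Illustrative baseline assessment sketch for PostgreSQL (placeholder rules).
# Assumes the psycopg2 driver and a role that can read the system catalogs.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="postgres",
                        user="assessor", password="change-me")
try:
    with conn.cursor() as cur:
        # Rule 1: enumerate login-capable superuser roles; each one should be
        # documented and justified.
        cur.execute("SELECT rolname FROM pg_roles WHERE rolsuper AND rolcanlogin;")
        superusers = [row[0] for row in cur.fetchall()]
        print("Login-capable superusers:", superusers or "none")

        # Rule 2: check how passwords are stored and exchanged.
        cur.execute("SHOW password_encryption;")
        setting = cur.fetchone()[0]
        print("password_encryption =", setting)
        if setting in ("off", "false"):
            print("FAIL: password_encryption should not be disabled")
finally:
    conn.close()
```

The eventual guide would pair each rule with the rationale, the expected result, and equivalents for MySQL, Derby, and Ingres where they exist.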


Trustwave, Acquisitions, PCI, and Navigating Conflicts of Interest

This morning Trustwave announced their acquisition of Breach Security, the web application firewall vendor. Trustwave has been on an acquisition streak for a while now, picking up companies such as Mirage (NAC), Vericept (DLP), BitArmor (encryption), and Intellitactics (log management/SIEM). Notice any trends? All these products have a strong PCI angle, none of the companies were seeing strong sales (Trustwave doesn’t do acquisitions at large multiples of sales), and all were more mid-market focused. Adding a WAF to the mix makes perfect sense, especially since Trustwave also offers web application testing (both controls meet PCI requirement 6.6). Trustwave is clearly looking to become a one-stop shop for PCI compliance, especially since they hold the largest share of the PCI assessment market.

To be honest, there are concerns about Trustwave and other PCI assessment firms offering both assessment and remediation services. You know, the old fox-guarding-the-henhouse thing. There’s a reason regulations prohibit financial auditors from offering other services to their clients – the conflicts of interest are extremely difficult to eliminate or even keep under control. When the person making sure you are compliant also sells you tools to help you become compliant, we should always be skeptical. We all know how this goes down. Sales folks will do whatever it takes to hit their numbers (you know, they have BMW payments to make), and few of them have any qualms about telling a client they will be compliant if they buy both their assessment services and a nice package of security tools and implementation services. They’ll use words like “partners” and “holistic” to seem all warm and fuzzy. We can’t really blame Trustwave and other firms for jumping all over this opportunity. The PCI Council shows no interest in controlling conflicts of interest, and when a breach does happen, the investigation in the kangaroo court will show the company wasn’t compliant anyway.

But there is also an upside. We also know that every single client of every single PCI assessment, consulting, or product firm merely wants them to make PCI “go away”, especially in the mid-market. Having firms with a complete package of services is compelling, and companies with big security product portfolios like Symantec, McAfee, and IBM aren’t well positioned to provide a full PCI-related portfolio, even though they have many of the pieces. If Trustwave can pull all these acquisitions together, make them good enough, and hit the right price point, the odds are they will make a killing in the market. They face three major challenges in this process:

• Failing to properly manage the conflicts of interest could become a liability. Unhappy customers could lead to bad press and word of mouth, or even to changes in the PCI code to remove the conflicts, which they want to avoid at all costs. The actual assessors and consultants are reasonably well walled off, but Trustwave will need to aggressively manage its own sales force to avoid problems. Ideally account execs will only sell one side of the product line, which could help manage the potential issues.
• Customers won’t understand that PCI compliance isn’t the same as general security. Trustwave may get the blame for non-PCI security breaches (never mind the real cardholder data breaches), especially given the PCI Council’s history of playing Tuesday morning QB and saying no breached organization could possibly be compliant (even if they passed their assessment).
• Packaging all this together at the right price point for the mid-market won’t be easy. Products need real integration, including a central management console and reporting engine. This is where the real leverage is – not merely services-based integration, which is not good enough for the mid-market.

So the Breach acquisition is a smart move for Trustwave, and might be good for the market. But as an assessor, Trustwave needs to manage their acquisition strategy carefully, in ways mere product mashup shops don’t need to worry about.


Understanding and Selecting SIEM/LM: Deployment Models

We have covered the major features and capabilities of SIEM and Log Management tools, so now let’s discuss architecture and deployment models. Each architecture addresses a specific issue, such as coverage for remote devices, scaling across hundreds of thousands of devices, real-time analysis, or handling millions of events per second. Each has advantages and disadvantages in analysis performance, reporting performance, scalability, storage, and cost. There are four models to discuss: ‘flat’ central collection, hierarchical, ring, and mesh. As a caveat, none of these deployment models is mutually exclusive. Some regions may deploy a flat model, but send information up to a central location via a hierarchy. These are not absolutes, just guidelines to consider as you design your deployment to solve the specific use cases driving your project.

Flat

The original deployment model for SIM and log management platforms was a single server that collected and consolidated log files. In this model all log storage, normalization, and correlation occurs within a central appliance. All data collection methods (agent, flow, syslog, etc.) are available, but data is always stored in the same central location. A flat model is far simpler to deploy. All data and policies reside in a single location, so there are no policy or data synchronization issues. But of course a flat central collection model is ultimately limited in scalability, processing, and the quantity of data it can manage. A single installation provides a fixed amount of processing and storage, and reporting becomes progressively harder and slower as data sets grow. Truth be told, we only see this kind of architecture for “checkbox compliance”, predominantly at smaller companies with modest data collection needs. The remaining models address the limitations of this base architecture.

Ring

In the Ring model – or what Mike likes to call the Moat – you have a central SIEM server ringed by many log collection devices. Each logger in the ring is responsible for collecting data from event sources. These log archives are also used to support distributed reporting. The log devices send a normalized and filtered (so substantially reduced) stream of events to the master SIEM device. The SIEM server sitting in the middle is responsible for correlation of events and analysis. This architecture was largely designed to address scalability limitations with some SIEM offerings. It wasn’t cost effective to scale the SIEM engine to handle mushrooming event traffic, so surrounding the SIEM centerpiece with logging devices allowed it to analyze the most critical events while providing a more cost-effective scaling mechanism. The upside of this model is that simple (cheaper) high-performance loggers do the bulk of the heavy lifting, and the expensive SIEM components provide the meat of the analysis. This model addresses scalability and data management issues, while reducing the need to distribute code and policies among many different devices.

There are a couple issues with the ring model. The biggest problem remains a lack of integration between the two systems. Management tools for the data loggers and the SIEM may be linked together with some type of dashboard, but you quickly discover the two-headed monster of two totally separate products under the covers.
Similarly, log management vendors were trying to graft better analysis and correlation onto their existing products, resulting in a series of acquisitions that provided log management players with SIEM. Either way, you end up with two separate products trying to solve a single problem. This is not a happy “you got your chocolate in my peanut butter” moment, and it will continue to be a thorny issue for customers until vendors fully integrate their SIEM and log management offerings, as opposed to marketing band-aid dashboards as integrated products.

Mesh

The last model we want to discuss is the mesh deployment. The mesh is a group of interrelated systems, each performing full log management and SIEM functions for a small part of the environment. Basically this is a cluster of SIEM/LM appliances, each a functional peer with full analysis, correlation, filtering, storage, and reporting for local events. The servers can all be linked together to form a mesh, depending on customer needs. While this model is more complex to deploy and administer, and requires a purpose-built data store to manage high-speed storage and analysis, it does solve several problems. For organizations that require segregation of both data and duties, the mesh model is unmatched. It provides the ability to aggregate and correlate specific segments or applications on specific subsets of servers, making analysis and reporting flexible. Unlike the other models, it can divide and conquer processing and storage requirements flexibly, depending on the requirements of the business rather than the scalability limitations of the product being deployed.

Each vendor’s product is capable of implementing two or more of these models, but typically not all of them. Each product’s technical design (particularly the datastore) dictates which deployment models are possible. Additionally, the level of integration between the SIEM and Log Management pieces has an effect as well. As we said in our introduction, every SIEM vendor offers some degree of log management capability, and most Log Management vendors offer SIEM functions. This does not mean the offerings are fully integrated, by any stretch. Deployment and management costs are clearly affected by product integration or lack thereof, so make sure to do your due diligence during the purchase process to understand the underlying product architecture, and the limitations and compromises necessary to make the product work in your environment.
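To make the ring model’s division of labor a bit more concrete, here is a minimal sketch of an edge collector that archives everything locally and forwards only a reduced, higher-severity stream to the central SIEM over syslog, using just the Python standard library. The SIEM hostname and the severity cutoff are assumptions for illustration, not how any particular product behaves.

```python
# Sketch of a ring-style collector: keep the full event stream locally,
# forward only WARNING-and-above to the central SIEM for correlation.
# Hostname and severity cutoff are illustrative assumptions.
import logging
import logging.handlers

collector = logging.getLogger("edge-collector")
collector.setLevel(logging.DEBUG)

# Local archive: every event lands in the collector's own store.
local_store = logging.FileHandler("collector-archive.log")
local_store.setLevel(logging.DEBUG)

# Upstream feed: a filtered, normalized subset goes to the SIEM over syslog/UDP.
upstream = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
upstream.setLevel(logging.WARNING)
upstream.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

collector.addHandler(local_store)
collector.addHandler(upstream)

collector.info("user jdoe logged in from 10.1.1.5")            # archived locally only
collector.warning("5 failed logins for admin from 10.1.1.9")   # archived and forwarded
```

The same trade-off shows up regardless of transport: the loggers carry the bulk data and reporting load, while the SIEM in the middle sees only what it needs for correlation.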


FireStarter: Is Full Disk Encryption without Pre-Boot Secure?

This FireStarter is more of a real conversation starter than a definitive statement designed to rile everyone up. Over the past couple months I’ve talked with a few organizations – some of them quite large – deploying full disk encryption for laptops but skipping the pre-boot environment.

For those of you who don’t know, nearly every full drive encryption product works by first booting up a mini operating system. The user logs into this mini-OS, which then decrypts and loads the main operating system. This ensures that nothing is decrypted without the user’s credentials. It can be a bit of a problem for installing software updates, because if the user isn’t logged in you can’t get to the operating system, and if you kick off a reboot after installing a patch it will stall at pre-boot. But every major product has ways to manage this. Typically they allow you to set a “log in once” flag in the pre-boot environment for software updates, but there are a couple of other ways to deal with it. I consider this problem essentially solved, based on the user discussions I’ve had. Another downside is that users need to log into pre-boot before the operating system. Some organizations deploy their FDE to require two logins, but many more synchronize the user’s Windows credentials to the pre-boot, then automatically log into Windows (or whatever OS is being protected). Both seem fine to me, and one of the differentiators between various encryption products is how well they handle user support, password changes, and other authentication issues in pre-boot.

But I’m now hearing of people deploying an FDE product without using pre-boot. Essentially (I think) they reverse the process I just described and automatically log into the pre-boot environment, then have the user log into Windows. I’m not talking about the tricky stuff a near-full-disk-encryption product like Credant uses, but skipping pre-boot altogether. This seems fracking insane to me. You somewhat reduce the risk of a forensic evaluation of the drive, but lose most of the benefits of FDE. In every case, the reason given is, “We don’t want to confuse our users.”

Am I missing something here? In my analysis this obviates most of the benefits of FDE, making it a big waste of cash. Then again, let’s think about compliance. Most regulations say, “Thou shalt encrypt laptop drives.” Thus this seems to tick the compliance checkbox, even if it’s a bad idea from a security perspective. Also, realistically, the vast majority of lost drives don’t result in the compromise of data. I’m unaware of any non-targeted breach where a lost drive resulted in losses beyond the cost of dealing with breach reporting. I’m sure there have been some, but none that crossed my desk.
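One way to see what skipping pre-boot gives up: with pre-boot authentication the disk key is only recoverable from a secret the user supplies at boot, while an auto-login configuration has to keep enough material on the device to unwrap that key with no user input at all. Below is a deliberately simplified sketch of that difference using only the Python standard library; the passphrase, the XOR “wrapping”, and the key handling are toy stand-ins for what a real FDE product does, not any vendor’s scheme.

```python
# Toy illustration (not any vendor's scheme): contrast a disk key recoverable
# only from the user's pre-boot passphrase with one the device can recover
# entirely on its own when pre-boot is skipped.
import hashlib
import os
import secrets

disk_key = secrets.token_bytes(32)   # the key that actually encrypts the disk
salt = os.urandom(16)

def kek_from_passphrase(passphrase):
    """Derive a key-encrypting key (KEK) from the user's pre-boot passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def xor_wrap(key, kek):
    """Toy 'wrap' of the disk key; real products use proper key wrapping."""
    return bytes(a ^ b for a, b in zip(key, kek))

# With pre-boot: only the wrapped blob is stored on disk. Recovering disk_key
# requires the passphrase, which exists only in the user's head.
wrapped = xor_wrap(disk_key, kek_from_passphrase("correct horse battery staple"))

# Without pre-boot: the machine must unwrap the key unattended, so the KEK
# (or equivalent material) has to live on the same device as the wrapped key.
kek_stored_on_device = kek_from_passphrase("correct horse battery staple")
auto_unlocked = xor_wrap(wrapped, kek_stored_on_device)

assert auto_unlocked == disk_key  # whoever holds the laptop gets the same result
```

The mechanics vary by product, but the punch line does not: without a user secret at boot, whatever unwraps the key is sitting on the same stolen laptop.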


Return of the Security Start-up?

As Rich described on Friday, he, Adrian, and I were sequestered at the end of last week working on our evil plans for world domination. But we did take some time for meetings, and we met up with a small company, the proverbial “last company standing” in a relatively mature market. All their competitors have been acquired, and every deal they see involves competing with a multi-billion dollar public company. After a few beers, we reminisced about the good old days when it was cool to deal with start-ups – when the big companies were at a disadvantage, since it was lame to buy from huge monoliths. I probably had dark hair back then, but after the Internet bubble burst and we went through a couple recessions, most end user organizations opt for big and stable vendors – not small and exciting. This trend was compounded by the increasing value of suites in maturing markets, and most of security has been maturing rapidly. There is no award for doing system integration on the endpoint or the perimeter anymore. It’s just easier to buy integrated solutions which satisfy requirements from a single vendor. Add in the constant consolidation of innovative companies by the security and big IT aggregators, and there has been a real shift away from start-ups.

But there is a downside to this big company reign. Innovation basically stops at big companies, because the aggregators are focused on milking the installed base, not necessarily betting the ranch on new features. Most of the big security companies aren’t very good at integrating acquired technology into their stacks either. So you take an exciting start-up, pay them a lot of money, and then let the technology erode as the big company bureaucracy brings the start-up to its knees. A majority of the brain power leaves, and it’s a crap show. Of course, not every deal goes down like this. But enough do that it’s the exception when an acquisition isn’t a total train wreck a year later.

So back to my small company friends. Winning as a small company is all about managing the perception of risk in doing business with them. There is funding/viability risk, as more than a couple small security companies have gone away over the past few years, leaving customers holding the bag. Most big companies take a look at the balance sheet of a start-up and it’s a horror show (at least relative to what they are used to), so the procurement group blows a gasket when asked to write a substantial check to a start-up. There is also technology risk, in that smaller companies can’t do everything, so they might miss the next big thing. Small companies need good answers on both these fronts to have any shot at beating a large entrenched competitor. It’s commonly forgotten, but small companies do innovate, and that cliche about them being more nimble is actually true. Those advantages need to be substantiated during the sales cycle to address those risks. But end users also face risks outside the control of a small company. Things like acquisition risk, which is the likelihood of the small company being acquired and then going to pot. And integration risk, where the small company does not provide integration with the other solutions the end user needs, and has no resources to get it done. All of these are legitimate issues facing an end user trying to determine the right product to solve their problem.

As an end user, is it worth taking these risks on a smaller company? The answer depends on the sophistication of the requirement. If the requirement can be met out of the box and the current generation of technology meets your needs, then it’s fine to go with the big company. The reality of non-innovation and crappy integration from a big company isn’t a concern. As long as the existing feature set solves your problems, you’ll be OK. It’s when you are looking at either a less mature market or requirements that are not plain vanilla that the decision becomes a bit murky. Ultimately it rests on your organization’s ability to support and integrate the technology yourself, since you can’t guarantee that the smaller company will survive or innovate for any length of time. But there are risks in working with large companies as well. Don’t forget that acquired products languish or even get worse (relative to the market) once acquired, and the benefits of integration don’t necessarily materialize. So the pendulum swings both ways in evaluating risks relative to procurement. And you thought risk management was only about dealing with the risk of attack?

There are some tactics end users can use to swing things the right way. Understand that while negotiating the original PO with a small company, you have leverage. You can get them to add features you need, throw in deployment resources, or cut the price (especially at the end of the quarter). Once the deal closes (and the check clears), they’ll move on to the next big deal. They have to – the small company is trying to survive. So get what you can before you cut the check.

So back to the topic of this post: are we going to see a return of the security start-up? Can smaller security companies survive and prosper in the face of competition from multi-billion dollar behemoths? We think there is a role for the security start-up, providing innovation and responsiveness to customer needs – something big companies do poorly. But the secret is to find the small companies that act big. Not by being slow, lumbering, and bureaucratic, but by aligning with powerful OEM and reseller partners to broaden market coverage, and by having strong technology alliances to deliver a broader product than a small company could deliver on its own. Yes, it’s possible, but we don’t see a lot of it. There are very few small companies out there doing anything innovative. That’s the real issue. Even if you wanted


Friday Summary: June 18, 2010

Dear Securosis readers, The Friday Summary is currently unavailable. Our staff is at an offsite in an undisclosed location completing our world domination plans. We apologize for the inconvenience, and instead of our full summary of the week’s events here are a few links to keep you busy. If you need more, Mike Rothman suggests you “find your own &%^ news”. Mike’s attitude does not necessarily represent Securosis, even though we give him business cards. Thank you, we appreciate your support, and turn off your WiFi!

Securosis Posts

• Doing Well by Doing Good (and Protecting the Kids).
• Take Our Data Security Survey & Win an iPad.
• Incite 6/16/2010: Fenced in.
• Need to know the time? Ask the consultant.
• Top 5 Security Tips for Small Business.
• If You Had a 3G iPad Before June 9, Get a New SIM.
• Insider Threat Alive and Well.

Other News

• Unpatched Windows XP Flaw Being Exploited.
• Zombies take over Newsweek (The brain-eating kind, not a botnet).


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.