Securosis

Research

ESF: Triage: Fixing the Leaky Buckets

As we discussed in the last ESF post on prioritizing the most significant risks, the next step is to build, communicate, and execute on a triage plan to fix those leaky buckets. The plan consists of the following sections: Risk Confirmation, Remediation Plan, Quick Wins, and Communications.

Risk Confirmation

Coming out of the prioritize step, before we start committing resources and/or pulling the fire alarm, let’s take a deep breath and make sure our ranked list really represents the biggest risks. How do we do that? Basically by using the same process we used to come up with the list: start with the most important data, and work backwards based on the issues we’ve already found. The best way I know to get everyone on the same page is a streamlined meeting among the key influencers of security priorities. That involves folks not just within the IT team, but probably also some tech-savvy business users – since it’s their data at risk. Yes, we are going to go back to them later, once we have the plan. But it doesn’t hurt to give them a heads-up early in the process about the highest-priority risks, and get their buy-in early and often throughout the process.

Remediation Plan

Now comes the fun part: figuring out what’s involved in addressing each of the leaky buckets. That means deciding whether you need to deploy a new product, optimize a process, or both. Keep in mind that for each discrete issue, you want to define the fix, the cost, the effort (in hours), and the timeframe commitment to get it done. No, none of this is brain surgery, and you probably have a number of these fixes on your project plan already. But hopefully this process provides the needed incentive to get some of those projects moving. Once the first draft of the plan is complete, start lining up the project requirements against the reality of budget and resource availability.
That way, when it comes time to present the plan to management (including milestones and commitments), you have already had the visit with Mr. Reality, so you can stick to what is feasible.

Quick Wins

As you do the analysis to build the remediation plan, it’ll be obvious that some fixes are cheap and easy. We recommend you take the risk (no pun intended) and take care of those issues first, regardless of where they end up on the risk priority list. Why? We want to build momentum behind the endpoint security program (or any program, for that matter), and that involves showing progress as quickly as possible. You don’t need to ask permission for everything.

Communications

The hallmark of any pragmatic security program (read more about the Pragmatic philosophy here) is frequent communication and senior-level buy-in. So once we have the plan in place, and an idea of resources and timeframes, it’s time to get everyone back in the room for a thumbs-up on the triage plan. You need to package the triage plan in a way that makes sense to the business folks. That means thinking about business impact first, reality second, and technology probably not at all. These folks want to know what needs to be done, when it can get done, and what it will cost. We recommend you structure the triage pitch roughly like this:

Risk Priorities – Revisit the priorities everyone has presumably already agreed to.

Quick Wins – Go through the stuff that’s already done. That will usually put the bigwigs in a good mood, since things are already in motion.

Milestones – These folks don’t want to hear the specifics of each project. They want the bottom line: when will each of the risk priorities be remediated?

Dependencies – Now that you’ve told them what needs to be done, tell them what constraints you are operating under. Are there budget issues? Resource issues? Whatever it is, be very candid about what can derail efforts and impact milestones.
Sign-off – Then you get them to sign in blood as to what will get done and when.

Dealing with Shiny Objects

To be clear, getting to this point tends to be straightforward. Senior management knows stuff needs to get done, and your initial plans should present a good way to get those things done. But the challenge is only beginning, because as you start executing on your triage plan, any number of other priorities will present themselves that absolutely, positively need to be dealt with. To have any chance of getting through the triage list, you’ll need to be disciplined about managing expectations relative to the impact of each shiny object on your committed milestones. We also recommend a monthly meeting with the influencers to revisit the timeline and recast the milestones, given the inevitable slippage due to other priorities.

OK, enough of this program management stuff. Next in this series, we’ll tackle some of the technical fundamentals, like software updates, secure configuration, and malware detection.

Other posts in the Endpoint Security Fundamentals series: Introduction; Prioritize: Finding the Leaky Buckets
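Looping back to the remediation plan above: each discrete issue gets a fix, a cost, an effort estimate, and a timeframe, and the cheap-and-easy items get promoted to quick wins regardless of risk rank. That bookkeeping can be sketched in a few lines. The field names, thresholds, and sample issues below are illustrative assumptions, not part of any product or the original post.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    name: str
    risk_rank: int       # 1 = highest-priority risk from the prioritize step
    fix: str             # what needs to change: product, process, or both
    cost: float          # dollars
    effort_hours: float  # estimated work
    weeks: int           # committed timeframe

def triage_order(issues, quick_cost=1000, quick_hours=8):
    """Quick wins (cheap and easy) first, then everything else by risk rank."""
    quick = [i for i in issues if i.cost <= quick_cost and i.effort_hours <= quick_hours]
    rest = [i for i in issues if i not in quick]
    return sorted(quick, key=lambda i: i.risk_rank) + sorted(rest, key=lambda i: i.risk_rank)

# Hypothetical plan entries for illustration
plan = [
    Issue("Unpatched laptops", 1, "deploy patch management", 20000, 120, 6),
    Issue("Open guest Wi-Fi", 3, "disable rogue SSID", 0, 2, 1),
    Issue("No egress filtering", 2, "add firewall rules", 500, 6, 2),
]
for issue in triage_order(plan):
    print(issue.name)
```

Note that the two quick wins jump ahead of the highest-ranked risk – exactly the momentum-building trade-off the post recommends.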


Friday Summary: April 2, 2010

It’s the new frontier. It’s like the “Wild West” meets the “Barbary Coast”, with hostile Indians and pirates all rolled into one. And like those places, lawless entrepreneurialism is a major part of the economy. That was the impression I got reading Robert Mullins’ The biggest cloud on the planet is owned by … the crooks. He examines the resources under the control of Conficker-based worms and compares them to the legitimate cloud providers. I liked his post, as considering botnets in terms of their position as cloud computing leaders (by resources under management) is a startling concept. Realizing that botnets offer 18 times the computational power of Google and over 100 times that of Amazon Web Services is astounding. It’s fascinating to see how the shady and downright criminal have embraced technology – and in many cases drive innovation.

I would also be interested in comparing total revenue and profitability between, say, AWS and a botnet. We can’t, naturally, as we don’t really know how much revenue spam and bank fraud yield. Plus the business models are different, and botnets enjoy abnormally low overhead – but I am willing to bet criminals are much more efficient than Amazon or Google. I feel like I am watching a Battlestar Galactica rerun, where the humans can’t use networked computers, because the Cylons hack into them as fast as they find them. And the sheer numbers of hacked systems support that image.

I thought it was apropos that Andy the IT Guy asked Should small businesses quit using online banking? Unfortunately the answer is yes. It’s just not safe for most merchants who do not – and do not want to – have a deep understanding of computer security. Nobody really wants to go back to the old model, where they drive to the bank once or twice a week and wait in line for half an hour, just so the new teller can totally screw up their deposit.
Nor do they want to buy dedicated computers just to do online banking, but that may be what it comes down to, as Internet banking is just not safe for novices. Yet we keep pushing onward with more and more Internet services, and we are encouraged by so many businesses to do more of our business online (saving their processing costs). Don’t believe me? Go to your bank, and they will ask you to please use their online systems. Fun times. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

On that note, Rich on Protecting your online banking.
Living with Windows: security. Rich wrote this up for Macworld.
Insiders Not the Real Database Threat.
RSA Video: Enterprise Database Security.

Favorite Securosis Posts

Rich: Help a Reader: PCI Edition. Real-world problem from a reader caught between a rock and an assessor.
Mike Rothman: How Much Is Your Organization Telling Google? Yes, they are the 21st century Borg. But it’s always interesting to see how much the Google is really seeing.
David Mortman: FireStarter: Nasty or Not, Jericho is Irrelevant.
Adrian Lane: Hit the Snooze Button on Lancope’s Data Loss Alarms. Freakin’ unicorns.

Other Securosis Posts

Endpoint Security Fundamentals: Introduction.
Database Security Fundamentals: Configuration.
Incite 3/31/2010: Attitude Is Everything.
Security Innovation Redux: Missing the Forest for the Trees.
Hello World. Meet Pwn2Own.

Favorite Outside Posts

Rich: Is Compliance Stifling Security Innovation? Alex over at Verizon manages to tie metrics to security innovation. Why am I not surprised? 🙂
Mike Rothman: Is it time for small businesses to quit using online banking? Andy the IT Guy spews some heresy here. But there is definitely logic to at least asking the question.
David Mortman: Side-Channel Leaks in Web Applications.
Adrian: A nice salute to April 1 from Amrit: Chinese Government to Ban All US Technology.

Project Quant Posts

Project Quant: Database Security – Patch.
Research Reports and Presentations

The short version of the RSA Video presentation on Enterprise Database Security.
Report: Database Assessment.

Top News and Posts

Great interview with security researcher Charlie Miller. Especially the last couple of paragraphs.
Man fleeing police runs into prison yard.
Google’s own glitch causes blockage in China.
Senate Passes Cybersecurity Act. Not a law yet.
Nick Selby on the recent NJ privacy in the workplace lawsuit.
Microsoft runs fuzzing botnet, finds 1800 bugs.
Key Logger Attacks on the Rise.
Content Spoofing – Not Just an April Fool’s Day Attack.
JC Penny and Wet Seal named as breached firms.
Nice discussion: Mozilla Plans Fix for CSS History Hack. Original Mozilla announcement here.
Microsoft SDL version 5.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Martin McKeay, for offering practical advice in response to Help a Reader: PCI Edition.

Unluckily, there isn’t a third party you can appeal to, at least as far as I know. My suggestion would be to get both your Approved Scanning Vendor and your hosting provider on the same phone call and have the ASV explain in detail to the hosting provider the specifics of the vulnerabilities that have been found on the host. Your hosting provider may be scanning your site with a different ASV, or not at all, and receiving different information than you’re seeing. Or it may be that they’re in compliance and your ASV is generating false positives in your report. Either way, it’s going to be far easier for them to communicate directly at a technical level than for you to try to act as an intermediary between the two. I’d also politely point out to your host that their lack of communication is costing you money, and if it continues you may have to take your business elsewhere. If they’re not willing to support you, why should you continue to pay them money?
Explore your contract, you may have the option of subtracting the


ESF: Prioritize: Finding the Leaky Buckets

As we start to dig into the Endpoint Security Fundamentals series, the first step is always to figure out where you are. Since hope is not a strategy, you can’t just make assumptions about what’s installed, what’s configured correctly, and what the end users actually know. So we’ve got to figure that out, which involves using some of the same tactics our adversaries use. The goal here is twofold: first, figure out what presents a clear and present danger to your organization, and put a triage plan in place to remediate those issues. Second, manage expectations at all points in this process. That means documenting what you find (no matter how ugly the results) and communicating it to management, so they understand what you are up against. To be clear, although we are talking about endpoint security here, this prioritization (and triage) process should be the first step in any security program.

Assessing the Endpoints

In terms of figuring out your current state, you need to pay attention to a number of different data sources – all of which yield information to help you understand where things stand. Here is a brief description of each, and the techniques to gather the data:

Endpoints – Yes, the devices themselves need to be assessed for updated software, current patch levels, unauthorized software, etc. You may have a bunch of this information via a patch/configuration management product, or as part of your asset management environment. To confirm that data, we’d also recommend you let a vulnerability scanner loose on at least some of the endpoints, and play around with automated pen testing software to check the devices for exploitability.

Users – If we didn’t have to deal with those pesky users, life would be much easier, eh? Well, regardless of the defenses you have in place, one ill-timed click by a gullible user and you are pwned. You can test users by sending around fake phishing emails and other messages with fake bad links.
You can also distribute some USB keys and see how many people actually plug them into machines. These “attacks” will determine pretty quickly whether you have an education problem, and what other defenses you may need to overcome those issues.

Data – I know this is about endpoint security, but Rich will be happy to know a discovery process is important here as well. You need to identify devices with sensitive information (since those warrant a higher level of protection), and the only way to do that is to actually figure out where the sensitive data is. Maybe you can leverage other internal data discovery efforts, but regardless, you need to know which devices would trigger a disclosure if lost or compromised.

Network – Clearly, devices already compromised need to be identified and remediated quickly. The network provides lots of information to indicate compromised devices. Whether it’s network flow data, anomalous destinations, or alerts on egress filtering rules, the network is a pretty reliable indicator of what’s already happened, and where your triage efforts need to start.

Keep in mind that it is what it is. You’ll likely find some pretty idiotic things happening (or confirm the idiotic things you already knew about), but that is all part of the process. The idea isn’t to get overwhelmed; it’s to figure out how much is broken, so you can start putting a plan in place to fix it, and then a process to make sure it doesn’t happen so often.

Prioritizing the Risks

Prioritization is more art than science. After spending some time gathering data from the endpoints, users, data, and network, how do you know what is most important? Not to be trite, but it’s really a common-sense thing. For example, if your network analysis showed a number of endpoints already compromised, it’s probably a good idea to start by fixing those.
Likewise, if your automated pen test showed you could get to a back-end datastore of private information via a bad link in an email (clicked by an unsuspecting user), then you have a clear and present danger to deal with, no? After you are done fighting the hottest fires, prioritization really comes down to who has access to sensitive data, and making sure those devices are protected. This sensitive data could be private data, intellectual property, or anything else you don’t want to see on the full-disclosure mailing list. Hopefully your organization knows what data is sensitive, so you can figure out who has access to it and build the security program around protecting that access. In the event there is no internal consensus about what data is important, you can’t be bashful about asking questions like, “Why does that salesperson need the entire customer database?” and “Although it’s nice that the assistant to the assistant controller’s assistant wants to work from home, should he have access to the unaudited financials?” Part of prioritizing the risk is identifying idiotic access to sensitive data. And not everything can be a Priority 1.

Jumping on the Moving Train

In the real world, you don’t get to stop everything and start your security program from scratch. You’ve already got all sorts of assessment and protection activities going on – at least we hope you do. That said, we recommend you take a step back and not be constrained by existing activities. Existing controls are inputs to your data gathering process, but you need to think bigger about the risks to your endpoints and design a program to handle them. At this point, you should have a pretty good idea of which endpoints are at significant risk, and why. In the next post, we’ll discuss how to build the triage plan to address the biggest risks and get past the firefighting stage.

Endpoint Security Fundamentals Series: Introduction
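The data discovery step described above – finding which devices hold data that would trigger a disclosure if lost – is, at its simplest, pattern matching over files. A minimal sketch follows; the regexes are naive illustrations (real discovery tools do Luhn checks, context scoring, and support far more formats), and the pattern names are my own, not from any product.

```python
import re
from pathlib import Path

# Naive illustrative patterns -- real tools validate matches far more carefully.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_file(path: Path) -> dict:
    """Count candidate sensitive-data hits in one file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return {}
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items() if rx.findall(text)}

def discover(root: str) -> dict:
    """Map each file under root to the sensitive patterns found in it."""
    hits = {}
    for p in Path(root).rglob("*"):
        if p.is_file():
            found = scan_file(p)
            if found:
                hits[str(p)] = found
    return hits
```

Devices whose file systems return hits from something like this are the ones that warrant the higher level of protection discussed above.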


Hit the Snooze on Lancope’s Data Loss Alarms

Update – Lancope posted some new information positioning this as a complement to, not a substitute for, DLP. Looks like the marketing folks might have gotten a little out of control.

I’ve been at this game for a while now, but sometimes I see a piece of idiocy that makes me wish I were drinking some chocolate milk, so I could spew it out my nose in response to the sheer audacity of it all. Today’s winner is Lancope, who astounds us with their new “data loss prevention” solution that detects breaches using a Harry Potter-inspired technique that completely eliminates the need to understand the data. Actually, according to their extremely educational marketing paper, analyzing the content is bad, because it’s really hard! Kind of like math. Or common sense. Lancope’s far superior alternative monitors your network for any unusual activity, such as a large file transfer, and generates an alert. You don’t even need to look at packets! That’s so cool! I thought the iPad was magical, but Lancope is totally kicking Apple’s ass on the enchantment front. Rumor is your box is even delivered by a unicorn. With wings!

I’m all for netflow and anomaly detection – it’s one of the more important tools for dealing with advanced attacks. But this Lancope release is ridiculous; I can’t even imagine the number of false positives. Without content analysis, or even metadata analysis, I’m not sure how this could possibly be useful. Maybe paired with real DLP, but they are marketing it as a stand-alone option, which is nuts. Especially when DLP vendors like Fidelis, McAfee, and Palisade are starting to add data traffic flow analysis (with content awareness) to their products. Maybe Lancope should partner with a DLP vendor. One of the weaknesses of many DLP products is that they do a crappy job of looking across all ports and protocols.
Pretty much every product is capable of it, but most of them require a large number of boxes, with severe traffic or analysis limitations, because they aren’t overly speedy as network devices (with some exceptions). Combining one with something like Lancope, where you could point the DLP at target traffic, could be interesting… but damn, netflow alone clearly isn’t a good option. Lancope, thanks for a great DLP WTF with a side of BS. I’m glad I read it today – that release is almost as good as the ThinkGeek April Fools’ edition!
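To make the false-positive complaint concrete, here is roughly the detector being criticized: with nothing but flow byte counts to go on, a nightly backup, a video stream, and an actual exfiltration all trip the same alert. The thresholds and sample flows below are invented for illustration.

```python
def flow_alerts(flows, byte_threshold=100_000_000):
    """Flag any flow over the byte threshold.

    No content or metadata awareness -- every large legitimate
    transfer is an alert indistinguishable from a real leak.
    """
    return [f for f in flows if f["bytes"] > byte_threshold]

flows = [
    {"src": "10.0.0.5", "dst": "backup.internal", "bytes": 500_000_000},  # nightly backup
    {"src": "10.0.0.8", "dst": "cdn.example",     "bytes": 250_000_000},  # video stream
    {"src": "10.0.0.9", "dst": "203.0.113.7",     "bytes": 300_000_000},  # actual exfiltration
]
print(len(flow_alerts(flows)))  # all three flagged; only one matters
```

Content-aware DLP exists precisely to separate the third flow from the first two; byte counts alone cannot.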


Database Security Fundamentals: Configuration

It’s tough for me to write a universal quick configuration management guide for databases, because the steps you take will depend on the size, number, and complexity of the databases you manage. Every DBA works in a slightly different environment, and configuration settings get pretty specific. Further, when I got started in this industry, the cost of the database server and the database software together exceeded a DBA’s yearly salary. It was fairly common to see one database admin for one database server. By the time the tech bubble burst in 2001, it was common to see one database administrator tending 15-20 databases. Now that number may approach 100, and it’s not just a single database type, but several. The greater complexity makes it harder to detect and remedy the simple mistakes that lead to database compromises. That said, reconfiguring a database is a straightforward task. Database administrators know it involves little more than changing a parameter value in a file or management UI and, worst case, restarting the database. And the majority of the parameters, outside the user settings we have already discussed, will remain static over time. The difficulties are knowing what settings are appropriate for database security, and keeping settings consistent and up to date across a large number of databases. Research and ongoing management are what make this step challenging. The following is a set of basic steps to establish and maintain database configuration. This is not meant to be a process per se, just a list of tasks to be performed.

Research – How should your databases be configured for security? We have already discussed many of the major topics with user configuration management and network settings, and patching takes care of another big chunk of the vulnerabilities. But that still leaves a considerable gap.
All database vendors provide recommended configurations and security settings, and it does not take very long to compare your configuration to the standard. Researching which settings you need to be concerned with, and the proper values for your databases, will comprise the bulk of your work for this exercise. There are also some free assessment tools with built-in policies that you can leverage, and your own team may have policies and recommendations. Third-party researchers provide detailed information on blogs as well, along with CERT & MITRE advisories.

Assess & Configure – Collect the configuration parameters and find out how your databases are configured. Make changes according to your research. Pay particular attention to areas where users can add or alter database functions, such as cataloging databases and nodes in DB2, or UTL_FILE settings in Oracle. Pay attention to OS-level settings as well: verify that the database is installed under an account that is not an IT or domain administration account, and that things like shared memory access and read permissions on database data files are restricted. Also note that assessment can verify audit settings, to ensure monitoring and auditing facilities generate the appropriate data streams for other security efforts.

Discard What You Don’t Need – Databases come with tons of stuff you may never need: test databases, advanced features, development environments, web servers, and other extras. Remove the modules and services you don’t need. Not using replication? Remove those packages. These services may or may not be secure, but their absence assures they are not providing open doors for hackers.

Baseline and Document – Document the approved configuration baseline for your databases.
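The assess-and-configure step above boils down to comparing collected parameter values against your researched baseline. A minimal sketch of that comparison follows; the parameter names and values are hypothetical examples for illustration, not any vendor’s actual recommended settings.

```python
# Hypothetical security baseline -- substitute your vendor's recommendations.
BASELINE = {
    "remote_os_authent": "false",
    "audit_trail": "db",
    "utl_file_dir": "",  # no unrestricted file-system access
}

def assess(current: dict, baseline: dict = BASELINE) -> list:
    """Return (parameter, expected, actual) for every deviation from baseline."""
    return [
        (param, expected, current.get(param))
        for param, expected in baseline.items()
        if current.get(param) != expected
    ]

# Example: a database with two settings out of spec
current = {"remote_os_authent": "true", "audit_trail": "db", "utl_file_dir": "/tmp"}
for param, want, got in assess(current):
    print(f"{param}: expected {want!r}, found {got!r}")
```

Run the same comparison against every database you manage, and the consistency problem described above becomes tractable.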
This baseline should be used for reference by all administrators, and as a guideline for detecting misconfigured systems. The baseline means you do not need to re-research the correct settings, and the documentation will help you and your team members remember why certain settings were chosen.

A little more advanced:

Automation – If you work on a team with multiple DBAs, there will be lots of changes you are not aware of, and some may be out of spec. If you can, run configuration scans on a regular basis and save the results. It’s a proactive way to ensure configurations do not wander too far out of specification as you maintain your systems. Even if you do not review every scan, if something breaks, you at least have the data needed to determine what changes were made, and when, for after-the-fact forensics.

Discovery – It’s a good idea to know what databases are on your network and what data they contain. As databases are embedded into many applications, they surreptitiously find their way onto your network. If hacked, they provide launch points for other attacks, and leverage whatever credentials the database was installed with – which you hope was not ‘root’. Data discovery is a little more difficult, and comes with separation-of-duties issues (DBAs should not be looking at data, just database setup), but understanding where sensitive data resides is helpful in setting table, group, and schema permissions.

As an aside on configuration management: during my career I have helped design and implement database vulnerability assessment tools. I have written hundreds of policies for database security and operations, for most relational database platforms and several non-relational platforms. I am a big fan of being able to automate configuration data collection and analysis.
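The automation idea above – scan regularly, save the results, and diff snapshots later for after-the-fact forensics – needs little more than a timestamped dump per run. A sketch, assuming configuration comes back as a flat parameter dictionary (the parameter names are invented for illustration):

```python
import json
import time
from pathlib import Path

def save_scan(config: dict, directory: str = "scans") -> Path:
    """Persist one configuration snapshot, timestamped, for later diffing."""
    Path(directory).mkdir(exist_ok=True)
    path = Path(directory) / f"scan-{int(time.time())}.json"
    path.write_text(json.dumps(config, indent=2, sort_keys=True))
    return path

def diff_scans(old: dict, new: dict) -> dict:
    """What changed between two snapshots: param -> (old value, new value)."""
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

# Example: auditing was silently disabled and remote login enabled between scans
before = {"audit_trail": "db", "max_connections": "100"}
after = {"audit_trail": "none", "max_connections": "100", "remote_login": "true"}
print(diff_scans(before, after))
```

Even if nobody reviews every scan, the stored snapshots make “what changed, and when?” answerable when something breaks.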
And frankly, I am a big fan of having someone else write vulnerability assessment policies, because it is difficult and time-consuming work. So I admit I have a bias toward using assessment tools for configuration management. I hate to recommend tools in an essentials guide, as I want this series to stick to lightweight stuff you can do in an afternoon, but the reality is that you cannot reasonably research vulnerability and security settings for a database in an afternoon. It takes time and a willingness to learn, and means you need to learn about


Endpoint Security Fundamentals: Introduction

As we continue building out coverage of more traditional security topics, it’s time to focus some attention on the endpoint. For the most part, many folks have just given up on protecting the endpoint. Yes, we all go through the motions of having endpoint agents installed (on Windows, anyway), but most of us have pretty low expectations for anti-malware solutions. Justifiably so, but that doesn’t mean it’s game over. There are lots of things we can do to better protect the endpoint, some of which were discussed in Low Hanging Fruit: Endpoint Security. But let’s not get the cart ahead of the horse.

First off, nowadays there are lots of incentives for the bad guys to control endpoint devices. There is usually private data on the device, including nice things like customer databases – and with the strategic use of keyloggers, it’s just a matter of time before bank passwords are discovered. Let’s not forget about intellectual property on the devices, since lots of folks just have to have their deepest, darkest (and most valuable) secrets on their laptops, within easy reach. Best of all, compromising an endpoint device gives the bad guys a foothold in an organization, enabling them to compromise other systems and spread the love. The endpoint has become the path of least resistance, mostly because of the unsophistication of the folks using those devices to do crazy Web 2.0 stuff. All that information sharing certainly seemed like a good idea at the time, right? Regardless of how wacky the attack, it seems at least one stupid user will fall for it. Between web application attacks like XSS (cross-site scripting), CSRF (cross-site request forgery), social engineering, and all sorts of drive-by attacks, compromising devices is like taking candy from a baby. But not all the blame can be laid at the feet of users, because many attacks are pretty sophisticated, and even hardened security professionals can be duped.
Combine that with the explosion of mobile devices, whose owners tend to either lose them or bring back bad stuff from coffee shops and hotels, and you’ve got a wealth of soft targets. And as the folks tasked with protecting corporate data and ensuring compliance, we’ve got to pay more attention to locking down the endpoints – to the degree we can. That’s what the Endpoint Security Fundamentals series is all about.

Philosophy: Real-world Defense in Depth

As with all of Securosis’ research, we focus on tactics that maximize impact for minimal effort. In the real world, we may not have the ability to truly lock down the devices, since those damn users want to do their jobs. The nerve of them! So we’ve focused on layers of defense, not just from the standpoint of technology, but also looking at what we need to do before, during, and after an incident.

Prioritize – This will warm the hearts of all the risk management academics out there, but we do need to start the process by understanding which endpoint devices are most at risk because they hold valuable data – for a legitimate business reason, right?

Assess the current status – Once we know what’s important, we need to figure out how porous our defenses are, so we’ll be assessing the endpoints.

Focus on the fundamentals – Next up, we actually pick that low-hanging fruit and do the things we should be doing anyway: keeping software up to date, leveraging what we can from malware defense, and using newer technologies like personal firewalls and HIPS. Right, none of this stuff is new, but not enough of us do it. Kind of like… no, I won’t go there.

Build a sustainable program – It’s not enough to just implement some technology. We also need to do some of those softer management things we don’t like very much, like managing expectations and defining success. Ultimately we need to make sure the endpoint defenses can (and will) adapt to the changing attack vectors we see.
Respond to incidents – Yes, it will happen to you, so it’s important to make sure your incident response plan factors in the reality that an endpoint device may be the primary attack vector. Make sure you’ve got your data gathering and forensics kits at the ready, and have an established process for when a remote or traveling person is compromised.

Document controls – Finally, the auditor will show up and want to know what controls you have in place to protect those endpoints. So you also need to focus on documentation, ensuring you can substantiate all the tactics we’ve discussed thus far.

The ESF Series

To provide a little preview of what’s to come, here is how the series will be structured:

Prioritize: Finding the Leaky Buckets
Triage: Fixing the Leaky Buckets
Fundamentals: Leveraging existing technologies (a few posts covering the major technology areas)
The Endpoint Security Program: Systematizing Protection
Incident Response: Responding to an endpoint compromise
Compliance: Documenting Endpoint Controls

As with all our research initiatives, we count on you to keep us honest. Check out each piece and provide your feedback. Tell me why I’m wrong, how you do things differently, or what we’ve missed.


Help a Reader: PCI Edition

One of our readers recently emailed me with a major dilemma. They need to keep their website PCI compliant in order to keep using their payment gateway to process credit card transactions. Their PCI scanner is telling them they have vulnerabilities, while their hosting provider tells them they are fine. Meanwhile our reader is caught in the middle, paying fines.

I don’t dare use my business e-mail address, because it would disclose my business name. I have been battling with my website host and security vendor concerning the non-PCI compliance of my website. It is actually my host’s IP address that is being scanned, and for several months it has had ONE Critical and at least SIX High Risk scan results. This has caused my Payment Gateway provider to start penalizing me $XXXX per month for non-PCI compliance. I wonder how long they will even keep me. When I contact my host, they say their system is in compliance. My security vendor says it is not. Each says I have to resolve the problem, although I am in the middle. Is there not a review board that can resolve this issue? I can’t do anything with my host’s system, and don’t know enough gibberish to even interpret the scan results. I have just been sending them to my host for the last several months.

There is no way this is the first or last time this has happened, or will happen, to someone in this situation. This sort of thing is bound to come up in compliance situations where the customer doesn’t own the underlying infrastructure, whether it’s a traditional hosted offering, an ASP, or the cloud. How do you recommend the reader – or anyone else stuck in this situation – should proceed? How would you manage being stuck between two rocks and a hard place?


Incite 3/31/2010: Attitude Is Everything

There are people who suck the air out of the room. You know them – they rarely have anything good to say. They are the ones always pointing out the problems. They are half-empty type folks. No matter what it is, it’s half-empty or even three-quarters empty. The problem is that my tendency is to be one of those people. I like to think it’s a personality thing. That I’m just wired to be cynical and that it makes me good at my job. I can point out the problems, and be somewhat constructive about how to solve them. But that’s a load of crap. For a long time I was angry and that made me cynical. But I have nothing to be angry about. Sure I’ve gotten some bad breaks, but show me a person who hasn’t had things go south at one point or another. I’m a lucky guy. My family loves me. I have a great time at work. I have great friends. One of my crosses to bear is to just remember that – every day. A good attitude is contagious. And so is a bad attitude. My first step is awareness. I make a conscious effort to be aware of the vibe folks are throwing. When I’m at a coffee shop, I’ll take a break and just try to figure out the tone of the room. I’ll focus on the folks in the room having fun, and try to feed off that. I also need to be aware when I need an attitude adjustment. Another reason I’m really lucky is that I can choose who I’m around most of the time. I don’t have to sit in meetings with Mr. Wet Blanket. And if I’m doing a client engagement with someone with the wrong attitude, I just call them out on it. What do I care? I’m there to do a job and people with a bad attitude get in my way. Most folks have to be more tactful, but that doesn’t mean you need to just take it. You are in control of your own attitude, which is contagious. Keep your attitude in a good place and those wet blankets have no choice but to dry up a little. And that’s what I’m talking about. – Mike. Photo credit: “Bad Attitude” originally uploaded by Andy Field Incite 4 U What’s that smell? 
Is it burnout? – Speaking of bad attitudes, one of the major contributors to a crappy outlook is burnout. This post by Dan Lohrmann deals with some of the causes and some tactics to deal with it. For me, the biggest issue is figuring out whether it’s a cyclical low, or it’s not going to get better. If it’s the former, appreciate that some days you feel like crap. Sometimes it’s a week, but it’ll pass. If it’s the latter, start looking for another gig, since burnout can result from not being successful, and not having the opportunity to be successful. That doesn’t usually get better by sticking around. – MR Screw the customers, save the shareholders – Despite their best attempts to prevent disclosure, it turns out that JC Penney was ‘Company A’ in the indictment against Albert Gonzalez (the one who didn’t work for the Bush administration). Penney fought disclosure of their name tooth and nail, claiming it would cause “confusion and alarm” and “may discourage other victims of cyber-crimes to report the criminal activity or cooperate with enforcement officials for fear of the retribution and reputational damage.” In other words, forget about the customers who might have been harmed – we care about our bottom line. Didn’t they learn anything from TJX? It isn’t like disclosure will actually lose you customers, $202 per record be damned. – RM Hard filters, injected – SQL injection remains a problem, as the attacks are difficult to detect and can often be masked, and detection scripts can be fooled by attackers gaming scanning techniques to find stealthy injection patterns. It seems like a fool’s errand: as you foil one attack, attackers just find some other syntax contortion that gets past your filter. Exploiting hard filtered SQL Injections is a great post on the difficulties of scanning SQL statements and how attackers work around defenses.
It’s a little more technical, but it walks through various practical attacks, explaining the motivations behind attacks and plausible defenses. The evolution of this science is very interesting. – AL The FTC can haz your crap seal – I ranted a few weeks ago about these web security seals, and the fact that some are bad jokes – just as a number of new vendors are rolling out their own shiny seals. Sure, there seems to be a lot of money in it, but promoting a web security seal as a panacea for customer data protection could get you a visit from some nice folks at the Federal Trade Commission. Except they probably aren’t that nice, as they are shutting down those programs. Especially when the vendor didn’t even test the web site – methinks that’s a no-no. Maybe I should ask ControlScan about that – as RSnake points out, they settled with the FTC on deceptive security seals. As Barnum said, there is a sucker born every minute. – MR The Google smells a bit (skip)fishy – Last week Google launched Skipfish. Even though I was on vacation I found a few minutes to download and try it out. From the Google documentation: “Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes … The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.” The tool is not bad, and it was pretty fast, but I certainly did not stress test it. But the question on my mind is ‘why’? And no, not “why would I use this tool”, but why
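The filter-evasion problem behind the “Hard filters, injected” item above is easy to demonstrate. Here is a minimal, hypothetical sketch (the filter and payloads are my own illustration, not taken from the linked post) of why keyword blacklists lose this game: one syntax contortion the regex author didn’t anticipate, and the payload sails through.

```python
import re

# Hypothetical naive blacklist filter: rejects input containing
# a few obvious SQL injection keyword patterns.
BLACKLIST = re.compile(r"\b(union\s+select|or\s+1=1|drop\s+table)\b", re.IGNORECASE)

def naive_filter(user_input: str) -> bool:
    """Return True if the input looks 'safe' to this filter."""
    return BLACKLIST.search(user_input) is None

# The filter catches the textbook payload...
assert not naive_filter("1 UNION SELECT password FROM users")

# ...but a trivial contortion slips past: many SQL dialects treat an
# inline comment as whitespace, so UNION/**/SELECT still parses as
# UNION SELECT on the server, yet never matches \s+ in the regex.
assert naive_filter("1 UNION/**/SELECT password FROM users")
```

Which is exactly the post’s point: pattern filters are a stopgap, and the durable fix is parameterized queries, which take the attacker’s input out of the SQL grammar entirely.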


How Much Is Your Organization Telling Google?

Palo Alto Networks just released their latest Application Usage and Risk Report (registration required), which aggregates anonymous data from their client base to analyze Internet-based application usage among their clients. For those of you who don’t know, one of their product’s features is monitoring applications tunneling over other protocols – such as P2P file sharing over port 80 (normally used for web browsing). A ton of different applications now tunnel over ports 80 and 443 to get through corporate firewalls. The report is pretty interesting, and they sent me some data on Google that didn’t make it into the final cut. Below is a chart showing the percentage of organizations using various Google services. Note that Google Buzz is excluded, because it was too new to collect a meaningful volume of data. These results are from 347 different organizations. Here are a few bits that I find particularly interesting: 86% of organizations have Google Toolbar running. You know, one of those things that tracks all your browsing. Google Analytics is up at 95% – which is 5% less than I expected. Yes, another tool that lets Google track the browsing habits of all your employees. 79% allow Google Calendar. Which is no biggie unless corporate info is going up there. Same for the 81% using Google Docs. Again, these can be relatively private if configured properly, and you don’t mind Google having access. 74% use Google Desktop – at least the part of Desktop that hits the Internet, since Palo Alto is a gateway product that can’t detect local system activity. Now look back at my post on all the little bits Google can collect on you. I’m not saying Google is evil – I just have major concerns with any single source having access to this much information. Do you really want an unaccountable outside entity to have this much data about your organization?


FireStarter: Nasty or Not, Jericho Is Irrelevant

It seems the Jericho Forum is at it again. I’m not sure what it is, but they are hitting the PR circuit talking about their latest document, a Self-Assessment Guide. Basically this is a list of “nasty” questions end users should ask vendors to understand if their products align with the Jericho Commandments. If you go back and search on my (mostly hate) relationship with Jericho, you’ll see I’m not a fan. I thought the idea of de-perimeterization was silly when they introduced it, and almost everyone agreed with me. Obviously the perimeter was changing, but it clearly was not disappearing. Nor has it. Jericho fell from view for a while and came back in 2006 with their commandments. Most of which are patently obvious. You don’t need Jericho to tell you that the “scope and level of protection should be specific and appropriate to the asset at risk.” Do you? Thankfully Jericho is there to tell us “security mechanisms must be pervasive, simple, scalable and easy to manage.” Calling Captain Obvious. But back to this nasty questions guide, which is meant to isolate Jericho-friendly vendors. Now I get asking some technical questions of your vendors about trust models, protocol nuances, and interoperability. But shouldn’t you also ask about secure coding practices and application penetration tests? Which is a bigger risk to your environment: the lack of DRM within the system or an application that provides root to your entire virtualized datacenter? So I’ve got a couple questions for the crowd: Do you buy into this de-perimeterization stuff? Have these concepts impacted your security architecture in any way over the past ten years? What about cloud computing? I guess that is the most relevant use case for Jericho’s constructs, but they don’t mention it at all in the self-assessment guide. Would a vendor filling out the Jericho self-assessment guide sway your technology buying decision in any way? Do you even ask these kinds of questions during procurement? 
I guess it would be great to hear if I’m just shoveling dirt on something that is already pretty much dead. Not that I’m above that, but it’s also possible that I’m missing something.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.