
Kill. IE6. Now.

I tend to be master of the obvious. Part of that is overcoming my own lack of cranial horsepower (especially when I hang out with serious security rock stars), but another part is the reality that we need someone to remind us of the things we should be doing. Work gets busy, shiny objects beckon, and the simple blocking and tackling falls by the wayside. And it's the simple stuff that kills us, as evidenced once again by the latest data breach study from Trustwave.

Over the past couple months, we've written a bunch of times about the need to move to the latest versions of the key software we run, like browsers and Adobe stuff. Just to refresh your memory:

• Microsoft IE Issues Reported: Adrian covered the Heise 0-day attack on IE6 and IE7 back in November.
• Low Hanging Fruit: Endpoint Security: I wrote about the need to keep software updated and patched.
• Security Strategies for Long Term, Targeted Threats: Rich responded to the APT hysteria with some actionable advice about dealing with the threats, which included moving off IE6.

Of course, the news item that got me thinking about reiterating such an obvious concept was that Google is pulling support for IE6 for all their products this year. Google Docs and Google Sites lose IE6 support on March 1. Not that you should be making decisions based on what Google is or is not supporting, but if ever there was a good backward-looking indicator, it's this. So now is the time.

To be clear, this isn't news. The echo chamber will beat me like a dog, because who the hell is still using IE6? Especially after our Aurora Borealis escapade. Right, plenty of folks, which brings me back to being master of the obvious. We all know we need to kill IE6, but it's probably still lurking somewhere in the dark bowels of your organization. You must find it, and you must kill it.

Not in the Excuses Business

Every time we say to do anything, we get the same consistent feedback: it's too hard, someone will be upset, an important application will break. It's time to man (or woman) up. No more excuses on this one. Running IE6 is just unacceptable at this point. If your application requires IE6, then fix it. If your machines are too frackin' old to run IE7 or IE8 with decent performance, then get new machines. If you only get to see those remote users once a year to update their machines, bring them in for a refresh now.

Seriously, it's clear that IE6 is a lost cause. Google has given up on it, and they won't be the last. You can be proactive in how you manage your environment, or you can wait until you get pwned. But if you need the catalyst of a breach to get something simple like this done in your environment, then there's something fundamentally wrong with your security program. But that's another story for another day.

Comment from Rich: As someone who still develops web sites from time to time, it kills me every time I need to add IE6 compatibility code. It's actually pretty hard to make anything modern work with IE6, and it significantly adds to development costs on more complex sites. So my vote is also to kill it!
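One cheap way to hunt down lingering IE6 in the dark bowels of your organization is to mine your intranet web servers' access logs for IE6 user-agent strings. Here is a minimal sketch, assuming Apache/nginx combined-format logs where the client IP is the first field; treat the results as leads to chase down, not proof:

```python
import re
import sys
from collections import Counter

# "MSIE 6.x" appears in the quoted User-Agent field of a combined-format
# log line. Some old Opera builds spoofed IE6, so verify hits manually.
IE6_RE = re.compile(r"MSIE 6\.\d")

def find_ie6_clients(log_path):
    """Count requests per client IP where the User-Agent claims IE6."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if IE6_RE.search(line):
                ip = line.split()[0]  # client IP is the first field
                hits[ip] += 1
    return hits

if __name__ == "__main__":
    for ip, count in find_ie6_clients(sys.argv[1]).most_common():
        print(f"{ip}\t{count}")
```

Run it against a week of logs and you have a prioritized list of machines (or NATed subnets) to go visit with an upgrade disk.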


The NSA Isn’t Evil (Even Working with Google)

The NSA is going to work with Google to help analyze the recent Chinese (probably) hack. Richard Bejtlich predicted this, and I consider it a very positive development. It's a recognition that our IT infrastructure is a critical national asset, and that the government can play a role in helping respond to incidents and improve security. That's how it should be: we don't expect private businesses to defend themselves from amphibious landings (at least in our territory), and the government has political, technical, and legal resources simply not available to the private sector.

Despite some of the more creative TV and film portrayals, the NSA isn't out to implant microchips in your neck and follow you with black helicopters. They are a signals intelligence collection agency, and we pay them to spy on as much international communication as possible to further our national interests. Think that's evil? Go join Starfleet. It's the world we live in. Even though there was some abuse during the Bush years, most of that was either ordered by the President or non-malicious (yes, I'm sure there was some real abuse, but I bet it was pretty uncommon). I've met NSA staff, and worked with plenty of three-letter agency types over the years, and they're just ordinary folk like the rest of us. I hope we'll see more of this kind of cooperation.

Now, the one concern is for you foreigners: the role of the NSA is to spy on you, and Google will have to be careful to avoid potentially uncomfortable questions from foreign businesses and governments. But I suspect they'll be able to manage the scope and keep things under control. The NSA probably pwned them years ago anyway.

Good stuff, and I hope we see more direct government involvement… although we really need a separate agency to handle these incidents, due to the conflicting missions of the NSA.

Note: for those of you who follow these things, there is clear political maneuvering by the NSA here. They want to own cybersecurity, even though it conflicts with their intel mission. I'd prefer to see another agency hold the defensive reins, but until then I'm happy for any .gov cooperation.


Friday Summary: February 5, 2010

I think I need to stop feeling guilty for trying to run a business. Yesterday we announced that we're trying to put together a list of end users we can run the occasional short survey past. I actually felt guilty that we will derive some business benefit from it, even though we give away a ton of research and advice for free, and the goal of the surveys isn't to support marketing, but primary research. I've been doing this job too long when I don't even trust myself anymore, and rip apart my own posts to figure out what the angle is. Jeez, it isn't like I felt guilty about getting paid to work on an ambulance.

It is weird to try to build a business where you maintain objectivity while giving away as much as possible for free. I think we're doing a good job of managing vendor sponsorship, thanks to our Totally Transparent Research process, which allows us to release most white papers for free, without any signup or paywall. We've had to turn down a fair few projects to stick with our process, but there are plenty of vendors happy to support good research they can use to help sell their products, without having to bias or control the content. We're walking a strange line between the traditional analyst model, media sponsorship, research department, and… well, whatever else it is we're doing. Especially once we finish up and release our paid products. Anyway, I should probably get over it and stop over-thinking things. That's what Rothman keeps telling me, not that I trust him either.

Back to that user panel: we'd like to run the occasional (1-2 times per quarter) short (less than 10 minutes) survey to help feed our research, and as part of supporting the OWASP survey program. We will release all the results for free, and we won't be selling this list or anything. If you are interested, please email us at survey@securosis.com. End users only (for now) please; we do plan on expanding to vendors later. If you are at a vendor and are responsible for internal security, that's also fine. All results will be kept absolutely anonymous. We're trying to give back and give away as much as we can, and I have decided I don't need to feel guilty for asking for a little of your time in exchange.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Adrian's Dark Reading post on Dealing with Weak Passwords.
• Rich's TidBITS article on iPads for the enterprise.
• Rich's Endpoint DLP article took the cover of Information Security magazine.
• Rich quoted by CNET on the Mac vs. PC security debate.
• At TidBITS, Pepper points out that the dead are staging a comeback in the ebook market: Zombie Authors Threaten Fiction Ebook Market, from the Grave! Brains, anyone?

Favorite Securosis Posts

• Rich: Adrian's post on Agile and SDL. Funny timing on this one, with Microsoft starting to release some new information on it.
• Adrian: Mike's Monitor Everything. I disagree with some of it, but there is so much good information that it's my fave this week.
• David Mortman: Analysis of Trustwave's 2010 Breach Report. More yummy yummy data.
• Mike: What Do DLP and Condoms Have in Common? Any time you can mention condoms on a corporate blog, it's a win. 'nuf said.
• Meier: Comments on Microsoft Simplified SDL. I was hoping Adrian would do a rundown when I saw this earlier, and I enjoyed how he broke it out.

Other Securosis Posts

• The NSA Isn't Evil (Even Working with Google)
• Database Security Fundamentals: Access & Authorization
• Need Brains. User Brains
• Incite 2/2/2010: The Life of the Party
• You Have to Buy Data Security Tools
• Pragmatic Data Security: Discover
• The Network Forensics (Full Packet Capture) Revival Tour
• Network Security Fundamentals: Default Deny (UPDATED)

Favorite Outside Posts

• Rich: Jeremiah's great post on why we need to break the web to secure it. This is one of the biggest problems we face on the web: the refusal to make important changes which would enable us to move forward, for fear of breaking older content. Not that we should break things willy-nilly, but many of the bits we are talking about breaking are easy to work around in terms of still providing users the same browsing experience. It's the ad networks that are the big problem.
• Adrian: Krebs on ATM Skimmers, parts 1 and 2. Very practical security tips.
• Mike: Kudos to Will Gragido, who makes a play for the fundamental building block of pragmatic philosophy – Accountability the non-Negotiable Asset. Keep in mind that accountability cuts both ways: you need to be accountable for meeting deliverables and managing expectations, and folks in your organization need to be accountable for not doing stupid things.
• David Mortman: Excerpts from Randy George's "Dark Side of DLP". "It's not just enough to recognize badness; someone has to be able to classify badness, with authority." Says so much about security, not just DLP.
• Chris: Twitter: real but malicious BitTorrent trackers harvesting accounts. Who knew Twitter had real security staff?
• Meier: How secure are you? Access was easy at 9 out of 10 buildings. It was easy for staff writers at the Orlando Sentinel; it's easy for anyone.

Project Quant Posts

• Project Quant: Database Security – WAF
• Project Quant: Database Security – Encryption
• Project Quant: Project Comments
• Project Quant: Database Security – Protect through Monitoring
• Project Quant: Database Security – Audit
• Project Quant: Database Security – Monitoring
• Project Quant: Database Security – Open Question to Database Security Community
• Project Quant: Database Security – Shield

Top News and Posts

• House passes cybersecurity bill. This hit right as we were going to press, so we'll provide analysis later.
• PGP Acquires TC TrustCenter & Chosen Security. If a PKI falls in the woods, does anyone hear it?
• David Litchfield hangs up the gloves. David is an exceptional researcher who was a powerful counterbalance to Oracle marketing. Sad to


Comments on Microsoft Simplified SDL

I spent the last couple hours poring over the Simplified Implementation of the Microsoft SDL. I started taking notes and making comments, and realized that I have so much to say on the topic it won't fit in a single post. I have been yanking stuff out of this one and trying to just cover the highlights, but I will have a couple follow-ups as well.

Before I jump into the details and point out what I consider a few weaknesses, let me just say that this is a good outline. In fact, I will go so far as to say that if you perform each of these steps (even half-assed), your applications will be more secure. Much more secure, because the average software development shop is not performing these functions. There is a lot to like here, but full adoption will be difficult, due to the normal resistance to change of any software development organization.

Before I come across as too negative, let's take a quick look at the outline and what's good about it. For the sake of argument, let's agree that the complexity of Microsoft's full program is the motivation for this simplified implementation of SDL. Lightweight, agile, simple, and modular are common themes for every development tool, platform, and process that enjoys mainstream success. Security should be no different, so let's say this process variant meets our criteria. Microsoft's Simplified Implementation of the Security Development Lifecycle (SDL) is a set of companion steps to augment software development processes. From Microsoft's published notes on the subject, it appears they picked the two most effective security tasks from each phase of software development. The steps are clear, and their recommendations align closely with the web application security research we performed last year.

What I like about it:

• Phased Approach: The phases map well to most development processes. Using Microsoft's recommendations, I can improve every step of the development process, and focus each person's training on the issues they need to account for.
• Appropriate Guidelines: Microsoft's recommendations on tools and testing are sound and appropriate for each phase.
• Education: The single biggest obstacle for most development teams is ignorance of how threats are exploited and what they are doing wrong. Starting with education covers the biggest problem first.
• Leaders and Auditors: Appointing a 'Champion' tasks someone with the responsibility to improve security, and having an auditor should ensure that the steps are being followed effectively.
• Process Updates and Root Cause Analysis: This is a learning effort. No offense intended, but the first cycle through the process is going to be as awkward as a first date. Odds are you will modify, improve, and streamline everything the second time through.

Here's what needs some work:

• Institutional Knowledge: In the implementation phase, how do you define what an unsafe function is? Microsoft knows, because they have been tracking and classifying attacks on their applications for a long time. They have deep intelligence on the subject. They understand the specifics and the threat classes. You? You have the OWASP Top Ten, but probably less understanding of your own code. Maybe you have someone on your team with great training or practical experience. But this process will work better for Microsoft, because they understand what sucks about their code and how attackers operate. Your first code analysis will be a mixed bag. Some companies have great engineers who find enough to keep the entire development organization busy for ten years. Others find very little and depend on tools to tell them what to fix. Your effectiveness will come with experience and a few lumps on your forehead. (A starter sketch for flagging classically unsafe calls appears at the end of this post.)
• Style: When do you document safe vs. unsafe style? Following the point above, your development team, as part of their education, should have a security style guide. It's not just what you get taught in the classroom, but proper use of the specific tools and platforms you rely on. New developers come on board all the time, and you need documentation so they can learn the specifics of your environment and style.
• Metrics: What are the metrics to determine accountability and effectiveness? The Simplified Implementation mentions metrics as a way to guide process compliance, and retrospective metrics to help gauge what is effective. Metrics are also the way to decide what a risky interface or unsafe function actually is. But the outline only pays lip service to this requirement.
• Agile: The people who most need this are web developers using Agile methods, and this process does not necessarily map to those methods, as I mentioned earlier.

And the parts I could take or leave:

• Tools: Is the latest tool always the best tool? To simplify their process, Microsoft introduced a couple security oversimplifications that might help or hurt. New versions of linkers and compilers have bugs too.
• Threat Surface: Statistically speaking, if you have twice as many objects and twice as many interfaces, you will generally make more mistakes and have more errors. I get the theory. In practice, or at least in my experience, threat surface is an overrated concept. I find the real issue is test coverage of new APIs and functions, which is why you prioritize tests and manual inspection for new code over old code.
• Prioritization: Microsoft has a bigger budget than you do. You will need to prioritize what you do, which tools to buy first, and how to roll out tools in production. Some analysis is required to determine which training and products will be most effective.

All in all, a good general guide to improving development process security, and they have reduced the content to a manageable length. This provides a useful structure, but there are some issues with applying this type of framework to real-world programming environments, which I'll touch on tomorrow.
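As promised above, a concrete starting point for the 'unsafe function' problem: if you lack Microsoft's attack intelligence, you can at least sweep your codebase for classically dangerous C runtime calls, in the spirit of Microsoft's banned-function list. A minimal sketch; the function list is illustrative rather than exhaustive, and a real static analysis tool will do far better:

```python
import os
import re
import sys

# A starter list of classically unsafe C functions. Extend it with
# whatever your own root cause analysis turns up over time.
UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "scanf")
PATTERN = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

def scan_tree(root):
    """Yield (path, line_number, line) for each unsafe-looking call."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".c", ".h", ".cpp")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as src:
                for lineno, line in enumerate(src, 1):
                    if PATTERN.search(line):
                        yield path, lineno, line.strip()

if __name__ == "__main__":
    for path, lineno, line in scan_tree(sys.argv[1]):
        print(f"{path}:{lineno}: {line}")
```

Crude, but it gives a new team its first worklist, and the hit counts over time double as one of the metrics the outline asks for.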


Analysis of Trustwave’s 2010 Breach Report

Trustwave just released their latest breach (and penetration testing) report, and it's chock full of metrics goodness. Like the Verizon Data Breach Investigations Report, it's a summary of information based on their responses to real breaches, with a second section on results from their penetration tests. The breach section is the best part, and I already wrote about one lesson in a quick post on DLP. Here are a few more nuggets that stood out:

• It took an average of 156 days to detect a breach, and only 9% of victims detected the breach on their own. The rest were discovered by outside groups, including law enforcement and credit card companies.
• Chip and PIN was called out as more resilient against certain kinds of attacks, based on global analysis of breaches. Too bad we won't get it here in the US anytime soon.
• One of the biggest sources of breaches was remote access applications for point-of-sale systems. Usually these are in place for third-party support/management, and configured poorly.
• Memory parsing (scanning memory to pull sensitive information as it's written to RAM) was a very common attack technique. I find this interesting, since certain versions of memory parsing attacks have virtualization implications… and thus cloud implications. This is something to keep an eye on, and an area I'm researching more.
• As I mentioned in the post 5 minutes ago, encryption was only used once for exfiltration.

Now some suggestions for the SpiderLabs guys:

• I realize it isn't politically popular, but it would totally rock if you, Verizon, and other response teams started using a standard base of metrics. You can always add your own stuff on top, but that would really help us perform external analysis across a wider base of data points. If you're interested, we'd totally be up for playing neutral third party, and coordinating and publishing a recommended metrics base.
• The pen testing section would be a lot more interesting if you released the metrics, as opposed to a "top 10" list of issues found. We don't know whether issue 1 was 10x more common than issue 10, or 100x.

It's great that we have another data source, and I consider all these reports mandatory reading, and far more interesting than surveys.


Need Brains. User Brains

As part of our support for the Open Web Application Security Project (OWASP), we participate in their survey program, which runs quarterly polls on various application security issues. The idea is to survey a group of users to gain a better understanding of how they are managing or perceiving web application security. We also occasionally run our own surveys to support research projects, such as Project Quant. All these results are released free to the public, and if we're running the survey ourselves we also release the raw anonymized data.

One of our ongoing problems is getting together a good group of qualified respondents; it's the toughest part of running any survey. Although we post most of our surveys directly on the blog, we would also like to run some closed surveys so we can maintain consistency over time.

We are going to try putting together a survey board of people in end-user organizations (we may also add a vendor list later) who are willing to participate in the occasional survey. There would be no marketing to this list, and no more than 1-2 short (10 minutes or less is our target) surveys per quarter. All responses will be kept completely anonymous (we're trying to set it up to scrub the data as we collect it), and we will return the favor to the community by releasing the results and raw data wherever possible. We're also working on other ideas to give back to participants, such as access to pre-release research, or maybe even free Q&A emails/calls if you need advice on something.

No marketing. No spin. Free data.*

If you are interested, please send an email to survey@securosis.com and we'll start building the list. We will never use any email addresses sent to this project for anything other than these occasional short surveys. Private data will never be shared with any outside organization. We obviously need to hit a certain number of participants to make this meaningful, so please spread the word.

*Obviously we get some marketing for ourselves out of publishing data, but hopefully you don't consider that evil or slimy.
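For those wondering what "scrub the data as we collect it" might look like in practice, here is a minimal sketch of the idea: replace direct identifiers with salted, non-reversible tokens before anything is stored. The field names and salt handling are hypothetical, purely for illustration:

```python
import hashlib
import hmac

# Secret salt kept out of the data store; destroy or rotate it and the
# mapping from token back to respondent is gone for good.
SALT = b"load-from-a-secrets-store-not-source-code"

def pseudonymize(email: str) -> str:
    """Turn an email address into a stable, non-reversible token, so
    responses can be correlated across surveys without keeping the address."""
    return hmac.new(SALT, email.lower().encode(), hashlib.sha256).hexdigest()[:16]

def scrub(response: dict) -> dict:
    """Strip direct identifiers from a raw survey response (hypothetical fields)."""
    clean = dict(response)
    clean["respondent"] = pseudonymize(clean.pop("email"))
    clean.pop("ip_address", None)  # drop fields we never need to keep
    return clean
```

The point is that anonymization happens at collection time, so the raw data we later release never contained identities in the first place.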


Database Security Fundamentals: Access & Authorization

This is part 2 of the Database Security Fundamentals series. In part 1 I provided an overview; here I will cover basic access and authorization issues. First, the basics:

• Reset Passwords: Absolutely the first step is to change all default passwords. If I need to break into a database, the very first thing I try is to log into a known account with a default password. Simple, fast, and it rarely gets noticed. You would be surprised (okay, maybe not surprised, but definitely disturbed) at how often the default SA password is left in place.
• Public & Demonstration Accounts: If you are surprised by default passwords, you would be downright shocked at how effectively a skilled attacker can leverage 'Scott/Tiger' and similar demonstration accounts to take control of a database. Relatively low levels of permission can be parlayed into administrative functions, so lock out unused accounts or delete them entirely. Periodically verify that they have not reverted because of a re-install or account reset.
• Inventory Accounts: Inventory the accounts stored within the database. You should already have reset critical DBA accounts and locked out unneeded ones, but re-inventory to ensure you did not miss any. There are always service accounts and, on some database platforms, specific login credentials for add-on modules. Standard accounts created during database installation are commonly exploited, providing access to data and database functions. Keep a list so you can compare over time.
• Password Strength: There is lively debate about how much strong passwords and password rotation improve security; keystroke loggers and phishing attacks ignore both. On the other hand, the fact that there are ways around these precautions doesn't mean they should be skipped, so my recommendation is to activate some password strength checks for all accounts (a minimal sketch of such a check appears after this list). Having run penetration tests on databases, I can tell you from first-hand experience that weak passwords are easy to guess; with a little time and an automated login program you can break most in a matter of hours. If I have a few live databases, I can divide the password dictionary and run the checks in parallel, for a linear time savings. This is really easy to set up, and a basic implementation takes only a couple minutes. A couple more characters of required password length, plus a requirement for numbers or special characters, make guessing substantially more difficult.
• Authentication Methods: Choose domain authentication or database authentication, whichever works for you. I recommend domain authentication, but the point is to pick one and stick with it. Do not mix the two, or confusion and shifting responsibilities will create security gaps later, and cleaning up those messes is never fun. Do not rely on the underlying operating system for authentication: that sacrifices separation of duties, and an OS compromise would automatically provide control over the data and database as well.
• Educate: Educate users on the basics of password selection and data security. Teach them how to pick a word or phrase that is easy to remember, such as something they see visually each day, or perhaps something from childhood. Then show them simple substitutions of letters with special characters and numbers. It makes the whole experience more interesting and less of a bureaucratic annoyance, and will reduce your support calls.
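What might such a strength check look like? The logic is rarely more complicated than this Python sketch; the thresholds are illustrative, and most database platforms let you enforce equivalent rules natively (Oracle's password verify function, for example):

```python
import re

def password_ok(password: str, min_length: int = 10) -> bool:
    """Basic strength gate: minimum length plus character-class variety.
    Each additional required class multiplies the guessing space."""
    if len(password) < min_length:
        return False
    classes = [
        re.search(r"[a-z]", password),         # lowercase letters
        re.search(r"[A-Z]", password),         # uppercase letters
        re.search(r"[0-9]", password),         # digits
        re.search(r"[^a-zA-Z0-9]", password),  # special characters
    ]
    return sum(1 for c in classes if c) >= 3

assert password_ok("Tr1ck-h0rse!")
assert not password_ok("tiger")  # short, single character class
```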
All these steps are easy to do; everything above you should be able to tackle in an afternoon for one or two critical databases. Once you have accomplished them, the following steps are slightly more complicated but offer greater security. Unfortunately, this is where most DBAs stop, because these measures make administration more difficult.

• Group and Role Review: List out user permissions, roles, and groups to see who has access to what. Ideally, review each account to verify users have just enough authorization to do their jobs. This is a great idea, which I always hated. First, it requires a few recursive queries to build the list, and second, the list is huge for any non-trivial number of users. And actually using the list to remove 'extraneous' permissions gets you complaining users, such as receptionists who run reports on behalf of department administrators. The whole process is time consuming and often unpleasant, but do it anyway. How rigorously you pursue excess rights is up to you, but you should at least identify glaring issues where normal users have access to admin functions. For those of you with the opportunity to work with application developers, this is your chance to advise them to keep permission schemes simple. (A sketch of a first-pass query appears after this list.)
• Division of Administrative Duties: If you did not hate me for the previous suggestion, you probably will after this one: divide administrative tasks between different admins. Specifically, perform all platform maintenance under an account that cannot access the database, and vice versa. You need to separate the two, and this is really not optional. For small shops it seems ridiculous to log out as one user and log back in as another, but it negates the domino effect: when one account gets breached, it does not mean every system must be considered compromised. If you are feeling really ambitious, or your firm employs multiple DBAs, relational database platforms provide advanced access controls to segregate database admin tasks such as archival and schema maintenance, improving security and fraud detection.
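To make the Group and Role Review concrete: if your platform happens to be PostgreSQL, a first pass over roles, login rights, and memberships looks something like this sketch (psycopg2 assumed; the connection string is illustrative, and other platforms have equivalent catalog views):

```python
import psycopg2

# Assumes PostgreSQL; use a read-only account for the audit itself.
conn = psycopg2.connect("dbname=mydb user=audit_reader")
cur = conn.cursor()

# Who can log in, who is superuser, and which roles each role belongs to.
cur.execute("""
    SELECT r.rolname, r.rolsuper, r.rolcanlogin,
           ARRAY(SELECT m.rolname
                 FROM pg_auth_members am
                 JOIN pg_roles m ON m.oid = am.roleid
                 WHERE am.member = r.oid) AS member_of
    FROM pg_roles r
    ORDER BY r.rolsuper DESC, r.rolname
""")

for name, is_super, can_login, member_of in cur.fetchall():
    flag = "SUPERUSER!" if is_super else ""
    print(f"{name:30} login={can_login} roles={member_of} {flag}")

cur.close()
conn.close()
```

Dump the output somewhere you can diff it; comparing snapshots over time catches quietly granted privileges far faster than re-reviewing the whole list.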


What Do DLP and Condoms Have in Common?

They both work a heck of a lot better if you use them ahead of time.

I just finished reading the Trustwave Global Security Report, which summarizes their findings from incident response and penetration tests during 2009. In over 200 breach investigations, they encountered only one case where the bad guy encrypted the data during exfiltration. That's right, only once. 1. The big uno. This makes it highly likely that a network DLP solution would have detected, if not prevented, the other 199+ breaches.

Since I started covering DLP, one of the biggest criticisms has been that it can't detect sensitive data if the bad guys encrypt it. That's like telling a cop to skip the body armor, since the bad guy can just shoot them someplace else. Yes, we've seen cases where data was encrypted. I've been told that in the recent China hacks the outbound channel was encrypted. But based on the public numbers available, more often than not (in a big way) encryption isn't used. This will probably change over time, but we also have other techniques to try to detect other exfiltration methods.

Those of you currently using DLP need to remember that if you are only using it to scan employee email, it won't really help much either. You need to use promiscuous mode, and scan all outbound TCP/IP, to get full value. Also make sure you have it configured in true promiscuous mode, not locked to specific ports and protocols. This might mean adding boxes, depending on which product you are using.

Yes, I know I just used the words 'promiscuous' and 'condom' in a blog post, which will probably get us banned (hopefully our friends at the URL filtering companies will at least give me a warning).

I realize some of you are thinking, "Oh, great, but now the bad guys know, and they'll start encrypting." Probably, but that's not a change they'll make until their exfiltration attempts fail. No reason to change until then.
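For the curious, the content matching a network DLP box performs for card data is conceptually simple: a pattern match plus a checksum to weed out false positives. A minimal sketch of that core logic (real products layer protocol reassembly, proximity rules, and data fingerprinting on top of this):

```python
import re

# 13-16 digits, optionally separated by spaces or hyphens.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn check: doubling every second digit from the right must
    leave a sum divisible by 10 for a plausible card number."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(payload: str):
    """Yield candidate card numbers found in a chunk of outbound traffic."""
    for match in CANDIDATE.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            yield digits

# e.g. list(find_card_numbers("card: 4111 1111 1111 1111"))
# -> ['4111111111111111']  (the classic Visa test number)
```

The Luhn step is what separates a usable alert stream from one drowning in phone numbers and order IDs, which is why regex-only tools fare so poorly against purpose-built DLP.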


Incite 2/2/2010: The Life of the Party

Good Morning:

I was at dinner over the weekend with a few buddies of mine, and one of my friends asked (again) which AV package is best for him. It seems a few of my friends know I do security stuff, and inevitably that means when they do something stupid, I get the call. You can be this guy… Seriously… This guy's wife contracted one of the various Facebook viruses about a month ago, and his machine still wasn't working correctly. Right: it was slow and sluggish and just didn't seem like it used to be. I delivered the bad news that he needed to rebuild the machine.

But then we got talking about the attack vectors and how the bad guys do their evil bidding, and the entire group was spellbound. Then it occurred to me: security folks could be the most interesting guy (or gal) in the room at any given time. Like that dude in the Tecate beer ads. Being a geek, especially with security cred, could be used to pick up chicks and become the life of the party. Of course, the picking up chicks part isn't too interesting to me, but I know lots of younger folks with skillz who are definitely interested in those mating rituals young people or divorcees partake in.

Basically, like every other kind of successful pick-up artist, it's about telling compelling (and funny) stories. At least that's what I've heard. You have to establish common ground and build upon that. Social media attacks are a good place to start. Everyone (that you'd be interested in, anyway) uses some kind of social media, so start by telling stories of how spooky and dangerous it is. How they can be owned and their private data distributed all over China and Eastern Europe. And of course, you ride in on your white horse to save the day with your l33t hacker skillz.

Then you go for the kill by telling compelling stories about your day job. Like finding pr0n on the computer of the CEO (and having to discreetly clean it off). Or any of the million other stupid things people in your company do, which would actually be funny if you weren't the one with the pooper scooper expected to clean it up. You don't break confidences, but we are security people. We can tell anecdotes with the best of them. Like AndyITGuy, who shares a few humorous ones in this post. And those are probably the worst stories I've heard from Andy. But he's a family guy and not going to tell his "good" stories on the blog.

Now to be clear (and to make sure I don't sleep in the dog house for the rest of the week), I'm making this stuff up. I haven't tried this approach, nor did I have any kind of rap when I was on the prowl 17 years ago. But maybe this will work for you, Mr. (or Ms.) L33t Young Hacker, trying to find a partner or at least a warm place to hang for a night or three. Or you could go back to your spot on the wall, like pretty much every generation of hacker that's come before you. P!nk pretty much summed that up (video)…

– Mike

Photo credit: "Tecate Ad" covered by psfk.com

Incite 4 U

I'm not one to really toot my own horn, but I'm constantly amazed at the amount and depth of the research we are publishing (and giving away) on the Securosis blog. Right now we have 3-4 different content series emerging (DB Quant, Pragmatic Data Security, Network Security Fundamentals, Low Hanging Fruit, etc.), resulting in 3-4 high quality posts hitting the blog every single day. You don't get that from the big research shops with dozens of analysts. Part of this is the model, and a bigger part is focusing on doing the right thing. We believe the paywall should be on the endangered species list, and we are putting our money where our mouths are. If you do not read the full blog feed, you are missing out.

• Prognosis for PCI in 2010: Stagnation… – Just so I can get my periodic shout-out to Shimmy's new shop, let me point to a post on his corporate blog (BTW, who the hell decided to name it security.exe?) about some comments relative to what we'll see in PCI-land for 2010. Basically nothing, which is great news, because we all know most of the world isn't close to being PCI compliant. So let's give them another year to get it done, no? Oh yeah, I heard the bad guys are taking a year off. No, not really. I'm sure the 12 requirements are well positioned to deal with threats like APT, and even new-fangled web application attacks. Uh-huh. I understand the need to let the world catch up, but remember that PCI (and every other regulation) is a backward-looking indicator. It does not reflect what kinds of attacks are currently being launched your way, much less what's coming next. – MR
• This Ain't Your Daddy's Cloud – One of the things that annoys me slightly when people start bringing up cloud security is the tendency to reinforce the same old security ideas and models we've been using for decades, as opposed to, you know, innovating. Scott Lupfer offers some clear, basic cloud security advice, but he sticks with classic security terminology, when it's really the business audience that needs to be educated. Back when I wrote my short post on evaluating cloud risks, I very purposely avoided the CIA mantra so I could provide context for the non-security audience, rather than educating them on our fundamentals. With the cloud transition we have an opportunity to bake in security in ways we missed with the web transition, but we can't be educating people about CIA or Bell-LaPadula. – RM
• Cisco milking the MARS cash cow? – Larry


Pragmatic Data Security: Discover

In the Discovery phase we figure out where the heck our sensitive information is, how it's being used, and how well it's protected. Performed manually, or with too broad an approach, Discovery can be quite difficult and time consuming. In the pragmatic approach we stick to a very narrow scope and leverage automation for greater efficiency. A mid-sized organization can see immediate benefits in a matter of weeks to months, and usually finish a comprehensive review (including all endpoints) within a year or less.

Discover: The Process

Before we get into the process, be aware that your job will be infinitely harder if you don't have a reasonably up-to-date directory infrastructure. If you can't figure out your users, groups, and roles, it will be much harder to identify misuse of data or build enforcement policies. Take the time to clean up your directory before you start scanning and filtering for content. Also, the odds are very high that you will find something that requires disciplinary action. Make sure you have a process in place to handle policy violations, and work with HR and Legal before you start finding things that will get someone fired (trust me, those odds are pretty darn high).

You have a couple choices for where to start. Depending on your goals, you can begin with applications/databases, storage repositories (including endpoints), or the network. If you are dealing with something like PCI, stored data is usually the best place to start, since avoiding unencrypted card numbers in storage is an explicit requirement. For HIPAA, you might want to start on the network, since most of the violations in organizations I talk to relate to policy violations over email/web/FTP, due to bad business processes. For each area, here's how you do it:

• Storage and Endpoints: Unless you have a heck of a lot of bodies, you will need a Data Loss Prevention tool with content discovery capabilities (I mention a few alternatives in the Tools section, but DLP is your best choice). Build a policy based on the content definition you built in the first phase. Remember, stick to a single data/content type to start. Unless you are in a smaller organization and plan on scanning everything, you need to identify your initial target range: typically major repositories, or endpoints grouped by business unit. Don't pick something too broad, or you might end up with too many results to do anything with. You'll also need some sort of access to the server, either by installing an agent or through access to a file share. Once you get your first results, tune your policy as needed and start expanding your scope to scan more systems.
• Network: Again, a DLP tool is your friend here, although unlike content discovery you have more options to leverage other tools for basic analysis. They won't be nearly as effective, and I really suggest using the right tool for the job. Put your network tool in monitoring mode and build a policy to generate alerts using the same data definition we talked about for scanning storage. You might focus on just a few key channels to start, such as email, web, and FTP, with a narrow IP range/subnet if you are in a larger organization. This will give you a good idea of how your data is being used, identify bad business processes (like unencrypted FTP to a partner), and show which users or departments are the worst abusers. Based on your initial results, you'll tune your policy as needed. Right now our goal is to figure out where we have problems; we will get to fixing them in a different phase.
• Applications & Databases: Your goal is to determine which applications and databases have sensitive data, and you have a few different approaches to choose from. This is the part of the process where a manual effort can be somewhat effective, although it's not as comprehensive as automated tools. Simply reach out to different business units, especially the application support and database management teams, to create an inventory. Don't ask them which systems have sensitive data; ask them for an inventory of all systems. The odds are very high your data is stored in places you don't expect, so to check these systems, perform a flat file dump and scan the output with a pattern matching tool (a minimal sketch of such a scan appears after this list). If you have the budget, I suggest using a database discovery tool, preferably one with built-in content discovery (there aren't many on the market, as we'll mention in the Tools section). Depending on the tool, it will either sniff the network for database connections and then identify those systems, or scan based on IP ranges. If the tool includes content discovery, you'll usually give it some level of administrative access to scan the internal database structures.
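As referenced above, the pattern matching pass over a flat file dump doesn't need to be fancy to be useful. A minimal sketch that walks a directory of dump files and reports which ones contain candidate sensitive data; the patterns are illustrative, and a real discovery tool validates matches and does far better on accuracy:

```python
import os
import re
import sys

# Candidate patterns for the single data type you picked in Define.
# Illustratively: payment card numbers and US Social Security numbers.
PATTERNS = {
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_dumps(root):
    """Report which dump files contain candidate sensitive data, and how much.
    The goal at this stage is locating repositories, not extracting values."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as fh:
                text = fh.read()
            counts = {label: len(p.findall(text)) for label, p in PATTERNS.items()}
            if any(counts.values()):
                yield path, counts

if __name__ == "__main__":
    for path, counts in scan_dumps(sys.argv[1]):
        print(path, counts)
```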
I just presented a lot of options, but remember we are taking the pragmatic approach. I don't expect you to try all this at once: pick one area, with a narrow scope, knowing you will expand later. Focus on wherever you think you might have the greatest initial impact, or where you have known problems. I'm not an idealist; some of this is hard work and takes time, but it isn't an endless process, and you will have a positive impact.

We aren't necessarily done once we figure out where the data is. For approved repositories, I really recommend you also re-check their security. Run at least a basic vulnerability scan, and for bigger repositories I recommend a focused penetration test. (Of course, if you already know a repository is insecure, you probably don't need to beat the dead horse with another check.) Later, in the Secure phase, we'll need to lock down the approved repositories, so it's important to know which security holes to plug.

Discover: Technologies

Unlike the Define phase, here we have a plethora of options. I'll break this into


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments, just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.