Tuesday, January 12, 2010

Revisiting Security Priorities

By Mike Rothman

Yesterday’s FireStarter was one of the two concepts we discussed during our research meeting last week. The other was to get folks to revisit their priorities, as we run headlong into 2010.

My general contention is that too many folks are focusing on advanced security techniques, while building on a weak or crumbling foundation: the network and endpoint security environment. With a little tuning, existing security investments can be bolstered and improved to eliminate a large portion of the low-hanging fruit that attackers target. What could be more pragmatic than using what you already have a bit better?

Of course, my esteemed colleagues pointed out that just because the echo chamber blathers about Adobe suckage and unsubstantiated Mac 0-days, that doesn’t mean the run-of-the-mill security professional is worried about this stuff. They reminded me that most organizations don’t do the basics very well, and that not too many mid-sized organizations have implemented an SDL to build secure code.

And my colleagues are right. We refocused the idea on taking a step back and making sure you are focusing on the right stuff for your organization. This process starts with getting your mindset right, and then you need to make a brutally honest assessment of your project list.

Understand that every organization occupies a different place along the security program maturity scale. Some have the security foundation in place and can plan to focus on the upper layers of the stack this year – things like database and application security. Maybe you aren’t there, so you focus on simple blocking and tackling that pundits and blowhards (like me!) take for granted, like patch management and email/web filtering.

All will need to find dollars to fund projects by pulling the compliance card. Rich, Adrian, and I did an interview with George Hulme on that very topic.

Security programs are built and operated based on the requirements, culture, and tolerance for risk of their organizations. Yes, the core pieces of a program (understand what needs to be protected, plan how to protect it, protect it, and document what you protected) are going to be consistent. But beyond that, each organization must figure out what works for them.

That starts with revisiting your assumptions. What’s changing in your business this year? Bringing on new business partners, introducing new products, or maybe even looking at new ways to sell to customers? All these have an impact on what you need to protect. Also decide if your tactics need to be changed. Maybe you need to adopt a more Pragmatic approach or possibly become more of a guerilla security leader. I don’t know your answer – I can only remind you to ask the questions.

Tactically, if you do one thing this week, go back and revisit your basic network and endpoint security strategy. Later this week, I’ll post a hit list of low-hanging fruit that can yield the biggest bang for the buck. Though I’m sure the snot-nosed kid running your network and endpoint stuff has everything under control, it never hurts to be sure.

Just don’t coast through another year of the same old, same old because you are either too busy or too beaten down to change things.

–Mike Rothman

Monday, January 11, 2010

Mercenary Hackers

By Adrian Lane

Dino Dai Zovi (@DinoDaiZovi) posted the following tweets this Saturday:

Food for thought: What if <vendor> didn’t patch bugs that weren’t proven exploitable but paid big bug bounties for proven exploitable bugs?

and …

The strategy being that since every patch costs millions of dollars, they only fix the ones that can actually harm their customers.

I like the idea. In many ways I really do. Much like an open source project, the security community could examine vendor code for security flaws. It’s an incredibly progressive viewpoint, which has the potential to save companies the embarrassment of bad security, while simultaneously rewarding some of the best and brightest in the security trade for finding flaws. Bounties would reward creativity and hard work by paying flaw finders for their knowledge and expertise, but companies would only pay for real problems. We motivate sales people in a similar way, paying them extraordinarily well to do what it takes to get the job done, so why not security professionals?

Dino’s throwing an idea out there to see if it sticks. And why not? He is particularly talented at finding security bugs.

I agree with Dino in theory, but I don’t think his strategy will work for a number of reasons. If I were running a software company, why would I expect this to cost less than what I do today?

  • Companies don’t fix bugs until they are publicly exploited now, so what evidence do we have this would save costs?
  • The bounty itself would be an additional cost, admittedly with a PR benefit. We could speculate that potential losses would offset the cost of the bounties, but we have no method of predicting such losses.
  • Significant cost savings come from finding bugs early in the development cycle, rather than after the code has been released. For this scenario to work, the community would need to work in conjunction with coders to catch issues pre-release, complicating the development process and adding costs.
  • How do you define what is a worthwhile bug? What happens if I think it’s a feature and you think it’s a flaw? We see this all the time in the software industry, where customers are at odds with vendors over definitions of criticality, and there is no reason to think this would solve the problem.
  • This is likely to make hackers even more mercenary, as the vendors would be validating the financial motivation to disclose bugs to the highest bidder rather than the developers. This would drive up the bounties, and thus total cost for bugs.

A large segment of the security research community feels we cannot advance the state of security unless we can motivate the software purveyors to do something about their sloppy code. The most efficient way to deliver security is to avoid stupid programming mistakes in the application. The software industry’s response, for the most part, is issue avoidance and sticking with the status quo. They have many arguments, including the daunting scope of recognizing and fixing core issues, which developers often claim would make them uncompetitive in the marketplace. In a classic guerilla warfare response, when a handful of researchers disclose heinous security bugs to the community, they force very large companies to at least re-prioritize security issues, if not change their overall behavior.

We keep talking about the merits of ethical disclosure in the security community, but much less about how we got to this point. At heart it’s about the value of security. Software companies and application development houses want proof this is a worthwhile investment, and security groups feel the code is worthless if it can be totally compromised. Dino’s suggestion is aimed at increasing firms’ willingness to find and fix security bugs, with a focus on critical issues to help reduce their expense. But we have yet to get sufficient vendor buy-in to the value of security, and without solid evidence of value there is no catalyst for change.

–Adrian Lane

Database Password Pen Testing

By Adrian Lane

A few years back I worked on a database password checker at the request of my employer. A handful of customers wanted to periodically audit passwords, verifying that they complied with their password policies. As databases can use internal password management – outside the scope of primary access control systems like LDAP – they wanted auditing capabilities across the database systems. The goal was to identify weak passwords for service and general database user accounts. This was purely a research effort, but as I was recently approached by yet another IT person on this subject, I thought it was worth discussing the practical merits of doing this.

There were four approaches that I took to solve the problem:

  1. Run the pen test against the live database. I created a password dictionary and tried to brute force known accounts. The problems of user account discovery, how to handle databases that supported lockout on failed login attempts, load on the database, and even the regional nature of the dictionary made this a costly choice.

  2. Run the pen test against a mirrored or VM copy of the database. Similar to the above in approach, except I assumed I had credentialed access to the system. That way I could discover the local accounts and disable lockout if necessary. But it required keeping a copy of an entire production database, allocating resources for it, solving the logistics of getting the copy, and so on.

  3. Hash comparisons: Extract the password hashes from the database, replicate the hashing method of the database, pre-hash the dictionary, and run a hash comparison of the passwords. This assumes that I can get access to the hash table and account names, and that I can duplicate what the database does when producing the hashes. It also requires a very secure infrastructure to store the hashed passwords. (A rough sketch of this approach appears after this list.)

  4. Use a program to intercept the passwords being sent to the database. I tried login triggers, memory scanning, and network stack agents, all of which worked to one degree or another. This was the most invasive of the methods and needed to be used on the live platform. It solved the problem of finding user accounts and did not require additional processing resources. It did however violate separation of duties, as the code I ran was under the domain of the OS admin.
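
For what it’s worth, here is a rough sketch of what the hash comparison approach (option 3) looks like in practice, assuming you have already extracted account names and password hashes, and assuming – purely for illustration – an unsalted SHA-1 hashing scheme. Every database vendor hashes passwords differently (Oracle, for example, mixes in the username), and replicating that per-platform logic is the hard part, so treat hash_password as a placeholder.

```python
import hashlib

def hash_password(password: str) -> str:
    """Stand-in for the database's real hashing scheme (assumption:
    unsalted SHA-1). Replace with the vendor-specific algorithm."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

def audit_passwords(extracted_hashes: dict, dictionary: list) -> dict:
    """extracted_hashes: {account_name: stored_hash} pulled from the database.
    dictionary: candidate passwords. Returns accounts with weak passwords."""
    # Pre-hash the dictionary once, then compare -- no logins, no lockouts.
    precomputed = {hash_password(word): word for word in dictionary}
    weak = {}
    for account, stored in extracted_hashes.items():
        if stored in precomputed:
            weak[account] = precomputed[stored]
    return weak

if __name__ == "__main__":
    # Hypothetical extract: one weak service account, one hash that won't match.
    hashes = {"app_svc": hash_password("welcome1"), "report_ro": "0" * 40}
    print(audit_passwords(hashes, ["password", "welcome1", "123456"]))
```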

We even discussed forgetting the pen test entirely, forcing all passwords to be renewed at the next login, and using a login trigger to enforce password policies. But that was outside the project scope. If you have a different approach I would love to hear it.

As interesting as the research project was, I’m of the opinion that pen testing database passwords is a waste of time! While it was technically feasible to perform, it’s a logistical and operational nightmare. Even if I could find a better way to do this, is it worth it? A better approach leverages enforcement options for password length, attributes, and rotation built into the database itself. Better still, using external access control systems to support and integrate with database password management overcomes limitations in the database password options. Regardless, there are some firms that still want to audit passwords, and I still periodically run across IT personnel cobbling together routines to do this.

Technical feasibility issues aside, this is one of those efforts that, IMO, should not ever have gotten started. I have never seen a study that shows the value of password rotation, and while I agree that more complex passwords help secure databases from dictionary attacks, they don’t help with other attack vectors like key-loggers and post-it notes stuck to the monitor. This part of my analysis, included with the technical findings, was ignored because there was a compliance requirement to audit passwords. Besides, when you work for a startup looking to please large clients, logic gets thrown out the window: if the customer wants to pay for it, you build it! Or at least try.

–Adrian Lane

FireStarter: The Grand Unified Theory of Risk Management

By Rich

The FireStarter is something new we are starting here on the blog. The idea is to toss something controversial out into the echo chamber first thing Monday morning, and let people bang on some of our more abstract or non-intuitive research ideas.

For our inaugural entry, I’m going to take on one of my favorite topics – risk management.

There seem to be few topics that engender as much endless – almost religious – debate as risk management in general, and risk management frameworks in particular. We all have our favorite pets, and clearly mine is better than yours. Rather than debating the merits of one framework over the other, I propose a way to evaluate the value of risk frameworks and risk management programs:

  1. Any risk management framework is only as valuable as the degree to which losses experienced by the organization were accurately predicted by the risk assessments.
  2. A risk management program is only as valuable as the degree to which its loss events can be compared to risk assessments.

Pretty simple – all organizations experience losses, no matter how good their security and risk management. Your risk framework should accurately model those losses you do experience; if it doesn’t, you’re just making sh&% up. Note this doesn’t have to be quantitative (which some of you will argue anyway). Qualitative assessments can still be compared, but you have to test.

As for your program, if you can’t compare the results to the predictions, you have no way of knowing if your program works.
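
To make that concrete, here is a trivial (and entirely hypothetical) sketch of the comparison: line up the loss ranges a risk assessment predicted against the losses the incident register actually recorded, and see where the model held up. The categories and numbers are invented – the point is only that once you record both sides, the test is mechanical.

```python
# Hypothetical example: compare assessed loss ranges to observed losses.
predicted = {                      # from the annual risk assessment
    "lost_laptop":    (10_000, 50_000),
    "web_app_breach": (100_000, 500_000),
    "insider_fraud":  (0, 20_000),
}
observed = {                       # from the incident/loss register
    "lost_laptop":    35_000,
    "web_app_breach": 1_200_000,
    "insider_fraud":  0,
}

for category, (low, high) in predicted.items():
    actual = observed.get(category, 0)
    verdict = "within range" if low <= actual <= high else "model missed"
    print(f"{category:15s} predicted {low:>9,}-{high:<9,} actual {actual:>9,}  {verdict}")
```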

Here’s the ruler – time to whip ‘em out…

–Rich

Friday, January 08, 2010

Project Quant: Database Security - Configure

By Adrian Lane

The next task in the Secure phase is to configure the databases. In the Planning phase we gathered industry standards and best practices, developed internal policies, and defined the settings to standardize on. We also established the relative importance of policy violations, so we can separate critical alerts that require action from purely informational notifications. Then, in the Discovery phase, we gathered a list of databases, gained access to those systems, and implemented the rules we want to run (generally in the form of SQL queries), which are the instantiations of policies from the Planning phase. Now we take the results of our scans and figure out how to configure the databases.
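
As a concrete (and hypothetical) illustration of a rule in that form, the sketch below expresses a single policy – “LOCAL INFILE must be disabled” on MySQL – as a query plus the severity assigned during Planning. The connection handling is assumed (any DB-API driver will do); the structure of the rule, not the specific check, is the point.

```python
# Hypothetical configuration rule: each policy from the Planning phase becomes
# a query plus an expected result and a severity.
RULE = {
    "policy": "local_infile must be disabled",
    "severity": "critical",                        # assigned during Planning
    "query": "SHOW VARIABLES LIKE 'local_infile'",
    "expected": "OFF",
}

def run_rule(cursor, rule):
    """Run one rule against an open DB-API cursor and return a finding."""
    cursor.execute(rule["query"])
    row = cursor.fetchone()                        # e.g. ('local_infile', 'ON')
    actual = row[1] if row else None
    return {
        "policy": rule["policy"],
        "severity": rule["severity"],
        "passed": actual == rule["expected"],
        "actual": actual,
    }
```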

Assess

  • Variables: Time to review assessment reports per database. You will have multiple databases and perhaps different types, so add up the time for each.
  • Time to analyze failures, policy violations, and incorrect settings. Review the scans and identify policy/rule violations. Identify rules that failed to execute vs. actual misconfigured entries.

Prescribe

  • Time to gather itemized issues to address. Order according to criticality.
  • Time to select remediation options. Issues may be patching or configuration changes, or workaround options may be available. Specify appropriate response to each policy violation.
  • Time to allocate resources and create work orders. If workflow or trouble ticket systems are used, record necessary changes.

Fix

  • Time to reconfigure database. Make changes to tables and configuration files as prescribed.
  • Time to implement changes and reboot database server. Many configuration changes are not effective until the system restarts.

Rescan

  • Number of retries. If assessment must be rerun to verify configuration changes, include subsequent scans.
  • Variable: Total cost to rescan. This is the setup, scan, and distribution subset of the Assess phase. For failed policies, calculate cost of rescans.

Document

  • Time to document changes. Itemize changes to configuration.
  • Time to document accepted variances from prescribed configuration. If policies are not appropriate for a particular database or database type, note the exceptions.
  • Time to specify configuration, policy, and rule changes. If rules or SQL queries break due to changes, or there is a need to reflect policy changes in rules used, document required changes.

–Adrian Lane

Thursday, January 07, 2010

Friday Summary - January 8th, 2010

By Adrian Lane

I was over at Rich’s place this week while we were recording the Network Security Podcast. When we finished we were just hanging out, and Riley, Rich’s daughter, came walking down the hall. She’s only 9 months old, so I was more shocked to see her walking than she was to see me standing there in the hall. She looked up at me and sat down. I extended my hand thinking that she would grab hold of my fingers, but she just sat there looking at me. I heard Rich pipe up … “She’s not a dog, Adrian. You don’t need to let her sniff your hand to make friends. Just say hello.” Yeah. I guess I spend too much time with dogs and not enough time with kids. I’ll have to work on my little people skills. And the chew toy I bought her for Christmas was, in hindsight, a poor choice.

This has been the week of the Rothman for us. Huge changes in the new year – you probably noticed. But it’s not just here at Securosis. There must have been five or six senior security writers let go around the country. How many of you were surprised by the Washington Post letting Brian Krebs go? How freakin’ stupid is that!?! At least this has a good side in that Brian has his own site up (Krebs on Security), and the quality and quantity are just as good as before. Despite a healthy job market for security and security readership being up, I expect we will see the others creating their own blogs and security continuing to push the new media envelope.

And as a reminder, with the holidays over, Rich and I are making a big push on the current Project Quant metrics series: Quant for Database Security. We are just getting into the meat of the series, and much like patch management, we are surprised at the lack of formalized processes for database security, so I encourage your review and participation.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected Securosis makes a $25.00 donation to Hackers For Charity. This week’s best comment comes from ‘smithwill’ in response to Mike Rothman’s post on Getting Your Mindset Straight for 2010:

Bravo. Security common sense in under 1000 words. And the icing on the cake: buy our s#it and you won’t have to do anything line. Priceless.

Congratulations! We will contribute $25.00 to HFC in ‘smithwill’s name!

–Adrian Lane

Google, Privacy, and You

By Rich

A lot of my tech friends make fun of me for my minimal use of Google services. They don’t understand why I worry about the information Google collects on me. It isn’t that I don’t use any Google services or tools, but I do minimize my usage and never use them for anything sensitive. Google is not my primary search engine, I don’t use Google Reader (despite the excellent functionality), and I don’t use my Gmail account for anything sensitive. Here’s why:

First, a quote from Eric Schmidt, the CEO of Google (the full quote, not just the first part, which many sites used):

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place, but if you really need that kind of privacy, the reality is that search engines including Google do retain this information for some time, and it’s important, for example that we are all subject in the United States to the Patriot Act. It is possible that that information could be made available to the authorities.

I think this statement is very reasonable. Under current law, you should not have an expectation of privacy from the government if you interact with services that collect information on you, and they have a legal reason and right to investigate you. Maybe we should have more privacy, but that’s not what I’m here to talk about today.

Where Eric is wrong is the suggestion that you shouldn’t be doing it in the first place. There are many actions all of us perform from day to day that are irrelevant even if we later commit a crime, but could be used against us. Or used against us if we were suspected of something we didn’t commit. Or available to a bored employee.

It isn’t that we shouldn’t be doing things we don’t want others to see, it’s that perhaps we shouldn’t be doing them all in one place, with a provider that tracks and correlates absolutely everything we do in our lives. Google doesn’t have to keep all this information, but since they do it becomes available to anyone with a subpoena (government or otherwise). Here’s a quick review of some of the information potentially available with a single piece of paper signed by a judge… or a curious Google employee:

  • All your web searches (Google Search).
  • Every website you visit (Google Toolbar & DoubleClick).
  • All your email (Gmail).
  • All your meetings and events (Google Calendar).
  • Your physical location and where you travel (Latitude & geolocation when you perform a search using Google from your location-equipped phone).
  • Physical locations you plan on visiting (Google Maps).
  • Physical locations of all your contacts (Maps, Talk, & Gmail).
  • Your phone calls and voice mails (Google Voice).
  • What you read (Search, Toolbar, Reader, & Books).
  • Text chats (Talk).
  • Real-time location when driving, and where you stop for food/gas/whatever (Maps with turn-by-turn).
  • Videos you watch (YouTube).
  • News you read (News, Reader).
  • Things you buy (Checkout, Search, & Product Search).
  • Things you write – public and private (Blogger [including unposted drafts] & Docs).
  • Your photos (Picasa, when you upload to the web albums).
  • Your online discussions (Groups, Blogger comments).
  • Your healthcare records (Health).
  • Your smarthome power consumption (PowerMeter).

There’s more, but what else do we care about? Everything you do in a browser, email, or on your phone. It isn’t reading your mind, but unless you stick to paper, it’s as close as we can get. More importantly, Google has the ability to correlate and cross-reference all this data.

There has never before been a time in human history when one single, private entity has collected this much information on a measurable percentage of the world’s population.

Use with caution.

–Rich

Project Quant: Database Security - Patch

By Adrian Lane

It’s time to move on to the ‘Secure’ phase of the process (other sections are DB Security Intro, Planning Part 1, Planning Part 2, & Discovery). The Secure phase is where we implement many of the preventative security measures and establish the secure baseline for database operations. First up is the database patching process.

As you may have read, Rich has already produced a detailed report on Quant for Patch Management metrics and processes, and that work is certainly applicable to what we are doing here. In essence I am going to use the same process, but reduce the level of detail in the metrics to focus on the areas where you will spend the majority of your resources, and omit anything not relevant to database patching. If you feel you need that level of detail for database patch management, I won’t discourage you from going back and using that report as a guide. For major revisions and releases, that version will provide the necessary granularity. For database security patches this process is more than adequate.

There are two types of DBAs out there: those who are more paranoid than busy, and those who are too busy to be paranoid. The latter group does what I like to call “patch and pray”: install the patch and pray it works. If it crashes your database you scramble to roll it back out and recover. I know a lot of DBAs for small businesses who use this model, and for the most part, the patches work and they get away with it. The other group sets up a test environment, creates acceptance test cases, tests and bundles their approved version, plans the rollout carefully, and finally executes. This is more typical for enterprises or firms where database downtime is simply not an option and the resources are available. Regardless of which model you follow, evaluation and testing will comprise the bulk of your effort in this phase.

Security patches are a little different from general product updates that fix bugs. If you are experiencing a functional problem with an application, you know for certain that you need a particular patch and already possess some understanding of how critical that issue is to your firm. With security patches, most DBAs may not be aware of what sort of exposure they have, or be able to assess risk based upon known exploits. If you don’t have a security group helping with the analysis, the evaluation process is often based on matching critical weaknesses to the database features used within the environment. If you find a critical vulnerability you patch right away; otherwise you wait for the next patch cycle.

Database vendors make it easy to locate and obtain patches. Security patches are well publicized and alert notices are commonly emailed to DBAs when they become available. Keep in mind that some of the database patches require updates to the underlying operating system kernel, libraries, or modules, and the evaluation process needs to cover those updates as well.
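
As a small illustration of the evaluation step, the sketch below pulls the version string from a database and compares it against the fixed-in version from a vendor advisory. The advisory contents here are invented, and the query shown is MySQL-specific – other platforms report their patch level differently – so treat this as a shape, not a tool.

```python
# Sketch: flag databases still running a version older than the advisory's fix.
# The advisory data is hypothetical; version parsing is deliberately naive.
ADVISORY = {"id": "EXAMPLE-2010-001", "fixed_in": (5, 1, 41)}  # e.g. MySQL 5.1.41

def parse_version(version_string):
    """Turn a string like '5.1.37-community-log' into (5, 1, 37)."""
    core = version_string.split("-")[0]
    return tuple(int(part) for part in core.split("."))

def needs_patch(cursor, advisory=ADVISORY):
    """Return True if the connected database predates the advisory's fix."""
    cursor.execute("SELECT VERSION()")               # MySQL; other platforms differ
    installed = parse_version(cursor.fetchone()[0])
    return installed < advisory["fixed_in"]
```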

Evaluate

  • Time to monitor sources for advisories – per DB type/per release: Review database vendor alerts and industry advisories.
  • Time to identify which patches are applicable per database: Not all security patches are necessary for your environment. Identify patches that correspond to database type/function in use, & OS platform; then evaluate based on vendor criticality.
  • Time to identify workarounds: Identify whether workarounds are available, and whether they are appropriate.
  • Time to determine priority: Determine your operational priority for patching.

Acquire

  • Time to acquire: Time required to locate and acquire patch(es).
  • Variables: Costs for maintenance, licensing or support services: Updates to vendor maintenance contracts. Cost for consultants or managed service providers.

Test & Approve

  • Time to develop test cases and criteria: Cost to develop functional, security, or acceptance test cases.
  • Time to establish test environment: Time required to locate and gain access to testing personnel, tools, and platforms needed to verify patches.
  • Variables: time to test: Time to run tests. May require multiple test sweeps depending on test cases, resources, and configuration.
  • Time to analyze test results.
  • Time to establish approved packages/versions: Time to package verified versions of database and platform patches.

Deploy & Confirm

  • Time to schedule and notify: Schedule personnel and resources; communicate database maintenance schedule to application users.
  • Time to install: Total time to take database offline, perform backups/snapshots, install patch, bring database back online, and reconnect applications.
  • Time to verify installation: Basic functional testing of core services and security tests.
  • Time to clean up: Remove temp files, database snapshots, or rollback files.

Document

  • Time to document: Workflow software, trouble ticket response, compliance change reports, and a record of what you did are all important aspects of this task.

–Adrian Lane

Getting Your Mindset Straight for 2010

By Mike Rothman

Speaking as a “master of the obvious,” it’s worth mentioning the importance of having a correct mindset heading into the new year. Odds are you’ve just gotten back from the holiday and that sinking “beaten down” feeling is setting in. Wow, that didn’t take long.

So I figured I’d do a quick reminder of the universal truisms that we know and love, but which still make us crazy. Let’s just cover a few:

There is no 100% security

I know, I know – you already know that. But the point here is that your management forgets. So it’s always a good thing to remind them as early and often as you can. Even worse, there are folks (we’ll get to them later) who tell your senior people (usually over a round of golf or a bourbon in some mahogany-laden club) that it is possible to secure your stuff.

You must fight propaganda with fact. You must point out data breaches, not to be Chicken Little, but to manage expectations. It can (and does) happen to everyone. Make sure the senior folks know that.

Compliance is a means to an end

There is a lot of angst right now (especially from one of my favorite people, Josh Corman) about the reality that compliance drives most of what we do. Deal with it, Josh. Deal with it, everyone. It is what it is. You aren’t going to change it, so you’d better figure out how to prosper in this kind of reality.

What to do? Use compliance to your advantage. Any new (or updated) regulation comes with some level of budget flexibility. Use that money to buy stuff you really need. So what if you need to spend some time writing reports with your new widget to keep the auditor happy. Without compliance, you wouldn’t have your new toy.

Don’t forget the fundamentals

Listen, most of us have serious security kung fu. Your organization probably tasks folks like you with fixing hard problems and deflecting attackers away from a lot of soft tissue, and leaves the perimeter and endpoints to the snot-nosed kid with his shiny new Norwich paper. That’s OK, but only if you periodically make sure things function correctly.

Maybe that means running Core against your stuff every month. Maybe it means revisiting that change control process to make sure that open port (which that developer just had to have) doesn’t allow the masses into your shorts.

If you are nailed by an innovative attack, shame on them. Hopefully your incident response plan holds up. If you are nailed by some stupid configuration or fundamental mistake, shame on you.

Widgets will not make you secure

Keep in mind the driving force for any vendor is to sell you something. The best security practitioners I know drive their projects – they don’t let vendors drive them. They have a plan and they get products and/or services to execute on that plan.

That doesn’t mean reps won’t try to convince you their widget needs to be part of your plan. Believe me, I’ve spent many a day in sales training helping reps to learn how to drive the sales process. I’ve developed hundreds of presentations designed to create a catalyst for a buyer to write a check. The best reps try to help you, as long as that involves making the payment on their 735i.

And even worse, as a reformed marketing guy, I’m here to say a lot of vendors will resort to bravado in order to convince you of something you know not to be true – like that a product will make you secure. Sometimes you see something so objectionable to the security person in you that it makes you sick.

Let’s take the end of this post from LogLogic as an example. For some context, their post mostly evaluates the recent Verizon DBIR supplement.

What does LogLogic predict for 2010? Regardless of whether, all, some, or none, of Verizon’s predictions come true, networks will still be left vulnerable, applications will be un-patched, user error will causes breaches in protocol, and criminals will successfully knock down walls.

But not on a LogLogic protected infrastructure.

We can prevent, capture and prove compliance for whatever 2010 throws at your systems. LogLogic customers are predicting a stress free, safe 2010.

Wow. Best case, this is irresponsible marketing. Worst case, this is clearly someone who doesn’t understand how this business works. I won’t judge (too much) because I don’t know the author, but still. This is the kind of stuff that makes me question who is running the store over there.

Repeat after me: A widget will not make me secure. Neither will two widgets or a partridge in a pear tree.

So welcome to 2010. Seems a lot like 2009 and pretty much every other year of the last decade. Get your head screwed on correctly. The bad guys attack. The auditors audit. And your management squeezes your budget.

Rock on!

–Mike Rothman

Wednesday, January 06, 2010

Incite - 1/6/2010 - The Power of Contrast

By Mike Rothman

Good Morning:

It’s been quite a week, and it’s only Wednesday. The announcement of Securosis “Plus” went extremely well, and I’m settling into my new digs. Seems like the last two days just flew by. As I was settling in to catch some zzzz’s last night, I felt content. I put in a good day’s work, made some progress, and was excited for what the next day had to bring. Dare I say it? I felt happy. (I’m sure I’ve jinxed myself for another 7 years.)

It reminds me of a lyric from Shinedown that really resonated:

There’s a hard life for every silver spoon
There’s a touch of grey for every shade of blue
That’s the way that I see life
If there was nothing wrong,
Then there’d be nothing right

-Shinedown, What a Shame

It’s about contrast. If I didn’t have less than stellar job experiences (and I’ve had plenty of those), clearly I couldn’t appreciate what I’m doing now. It’s also a big reason why folks who have it pretty good sometimes lose that perspective. They don’t have much bad to contrast against. Keep that in mind, and if you need a reminder of how lucky you are, head down to the food bank for a few hours.

The most surprising thing to me (in a positive way) about joining the team is the impact of having someone else look at your work, challenge it, and suggest ways to make it better. Yesterday I sent the team a post that will hit FUDSEC on Friday. The first draft was OK, but once Rich, Adrian, Mort and Chris Pepper got their hands on it and suggested some tuning, the post got markedly better. Then I got it.

Just to reinforce the notion, the quote in today’s InformationWeek Daily newsletter hit home as well:

If you want to go quickly, go alone.
If you want to go far, go together.

-African proverb

True dat. Have a great day.

-Mike

Incite 4 U

This week Mike takes the bulk of the Incite, but he did get some contributions from Adrian. Over the coming weeks, as we get the underlying systems in place, you’ll be getting Incite from more of the team. We’ll put our initials next to each snippet we write, just so you know who to send the nasty email to.

  1. Monetizing Koobface: I’m fascinated by how the bad guys monetize their malware, so this story on Dark Reading highlighting some research from Trend Micro was interesting. The current scheme du jour is fake anti-virus. It must be working since over the holiday I got a call from my FiL (Father in Law) about how he got these pop-ups about needing anti-virus. Thankfully he didn’t click anything and had already made plans to get the machine re-imaged. - MR

  2. Identity + Network = MUST: Gartner’s Neil MacDonald has a post entitled Identity-Awareness Should be a Feature, not a Product, where he’s making the point that as things virtualize and hybrid computing models prevail, it’s not an option to tie security policies to physical attributes. So pretty much all security products will need to tie into Active Directory, RADIUS and LDAP. Yes, I know most already do, but a while back IP to ID was novel. Now, not so much. - MR

  3. Puffery Indeed: I had a personal ban on blogging about the Cloud in 2009 as there were a lot of people doing a lot of talking but saying very little. This NetworkWorld post on “Tone-deaf Unisys official on why cloud computing rocks; Or what shouldn’t get lost in all the puffery over cloud technology” is the embodiment of the puffery. The point of the post - as near as I can tell - was to say companies need to “embrace cloud computing” and “security concerns are the leading cause of enterprise and individual users’ hesitancy in adopting cloud computing”. Duh! The problem is that the two pieces of information are based on unsubstantiated vendor press releases and double-wrapped in FUD. Richard Marcello of Unisys manages to pose cloud technologies as a form of outsourcing US jobs, and Paul Krill says these are a mid-term competitive requirement for businesses. Uh, probably not on either account. Still, giving them the benefit of the doubt, I checked the ‘survey’ that is supposed to corroborate hesitancy of Cloud adoption, but what you get is an unrelated 2007 survey on Internet trust. A subsequent ‘survey’ link goes to a Unisys press release for c-RIM products. WTF? I understand ‘Cloud’ is the hot topic to write about, but unless your goal is to totally confound readers while mentioning a vendor a bunch of times, just stop it with the random topic association. - AL

  4. Speeds and Feeds Baby: Just more of an observation because I’ve been only tangentially covering network security over the past few years. It seems speeds and feeds still matter. At least from the standpoint of beating your chest in press releases. Fortinet is the latest guilty party in talking about IPv6 thruput. Big whoop. It kills me that “mine is bigger than yours” is still used as a marketing differentiator. I’m probably tilting at windmills here a bit, since these filler releases keep the wire services afloat, so it’s not all bad. - MR

  5. Time for the Software Security Group: It’s amazing how we can get access to lots of data and still ignore it. Gary McGraw, one of the deans of software security, has a good summary of his ongoing BSIMM (Building Security In Maturity Model) research on the InformIT blog. He covers who should do software security, how big your group should be, and also how many software security folks there are out there (not enough). In 2010, band-aids (WAFs, etc.) will still prevail, but if you don’t start thinking about how to structurally address the issue – which means a PROGRAM and a group responsible for executing on that program – things are never going to improve. - MR

  6. Saving Private MySQL: Charles Babcock’s post on “MySQL’s Former Owner Can’t ‘Save’ It After Selling It” was thought provoking. It seems a “no-brainer” that, since Oracle owns MySQL, they should be allowed to do what they please with the code. But factoring in potential anti-competitive aspects of killing MySQL makes it a deeper decision. Charles makes the point that it is somewhat disingenuous to sell an open source product that is viewed as community property, and the seeming hypocrisy of the seller now complaining about the fate of the product. I have maintained that there is no reason for Oracle to kill MySQL off as it can drive upsell opportunities for the Oracle database if properly managed. Realistically speaking, fiefdoms within Oracle will fight for their turf, so all possibilities must be considered. I believe MySQL is too valuable to let wither and die. The piece is worth a read! - AL

  7. Attacking People: Rich just posted a good piece on Macworld about the typical scams Mac users see. Yes, they are the same as what non-Mac users see - phishing, identity theft, auction fraud, etc. I remarked on Twitter that it’s been the same for 10,000 years: folks stealing from folks. CSOAndy makes that point on his blog as well, in the context of the Twitter DNS attack before the holidays. No, DNSSEC would not have stopped this attack because it was an attack on people. Their DNS service got owned, therefore they did. So all the technology in the world is great, but people are still our weakest link, by far. - MR

  8. Beware the FUD: We live in a 24/7 world and that means the media is always looking for something to drive page views. Bill Brenner at CSO mentions 3 examples of stories that got a lot of airtime, but probably shouldn’t have because they were mostly crap. Like the Black Screen of Death, which wasn’t really a problem. PrevX let the story run for a couple of days and then called a “my bad.” Guess I don’t blame them, since it was generating plenty of press, though I’m not sure how admitting you were wrong impacts the credibility bank. He also calls out some Chicken Little behavior from Paul Kurtz and his cyber-Katrina scenario. I can just see 30,000 folks stuck in the Superdome without the ability to Tweet. Keep in mind, this is a bed of our own making. We like hyper-connectivity, but there is always a downside. - MR

–Mike Rothman

Tuesday, January 05, 2010

RSA Treks to Sherwood Forest and Buys the Archer

By Mike Rothman

EMC/RSA announced the acquisition of Archer Technologies for an undisclosed price. The move adds an IT GRC tool to EMC/RSA’s existing technologies for configuration management (Ionix) and SIEM/Log Management (EnVision).

Though EMC/RSA’s overall security strategy remains a mystery, they claim to be driving towards packaging technologies to solve specific customer use cases – such as security operations, compliance, and cloud security. This kind of packaging makes a lot of sense, since customers don’t wake up and say “I want to buy widget X today” – instead they focus on solving specific problems. The rubber meets the road based on how the vendor has ‘defined’ the use case to suit what its product does.

Archer as an IT GRC platform fills in the highest level of the visualization by mapping IT data to business processes. The rationale for EMC/RSA is clear. Buying Archer allows existing RSA security and compliance tools, as well as some other EMC tools, to pump data into Archer via its SmartSuite set of interfaces. This data maps to business processes enumerated within Archer (through a ton of professional services) to visualize process and report on metrics for those processes. This addresses one of the key issues security managers (and technology companies) grapple with: showing relevance. It’s hard to take security data and make it relevant to business leaders. A tool like Archer, properly implemented and maintained, can do that.

The rationale for Archer doing the deal now is not as clear. By all outward indications, the company had increasing momentum. They brought on Bain Capital as an investor in late 2008, and always claimed profitability. So this wasn’t a sale under duress. The Archer folks paid lip service to investing more in sales and marketing and obviously leveraging the EMC/RSA sales force to accelerate growth. The vendor ranking exercises done by big research also drove this outcome, as Archer faced an uphill battle competing against bigger players in IT GRC (like Oracle) for a position in the leader area. And we all know that you need to be in the leader area to sell to large enterprises.

Ultimately it was likely a deal Archer couldn’t refuse, and that means a higher multiple (as opposed to lower). The deal size was not mentioned, though 451 Group estimates the deal was north of $100 million (about 3x bookings) – which seems too low.

Customer Impact

IT GRC remains a large enterprise technology, with success requiring a significant amount of integration within the customer environment. This deal doesn’t change that because success of GRC depends more on the customer getting their processes in place than the technology itself working. Being affiliated with EMC/RSA doesn’t help the customer get their own politics and internal processes in line to leverage a process visualization platform.

Archer customers see little value in the deal, and perhaps some negative value since they now have to deal with EMC/RSA and inevitably the bigger organization will slow innovation. But Archer customers aren’t going anywhere, since their organizations have already bet the ranch and put in the resources to presumably make the tool work.

More benefit accrues to companies looking at Archer, since any corporate viability concerns are now off the table. Users should expect better integration between the RSA security tools, the EMC process automation tools, and Archer – especially since the companies have been working together for years, and there is already a middleware/abstraction layer in the works to facilitate integration. In concept anyway, since EMC/RSA don’t really have a sterling track record of clean and timely technology integration.

Issues

As with every big company acquisition, issues emerge around organizational headwinds and channel conflict. Archer was bought by the RSA division, which focuses on security and sells to the technology user. But by definition Archer’s value is to visualize across not just technology, but other business processes as well. The success of this deal will hinge on whether Archer can “break out” of the RSA silo and go to market as part of EMC’s larger bag of tricks.

Interestingly enough, back in May ConfigureSoft was bought by the Ionix group, which focuses on automating IT operations and seemed like a more logical fit with Archer. As a reminder of the hazards of organizational headwinds, just think back to ISS ending up within the IBM Global Services Group. We’ll be keeping an eye on this.

Issues also inevitably surface around channel conflict, especially relative to professional services. Archer is a services-heavy platform (more like a toolkit) that requires a significant amount of integration for any chance of success. To date, the Big 4 integrators have driven a lot of Archer deployments, but historically EMC likes to take the revenue for themselves over time. How well the EMC field team understands and can discuss GRC’s value will also determine ongoing success.

Bottom Line

IT GRC is not really a market – it’s the highest layer in a company’s IT management stack and only really applicable to the largest enterprises. Archer was one of the leading vendors and EMC/RSA needed to own real estate in that sector sooner or later. This deal does not have a lot of impact on customers, as this is not going to miraculously result in IT GRC breaking out as a market category. The constraint isn’t technology – it’s internal politics and process.

We also can’t shake the nagging feeling that shifting large amounts of resources away from security and into compliance documentation may not be a good idea. Customers need to ensure that any investment in a tool like Archer (and the large services costs to use it) will really save money and effort within the first 3 years of the project, and is not being done to the exclusion of security blocking and tackling. The truth is it’s all too easy for projects like this to under-deliver or potentially explode – adding complexity instead of reducing it – no matter how good the tool.

–Mike Rothman

Project Quant: Database Security Discovery

By Adrian Lane

We decided to slow this series down for the holidays, as we are at a point where participation from the user community is very important. With the new year we are kicking back into high gear, and encourage comments and critiques of the processes we are describing. Picking up where we left off, we are at the Discovery phase in the database security process, a critical part of scoping the overall work.

Personally, discovery and assessment is my favorite step in the database security process. It was the one that always yielded surprises for my team: enterprise databases we did not know about; small personal databases, perhaps even embedded in applications, that we did not know about; production data sets on test servers; tables with sensitive data copied into unsecured table-spaces; and cases where replication was turned on without our knowledge. That is over and above databases that were completely misconfigured – usually by a new DBA who did not know any better, but sometimes with settings intentionally disabled to make administration easier. I have had clean scans on a Monday, only to find on Friday that there were dozens of critical issues. And that is really what we want to determine in the Discovery phase.

Before we can act on the plan we developed in the previous section (Planning, Part 1 and Part 2), we must determine the state of the database environment. After all, you have to know what’s wrong before you can fix it. What databases are in your environment, what purposes do they serve, what data do they host, and how are they set up? The first step in this phase is to find the databases.

Enumerate Databases:

In this stage we determine the location of database servers in the organization.

  1. Plan: How are you going about scanning the environment? What parts of the process are automated vs. manual? Make sure you have clear guidelines, and refine the scope to the portions of IT and the database types of interest. Also note that the person who created the plan may not be the person who runs the scan, so make sure the data needed for subsequent steps (database name, port number, database type, etc.) is communicated, or this entire process will need to be run again.
  2. Setup: Acquire and install tools to automate the process, or map out your manual process. Configure for your environment, specifying acceptable network address and port ranges. Network segmentation will alter deployment. Databases have multiple connection options, so plan accordingly. If you are keeping scan results in a database, create the structures and configure it.
  3. Enumerate: Run your scan, or schedule repeat scanning. Capture the results and filter out unwanted information to keep the data in scope for the project. Record the results as your baseline for future trend reports. In practice you will run this step more than once, as you discover databases you did not know existed, determine that the credentials you were provided are insufficient, or find that subsequent steps require more information than you collected. Schedule scans to repeat at periodic intervals. If you are using a manual process, this consists of contacting business units to identify assets and manually assessing each system.
  4. Document: Format the data, generate reports, and distribute them. Use the results to seed the data discovery and assessment tasks.

Database discovery can be performed manually or automated. Segmented networks, regional offices, virtual servers, multi-homed hosts, remapped standard ports, and embedded databases are all common impediments you need to consider. If you choose to automate, most likely you will use a tool that examines network addresses and interrogates network ports, which may not yield every database instance but should capture the database installations. If you rely on network monitoring to discover databases, you will miss some. Regardless of your choice, you may not find everything, at least in the first sweep, so consider scanning more than once. In a manual process you will need to work with business units to identify databases, and perform some manual testing to identify any unreported databases. Understand what data you need to produce in this part of the process, as your results from database discovery will be used to feed data discovery and assessment.
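
If you automate enumeration, the simplest form is a TCP sweep of default listener ports. The sketch below (standard library only, addresses hypothetical) checks a handful of well-known database ports per host; it will miss instances on remapped ports or ones not listening on the network, which is exactly the limitation described above.

```python
import socket

# Default listener ports for common database platforms.
DB_PORTS = {1433: "SQL Server", 1521: "Oracle", 3306: "MySQL",
            5432: "PostgreSQL", 50000: "DB2"}

def enumerate_databases(hosts, timeout=0.5):
    """Return {host: [suspected database types]} based on open default ports."""
    findings = {}
    for host in hosts:
        hits = []
        for port, db_type in DB_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    hits.append(db_type)
            except OSError:
                pass                        # closed, filtered, or unreachable
        if hits:
            findings[host] = hits
    return findings

if __name__ == "__main__":
    # Hypothetical address range -- replace with the scope agreed in the Plan step.
    print(enumerate_databases([f"10.0.5.{i}" for i in range(1, 255)]))
```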

Identify Applications, Owners, and Data:

Now we take the databases we found and identify application dependencies, database owners, and the kinds of data stored.

  1. Plan: Develop a plan to identify the application dependencies, data owners, and data types/classifications for the databases enumerated in the previous stage. Determine manual vs. automated tasks. If you have particular requirements, specify and itemize the required data and assign tasks to qualified personnel. Define the data types that require protection. Determine data collection methods (monitoring, assessment, log files, content analysis, etc.) to locate sensitive information.
  2. Setup: Databases, data objects, data, and applications have ownership permissions that govern their access and use. For data discovery, create regular expression templates, locations, or naming conventions for the discovery scans. Test tools on known data types to verify operation.
  3. Identify Applications: For applications, catalog connection methods and service accounts where appropriate.
  4. Identify Database Owner(s): List database owners. Database owners provide credentials and accounts for dedicated scans, so determine who owns each database installation and obtain credentials.
  5. Discover Data: For data discovery, record location, schema, data type, and other metadata. Rule adjustments require re-scanning.
  6. Document: Generate reports for operations and data security.

In essence this series of tasks is multiple discovery processes. Discovering the applications that attach to a database and how it is used, and discovering what is stored within the database, are two separate efforts. Both can be performed by a credentialed investigation of the platform and system, or by observing network traffic. The former provides complete results at the expense of requiring credentials to access the database system, while passive network scanning is easier but provides incomplete results.

If you have existing security policies or compliance requirements, data discovery is a little easier, as you know what you are looking for. If this is the first time you are scanning databases for applications and data, you may not have precise goals, but the results still aid other activities. Defining discovery policies manually requires you to specify the data types you are interested in detecting, so planning and rule development take significant time in this phase.

Identification of applications and data provides the information necessary to determine security and regulatory requirements. This task defines not only the scope of the scanning in the next task, but also subsequent monitoring and reporting efforts in different phases of this project.
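
The “regular expression templates” mentioned above usually boil down to something like the following sketch: a few illustrative patterns for obviously sensitive formats, run over sampled column values. Real scans also weigh column names, data types, and false-positive rates, so treat both the patterns and the threshold as placeholders.

```python
import re

# Illustrative patterns -- tune for your own data types and false-positive tolerance.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_column(sample_values, threshold=0.5):
    """Label a column if at least `threshold` of sampled values match a pattern."""
    labels = []
    for name, pattern in PATTERNS.items():
        matches = sum(1 for value in sample_values if pattern.search(str(value)))
        if sample_values and matches / len(sample_values) >= threshold:
            labels.append(name)
    return labels

print(classify_column(["123-45-6789", "987-65-4321", "n/a"]))  # -> ['ssn']
```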

Assess Vulnerabilities & Configurations:

Database assessment is the analysis of database configuration, patch status, and security settings. It is performed by examining the database system both internally and externally – in relation to known threats, industry best practices, and IT operations guidelines. It is important to note that with assessment, there is a large divide between having requirements and having the tools or queries that gather the information. The setup portion of this task, particularly script development, takes far more time than the scans themselves.

  1. Define Scans: In a nutshell, this is where you define what you wish to accomplish. Compile a list of databases that need to be scanned, and determine requirements for different database types. Investigate best practices and security and compliance requirements, both internal and external. Assign scans to maintain proper separation of duties.
  2. Setup: Determine how you want to accomplish your goals. Which functions are to be automated and which are manual? Are these credentialed scans or passive? Download updated policies from tool and database vendors, and create custom policies where needed. Create scripts to collect the information, determine priority, and suggest remediation steps for policy violations.
  3. Scan: Scans are an ongoing effort, and most scanning tools provide scheduling capabilities. Collect and store the results.
  4. Distribute Results: Scan results will spotlight critical issues, variations from policy, and general recommendations. Filter unwanted data according to audience, then generate reports. Reporting includes feeding automated trouble ticket and workflow systems.

Database discovery, data discovery, and database security analysis are conceptually simple: find the databases, determine what they are used for, and figure out if they are secure. In practice they are much harder than they sound. If you run a small IT organization you probably know where your one or two database machines are located, and should have the resources to find sensitive data.

When it comes to security policies, databases are so complex and the threats evolve so rapidly that definition and setup tasks will comprise the bulk of the work for this entire phase. Good documentation, and a method for tracking threats in relation to policies and remediation information, are critical for managing assessments.

    Authorization and Access:

    In this stage we determine how access control systems are configured, then collect permissions and password settings.

    1. Define Access Requirements: Man hours to obtain the list of databases you need to assess access controls for. Man hours to discover how access controls are performed; which functions are at the host or domain level, and how these services are linked to database permissions. Determine what password checks are to be employed.
    2. Setup: For automated scans: cost to acquire, install and configure the tools. Time to obtain host/database permissions needed to perform manual and automated scans. Man hours needed to collect documented roles, groups, or service requirements for user of the database in later analysis. Cost of tools and time to generate report templates for stakeholders who will act upon scan results. If password penetration testing or dictionary attacks for weak passwords are being used, select a dictionary.
    3. Scan: Variable: Man hours to run scan for database users showing group and role memberships, and then to scan groups, roles, and service account membership for each database. Man hours to collect domain and host user account information and settings.
    4. Analyze & Report: Administrative roles need to be reviewed for separation of duties, both between administrative functions and between DBAs and IT administrators. Service accounts used by applications must be reviewed. User accounts need to be reviewed for group memberships and roles. Groups and roles must be reviewed to verify permissions are appropriate for business functions.

    Database authorization and access control is the front line of defense for data privacy and integrity, as well as providing control over database functions. It is also the most time-intensive of these tasks to check, as the process is multifaceted – it needs to account not only for the settings inside the database, but also for how those functions are supported by external host and domain level identity management services. This exercise is typically split between users of the database and administrators, as each has very different security considerations. Password testing can be time-consuming and, depending upon the method employed, may require additional database resources to avoid impacting production servers.
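
    Since much of this phase boils down to pulling user, role, and password information out of the database for review, here is a rough sketch of the collection step, again assuming a PostgreSQL target and the psycopg2 driver. The connection string and the tiny dictionary are hypothetical placeholders, and the password comparison only applies to legacy MD5 password storage – servers using SCRAM-SHA-256 cannot be checked this simply.

        # Rough sketch of the access-control collection step for PostgreSQL.
        # DSN and dictionary contents are illustrative placeholders.
        import hashlib
        import psycopg2

        DSN = "dbname=postgres user=scan_user"   # hypothetical scan account (pg_shadow requires superuser)

        ROLE_MEMBERSHIP = """
            SELECT r.rolname AS granted_role, m.rolname AS member
            FROM pg_auth_members am
            JOIN pg_roles r ON r.oid = am.roleid
            JOIN pg_roles m ON m.oid = am.member
            ORDER BY r.rolname, m.rolname
        """

        def collect():
            conn = psycopg2.connect(DSN)
            cur = conn.cursor()

            # Role and group membership, for later separation-of-duties review.
            cur.execute(ROLE_MEMBERSHIP)
            for granted_role, member in cur.fetchall():
                print(f"member: {member:<24} role: {granted_role}")

            # Superusers should be reconciled against documented DBA roles.
            cur.execute("SELECT rolname FROM pg_roles WHERE rolsuper")
            for (name,) in cur.fetchall():
                print(f"REVIEW: superuser account {name}")

            # Tiny dictionary check against stored hashes. PostgreSQL's legacy
            # format is 'md5' + md5(password || username); this does not apply
            # to SCRAM-SHA-256 verifiers.
            weak = ["password", "123456", "welcome1"]   # stand-in for a real dictionary
            cur.execute("SELECT usename, passwd FROM pg_shadow WHERE passwd IS NOT NULL")
            for usename, passwd in cur.fetchall():
                for guess in weak:
                    if passwd == "md5" + hashlib.md5((guess + usename).encode()).hexdigest():
                        print(f"WEAK PASSWORD: {usename} matches a dictionary word")

            cur.close()
            conn.close()

        if __name__ == "__main__":
            collect()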

    –Adrian Lane

    Monday, January 04, 2010

    Password Policy Disclosure

    By Adrian Lane

    I am no fan of “security through obscurity”. Peer review and open discourse on security have proven essential in the development of network protocols and cryptographic algorithms. Regardless, that does not mean I choose to disclose everything. I may disclose protocols and approach, but certain details I choose to withhold.

    Case in point: if I were Twitter, and wanted to reduce account hijacking by ridding myself of weak passwords which can be easily guessed, I would not disclose my list of weak passwords to the user community. As noted by TechCrunch:

    If you’re on Twitter, that means you registered an account with a password that isn’t terribly easy to guess. As you may know, Twitter prevents people from doing just that by indicating that certain passwords such as ‘password’ (cough cough) and ‘123456’ are too obvious to be picked. It just so happens that Twitter has hard-coded all banned passwords on the sign-up page. All you need to do to retrieve the full list of unwelcome passwords is take a look at the source code of that page. Do a simple search for ‘twttr.BANNED_PASSWORDS’ and voila, there they are, all 370 of them.

    The common attack vector is to perform a dictionary attack on known accounts, and a good dictionary is an important factor for success. It is much easier to build a good dictionary if you know for certain that many common passwords will not be present. Making the banned list easy to discover makes it much easier for someone to tune their dictionary. I applaud Twitter for trying to improve passwords and thereby making them tougher to guess, but targeted attacks just got better as well – because here is a list of 370 passwords I don’t have to test.
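
    To make the tuning argument concrete, here is a trivial sketch of how an attacker could prune a generic password dictionary using a published banned-password list. The file names are hypothetical; the point is simply that every banned entry is a guess that no longer needs to be made.

        # Trivial dictionary-pruning sketch; file names are hypothetical.
        def prune_dictionary(dictionary_path, banned_path, output_path):
            with open(banned_path) as f:
                banned = {line.strip() for line in f if line.strip()}

            with open(dictionary_path) as src, open(output_path, "w") as dst:
                for line in src:
                    candidate = line.strip()
                    # Skip passwords the site already refuses -- no point guessing them.
                    if candidate and candidate not in banned:
                        dst.write(candidate + "\n")

        if __name__ == "__main__":
            prune_dictionary("dictionary.txt", "banned.txt", "tuned_dictionary.txt")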

    –Adrian Lane

    Securosis + Security Incite Merger FAQ

    By Mike Rothman

    What are you announcing?

    Today, we are announcing that Mike Rothman is joining Securosis as Analyst/President (Rich remains Analyst/CEO). This is a full merger of Securosis and Security Incite.

    Why is this a good move for Securosis?

    Not to sound trite, but bringing on Mike is a no-brainer. This immediately and significantly broadens Securosis’ coverage and positions us to grow materially in ways we couldn’t do without another great analyst. There are very few people out there with Mike’s experience as an independent analyst and entrepreneur. Mike proved he could thrive as a one-man operation (his jump to eIQ wasn’t a financial necessity), completely shares our values, and brings an incredible range of experience to the table.

    Those who read our blog and free research reports gain additional content in areas we simply couldn’t cover. Mike will be leading our network and endpoint security coverage, as well as bringing over the Pragmatic CSO (sorry, you still have to pay for it) and the Daily Incite (which we’re restructuring a bit, as you’ll see later in this FAQ). Given Rich and Adrian’s coverage overlap, adding Mike nearly doubles our coverage… with our contributors (David Mortman, Dave Meier, and Chris Pepper) rounding us out even more. Mike is also a “high producer”, which means we’ll deliver even more free content to the community.

    Our existing clients now gain access to an additional analyst, and Mike’s clients now gain access to all of the Securosis resources and people. Aside from covering different technical areas, Mike brings “in the trenches” strategy, marketing, and business analysis experience that neither Rich nor Adrian has, as they specialize more on the tech side.

    In terms of the company, this also allows us to finally execute on the vision we first started building 18 months ago (Securosis has been around longer, but that’s when we came up with our long-term vision). As we’ll discuss in a second, we have some big plans for new products, and we honestly couldn’t achieve our goals without someone of Mike’s experience.

    Why is this a good move for Security Incite and Mike Rothman?

    Mike digs a lot deeper into his perspectives in a POPE (People, Opportunity, Product, Exit) analysis, but basically there was a limitation in the impact Mike could have and what he could do as a solo practitioner. Finding kindred spirits in Rich and Adrian enables us to build the next great IT research firm. This, in turn, is a prime opportunity to build products targeting a grossly underserved market (mid-market security and IT professionals), while continuing to give back to the community by publishing free research.

    This allows Mike to get back to his roots as a network security analyst and enables Securosis to provide full and broad coverage of all security and compliance topics, which benefits both end user and vendor clients. But mostly it’s as Rich said: a great opportunity to work with great guys and build something great.

    What is the research philosophy of Securosis? Will that change now that Mike Rothman is part of the team?

    Securosis’ core operating philosophy is Totally Transparent Research. That says it all. Bringing Mike to the team doesn’t change a thing. In fact, he wouldn’t have it any other way. Having produced (as a META analyst) and bought (as a vendor) “mostly opaque” research from the big research shops, Mike certainly understands the limitations of that approach and knows there is a better way.

    Who is your target customer?

    Securosis will target mid-market security and IT professionals. These folks have perhaps the worst job in IT. They have most of the same problems as larger enterprises, but far fewer resources and less funding. Helping these folks ensure and accelerate the success of their projects is our core objective for new information products and syndicated research offerings in 2010.

    Will all the research remain free and available on the Securosis blog?

    Yes, all of the Securosis primary research will continue to be published on the blog. Our research may be packaged up and available in document form from our sponsors, but the core research will always appear first on the blog. This is a critical leg of the Totally Transparent Research model. Our community picks apart our research and makes it better. That makes the end result more actionable and more effective.

    What kind of information products are you going to produce?

    We’re not ready to announce our product strategy quite yet, but suffice it to say we’ll have a family of products designed to accelerate security and compliance project success. The entry price will be modest and participating in a web-based community will be a key part of the customer experience.

    What about the existing retainer clients of Securosis? How will they be supported?

    Securosis will continue to support existing retainer customers. We’ve rolled out a new set of retainer packages for clients interested in an ongoing relationship. All our analysts participate in supporting our retainer clients.

    What’s going to happen to the Daily Incite?

    The Daily Incite is becoming the Securosis Incite and will continue to provide hard-hitting and hopefully entertaining commentary on the happenings in the security industry. Now we have six contributors adding their own “Incite” to the mix.

    We are also supplementing the Incite with other structured weekly blog posts including the “Securosis FireStarter,” which will spur discussion and challenge the status quo. We’ll continue producing the Securosis Weekly Summary to keep everyone up to date on what we’ve been up to each week.

    What about the Pragmatic CSO?

    The Pragmatic CSO is alive and well. You can still buy the book on the website and that isn’t changing. You may have noticed many of the research models Securosis has rolled out over the past year are “Pragmatic” in both name and nature. That’s not an accident. Taking a pragmatic approach is central to our philosophy of security and the Pragmatic CSO is the centerpiece of that endeavor.

    So you can expect lots more Pragmatism from Securosis over the coming years.

    –Mike Rothman

    Mike Rothman Joins Securosis

    By Adrian Lane

    Technology start-ups are unique organisms that affect employees very differently than other types of companies. Tech start-ups are about bringing new ideas to market. They are about change, and often founded on an alternative perspective of how to conduct business. They are more likely to leverage new technologies, hire unique people, and try different approaches to marketing, sales, and solving business problems. People who work at start-ups put more of themselves into their jobs, work a little harder, and are more impassioned about achievement and success. The entire frenetic experience is accelerated to the point where you compress years into months, providing an intimate level of participation not available at larger firms – the experience is addictive.

    When technology start-ups don’t succeed (the most common case), they take a lot out of their people. Failures result in layoffs or shutdown, and go from decision to unfortunate conclusion overnight. The technology and products employees have been pouring themselves into typically vanish. That’s when you start thinking about what went right and what went wrong, what worked and what didn’t. You think about what you would do differently next time. That process ultimately ends with some pent-up ideas and frustrations which – if you let them eat at you – eventually drive you back into the technology start-up arena. It took me 12 years and 5 start-ups to figure out that I was on a merry-go-round without end, unless I made the choice to step off and be comfortable with my decision. It took significant personal change to accept that no matter how good the vision, judgement, execution, and assembled team were, success was far from guaranteed.

    Where am I going with this? As you have probably read by now, 18 months ago Rich Mogull, Mike Rothman, and I planned a new IT research firm. Within a few weeks we got the bad news: Mike was going to join a small security technology company to get back on the merry-go-round. From talking with Mike, I knew he had to join them for all the reasons I mentioned above. I could see it in his face, and in the same position I would have done exactly the same thing. Sure, Securosis is a technology start-up as well, but it’s different. While I hoped Mike would be back within 24 months, I could not know for certain.

    If you are a follower of the Securosis blog, you have witnessed the new site launch in early 2009 and seen our project work evolve dramatically. Much of this was part of the original vision. We kept most of our original plans, jettisoned a few, streamlined others, and moved forward. We found some of our ideas just did not work that well, and others required more resources. We have worked continuously to sharpen our vision of who we are and why we are different, but we have a ways to go.

    I can say both Rich and I are ecstatic to have Mike formally join the team. It’s not change in my mind, but rather empowerment. Mike brings skills neither of us possesses, and a renewed determination that will help us execute on our initial vision. We will be able to tackle larger projects, cover more technologies, and offer more services. Plus I am looking forward to working with Mike on a daily basis!

    This is a pretty big day for us here, and we thought it appropriate to share some of the thoughts, planning, and emotions behind this announcement.

    –Adrian Lane