Securosis

Research

Friday Summary: November 11, 2010

When we came up with the Friday Summary, the idea was we’d share something personal that was either humorous or relevant to security, then highlight our content from the week, the best things we read on other sites, and any major industry news. The question is always where to draw the line on the personal stuff. I mean, it isn’t like this is Twitter. Hopefully this next story doesn’t cross the line. It’s not too personal, but especially for those of you with kids, it might bring a smile.

This morning I was getting my 20-month-old ready for daycare when I may have let loose a little toot. I’ve always known that is one of those things I’ll have to… put a cap on… once she gets older and knows what it is. But I’m practically a vegetarian, and that comes with certain consequences. Anyway, it went like this:

Me: [toot]
Daughter (looking me in the eye): “Daddy pooped!”
Me: Er.

Anyway, yet one more thing I can’t do in the comfort of my own home. Nope. This has nothing to do with security. Live with it.

Webcasts, Podcasts, Outside Writing, and Conferences

Rich speaking at the Cloud Security Alliance Congress next week. I’m co-presenting with Hoff again, and premiering my new Quantum Datum pitch on information-centric security for cloud computing. Haven’t been this excited to present new content in a long time.
Adrian’s Dark Reading post on NoSQL.

Favorite Securosis Posts

Rich: Baa Baa Blacksheep. Lather. Rinse. Get pwned. Repeat.
Mike Rothman: MS Atlanta: Protection Is Not Security. It’s always hard to wade through the hyperbole and marketing rhetoric, especially with a fairly technical topic. You are lucky Adrian is there to explain things.
Adrian: Baa Baa Blacksheep. Zscaler totally freakin’ missed the point.

Other Securosis Posts

LinkedIn Password Reset FAIL.
Incite 11/10/2010: Hallowreck (My Diet).
PCI 2.0: the Quicken of Security Standards.
React Faster and Better: Contain, Investigate, and Mitigate.
React Faster and Better: Trigger, Escalate, and Size up.
Security Metrics: Do Something.

Favorite Outside Posts

Rich: Verizon launches VERIS site to anonymously share incident data. I’m on the advisory board (unpaid) and a bit biased, but I think this is a great initiative.
Mike Rothman: Indiana AG sues WellPoint for $300K. $300K * 10-15 states could add up to some real money. This is just a reminder that getting your act together on disclosure remains important, unless you like contributing a couple hundred large to your state’s treasury (and everybody else’s, eventually).
Adrian Lane: All In One Skimmers. And yes, it’s really that easy. On a positive note, this may be the only piece of electronic gear not made in China.

Research Reports and Presentations

The Securosis 2010 Data Security Survey.
Monitoring up the Stack: Adding Value to SIEM.
Network Security Operations Quant Metrics Model.
Network Security Operations Quant Report.
Understanding and Selecting a DLP Solution.
White Paper: Understanding and Selecting an Enterprise Firewall.
Understanding and Selecting a Tokenization Solution.
Security + Agile = FAIL Presentation.

Top News and Posts

New Android Bug Allows for Silent Malicious App Installation.
A Database Administrator Disconnect Over Security Duties.
PGP Disk Encryption Bricks Upgraded Macs.
It’s time to get very serious about Java updates. Java is a friggin’ mess. You’ll hear more about it in the coming years… trust us.
Body Armor for Bad Web Sites.
Danger to IE users climbs as hacker kit adds exploit.
The Great Cyberheist. A great, in-depth article on Albert Gonzalez (the TJX/Heartland/etc. hacker).
Chrome, Pitted.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Asa, in response to Baa Baa Blacksheep: Firesheep is not the attack; it’s the messenger.


Incite 11/10/2010: Hallowreck (My Diet)

I fancy myself to have significant willpower. I self-motivate to work out pretty religiously, and in the blink of an eye gave up meat two and a half years ago – cold turkey (no pun intended). But I’m no superhero – in fact over the past few weeks I’ve been abnormally human. You see, I have a weakness for chips. Well, I actually have a number of food weaknesses, but chips are close to the top of the list. And it’s not like a few potato chips or tortilla chips in moderation will kill me. But that’s the rub – I don’t do ‘moderation’ very well.

As I mentioned last week, for XX1’s birthday weekend we had a number of parties, which meant we had to have snacks around for all the visiting kiddies. Rut roh. Yeah, the big bag of Kettle Chips from Costco. Untouched by the kids. Mowed through by me. Not in one sitting, so I guess I’m improving a bit. But over the course of 4-5 days I systematically dismantled the bag one bowl at a time. I guess I figured I couldn’t have a full-on binge if I did one bowl at a time, right? I’d have to get up and down to keep filling the bowl. I guess that’s how I got a portion of my exercise last week. Moderation, not so much.

And of course, hot on the heels of the parties was Halloween. So the kids came back with bags and bags of candy. Normally my sweet tooth is contained. Maybe once a week I’ll have some ice cream with chocolate syrup. But with Almond Joys and Butterfingers and Peanut M&Ms around, you might as well put a crack pipe in Lindsay Lohan’s bedroom. Even worse, the Boss (who can’t eat chocolate – food allergies) got a 56oz bag of M&Ms. My hands aren’t big, but they can grab about 30 M&Ms in one swoop. And they did. Arghhh.

So on Saturday night I put my foot down. The candy had to go. Thankfully they were collecting candy bars for the troops – and by the way, what the hell is that about? What better to send to the frackin’ desert than a couple of truckloads of chocolate bars?
I heard those don’t melt on the surface of the sun, and that speaks nothing of their nutritional value (or lack thereof). But I guess they don’t want us donating produce to send to the troops, even though that makes a lot more sense. So, I put my foot down and decreed that the candy must go. The kids dutifully sorted their candy and we let them keep about 10 pieces to be doled out over the next few weeks. The Boss also stashed away some surplus for movie days, so we could use that instead of paying $10 for a box of Raisinettes at the theater.

Maybe that’s kind of a dick move, getting rid of the candy because I struggle with self-control. But I’m cool with it. You don’t stock your fridge with beer if you are an alcoholic. You don’t buy a bong for a pothead’s birthday. And you don’t leave the Halloween candy in the house for those who struggle with their weight. Yes, I’ve made progress, and working out hard 5-6 days a week gives me some buffer, but having that stuff around is just stupid. So we won’t… – Mike.

Photo credits: “That’s a lot of Halloween candy. Bartell’s Drugs, Queen Anne, Seattle, 09/01/06” originally uploaded by photophonic

Incite 4 U

SCADA hysteria, coming to critical infrastructure near you: As I mentioned in my Storytellers post last week, I was at SecTor, and a lot of great discussion emerged from the conference. I have to give a shout-out to our contributor James Arlen, who from all indications did a great job of deflating the hype around SCADA attacks. Yes, Stuxnet happened and showed what is possible, but the sky isn’t falling. James points out that these systems are built for fault tolerance, and that compromising one control system isn’t likely to take down the power grid. Listen, I don’t want to minimize the risk – we all know these systems are vulnerable. But we do need to be wary of overhyping the issues, and James did a good job presenting both sides. His conclusion is key: he encouraged security professionals to take a deep breath and assess the situation rationally. Look for this one when it gets posted by the SecTor folks. – MR

Cutting out the middle man: The Wall Street Journal highlights how major websites are limiting the number of tracking technologies they allow leech ‘partner’ sites to embed into their web pages. Why? Are they doing this for privacy concerns? Hell, no! And they’re not doing it to save the children, lower your cholesterol, reduce carbon emissions, or any other smokescreen. It’s about money and control, as always. They have lost control by allowing marketing firms to directly gather customer data, resulting in less data and money for site owners. Some firms found tracking software they did not know about, while others found partners gathering information they did not even know was available. With many web sites desperately trying to stay in business, there will be significant investment in their own tracking software and data marts on the back end, in order to monetize their data directly. We’ll see them code their own, and we will also see spyware “marketing software” vendors selling more plug-ins. And user privacy will be exactly the same as before, only the web sites will get a bigger slice of the financial pie. But that’s okay … it says so in the new end user (you have no) privacy agreement. – AL

If you can’t beat ‘em, sue ‘em: There is no doubt that Microsoft once abused their position with anti-competitive practices. I don’t mean in terms of what features they included in Windows, but all that back-room dealing and wrangling with hardware providers. That’s why I’m so amused by Trend trying to drum up antitrust


LinkedIn Password Reset FAIL

It’s never a good day when you lose control over a significant account. First, it goes to show that none of us are perfect and we can all be pwned as a matter of course, regardless of how careful we are. This story has a reasonably happy ending, but there are still important lessons.

Obviously the folks at Facebook and Twitter take head shots every week about privacy/security issues. LinkedIn has largely gone unscathed. But truth be told, LinkedIn is more important to me than Facebook, and it’s close to Twitter. I have a bunch of connections and I use it fairly frequently to get contact info and to search for a person with the skills I need to consult on a research project. So I was a bit disturbed to get an email from a former employer today letting me know they had (somewhat inadvertently) gained control of my LinkedIn account.

It all started innocently enough. Evidently I had set up this company’s LinkedIn profile, so that profile was attached to my personal LinkedIn account. The folks at the company didn’t know who set it up, so they attempted to sign in as pretty much every marketing staffer who ever worked there. They did password resets on all the email addresses they could find, and they were able to reset my password because the reset notice went to my address there. They didn’t realize it wasn’t a corporate LinkedIn account I set up – it was my personal LinkedIn account. With that access, they edited the company profile and all was well. For them.

Interestingly enough, I got no notification that the password had been reset. Yes, that’s right. My password was reset and there was zero confirmation of that. This is a major privacy fail. Thankfully the folks performing the resets notified me right away. I immediately reset the password again (using an email address I control) and then removed the old email address at that company from my profile. Now they cannot reset my password (hopefully), since that email is no longer on my profile.
I double-checked to make sure I control all the email addresses listed on my profile. To be clear, I’m to blame for this issue. I didn’t clean up the email addresses on my LinkedIn profile after I left this company. That’s on me. But learn from my mishap and check your LinkedIn profile RIGHT NOW. Make sure there are no emails listed there that you don’t control. If there is an old email address, your password can be reset without your knowledge. Right, big problem.

LinkedIn needs to change their process as well. At a minimum, LinkedIn should send a confirmation email to the primary email on the account whenever a password is reset or profile information is changed. In fact, they should send an email to all the addresses on the account, because someone might have lost control of their primary address. I’m actually shocked they don’t do this already. Fix this, LinkedIn, and fix it now.
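The notification behavior Mike is asking for is trivial to build. Here is a minimal sketch of what a sane reset flow should emit – purely illustrative, with an invented Account shape and function names; this is obviously not LinkedIn’s actual code:

```python
# Illustrative sketch: a password reset should notify every address on the
# account, not just the one that requested the reset. The Account shape and
# function names here are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Account:
    primary_email: str
    other_emails: List[str] = field(default_factory=list)

def reset_notifications(account: Account, requested_via: str) -> List[Tuple[str, str]]:
    """Build (recipient, message) pairs for a completed password reset."""
    recipients = [account.primary_email] + list(account.other_emails)
    msg = (f"Your password was just reset via {requested_via}. "
           "If you did not request this, contact support immediately.")
    return [(addr, msg) for addr in recipients]
```

Had even this much been in place, the account owner would have known about the reset the moment it happened, whether or not he still controlled the old corporate address.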


Incident Response Fundamentals: Contain, Investigate, and Mitigate

In our last post, we covered the first steps of incident response – the trigger, escalation, and size up. Today we’re going to move on to the next three steps – containment, investigation, and mitigation.

Now that I’m thinking bigger picture, incident response really breaks down into three large phases. The first phase covers your initial response – the trigger, escalation, size up, and containment. It’s the part where the incident starts, you get your resources assembled and responding, and you take a stab at minimizing the damage from the incident. The next phase is the active management of the incident: we investigate the situation and actively mitigate the problem. The final phase is the cleanup, where we make sure we really stopped the incident, recover from the after effects, and try to figure out why this happened and how we can prevent it in the future. This includes the mop up (cleaning the remnants of the incident and making sure there aren’t any surprises left behind), your full investigation and root cause analysis, and QA (quality assurance) of your response process.

Now since we’re writing this as we go, I technically should have included containment in the previous post, but didn’t think of it at the time. I’ll make sure it’s all fixed before we hit the whitepaper.

Contain

Containing an ongoing incident is one of the most challenging tasks in incident response. You lack a complete picture of what’s going on, yet you have to take proactive action to minimize damage and potential incident growth. And you have to do it fast. Adding to the difficulty is the fact that in some cases your instinct to stop the problem may actually exacerbate the situation. This is where training, knowledge, and experience are absolutely essential. Specific plans for certain major incident categories are also important. For example:

For “standard” virus infections and attacks, your policy might be to isolate those systems on the network so the infection doesn’t spread. This might include full isolation of a laptop, or blocking any email attachments on a mail server.
For those of you dealing with a well-funded persistent attacker (yeah, APT), the last thing you want to do is start taking known infected systems offline. This usually leads the attacker to trigger a series of deeper exploits, and you might end up with 5 compromised systems for every one you clean. In this case your containment may be to stop putting new sensitive data in any location accessed by those compromised systems (this is just an example – responding to these kinds of attackers is most definitely a complex art in and of itself).
For employee data theft, you first get HR, legal, and physical security involved. They may direct you to instantly lock the employee out, or perhaps just monitor their device and/or limit access to sensitive information while they build a case.
For compromise of a financial system (like credit card processing), you may decide to suspend processing and/or migrate to an alternative platform until you can determine the cause later in your response.

These are just a few quick examples, but the goal is clear – make sure things do not get worse. But you have to temper this defensive instinct with any needs for later investigation/enforcement, the possibility that your actions might make the situation worse, and the potential business impact. And although it’s not possible to build scenarios for every possible incident, you want to map out your intended responses for the top dozen or so, to make sure that everyone knows what they should be doing to contain the damage.

Investigate

At this point you have a general idea of what’s going on and have hopefully limited the damage. Now it’s time to really dig in and figure out exactly what you are facing.
Remember – at this point you are in the middle of an active incident. Your focus is to gather just as much information as you need to mitigate the problem (stop the bad guys, since this series is security-focused) and to collect it in a way that doesn’t preclude subsequent legal (or other) action. Now isn’t the time to jump down the rabbit hole and determine every detail of what occurred, since that may draw valuable resources away from the actual mitigation of the problem.

The nature of the incident will define what tools and data you need for your investigation, and there’s no way we can cover them all in this series. But here are some of the major options, some of which we’ll discuss in more detail when we get to deeper investigation and root cause analysis later in the process:

Network security monitoring tools: This includes a range of network security tools such as network forensics, DLP, IDS/IPS, application control, and next-generation firewalls. The key is that the more useful tools not only collect a ton of information, but also include analysis and/or correlation engines that help you quickly sift through massive volumes of data.
Log Management and SIEM: These tools collect a lot of data from heterogeneous sources you can use to support investigations. Log Management and SIEM are converging, which is why we include both of them here. You can check out our report on this technology to learn more.
System Forensics: A good forensics tool is one of your best friends in an investigation. While you might not use its complete capabilities until later in the process, a forensics tool allows you to collect forensically valid images of systems to support later investigations, while providing valuable immediate information.
Endpoint OS and EPP logs: Operating systems collect a fair bit of log information that may be useful to pinpoint issues, as does your endpoint protection platform (most of the EPP data is likely synced to its server). Access logs, if available, may be particularly useful in any incident involving potential data loss.
Application and Database Logs: Including data from security tools like Database Activity Monitoring and Web Application Firewalls.
Identity, Directory and DHCP logs: To determine who


PCI 2.0: the Quicken of Security Standards

A long time ago I tried to be one of those Quicken folks who track all their income and spending. I loved all the pretty spreadsheets, but given my income at the time it was more depressing than useful. I don’t need a bar graph to tell me that I’m out of beer money. The even more depressing thing about Quicken was (and still is) the useless annual updates. I’m not sure I’ve ever seen a piece of software that offered so few changes for so much money every year. Except maybe antivirus.

Two weeks ago the PCI Security Standards Council released version 2.0 of everyone’s favorite standard to hate (and the PA-DSS, the beloved guidance for anyone making payment apps/hardware). After many months of “something’s going to change, but we won’t tell you yet” press releases and briefings, it was nice to finally see the meat. But like Quicken, PCI 2.0 is really more of a minor dot release (1.3) than a major full version release. There aren’t any major new requirements, but a ton of clarifications and tweaks. Most of these won’t have any immediate material impact on how people comply with PCI, but there are a couple early signs that some of these minor tweaks could have major impact – especially around content discovery.

There are many changes to “tighten the screws” and plug common holes many organizations were taking advantage of (deliberately or due to ignorance), which reduced their security. For example, 2.2.2 now requires you to use secure communications services (SFTP vs. FTP), test a sample of them, and document any use of insecure services – with business reason and the security controls used to make them secure. Walter Conway has a good article covering some of the larger changes at StoreFrontBackTalk.

In terms of impact, the biggest changes I see are in scope. You now have to explicitly identify every place you have and use cardholder data, and this includes any place outside your defined transaction environment it might have leaked into.
Here’s the specific wording:

The first step of a PCI DSS assessment is to accurately determine the scope of the review. At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope. To confirm the accuracy and appropriateness of PCI DSS scope, perform the following:

The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined cardholder data environment (CDE).
Once all locations of cardholder data are identified and documented, the entity uses the results to verify that PCI DSS scope is appropriate (for example, the results may be a diagram or an inventory of cardholder data locations).
The entity considers any cardholder data found to be in scope of the PCI DSS assessment and part of the CDE unless such data is deleted or migrated/consolidated into the currently defined CDE.
The entity retains documentation that shows how PCI DSS scope was confirmed and the results, for assessor review and/or for reference during the next annual PCI SCC scope confirmation activity.

Maybe I should change the title of the post, because this alone could merit a full revision designation. You now must scan your environment for cardholder data. Technically you can do it manually, and I suspect various QSAs will allow this for a while, but realistically no one except the smallest organizations can possibly meet this requirement without a content discovery tool. I guess I should have taken a job with a DLP vendor.

The virtualization scope also expanded, as covered in detail by Chris Hoff.
Keep in mind that anything related to PCI and virtualization is highly controversial, as various vendors try their darndest to water down any requirement that could force physical segregation of cardholder data in virtual environments. Make your life easier, folks – don’t allow cardholder data on a virtual server or service that also includes less-secure operations, or where you can’t control the multi-tenancy.

Of course, none of the changes addresses the fact that every card brand treats PCI differently, or the conflicts of interest in the system (the people performing your assessment can also sell you ‘security’ – decisions are made by parties with obvious conflicts of interest, which could never pass muster in a financial audit), or shopping for QSAs, or the fact that card brands don’t want to change the system, but prefer to push costs onto vendors and service providers. But I digress.

There is one last way PCI is like Quicken. It can be really beneficial if you use it properly, and really dangerous if you don’t. And most people don’t.
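To give a sense of what content discovery actually involves, here is a minimal sketch of the core of such a scan: a 13-16 digit pattern match plus a Luhn checksum to weed out random digit strings. This is illustrative only – real discovery tools also crack open file formats, handle encodings, and look for track data:

```python
# Illustrative sketch of cardholder data discovery: find 13-16 digit
# candidates (allowing spaces/dashes) and keep only Luhn-valid numbers.
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum -- rejects most random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_pans(text: str):
    """Return candidate card numbers found in text, digits only."""
    hits = []
    for m in PAN_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

Run something like this over file shares, mailboxes, and database exports and you quickly learn why “no cardholder data exists outside the CDE” is a claim almost nobody can make without tooling.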


MS Atlanta: Protection Is Not Security

Microsoft has announced the beta release of something called Microsoft Codename “Atlanta”, which is being described as a “Cloud-Based SQL Server Monitoring tool”. Atlanta is deployed as an agent that embeds into SQL Server 2008 databases and sends telemetry information back to the Microsoft ‘cloud’ on your behalf. This data is analyzed and compared against a set of configuration policies, generating alerts when Microsoft discovers database misconfiguration.

How does it do this? It looks at configuration data and some runtime system statistics. The policies seem geared toward helping DBAs with advanced SQL features such as mirroring, clustering, and virtual deployments. It’s looking at version and patch information, and it’s collecting some telemetry data to assist with root cause analysis for performance issues and failures. And finally, the service gets information into Microsoft’s hands faster, in an automated fashion, so support can respond faster to requests. The model is a little different than most cloud offerings, as it’s not the infrastructure that’s being pushed to the cloud, but rather the management features. Analysis does not appear to occur in real time, but this limitation may be lifted in the production product.

If you are like me, you might have gotten excited for a minute thinking that Microsoft had finally released a vulnerability assessment tool for SQL Server databases, but alas, “Atlanta” does not appear to be a vulnerability assessment tool at all. In fact, it does not appear to have general configuration policies for security either. Like most System Center products, “Data Protection” for SQL Server actually means integrity and reliability, not privacy and security. If you have ever read the “How to protect Microsoft SQL Server” white paper, you know exactly what I mean. So if you were thinking you could get protection and configuration management for security and compliance, you will have to look elsewhere.
The good news is I don’t see any serious downside or imminent security concern with Atlanta. The data sent to the cloud does not present a privacy or security risk, and the agent does not appear to provide any command and control interface, so it’s less likely to be exploitable. Small IT teams could benefit from automated tips on how the database should be set up, so that’s a good thing. As the feature set grows you will need to pay close attention to changes in agent functionality and what data is being transferred. If this evolves and starts pushing database contents around like the Data Protection Manager, a serious security review is warranted.


Incident Response Fundamentals: Trigger, Escalate, and Size up

Okay, your incident response process is in place, you have a team, and you are hanging out in the security operations center, watching for Bad Things to happen. Then, surprise surprise, an alert triggers: what’s next?

Trigger and Escalate

The first thing you need to do is determine the basic parameters of the incident, and assign resources (people) to investigate and manage it. This is merely a quick and dirty step to get the incident response process kicked off, and the basic information you gather will vary based on what triggered the alert.

Not all alerts require a full incident response – much of what you already deal with on a day-to-day basis is handled by your existing security processes. Incident response is for situations that cannot be adequately handled by your standard processes. Most IDS, DLP, or other alerts/help desk calls don’t warrant any special response – this series is about incidents that fall outside the range of your normal background noise. Where do you draw the line? That depends entirely on your organization. In a small business a single system infected with malware might lead to a response, while the same infection in a larger company could be handled as a matter of course. Technically these smaller issues (in smaller companies) are “incidents” and follow the full response process, but that entire process would be managed by a single individual with a few clicks. Regardless of where the line is drawn, communication is still critical. All parties must be clear on the specifics of which situations require a full incident investigation and which do not.

For any incident, you will need a few key pieces of information early to guide next steps. These include:

What triggered the alert? If someone was involved or reported it, who are they?
What is the reported nature of the incident?
What is the reported scope of the incident? This is basically the number and nature of systems/data/people involved.
Are any critical assets involved?
When did the incident occur, and is it ongoing?
Are there any known precipitating events for the incident? In other words, is there a clear cause?

All this should be collected in a matter of seconds or minutes through the alerting process, and provides your initial picture of what’s going on. When an incident does look more serious, it’s time to escalate. We suggest you have guidelines for initiating this escalation, such as:

Involvement of designated critical data or systems.
Malware infecting a certain number of systems.
Sensitive data detected leaving the organization.
Unusual traffic/behavior that could indicate an external compromise.

Once you escalate, it’s time to assign an appropriate resource, request additional resources (if needed), and begin the response. Remember that per our incident response principles, whoever first detects and evaluates the incident is in charge of it until they hand it off to someone else of equal or greater authority.

Size up

The term size up comes from the world of emergency services. It refers to the initial impressions of the responder as they roll up to the scene. They may be estimating the size of a cloud of black smoke coming out of a house, or a pile of cars in the middle of a highway. The goal here is to take the initial information provided and expand on it as quickly as possible to determine the true nature of the incident. For an IT response, this involves determining specific criteria – some of which you might already know:

Scope: Which systems/networks/data are involved? While the full scope of an IT incident may take some time to determine, right now we need to go beyond the initial information provided and learn as much about the extent of the incident as possible. This includes systems, networks, and data. Don’t worry about getting all the details of each of them yet – the goal is merely to get a handle on how big a problem you might be facing.
Nature: What kind of incident are you dealing with?
If it’s on the network, look at packets or reports from your tools. For an endpoint, start digging into the logs or whatever triggered the alert. If it involves data loss, what data might be involved? Be careful not to assume it’s only what you detected going out, or what you think was inappropriately accessed.
People: If this is some sort of an external attack, you probably aren’t going to spend much time figuring out the home address of the people involved at this stage. But for internal incidents it’s important to put names to IP addresses for both suspected perpetrators and victims. You also want to figure out what business units are involved. All of this affects investigation and response.

Yes, I could have just said, “who, what, when, where, and how”. We aren’t performing more than the most cursory examination at this point, so you’ll need to limit your analysis to basics such as security tool alerts, and system and application logs. The point here is to get a quick analysis of the situation, and that means relying on tools and data you already have.

The OODA Loop

Many incident responders are familiar with the OODA Loop originally developed by Colonel John Boyd of the US Air Force. The concept is that in an incident, or any decision-making process, we follow a recurring cycle of Observe, Orient, Decide, and Act. These cycles, some of which are nearly instantaneous and practically subconscious, describe the process of continually collecting, evaluating, and acting on information. The OODA Loop maps well to our Incident Response Fundamentals process. While we technically follow multiple OODA cycles in each phase of incident response, at a macro level trigger and escalate form a full OODA loop (gathering basic information and deciding to escalate), while size up maps to a somewhat larger loop that increases the scope of our observations, and closes with the action of requesting additional resources or moving on to the next response phase.
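The escalation guidelines above boil down to a simple triage check. A hypothetical sketch – the alert fields, asset names, and thresholds are all invented, and in practice belong in your documented response plan rather than hardcoded:

```python
# Hypothetical triage check mirroring the escalation guidelines in this post.
# All names and thresholds are illustrative, not from any real product.
CRITICAL_ASSETS = {"db-payroll", "dc-01"}  # designated critical systems
MALWARE_THRESHOLD = 5                      # infected systems before we escalate

def should_escalate(alert: dict) -> bool:
    """Return True when an alert warrants full incident response."""
    # Involvement of designated critical data or systems
    if set(alert.get("systems", [])) & CRITICAL_ASSETS:
        return True
    # Malware infecting a certain number of systems
    if alert.get("type") == "malware" and alert.get("count", 0) >= MALWARE_THRESHOLD:
        return True
    # Sensitive data detected leaving the organization
    if alert.get("type") == "data_exfiltration":
        return True
    # Unusual traffic/behavior that could indicate an external compromise
    if alert.get("type") == "anomalous_traffic" and alert.get("external", False):
        return True
    return False
```

The value of writing the rules down this explicitly – in a runbook, if not in code – is that the person watching the console at 3am doesn’t have to improvise the decision.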
Once you have collected and reviewed the basics, you should have a reasonable idea of what you’re dealing with. At


Baa Baa Blacksheep

Action and reaction. They have been the way of the world since olden times, and it looks like they will continue ad infinitum. Certainly they are the way of information security practice. We all make our living from the action/reaction cycle, so I guess I shouldn’t bitch too much. But it’s just wrong, though we seem powerless to stop it.

Two weeks ago at Toorcon, Firesheep was introduced, making crystal clear what happens to unsecured sessions to popular social networking sites such as Facebook and Twitter. We covered it a bit in last week’s Incite, and highlighted Rich’s TidBITS article and George Ou’s great analysis of which sites were exposed by Firesheep. Action.

Then today the folks at Zscaler introduced a tool called Blacksheep, a Firefox plug-in to detect Firesheep being used on a network you’ve connected to. It lets you know when someone is using Firesheep, and thus could presumably make you think twice about connecting to social networking sites from that public network, right? Reaction.

Folks, this little cycle represents everything that is wrong with information security, and many things wrong with the world in general. The clear and obvious step forward is to look at the root cause – not some ridiculous CYA response from Facebook about how one-time passwords and anti-spam are good security protections – but instead the industry spat up yet another band-aid. I’m not sure how we even walk anymore, we are so wrapped head to toe in band-aids. It’s like a bad 1930s horror film, and we all get to play the mummy.

But this is real. Very real. I don’t have an issue with Zscaler because they are just playing the game. It’s a game we created. A new attack vector (or in this case, stark realization of an attack vector that’s been around for years) triggers a bonanza of announcements, spin, and shiny objects from vendors trying to get some PR. Here’s the reality: our entire business is about chasing our own tails.
We can’t get the funding or the support to make the necessary changes. But that’s just half of the issue – a lot of our stuff is increasingly moving to services hosted outside our direct control. The folks building these services couldn’t give a rat’s ass about fixing our issues. And users continue about their ‘business’, blissfully unaware that their information is compromised again and again and again. Our banks continue to deal with 1-2% ‘shrinkage’, mostly by pushing the costs onto merchants. Wash, rinse, and repeat.

Yes, I’m a bit frustrated, which happens sometimes. The fix isn’t difficult. We’ve been talking about it for years. Key websites that access private information should fully encrypt all sessions (not just authentication). Google went all SSL for Gmail a while ago; their world didn’t end, and their profits didn’t take a hit either. Remote users should be running their traffic through a VPN. It’s not hard. Although perhaps it is, because few companies actually do it right. But again, I should stop bitching. This ongoing stupidity keeps me (and probably most of you) employed.
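For the curious, the root cause is easy to demonstrate: Firesheep works because session cookies are sent over unencrypted HTTP. A minimal sketch of auditing Set-Cookie headers for the Secure flag, using only Python’s standard library (the cookie names and values below are made-up examples):

```python
from http.cookies import SimpleCookie

def insecure_session_cookies(set_cookie_headers):
    """Return names of cookies that lack the Secure flag.

    Any such cookie is transmitted over plain HTTP, where a
    Firesheep-style sniffer on the same open network can grab
    it and hijack the session.
    """
    exposed = []
    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)
        for name, morsel in cookie.items():
            if not morsel["secure"]:
                exposed.append(name)
    return exposed

# A site that encrypts only the login page typically sets its
# session cookie without Secure -- exactly what Firesheep exploits.
headers = [
    "sessionid=abc123; Path=/; HttpOnly",
    "csrftoken=xyz; Path=/; Secure",
]
print(insecure_session_cookies(headers))  # ['sessionid']
```

Sites that encrypt every session and mark their cookies Secure give tools like Firesheep nothing to steal, which is the whole point.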


Friday Summary: November 5, 2010

November already. Time to clean up the house before seasonal guests arrive. Part of my list of tasks is throwing away magazines. Lots of magazines. For whatever perverse reason, I got free subscriptions to all sorts of security and technology magazines. CIO Insight. Baseline. CSO. Information Week. Dr. Dobbs. Computer XYZ and whatever else was available. They are sitting around unread, so it’s time to get rid of them. While I was at it I got rid of all the virtual subscriptions to electronic magazines as well. I still read Information Security Magazine, but I download that, and only because I know most of the people who write for it. For the first time since I entered the profession there will be no science, technology, or security magazines – paper or otherwise – coming to my door.

I’m sure most of you have already reached this conclusion, but the whole magazine format is obsolete for news. I kept them around just in case they covered trends I missed elsewhere. Well, that, and because they were handy bathroom reading – until the iPad. Skimming through a stack of them as I drop them into the recycling bin, I realize that fewer than one article per magazine would get my attention. When I did stop to read one, I had already read about it online at multiple sites with far better coverage. The magazine format does not work for news.

I am giving this more consideration than I normally would, because it’s been the subject of many phone calls lately. Vendors ask, “Where do people go to find out about encryption? Where do people find information on secure software development? Will the media houses help us reach our audience?” Honestly, I don’t have answers to those questions. I know where I go: my feed reader, Google, Twitter, and the people I work with. Between those four outlets I can find pretty much anything I need on security. Where other people go, I have no idea. Traditional media is dying.
Social media seems to change monthly; and the blogs, podcasts, and feeds that remain strong only do so by shaking up their presentations. Rich feels that people go to Twitter for their security information and advice. I can see that – certainly for simple questions, directions on where to look, or A/B product comparisons. And it’s the perfect medium for speed reading your way through social commentary. For more technical stuff I have my doubts. I still hear more about people learning new things from blogs, conferences, training classes, white papers and – dare I say it? – books! The depth of the content remains inversely proportional to the velocity of the medium.

Oh, and don’t forget to check out the changes to the Securosis site and RSS feeds! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post: Does Compliance Drive Patching?
  • Rich, Martin, and Zach on the Network Security Podcast, episode 219.

Favorite Securosis Posts

  • Rich: IBM Dances with Fortinet. Maybe. Mike reminds us why all the speculation about mergers and acquisitions only matters to investors, not security practitioners.
  • Mike Rothman: React Faster and Better: Response Infrastructure and Preparatory Steps. Rich nails it, describing the stuff and steps you need to be ready for incident response.
  • Adrian Lane: The Question of Agile’s Success.

Other Securosis Posts

  • Storytellers.
  • Download the Securosis 2010 Data Security Survey Report (and Raw Data!)
  • Please Read: Major Change to the Securosis Feeds.
  • React Faster and Better: Before the Attack.
  • Incite 11/3/2010: 10 Years Gone.
  • Cool Sidejacking Security Scorecard (and a MobileMe Update).
  • White Paper Release: Monitoring up the Stack.
  • SQL Azure and 3 Pieces of Flair.

Favorite Outside Posts

  • Rich: PCI vs. Cloud = Standards vs. Innovation. Hoff has a great review of the PCI implications for cloud and virtualization.
Guess what, folks – there aren’t any shortcuts, and deploying PCI compliant applications and services on your own virtualization infrastructure will be tough, never mind on public cloud.

  • Adrian Lane: HTTP cookies, or how not to design protocols. Historic perspective on cookies and associated security issues. Chris’ favorite too: an illuminating and thoroughly depressing examination of HTTP cookies, why they suck, and why they still suck.
  • Mike Rothman: Are You a Pirate? Arrington sums up the entrepreneur’s mindset crisply and cleanly. Yes, I’m a pirate!
  • Gunnar Peterson offered: How to Make an American Job Before It’s Too Late.
  • David Mortman: Biz of Software 2010, Women in Software & Frat House “Culture”.
  • James Arlen: Friend of the Show Alex Hutton contributed to the ISO 27005 <=> FAIR mapping handbook.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Ops Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • Gaping holes in mobile payments via Threatpost.
  • Microsoft warns of 0-day attacks.
  • Serious bugs in Android kernel.
  • Indiana AG sues WellPoint over data breach.
  • Windows phone kill switch.
  • CSO Online’s fairly complete List of Security Laws, Regulations and Guidelines.
  • SecTor 2010: Adventures in Vulnerability Hunting – Dave Lewis and Zach Lanier.
  • SecTor 2010: Stuxnet and SCADA systems: The wow factor – James Arlen.
  • RIAA ass-clowns at it again.
  • Facebook developers sell IDs.
  • Russian-Armenian botnet suspect raked in €100,000 a month.
  • FedRAMP Analysis. It sure looks like a desperate attempt to bypass security analysis in a headlong push for cheap cloud services.
  • Part 2 of JJ’s guide to credit card regulations.
  • Dangers of the insider threat and key management.
  • Included as humor, not news: Software security courtesy of child labor.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Andre Gironda, in response to The Question of Agile’s Success. “To


Security Metrics: Do Something

I was pleased to see the next version of the Center for Internet Security’s Consensus Security Metrics earlier this week. Even after some groundbreaking work in this area in terms of building a metrics program and visualizing the data, most practitioners still can’t answer the simple question: “How good are you at security?” Of course that is a loaded question, because ‘good’ is a relative term. The real point is to figure out some way to measure improvement, at least operationally.

Given that we Securosis folks tend to be quant-heads, and do a ton of research defining very detailed process maps and metrics for certain security operations (Patch Management Quant and Network Security Ops Quant), we get it. In fact, I’ve even documented some thoughts on how to distinguish between metrics that are relevant to senior folks and those of you who need to manage (improve) operations.

So the data is there, and I have yet to talk to a security professional who isn’t interested in building a security metrics program, so why do so few of us actually do it? It’s hard – that’s why. We also need to acknowledge that some folks don’t want to know the answer. You see, as long as security is deemed necessary (and compliance mandates pretty well guarantee that) and senior folks don’t demand quantitative accountability, most folks won’t volunteer to provide it. I know, it’s bass-ackward, but it’s true. As long as a lot of folks can skate through kind of just doing security stuff (and hoping not to get pwned too much), they will. So we have lots of work to do to make metrics easier and more useful to the practitioners out there.

From a disclosure standpoint, I was part of the original team at CIS that came up with the idea for the Consensus Metrics program and drove its initial development. Then I realized consensus metrics actually involve consensus, which is really hard for me. So I stepped back and let the folks with the patience to actually achieve consensus do their magic.
The first version of the Consensus Metrics hit about a year ago, and now they’ve updated it to version 1.1. In this version CIS added a Quick Start Guide, and it’s a big help. The full document is over 150 pages and a bit overwhelming. The Quick Start is less than 20 pages and defines the key metrics, as well as a balanced scorecard to get things going. The Balanced Scorecard involves 10 metrics, broken out across:

  • Impact: Number of Incidents, Cost of Incidents
  • Performance by Function (Outcomes): Configuration Policy Compliance, Patch Policy Compliance, Percent of Systems with No Known Severe Vulnerabilities
  • Performance by Function (Scope): Configuration Management Coverage, Patch Management Coverage, Vulnerability Scanning
  • Financial Metrics: IT Security Spending as % of IT Budget, IT Security Budget Allocation

As you can see, this roughly equates security with vulnerability scanning, configuration, and patch management. Obviously that’s a dramatic simplification, but it’s somewhat plausible for the masses. At least there isn’t a metric on AV coverage, right?

The full set of metrics adds depth in the areas of incident management, change management, and application security. Truth be told, there are literally thousands of discrete data points you can collect (and we have defined many of them via our Quant research), but that doesn’t mean you should. I believe the CIS Consensus Security Metrics represent an achievable data set to start collecting and analyzing.

One of the fundamental limitations right now is that there is no way to know how well your security program and outcomes compare against those of other organizations of similar size and industry. You may share some anecdotes with your buddies over beers, but nothing close to a quantitative benchmark with a statistically significant data set is available. And we need this. I’m not the first to call for it either, as the New School guys have been all over it for years.
But as Adam and Andrew point out, we security folks have a fundamental issue with information sharing that we’ll need to overcome to ever make progress on this front.

Sitting here focusing on what we don’t have is the wrong thing to do. We need to focus on what we do have, and that’s a decent set of metrics to start with. So download the Quick Start Guide and start collecting data. Obviously if you have some automation driving some of these processes, you can go deeper sooner – especially with vulnerability, patch, and configuration management.

The most important thing you can do is get started. I don’t much care where you start – just that you start. Don’t be scared of the data. Data will help you identify issues. It will help you pinpoint problems. And most importantly, data will help you substantiate that your efforts are having an impact. Although Col. Jessup may disagree (YouTube), I think you can handle the truth. And you’ll need to if we ever want to make this security stuff a real profession.
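The scorecard’s outcome and scope metrics are mostly simple ratios over an asset inventory, which is part of why they are an achievable starting point. A minimal sketch (the field names are illustrative assumptions, not definitions from the CIS document):

```python
# Hypothetical metric calculations over an asset inventory.
# Field names are made-up examples, not CIS-defined schema.

def patch_policy_compliance(systems):
    """Percent of patch-managed systems meeting patch policy --
    one of the scorecard's outcome metrics."""
    managed = [s for s in systems if s["patch_managed"]]
    if not managed:
        return 0.0
    compliant = sum(1 for s in managed if s["patch_compliant"])
    return 100.0 * compliant / len(managed)

def patch_management_coverage(systems):
    """Percent of all known systems under patch management --
    a scope metric. Low coverage makes the compliance number lie."""
    if not systems:
        return 0.0
    return 100.0 * sum(1 for s in systems if s["patch_managed"]) / len(systems)

inventory = [
    {"patch_managed": True,  "patch_compliant": True},
    {"patch_managed": True,  "patch_compliant": False},
    {"patch_managed": False, "patch_compliant": False},
    {"patch_managed": True,  "patch_compliant": True},
]
print(patch_policy_compliance(inventory))    # ~66.7 (2 of 3 managed systems)
print(patch_management_coverage(inventory))  # 75.0 (3 of 4 systems managed)
```

Pairing each outcome metric with its scope metric matters: 100% patch compliance across 30% of your systems is not a number to brag about.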


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.