Securosis

Research

RSA Guide 2011: Application Security

When we say application security, we generally mean web application security. We probably could have cheated and simply reposted last year’s guide to application security and still been close. Yes, application security is still a nascent market. Last year the focus was anti-exploitation to prevent code injection attacks, and the value provided by integrating assessment and web application firewall technologies. While the threats remain the same, there are some new twists which deserve attention.

What We Expect to See

Code Review Services: Strapping security onto the network layer and hoping it catches your application vulnerabilities is a band-aid at best, and companies that produce applications know this. With HP’s acquisition of Fortify a few months ago, Microsoft’s announcement of Attack Surface Analyzer, and IBM’s acquisition of Ounce Labs in 2009, it’s clear that the world’s major software providers know this as well. And they are looking to capitalize on the movement. Third-party source code review services are on the rise, and most web development teams now use either white-box or black-box testing in their certification processes. “Building security in” is an increasingly common mantra for development teams, and there is tremendous opportunity to sell security products and services into this nascent market. Most development teams are just now learning about secure coding techniques, threat modeling, and how to build unit-based security tests to run alongside their functional tests (see the short sketch at the end of this post). We expect to see many vendors offering tools, education, and services that foster secure code everywhere from design to post-deployment. Not just pre- and post-deployment checkers and firewalls, but security offerings for every single step in the development lifecycle.

Buyer Shift: “What?” you say. I am not selling to the IT manager? Not here you are not. IT plays a part, but the buying center is shifting to the development team for web application security technologies. And that’s a very different conversation, with a much different set of requirements and use cases the vendor needs to address.

OWASP as the Guiding Light: Publicity concerning application security issues is growing. OWASP — the Open Web Application Security Project — provides a Top 10 list of the most common threats to applications. And it’s a good rundown of sneaky, underhanded tricks attackers use to compromise web applications for fun and profit. Even better, it’s backed by measurable statistics, so it’s not all conjecture and innuendo. This list is driving many companies’ marketing campaigns, and the alignment of their service offerings as well. How well any given vendor protects applications from these threats is open for debate, but the fact that they are responding to the most common threat vectors we see today is very good news.

Web application vulnerabilities represent a significant threat to organizations, as web services are an integral part of business operations, and the push for more SaaS and cloud based services means attackers have an increasing number of potential targets. As if you haven’t had enough cloud on a stick, up next are our thoughts on endpoint security, and then virtualization and cloud security in the RSA Guide. I know, you can’t wait.
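To make the “unit-based security tests” idea concrete, here is a minimal sketch of what one might look like, using Python’s built-in unittest. The escape_comment function is a stand-in for whatever output-encoding helper your application actually uses; the point is simply that the security expectation (script injection gets neutralized) runs in the same suite as your functional checks on every build.

    import html
    import unittest

    def escape_comment(user_input):
        # Stand-in for the app's output-encoding helper: neutralize HTML metacharacters
        return html.escape(user_input, quote=True)

    class CommentSecurityTests(unittest.TestCase):
        # Security expectations live next to functional tests and run on every build

        def test_functional_round_trip(self):
            self.assertEqual(escape_comment("nice post"), "nice post")

        def test_script_injection_is_neutralized(self):
            payload = '<script>alert("xss")</script>'
            self.assertNotIn("<script>", escape_comment(payload).lower())

        def test_attribute_breakout_is_neutralized(self):
            payload = '" onmouseover="stealCookies()'
            self.assertNotIn('"', escape_comment(payload))

    if __name__ == "__main__":
        unittest.main()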


RSA Guide 2011: Virtualization and Cloud

2010 was a fascinating year for cloud computing and virtualization. VMware locked down the VMsafe program, spurring acquisition of smaller vendors in the program with access to the special APIs. Cloud computing security moved from hype to hyper-hype at the same time some seriously interesting security tools hit the market. Despite all the confusion, there was a heck of a lot of progress and growing clarity. And not all of it was from the keyboard of Chris Hoff.

What We Expect to See

For virtualization and cloud security, there are four areas to focus on:

Innovation cloudination: For the second time in this guide I find myself actually excited by new security tech (don’t tell my mom). While you’ll see a ton of garbage on the show floor, there are a few companies (big and small) with some innovative products designed to help secure cloud computing. Everything from managing your machine keys to encrypting IaaS or SaaS data. These aren’t merely virtual appliance versions of existing hardware/software, but ground-up, cloud-specific security tools. The ones I’m most interested in are around data security, auditing, and identity management.

Looking SaaSy: Technically speaking, not all Software as a Service counts as cloud computing, but don’t tell the marketing departments. But this is another area that’s more than mere hype: nearly every vendor I’ve talked with (and worked with) is looking at leveraging cloud computing in some way. Not merely because it’s sexy, but because SaaS can help reduce management overhead for security in a bunch of ways. And since all of you already pay subscription and maintenance licenses anyway, pure greed isn’t the motivator. These offerings work best for small and medium businesses, and reduce the amount of equipment you need to maintain on site. They also may help with distributed organizations. SaaS isn’t always the answer, and you really need to dig into the architecture, but I’ve been pleasantly surprised at how well some of these services can work.

VMsafe cracking: VMware locked down its VMsafe program, which allowed security vendors direct access to certain hypervisor functions via API. The program is dead, although the APIs are maintained for existing members of the program. This was probably driven by VMware wanting to control most of the security action, and they forced everyone to move to the less effective vShield Zones system. What does this mean? Anyone with VMsafe access has a leg up on the competition, which spurred some acquisitions. Everyone else is a bit handcuffed in comparison, so when looking at your private cloud security (on VMware) focus on the fundamental architecture (especially around networking).

Virtual appliances everywhere: You know all those security vendors that promoted their amazing performance due to purpose-built hardware? Yeah, now they all offer the same performance in virtual (software) appliances. Don’t ask the booth reps too much about that though, or they might pull a Russell Crowe on you. On the upside, many security tools do make sense as virtual appliances. Especially the ones with lower performance requirements (like management servers) or for the mid-market.

We guarantee your data center, application, and storage teams are looking hard at, or are already using, cloud and virtualization, so this is one area you’ll want to pay attention to despite the hype. And that’s it for today. Tomorrow will wrap up with Security Management and Compliance, as well as a list of all the places you can come heckle me and the rest of the Securosis team. And yes, Mike will be up all night assembling this drivel into a single document to be posted on Friday. Later…


Incite 2/9/2011: Loose Lips Sink Ships

I think we’ve taken this instant gratification thing a bit too far. Do you remember in the olden days, when you didn’t know what you were getting for your birthday? Now we get no surprises, pretty much as a society. The combination of a 24-hour media cycle, increasingly outsourced manufacturing, and loose lips ensures that nothing remains a secret for long. I remember the day IBM announced the hostile acquisition of Lotus back in 1995. I was at META at the time, and we were hosting a big conference for our clients. No one knew the deal was coming down and there was genuine surprise. We had a lot to talk about at that conference. Nowadays we hear about every big deal weeks before it hits. Every layoff. Every divestiture. It’s like these companies have their boardrooms bugged. Or some folks in these shops have loose lips.

And what about our favorite consumer gadgets? We already know the iPad 2 isn’t going to be much of an evolution. It’ll have a camera. And maybe a faster processor and more memory. How do we know? Because Apple has to make millions of these things in China ahead of the launch. Of the 200,000 people who work in that factory, someone is going to talk. And they do. Probably for $20. Not to mention all the companies showing off cases they needed a head-start on. So there is no surprise about anything in consumer electronics anymore.

But this weekend I hit my limit. You see, I love the Super Bowl. It’s my favorite day of the year. I host a huge party for my friends and I like the commercials. You always get a chuckle when you see a great commercial. It’s a surprise. Remember the Bud Bowl? Or Jordan and Bird’s shooting contest? Awesome. But no more surprises. I saw a bunch of the commercials on YouTube last week. You have to love VW’s Darth Vader commercial, but the novelty had worn off by the time the game started. I know you try to create buzz by moving up your big reveal (it’s been happening at the RSA Conference for years), but enough is enough.

We try to teach the kids the importance of keeping secrets. We talk freely in our house (probably a bit too freely) and we’ve gotten bitten a few times when one of the kids spills the beans. But they are kids, and we used those experiences to reinforce the need to keep what someone tells you in confidence. But they are in the middle of a world where no one can keep a secret. Which once again forces us to hammer home the age-old refrain: “Do as we say, not as they do…” And no, I’m not telling you about our super sekret project. Unless you are from the WSJ, that is.

-Mike

Photo credits: “Loose Lips” originally uploaded by fixedgear

Big Head Alert

Well, it wasn’t enough for me to offer up free refreshments to those meeting up at the Security Blogger’s Party at RSA, in exchange for a vote for most entertaining blog. But the accolades keep rolling in. Yours truly has been nominated for the Best Security Blogger award by the fine folks at SC Magazine. I’m listed with folks like Hoff (does he even blog anymore?) and Bruce Schneier, so I can’t complain. Although the Boss did call the handyman this morning – it seems we need a few doors expanded in the house for my expanding head. Yes, I’m kidding. I’m fortunate to surround myself with people who remind me of my place on the totem pole every day. Yeah, the bottom. I’ll be the last guy to say I’m the best at anything, but I certainly do appreciate being noticed for doing what I love. You can vote. And no, I haven’t contracted with RSnake to game the vote. Not yet, anyway.
Incite 4 U

PR writing a check your defenses can’t cash: That title came from a Twitter exchange I had earlier this week about the HBGary Federal hack. Basically the CEO of this company talked smack about penetrating and exposing a hacker group and… wait for it… lo and behold they eviscerated him. As Krebs describes, it was a good hack. These Anonymous guys don’t screw around. And that’s the point. Just like our friend the World’s #1 Hacker, if you talk smack you will get hurt. The folks from HBGary are very smart. And even if they could detonate malware (using their own damn device), a determined attacker will find your weak spot. And more often than not it’s the human capital who drinks your coffee, uses your toilet paper, and maybe even gets something done, sometimes. So basically here is a message to everyone out there: STFU. These stupid PR games and testosterone-laden boasts of hacking this or hacking that show you as nothing more than a “big hat, no cattle” hacker. The folks who really can don’t have to talk about it. And odds are they’ll stay anonymous. – MR

The Endpoint Is the Network: One of the wacky things about cloud computing is that it royally screws up so many of the existing security controls. Network monitors, firewalls, vulnerability assessment, and even endpoint agent management all sort of go nuts when you start moving machines around randomly in the fluff of the cloud. To work consistently your security controls need to track the virtual machines, no matter where they pop up. I’m just getting caught up, but CloudPassage looks interesting. It uses an agent and security management plane to consistently apply controls as machine instances move around, even in hybrid models. Yes, we now have to dump everything back into the endpoint we built all that ASIC-based hardware for. Sorry. – RM

Looking in the Mirror: Rocky DeStefano posted a nice table of common SIEM evaluation criteria on the visiblerisk blog. This is a handy set of RFI questions that companies looking to


RSA Guide 2011: Endpoint Security

In 2010, there was broad acknowledgement that most of the endpoint protection deployed was more about passing PCI (yes, it’s still a requirement) than actually stopping attacks. Unfortunately, at the show we’ll continue to hear about all the advances happening in malware detection, and we’ll laugh again. The traditional signature-based model is broken, no matter how many clouds we see inserted into the mix. But with the AV cash cow continuing to moo uncontrollably, the industry will continue trying to convince customers to maintain their investments. So the real question is: who will show some type of innovation in endpoint malware detection? Anyone? Anyone? Bueller? Bueller?

What We Expect to See

There are some areas of interest at the show for endpoint security:

You get what you pay for (or do you?): Given the clear issues around endpoint malware detection, we’ll be hearing a lot from the Free AV crowd. They’ll be talking about the hundreds of millions of folks who use the free engines, just before they try to upsell you to their paid offerings. The reality is that you need management, because these tools involve deploying software agents to many endpoints. But you should pay the least amount possible. So see who seems the hungriest on the show floor. If they aren’t foaming at the mouth, they likely aren’t hungry enough to win your business.

Cloudy with a chance of hyperbole: You will also hear a lot about cloud signatures and crowdsourcing to address the limitations of the traditional AV signature model. To be clear, moving a lot of signatures to the cloud is a good thing. But it’s not an answer. The model of matching bad stuff is still broken, and no amount of cloudy stuff will change that. The idea of crowdsourcing is interesting, so check out the folks, like Sourcefire/Immunet and Webroot/PrevX, who are doing this in practice. Ask them how they shorten the window from the time an issue is discovered to distributing an update to the rest of the network. This is yet another option to keep the broken AV model running a bit longer.

AWL MIA: What you probably won’t see a lot of is application white listing (AWL). Why? Because the technology remains a niche. It is a core aspect of our Positivity security model, but both perception and reality are still slowing deployment of AWL. Not that the handful of vendors offering these solutions won’t be trying to make some noise. But they have no chance to stand out against the status quo, which represents billions in revenue and spends like drunken sailors at RSA. But this remains an important technology, so you should search out the vendors who offer it and learn how they are working to address the deployment and scaling issues (a bare-bones illustration of the core idea appears at the end of this post).

Signs of the iPocalypse: You will see a lot of vendors giving away iPads and iPhones. Why not? If you don’t have one, you want one. If you already have one, you want another one. Or ten. But the reality is these devices are big, and consumerization is taking root. That means you need to figure out how to control them. OK, maybe not control, but at least manage. So check out the configuration management folks and those with specific mobile technologies to rein in the chaos. OK, maybe not rein in, but at least ensure that when they get lost (and they will), you won’t be in career jeopardy.

Man(ning) up: One of the other major stories in 2010 was WikiLeaks, spearheaded by Bradley Manning, your friendly neighborhood data leaker. So you’ll hear a lot of vendors talking about the importance of controlling USB ports and doing content control/analysis on the endpoint. Try to figure out how they scale. Try to understand how they classify sensitive data and actually do anything without killing the performance of the endpoint. Yeah, it would be good to figure out whether and how they can play nice with any DLP/device control technologies you already have implemented.

We’ve hit the halfway point in our RSA Guide posts. I know you are waiting with bated breath for the Virtualization and Cloud section, but patience is a virtue. That post will be up later today.
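Since application whitelisting gets far less floor time than it deserves, here is a bare-bones illustration of the core idea. The hash value and path are placeholders, and real AWL products hook process execution in the kernel and handle updates, trust rules, and exceptions; this only shows the default-deny logic.

    import hashlib
    import sys

    # Hypothetical allowlist: SHA-256 hashes of binaries approved to run
    APPROVED_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example value
    }

    def fingerprint(path):
        # Hash the binary on disk; real products cache this and watch for changes
        sha256 = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                sha256.update(chunk)
        return sha256.hexdigest()

    def allow_execution(path):
        # Default deny: only binaries whose hash is on the allowlist may run
        return fingerprint(path) in APPROVED_HASHES

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else sys.argv[0]
        print("ALLOW" if allow_execution(target) else "DENY", target)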


React Faster and Better: Contain and Respond

In our last post, we covered the first level of incident response: validating and filtering the initial alert. When that alert triggers and your frontline personnel analyze the incident, they’ll either handle it on the spot or gather essential data and send it up the chain. These roles and responsibilities represent a generalization of best practices we have seen across various organizations, and your process and activities may vary. But probably not too much.

Tier 2: Respond and contain

The bulk of your incident response will happen within this second tier. While Tier 1 deals with a higher number of alerts (because they see everything), anything that requires any significant response moves quickly to Tier 2, where an incident manager/commander is assigned and the hard work begins. In terms of process, Tier 2 focuses on the short-term, immediate response steps:

Size-up: Rapidly scope the incident to determine the appropriate response. If the incident might result in material losses (something execs need to know about), require law enforcement and/or external help, or require specialized resources such as malware analysis, it will be escalated to Tier 3. The goal here is to characterize the incident and gather the information to support containment.

Contain: Based on your size-up, try to prevent the situation from getting worse. In some cases this might mean not containing everything, so you can continue to observe the bad guys until you know exactly what’s happening and who is doing it, but you’ll still do your best to minimize further damage.

Investigate: After you set the initial incident perimeter, dig in to the next level of information to better understand the full scope and nature of the incident and set up your remediation plan.

Remediate: Finish closing the holes and start the recovery process. The goal at this level is to get operations back up and running (and/or stop the attack), which may involve workarounds or temporary measures. This is different than a full recovery.

If an incident doesn’t need to escalate any higher, at this level you’ll generally also handle the root cause analysis/investigation and manage the full recovery. This depends on resources, team structure, and expertise.

The Team

If Tier 1 represents your dispatchers, Tier 2 are the firefighters who lead the investigation. They are responsible for more complex incidents that involve unusual activity beyond simple signatures, multi-system/network issues, and issues with personnel that might result in HR/legal action. Basically, any kind of non-trivial incident ends up in the lap of Tier 2. While these team members may still specialize to some degree, it’s important for them to keep a broad perspective, because any incident that reaches this level involves the complexity of multiple systems and factors. They focus more on incident handling and less on longer, deeper investigations.

Primary responsibilities: Primary incident handling, plus more advanced investigations that may involve multiple factors. For example, a Tier 1 analyst notes egress activity; the Tier 2 analyst then takes over, coordinates a more complete network analysis, and checks endpoint data where the egress originated to identify, characterize, and prioritize any exfiltration. This person has overall responsibility for managing the incident and pulling in specialist resources as needed. They are completely dedicated to incident response. As the primary incident handlers, they are responsible for quickly characterizing and scoping the incident (beyond what they got from Tier 1), managing containment, and escalating when required. They are the ones who play the biggest role in closing the attacker’s window of malicious opportunity.

Incidents they manage: Multi-system/factor incidents and investigations of personnel. Incidents are more complex and involve more coordination, but don’t require direct executive team involvement.

When they escalate: Any activities involving material losses, potential law enforcement involvement, or specialized resources; and those requiring an all-hands response. They may even still play the principal management and coordination role for these incidents, but at that point senior management and specialized expertise need to be in the loop and potentially involved.

The Tools

These responders have a broader skill set, but generally rely on a variety of monitoring tools to classify and investigate incidents as quickly as possible. Most people we talk with focus more on network analysis at this level because it provides the broadest scope to identify the breadth of the incident via “touch points” (devices involved in the incident). They may then delve into log analysis for deeper insight into events involving endpoints, applications, and servers; although they often work with a platform specialist – who may not be formally part of the incident response team – when they need deeper non-security expertise.

Full packet capture (forensics): As in a Tier 1 response, the network is the first place to look to scope intrusions. The key difference is that in Tier 2 the responder digs deeper, and may use more specialized tools and scripts. Rather than looking at IDS for alerts, they mine it for indications of a broader attack. They are more likely to dig into network forensics tools to map out the intrusion/incident, as that provides the most data – especially if it includes effective analysis and visualization (crawling through packets by hand is a much slower process, and something to avoid at this level if possible). As discussed in our last post, simple network monitoring tools are helpful, but not sufficient to do real analysis of incident data. So full packet capture is one of the critical pieces in the response toolkit.

Location-specific log management: We’re using this as a catch-all for digging into logs, although it may not necessarily involve a centralized log management tool. For application attacks, it means looking at the app logs. For system-level attacks, it means looking at the system logs. This also likely involves cross-referencing with authentication history, or anything else that helps characterize the attack and provide clues as to what is happening. In the size-up, the focus is on finding major indicators rather than digging out every bit of data (the small sketch at the end of this post gives a flavor of that kind of quick cross-referencing).

Specialized tools: DLP, WAF, DAM, email/web security gateways, endpoint
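To give a flavor of the quick log cross-referencing described above, here is a minimal sketch. Everything in it is a placeholder: the log path, the format, and the suspect IP would come from your own environment and the Tier 1 hand-off. The point is the “find the major indicators fast” mindset, not a full forensic dig.

    import re
    from collections import Counter

    SUSPECT_IP = "203.0.113.45"       # placeholder: egress destination flagged by network monitoring
    AUTH_LOG = "/var/log/auth.log"     # placeholder: substitute your platform's auth history

    def quick_auth_crossref(log_path, suspect_ip):
        # Pull (user, source IP) pairs from successful SSH-style logins and count activity
        pattern = re.compile(r"Accepted \S+ for (\S+) from (\S+)")
        logins = Counter()
        try:
            with open(log_path, errors="ignore") as f:
                for line in f:
                    match = pattern.search(line)
                    if match:
                        logins[match.groups()] += 1
        except OSError:
            print(f"could not read {log_path}; point this at your real auth history")
        # Major indicators first: anything tied to the suspect address, then the noisiest pairs
        flagged = [(pair, count) for pair, count in logins.items() if suspect_ip in pair]
        return flagged, logins.most_common(5)

    if __name__ == "__main__":
        flagged, top = quick_auth_crossref(AUTH_LOG, SUSPECT_IP)
        print("Logins involving suspect IP:", flagged)
        print("Most active (user, source) pairs:", top)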


RSA Guide 2011: Email/Web (Content) Security

Global Threats. APT. Botnets. Infected Web Pages. Grannies with shotguns. We expect to see anything and everything it takes for vendors to get your attention, including never-before-seen awards and security metrics. Some ask “Why the hype?” The value of content security — both inbound filtering to prevent unwanted garbage from coming into the network, and detection of unwanted activity like surfing for porn or sending company secrets to your cousin as investment advice — is proven. All the major players and most mid-tier providers have closed the major holes in their products, provide unified management for all functions, and offer some type of SaaS service. The technology works. The problem is that the segment is both mature and saturated. To earn a new customer, a vendor must steal one from a competitor. Growing revenue means convincing customers they need a new service. It is increasingly difficult to differentiate the top tier from the mid-tier players, so that noise you hear is vendors trying to find an edge. For the most part, the vendors offer quality services at a price point that continues to drop with reduced-cost cloud and SaaS based offerings. But you can’t blame the vendors for trying to “one up” the competition in a crowded market.

What We Expect to See

There are three areas of interest at the show for content security:

It’s Raining Devices: One thing you are going to learn wandering around Moscone is how the cloud protects those endpoint devices. Yep! The Content Security Cloud protects the endpoint. Isn’t that what cloud security is all about? Well, no, actually, but you will hear about it. Those services that run on your iPhone/Droid/Blackberry are theoretically just as susceptible to attack as what’s on your desktop or laptop. Supposedly. That’s the vendor argument, but attacks against mobile devices are more likely to target lower layers of the infrastructure — but don’t worry, vendors won’t let facts ruin a good story. In most cases the vendor is offering exactly the same services they already provide for your laptop/workstation to protect from the same threats on new devices. But hey, it’s ‘the cloud’, so it must be good!

More DLP: Yes, content security providers offer Data Loss Prevention. In most cases, it’s just the subset of DLP needed to detect data exfiltration. And regular expression checking for outbound documents and web requests is good enough to address the majority of content leakage problems, so this is a good addition for most customers (for a sense of what that kind of pattern matching looks like, see the sketch at the end of this post). By and large we hear from satisfied customers who implement a dozen or so content policies for specific violations they are interested in detecting, and find the analysis techniques sufficient. Deployments of this type are far less daunting than a full featured soup-to-nuts DLP platform, so we hear far more success stories and less about shelfware.

Users Are Employees Too: Scams, fraud, and phishing attacks continue to hammer those uninterested in security, and the IT managers who support them. The content security vendors know that nothing else matters to some users besides getting to their Facebook pages on their lunch hour. It also means these users are unusually susceptible to phishing attacks, drive-by malware, and account compromises. In and of themselves these attacks are fairly low-yield and low-damage. But a compromised computer on a corporate network acts as a launching pad for all sorts of network mayhem. Content security providers can no longer claim the “Insider Threat” is your biggest security concern, but they will let IT managers know they help mitigate damages from stupid human tricks.

Next up in the hit parade is Data Security. OK, repeat after me: WikiLeaks, WikiLeaks, WikiLeaks – and you’ll start to get a feel for this year’s RSA Conference rallying cry.
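As an aside, here is a minimal sketch of the regex-plus-validation approach behind most “DLP Light” content policies. It covers only the credit card case, and a real engine adds proximity rules, file cracking, and workflow; the Luhn check is what keeps a simple pattern from flagging every random 16-digit number.

    import re

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(digits):
        # Standard Luhn checksum: doubles every second digit counting from the right
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_card_numbers(text):
        # Flag candidate runs of 13-16 digits that also pass the Luhn check
        hits = []
        for match in CARD_PATTERN.finditer(text):
            digits = re.sub(r"\D", "", match.group())
            if 13 <= len(digits) <= 16 and luhn_valid(digits):
                hits.append(match.group())
        return hits

    if __name__ == "__main__":
        sample = "Order notes: card 4111 1111 1111 1111, ref 1234567890123456"
        print(find_card_numbers(sample))   # only the Luhn-valid number should appear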


RSA Guide 2011: Data Security

As someone who has covered data security for nearly a decade, some days I wonder if I should send Bradley Manning, Julian Assange, whoever wrote the HITECH Act, and the Chinese hacker community a personal note of gratitude. If the first wave of data security was driven by breach disclosure laws and a mixture of lost laptops and criminal exploits, this second wave is all about stopping leaks and keeping your pants on in public. This year I’ve seen more serious interest from large enterprises in protecting more than merely credit card numbers than ever before. We also see PCI and the HITECH Act (in healthcare) pushing greater investment in data security down to the mid-market. And while the technology is still far from perfect, it’s definitely maturing nicely.

What We Expect to See

There are five areas of interest at the show for data security:

DLP – Great taste, less filling: There are two major trends in the Data Loss Prevention market: DLP Light coming of age, and full-suite DLP integration into major platforms. A large percentage of endpoint and network tools now offer basic DLP features. This is usually a regular expression engine or some other technique tuned to protect credit card numbers, and maybe a little personally identifiable information or healthcare data. Often this is included for free, or at least darn cheap. While DLP Light (as we call this) lacks mature workflow, content analysis capabilities, and so on, not every organization is ready for, or needs, a full DLP solution. If you just want to add some basic credit card protection, this is a good option. It’s also a great way to figure out if you need a dedicated DLP tool without spending too much up front. As for full-suite DLP solutions, most of them are now available from big vendors. Although the “full” DLP is usually a separate product, there’s a lot of integration at various points of overlap like email security or web gateways. There’s also a lot of feature parity between the vendors: unless you have some kind of particular need that only one fulfills, if you stick with the main ones you can probably flip a coin to choose. The key things to ask when looking at DLP Light are what the content analysis engine is, and how incidents are managed. Make sure the content analysis technique will work for what you want to protect, and that the workflow fits how you want to manage incidents. You might not want your AV guy finding out the CFO is emailing customer data to a competitor. Also make sure you get to test it before paying for it. As for full-suite DLP, focus on how well it can integrate with your existing infrastructure (especially network gateways, directories, and endpoints). I also suggest playing with the UI, since that’s often a major deciding factor due to how much time security and non-security risk folks spend in it. Last of all, we’re starting to see more DLP vendors focus on the mid-market and easing deployment complexity.

Datum in a haystack: Thanks to PCI 2.0 we can expect to see a heck of a lot of discussion around “content discovery”. While I think we all know it’s a good idea to figure out where all our sekret stuff is in order to protect it, in practice this is a serious pain in the rear. We’ve all screamed in frustration when we find that Access database or spreadsheet on some marketing server all chock full of Social Security numbers. PCI 2.0 now requires you to demonstrate how you scoped your assessment, and how you keep that scope accurate. That means having some sort of tool or manual process to discover where all this stuff sits in storage (a toy example of the idea is sketched at the end of this post). Trust me, no marketing professional will possibly let this one pass. Especially since they’ve been trying to convince you it was required for the past 5 years. All full-suite DLP tools include content discovery to find this data, as do some DLP Light options. Focus on checking out the management side, since odds are there will be a heck of a lot of storage to scan, and results to filter through.

There’s a new FAM in town: I hate to admit this, but there’s a new category of security tool popping up this year that I actually like. File Activity Monitoring watches all file access on protected systems and generates alerts on policy violations and unusual activity. In other words, you can build policies that alert you when a sales guy about to depart is downloading all the customer files, without blocking access to them. Or when a random system account starts downloading engineering plans to that new stealth fighter. I like the idea of being able to track what files users access and generate real-time alerts. I started talking about this years ago, but there weren’t any products on the market. Now I know of 3, and I suspect more are coming down the pipe.

Battle of the tokens: Last year we predicted a lot of interest and push in encryption and tokenization, and for once we got it right. One thing we didn’t expect was the huge battle that erupted over ownership of the term. Encryption vendors started pushing encrypted data as tokens (which I find hard to call a token), while tokenization advocates try to convince you encryption is no more secure than guarding Hades with a chihuahua. The amusing part is all these guys offer both options in their products.

Play the WIKILEAKS! WIKILEAKS! APT! WIKILEAKS! PCI! HITECH! WIKILEAKS!!! drinking game: Since not enough of you are buying data security tools, the vendors will still do their best to scare your pants off and claim they can prevent the unpreventable. Amuse yourself by cruising the show floor with beer in hand and drinking anytime you see those words on marketing materials. It’s one drink per mention in a brochure, 2 drinks for a postcard handout, and 3
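To show what “content discovery” boils down to at its simplest, here is a toy sketch that walks a directory tree and flags files containing SSN-looking patterns. Real discovery tools crack open Office and PDF formats, scale across file shares and databases, and manage results; the path, pattern, and size cap here are all placeholder assumptions.

    import os
    import re

    SCAN_ROOT = "/srv/fileshare"                    # placeholder: the share you need to scope for PCI
    SSN_PATTERN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")
    MAX_BYTES = 5 * 1024 * 1024                     # skip huge files in this toy version

    def discover(root):
        # Walk the tree and report files that appear to contain SSNs
        findings = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.path.getsize(path) > MAX_BYTES:
                        continue
                    with open(path, "rb") as f:
                        data = f.read()
                except OSError:
                    continue
                matches = SSN_PATTERN.findall(data)
                if matches:
                    findings.append((path, len(matches)))
        return findings

    if __name__ == "__main__":
        for path, count in discover(SCAN_ROOT):
            print(f"{count:5d} possible SSNs in {path}")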


RSA Guide 2011: Key Themes

OMG, it’s 6 days and counting to the 2011 RSA Conference. Yes, they moved the schedule up a few months, so you can now look forward to spending Valentine’s Day with cretins like us, as opposed to your loved ones. Send thank-you notes to… But on to more serious business. Last year we produced a pretty detailed Guide to the Conference and it was well received, so – gluttons for punishment that we are – we’re doing it again. This week we’ll be posting the Guide in pieces, and we will distribute the final assembled version on Friday so you can download it and get ready for the show. Without further ado, here is the key themes part of our Guide to RSA Conference 2011.

RSA Conference 2011: Key Themes

How many times have you shown up at the RSA Conference to see the hype machine fully engaged on a topic or two? Remember how 1999 was going to be the Year of PKI? And 2000. And 2001. And 2002. So what’s going to be the news of the show this year? Here is a quick list of some key topics that will likely be top of mind at RSA, and why you should care.

Cloud Security – From Pre-K to Kindergarten

Last year you could count real cloud security experts on one hand… with a few fingers left over. This year you’ll see some real, practical solutions, but even more marketing abuse than last year. Cloud computing is clearly one of the major trends in enterprise technology, and woe unto the vendor that misses that boat. But we are only on the earliest edge of a change that will reshape our data centers, operations, and application design over the next 10 years. The number of people who truly understand cloud computing is small. And folks who really understand cloud computing security are almost as common as unicorns. Even fewer of them have actually implemented anything in production environments (something only one of our Securosis Contributors has done). The big focus in cloud security these days is public Infrastructure as a Service offerings such as Amazon EC2 and Rackspace, due to increasing enterprise interest and the complexity of the models. But don’t think everyone is deploying all their sensitive applications in the cloud. Most of the bigger enterprises we talk with are only at the earliest stages of public Infrastructure as a Service (IaaS) projects, while making much more use of “private clouds”. Medium-size and small organizations are actually more likely to jump into public cloud because they have less legacy infrastructure and complexity to deal with, and can realize the benefits more immediately (we’re sure glad we don’t need our own data center). It’s important to separate a trend from its current position on the maturity curve – cloud computing is far from being all hype, but we’re still early in the process. Before hitting the show, we suggest you get a sense of what cloud projects your organization is looking at. We also recommend taking a look at the architectural section of the Cloud Security Alliance Security Guidance for Critical Areas of Focus in Cloud Computing and the Editorial Note on Risk on pages 9-11 (yes, Rich wrote this, and we still recommend you read it). On the security front, remember that design and architecture are your friends, and no tool can simply “make you secure” in the cloud, no matter what anyone claims. For picking cloud sessions, we suggest you filter out the FUD from the meat. Skip over session descriptions that say things like “will identify the risks of cloud computing” and look for those advertising reference architectures, case studies, and practical techniques (don’t worry, despite the weird titles, Rich includes those in his cloud presentation with Chris Hoff). With the lack of standardization among cloud providers, and even conflicting definitions among organizations as to what constitutes “the cloud”, it’s all too easy to avoid specifics and stick to generalities on stage and in marketing materials. Cloud security is one of our technology areas, so we’ll cover specific things we think you’ll see later in this guide. We are also running the (sold-out) inaugural Cloud Security Alliance training class the Sunday before RSA, and Rich is moderating a panel on government cloud and speaking with the always-entertaining Chris Hoff on cloud security Friday.

The Economy Sucks Less – What now?

The last few years have been challenging. For one, success has involved keeping yourself and your team employed. It’s not like you had a lot of extra funds lying around, so many projects kept falling off the list. So you tried your best to do the minimum and sometimes didn’t even reach that low bar. Nice-to-have became not-gonna-happen. But now it looks like things are starting to recover a bit. Global stock markets, which tend to look 6 months ahead, are expecting strong growth, and many of our conversations with end users (both large and small) tend to indicate a general optimism we haven’t seen in quite a while. To be clear, no one (certainly not us) expects the go-go days of the Internet bubble to return any time soon – unless you run a mobile games company. But we do think the economy will suck less in 2011, and that means you’ll need to start thinking about projects that have fallen off the plate. Such as:

Perimeter renewal: Many organizations let the perimeter go a bit. So it’s overgrown, suboptimal, and not well positioned to do much against the next wave of application and targeted attacks. One project to consider might be an overhaul of your perimeter. Or at minimum, start moving to a different, more application-aware architecture to more effectively defend your networks. At RSAC, you’ll hear a lot about next generation firewalls, which really involve building rules based on application behavior rather than just ports and protocols. At the show, your job will be to determine what is real and what is marketing hyperbole.


RSA Guide 2011: Network Security

2010 was an interesting year for the network security space. There has been a resurgence in interest and budget projections for spending, largely for perimeter security. Part of this is a loosening of the budget purse strings, which is allowing frustrated network security folks to actually start dreaming about upgrading their perimeters. So there will be plenty of vendors positioning to benefit from the wave of 2011 spending.

What We Expect to See

There are four areas of interest at the show for network security:

Next Generation Firewall: Last year we talked about application awareness as absolutely critical to the next wave of network security devices. That capability — to build policies based on applications and users, rather than just ports and protocols — has taken the name next generation firewall (the short sketch at the end of this post shows the difference in policy terms). Unless a vendor has no interest in the enterprise market, they will be touting their next generation wares. Some of these will be available exclusively on slide decks in the booth, while other vendors will be able to show varying levels of implementation. While you’ve got an SE at your disposal at the show, ask them some pointed questions about how their application categorization happens and what the effective throughput is for their content-oriented functions. It should be pretty clear to what degree their gear is next-generation, or whether it’s really just an IPS bolt-on.

More marketecture: As these new generation capabilities start to hit, they present the opportunity for a fairly severe disruption in the status quo of vendor leadership. So what do the incumbents do when under attack, without a technical response? Right, they try to freeze the market with some broad statement of direction that is light on detail and heavy on hyperbole. It wouldn’t surprise us to see at least one of the RSA keynoters (yeah, those who pay EMC $250K for the right to pontificate for an hour) talk about a new initiative to address all ills of everything.

Virt suck: The good news is that a bunch of the start-ups talking about virtualization security hit the wall and got acquired by big network security. So you probably won’t see many folks talking about their new widget to protect inter-VM network traffic. What you will hear is every vendor on the floor playing up the advantages of their shiny new virtual appliances. It’s just like the box you pay $50K for, but you get to use your own computing power in a horribly wasteful fashion. You know how attractive it is to slice out a chunk of your computers to run IPS signatures. It’s like these folks want to bring us back to 1995, and because it runs on ESX, it’s all good. Not so much.

Full packet capture maturing: Yes, this is a carry-over from last year. The fact remains that we still have a lot of work to do in order to streamline our incident response processes and make them useful. So you’ll see folks stacked up to learn about the latest and greatest packet capture and the associated analysis. These tools are now starting to bring some cool visualization and even malware analysis to the table. Check them out, because as the market matures (and prices come down), this is a technology you should be looking at.

Later today we’ll be posting the sections on Email/Web Content Security, as well as Data Security. So stay tuned for that…
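For a sense of what “application and user aware” means in practice, here is a purely illustrative sketch contrasting a port-based rule with a next generation style rule. The rule structure, group names, and applications are made up for the example; every vendor expresses this differently.

    # Old-school rule: match on network 5-tuple attributes only
    legacy_rule = {"action": "allow", "protocol": "tcp", "dst_port": 443}

    # Next-generation style rule: match on identified application and user/group,
    # regardless of which port the traffic happens to use
    ngfw_rule = {"action": "allow", "application": "webmail", "user_group": "marketing"}

    def match_legacy(rule, flow):
        return flow.get("protocol") == rule["protocol"] and flow.get("dst_port") == rule["dst_port"]

    def match_ngfw(rule, flow):
        # Assumes the firewall has already classified the flow's application and mapped
        # the source IP to a directory user/group (the hard parts in real products)
        return (flow.get("application") == rule["application"]
                and rule["user_group"] in flow.get("user_groups", []))

    if __name__ == "__main__":
        # File sharing tunneled over port 443: the legacy rule happily allows it,
        # the NGFW rule does not, because the identified application is not "webmail"
        flow = {"protocol": "tcp", "dst_port": 443,
                "application": "p2p-filesharing", "user_groups": ["marketing"]}
        print("legacy allows:", match_legacy(legacy_rule, flow))
        print("ngfw allows:  ", match_ngfw(ngfw_rule, flow))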


Why You Should Delete Your Twitter DMs, and How to Do It

I’ve been on Twitter for a few years now, and over that time I’ve watched not only its mass adoption, but also how people changed their communication habits. One of the most unexpected changes (for me) is how many people now use Twitter Direct Messages as instant messaging. It’s actually a great feature – with IM someone needs to be online and using a synchronous client, but you can drop a DM anytime you want and, depending on their Twitter settings and apps, it can follow them across any device and multiple communications methods. DM is oddly a much more reliable way to track someone down, especially if they link Twitter with their mobile phone.

The problem is that all these messages are persistent, forever, in the Twitter database. And Twitter is now one of the big targets when someone tries to hack you (as we’ve seen in a bunch of recent grudge attacks). I don’t really say anything over DM that could get me in trouble, but I also know that there’s probably plenty in there that, taken out of context, could look bad (as happened when a friend got hacked and some DMs were plastered all over the net). Thus I suggest you delete all your DMs occasionally. This won’t necessarily clear them from all the Twitter apps you use, but it does wipe them from the database (and the inboxes of whoever you sent them to).

This is tough to do manually, but, for now, there’s a tool to help. Damon Cortesi coded up DM Whacker, a bookmarklet you can use while logged into Twitter to wipe your DMs. Before I tell you how to use it, one big warning: this tool works by effectively performing a Cross-Site Request Forgery attack on yourself. I’ve scanned the code and it looks clean, but that could change at any point without warning, and I haven’t seriously programmed JavaScript for 10 years, so you really shouldn’t take my word on this one.

The process is easy enough, but you need to be in the “old” Twitter UI:

  • Go to the DM Whacker page and drag the bookmarklet to your bookmarks bar.
  • Log into Twitter and navigate to your DM page. If you use the “new” Twitter UI, switch back to the “old” one in your settings.
  • Click the bookmarklet. A box will appear in the upper-right of the Twitter page.
  • Select what you want to delete (received and sent) or even filter by user.
  • Click the button, and leave the page running for a while. The process can take a bit, as it’s effectively poking the same buttons you would manually.
  • If you are really paranoid (like me), change your Twitter password. It’s good to rotate anyway.

And that’s it. I do wish I could keep my conversation history for nostalgia’s sake, but I’d prefer to worry less about my account being compromised. Also, not everyone I communicate with over Twitter is as circumspect, and it’s only fair to protect their privacy as well. (For the script-inclined, a rough API-based alternative is sketched below.)
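For the script-inclined, the same cleanup can be done against Twitter’s REST API instead of driving the web UI. This is a rough sketch only: it assumes the direct message list/destroy endpoints of that era (Twitter has reworked its DM APIs more than once, so treat the URLs as placeholders and check the current documentation), and it assumes you have registered an app and hold OAuth credentials.

    import requests
    from requests_oauthlib import OAuth1

    # Placeholder credentials from a registered Twitter app
    auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

    API = "https://api.twitter.com/1.1"   # assumed base URL for the classic REST API

    def fetch_dm_ids(endpoint):
        # Pull one page of DMs (received or sent) and return their ids
        resp = requests.get(f"{API}/{endpoint}.json", auth=auth, params={"count": 200})
        resp.raise_for_status()
        return [msg["id"] for msg in resp.json()]

    def whack_dms():
        for endpoint in ("direct_messages", "direct_messages/sent"):
            for dm_id in fetch_dm_ids(endpoint):
                # Destroy removes the message from the database, like the bookmarklet does
                requests.post(f"{API}/direct_messages/destroy.json",
                              auth=auth, params={"id": dm_id})

    if __name__ == "__main__":
        whack_dms()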


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.