Securosis Research

RSA Guide 2011: Email/Web (Content) Security

Global Threats. APT. Botnets. Infected Web Pages. Grannies with shotguns. We expect to see anything and everything it takes for vendors to get your attention, including never-before-seen awards and security metrics. Some ask "Why the hype?" The value of content security – both inbound filtering to keep unwanted garbage from coming into the network, and detection of unwanted activity like surfing for porn or sending company secrets to your cousin as investment advice – is proven. All the major players and most mid-tier providers have closed the major holes in their products, provide unified management for all functions, and offer some type of SaaS service. The technology works. The problem is that the segment is both mature and saturated. To earn a new customer, a vendor must steal one from a competitor. Growing revenue means convincing customers they need a new service. It is increasingly difficult to differentiate the top tier from the mid-tier players, so that noise you hear is vendors trying to find an edge. For the most part, the vendors offer quality services at a price point that continues to drop with lower-cost cloud and SaaS based offerings. But you can't blame the vendors for trying to "one up" the competition in a crowded market.

What We Expect to See

There are three areas of interest at the show for content security:

It's Raining Devices: One thing you are going to learn wandering around Moscone is how the cloud protects those endpoint devices. Yep! The Content Security Cloud protects the endpoint. Isn't that what cloud security is all about? Well, no, actually, but you will hear about it anyway. Those services that run on your iPhone/Droid/Blackberry are theoretically just as susceptible to attack as what's on your desktop or laptop. Supposedly. That's the vendor argument, but attacks against mobile devices are more likely to target lower layers of the infrastructure – don't worry, though, vendors won't let facts ruin a good story. In most cases the vendor is offering exactly the same services they already provide for your laptop/workstation, to protect the new devices from the same threats. But hey, it's 'the cloud', so it must be good!

More DLP: Yes, content security providers offer Data Loss Prevention. In most cases it's just the subset of DLP needed to detect data exfiltration. And regular expression checking of outbound documents and web requests is good enough to address the majority of content leakage problems, so this is a good addition for most customers (a rough sketch of this kind of check appears at the end of this post). By and large we hear from satisfied customers who implement a dozen or so content policies for specific violations they are interested in detecting, and find the analysis techniques sufficient. Deployments of this type are far less daunting than a full-featured, soup-to-nuts DLP platform, so we hear far more success stories and far fewer complaints about shelfware.

Users Are Employees Too: Scams, fraud, and phishing attacks continue to hammer those uninterested in security, and the IT managers who support them. The content security vendors know that nothing matters to some users besides getting to their Facebook pages on their lunch hour. That also means these users are unusually susceptible to phishing attacks, drive-by malware, and account compromises. In and of themselves these attacks are fairly low-yield and low-damage, but a compromised computer on a corporate network acts as a launching pad for all sorts of network mayhem. Content security providers can no longer claim the "Insider Threat" is your biggest security concern, but they will let IT managers know they help mitigate the damage from stupid human tricks.

Next up in the hit parade is Data Security. OK, repeat after me: WikiLeaks, WikiLeaks, WikiLeaks – and you'll start to get a feel for this year's RSA Conference rallying cry.
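To make the "regular expression checking" point above concrete, here is a minimal sketch of the kind of pattern matching a DLP Light feature performs on outbound content. The patterns, policy names, and sample message are illustrative assumptions, not anything a particular vendor ships.

```python
import re

# Illustrative patterns only -- real products ship tuned versions, usually with
# extra validation (Luhn checks, keyword proximity) to cut false positives.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(message_body):
    """Return (policy, matched_text) pairs for an outbound email or web request body."""
    hits = []
    for policy, pattern in PATTERNS.items():
        for match in pattern.finditer(message_body):
            hits.append((policy, match.group()))
    return hits

if __name__ == "__main__":
    sample = "Wiring instructions attached. Card 4111 1111 1111 1111, SSN 078-05-1120."
    for policy, matched in scan_outbound(sample):
        print(f"ALERT [{policy}]: {matched}")
```

A dozen or so policies of this shape, tied to a gateway that can block or quarantine a message, is roughly what the "satisfied customers" above are running.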


RSA Guide 2011: Data Security

As someone who has covered data security for nearly a decade, some days I wonder if I should send Bradley Manning, Julian Assange, whoever wrote the HITECH act, and the Chinese hacker community a personal note of gratitude. If the first wave of data security was driven by breach disclosure laws and a mixture of lost laptops and criminal exploits, this second wave is all about stopping leaks and keeping your pants on in public. This year I've seen more serious interest from large enterprises in protecting more than merely credit card numbers than ever before. We also see PCI and the HITECH act (in healthcare) pushing greater investment in data security down to the mid-market. And while the technology is still far from perfect, it's definitely maturing nicely.

What We Expect to See

There are five areas of interest at the show for data security:

DLP – Great taste, less filling: There are two major trends in the Data Loss Prevention market – DLP Light coming of age, and full-suite DLP integration into major platforms. A large percentage of endpoint and network tools now offer basic DLP features. This is usually a regular expression engine or some other technique tuned to protect credit card numbers, and maybe a little personally identifiable information or healthcare data. Often this is included for free, or at least darn cheap. While DLP Light (as we call this) lacks mature workflow, content analysis capabilities, and so on, not every organization is ready for, or needs, a full DLP solution. If you just want to add some basic credit card protection, this is a good option. It's also a great way to figure out whether you need a dedicated DLP tool without spending too much up front. As for full-suite DLP solutions, most of them are now available from big vendors. Although the "full" DLP is usually a separate product, there's a lot of integration at points of overlap like email security or web gateways. There's also a lot of feature parity between the vendors – unless you have a particular need that only one fulfills, you can stick with the main ones and probably flip a coin to choose. The key things to ask when looking at DLP Light are what the content analysis engine is, and how incidents are managed. Make sure the content analysis technique will work for what you want to protect, and that the workflow fits how you want to manage incidents. You might not want your AV guy finding out the CFO is emailing customer data to a competitor. Also make sure you get to test it before paying for it. As for full-suite DLP, focus on how well it can integrate with your existing infrastructure (especially network gateways, directories, and endpoints). I also suggest playing with the UI, since that's often a major deciding factor given how much time security and non-security risk folks spend in it. Last of all, we're starting to see more DLP vendors focus on the mid-market and on easing deployment complexity.

Datum in a haystack: Thanks to PCI 2.0 we can expect to see a heck of a lot of discussion around "content discovery". While I think we all know it's a good idea to figure out where all our sekret stuff is in order to protect it, in practice this is a serious pain in the rear. We've all screamed in frustration when we find that Access database or spreadsheet on some marketing server chock full of Social Security numbers. PCI 2.0 now requires you to demonstrate how you scoped your assessment, and how you keep that scope accurate. That means having some sort of tool or manual process to discover where all this stuff sits in storage. Trust me, no marketing professional will possibly let this one pass – especially since they've been trying to convince you it was required for the past 5 years. All full-suite DLP tools include content discovery to find this data, as do some DLP Light options (a rough sketch of what such a scan looks like appears at the end of this post). Focus on checking out the management side, since odds are there will be a heck of a lot of storage to scan, and results to filter through.

There's a new FAM in town: I hate to admit this, but there's a new category of security tool popping up this year that I actually like. File Activity Monitoring watches all file access on protected systems and generates alerts on policy violations and unusual activity. In other words, you can build policies that alert you when a sales guy about to depart is downloading all the customer files, without blocking access to them. Or when a random system account starts downloading engineering plans for that new stealth fighter. I like the idea of being able to track what files users access and generate real-time alerts. I started talking about this years ago, but there weren't any products on the market. Now I know of three, and I suspect more are coming down the pipe.

Battle of the tokens: Last year we predicted a lot of interest and push in encryption and tokenization, and for once we got it right. One thing we didn't expect was the huge battle that erupted over ownership of the term. Encryption vendors started pushing encrypted data as tokens (which I find hard to call tokens), while tokenization advocates try to convince you encryption is no more secure than guarding Hades with a chihuahua. The amusing part is that all these guys offer both options in their products.

Play the WIKILEAKS! WIKILEAKS! APT! WIKILEAKS! PCI! HITECH! WIKILEAKS!!! drinking game: Since not enough of you are buying data security tools, the vendors will still do their best to scare your pants off and claim they can prevent the unpreventable. Amuse yourself by cruising the show floor with beer in hand and drinking any time you see those words on marketing materials. It's one drink per mention in a brochure, 2 drinks for a postcard handout, and 3
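As promised above, here is a rough sketch of the content discovery idea – walking a file share and flagging files that appear to contain card numbers. The scan root, pattern, and Luhn filter are assumptions for illustration; real discovery tools handle far more file formats, data types, and scale.

```python
import os
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]*?){13,16}\b")

def luhn_ok(number):
    """Basic Luhn check to weed out random digit strings that merely look like card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def discover(root):
    """Walk a directory tree and report files that appear to hold card numbers."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue
            hits = [m.group() for m in CARD_PATTERN.finditer(text) if luhn_ok(m.group())]
            if hits:
                findings.append((path, len(hits)))
    return findings

if __name__ == "__main__":
    # "/shares/marketing" is a placeholder -- point it at whatever storage is in scope.
    for path, count in discover("/shares/marketing"):
        print(f"{path}: {count} probable card numbers")
```

Even this toy version shows why the management side matters: point it at a few terabytes of file shares and you will have plenty of results to filter through.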


RSA Guide 2011: Key Themes

OMG, it's 6 days and counting to the 2011 RSA Conference. Yes, they moved the schedule up a few months, so you can now look forward to spending Valentine's Day with cretins like us, as opposed to your loved ones. Send thank-you notes to… But on to more serious business. Last year we produced a pretty detailed Guide to the Conference and it was well received, so – gluttons for punishment that we are – we're doing it again. This week we'll be posting the Guide in pieces, and we will distribute the final assembled version on Friday so you can download it and get ready for the show. Without further ado, here is the key themes part of our Guide to RSA Conference 2011.

RSA Conference 2011: Key Themes

How many times have you shown up at the RSA Conference to see the hype machine fully engaged on a topic or two? Remember how 1999 was going to be the Year of PKI? And 2000. And 2001. And 2002. So what's going to be the news of the show this year? Here is a quick list of key topics that will likely be top of mind at RSA, and why you should care.

Cloud Security – From Pre-K to Kindergarten

Last year you could count real cloud security experts on one hand… with a few fingers left over. This year you'll see some real, practical solutions, but even more marketing abuse than last year. Cloud computing is clearly one of the major trends in enterprise technology, and woe unto the vendor that misses that boat. But we are only on the earliest edge of a change that will reshape our data centers, operations, and application design over the next 10 years. The number of people who truly understand cloud computing is small, and folks who really understand cloud computing security are almost as common as unicorns. Even fewer of them have actually implemented anything in production environments (something only one of our Securosis Contributors has done). The big focus in cloud security these days is public Infrastructure as a Service offerings such as Amazon EC2 and Rackspace, due to increasing enterprise interest and the complexity of the models. But don't think everyone is deploying all their sensitive applications in the cloud. Most of the bigger enterprises we talk with are only at the earliest stages of public Infrastructure as a Service (IaaS) projects, while making much heavier use of "private clouds". Medium-size and small organizations are actually more likely to jump into public cloud because they have less legacy infrastructure and complexity to deal with, and can realize the benefits more immediately (we're sure glad we don't need our own data center). It's important to separate a trend from its current position on the maturity curve – cloud computing is far from being all hype, but we're still early in the process.

Before hitting the show, we suggest you get a sense of what cloud projects your organization is looking at. We also recommend taking a look at the architectural section of the Cloud Security Alliance Security Guidance for Critical Areas of Focus in Cloud Computing, and the Editorial Note on Risk on pages 9-11 (yes, Rich wrote this, and we still recommend you read it). On the security front, remember that design and architecture are your friends, and no tool can simply "make you secure" in the cloud, no matter what anyone claims. For picking cloud sessions, we suggest you filter out the FUD from the meat. Skip over session descriptions that say things like "will identify the risks of cloud computing" and look for those advertising reference architectures, case studies, and practical techniques (don't worry – despite the weird titles, Rich includes those in his cloud presentation with Chris Hoff). With the lack of standardization among cloud providers, and even conflicting definitions among organizations as to what constitutes "the cloud", it's all too easy to avoid specifics and stick to generalities on stage and in marketing materials. Cloud security is one of our technology areas, so we'll cover the specific things we think you'll see later in this guide. We are also running the (sold-out) inaugural Cloud Security Alliance training class the Sunday before RSA, and Rich is moderating a panel on government cloud and speaking with the always-entertaining Chris Hoff on cloud security Friday.

The Economy Sucks Less – What now?

The last few years have been challenging. For a while, success simply meant keeping yourself and your team employed. It's not like you had a lot of extra funds lying around, so many projects kept falling off the list. You tried your best to do the minimum, and sometimes didn't even reach that low bar. Nice-to-have became not-gonna-happen. But now it looks like things are starting to recover a bit. Global stock markets, which tend to look 6 months ahead, are expecting strong growth, and many of our conversations with end users (both large and small) indicate a general optimism we haven't seen in quite a while. To be clear, no one (certainly not us) expects the go-go days of the Internet bubble to return any time soon – unless you run a mobile games company. But we do think the economy will suck less in 2011, and that means you'll need to start thinking about projects that have fallen off the plate. Such as:

Perimeter renewal: Many organizations let the perimeter go a bit, so it's overgrown, suboptimal, and not well positioned to do much against the next wave of application and targeted attacks. One project to consider might be an overhaul of your perimeter – or, at minimum, a move to a different, more application-aware architecture to more effectively defend your networks. At RSAC you'll hear a lot about next generation firewalls, which really involve building rules based on application behavior rather than just ports and protocols. At the show, your job will be to determine what is real and what is marketing hyperbole.


RSA Guide 2011: Network Security

2010 was an interesting year for the network security space. There was a resurgence in interest and in projected spending, largely for perimeter security. Part of this is a loosening of the budget purse strings, which is allowing frustrated network security folks to actually start dreaming about upgrading their perimeters. So there will be plenty of vendors positioning to benefit from the wave of 2011 spending.

What We Expect to See

There are four areas of interest at the show for network security:

Next Generation Firewall: Last year we talked about application awareness as absolutely critical to the next wave of network security devices. That capability – building policies based on applications and users, rather than just ports and protocols – has taken the name next generation firewall (a toy illustration of the difference appears at the end of this post). Unless a vendor has no interest in the enterprise market, they will be touting their next generation wares. Some of these will be available exclusively on slide decks in the booth, while other vendors will be able to show varying levels of implementation. While you've got an SE at your disposal at the show, ask some pointed questions about how their application categorization happens and what the effective throughput is for their content-oriented functions. It should be pretty clear to what degree their gear is next generation, or whether it's really just an IPS bolt-on.

More marketecture: As these next generation capabilities start to hit, they present the opportunity for a fairly severe disruption in the status quo of vendor leadership. So what do the incumbents do when under attack, without a technical response? Right – they try to freeze the market with some broad statement of direction that is light on detail and heavy on hyperbole. It wouldn't surprise us to see at least one of the RSA keynoters (yeah, those who pay EMC $250K for the right to pontificate for an hour) talk about a new initiative to address all the ills of everything.

Virt suck: The good news is that a bunch of the start-ups talking about virtualization security hit the wall and got acquired by big network security vendors. So you probably won't see many folks talking about their new widget to protect inter-VM network traffic. What you will hear is every vendor on the floor playing up the advantages of their shiny new virtual appliances. It's just like the box you pay $50K for, but you get to use your own computing power in a horribly wasteful fashion. You know how attractive it is to slice out a chunk of your computers to run IPS signatures. It's like these folks want to bring us back to 1995 – and because it runs on ESX, it's all good. Not so much.

Full packet capture maturing: Yes, this is a carry-over from last year. The fact remains that we still have a lot of work to do to streamline our incident response processes and make them useful. So you'll see folks stacked up to learn about the latest and greatest in packet capture and the associated analysis. These tools are now starting to bring some cool visualization and even malware analysis to the table. Check them out, because as the market matures (and prices come down), this is a technology you should be looking at.

Later today we'll be posting the sections on Email/Web Content Security, as well as Data Security. So stay tuned for that…
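To make the ports-and-protocols versus applications-and-users distinction concrete, here is a deliberately simplified sketch of the two policy styles expressed as data. No vendor's rule syntax is implied, and the hard part – deciding that a flow on port 443 is really Facebook chat – is assumed away.

```python
# A traditional rule keys only on the network tuple; a "next generation" rule keys
# on who the user is and what the application actually is, regardless of port.
port_based_rules = [
    {"action": "allow", "dst_port": 80,  "protocol": "tcp"},
    {"action": "allow", "dst_port": 443, "protocol": "tcp"},
]

app_aware_rules = [
    {"action": "allow", "app": "salesforce",    "user_group": "sales"},
    {"action": "deny",  "app": "facebook-chat", "user_group": "*"},
    {"action": "allow", "app": "web-browsing",  "user_group": "*"},
]

def evaluate(rules, flow):
    """Return the action of the first rule that matches the flow; default deny."""
    for rule in rules:
        if all(rule.get(k) in (flow.get(k), "*") for k in rule if k != "action"):
            return rule["action"]
    return "deny"

if __name__ == "__main__":
    # Same flow, different verdict: port 443 looks fine, the application does not.
    flow = {"dst_port": 443, "protocol": "tcp",
            "app": "facebook-chat", "user_group": "engineering"}
    print("port-based verdict:", evaluate(port_based_rules, flow))  # allow
    print("app-aware verdict: ", evaluate(app_aware_rules, flow))   # deny
```

The pointed questions to ask the SE are exactly about the part this sketch skips: how application categorization actually happens, and what it costs in throughput.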


Why You Should Delete Your Twitter DMs, and How to Do It

I've been on Twitter for a few years now, and over that time I've watched not only its mass adoption, but also how people have changed their communication habits. One of the most unexpected changes (for me) is how many people now use Twitter Direct Messages as instant messaging. It's actually a great feature – with IM someone needs to be online and using a synchronous client, but you can drop a DM anytime you want and, depending on their Twitter settings and apps, it can follow them across any device and multiple communications methods. DM is oddly a much more reliable way to track someone down, especially if they link Twitter with their mobile phone.

The problem is that all these messages are persistent, forever, in the Twitter database. And Twitter is now one of the big targets when someone tries to hack you (as we've seen in a bunch of recent grudge attacks). I don't really say anything over DM that could get me in trouble, but I also know there's probably plenty in there that, taken out of context, could look bad (as happened when a friend got hacked and some DMs were plastered all over the net). Thus I suggest you delete all your DMs occasionally. This won't necessarily clear them from all the Twitter apps you use, but it does wipe them from the database (and the inboxes of whoever you sent them to).

This is tough to do manually, but, for now, there's a tool to help. Damon Cortesi coded up DM Whacker, a bookmarklet you can use while logged into Twitter to wipe your DMs. Before I tell you how to use it, one big warning: this tool works by effectively performing a Cross-Site Request Forgery attack on yourself. I've scanned the code and it looks clean, but that could change at any point without warning, and I haven't seriously programmed JavaScript for 10 years, so you really shouldn't take my word on this one.

The process is easy enough, but you need to be in the "old" Twitter UI:

  • Go to the DM Whacker page and drag the bookmarklet to your bookmarks bar.
  • Log into Twitter and navigate to your DM page. If you use the "new" Twitter UI, switch back to the "old" one in your settings.
  • Click the bookmarklet. A box will appear in the upper right of the Twitter page.
  • Select what you want to delete (received and sent), or even filter by user.
  • Click the button, and leave the page running for a while. The process can take a bit, as it's effectively poking the same buttons you would manually.
  • If you are really paranoid (like me), change your Twitter password. It's good to rotate anyway.

And that's it. I do wish I could keep my conversation history for nostalgia's sake, but I'd prefer to worry less about my account being compromised. Also, not everyone I communicate with over Twitter is as circumspect as I am, and it's only fair to protect their privacy as well.
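If you would rather script the cleanup yourself than run someone else's bookmarklet against your logged-in session, the same effect can in principle be achieved through Twitter's REST API. This is not Damon's tool – just a rough sketch using the tweepy library as it worked against the old v1 API, and the method names, attributes, and credential placeholders are assumptions to verify against whatever API version and library release you actually have.

```python
import tweepy

# Placeholders -- register your own application with Twitter to get these.
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_SECRET = "..."

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth)

# Pull both received and sent DMs, then delete each one by id. Like the
# bookmarklet, this only sees what the API pages back to us, so a long
# history may take several runs to drain completely.
for dm in api.direct_messages() + api.sent_direct_messages():
    print("deleting DM", dm.id)
    api.destroy_direct_message(dm.id)
```

Either way, the password-rotation advice at the end of the list above still applies once you are done.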


The Analyst's Dilemma: Not Everything Sucks

There's something I have always struggled with as an analyst. Because of the, shall we say, 'aggressiveness' of today's markets and marketers, most of us in the analyst world are extremely cautious about ever saying anything positive about any vendor. This frequently extends to entire classes of technology, because we worry praise will be misused or taken out of context to promote a particular product or company. Or, since every technology is complex and no blanket statement can possibly account for everyone's individual circumstances, we worry that someone will misinterpret what we say and get pissed when it doesn't work for them.

What complicates this situation is that we do take money from vendors, both as advisory clients and as sponsors for papers/speaking/etc. They don't get to influence the content – not even the stuff they pay to put their logos on – but we're not stupid. If we endorse a technology and a vendor who offers it has their logo on that paper, plenty of people will think we pulled a pay-for-play. That's why one of our hard rules is that we will never specifically mention a vendor in any research that's sponsored by any vendor. If we are going to mention a vendor, we won't sell any sponsorship on it.

But Mike and I had a conversation today where we realized we were holding ourselves back on a certain project because we were worried it might come too close to endorsing the potential sponsor, even though it doesn't mention them. We were writing bad content in order to protect objectivity. Which is stupid. Objectivity means having the freedom to say when you like something. Just crapping on everything all the time is merely being contrarian, and doesn't necessarily lead to good advice.

So we have decided to take off our self-imposed handcuffs. Sometimes we can't fully dance around endorsing a technology or approach without it ending up tied to a vendor, but that's fine. They still never get to pay us to say nice things about them, and if some people misinterpret that, there really isn't anything we can do about it. We have more objectivity controls in place here than any other analyst firm we've seen, including our Totally Transparent Research policy. We think that gives us the freedom to say what we like. And, to be honest, we can't publish good research without that freedom.


React Faster and Better: Kicking off a Response

Everyone's process is a bit different, but through our research we have found that the best teams tend to organize themselves into three general levels of response, each staffed with increasing expertise. Once the alert triggers, your goal is to filter out the day-to-day crud junior staffers are fully capable of handling, while escalating the most serious incidents through the response levels as quickly as possible. Having a killer investigation team doesn't do any good if an incident never reaches them, or if their time is wasted on the daily detritus that can be easily handled by junior folks. As mentioned in our last post, Organizing for Response, these tiers should be organized by skills and responsibilities, with clear guidelines and processes for moving incidents up (and sometimes down) the ladder. Using a tiered structure allows you to more quickly and seamlessly funnel incidents to the right handlers – keeping those with the most experience and skills from being distracted by lower-level events. An incident might be handled completely at any given level, so we won't repeat the usual incident response fundamentals, but will instead focus on what to do at each level, who staffs it, and when to escalate.

Tier 1: Validate and filter

After an incident triggers, the first step is to validate and filter. This means performing a rapid analysis of the alert and either handling it on the spot or passing it up the chain of command. While incidents might trigger off the help desk or from another non-security source, the initial analysis is always performed by a dedicated security analyst or incident responder. The analyst receives the alert, and it's his or her job to figure out whether the incident is real, and if it is real, how severe it might be. These folks are typically in your Security Operations Center and focus on "desk analysis". In other words, they handle everything right then and there, and aren't running into data centers or around hallways. The alert comes in, they perform a quick analysis, and either close it out or pass it on. For simple or common alerts they might handle the incident themselves, depending on your team's guidelines.

The team: These are initial incident handlers, who may be dedicated to incident response or, more frequently, also carry other security responsibilities (e.g., network security analyst). They tend to be focused on one tool, or a collection of tools in their coverage area (network vs. endpoint), and they are the team monitoring the SIEM and network monitors. Higher tiers focus more on investigation, while this tier focuses more on initial identification.

Primary responsibilities: Initial incident identification, information gathering, and classification. They are the first human filter: they handle smaller incidents and identify problems that need greater attention. It is far more important that they pass information up the chain quickly than that they try to play Top Gun and handle things over their heads on their own. Good junior analysts are extremely important for quickly identifying more serious incidents for rapid response.

Incidents they handle themselves: Basic network/SIEM alerts, password lockouts/failures on critical systems, standard virus/malware. Typically limited to a single area – e.g., network analyst.

When they escalate: Activity requiring HR/legal involvement, incidents which require further investigation, alerts that could indicate a larger problem, etc. A toy sketch of this kind of triage-and-escalation logic follows below.
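Purely as an illustration of the tier 1 "validate and filter" pass, here is that triage logic reduced to code. The alert fields, category names, and escalation criteria are assumptions made up for the example – real guidelines come from your own response plan.

```python
# Categories a tier 1 analyst can close out on their own versus ones that always go up.
HANDLE_LOCALLY = {"av_detection", "password_lockout", "basic_ids_signature"}
ESCALATE_ALWAYS = {"data_exfiltration", "hr_legal", "critical_system_compromise"}

def triage(alert):
    """Return 'close', 'handle', or 'escalate' for a normalized alert dict."""
    if not alert.get("validated"):           # obvious false positive
        return "close"
    if alert["category"] in ESCALATE_ALWAYS:
        return "escalate"
    if alert["category"] in HANDLE_LOCALLY and alert.get("scope", 1) == 1:
        return "handle"                       # single host, known playbook
    return "escalate"                         # anything unclear goes up the ladder

if __name__ == "__main__":
    alerts = [
        {"category": "password_lockout", "validated": True, "scope": 1},
        {"category": "basic_ids_signature", "validated": False},
        {"category": "data_exfiltration", "validated": True, "scope": 4},
    ]
    for a in alerts:
        print(a["category"], "->", triage(a))
```

The point is not the code but the default: when in doubt, the sketch escalates, which is exactly the behavior you want from junior analysts.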
The tools

The goal at this level is triage, so these tools focus on collecting and presenting alerts, and providing the basic investigative information we discussed in the fundamentals series.

SIEM: SIEMs aren't always very useful for full investigations, but they do a good job of collecting and presenting top-level alerts and factoring in data from a variety of sources. Many teams use the SIEM as their main tool for the initial reduction and scoping of alerts from other tools, and for filtering out the low-level crud, including obvious false positives. Central management of alerts from other tools helps identify what's really happening, even though the rest of the investigation and response will be handled at the original source. This reduces the number of eyeballs needed to monitor everything and makes the team more efficient.

Network monitoring: A variety of network monitoring tools are in common use. They tend to be pretty cheap (and there are a few good open source options) and provide good bang for the buck, so you can get a feel for what's really happening on your network. Network monitoring typically includes NetFlow, collected device logs, and perhaps even your IDS. Many organizations use these monitoring tools either as an extension of their SIEM environment or as a first step toward deeper network monitoring.

Full packet network capture (forensics): If network monitoring represents baby steps, full packet capture is your first bike. A large percentage of incidents involve the network, so capturing what happens on the wire is the linchpin of any analysis and response. Any type of external attack, and most internal attacks, eventually involve the network. The more heavily you monitor, the greater your ability to characterize incidents quickly, because you have the data to reconstruct exactly what happened. Unlike endpoints, databases, or applications, you can monitor a network deeply, passively, and securely, using tools that (hopefully) aren't involved in the successful compromise (less chance of the bad guys erasing your network logs). You'll use the information from your network forensics infrastructure to scope the incident and identify "touch points" for deeper investigation. At this level you need a full packet capture tool with good analysis capabilities – especially given the massive amount of data involved – even if you feed alerts to a SIEM. Just having the packets to look at, without some sort of analysis of them, isn't very useful (a toy illustration follows at the end of this post). Getting back to our locomotion example, deep analysis of full packet capture data is akin to jumping in the car.

Endpoint Protection Platform (EPP) management console: This is often your first source for incidents involving endpoints. It should provide up-to-date information on the endpoint as well as activity logs.

Data Loss Prevention
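Commercial full packet capture tools do this at scale with real analysis consoles, but as the toy illustration promised above of "scope the incident from the wire", here is a sketch that just summarizes the top talkers in a capture file using the scapy library. The file name is a placeholder.

```python
from collections import Counter

from scapy.all import IP, rdpcap  # requires scapy to be installed

def top_talkers(pcap_path, limit=10):
    """Summarize a capture file by source/destination pair -- a first scoping step."""
    packets = rdpcap(pcap_path)
    pairs = Counter()
    for pkt in packets:
        if IP in pkt:
            pairs[(pkt[IP].src, pkt[IP].dst)] += 1
    return pairs.most_common(limit)

if __name__ == "__main__":
    # "incident_capture.pcap" stands in for whatever your capture infrastructure exports.
    for (src, dst), count in top_talkers("incident_capture.pcap"):
        print(f"{src:>15} -> {dst:<15} {count} packets")
```

This is the "having the packets" part; the analysis capabilities the post argues for are everything a real tool layers on top of a summary like this.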


Good Programming Practices vs. Rugged Development

I had a long chat with Josh Corman yesterday afternoon about Rugged, especially as it applies to software development. I know this will be a continuing topic at the RSA conference, and we are both looking forward to a series of brainstorming sessions on the subject. One aspect that intrigues both of us is the overlap between Agile and Rugged as conceptual frameworks for guiding developer decisions. I thought this was important enough to blog prior to the conference. The discussion went something like this:

Agile – as a concept – is successful because when the principles behind the Agile Manifesto are put into practice, they make software development easier and faster. Agile development – embodied in any of many different process enhancements – is successful because those processes promote Agile principles. When creating Rugged, mirroring the Agile Manifesto was a great approach, because both strive to adjust development's approach to creating software. But the Agile Manifesto illustrates – through its 12 principles – how you should prioritize preferences and make tradeoffs when molding software. Rugged has yet to define the principles that, when put into practice, promote Rugged software. Rugged is missing embodiments – examples to guide practitioners – that describe and promote Rugged principles. Rugged denotes a problem set (software is fragile, insecure, and feature-focused to the exclusion of all else) but lacks principles and a roadmap for fixing it. That's the bad part, and the gist of my Twitter rant earlier this week.

The good news is that the overlap between the two concepts provides plenty of examples of what Rugged could be. The more I think about it, the more I think the parallels between Agile and Rugged are important and serve as an example of how to approach Rugged development. There is sufficient overlap that several of the Agile principles can be directly adopted by Rugged with no loss of meaning or intent. Specifically, stealing from the Agile Principles:

Welcoming changing requirements, even late in development …: Threats evolve rapidly, as do use cases and deployment models (Cloud? Anyone?). The fact that new security issues pop up like whack-a-moles is no different than new feature requests in web application development. The agility to respond to new requests and reprioritize based on customer importance is no different than being agile enough to respond to risks. This principle applies equally well to Rugged as to Agile.

Working software is the primary measure of progress: If your software does not compile, or is filled with logic errors, your development efforts have not been totally successful. If your code is filled with security bugs, and your software has variable functionality, then your development efforts have not been totally successful either. When it comes to secure code development, I look at security as just another vector to manage. It's one of many factors that must be accounted for during the development process. Security is not your only goal, and usually not even the most important goal, but it is important to account for. You can choose to ignore it, but if you do there will likely be some major issue down the road. We are at that uncomfortable junction in the history of software development where we are seeing some firms have their businesses disrupted by failing to manage the risks posed by new code. Ruggedness should be a metric for progress.

Continuous attention to technical excellence and good design enhances agility: I do threat modelling when building new software. I do it after the architecture phase and sometime during the design phase, because threat modelling finds weaknesses in my design choices. Sometimes they are logic flaws, sometimes communication protocol flaws, and sometimes it's simply that the platform I was considering does not support adequate security. Regardless, examining code with a jaundiced eye – looking for issues before they become serious problems – is all part of the design process. The scrutiny enhances product quality and Ruggedness. I have avoided problems before they became serious because they were exposed during design, and those problems were easier to fix than ones I found later. That is only one example of the overlap.

Build projects around motivated individuals: Coders are competitive. Developers have egos. Most developers want to write quality code, and to be recognized as experts who pay attention to detail. Most I have worked with are highly motivated to practice, to learn, and to get better. When there is a coding standard, they work hard to make sure their code meets it. They don't want the code they checked in to crash the nightly build, because everyone on the team will hear about it. If Ruggedness is part of the coding standard, and security checks are part of daily regression tests (a tiny example of such a check appears at the end of this post), there is no reason to expect the development team to do anything other than raise their game and meet the challenge.

Simplicity – the art of maximizing the amount of work not done – is essential: Developers are creative. Agile as a concept is intended to allow developers to think of creative solutions. Group discussions and open dialog, combined with rapid prototyping of proofs of concept, all help find simple solutions to complex problems. Security problems fall to the same blade if you allow them to be addressed in the same way.

Unfortunately much of what I am proposing here is a heck of a lot easier when building new code, as opposed to retrofitting old code. I have lugged the burden of legacy software many times, and the restrictions imposed by some old glob of misbehaving garbage crush inspiration and ingenuity. Bubble gum and duct tape are fun up to a point, but sooner or later it becomes just fixing someone else's crap over and over again. With new code I contend that Rugged is simply good programming practices – and if incorporated efficiently it will make software designers, coders, and quality assurance teams better. To some of you I am simply playing security jazz, this is all BS, and it really does not help you get your jobs
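As a concrete (and deliberately tiny) illustration of the "security checks as part of daily regression tests" idea above, here is the sort of check that can ride along with the nightly build. The function under test and its validation rule are invented for the sketch – the point is only that Ruggedness requirements can be expressed as ordinary tests developers already know how to keep green.

```python
import re

def build_user_query(username):
    """Toy function under test: validates input and returns a parameterized query."""
    if not re.fullmatch(r"[A-Za-z0-9_.-]{1,64}", username):
        raise ValueError("invalid username")
    return "SELECT * FROM users WHERE name = %s", (username,)

def test_rejects_injection_shaped_input():
    # A Rugged regression check: hostile-looking input must be refused outright.
    try:
        build_user_query("bob'; DROP TABLE users; --")
    except ValueError:
        return
    raise AssertionError("injection-shaped input was accepted")

def test_query_is_parameterized():
    # Values must travel as bind parameters, never concatenated into the SQL string.
    sql, params = build_user_query("alice")
    assert "%s" in sql and params == ("alice",)

if __name__ == "__main__":
    test_rejects_injection_shaped_input()
    test_query_is_parameterized()
    print("security regression checks passed")
```

If a check like this breaks the nightly build, it gets fixed for the same reason any other broken test does: everyone on the team will hear about it.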


Friday Summary: February 4, 2011

My wife says to me, "I seem to be getting your junk mail. Somebody just sent me Data Security Quiz results." I had no idea what she meant, so she forwarded me the email from the National Information Security Association (NISA). I confess that I had never heard of this organization before, and I really don't know what they do. Apparently they quizzed a number of real estate agents and brokers around the country to find out how much they knew about data security. The results were emailed out as a way of educating real estate professionals at large. Color me shocked. Actually, I thought the questions were pretty good ones to be asking sales people. The Q&A was as follows:

  • According to industry standard practices, when is it safe to leave sensitive client information in your car (either in electronic form, such as a laptop, or in paper form)? Answer: d) Never.
  • Which tool is most important once a network breach has been discovered? Answer: c) Access Log.
  • For most workplace computers, when is it possible to be infected with malicious software? Answer: a) Anytime the computer is on.
  • If I only collect client data for a short sale processing company, I am not responsible for data leaks. Answer: False.
  • What are the only actions that can guaranty the security of client data? Answer: c) There is no way to guaranty data security.
  • What is the one sure method to determine if your computer contains malicious software? Answer: b) There is no way to be 100 percent sure.

Question three actually cracked me up because it is so true! I think there is a little bit of FUD going on here to get people to attend a seminar, because the email talks about blended threats and Stuxnet. I know real estate agents are pretty pissed about the state of the economy, but I am pretty sure uranium enrichment is not a general concern. Regardless, it is very interesting to see how much security awareness training and how many security bulletins are being distributed to real estate professionals. Like Rich's mention a few weeks ago that the owner of the local coffee shop was aware of PCI-DSS. The times they are a-changin'.

One final note: It appears we have SOLD OUT the Cloud Security Training course we are offering February 13th. If you are still interested, let us know and we will see if we can find a bigger room. Probably not, but we will see what we can do. Given the interest in the material, we are looking at providing more classes in the coming months, so it helps us if you let us know whether you are interested in cloud security certification. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian's Dark Reading article on Database Security in the Cloud. First in a series.
  • Securosis mentioned in PF newsletter.
  • Adrian's podcast on Agile Development, Security Fail for RSA.
  • More NRF quotes from Adrian on security in the retail vertical.
  • Rich quoted on 10 Risks in Public Cloud Computing.

Favorite Securosis Posts
  • Mike Rothman: Good Programming Practices vs. Rugged Development. We can always learn from similar initiatives and try not to make the same mistakes. Interesting post here from Adrian comparing Rugged to Agile.
  • Adrian Lane: You Made Your Bed, Now Sleep in It. Ego and bravado have a funny way of coming back to crush security pros. Have I mentioned I suck lately?

Other Securosis Posts
  • Incite 2/2/2011: The End of Anonymity.

Favorite Outside Posts
  • Mike Rothman: Everyone has a plan until they get hit. I'm glad Gunnar is on our team. Great quote from Tyson. Great post.
  • Adrian Lane: Why terror alert codes never made sense. Actually, every airport I have been to has been 'Orange' for three years. Too bad there are no free market forces to punish this type of stupidity.

Project Quant Posts
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.

Research Reports and Presentations
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.

Top News and Posts
  • Microsoft accuses Google of Clickjacking.
  • Abusing HTTP Status Codes to Expose Private Information.
  • Plentyofhack, er, Plentyoffish Hack.
  • Skimmers That Never Touch the ATM.
  • Mark Anderson says China's IP Theft Unprecedented.
  • Egypt shuts down their Internet.
  • NRO to announce IPv6. The NRO might want to have someone pen test their site, as I am getting error codes straight from the database, but that's a different subject.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Joshua Corman, in response to Good Programming Practices vs. Rugged Development.

@securityninja Rugged is a Value. A characteristic. An Attribute. A Quality. A State. Rugged in its simplest sense is an affirmative, non-security-executive desirable. Security is a negative – a Cost/Tax and usually an inhibitor to what a CIO wants. Rugged encapsulates things like: Availability, Survivability, Supportability, Longevity, Security*, Scalability… that the CIO already wants. For your eCommerce, do you want a flimsy Hosting Site? or a Rugged Hosting site? Communities like OWASP can help developers to affect more Rugged outcomes. Jeff is involved in Rugged. Rugged is on the overlooked People level more heavily than on the process and tech level. We have a lot of great tools and technology and frameworks (sure we could use more and better ones). What's most been lacking is Mainstream awareness and demand for the value of Rugged. In my 11/12 months, I've seen the most traction for Rugged on those buying software. on Demand. If we can drive sufficient Demand, Supply will often follow. I'm still looking to connect with you 1 on 1. For


You Made Your Bed, Now Sleep in It

Twitter exploded last night with news that the self-proclaimed world's #1 hacker's email and Twitter accounts were compromised. Personally, the amount of time good people spend feeding that troll annoys me, which is why I'm not mentioning his name. Why give him any more SEO points for acting poorly? Since the beginning of time there have been charlatans, shysters, and frauds; this guy is no different. Major media outlets are too dumb and lazy to do the work required to vet their experts, so they respond to his consistent PR efforts. Whatever. But let's deal with the situation at hand, because it's important.

First off, if you bait a lion, you shouldn't be surprised when you get eaten. Tell me you were surprised when Roy got mauled by his white tiger. I was more surprised it took that long. In other words, live by the sword, die by the sword. And clearly that is the case here. Now there are 4GB of email and other sensitive files in the wild, and this guy's closet will be opened for all to see. And there are skeletons in there. To be clear, this is wrong. The attackers are breaking the law, but it's hard to feel bad for the victim. His sophomoric threats, frivolous lawsuits, and intimidation games probably worked OK in the schoolyard, but in the real world – not so much. It's your bed; now you get to sleep in it.

Second, if you know you are a target, why would you leave a huge pile of sensitive documents in an email store on a publicly accessible server? I read a tweet that said his email was at GoDaddy. Really? And isn't the first rule of email that it's not a file store? I know we all probably violate that dictum from time to time, but to keep financial records, account numbers, and legal filings in your email box? Come on, now! Basically, I suspect there is stuff there that could put our victim in the big house for a long time. Again, you made the bed, now sleep in it.

We take ridiculous security precautions for a 3-person company. It's actually a huge pain in the ass. And we are fully cognizant that at some point we will likely be breached. Crap, if it can happen to Kaminsky it can happen to us. So we don't do stupid things. Too often. And that really is the lesson here. Everyone can be breached, even the world's #1 hacker.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.