Objectivity Matters

I owe a tremendous amount to social media. I wasn’t early to either blogging or Twitter (as my friends remind me), but once I got there a whole new world of opportunities opened. I created a boutique business (Security Incite) on the back of a blog and email newsletter. I met so many great people – many of whom became close friends – and even found a business partner or two. But the edge of social media cuts both ways. ‘News’ organizations have emerged with, uh, distinctly unjournalistic methods of handling conflicts of interest.

You need to read Hit men, click whores, and paid apologists: Welcome to the Silicon Cesspool by Dan Lyons, about the unholy alliance between some very high-profile tech bloggers and what they publish about companies they invest in. You sort of knew that stuff was going on, but to see it laid bare was eye-opening. To be fair, none of these guys hide their investments in the companies they write about. Or that they leverage their audience to build brand and buzz for the chosen few who take their investment. Or that they strong-arm those who won’t or don’t. If you look hard enough you can find the truth, but they certainly don’t publicize it.

I don’t know. Maybe it’s me. Maybe I’m idealistic. Maybe I don’t understand how the world works. But that just seems wrong on so many levels. I guess I’m one of those guys who believes objectivity matters.

Listen – we all have biases. I’m no Pollyanna, thinking anyone can truly be unbiased. But we at Securosis are pretty up front about our biases. And none of those biases are economic in nature. None. One of the things that really attracted me to the business model Rich built was the Totally Transparent Research method. We do the work. We write what needs to be written. When we are done, and only then, do we license content for sponsorship. We do line up sponsors ahead of time, but we only offer a right of first refusal, and either party can walk away at any time. We have. And sponsors have. We cannot afford to be beholden to someone, to write what they want, because we already took a down payment on our integrity.

By the way, this model sucks for cash flow. We do all the work. We take all the risk. Then we hope the sponsors still have the budget and inclination to license the content. I can’t pay my mortgage with a right of first refusal. But objectivity matters to us, and we don’t see any other way to write credible research.

Many folks who blog and tweet a lot about security will be out at the RSA Conference this week. You’ll likely be hearing about all sorts of shiny new objects, each one shinier than the next. But take every blog post and tweet with a grain of salt – even ours! The Internet can provide a wealth of information to help organizations make critical decisions, but it also contains a tremendous amount of disinformation. Buyer beware – always. Understand who is writing what. Understand their biases and keep their point of view in mind. Most important: use all this information to get smarter and to zero in on the right questions to ask the right people. If you make buying decisions based on a blog post or a magic chart or anything other than your own research, then you (with all due respect) are an idiot.


RSA Conference 2012 Guide: Cloud Security

We’ve renamed this section from “Virtualization and Cloud Security” to simply “Cloud Security” since, if you listen to any of the marketing messages, you can’t tell the difference – even though it’s a big one. And virtualization is a hassle to type, so buh bye! Overall, as we mentioned in the key themes post, cloud security will be one of the biggest trends to watch during the conference, and it also happens to be one area where you should focus, since there is some real innovation and you probably have real problems that need some help.

New Kids on the Cloud Security Block (NKOTCSB)

Hiding in the corners will be some smaller vendors you need to pay attention to. Instead of building off existing security tools designed for traditional infrastructure (we’re looking at you, Big Security), they’ve created new products built from the ground up specifically for the cloud. Each of them focuses on a different cloud computing problem that’s hard to manage using existing tools – identity management (federated identity gateways), instance security, encryption, and administrative access. Many of these have a SaaS component, but if you corner them in a back room and have enough cash they’ll usually sell you a stand-alone server you can manage yourself. NKOTCSB FTW.

Cloudwashing vs. the Extreme Cloud Makeover

If you haven’t heard the term before, “cloudwashing” refers to making a virtual appliance version of a product ready to run on Amazon Web Services, VMWare, or some other cloud platform without really changing much in the product. This is especially amusing when it comes from vendors who spent years touting the special hardware secret sauce in their physical appliance. Consider these transitional products, typically better suited for private cloud IaaS. They might help, but in the long run you really need to focus on cloud-specific security controls. Some vendors, though, are pushing deeper and truly adapting for cloud computing. It might be better use of cloud APIs, redesigning software to use a cloud architectural model, or extending an existing product to address a cloud-specific security issue that’s otherwise not covered. The best way to sniff the cloudwashing shampoo is to see whether there are any differences between the traditional product and the virtual appliance version. Then ask, “do you use the cloud platform’s APIs or offer any new APIs in the product?” and see if their faces melt.

Virtual Private Data

We also cover this one in the data security post, so we won’t go into much more detail here, but suffice it to say data security is pretty high on the list of things people moving to the cloud need to look at. Most encryption vendors are starting to support cloud computing with agents that run on cloud platforms as an extension of their existing management systems (thus requiring a hybrid model), but a couple are more cloud-specific and can deploy stand-alone in a public cloud.

CloudOps

Most of the practical cloud-specific security, especially for Infrastructure as a Service, comes from the (relatively) new group of cloud management vendors. Some might be at RSA, but not all of them, since they sell to data center operations teams, not CISOs. Why? Well, it just might be the big wads of cash that Ops teams have in comparison. Keep an eye on these folks because, aside from helping with configuration management automation, some are adding additional features like CloudAudit support, data protection/encryption, and network security (implemented on a virtualized host). While the NKOTCSB are totally focused on security innovation, the management and operations platforms concentrate on cloud operational innovation, which obviously has a big security component. We’ll be posting the assembled guide within the next day or so, so you’ll have it in plenty of time for your pilgrimage to San Francisco.


RSA Conference 2012 Guide: Data Security

In the last twelve months we’ve witnessed the highest rates of data theft disclosures since the record-setting year of 2008 (including, for the first time in public, Rich’s credit card). So predictably there will be plenty of FUD balloons flying at this year’s conference. From Anonymous to the never-ending Wikileaks fallout and cloud fears, there is no shortage of chatter about data security (or “data governance” for people who prefer to write about protecting stuff instead of actually protecting it). Guess Mr. Market is deciding what’s really important, and it usually aligns with the headlines of the week. But you know us – we still think data security is pretty critical, and all this attention is actually starting to drive things in a positive direction, as opposed to the days of thinking data security meant SSL + email filtering. Here are five areas of interest at the show for data security:

Da Cloud and Virtual Private Storage

The top two issues we hear most organizations cite when they are concerned about moving to cloud computing, especially public cloud, are data security and compliance. While we aren’t lawyers or auditors, we have a good idea how data security is playing out. The question shouldn’t be whether to move, but how to adopt cloud computing securely. The good news is you can often use your existing encryption and key management infrastructure to encrypt data and then store it in a public cloud. Novel, eh? We call it Virtual Private Storage, just as VPNs use encryption to protect communications over a public resource. Many enterprises want to take advantage of cheap (maybe) public cloud computing resources, but compliance and security fears still hold them back. Some firms choose instead to build a private cloud using their own gear or request a private cloud from a public cloud provider (even Amazon will sell you dedicated racks… for a price). But the Virtual Private Storage movement seems to be a hit with early adopters, with companies able to enjoy elastic cloud storage goodness, leveraging cloud storage cost economies instead of growing (and throwing money into) their SAN/NAS investment, and avoiding many of the security concerns inherent to multi-tenant environments. Amazon AWS quietly productized a solution for this a few months back, making it even easier to get your data into their cloud, securely. Plus most encryption and key management vendors have basic IaaS support in current products for private and hybrid clouds, with some better public cloud coverage on the way.
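To make the Virtual Private Storage idea a little more concrete, here is a minimal sketch of the pattern: encrypt on your side of the trust boundary, with keys that stay in your own key management infrastructure, so the provider only ever stores ciphertext. The libraries (cryptography, boto3) and the bucket and object names are illustrative assumptions, not a description of how any particular vendor or AWS feature does it.

```python
# Minimal sketch of Virtual Private Storage: encrypt locally with a key you
# control, and store only ciphertext in the provider's object store.
import boto3
from cryptography.fernet import Fernet

def store_encrypted(local_path, bucket, object_key, data_key):
    """Encrypt a file with a locally managed key, then upload only the ciphertext."""
    f = Fernet(data_key)                       # symmetric key issued by *your* key manager
    with open(local_path, "rb") as fh:
        ciphertext = f.encrypt(fh.read())      # provider never sees plaintext or the key
    boto3.client("s3").put_object(Bucket=bucket, Key=object_key, Body=ciphertext)

def fetch_decrypted(bucket, object_key, data_key):
    """Pull the ciphertext back and decrypt it on your side of the trust boundary."""
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=object_key)
    return Fernet(data_key).decrypt(obj["Body"].read())

# Hypothetical usage:
# data_key = Fernet.generate_key()   # in practice, generated and escrowed by your key manager
# store_encrypted("report.pdf", "my-bucket", "reports/report.pdf", data_key)
```

The specific library doesn’t matter – the point is that key management stays in-house, which is what makes otherwise public storage ‘virtually private’.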
Big is the New Big

The machine is hungry – must feed the machine! Smartphones sending app and geolocation data, discreet marketing spyware, and web site tracking tools are generating a mass of consumer data that is increasingly stored in big data and NoSQL databases for analysis – never mind all the enterprises linking together previously disparate data for analysis. There will be lots of noise about Big Data and security at RSAC, but most of it is hype. Many security vendors don’t even realize Big Data refers to a specific set of technologies, not just any large storage repository. Plus, a lot of the people collecting and using Big Data have no real interest in securing that data; they only want to get more data and pump it into more sophisticated analysis models. And most of the off-the-shelf security technologies won’t work in a Big Data environment or on the endpoints where the data is collected. And let’s not confuse Big Data from the user standpoint – which, as described above, is massive analysis of sensitive business information – with Big Security Data. You’ll also hear a lot about more effectively analyzing the scads of security data collected, but that’s different. We discussed that a bit in our Key Themes section.

Masking

It’s a simple technology that scrambles data. It’s been around for many years and has been used widely to create safe test data from production databases. But the growth in this market over the last two years leads us to believe that masking vendors will have a bigger presence at the RSA show. No, not as big as firewalls, but these are definitely folks you should be looking at. Fueling the growth is the ability to effectively protect large, complex data sets in a way that encryption and other technologies have not. For example, encrypting a Hadoop cluster is usually neither feasible nor desirable. Second, dynamic masking and ‘in place’ masking variants are easier to use than many ETL solutions. Expect to hear about masking from both big and small vendors during the show. We touched on this in the Compliance section as well.

Big Brother and iOS

Data Loss Prevention will still have a big presence this year, both in terms of the dedicated tools and the DLP-Lite features being added to everything from your firewall to the Moscone beverage stations. But there are also new technologies keeping an eye on how users work with data – from Database Activity Monitoring (which we now call Database Security Platforms, and Gartner calls Database Audit and Protection), to File Activity Monitoring, to new endpoint and cloud-oriented tools. Also expect a lot of talk about protecting data from those evil iPhones and iPads. Breaking down the trend, what we will see is more tools offering more monitoring in more places. Some of these will be content aware, while others will merely watch access patterns and activities. A key differentiator will be how well their analytics work, and how well they tie to directory servers to identify the real users behind what’s going on. This is more evolution than revolution, so be cautious with products that claim new data protection features but really haven’t added content analysis or other information-centric technology. As for iOS, Apple’s App Store restrictions are forcing the vendors to get creative. You’ll see a mix of folks doing little more than mobile device management, while others are focusing on really supporting mobility with well-designed portals and sandboxes that still allow the users to work on their devices. To


Incite 2/22/2012: Poop Flingers

It’s a presidential election year here in the US, and that means the master spin meisters, manipulators, and liars – politicians – are out in full force. Normally I just tune out, wait for the primary season to end, and then figure out who I want to vote for. But I know better than to discuss either religion or politics with people I like. And that means you. So I’m not going to go there. But this election cycle is different for me, and it will be strange. I suspect I won’t be able to stay blissfully unaware until late summer because XX1 is old enough to understand what is going on. She watches some TV and will inevitably be exposed to political attack ads. It’s already happened. She’s very inquisitive, so I was a bit surprised when she asked if the President is a bad man. I made the connection right away and had to have a discussion about negative political ads, spin, and trying to find the truth somewhere in the middle. Your truth may be different than my truth. Fundamentally, totally different. But suffice it to say the venom that will be polluting our airwaves over the next 6 months is not close to anyone’s truth. It’s overt negativity (thanks, Karl Rove), and I have no doubt that once the Republican candidate is identified, the Democratic hounds will be unleashed against him. Notice I was male gender-specific, but that’s another story for another day.

I guess it must be idealistic Tuesday. Can’t the candidates have an honest, fact-based dialog about the issues? And let citizens make informed decisions, instead of manipulating them with fear, uncertainty, and doubt? Funded by billionaires looking to make their next billions. Yeah, no shot of that. You see, I’m no Pollyanna. I know that anyone actually trying to undertake a civil discourse would get crushed by the 24/7 media cycle and privately funded attack ads which twist their words anyway. We elect the most effective poop flinger here in the US, and it’s pretty sad. Lord knows, once they get elected they face 4 or 8 years of gridlock and then a lifetime of Secret Service protection. It’s one of those be-careful-what-you-wish-for situations. But hey, everyone wants to be the most powerful person in the world for a while, right?

Again, normally I ignore this stuff and stay focused on the only thing I can really control: my work ethic. But with impressionable young kids in the house we will need to discuss a lot of this crap, debunk obvious falsehoods, and try to educate the kids on the issues. Which isn’t necessarily a bad thing, but it’s not easy either. Or I could enforce a media blackout until November 7. Now, that’s the ticket.

-Mike

Note: Next week is the RSA Conference, and that doesn’t leave a lot of time to do much Inciting. So we’ll skip the Incite next week and perhaps provide a jumbo edition on March 7. Or maybe not…

Photo credits: “Poop Here” originally uploaded by kraskland

Heavy Research

No holiday for us. We hammered you on the blog Monday, which many of you may have ignored. So here’s a list of the things we’ve posted to the Heavy Feed over the past week:

Malware Analysis Quant
  • Metrics – Define Rules and Search Queries
  • Metrics – The Malware Profile
  • Metrics – Dynamic Analysis
  • Metrics – Static Analysis
  • Metrics – Build Testbed
  • Metrics – Confirm Infection
  • Malware Analysis Quant: Take the Survey (and win fancy prizes!) We need your help to understand what you do (and what you don’t) in terms of malware analysis. And you can win some nice gift cards from Amazon for your trouble.
RSA Conference 2012 Guide
  • Security Management and Compliance
  • Email & Web Security
  • Endpoint Security
  • Application Security

Here’s the other stuff we’ve been up to:
  • Understanding and Selecting DSP: Core Components. Featuring the Jack and the DSPeanstalk image. Check it out.
  • Implementing DLP: Deploying Storage and Endpoint

Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory. So check them out and (as always) please let us know what you think via comments.

Incite 4 U

  • It’s not about patching, it’s about web-scale architecture: It seems Rafal Los got his panties in a bunch when Mort threw out a thought balloon about shortening patch windows with smaller and more frequent patching. Though I think the term ‘patch’ here is what’s muddying the issue. Everyone realizes that most SaaS apps ‘patch’ whenever they need to, with little downtime. At least if they are architected correctly. And that’s the point – I read Mort as saying we need to really rethink application and deployment architectures to be more resilient and less dependent on huge patches/upgrades that can cause more problems than they fix. As LonerVamp points out, downtime is a hassle and more frequent patches are a pain in the backside. And for the way we currently do things, he’s right. But if we rethink architecture (which does take years), why wouldn’t we choose to fix things when they break, instead of when there are a bunch of other things to fix? – MR
  • Political Deniability: I learned long ago to ignore all the cyberchatter coming out of Congress until they actually pass a bill and fund an enforcement body, and someone gets nailed with fines or jail time. How long have we been hearing about that national breach disclosure law that every vendor puts in their PowerPoint decks, despite, you know, not actually being a law? So we can’t put too much stock in the latest National Cybersecurity Bill, but this one seems to have a chance, if the distinguished senior senator from my home state of Arizona doesn’t screw it up because he wasn’t consulted enough. Come on, man, grow up! The key element of this bill that I think could make a difference is that it’s the first attempt I’m aware of to waive liability for organizations so they can share cybersecurity information (breach data). That’s a common reason


RSA Conference 2012 Guide: Security Management and Compliance

As we continue with our tour through the RSA Conference, we’re in the home stretch. Today we’ll hit both security management and compliance, since the two are intrinsically linked.

Security Management

Security management has been a dynamic and quickly evolving space that received a lot of attention at conferences like RSA. Yet we will probably see a bit less visibility this year for what we typically call security management (basically SIEM/Log Management), because there will be fewer folks beating the drum for this technology. Why? That brings us to our first observation…

I can haz your start-up

Amazingly enough, the two highest profile SIEM/Log Management vendors were acquired on the same day last October: Q1Labs by IBM and Nitro Security by McAfee, which we wrote about in this post. This followed Big IT investing in the space over the previous few years (HP bought ArcSight in 2010, and RSA bought Network Intelligence in 2006 and Netwitness earlier in 2011). So at the RSA show you’ll see these security management platforms positioned clearly as the centerpiece of the security strategies of the big security vendors. Cool, huh? The technology has moved from being an engine to generate compliance reports to a strategic part of the big security stack. What will you see from these big vendors? Mostly a vision of how, by buying into their big security stacks, you’ll be able to enforce a single policy across all of your security domains and gain tremendous operational leverage. I say vision because the reality is these deals have all closed within the last two years, and true integration remains way down the line. So make sure to poke hard on the plans for true integration, as opposed to what the booth graphics say. And then add a year or two to their estimates. But there is one area where you can get immediate value – integration on the purchase order – and we don’t want to minimize that. Being able to dramatically expand a security management implementation with money already committed to a 7- or 8-figure enterprise purchase agreement is a good thing.

What about the Independents?

You know, the handful that remain. These folks have no choice but to focus on the fact they aren’t a big company, but as we mentioned in the IBM/Q1 and MFE/Nitro deal analysis post, security management is a big company game now. But do check out these vendors to see them thinking somewhat out of the box about what’s next. Clearly you aren’t going to see a lot of forward-thinking innovation out of the big vendors, as they need to focus more on integration. But the smaller vendors should be able to push the ball forward, and then see their innovations co-opted by the big guys. Yup, it’s a brutal world out there, but that’s how things work.

Don’t forget about those pesky logs

As mentioned, a lot of focus will be on how SIEM becomes the centerpiece of the big IT companies’ security stacks. But let’s make the point that Log Management isn’t dead. You’ll see some companies looking to replicate the success of Splunk by focusing not only on security-oriented use cases for log data. That means things like the use cases discussed in our Monitoring Up the Stack research, as well as click stream analysis, transaction fraud detection, and pinpointing IT operations issues. Also expect to hear a bunch about log management in the cloud. For smaller organizations, this kind of deployment model can make a lot of sense.
But there are some multi-tenancy complications to storing your logs in someone else’s cloud. So be sure to ask very detailed and granular questions about how they segment and protect the log data you send to them.

Platform hyperbole

Finally, let’s point out the place where you’ll need to cut through the vendor boasts and hyperbole with a machete: the so-called platforms described above. We’ve been talking for a long time about the need to go beyond logs for a more functional security management capability, and you’ll hear that at the show as well. But the question will remain: where does the platform begin? And where does it end? There is no clear answer. But let’s be very clear – we believe the security management platform of the future will be able to digest and analyze full network packet capture traffic. As we discussed in our Advanced Network Security Analysis research, truly confirming a breach and understanding the attacks used against you requires more granular information than exists in the logs. The question is to what degree the security management vendors acknowledge that. The vendors that have it, either via acquisition (RSA) or partnership (everyone else), won’t shy away from this realization. The real question gets back to you. To what degree can your existing personnel and processes make effective use of packet capture data? If you don’t have the sophistication to do malware analysis or a detailed forensic investigation in house, then logs are good for the time being. But if you are interested in full packet capture, really hit the vendors on integration with their existing SIEM platform. Firing alerts in two separate consoles doesn’t help you do things faster, nor is clicking on a log record to isolate the packet capture data in another system going to be a long-term solution. You’ll also still hear a bit about GRC, but the wind is out of those sails, and justifiably so. Not that IT-GRC platforms can’t add value, but most companies have a hard enough time getting their SIEM to correlate anything, so the idea of a big stack IT-GRC and the associated integration is challenging.

Compliance

We get the sense that most of the vendors are tired of talking about compliance, as they have switched their focus to APT and ‘The Insider Threat’. You know, that sexy security stuff – while compliance continues to be the biggest driver of security spend. Though you know trade shows, the


Malware Analysis Quant: Documenting Metrics (and survey is still going)

Just a little President’s Day update on the Malware Analysis Quant project. At the end of last month we packaged up all the process descriptions into a spiffy paper, which you can download and check out. We have been cranking away at the second phase of the research, and the first step of that is the survey. Here is a direct survey link, and we would love your input. Even if you don’t do in-depth malware analysis every day, that’s instructive, as we try to figure out how many folks actually do this work, and how many rely on their vendors to take care of it.

Finally, we have also started to document the metrics that will comprise the cost model, which is the heart of every Quant project. Here are links to the metrics posts, which we include both in the Heavy feed and on the Project Quant blog:

  • Metrics – Confirm Infection
  • Metrics – Build Testbed
  • Metrics – Static Analysis
  • Metrics – Dynamic Analysis
  • Metrics – The Malware Profile

One last note: as with all of our projects, our research methodology is dynamic. That means posting something on our blog is just the beginning. So if you read something you don’t agree with, let us know, and work with us to refine the research. Leave a comment on the blog, or if for some reason you can’t do that, drop us an email.


RSA Conference 2012 Guide: Email & Web Security

For a little bonus on a Sunday afternoon, let’s dig into the next section of the RSA Guide: Email and Web Security, which remains a pretty hot area. This shouldn’t be surprising, since these devices tend to be one of the only defenses against typical attacks like phishing and drive-by downloads. We’ve decided to no longer call this market ‘content security’; that was a terrible name. Email and Web Security speaks to both the threat models and the deployment architectures of what started as the ‘email security gateway’ market. These devices screen email and web traffic moving in and out of your company at the application layer. The goal is to prevent unwanted garbage like malware from coming into your network, as well as to detect unwanted activity like employees clogging up the network with HiDef downloads of ‘Game of Thrones’. These gateways have evolved to include all sorts of network and content analysis tools for a variety of traffic types (not just restricted to web traffic). Some of the vendors are starting to resemble UTM gateways, placing 50 features on the same box and letting the user decide what they want from the security feature buffet. Most vendors offer a hybrid model of SaaS and in-house appliances for flexible deployments while keeping costs down. This is a fully mature and saturated market, with the leading vendors on a very even footing. There are several quality products out there, each with a specific strength in its technology, deployment, or pricing model. There are quite a few areas of interest at the show for web gateway security:

VPN Security and the Cloud

Remember how VPN support was a major requirement for every email security appliance? Yeah, well, it’s back. And it’s new and cloudified! Most companies provide their workforce with secure VPN connections to work from home or on the road. And most companies find themselves supporting more remote users more often than ever, which we touched on in the Endpoint Security section. As demand grows, so too does the need for better, faster VPN services. Leveraging cloud services, these gateways route users through a cloud portal, where user identification and content screening occur, and then pass user requests into your network. The advantages: you get scalable cloud bandwidth, better connectivity, and security screening before stuff hits your network.

More (poor man’s) DLP

Yes, these secure web offerings provide Data Loss Prevention ‘lite’. In most cases, it’s just the subset of DLP needed to detect data exfiltration. And regular expression checking for outbound documents and web requests is good enough to address the majority of content leakage problems, so this works well enough for most customers, which makes it one of the core features every vendor must have. It’s difficult for any one vendor to differentiate their offering with DLP-lite, but they’ll have trouble competing in the marketplace without it. It’s an effective tool for select data security problems.
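To show the flavor of regular-expression checking we mean here, a rough sketch follows. The patterns and the Luhn check are simplified assumptions; a real gateway uses far more tuned detection, handles many file formats, and ties hits back to policy.

```python
# Rough sketch of DLP-lite: regex checks on outbound content for obvious
# leakage such as card numbers and US SSNs. Patterns are illustrative only.
import re

CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # loose credit-card-like pattern
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")     # US SSN format

def luhn_ok(candidate):
    """Luhn checksum to weed out random digit strings that merely look like cards."""
    digits = [int(c) for c in candidate if c.isdigit()][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return len(digits) >= 13 and total % 10 == 0

def flag_outbound(text):
    """Return a list of hit types found in an outbound document or web request."""
    hits = ["ssn"] * len(SSN.findall(text))
    hits += ["pan" for match in CARD.findall(text) if luhn_ok(match)]
    return hits

# flag_outbound("ship card 4111 1111 1111 1111 to ...") -> ['pan']
```

It isn’t sophisticated, but as noted above, this style of checking catches a surprising share of everyday content leakage.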
Global Threat Intelligence

Global threat intelligence involves a security vendor collecting attack data from all their customers, isolating new attacks that impact a handful, and automatically applying security responses to their other client installations. When implemented correctly, it’s effective at slowing down the propagation of threats across many sites. The idea has been around for a couple of years, originating in the anti-spam business, but has begun to show genuine value for some firewall, web content, and DAST (dynamic application security testing) products. Alas, like many features, some implementations are little more than marketing ‘check the box’ functionality, while others actually collect data from all their clients and promptly distribute anonymized intelligence back to the rest of their customers to ensure they don’t get hammered. It’s difficult to discern one from the other, so you’ll need to dig into the product capabilities. Though it should be fun on the show floor to force an SE or other sales hack to try to explain exactly how the intelligence network works.

Anti-malware

Malware is the new ‘bad actor’. It’s the 2012 version of the Trojan Horse; something of a catch-all for viruses, botnets, targeted phishing attacks, keystroke loggers, and marketing spyware. It infects servers and endpoints by any and all avenues available. And just as the term malware covers a lot of different threats, vendor solutions are equally vague. Do they detect botnet command and control, do they provide your firewall with updated ‘global intelligence’, or do they detect phishing email? Whatever the term really means, you’re going to hear a lot about anti-malware and why you must stop it. Though we do see innovation in network-based malware detection, which we covered in the Network Security section.

New Anti-Spam. Same as the old Anti-Spam

We thought we were long past the anti-spam discussion – isn’t that problem solved already? Apparently not. Spam still exists, that’s for sure, but any given vendor’s effectiveness varies from 98% to 99.9% in any given week. Just ask them. Being firm believers in Mr. Market, clearly there is enough of an opportunity to displace incumbents, as we’ve seen a couple of new vendors emerge to provide new solutions, and established vendors blend their detection techniques to improve effectiveness. There is a lot of money spent specifically on spam protection, and it’s a visceral issue that remains high profile when it breaks, so it’s easy to get budget for. Couple that with some public breaches from targeted phishing attacks or malware infections through email (see above), and anti-spam takes on a new focus. Again. We don’t think this is going to alter anyone’s buying decisions, but we wanted to make sure you knew what the fuss was about, so you aren’t surprised when you feel like you’ve stepped into RSA 2005 and see folks spouting about new anti-spam solutions.


RSA Conference Guide 2012: Endpoint Security

Ah, the endpoint. Do you remember the good old days when endpoint devices were laptops? That made things pretty simple, but alas, times have changed and the endpoint devices you are tasked to protect have changed as well. That means it’s not just PC-type devices you have to worry about – it’s all varieties of smartphones and, in some industries, other devices including point-of-sale terminals, kiosks, control systems, etc. Basically anything with an operating system can be hacked, so you need to worry about it. Good times.

BYOD everywhere

You’ll hear a lot about “consumerization” at RSAC 2012. Most of the vendors will focus on smartphones, as they are the clear and present danger. These devices aren’t going away, so everybody will be talking about mobile device management. But as in other early markets, there is plenty of talk but little reality to back it up. You should use the venue to figure out what you really need to worry about, and for this technology that’s really the deployment model. It comes down to a few questions:

  • Can you use the enterprise console from your smartphone vendor? Amazingly enough, the smartphone vendors have decent controls to manage their devices. And if you live in a homogeneous world this is a logical choice. But if you live in a heterogeneous world (or can’t kill all those BlackBerries in one fell swoop), a vendor console won’t cut it.
  • Does your IT management vendor have an offering? Some of the big stack IT security/management folks have figured out that MDM is kind of important, so they offer solutions that plug into the stuff you already use. Then you can tackle the best-of-breed vs. big stack discussion, but this is increasingly a reasonable alternative.
  • What about those other tools? If you struck out with the first two questions, you should look at one of the start-up vendors who make their trade on heterogeneous environments. But don’t just look for MDM – focus on what else those folks are working on. Maybe it’s better malware checking. Perhaps it’s integration with network controls (to restrict devices to certain network segments). If you find a standalone product, it is likely to be acquired during your depreciation cycle, so be sure there is enough added value to warrant the tool standing alone for a while.

Another topic to grill vendors on is how they work with the “walled garden” of iOS (Apple mobile devices). Vendors have limited access into iOS, so look for innovation above and beyond what you can get with Apple’s console. Finally, check out our research on Bridging the Mobile Security Gap (Staring Down Network Anarchy, The Need for Context, and Operational Consistency), as that research deals with many of these consumerization and BYOD issues, especially around integrating with the network.

The Biggest AV Loser

Last year’s annual drops of the latest and greatest in endpoint protection suites were all about sucking less. And taking up less real estate and compute power on the endpoint devices. Given the compliance regimes many of you live under, getting rid of endpoint protection isn’t an option, so less suckage means less heartburn for you. At least you can look at the bright side, right? In terms of technology evolution, there won’t be much spoken about at the RSA Conference. You’ll see vendors still worshipping the Cloud Messiah, as they try to leverage their libraries of a billion AV signatures in the cloud.
That isn’t very interesting, but check into how they leverage file ‘reputation’ to track which files look like malware, and your options to block them (a toy sketch of the idea appears at the end of this post). The AV vendors actually have been hard at work bolstering this file analysis capability, so have them run you through their cloud architectures to learn more. It’s still early in terms of effectiveness, but the technology is promising. You will also see adjunct endpoint malware detection technologies positioned to address the shortcomings of current endpoint protection. You know, basically everything. The technology (such as Sourcefire’s FireAMP) is positioned much like the cloud file analysis technology discussed above, so the big vendors will say they do this too, but be wary of them selling futures. There are differences, though – particularly in terms of tracking proliferation and getting better visibility into what the malware is doing. You can learn a lot more about this malware analysis process by checking out our Quant research, which goes into gory detail and provides some context for how the tools fit into the process.
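To give a feel for the file ‘reputation’ idea mentioned above, here is a toy sketch: hash files that show up and look the hashes up against a verdict source. The verdict table is a stand-in assumption for a vendor’s cloud reputation service, and the paths are hypothetical.

```python
# Toy sketch of file reputation: hash a file, look the hash up, and treat
# anything unknown as a candidate for deeper (sandbox) analysis.
import hashlib
from pathlib import Path

# Stand-in for a vendor's cloud reputation service; keyed by SHA-256.
REPUTATION = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "known-good",  # empty file
}

def sha256_of(path):
    """Stream the file through SHA-256 so large files don't blow up memory."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verdict_for(path):
    """Return the cached verdict, or 'unknown' so the file can be sent for analysis."""
    return REPUTATION.get(sha256_of(path), "unknown")

# Hypothetical usage:
# for p in Path("/srv/quarantine").iterdir():
#     print(p, verdict_for(p))
```

The interesting part of the real products isn’t the hashing – it’s the size and freshness of the reputation data behind it, which is why the cloud architecture discussion is worth having with the vendor.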


RSA Conference Guide 2012: Application Security

Building security in? Bolting it on? If you develop in-house applications, it’s likely both. Application security will be a key theme of the show. But the preponderance of application security tools will block, scan, mask, shield, ‘reperimeterize’, reconfigure, or reset connections from the outside. Bolt-on is the dominant application security model for the foreseeable future. The good news is that you may not be the one managing it, as there is a whole bunch of new cloud security services and technologies available. Security as a service, anyone? Here’s what we expect to see at this year’s RSA Conference.

SECaaS

Security as a Service, or ‘SECaaS’, is basically using ‘the cloud’ to deliver security services. No, it’s not a new concept, but it’s a new label to capture the new variations on this theme. What’s new is that some of the new services are not just SaaS, but delivered for PaaS or IaaS protection as well. And the technologies have progressed well beyond anti-spam and web-site scanning. During the show you will see a lot of ‘cloudwashing’ – where the vendor replaces ‘network’ with ‘cloud’ in their marketing collateral, and suddenly they are a cloud provider – which makes it tough to know who’s legit. Fortunately you will also see several vendors who genuinely redesigned products to be delivered as a service from the cloud and/or into cloud environments. Offerings like web application firewalls available from IaaS vendors, code scanning in the cloud, DNS redirectors for web app request and content scanning, and threat intelligence-based signature generation, just to name a few. The new cloud service models offer greater simplicity as well as cost reduction, so we are betting these new services will be popular with customers. They’ll certainly be a hit on the show floor.

Securing Applications at Scale

Large enterprises and governments trying to secure thousands of off-the-shelf and homegrown applications live with this problem every day. Limited resources are the key issue – it’s a bit like weathering a poop storm with a paper hat. There’s not enough protection, and the limited resources you have are not suited to the job. It’s hard to be sympathetic, as most of these organizations created their own headaches – remember when you thought it was a good idea to put a web interface on those legacy applications? Yeah, that’s what I’m talking about. Now you have billions of lines of code, designed to be buried deep within your private token ring, providing content to people outside your company. Part of the reason application security moves at a snail’s pace is the sheer scope of the problem. It’s not that companies don’t know their applications – especially web applications – are insecure; it’s that the time and money required to address all the problems are overwhelming. A continuing theme we are seeing is how to deal with application security at scale. It’s both an admission that we’re not fixing everything, and an examination of how to best utilize resources to secure applications. Risk analysis, identifying cross-domain threats, encapsulation, reperimeterization, and multi-dimensional prioritization of bug fixes are all strategies. There’s no single product at the show that embodies this, but we suggest it as a topic of discussion when you chat with folks. Many vendors will be talking about the problem and how their product fits within a specific strategic approach for addressing the issue.

Code Analysis? Meh. DAST? Yeah.
The merits of ‘building security in’ are widely touted, but adoption remains sporadic. Awareness, the scale of the issue, and cultural impediments all keep tools that help build secure code a small portion of the overall application security market. Regardless, we expect to hear lots of talk about code analysis and white box testing. These products offer genuine value, and several major firms made significant investments in the technology last year. While the hype will be in favor of white box code analysis, the development community remains divided. No one is arguing the value of white box testing, but adoption is slower than we expected. Very large software development firms with lots of money implement a little of each secure code development technique in their arsenal, including white box as a core element, basically because they can. The rest of the market? Not so much. Small firms focus on one or two areas during the design, development, or testing phase. Maybe. And that usually means fuzzing and Dynamic Application Security Testing (DAST). Whether it’s developer culture, or mindset, or how security integrates with development tools, or just the way customers want to solve security issues – the preference is for semi-black-box web scanning products.

Big Data, Little App Security

You’re going to hear a lot about big data and big data security issues at the conference. Big Data definitely needs to be on the buzzword bingo card. And 99 out of 100 vendors who tell you they have a big data security solution are lying. The market is still determining what the realistic threats are and how to combat them. But we know application security will be a bolt-on affair for a long period, because:

  • Big data application development has huge support and is growing rapidly.
  • A vanishingly low percentage of developer resources are going into designing secure applications for big data.
  • SQL injection, command injection, and XSS are commonly found on most of the front-end platforms that support NoSQL development (a classic injection example appears at the end of this post). Some of them did not even have legitimate access controls until recently!

Yes, jump into your time machine and set the clock for 10 years ago. Make no mistake – firms are pumping huge amounts of data into production non-relational databases without much more than firewalls and SSL protecting them. So if you have some architects playing around with these technologies (and you do), work on identifying some alternatives to secure them at the show.
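Since injection keeps coming up, here is the classic illustration of the problem and the standard fix, using SQLite purely for convenience. The table, column, and input are hypothetical; the broader ‘never build queries from raw input’ rule applies just as much to the NoSQL front ends discussed above.

```python
# Classic SQL injection example and the standard fix: bound parameters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"   # attacker-supplied value

# Vulnerable: attacker-controlled input is concatenated straight into the statement.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()   # returns every row despite the bogus name

# Safer: a bound parameter keeps the input as data, never as SQL.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()   # returns nothing, as it should

print(len(rows_bad), len(rows_good))   # 1 0
```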


RSA Conference 2012 Guide: Network Security

Yesterday we posted the key themes we expect to see at the upcoming RSA Conference. Now we’ll start digging into our main coverage areas, beginning with network security.

Firewalls are (still) dead! Long live the perimeter security gateway!

Shockingly enough, similar to the past three years at RSAC, you’ll hear a lot about next generation firewalls (NGFW). And you should, as port- and protocol-based firewall rules will soon go the way of the dodo bird. If by soon we mean 5+ years anyway – corporate inertia remains a hard game to predict. The reality is that you need to start moving toward deeper inspection of both ingress and egress traffic through your network, and the NGFW is the way to do that. The good news is that every (and we mean every) vendor in the network security space will be showing an NGFW at the show. Some are less NG than a bolted-on IPS to do the application layer inspection, but at the end of the day they can all claim to meet the NGFW market requirements, as defined by the name-brand analysts anyway. Which basically means these devices are less firewalls and more perimeter security gateways. So we will see two general positioning tactics from the vendors:

  • Firewall-centric vendors: These folks will mount a full frontal assault on the IPS business. They’ll talk about how there is no reason to have a stand-alone IPS anymore, and that the NGFW now does everything the IPS does and more. The real question for you is whether you are ready for the forklift that moving to a consolidated perimeter security platform requires.
  • IPS vendors: These vendors have to protect their existing revenue streams, so they will be talking about how the NGFW is the ultimate goal, but it’s more about how you get there. They’ll be talking about migration and co-existence and all those other good things that made customers feel good about dropping a million bucks on an IPS 18 months ago.

But no one will be talking about how the IPS or yesterday’s ports & protocols firewall remains the cornerstone of the perimeter security strategy. That sacred cow is slain, so now it’s more about how you get there. Which means you’ll be hearing a different tune from many of the UTM vendors. Those same brand-name analysts always dictated that UTM only met small company needs and didn’t have a place in an enterprise network. Of course that wasn’t exactly true, but the UTM vendors have stopped fighting it. Now they just magically call their UTM an NGFW. It actually makes sense (from their perspective), as they understand that an application-aware firewall is just a traditional firewall with an IPS bolted on for application classification. Is that an ‘NGFW’? No, because it still runs on firewall blocking rules based on ports and protocols (as opposed to applications), but it’s not like RSA attendees (or most mid-market customers) are going to really know the difference.

Control (or lack thereof)

Another batch of hyperbole you’ll hear at the conference is about control. This actually plays into a deeply felt desire on the part of all security professionals, who don’t really control much of anything on a daily basis. So you want to buy devices that provide control over your environment. But this is really just a different way of pushing you towards the NGFW, to gain ‘control’ over the applications your dimwit end users run. Control tends to put the cart ahead of the horse, though. The greatest impact of the NGFW is not in setting application-aware policies. Not at first.
The first huge value of an NGFW is gaining visibility into what is going on in your environment. Basically, you probably have no idea what apps are being used, by whom, and when. The NGFW will show you that, and then (only then) are you in a position to start trying to control your environment through application-centric policies. While you are checking out the show floor, remember that embracing application awareness on your perimeter is about more than just controlling the traffic. It all starts with figuring out what is really happening on your network.

Network-based Malware Detection gains momentum

Traditional endpoint AV doesn’t work. That public service message has been brought to you by your friend Captain Obvious. But even though blacklists and signatures don’t work anymore, there are certain indicators of malware that can be tracked. Unfortunately that requires you to actually execute the malware to see what it does. Basically, it’s a sandbox. It’s not really efficient to put a sandbox on every endpoint (though the endpoint protection vendors will try), so this capability is moving to the perimeter. Thus a hot category you’ll see at RSA is “network-based malware detection” gear. These devices sit on the perimeter and watch all the files passing through to figure out which of them look bad, and then either alert or block. They also track command and control traffic on egress links to see which devices have already been compromised, and trigger your incident response process. Of course these monitors aren’t a panacea for catching all malware entering your network, but you can stop the low-hanging fruit before it makes its way onto your network. There are two main approaches to NBMD, which are described ad nauseam in our recently published paper, so we won’t get into that here. But suffice it to say, we believe this technology is important, and until it gets fully integrated into the perimeter security gateway it’s a class of device you should be checking out while you are at the show.

Big security flexes its muscle

Another major theme related to network security we expect to see at the show is Big Security flexing its muscles. Given the need for highly specialized chips to do application-aware traffic inspection, and the need to see a ton of traffic to do this network-based malware detection and reputation analysis, network security is no longer really a place for start-ups (and no, Palo Alto is no


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.