
Quick Wins with Website Protection Services: Protecting the Website

In the introductory post in the Quick Wins with Website Protection Services series, we described the key attack vectors which usually result in pwnage of your site: possibly data theft, or an availability problem when your site falls down and can’t get back up. Since this series is all about Quick Wins, we won’t belabor the build-up; let’s jump right in and talk about how to address these issues.

Application Defense

As we mentioned in the Managing WAF paper, keeping a WAF operating effectively isn’t easy – it involves constant patching, rule updates based on new attacks, and tuning the rules to your specific application. But doing nothing isn’t an option, given that attackers use your site as the path of least resistance to gain a foothold in your environment. One advantage of front-ending your website with a website protection service (WPS) is a capability we’ll call WAF Lite. WAF Lite is, first and foremost, simple. You don’t want to spend a lot of time configuring or tuning the application defense. The key to getting a Quick Win is to minimize required customization while providing adequate coverage against the most likely attacks. You want it to just work, and block the stuff that is pretty obviously an attack. You know: XSS, SQLi, and the rest of the OWASP Top 10. These are standard attack types, and it’s not brain surgery to build rules to block them. It’s amazing that everyone doesn’t have this kind of simple defense implemented.

Out of one side of our mouths we talk about the need for simplicity, but you also need the ability to customize and/or tune the rules when necessary – which shouldn’t be often. It’s kind of like having a basic tab with a few checkboxes, well within the capabilities of an unsophisticated admin. That’s what you should be using most of the time. But when you need it, or when you enlist expert help, you’d like an advanced tab with lots of knobs and granular controls.

Although a WPS can be very effective against technical attacks, these services will not protect against a logic error in your application. If your application, search engine, or shopping cart can be gamed using legitimate application functions, no security service (or dedicated WAF, for that matter) can do anything about it. So parking your sites behind a WPS doesn’t mean you can skip QA testing, or do without smart penetration tester types trying to expose potential exploits. OK, we’ll end the disclaimer there.

We’re talking about service offerings in this series, but that doesn’t mean you can’t accomplish all these goals using on-premise equipment and managing the devices yourself. In fact, that’s how stuff got done before the fancy cloud-everything mentality started to permeate the technology world. But given that we’re trying to do things quickly, a service gives you the opportunity to deploy within hours, without significant burn-in and tuning to bring the capabilities online.
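To make the WAF Lite idea concrete, here is a minimal sketch of the kind of simple signature rules such a service applies to incoming requests. This is our illustration, not any vendor’s implementation – the patterns and names are assumptions, and real rule sets are far larger and continuously updated – but the basic shape is the same: match obviously hostile input, block it, and let everything else through.

```python
import re

# A few naive signatures for obvious attacks - illustrative only.
# Real WAF rule sets are much larger and updated continuously.
RULES = {
    "sqli": re.compile(r"(\bunion\b.+\bselect\b|'\s*or\s+1\s*=\s*1)", re.IGNORECASE),
    "xss": re.compile(r"(<script\b|javascript:|onerror\s*=)", re.IGNORECASE),
    "traversal": re.compile(r"\.\./"),
}

def inspect_request(params: dict) -> tuple:
    """Return (blocked, rule_name) for a parsed set of request parameters."""
    for value in params.values():
        for name, pattern in RULES.items():
            if pattern.search(value):
                return True, name
    return False, ""

# A request that should trip the SQLi rule:
print(inspect_request({"q": "widgets' OR 1=1 --"}))  # (True, 'sqli')
# A benign request passes through untouched:
print(inspect_request({"q": "widgets"}))             # (False, '')
```

The point of the basic tab is that rules like these come enabled out of the box, with nothing to tune.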
Platform Defense

The application layer is the primary target for attacks on your website (it’s the lowest-hanging fruit for attackers), but that doesn’t mean you can ignore attacks on your technology stack. We delved a bit into the application denial of service (DoS) attacks targeting the building blocks of your application, like Apache Killer and Slowloris. A WPS can help deal with this class of attacks by implementing rate controls on the requests hitting your site, amongst other application defenses.

Search engines never forget, and there is some data you don’t want in the great Googly-moogly index, so it pays to control which pages are available for crawling by search bots. You can configure this with a robots.txt file, but not every search engine plays nice. And some bots jump right to the disallowed sections, since that’s where the good stuff is, right? Being able to block automated requests and other search bots via the WPS can keep those pages off the search engines.

You’ll also want to restrict access to non-public areas of your site (and not just from the search engines discussed above). This could be pages like the control panel, sensitive internal pages, or the staging environment where you test feature upgrades and new designs. Unauthorized pages could also be back doors left by attackers to facilitate getting back into your environment. You also want to be able to block nuisance traffic, like comment spammers and email harvesters. These folks don’t cause a lot of damage, but they are a pain in the rear, and if you can get rid of them without any incremental effort, it’s all good.

A WPS can lock down not only where a visitor goes, but also where they come from. For some of those sensitive pages you may want to enforce that they can only be accessed from the corporate network (either directly or virtually via a VPN). So the WPS can block access to those pages unless the originating IP address is on the authorized list. (A toy sketch of both the rate and origin controls appears at the end of this excerpt.) Yes, this (and most other controls) can be spoofed and gamed, but it’s really about reducing your attack surface.

Availability Defense

We can’t forget about keeping the site up and taking requests, and a WPS can help here in a number of ways. First of all, a WPS provider has bigger pipes than you – in most cases a lot bigger – which gives them the ability to absorb a DDoS without disruption, or even a performance impact. You can’t say the same. Of course, be wary of bandwidth-based pricing, since a volumetric attack won’t just hammer your site, but also your wallet. At some point, if the WPS provider has enough customers you can pretty much guarantee at least one of their
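The rate controls and origin restrictions described above are simple enough to sketch. This is a toy illustration – the window size, request limit, restricted paths, and network range are all assumptions invented for the example, and a real WPS enforces these controls at its edge, long before traffic reaches your servers.

```python
import ipaddress
import time
from collections import defaultdict, deque

# Hypothetical policy values - a real service tunes these per site.
MAX_REQUESTS = 100        # requests allowed per client...
WINDOW_SECONDS = 10       # ...within this sliding window
RESTRICTED_PATHS = ("/admin", "/staging")
CORPORATE_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # example range

recent = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow(client_ip: str, path: str) -> bool:
    now = time.monotonic()
    window = recent[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop requests that aged out of the window
    window.append(now)
    # Rate control: request floods get cut off.
    if len(window) > MAX_REQUESTS:
        return False
    # Origin control: sensitive pages only from the corporate network.
    if path.startswith(RESTRICTED_PATHS):
        ip = ipaddress.ip_address(client_ip)
        return any(ip in net for net in CORPORATE_NETS)
    return True

print(allow("203.0.113.7", "/admin"))   # True - corporate source
print(allow("198.51.100.9", "/admin"))  # False - not on the authorized list
```

As noted above, source IPs can be spoofed, so treat this as attack surface reduction rather than authentication.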

Read Post

Network-based Malware Detection 2.0: Evolving NBMD

In the first post updating our research on Network-based Malware Detection, we talked about how attackers have evolved their tactics, even over the last 18 months, to defeat emerging controls like sandboxing and command & control (C&C) network analysis. As attackers get more sophisticated, defenses need to evolve as well. So we are focusing this series on tracking the evolution of malware detection capabilities and addressing issues with early NBMD offerings – including scaling, accuracy, and deployment. But first we need to revisit how the technology works. For more detail you can always refer back to the original Network-based Malware Detection paper.

Looking for Bad Behavior

Over the past few years malware detection has moved from file signature matching to isolating behavioral characteristics. Given the ineffectiveness of blacklist detection, the ability to identify malware behaviors has become increasingly important. We can no longer judge malware by what it looks like – we need to analyze what a file does to determine whether it is malicious. We discussed this behavioral analysis in Evolving Endpoint Malware Detection, focusing on how new approaches have added contextual determination to make the technology far more effective. You can read our original paper for full descriptions of the kinds of tells that usually mean a device is compromised, but a simple list includes: memory corruption/injection/buffer overflows; system file/configuration/registry changes; droppers, downloaders, and other unexpected programs installing code; turning off existing anti-malware protections; and identity and privilege manipulation. This list isn’t comprehensive – it’s just a quick set of guidelines for the kinds of information to search devices for when you are on the hunt for possible compromises. Other things you might look for include parent/child process inconsistencies, exploits disguised as patches, keyloggers, and screen grabbing. Of course these behaviors aren’t necessarily malicious on their own – which is why you want to investigate as quickly as possible, before any outbreak has a chance to spread.

The innovation in the first generation of NBMD devices was running this analysis on a device in the perimeter. Early devices implemented a virtual farm of vulnerable devices in a 19-inch rack, which enabled them to explode malware within a sandbox and monitor for the suspicious behaviors described above. Depending on the deployment model (inline or out of band), the device either fired an alert or actually blocked the file from reaching its target. It turns out the term sandbox is increasingly unpopular amongst security marketers, for some unknown reason, but that’s what they use – a protected and monitored execution environment for risk determination. Later in this series we will discuss options for ensuring the sandbox can scale to your needs.
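To illustrate how a sandbox turns observed behaviors into a verdict, here is a minimal scoring sketch. The behavior names, weights, and threshold are invented for illustration – real products use much richer telemetry and context – but weighting tells and comparing the total against a threshold captures the core of behavioral risk determination.

```python
# Hypothetical weights for the behavioral tells listed above.
BEHAVIOR_WEIGHTS = {
    "memory_injection": 5,
    "registry_change": 2,
    "dropper_installs_code": 4,
    "disables_antimalware": 5,
    "privilege_escalation": 3,
}
MALICIOUS_THRESHOLD = 6  # made-up cutoff for this sketch

def classify(observed: set) -> str:
    """Score the behaviors a file exhibited while detonating in the sandbox."""
    score = sum(BEHAVIOR_WEIGHTS.get(b, 0) for b in observed)
    if score >= MALICIOUS_THRESHOLD:
        return "malicious"
    return "suspicious" if score > 0 else "clean"

# A file that installs code and then turns off AV is an easy call.
print(classify({"dropper_installs_code", "disables_antimalware"}))  # malicious
print(classify({"registry_change"}))                                # suspicious
```

An inline deployment would block on a malicious verdict; out of band, the same verdict just fires an alert.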
Tracking the C&C Malware Factory

The other aspect of network-based malware detection is identifying egress network traffic that shows patterns typical of communication between compromised devices and controlling entities. Advanced attacks start by compromising and gaining control of a device. The malware then establishes contact with its command and control infrastructure to fetch a download with specific attack code, along with instructions on what to attack and when. In Network-based Threat Intelligence we dug deep into the kinds of indicators you can look for to identify malicious activity on the network, such as:

  • Destination: You can track the destinations of all network requests from your environment and compare them against a list of known bad places. This requires an IP reputation capability – basically a list of known bad IP addresses. Of course IP reputation can be gamed, so combining it with DNS analysis to identify likely Domain Generation Algorithms (DGA) helps eliminate false positives. (A crude sketch of this check appears at the end of this excerpt.)
  • Strange times: If you see a significant volume of traffic which is out of character for that specific device or time – such as the marketing group suddenly performing SQL queries against engineering databases – it’s time to investigate.
  • File types, contents, and protocols: You can also learn a lot by monitoring all egress traffic, looking for large file transfers, non-standard protocols (encapsulated in HTTP or HTTPS), weird encryption of files, or anything else that seems a bit off… These anomalies don’t necessarily mean compromise, but they warrant further investigation.
  • User profiling: Beyond the traffic analysis described above, it is helpful to profile users and identify which applications they use, and when. This kind of application awareness can identify anomalous activity on devices and give you a place to start investigating.

Layers FTW

We focus on network-based malware detection in this series, but we cannot afford to forget endpoints. NBMD gateways miss stuff. Hopefully not a lot, but it would be naive to believe you can keep computing devices (endpoints or servers) clean. You still need protection on your endpoints, and the controls should work together to ensure full protection, whether the device is on the corporate network or not. This is where threat intelligence plays a role, making both network and endpoint malware detection capabilities smarter. You want bi-directional communication, so malware indicators found by the network device or in the cloud are accessible to endpoint agents. Additionally, you want malware identified on devices to be sent to the network for further analysis, profiling, determination, and ultimately distribution of indicators to other protected devices. This wisdom of crowds is key to fighting advanced malware.

You may be one of the few, the lucky, and the targeted. No, it’s not a new soap opera – it just means you will see interesting malware attacks first. You’ll catch some and miss others – and by the time you clean up the mess you will probably know a lot about what the malware does, how, and how to detect it. Exercising good corporate karma, you will have the opportunity to help other companies by sharing what you found, even if you remain anonymous. If you aren’t a high-profile target this information sharing model works even better, allowing you to benefit from the misfortune of the targeted. The goal is to increase your chance of catching the malware
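Here is a toy version of the destination indicator described above, combining a known-bad IP list with a crude check for DGA-style domain names. The reputation set and the entropy cutoff are assumptions for illustration only – production systems consume large, continuously updated threat intelligence feeds and use far better DGA classifiers.

```python
import math
from collections import Counter

# Stand-in reputation data - real systems use threat intelligence feeds.
KNOWN_BAD_IPS = {"192.0.2.66", "198.51.100.23"}

def entropy(s: str) -> float:
    """Shannon entropy of a string; DGA-generated labels tend to score high."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def suspicious_destination(dest_ip: str, domain: str) -> bool:
    if dest_ip in KNOWN_BAD_IPS:
        return True
    label = domain.split(".")[0]
    # 12 characters and 3.5 bits are arbitrary cutoffs for this sketch.
    return len(label) > 12 and entropy(label) > 3.5

print(suspicious_destination("203.0.113.10", "x7f3kq9zt2mvb.info"))  # True - looks generated
print(suspicious_destination("203.0.113.10", "securosis.com"))       # False
```

Combining signals this way is exactly why the DNS analysis helps: a clean-looking IP paired with a generated-looking domain still warrants a closer look.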

Read Post

Incite 5/22/2013: Picking Your Friends

This time of year neighborhoods are overrun with “Graduation 2013” signs. The banners hang at the entrance of every subdivision, congratulating this year’s high school graduates. It’s a major milestone and they should celebrate. Three kids on our street are graduating, and two are the youngest in their families. So we will have a few empty nests on our street. You know what that means, right? At some point those folks will start looking to downsize. Who needs a big house for the summer break and holidays when the kids come home? Who needs the upkeep and yard work and cost? And the emptiness and silence for 10 months each year, when the kids aren’t there? They all got dogs, presumably to fill the void – maybe that will work out. But probably not. Sooner rather than later they will get something smaller. And that means new neighbors.

In fact it is already happening. The house next door has been on the market for quite a while. Yes, they are empty nesters, and they bought at the top of the market. So the bank is involved, and selling has been a painstaking process. Not that I’d know – I don’t really socialize with neighbors. I never have. I sometimes hear about folks hanging in the garage, drinking brews, or playing cards with buddies from the street. I played cards a couple of times in a local game across the street. It wasn’t for me. Why? I could blame my general anti-social nature, but that’s not it. I don’t have enough time to spend with people I like (yes, they do exist). So I don’t spend time with folks just because they live on my street.

The Boy can’t get in his car to go see buddies who don’t live in the neighborhood. So he plays with the kids on the street and the adjoining streets. There are a handful of boys and they are pretty good kids, so it works out well. And he doesn’t have an option. But I can get in my car to see my friends, and I do. Every couple weeks I meet up with a couple guys at the local Taco Mac and add to my beer list. They recently sent me a really nice polo shirt for reaching the 225 beer milestone in the Brewniversity. At an average of $5 per beer, that shirt only cost $1,125. I told you it was a nice shirt. I hang with those guys because I choose to – not because we liked the same neighborhood. We talk sports. We talk families. We talk work, but only a little. They are my buds. As my brother says, “You can pick your friends, but you can’t pick your family.” Which is true, but I’m not going there…

–Mike

Photo credit: “friend” originally uploaded by papadont

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Quick Wins with Website Protection Services: Are Websites Still the Path of Least Resistance?
  • Network-based Malware Detection 2.0: Advanced Attackers Take No Prisoners
  • Security Analytics with Big Data: Use Cases
  • Security Analytics with Big Data: Introduction

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution

Incite 4 U

Amazon to take over US government: Well, not really, but nobody should be surprised that Amazon is the first major cloud provider to achieve FedRAMP certification. Does this mean the NSA is about to store all the wiretaps of every US citizen in S3?
Nope, but it means AWS meets some baseline level of security and can hold sensitive (but not classified) government information. Keep in mind that big clients could already have Amazon essentially host a private cloud for them on dedicated hardware, so this doesn’t necessarily mean the Bureau of Land Management will run their apps on the same server streaming you the new Arrested Development, nor will you get the same levels of assurance. But it is a positive sign that the core infrastructure is reasonably secure, and that public cloud providers can meet higher security requirements when they need to. – RM

Arguing against the profit motive… is pointless, as Dennis Fisher points out while trying to put a few nails in the exploit sales discussion. He does a great job revisiting the slippery slope of vulnerability disclosure, and cuts off debate on exploit sales with a clear assessment of the situation: “Debating the morality or legality of selling exploits at this point is useless. This is a lucrative business for the sellers, who range from individual researchers to brokers to private companies.” You cannot get in the way of Mr. Market – not for long, anyway. Folks like Moxie can choose not to do projects that may involve unsavory outcomes. But there will always be someone else ready, willing, and able to do the job – whether you like it or not. – MR

Static Analysis Group Hug: WASC announced publication of a set of evaluation criteria to help consumers assess static analysis tools. With more and more companies looking to address software security issues in-house, we see modest growth in the code security market. But static analysis vendors are just as likely to find themselves up against dynamic application scanning vendors as static analysis competitors. The first thing that struck me about this effort is that not only did the contributors represent just about every vendor in the space, but the list is a “who’s who” of code security. Those people really know their stuff, and I am very happy that a capable group like this has put a stake in the ground. That said, I am disappointed that the evaluation criteria are freaking bland. They read more like a minimum feature set each product should have than a set of criteria to differentiate between products or solve

Read Post

Solera puts on a Blue Coat

Even after being in this business 20 years, I still get surprised from time to time. When I saw this morning that Blue Coat is acquiring Solera Networks I was surprised – and not with a childlike sense of wonder. It was a WTF? kind of surprise. Blue Coat was taken private by Thoma Bravo, et al., a while back, so they don’t need to divulge the deal size. It seems Blue Coat did the deal to position the Solera technology as a good complement to their existing perimeter filtering and blocking technology. Along with the Crossbeam acquisition, Solera can now run on big hardware next to Blue Coat in all those government and large enterprise networks where they scrutinize web traffic. Traffic volumes continue to expand, and given the advanced attacks everyone worries about, Solera’s analytics and detection capabilities fill a clear need. Blue Coat, like Websense (which went private this week in a private equity buyout), is being squeezed by cloud-based web filtering services and UTM/NGFW consolidation in their core business. So adding the ability to capture and analyze traffic at the perimeter moves the bar a bit, and makes sense for them.

I expected Solera to get bought this year at some point. It’s hard to compete with a behemoth like RSA/NetWitness for years without deep pockets and an extensive global sales distribution engine. But I expected the buyer to be a big security player (McAfee, IBM, HP, etc.) who would look at what RSA has done integrating NetWitness technology as the foundation of their security management stack, and try something similar with Solera’s capture, forensics, and analytics technology. Given Solera’s existing partnership with McAfee, and corporate parent Intel’s equity stake, I figured it would be them. Which is why I stay away from the gambling tables. I’m a crappy prognosticator.

As Adrian writes in the Security Analytics with Big Data series (Introduction & Use Cases), we expect SIEM to evolve over time to analyze events, network packets, and a variety of other data sources. This makes the ability to capture and analyze packets – which happens at a fundamentally different scale than events – absolutely critical for any company wanting to play in security management down the line. Solera was one of a handful of companies (a small handful) with the technology, so seeing them end up with Blue Coat is mildly disappointing, at least from the perspective of someone who wants to see broader solutions that solve larger security management problems. Blue Coat doesn’t have a way to fully leverage the broader opportunity packet capture brings to security management, because they operate only at the network layer. Since they were taken private they have hunkered down and focused on content analysis at the perimeter to find advanced attacks. Or something like that. But detecting advanced attacks and protecting corporate data require a much broader view of the security world than just the network. I guess if Blue Coat keeps buying stuff, leveraging Thoma’s deep pockets, they could acquire their way into a capability to deal with advanced attacks across all security domains. They would need something to protect devices. They would need some NAC to ensure devices don’t go where they aren’t supposed to. They would need more traditional SIEM/security management. And they would need to integrate all the pieces into a common user experience. I’m sure they will get right on that.
The timing is curious as well – especially if Blue Coat’s longer-term strategy is to be a PE-backed aggregator and eventually take the company public, sell at a big increase in valuation (like SonicWALL), or milk large revenue and maintenance streams (like Attachmate). They could have bought a company in a more mature market (as TripWire did with nCircle), where the revenue impact would be greater even at a lower growth rate. And if they wanted sexy, perhaps buy a cloud/SECaaS thing. But to take out a company in a small market, one which will require continued evangelizing to get past the tipping point, is a curious choice.

Let’s look at the other side of the deal – Solera’s motivation – which brings up the fundamental drivers for start-ups to do deals:

  • Strategic fit: Optimally, start-ups love to find a partner who provides a strategic fit, with little product overlap and the ability to invest significantly in their product and/or service. Of course integration is always challenging, but at least this kind of deal provides hope for a better tomorrow. Even if the reality usually falls a bit short.
  • Distribution channel leverage: Similarly, start-ups sometimes become the cool emerging technology that gets pumped through a big distribution machine, as the acquirer watches the cash register ring. This is the concept behind big security vendors buying smaller technology firms to increase their wallet share with key customers.
  • Too much money: Sometimes a buyer comes forward with the proverbial offer that is too good to refuse. Like when Yahoo or Facebook pay $1.1 billion for a web property that generates minimal revenue. Just saying. We don’t see many of these deals in security.
  • Investor pressure: Sometimes investors just want an out. It might be because they have lost faith, their fund is winding down, they need a win (of any size), or merely because they are tired and want to move on.
  • Pre-emptive strike: Sometimes start-ups sell when they see the wall. They know competition is coming after them. They know their technical differentiation will dissipate over time, and they will be under siege from marketing vapor from well-funded, much bigger companies. So they get out when they can – usually a good thing, because the next two options are what’s left if they mess up.
  • No choice: If the start-up waits too long they lose a lot of their leverage as competitors close in. At this point they will take what they can get, make investors whole, and hopefully find a decent place for their employees. They also promise themselves to sell sooner next time.
  • Fire sale: This happens when a start-up with no choice doesn’t

Read Post

Wendy Nather abandons the CISSP—good riddance

Mood music: Abandono by Amalia Rodrigues…

Wendy blogged about not renewing her CISSP. I never had one myself, but as Wendy said, it is much less important if you’re not going through the cattle-call HR process, which is majorly gebrochen in infosec… but that’s another post.

I suppose a CISSP might be useful for people starting out in security, who need to prove they have actually put in a few years at it and know the basics. It’s a handy first sorting mechanism when you’re looking to fill certain levels of positions. But by the time you’re directly recruiting people, you should know why you want them, beyond the fact that they’re certified. And then the letters aren’t important. My personal career path has always been about proactively sniping for work (AKA consulting – never had a “real job”) and cultivating relationships and recommendations, so the following rings especially true, even though I don’t have ‘decades’ of experience:

“After decades of being in IT, I no longer want to bother proving how much I know. If someone can’t figure it out by talking to me or reading my writing, then I don’t want their job. If they feel so strongly about that certification that they won’t waive it for me, then they don’t want me either, and that’s okay.”

Bingo. Sometimes, with a little time and attention, you can skip the HR cattle calls altogether and talk about what’s actually important to the hiring organization, beyond the HR robo-screening. That said, the CISSP has powerful (some say disproportionate) sway over our industry’s hiring practices. As Rich and Jamie said in our chat room today, the HR process is what it is, and many HR shops bounce you in the first round if you don’t have those five magic letters… So the CISSP has ongoing value to anyone going through open application processes, where HR does what HR does: blindly screening out the best candidates.

End Music: Good Riddance (I Hope You Had The Time Of Your Life) by Green Day

Read Post

(Scape)goats travel under the bus

It’s funny how certain data points get manipulated to bolster the corporate message. At least that’s how the trade press portrays them, anyway. If you read infosecurity-magazine.com’s coverage of Veracode’s State of Software Security report, you will see the subhead claiming the CISO is really the Chief Information Scapegoat Officer:

CISOs are often the first victim following a major security breach. Given the prevalence of such breaches, the average tenure of a CISO is now just 18 months; and this is likely to worsen if corporate security doesn’t improve.

That’s true. CISOs have been dealing with little to no job security since, well, forever. What’s curious is how the article goes on to discuss software security as a big problem, and a potential contributor to the lack of job security for CISOs everywhere:

The problem, suggests Chris Wysopal, co-founder and CTO of Veracode, is that “A developer’s main goal usually doesn’t include creating flawless, intrusion proof applications. In fact the goal is usually to create a working program as quickly as possible.” The need for speed over security is what creates the buggy software that threatens the CISO.

These are all true statements. But as math people all over the world like to say, correlation is not causation. There are many contributing factors making CISOs scapegoats when the finger-pointing starts after a breach. And the explanation is much simpler than poor software coding practices. I can sum it up in 3 words: SH*T FLOWS DOWNHILL. You think the CEO is going to take the fall? The CFO? The CIO? Yeah, right. That leaves the CISO holding the bag and getting run over by the bus.

The article does mention some new training materials from the SAFECode alliance, which are good stuff. Education is good. But that only addresses one of many problems facing CISOs.

Photo credit: “Didn’t get to try any of this unfortunately” originally uploaded by Jen R

Read Post

Websense Going Private

Websense announced today that they are being acquired by Vista Equity Partners, and will go private when the transaction closes. From the press release:

Under the terms of the agreement, Websense stockholders will receive $24.75 in cash for each share of Websense common stock they hold, representing a premium of approximately 29 percent over Websense’s closing price on May 17, 2013 and a 53 percent premium to Websense’s average closing price over the past 60 days. The Websense board of directors unanimously recommends that the company’s stockholders tender their shares in the tender offer.

Let’s be honest – Websense needed to do something, and John McCormack was elevated to the CEO position to get some sort of deal done. They have been languishing for the last few years under serious execution failures, predominantly in sales and channel strategy. The competition basically wrote them off, and has spent the last few years looting the Websense installed base. But unlike most companies which end up needing rescue by a private equity firm, Websense still has decent product and technology. I have heard from multiple competitors over the past couple years that they were surprised Websense hasn’t been more of a challenge, given the capability of the rebuilt product line. TRITON is a good platform, combining email and web security with DLP – available on-premise, in the cloud, or as a hybrid deployment.

That cloud piece holds the potential to save this from being a total train wreck for Vista. The on-premise web filtering market is being subsumed by multiple perimeter security vendors. Email security has substantially moved to the cloud, and is a mature market with highly competitive products from larger competitors. DLP isn’t enough to support a standalone company. Even combining these three pieces isn’t enough when the UTM guys advertise it all on one box for the mid-market, particularly because large enterprises look for best-of-breed components rather than bundles.

We assume Vista wants to break out the standard private equity playbook, focusing on sales execution and rebuilding distribution channels to generate cash by leveraging the installed base. Then they can sell Websense off in 2-3 years to a strategic acquirer. Thoma Bravo has proven a few times that if you can execute on the PE playbook in the security market, it’s great for the investors and remaining management, who walk away with a big economic win. TRITON has the potential to drive a positive exit, but only because of the cloud piece. On-premise they won’t be able to compete with the broader UTM and NGFW boxes. But Security as a Service bundles for email, web, and DLP are a growing market – especially in the mid-market, and even some enterprises are moving that way. Think ZScaler, not Check Point. Unlike the box pushers, Websense is already a legitimate SECaaS player.

We are not fortune tellers, but if Vista expects a return similar to the SonicWALL deal, that is a stretch. Acquiring Websense is certainly one place to start in the security market, and there is a reasonable chance they won’t lose money – especially when they recapitalize the debt in a few quarters and take a distribution to cover their equity investment. The PE guys aren’t dumb. But to create a big win they need to inject some serious vision, rebuild the product teams, and streamline around TRITON with an emphasis on the cloud and hybrid options, all while stopping the bleed-off of the installed base.
We hope that internally there is a sense of urgency and excitement as they step away from the scrutiny of the public market – not relief that they can hide for a few more years. As for existing customers, it’s hard to see a downside unless Vista decides to focus on sales and channels while totally neglecting product and technology. They would be idiots to take that approach, though, so odds are good the product will keep improving and remain competitive. Websense isn’t dead in the water by any means – if anything this deal gives them a chance to make the required changes without worrying about quarterly sales goals. But there will be nothing easy about turning Websense around. Vista and Websense have a lot of work in front of them.

Photo credit: “Private” originally uploaded by Richard Holt

Read Post

Quick Wins with Website Protection Services: Are Websites Still the Path of Least Resistance?

In the sad but true files: the industry has become focused on advanced malware, state-sponsored attackers, and 0-day attacks, to the exclusion of everything else. Any stroll around a trade show floor makes that obvious. Which is curious, because these ‘advanced’ attackers are not a factor for the large majority of companies. It also masks the fact that many compromises start with attacks against poorly-coded, brittle websites. Sure, many high-profile attacks target unsophisticated employees with crafty phishing messages, but we can neither minimize nor forget that if an attacker can gain presence via a website, they’ll take it. Why would they burn a good phishing message, 0-day malware, or other sophisticated attack when they can pop your web server with an XSS attack and then systematically run roughshod over your environment to achieve their mission?

We wrote about the challenges of deploying and managing WAF products and services at enterprise scale last year. But we kind of jumped to Step 2, and didn’t spend any time on simpler approaches to an initial solution for protecting websites. Even today, strange as it sounds, far too many websites have no protection at all. They are built with vulnerable technologies and without a thought for security, then let loose into a very hostile world. These sites are sitting ducks for script kiddies and organized crime alike. So we are taking a step back to write a new series about protecting websites using Security as a Service (SECaaS). We will use our Quick Wins structure to keep the focus on how website protection services can make a difference protecting web properties, and how they can be deployed quickly without fuss. To be clear, you can achieve these goals using on-premise equipment, and we will discuss the pros & cons of that approach vis-a-vis website protection services. But Mr. Market tells us every day that the advantages of an always-on, simple-to-deploy, secure-enough service win out over yet another complex device to manage in the network perimeter.

Before we get going we would like to thank Akamai for agreeing to potentially license this content on completion. As with all our research, we will write the series objectively and independently, guided by our Totally Transparent Research methodology. That allows us to write what needs to be written, and stay focused on end user requirements.

Website Attack Vectors

Although the industry has made strides toward a more secure web experience, it rarely takes long for reasonably capable attackers to find holes in any organization’s web properties. Whether due to poor coding practices, a poorly configured or architected technology stack, or change control issues, there is usually a way to defeat an application without proper protections in place. And even when proper security protections make it hard to compromise an application directly, attackers just resort to knocking the site down with a denial of service (DoS) attack. Let’s dig into these attack vectors, and why we haven’t made much progress addressing them.

SDLC what?

The seeming inability of most developers to understand even simplistic secure coding requirements continues to plague security professionals, and leaves websites unprepared to handle simple attacks. But if we are honest, that may not be fair. It is more an issue of developer apathy than inability. Developers still lack incentives to adopt secure coding practices – they are evaluated on their ability to ship code on time … not necessarily secure code.
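The gap between ‘ships on time’ and ‘ships secure’ is easiest to see in code. Here is a quick illustration – ours, not from the original post – using Python’s built-in sqlite3 module to contrast the classic injectable query with the parameterized version developers should be writing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated straight into the query text.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # [('alice', 'admin')] - every row leaks

# Safe: a parameterized query treats the input as data, not SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] - nothing matches
```

A WPS rule can block the obvious payloads at the edge today, but the parameterized query is the fix developers should eventually ship.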
For “A Day in the Life of a CISO”, Mike wrote poems (in pseudo iambic pentameter, no less!). One was about application security:

Urgent. The VP of Dev calls you in.
A shiny new app. Full of epic win.
Customers will love it. Everyone clap.
We launch tomorrow. Dr. Dre will rap.
It’s in the cloud. Using AJAX and Flash.
No time for pen test. What’s password hash?

Kind of funny, eh? It would be, if it weren’t so true. Addressing this issue requires you to look at it from two perspectives. First you must be realistic, and accept that you aren’t going to fundamentally change developer behavior overnight. So you need a solution that protects the website without rebuilding the code or changing developer behavior. You need to be able to stop SQL injection and XSS today – which is actually two days late. Why? Look no further than the truth explained by Josh Corman when introducing HD Moore’s Law: if your site can be compromised by anyone with an Internet connection and 15 minutes to download and install Metasploit, you will have very long days as a security professional.

Over time the right answer is to use a secure software development lifecycle (SDLC) to build all your code. We have written extensively about this in our Web app security program research, so we won’t rehash the details here. Suffice it to say that without proper incentives, a mandate from the top to develop and launch secure code, and a process to ensure it, you are unlikely to make much strategic progress.

Brittle infrastructure

It is amazing how many high-profile websites are deployed on unpatched components. We understand the challenge of operational discipline, the issues of managing downtime & maintenance windows, and the complexity of today’s interlinked technology stacks. That understanding and $4 will buy you a latte at the local coffee shop. Attackers don’t care about your operational challenges. They constantly search for vulnerable versions of technology components, such as Apache, MySQL, Tomcat, Java, and hundreds of other common website building blocks. Keeping everything patched and up to date is harder than endpoint patching, given the issues around downtime and the sheer variety of components used by web developers. Everyone talks about how great websites and SaaS are because users are no longer subjected to patching and updates. Alas, server components still need to be updated – you get to take care of them so end users don’t have to. Now you are the one on the hook. And if you don’t do it correctly – especially with open source components – you leave low-hanging fruit for attackers, who can easily weaponize exploits and search for vulnerable sites

Read Post

Spying on the Spies

The Washington Post says US officials claim Chinese hackers breached Google to determine who the US wanted Google to spy on. In essence, the 2010 Aurora attack was a counter-counter-espionage effort to determine who the US government was monitoring. From the Post’s post:

Chinese hackers who breached Google’s servers several years ago gained access to a sensitive database with years’ worth of information about U.S. surveillance targets, according to current and former government officials. The breach appears to have been aimed at unearthing the identities of Chinese intelligence operatives in the United States who may have been under surveillance by American law enforcement agencies.

… and …

Last month, a senior Microsoft official suggested that Chinese hackers had targeted the company’s servers about the same time Google’s system was compromised. The official said Microsoft concluded that whoever was behind the breach was seeking to identify accounts that had been tagged for surveillance by U.S. national security and law enforcement agencies.

Wow. Like it or not, the US government ensnared US companies in spying on their customers and users. If the Chinese motivation is as claimed, Google was targeted because it was known to be collecting data on suspected spies. It will be interesting to see whether this revelation generates some pushback – either by companies refusing to cooperate, or (as many companies have done) by removing infrastructure that tracks specific users. Painting a target on your back, and placing yourself in a situation where your servers could be seized, is a risk most firms can’t afford.

Read Post

Awareness training extends to the top

Trustwave’s Nicolas Percoco wrote an interesting article at boardmember.com describing a targeted attack on a senior executive. Who’dathunk sites catering to board members (and other mahogany row folks) would publish stuff from security folks? Oh, how the times have changed, eh?

Let’s dissect this attack starting from before you received the email early this morning. One of your competitors hired a hacker to obtain business plans, financial statements, price lists, etc. from your company. This activity is known as corporate espionage and has been going on since businesses started competing, just not in the same way it is happening today – through the click of a mouse.

The post runs through a plausible scenario. Targeted email from a spoofed account. Zero-day attack in the attachment. Total compromise and full access to the entire filesystem, allowing the theft of pretty much anything. Yup.

When you opened that resume, the Zero Day exploited a problem in your document reader. It installed a custom piece of malware written by the hacker that scoured your computer for the types of documents he was being paid to steal. Once the malware gathered those files, it then sent them over the Internet to the hacker’s system.

Of course the language is overly simplistic – it needs to be. This type of piece is for executive readers, who don’t understand Adobe exploits, egress filtering, or advanced malware. But the message tends to get lost in day-to-day security firefighting. You must spend time educating executives about these kinds of attacks. You also need to implement controls that reflect the value of the devices executives use, and protect them accordingly in light of their extensive access to important things. The post ends with a number of high-level suggestions: start with email security, monitor for unusual activity, and ensure executives’ devices are updated. Yup, yup, and yup. But even these high-level recommendations will be over the heads of many executives. This kind of piece is more about making sure that when security comes in, demanding behavioral changes and additional protections that impair the executive user experience, executives are receptive. Or perhaps not receptive – but at least they understand why it is important.

Photo credit: “CEO – Tiare – Board Meeting – Franklin Canyon” originally uploaded by tiarescott

Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.