Security Analytics with Big Data: Defining Big Data

Today we pick up our Security Analytics with Big Data series where we left off. It is worth reiterating that this series was originally intended to describe how big data makes security analytics better. But when we started to interview customers it became clear that they are just as concerned with how big data can improve their existing infrastructure. They want to know how big data can augment SIEM, and what impact that transition will have on their organization. It has taken some time to complete our interviews with end users and vendors to determine current needs and capabilities. And the market is moving fast – vendors are pushing to incorporate big data into their platforms and leverage the new capabilities. I think we have a good handle on the state of the market, but as always we welcome comments and input.

So far we have outlined the reasons big data is viewed as a transformative technology for SIEM, as well as common use cases, with the latter post showing how customer desires differ from what we have come to expect. My original outline addressed a central question – "How is big data analysis different from traditional SIEM?" – but it has since become clear that we need to fully describe what big data is first. This post demystifies big data by explaining what it is and what it isn't, so potential buyers like you can compare what big data actually is with what your SIEM vendor is selling. Are they really using big data, or is it the same thing they have been selling all along? You need to understand what big data is before you can tell whether a vendor's offering is valuable or snake oil. Some vendors are (deliberately) sloppy, and their big data offerings may not actually be big data at all. They might offer a relational data store with a "Big Data" label stuck on, or a proprietary flat file storage format without any of the features that make big data platforms powerful.

Let's start with Wikipedia's Big Data page. Wikipedia's definition (as of this writing) captures the principal challenges big data is intended to address: increased Volume (quantity of data), Velocity (rate of data accumulation), and Variety (different types of data) – also called the 3Vs. But Wikipedia fails to actually define big data. The term "big data" has been so overused, with so many incompatible definitions, that it has become meaningless.

Essential Characteristics

The current poster child for big data is Apache Hadoop, an open source platform modeled on Google's MapReduce and Google File System papers. A Hadoop installation is built as a clustered set of commodity hardware, with each node providing storage and processing capabilities. Hadoop provides tools for data storage, data organization, query management, cluster management, and client management. It is helpful to think of the Hadoop framework as a 'stack', like the LAMP stack. These Hadoop components are normally grouped together, but you can replace each component, or add new ones, as desired. Some clusters add optional data access services such as Sqoop and Hive. Lustre, GFS, and GPFS can be swapped in as the storage layer. Or you can extend HDFS functionality with tools like Scribe. You can select or design a big data architecture specifically to support columnar, graph, document, XML, or multidimensional data. This modular approach enables customization and extension to satisfy specific customer needs. But that is still not a definition. And Hadoop is not the only player.
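To make the processing model concrete before looking beyond Hadoop, here is a minimal sketch of the kind of MapReduce-style job these platforms run, written as a Hadoop Streaming-style mapper and reducer in Python that counts events per source IP in newline-delimited log records. The field layout and the local driver are assumptions for illustration, not any particular product's implementation.

```python
#!/usr/bin/env python
# Minimal MapReduce-style sketch (Hadoop Streaming conventions):
# the mapper reads raw log lines and emits (key, 1) pairs, the reducer
# receives pairs sorted by key and sums the counts. Assumes
# whitespace-delimited records with the source IP in the first field.
import sys
from itertools import groupby


def mapper(lines):
    """Emit (source_ip, 1) for every parseable log record."""
    for line in lines:
        fields = line.split()
        if fields:                      # skip blank lines
            yield fields[0], 1          # fields[0] assumed to be the source IP


def reducer(pairs):
    """Sum counts per key; input must be sorted by key, as the framework guarantees."""
    for key, group in groupby(pairs, key=lambda kv: kv[0]):
        yield key, sum(count for _, count in group)


if __name__ == "__main__":
    # Standalone demo: run both phases locally over stdin.
    mapped = sorted(mapper(sys.stdin))   # simulates the shuffle/sort phase
    for ip, total in reducer(mapped):
        print(f"{ip}\t{total}")
```

In a real cluster the mapper and reducer run as separate processes across many nodes, with the framework handling the sort and shuffle between them – the 'distributed parallel processing' characteristic that shows up in the list below.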
Users might choose Cassandra, CouchDB, MongoDB, or Riak instead – or investigate 120 or more alternatives. Each platform is different – focusing on its own particular computational problem area, replicating data across the cluster in its own way, with its own storage and query models, and so on. One common thread is that every big data system is built on a 'NoSQL' (non-relational) database and embraces non-relational technologies to improve scalability and performance. Unlike relational databases, which we define by their use of relational keys, table storage, and various other common traits, there is no such commonality among NoSQL platforms. Each layer of a big data environment may be radically different, so there is much less common functionality than we see between RDBMS.

But we have seen this problem before – the term "cloud computing" used to be similarly meaningless, but we have come to grips with the many different cloud service and consumption models. We lacked a good definition until NIST defined cloud computing in terms of a set of essential characteristics. So we took a similar approach, defining big data as a framework of utilities and characteristics common to all NoSQL platforms:

  • Very large data sets (Volume)
  • Extremely fast insertion (Velocity)
  • Multiple data types (Variety)
  • Clustered deployments
  • Complex data analysis capabilities (MapReduce or equivalent)
  • Distributed and redundant data storage
  • Distributed parallel processing
  • Modular design
  • Inexpensive
  • Hardware agnostic
  • Easy to use (relatively)
  • Available (commercial or open source)
  • Extensible – designers can augment or alter functions

There are more essential characteristics to big data than just the 3Vs. Additional essential capabilities include data management, cost reduction, more extensive analytics than SQL, and customization (including a modular approach to orchestration, access, task management, and query processing). This broader collection of characteristics captures the big data value proposition, and offers a better understanding of what big data is and how it behaves.

What does it look like?

This is a typical big data cluster architecture: multiple nodes cooperate to manage data and process queries. A central node manages the cluster and client connections, and clients communicate directly with the name node and individual data nodes as necessary for query operations. This simplified diagram shows the critical components, but a big data cluster could easily comprise 500 nodes hosting 30 applications. More nodes enable faster data insertion, and parallel query processing improves responsiveness substantially. 500 nodes should be overkill for your SIEM installation, but big data can solve much larger problems than security analytics.

Why Are Companies Adopting Big Data?

Thinking of big data simply as a system that holds "a lot of data", or even limiting its definition


Finally! Lack of Security = Loss of Business

For years security folks have been frustrated when trying to show real revenue impact for security. We used the TJX branding issue for years, but it didn't really impact their stock or business much at all. Heartland Payment Systems is probably stronger now because of their breach. You can check out all the breach databases, and it's hard to see how security has really impacted businesses. Is it a pain in the butt? Absolutely. Does cleanup cost money? That's clear. But with the exception of CardSystems, businesses just don't go away because of security issues. Or compliance issues, for that matter. Which is why we continue to struggle to get budget for security projects.

Maybe that's changing a little with word that BT decided to dump Yahoo! Mail from its consumer offering because it's a steaming pile of security FAIL. Could this be the dawn of a new age, where security matters? Where you don't have to play state-sponsored hacking FUD games to get anything done? Could it be? Probably not. This, folks, is likely to be another red herring for security folks to chase.

Let's consider the real impact to a company like Yahoo. Do they really care? I'm not sure – they lost the consumer email game a long time ago. With all their efforts around mobile and innovation, consumer email just doesn't look like a major focus, so the lack of new features and unwillingness to address security issues kind of make sense. Sure, they will lose some of the traffic the captive BT portal provided as part of the service, but how material is that in light of Yahoo's changing focus? Not enough to actually fix the security issues, which would likely require a fundamental rebuild/re-architecture of the email system. Yeah, not going to happen. Anyone working for a technology company has probably lived through this movie before. You don't want to outright kill a product, because some customers continue to send money, and it's high margin because you don't need to invest in continued development. So is Marissa Mayer losing sleep over this latest security-oriented black eye? Yeah, probably not. So where are we? Oh yeah, back to Square 1. Carry on.

Photo credit: "Dump" originally uploaded by Travis


Friday Summary: May 31, 2013

It is starting to feel like summer, both because the weather is getting warmer and because most of the Securosis team has been taking family time this week. I will keep the summary short – we have not been doing much writing and research this week.

We talk a lot about security and compliance for cloud services. It has become a theme here that, while enterprises are comfortable with SaaS (such as Salesforce), they are less comfortable with PaaS (Dropbox & Evernote, etc.), and often refuse to touch IaaS (largely Amazon AWS) … for security and compliance reasons. Wider enterprise adoption has been stuck in the mud – largely because of compliance. Enterprises simply can't get the controls and transparency they need to meet regulations, and they worry that service provider employees might steal their $#!%. The recent Bloomberg terminal spying scandal is a soft-core version of their nightmare scenario.

As I was browsing through my feeds this week, it became clear that Amazon understands the compliance and security hurdles it needs to address, and that it is methodically removing them, one by one. The news of an HSM service a few weeks ago seemed very odd at first glance – it looks like the opposite of a cloud service: non-elastic, non-commodity, and not self-service. But it makes perfect sense for potential customers whose sticking point is a compliance requirement for an HSM for key storage and/or generation. A couple weeks ago Amazon announced SOC compliance, adding transparency to their security and operational practices. They followed up with a post discussing Redshift's new transparent encryption for compute nodes, so stolen disks and snapshots would be unreadable. Last week they announced FedRAMP certification, opening the door for many government organizations to leverage Amazon cloud services – probably mostly community cloud. And taking a page from the Oracle playbook, Amazon now offers training and certification to help traditional IT folks close their cloud skills gap.

Amazon is doing a superlative job of listening to (potential) customer impediments and working through them. By obtaining these certifications Amazon has made it much easier for customers to investigate what they are doing, and then negotiate the complicated path to contracting with Amazon while satisfying corporate requirements for security controls, logging, and reporting. Training raises IT's comfort level with cloud services, and in many cases will shift detractors (IT personnel) into advocates.

But I still have reservations about security. It's great that Amazon is addressing critical problems for AWS customers and building these security and compliance technologies in-house. But this makes it very difficult for customers to select non-Amazon tools for key management, encryption, and logging. Amazon is on their home turf, offering genuinely useful services optimized for their platform, with good bundled pricing. But these solutions are not necessarily designed to make you 'secure'. They may not even address your most pressing threats, because they are focused on common federal and enterprise compliance concerns. These security capabilities are clearly targeted at compliance hurdles that have been slowing AWS adoption. Bundled security capabilities are not always the best ones to choose, and compliance capabilities have an unfortunate tendency to be just good enough to tick the box. That said, the AWS product managers are clearly on top of their game!
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian presenting next week on Tokenization vs. Encryption.
  • Adrian's Security Implications of Big Data.

Favorite Securosis Posts

  • Evernote Business Edition Doubles up on Authentication.
  • Quick Wins with Website Protection Services: Deployment and Ongoing Management.

Favorite Outside Posts

  • Mike Rothman: Mandiant's APT1: Revisited. Is the industry better off because Mandiant published the APT1 report? Nick Selby thinks so, and here are his reasons. I agree.
  • Adrian Lane: Walmart Asked CA Shoppers For Zip Codes. Now It's Ordered To Send Them Apology Giftcards. It's a sleazy practice – cashiers act like the law requires shoppers to provide their zip codes, and are trained to stall if they don't get them. The zip codes enable precise data analytics to identify shoppers. It's good to see a merchant actually get penalized for this scam.

Research Reports and Presentations

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer's Guide.

Top News and Posts

  • PayPal Site vulnerable to XSS. Via Threatpost. Two words: Dedicated. Browser.
  • Sky hacked by the Syrian Electronic Army.
  • Postgres Database security patch. A couple weeks old – we missed it – but a remote takeover issue.
  • Anonymous Hacktivist Jeremy Hammond Pleads Guilty to Stratfor Attack.
  • U.S. Government Seizes LibertyReserve.com.
  • Why We Lie.
  • Elcomsoft says Apple's 2FA has holes.

Blog Comment of the Week

This week's best comment goes to LonerVamp, in response to last week's Friday Summary.

As long as Google isn't looking at biometrics, or other ways to uniquely identify me as a product of their advertising revenues, I'm interested in what they come up with. But just like their Google+ Real Names fiasco, I distrust anything they want to do to further identify me and make me valuable for further targeted advertising. Plus the grey market of sharing backend information with other (paying) parties. For instance, there are regulations to protect user privacy, but often the expectation of privacy is relaxed when it "appears" that a third party already knows you. For instance, if I have a set of data that includes mobile phone numbers (aka accounts) plus the full real names of the owners, there can be some shady inferred trust that I am already intimate with you, and thus selling/sharing additional phone/device data with me is ok, as long as it's done behind closed doors and neither of us talk about it. Tactics like that are how


Network-based Malware Detection 2.0: Scaling NBMD

It is time to return to our Network-based Malware Detection (NBMD) 2.0 series. We have already covered how the attack space has changed over the past 18 months and how you can detect malware on the network. Let's turn our attention to another challenge for this quickly evolving technology: scalability.

Much of the scaling problem has to do with the increasing sophistication of attackers and their tools. Even unsophisticated attackers can buy sophisticated malware on the Internet. There is a well-developed market for packaged malware, and suppliers are capitalizing on it. Market-based economies are a two-edged sword. And that doesn't even factor in advanced attackers, who routinely discover and weaponize 0-day attacks to gain footholds in victim networks. Altogether, this makes scalability a top requirement for network-based malware detection. So why is it hard to scale up? There are a few issues:

  • Operating systems: Unless you have a homogeneous operating system environment, you need to test each malware sample against numerous vulnerable operating systems. This one-to-many testing requirement means every malware sample requires 3-4 (or more) virtual machines, running different operating systems, to adequately test the file.
  • VM awareness: Even better, attackers now check whether their malware is executing within a virtual machine. If so, the malware either goes dormant or waits a couple hours, in hopes it will be cleared through the testbed and onto vulnerable equipment before it starts executing for real. So to fully test malware the sandbox needs to let it cook for a while. You need to spin up multiple VMs and let them run for a while – very resource intensive.
  • Network impact: Analyzing malware isn't just about determining that a file is malicious. You also need to understand how it uses the network to connect to command and control infrastructure and perform internal reconnaissance for lateral movement. That requires watching the network stack on every VM and parsing network traffic patterns.
  • Analyze everything: You can't restrict heavy analysis to files that look obviously bad based on simple file characteristics. With the advanced obfuscation techniques in use today you need to analyze all unknown files. Given the number of files entering a typical enterprise network daily, you can see how the analysis requirements scale up quickly.

As you can see, the computing requirements to fully test inbound files are substantial and growing exponentially. Of course many people choose to reduce their analysis. You could certainly make a risk-based decision not even to try detecting VM-aware malware, and just pass or block each file instantly. You might decide not to analyze documents or spreadsheets for macros. You may not worry about the network characteristics of malware. These are all legitimate choices to help network-based malware detection scale without a lot more iron. But each compromise weakens your ability to detect malware. Everything comes back to risk management and tradeoffs. But, for what it's worth, we recommend not skipping malware tests.
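To see why the resource math gets ugly, consider a toy scheduler that does what the analysis requirements above demand: detonate every unknown file across several OS images and let each one 'cook' long enough to outwait delayed execution. The image names, dwell time, and sample volume below are placeholders for illustration, not any vendor's implementation.

```python
#!/usr/bin/env python
# Toy illustration of why full sandbox analysis is resource intensive:
# every unknown file is detonated in several OS images, and each image is
# held open long enough to outwait VM-aware, delayed-execution malware.
# Image names, dwell time, and sample counts are placeholders.
from dataclasses import dataclass

OS_IMAGES = ["winxp_sp3", "win7_x64", "win8_x64", "osx_10_8"]   # placeholder images
DWELL_MINUTES = 30                                              # let the sample "cook"


@dataclass
class Detonation:
    sample: str
    image: str
    minutes: int


def plan_analysis(samples):
    """One detonation per (sample, OS image) pair -- the one-to-many problem."""
    return [Detonation(s, img, DWELL_MINUTES) for s in samples for img in OS_IMAGES]


if __name__ == "__main__":
    inbound = [f"file_{n:04d}.exe" for n in range(2000)]   # a modest day of unknown files
    jobs = plan_analysis(inbound)
    vm_minutes = sum(j.minutes for j in jobs)
    print(f"{len(jobs)} detonations, {vm_minutes / 60:.0f} VM-hours of analysis per day")
```

Even this toy arithmetic – thousands of VM-hours per day before you add per-VM network monitoring – shows why the traditional answer has been more hardware, which is where the cost discussion picks up next.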
Scaling the Malware Analysis Mountain

Historically the answer to most scaling problems has been to add computing power – generally more and/or bigger boxes. The vendors selling boxes love that answer, of course. Enterprise customers, not so much. Scaling malware detection hardware raises two significant issues. The first is cost – and we aren't just referring to the cost of the product: each box requires a threat update subscription and maintenance. The second is the additional operational cost of managing more devices. Setting and maintaining policies on multiple boxes can be challenging, and ensuring each device is operational, properly configured, and patched is more overhead. You need to keep every device within the farm up to date. New malware indicators appear pretty much daily and need to be loaded onto each device to remain current.

We have seen this movie before. There was a time when organizations ran anti-spam devices within their own networks using enterprise-class (expensive) equipment. When the volume of spam mushroomed, enterprises needed to add devices to analyze all the inbound mail and keep it flowing. This was great for vendors but made customers cranky. The similarities to network-based malware detection are clear. We won't keep you in suspense – the anti-spam story ends in the cloud. Organizations realized they could make scaling someone else's problem by using a managed email security service. So they did, en masse. This shifted the onus onto providers to keep up with the flood of spam, and to keep devices operational and current. We expect a similar end to the NBMD game.

We understand that many organizations have already committed to on-premises devices. If you are one of them you need to figure out how to scale your existing infrastructure. This requires central management from your vendor and a clear operational process for updating devices daily. At this point customer premises NBMD devices are mature enough to have decent central management capabilities, allowing you to configure policies and deploy updates throughout the enterprise. Keeping devices up to date requires a strong operational process. Some vendors offer the ability to have each device phone home to automatically download updates. Or you could use a central management console to update all devices. Either way you will want some human oversight of policy updates, because most organizations remain uncomfortable with having policies and other device configurations managed and changed by a vendor or service provider. With good reason – it doesn't happen often, but bundled endpoint protection signature updates can brick devices. Bad network infrastructure updates don't brick devices, but how useful is an endpoint without network access?

As we mentioned earlier, we expect organizations to increasingly consider and choose cloud-based analysis, in tandem with an on-premises enforcement device for collection and blocking. This shifts responsibility for scaling and updating onto the provider. That said, accountability cannot be outsourced, so you need to verify both detection accuracy (the subject of the next post) and reasonable sample analysis turnaround times. Make sure to build this oversight into your processes. Another benefit of the cloud-based approach is the ability to share intelligence


Evernote Business Edition Doubles up on Authentication

Joining the strong(er) authentication craze (which we enthusiastically support), along with recent entrants Twitter and Amazon Web Services, Evernote is now including two-factor authentication and access logging in its business edition. Two steps in the right direction for security. I expect many more of these services to include security features like this in their paid versions, as a valuable upgrade from the free tier.


Quick Wins with Website Protection Services: Deployment and Ongoing Management

For this series focused on Quick Wins with Website Protection Services, the key is getting your sites protected quickly without breaking too much application functionality. Your public website is highly visible to both customers and staff. Most such public sites capture private information, so site integrity is important. Lastly, your organization spends a ton of money getting the latest and greatest functionality onto the site, so they don't take kindly to being told their shiny objects aren't supported by security. All this adds up to a tightrope act: protecting the website while maintaining performance, availability, and functionality. Navigating these tradeoffs is what makes security a tough job.

Planning the Deployment

The first step is to get set up with your website protection service (WPS). If you are just dealing with a handful of sites and your requirements are straightforward, you can probably do this yourself. You don't have much pricing leverage, so you won't get much attention from a dedicated account team. Obviously if you do have enterprise-class requirements (and budget), you go through the sales fandango with the vendor. This involves a proof of concept, milking their technical sales resources to help set things up, and then playing one WPS provider against another for the best price – just like with everything else.

Before you are ready to move your site over (even in test mode) you have some decisions to make. Start at the beginning: you need to decide which sites need to be protected. The optimal answer is all of them, but we live in an imperfect world. You also may not know the full extent of all your website properties. With your list of high-priority sites which must be protected, you need to understand which pages and areas are fine for the public and search spiders to see, and which are not. It is quite possible that everything is fair game for everybody, but you cannot afford to assume so.

Speaking of search engines and automated crawlers, you will need to figure out how to handle those inhuman visitors. One key feature described in the last post is the ability to control which bots are allowed to visit and which are not. While you are thinking about the IP ranges that can visit your site, you need to decide whether to restrict inbound network connections to only the WPS. This blocks attackers from attacking your site directly, but to take advantage of this option you will need to work with the network security team to lock it down on your firewall. These are some of the decisions you need to make before you start routing traffic to the WPS.

A level of abstraction above bots and IP addresses is users and identities. Will you restrict visitors by geography, user agent (some sites don't allow IE6 to connect, for example), or anything else? WPS services use big data analytics (just ask them) to track details about certain IP addresses and speculate on the likely intent of visitors. Using that information you could conceivably block unwanted users from connecting, in an attempt to prevent malicious activity. Kind of like Minority Report for your website. That's all well and good, but as we learned during the early IPS days, blocking big customers causes major headaches for the security team – so be careful when pulling the trigger on these kinds of controls. That's why we are still in the planning phase here. Once we get to testing you will be able to thoroughly understand the impact of your policies on your site.
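One way to sanity-check the 'only accept connections from the WPS' decision is to audit your origin server logs against the provider's published IP ranges before and after the firewall change. The sketch below shows the idea in Python; the ranges are placeholders (RFC 5737 test networks), not any real provider's addresses.

```python
#!/usr/bin/env python
# Sketch: flag origin-server connections that bypass the WPS.
# The WPS_RANGES values are placeholders -- substitute the ranges your
# provider actually publishes before drawing any conclusions.
from ipaddress import ip_address, ip_network

WPS_RANGES = [ip_network(cidr) for cidr in (
    "203.0.113.0/24",    # placeholder (TEST-NET-3)
    "198.51.100.0/24",   # placeholder (TEST-NET-2)
)]


def came_through_wps(client_ip: str) -> bool:
    """True if the connecting address belongs to the WPS provider."""
    addr = ip_address(client_ip)
    return any(addr in net for net in WPS_RANGES)


def audit(log_ips):
    """Return the set of source addresses that hit the origin directly."""
    return {ip for ip in log_ips if not came_through_wps(ip)}


if __name__ == "__main__":
    sample = ["203.0.113.45", "192.0.2.17", "198.51.100.9"]
    print("Direct-to-origin sources:", audit(sample))
```

If the audit keeps flagging direct-to-origin sources after the firewall rules are in place, either the rules are wrong or someone has found a way around the WPS – both worth knowing before you rely on it for protection.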
Finally, you need to determine which of your administrators will have access to the WPS console and be able to (re-)configure the service. Like any other cloud-based service, unauthorized access to the management console is usually game over, so it is essential to make sure authorizations and entitlements are properly defined and enforced. Another management decision involves who is alerted to WPS issues such as downtime and attacks – the same process you follow for your own devices. Defining handoffs and accountabilities between your team and the WPS group before you move traffic is essential.

Test (or Suffer the Consequences)

Now that you have planned out the deployment sufficiently, you need to work through testing to figure out what will break when you go live. Many WPS services claim you can be up and running in less than an hour, and that is indeed possible. But getting a site running is not exactly the same as getting it running with full functionality and security. So we always recommend a test to understand the impact of front-ending your website with a WPS. You may decide any issues are more than outweighed by the security improvement from the WPS, or perhaps not. But you should be able to have an educated discussion with senior management about the trade-offs before you flip the switch.

How can you test these services? Optimally you already have a staging site where you test functionality before it goes live, so you can run a full battery of QA tests through the WPS. Of course that might require the network team to temporarily add firewall rules to allow traffic to flow properly to a protected staging environment. You might also use DNS hocus pocus to route a tightly controlled slice of traffic through the WPS for testing, while the general public still connects directly to your site. Much of the testing mechanics depend on your internal web architecture. WPS providers should be able to help you map out a testing plan.

Then you get to configure the WAF rules. Some WPS offerings have 'learning' capabilities, whereby they monitor site traffic during a burn-in period and then suggest rules to protect your applications. That can get you going quickly, and this is a Quick Wins initiative, so we can't complain much. But automatically generated rules may not provide sufficient security. We favor an incremental approach, where you start with the most secure settings you can, see what breaks using the WPS, then tune accordingly. Obviously some functions of your applications must not be impacted, so


Friday Summary: May 24, 2013

This month Google announced a new five-year plan for identity management, an update to 2008's five-year plan. Their look backward is as interesting as the revised roadmap. Google recognized that their 2-factor auth was more like one-time 2-factor, and that the model has been largely abused in practice. They also concluded that risk-based authentication has worked. A risk-based approach means more sensitive or unusual operations, such as credential changes and connections from unusual locations, ratchet up security by activating additional authentication hurdles. This has been a recent trend, and Google's success will convince other organizations to get on board.

The new (2013-2018) identity plan calls for a stricter 2-factor authentication scheme, a continuing push for OpenID, locking 'bearer' tokens to specific devices (to reduce the damage an attacker can cause with stolen tokens), and a form of Android app monitoring that alerts users to risky behavior. These are all very good things! Google did not explicitly state that passwords and password recovery schemes are broken, but it looks like they will promote biometrics such as face and fingerprint scanning to unlock devices and authenticate users. The shift away from passwords is a good thing, but what will replace them is still being hotly debated. From the roadmap it looks like Google will go to facial and fingerprint scans first. The latter is a big deal from an outfit like Google, because consumers have shown they largely don't care about security. Despite more than a decade of hijacked accounts, data breaches, and identity theft, people still haven't shifted from saying they care about security to actually adopting security. Even something as simple and effective as a personal password manager is too much for most people to bother with. A handful of small companies offer biometric apps for mobile devices – targeting consumers and hoping Joe User will actually want to buy multi-factor authentication for his mobile device. So far that pitch has been about as successful as offering Brussels sprouts to a toddler.

But companies do care about mobile security. Demand for things like biometrics, NFC, risk-based access controls, and 2-factor authentication is driven by enterprises. But if enterprises (including Google) drive advanced (non-password) authentication to maturity – meaning a point where it's easier and more secure than our current broken password security – users will eventually use it too. Google has the scale and pervasiveness to move the needle on security. Initiatives such as their bug bounty program have succeeded, leading the way for other firms. If Google demonstrates similar successes with better identity systems, they are well positioned to drive both awareness and comfort with cloud-based identity solutions – in a way Courion, Okta, Ping Identity, Symplified, and other outfits cannot. There are many good technologies for identity and access management, but someone needs to make the user experience much easier before we will see widespread adoption.
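Risk-based authentication – the piece Google says has worked – is straightforward to reason about in code: score the context of a login attempt and only add friction when the score crosses a line. The sketch below is a bare-bones illustration of that idea; the signals, weights, and threshold are invented for demonstration and have nothing to do with Google's actual implementation.

```python
#!/usr/bin/env python
# Bare-bones risk-based (step-up) authentication sketch.
# Signals, weights, and the threshold are invented for illustration;
# real systems use many more signals and continuously tuned models.
from dataclasses import dataclass


@dataclass
class LoginContext:
    known_device: bool
    usual_country: bool
    credential_change_requested: bool
    failed_attempts_last_hour: int


def risk_score(ctx: LoginContext) -> int:
    score = 0
    if not ctx.known_device:
        score += 40
    if not ctx.usual_country:
        score += 30
    if ctx.credential_change_requested:        # sensitive operation
        score += 20
    score += min(ctx.failed_attempts_last_hour, 5) * 5
    return score


def required_factor(ctx: LoginContext) -> str:
    """Password only for low risk; step up to a second factor above the line."""
    return "password+second_factor" if risk_score(ctx) >= 50 else "password"


if __name__ == "__main__":
    routine = LoginContext(True, True, False, 0)
    risky = LoginContext(False, False, True, 3)
    print(required_factor(routine))   # password
    print(required_factor(risky))     # password+second_factor
```

The hard part in practice is not the scoring but choosing step-up challenges that users will actually complete – which is exactly the user experience problem discussed above.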
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's DR post: Why Database Monitoring?

Favorite Securosis Posts

  • David Mortman: (Scape)goats travel under the bus.
  • Mike Rothman: Websense Goes Private. It's been a while since we have had two deals in a week in security, and these were both driven by private equity money. Happy days are here again! Rich's analysis of the first deal was good.
  • Adrian Lane: Solera puts on a Blue Coat.

Other Securosis Posts

  • Making Browsers Hard Targets.
  • Network-based Malware Detection 2.0: Evolving NBMD.
  • Incite 5/22/2013: Picking Your Friends.
  • Wendy Nather abandons the CISSP – good riddance.
  • Spying on the Spies.
  • Websense Going Private.
  • Awareness training extends to the top.
  • This botnet is no Pushdo-ver.
  • A Friday Summary from Boulder: May 17, 2013.
  • Quick Wins with Website Protection Services: Protecting the Website.
  • Quick Wins with Website Protection Services: Are Websites Still the Path of Least Resistance?

Favorite Outside Posts

  • Dave Lewis: Woman Brags About Hitting Cyclist, Discovers Police Also Use Twitter. Wow… just, wow.
  • David Mortman: Business is a Sport, You Need A Team.
  • Mike Rothman: Mrs. Y's Rules for Security Bloggers. Some folks out there think it's easy to be a security blogger. It's hard, actually. But with these 6 rules you too can be on your way to a career of pontification, coffee addiction, and a pretty okay lifestyle. But they are only for the brave.
  • Adrian Lane: A Guide to Hardening Your Firefox Browser in OS X. Good post on securing Firefox from Stach and Liu.

Research Reports and Presentations

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer's Guide.

Top News and Posts

  • Krebs, KrebsOnSecurity, As Malware Memes. Say what you will, but malware authors have a sense of humor.
  • NC Fuel Distributor Hit by $800,000 Cyberheist.
  • The Government Wants A Backdoor Into Your Online Communications. For everything they don't already have a backdoor for.
  • Hacks labelled hackers for finding security hole.
  • Twitter: Getting started with login verification.
  • Chinese hackers who breached Google gained access to sensitive data, U.S. officials say.
  • Yahoo Japan Suspects 22 Million IDs Stolen. It's like 2005 all over again.
  • Skype's ominous link checking: facts and speculation.
  • Bromium: A virtualization technology to kill all malware, forever. Interesting technology.
  • Indian companies at center of global cyber heist. Update on last week's $45M theft.

Blog Comment of the Week

This week's best comment goes to Simon Moffatt, in response to Wendy Nather abandons the CISSP – good riddance.

CISSP is like any professional qualification. When entering a new industry with zero or limited experience, you need some method to prove competence. Organisations need to de-risk the recruitment process as much as possible when recruiting individuals they don't know. It's a decent qualification, just not enough on its own. Experience, like in any role is


Making Browsers Hard Targets

Check out this great secure browser guide from the folks at Stach and Liu. The blog post is OK, but the PDF guide is comprehensive and awesome. Here is the intro:

Sometimes conventional wisdom lets us down. Recently some big names have been in the headlines: Apple, Facebook, Microsoft. They all got owned, and they got owned in similar ways:

  • Specially-crafted malware was targeted at employee computers, not servers.
  • The malware was injected via a browser, most often using malicious Java applets.
  • The Java applets exploited previously unknown "0day" vulnerabilities in the Java VM.
  • The Internet browser was the vector of choice in all cases.

And an even better summary of what it tells us:

  • Patching doesn't help: It goes without saying that there are no security patches for 0day.
  • Anti-virus won't work: It was custom malware. There are no AV signatures.
  • No attachments to open: Attacks are triggered by simply visiting a web page.
  • No shady websites required: Attacks are launched from "trusted" advertising networks embedded within the websites you visit.

And the kill shot: "We need to lock down our browsers." Just in case you figured using Chrome on a Mac made you safe…

The PDF guide goes through a very detailed approach to reducing your attack surface, sandboxing your browser and other critical apps, and changing your browser habits. Funny enough, they demonstrate locking down the Mac Gatekeeper functionality to limit the apps that can be installed on your device. And the software they suggest is Little Snitch, an awesome outbound firewall product I use religiously. One technique they didn't mention: I get some peace of mind from using single-purpose apps (built with Fluid) for sensitive sites, and locking down the outbound traffic allowed to each app with Little Snitch. This level of diligence isn't for everyone. But if you want to be secure against the kinds of attacks we see targeted at browsers, which don't require any user activity to run, you'll do it.

Photo credit: "Target" originally uploaded by Chris Murphy


Quick Wins with Website Protection Services: Protecting the Website

In the introductory post of the Quick Wins with Website Protection Services series, we described the key attack vectors that usually result in pwnage of your site – and possibly data theft, or an availability issue with your site falling down and not being able to get back up. Since this series is all about Quick Wins, we aren't going to belabor the build-up; let's jump right in and talk about how to address these issues.

Application Defense

As we mentioned in the Managing WAF paper, it isn't easy to keep a WAF operating effectively – it involves lots of patching, rule updates for new attacks, and tuning the rules to your specific application. But doing nothing isn't an option, given that attackers use your site as the path of least resistance to gain a foothold in your environment. One of the advantages of front-ending your website with a website protection service (WPS) is a capability we'll call WAF Lite. WAF Lite is, first and foremost, simple. You don't want to spend a lot of time configuring or tuning the application defense. The key to getting a Quick Win is to minimize required customization while providing adequate coverage against the most likely attacks. You want it to just work and block the stuff that is pretty obviously an attack. You know, stuff like XSS, SQLi, and the rest of the OWASP Top 10 list. These are pretty standard attack types, and it's not brain surgery to build rules to block them. It's amazing that everyone doesn't have this kind of simple defense implemented.

Out of one side of our mouths we talk about the need for simplicity, but we also need the ability to customize and/or tune the rules when necessary – which shouldn't be often. It's kind of like having a basic tab, which gives you a few check boxes to configure and needs to be within the capabilities of an unsophisticated admin. That's what you should be using most of the time. But when you need it, or when you enlist expert help, you'd like an advanced tab with lots of knobs and granular controls.

Although a WPS can be very effective against technical attacks, these services are not going to do anything to protect against a logic error in your application. If your application or search engine or shopping cart can be gamed using legitimate application functions, no security service (or dedicated WAF, for that matter) can do anything about it. So parking your sites behind a WPS doesn't mean you can skip QA testing and smart penetration tester types trying to expose potential exploits. OK, we'll end the disclaimer there. We're talking about service offerings in this series, but that doesn't mean you can't accomplish all of these goals using on-premises equipment and managing the devices yourself. In fact, that's how stuff got done before the fancy cloud-everything mentality started to permeate the technology world. But given that we're trying to do things quickly, a service gives you the opportunity to deploy within hours, without significant burn-in and tuning to bring the capabilities online.

Platform Defense

The application layer may be the primary target for attacks on your website (since it's the lowest hanging fruit for attackers), but that doesn't mean you don't have to pay attention to attacks on your technology stack.
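Before digging into the platform layer, here is a minimal sketch of the kind of obvious-attack screening a 'WAF Lite' performs on request parameters. The regexes are deliberately crude illustrations of XSS and SQLi signatures – real services use far more extensive, frequently updated rules – so treat this as a concept sketch, not a control.

```python
#!/usr/bin/env python
# Concept sketch of "WAF Lite" request screening.
# The regexes are illustrative only -- they catch textbook XSS/SQLi strings
# and will miss anything obfuscated, which is exactly why tuning and
# professionally maintained rules matter.
import re

RULES = {
    "xss":  re.compile(r"<\s*script|javascript\s*:|onerror\s*=", re.IGNORECASE),
    "sqli": re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+|union\s+select", re.IGNORECASE),
}


def screen(params: dict) -> list:
    """Return (parameter, rule) pairs that look like obvious attacks."""
    hits = []
    for name, value in params.items():
        for rule, pattern in RULES.items():
            if pattern.search(value):
                hits.append((name, rule))
    return hits


if __name__ == "__main__":
    request = {"q": "1' OR 1=1 --", "comment": "<script>alert(1)</script>"}
    print(screen(request))   # [('q', 'sqli'), ('comment', 'xss')]
```

This is also the point of the basic/advanced tab distinction: a handful of default rules like these should just work, while the advanced knobs exist for the rare cases where they block legitimate traffic.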
We delved a bit into some of the application denial of service (DoS) attacks targeting the building blocks of your application, like Apache Killer and Slowloris. A WPS can help deal with this class of attacks by implementing rate controls on the requests hitting your site, among other application defenses.

Given that search engines never forget, and there is some data you don't want in the great Googly-moogly index, it pays to control which pages are available for crawling by the search bots. You can configure this using a robots.txt file, but not every search engine plays nice. And some bots will jump right to the disallowed sections, since that's where the good stuff is, right? Being able to block automated requests and other search bots via the WPS can keep these pages out of the search engines.

You'll also want to restrict access to non-public areas of your site (and not just for the search engines discussed above). This could include pages like the control panel, sensitive non-public pages, or the staging environment where you test feature upgrades and new designs. Such pages could also be back doors left by attackers to facilitate getting back into your environment. You also want to be able to block nuisance traffic, like comment spammers and email harvesters. These folks don't cause a lot of damage, but they are a pain in the rear, and if you can get rid of them without any incremental effort, it's all good.

A WPS can lock down not only where a visitor goes, but also where they come from. For some of those sensitive pages you may want to enforce that they can only be accessed from the corporate network (either directly or virtually via a VPN), so the WPS can block access to those pages unless the originating IP is on the authorized list. Yes, this (and most other controls) can be spoofed and gamed, but it's really about reducing your attack surface.

Availability Defense

We can't forget about keeping the site up and taking requests, and a WPS can help with this in a number of ways. First of all, a WPS provider has bigger pipes than you. In most cases a lot bigger, which gives them the ability to absorb a DDoS without disruption or even a performance impact. You can't say the same. Of course, be wary of bandwidth-based pricing, since a volumetric attack won't just hammer your site – it will also hammer your wallet. At some point, if the WPS provider has enough customers you can pretty much guarantee at least one of their


Network-based Malware Detection 2.0: Evolving NBMD

In the first post updating our research on Network-based Malware Detection, we talked about how attackers have evolved their tactics, even over the last 18 months, to defeat emerging controls like sandboxing and command & control (C&C) network analysis. As attackers get more sophisticated, defenses need to as well. So we are focusing this series on tracking the evolution of malware detection capabilities and addressing issues with early NBMD offerings – including scaling, accuracy, and deployment. But first we need to revisit how the technology works. For more detail you can always refer back to the original Network-based Malware Detection paper.

Looking for Bad Behavior

Over the past few years malware detection has moved from file signature matching to isolating behavioral characteristics. Given the ineffectiveness of blacklist detection, the ability to identify malware behaviors has become increasingly important. We can no longer judge malware by what it looks like – we need to analyze what a file does to determine whether it is malicious. We discussed this behavioral analysis in Evolving Endpoint Malware Detection, focusing on how new approaches have added contextual determination to make the technology far more effective. You can read our original paper for full descriptions of the kinds of tells that usually mean a device is compromised; a simple list includes memory corruption/injection/buffer overflows; system file/configuration/registry changes; droppers, downloaders, and other unexpected programs installing code; turning off existing anti-malware protections; and identity and privilege manipulation. Of course this list isn't comprehensive – it's just a quick set of guidelines for the kinds of information you can search devices for when you are on the hunt for possible compromises. Other things you might look for include parent/child process inconsistencies, exploits disguised as patches, keyloggers, and screen grabbing. Of course these behaviors aren't necessarily bad – that's why you want to investigate as quickly as possible, before any outbreak has a chance to spread.

The innovation in the first generation of NBMD devices was running this analysis on a device in the perimeter. Early devices implemented a virtual farm of vulnerable machines in a 19-inch rack. This enabled them to explode malware within a sandbox, and then monitor for the suspicious behaviors described above. Depending on the deployment model (inline or out of band), the device either fired an alert or could actually block the file from reaching its target. It turns out the term sandbox is increasingly unpopular amongst security marketers for some unknown reason, but that's what they use – a protected and monitored execution environment for risk determination. Later in this series we will discuss options for ensuring the sandbox can scale to your needs.

Tracking the C&C Malware Factory

The other aspect of network-based malware detection is identifying egress network traffic that shows patterns typical of communication between compromised devices and controlling entities. Advanced attacks start by compromising and gaining control of a device. The malware then establishes contact with its command and control infrastructure to fetch a download with specific attack code, along with instructions on what to attack and when.
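The indicators described next can often be approximated with simple heuristics. As one illustrative example, here is a sketch that scores how random the domains a host reaches out to look – algorithmically generated (DGA) command and control domains tend to have much higher character entropy than human-chosen names. The threshold and sample domains are assumptions for demonstration, not tuned values.

```python
#!/usr/bin/env python
# Sketch: flag domains whose leftmost DNS label looks machine-generated.
# Shannon entropy over the label's characters is a crude stand-in for the
# statistical and lexical features real DGA detectors use; the 3.5 threshold
# is an assumption for illustration, not a tuned or recommended value.
import math
from collections import Counter


def shannon_entropy(text: str) -> float:
    """Bits of entropy per character in the string."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Heuristic: a long, high-entropy leftmost label suggests a DGA domain."""
    label = domain.lower().split(".")[0]
    return len(label) >= 10 and shannon_entropy(label) >= threshold


if __name__ == "__main__":
    observed = ["securosis.com", "mail.example.org", "xkqpw8vz3tq9hd2.info"]
    for d in observed:
        print(d, "-> suspicious" if looks_generated(d) else "-> ok")
```

Entropy alone false-positives on things like CDN hostnames, which is why the indicators below pair DNS analysis with IP reputation rather than treating any single signal as a verdict.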
In Network-based Threat Intelligence we dug deep into the kinds of indicators you can look for to identify malicious activity on the network, such as:

  • Destination: You can track the destinations of all network requests from your environment and compare them against a list of known bad places. This requires an IP reputation capability – basically a list of known bad IP addresses. Of course IP reputation can be gamed, so combining it with DNS analysis to identify likely Domain Generation Algorithms (DGA) helps eliminate false positives.
  • Strange times: If you see a significant volume of traffic that is out of character for a specific device or time – such as the marketing group suddenly performing SQL queries against engineering databases – it's time to investigate.
  • File types, contents, and protocols: You can also learn a lot by monitoring all egress traffic, looking for large file transfers, non-standard protocols (encapsulated in HTTP or HTTPS), weird encryption of files, or anything else that seems a bit off. These anomalies don't necessarily mean compromise, but they warrant further investigation.
  • User profiling: Beyond the traffic analysis described above, it is helpful to profile users and identify which applications they use and when. This kind of application awareness can identify anomalous activity on devices and give you a place to start investigating.

Layers FTW

We focus on network-based malware detection in this series, but we cannot afford to forget endpoints. NBMD gateways miss stuff. Hopefully not a lot, but it would be naive to believe you can keep computing devices (endpoints or servers) clean. You still need some protection on your endpoints, and ideally controls that work together to ensure you have full protection both when the device is on the corporate network and when it is not. This is where threat intelligence plays a role, making both network and endpoint malware detection capabilities smarter. You want bi-directional communication, so malware indicators found by the network device or in the cloud are accessible to endpoint agents. Additionally, you want malware identified on devices to be sent to the network for further analysis, profiling, determination, and ultimately distribution of indicators to other protected devices. This wisdom of crowds is key to fighting advanced malware.

You may be one of the few, the lucky, and the targeted. No, it's not a new soap opera – it just means you will see interesting malware attacks first. You'll catch some and miss others – and by the time you clean up the mess you will probably know a lot about what the malware does, how it works, and how to detect it. Exercising good corporate karma, you will have the opportunity to help other companies by sharing what you found, even if you remain anonymous. If you aren't a high-profile target this information sharing model works even better, allowing you to benefit from the misfortune of the targeted. The goal is to increase your chance of catching the malware


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.