Securosis Research

Endpoint Security Management Buyer’s Guide Published (with the Index of Posts)

We have published the Endpoint Security Management Buyer's Guide paper, which provides a strategic view of endpoint security management, addressing the complexities caused by malware's continuing evolution, device sprawl, and mobility/BYOD. The paper focuses on periodic controls that fall under good endpoint hygiene (such as patch and configuration management) and ongoing controls (such as device control and file integrity monitoring) that detect unauthorized activity and prevent it from completing. The crux of our findings involves using an endpoint security management platform to aggregate the capabilities of these individual controls, providing policy and enforcement leverage to decrease cost of ownership and increase the value of endpoint security management. This excerpt says it all:

Keeping track of 10,000+ of anything is a management nightmare. With ongoing compliance oversight and evolving security attacks against vulnerable endpoint devices, getting a handle on managing endpoints becomes more important every day. We will not sugarcoat things. Attackers are getting better – and our technologies, processes, and personnel have not kept pace. It is increasingly hard to keep devices protected, so you need to take a different and more creative view of defensive tactics, while ensuring you execute flawlessly – because even the slightest opening provides opportunity for attackers.

One of the cool things we've added to the new Buyer's Guide format is a set of 10 questions to consider as you evaluate and deploy the technology:

  • What specific controls do you offer for endpoint management? Can the policies for all controls be managed via your console?
  • Does your organization have an in-house research team? How does their work make your endpoint security management product better?
  • What products, devices, and applications are supported by your endpoint security management offerings?
  • What standards and/or benchmarks are offered out of the box for your configuration management offering?
  • What kind of agentry is required by your products? Is the agent persistent or dissolvable? How are updates distributed to managed devices? What do you do to ensure agents are not tampered with?
  • How do you handle remote and disconnected devices?
  • What is your plan to extend your offering to mobile devices and/or virtual desktops (VDI)?
  • Where does your management console run? Do we need a dedicated appliance? What kind of hierarchical management do you support? How customizable is the management interface?
  • What kinds of reports are available out of the box? What is involved in customizing specific reports?
  • What have you done to ensure the security of your endpoint security management platform? Is strong authentication supported? Have you done an application penetration test on your console? Does your engineering team use any kind of secure software development process?

You can check out the series of posts we combined into the eventual paper:

  • The Business Impact of Managing Endpoints
  • The ESM Lifecycle
  • Periodic Controls
  • Ongoing Controls – Device Control
  • Ongoing Controls – File Integrity Monitoring
  • Platform Buying Considerations
  • 10 Questions

We thank Lumension Security for licensing this research and enabling us to distribute it at no cost to readers. Check out the full paper in our research library, or download it directly (PDF).
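As a footnote for readers curious about the mechanics of one of the ongoing controls mentioned above: file integrity monitoring boils down to hashing watched files and comparing against a trusted baseline. Here is a minimal, purely illustrative Python sketch – the watched paths and baseline location are hypothetical, and real FIM products add signed baselines, kernel-level hooks, and real-time alerting:

```python
import hashlib
import json
import os

# Hypothetical watched files and baseline location.
BASELINE = "baseline.json"
WATCHED = ["/etc/passwd", "/etc/hosts"]

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot():
    return {p: sha256(p) for p in WATCHED if os.path.exists(p)}

if not os.path.exists(BASELINE):
    with open(BASELINE, "w") as f:
        json.dump(snapshot(), f)          # first run: record trusted baseline
else:
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, digest in snapshot().items():
        if baseline.get(path) != digest:  # any change gets flagged for review
            print(f"Integrity change detected: {path}")
```

Run periodically (or from a scheduler), this is the whole detection loop; everything else in a commercial product is about protecting the baseline and acting on the alert.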


Securing Big Data: Operational Security Issues

Before I dig into today's post I want to share a couple of observations. First, my new copy of the Harvard Business Review just arrived. The cover story is "Getting Control of Big Data". It's telling that HBR considers big data a trend important enough to warrant a full spread, and feels business managers need to understand big data and the benefits and risks it poses to business. As soon as I finish this post I intend to dive into these articles. Now that I have just about finished this research effort, I look forward to contrasting what I have discovered with their perspective.

Second, when we talk about big data security, we are really referring to both data and infrastructure security. We want to protect the application (or database, if you prefer that term) that manages data, with the end goal of protecting the information under management. If an attacker can access data directly, bypassing the database management system, they will. Barring a direct path to the information, they will look for weaknesses in, or ways to subvert, the database application. So it's important to remember that when we talk about database security we mean both data and infrastructure protection.

Finally, a point about clarity. Big data security is one of the tougher topics to describe, especially as we here at Securosis prefer to describe things in black and white terms for the sake of clarity. But for just about every rule we establish and every emphatic statement we make, we have to acknowledge exceptions. Given the variety of big data distributions and add-on capabilities, you can likely find a single instance of every security control described in today's post. But it's usually a single security control, like encryption, with the other security controls absent from the various packages. Nothing offers even a partial suite of solutions, much less a comprehensive offering.

Today I want to discuss operational security of big data environments. Unlike yesterday's post, which discussed architectural security issues endemic to the platform, it is now time to address security controls of an operational nature. That includes "turning the dials" capabilities like configuration management and access controls, as well as "bolt-on" capabilities such as auditing and security gateways. We see the greatest impact in these areas, and vendors are jumping in with security offerings to fill the gaps. Normally when we consider how to secure data repositories, we consider the following major areas:

  • Encryption: The standard for protecting data at rest is encryption, to protect data from undesired access. And just because folks don't use archiving features to back up data does not mean a rogue DBA or cloud service manager won't. I think two or three of the more obscure NoSQL variants provide encryption for data at rest, but most do not. And the majority of available encryption products offer neither sufficient horizontal scalability nor adequate transparency for use with big data. This is a critical issue.
  • Administrative data access: Each node has an admin, and each admin can read the node's data if they choose. As with encryption, we need a boundary or facility to provide separation of duties between different administrators. The requirement is the same as on relational platforms – but big data platforms lack relational databases' array of built-in facilities, documentation, and third-party tools to address it. Unwanted direct access to data files or data node processes can be addressed through a combination of access controls, separation of roles, and encryption technologies, but out of the box, data is only as secure as your least trustworthy administrator. It's up to the system designer to select controls to close this gap.
  • Configuration and patch management: When managing a cluster of servers, it's common to have nodes running different configurations and patch levels. And if you're using dissimilar platforms to support the cluster, you need to figure out how to handle management. Existing configuration management tools work for the underlying platforms, and HDFS Federation will help with cluster management, but careful planning is still necessary. I will go into more detail about how in the next post, when I make recommendations. The cluster may tolerate nodes cycling without data loss or service interruption, but reboots can still cause serious performance issues, depending on which nodes are affected and how the cluster is configured. The upshot is that people don't patch, fearing user complaints. Perhaps you have heard that one before.
  • Authentication of applications/clients: Hadoop uses Kerberos to authenticate users and add-on services to the HDFS cluster. But a rogue client can be inserted onto the network if a Kerberos ticket is stolen or duplicated. This is more of a concern when embedding credentials in virtual and cloud environments, where it's relatively easy to introduce an exact replica of a client app or service. A clone of a node is often all that's needed to introduce a corrupted node or service into a cluster – impersonating a node or service is easy, but it does require the attacker to compromise the management plane of your environment, or to obtain a backup of a client. Regardless of it being a pain to set up, strong authentication through Kerberos is one of your principal security tools; it helps solve the critical problem of who can access Hadoop services (a minimal client-side sketch appears at the end of this post).
  • Audit and logging: One area with a variety of add-on capabilities is logging. Scribe and LogStash are open source tools that integrate into most big data environments, as do a number of commercial products. So you just need to find a compatible tool, install it, integrate it with other systems such as SIEM or log management, and then actually review the results. Without actually looking at the data and developing policies to detect fraud, logging is not useful.
  • Monitoring, filtering, and blocking: There are no built-in monitoring tools to look for misuse or block malicious queries. In fact, I don't believe anyone has ever described what a malicious query might look like in a big data environment – other than crappy MapReduce
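As promised above, here is a minimal sketch of what Kerberos-authenticated client access can look like, assuming a cluster with security enabled, a valid ticket already obtained (e.g. via kinit), and the Python requests and requests_kerberos packages. The host and file path are hypothetical:

```python
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# Hypothetical namenode and path; assumes a secured cluster and a valid
# Kerberos ticket in the local credential cache.
url = "http://namenode.example.com:50070/webhdfs/v1/data/events.log"

# SPNEGO/Kerberos authentication: the cluster rejects clients that
# cannot present a valid service ticket.
resp = requests.get(
    url,
    params={"op": "GETFILESTATUS"},
    auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL),
)
resp.raise_for_status()
print(resp.json()["FileStatus"])
```

The point is not the three lines of ceremony – it's that without that ticket, the request above is refused, which is exactly the "who can access Hadoop services" problem Kerberos addresses.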


Defending Against DoS Attacks: Attacks

Our first post built a case for considering availability as an aspect of security, rather than focusing only on confidentiality and integrity. This shift has been driven by Denial of Service (DoS) attacks, which attackers use in many different ways, including extortion (using the threat of an attack), obfuscation (to hide exfiltration), hacktivism (to draw attention to a particular cause), and even friendly fire (when a promotion goes a little too well). Understanding the adversary and their motivation is one part of the puzzle. Now let's look at the types of DoS attacks you may face – attackers have many arrows in their quivers, and use them all depending on their objectives and targets.

Flooding the Pipes

The first kind of Denial of Service attack is really a blunt instrument: it tries to oversubscribe the bandwidth and computing resources of network (and increasingly server) devices to impair availability. These attacks aren't very sophisticated but, as evidenced by the ongoing popularity of volume-based attacks, they remain fairly effective. These tactics have been in use since before the Internet bubble, leveraging largely the same approach, though they have gotten easier with bots to do the heavy lifting. Of course, this kind of blasting must be done somewhat carefully to maintain the usefulness of the bots, so bot masters have developed sophisticated approaches to keep their bots out of ISPs' penalty boxes. So you will see limited bursts of traffic from each bot, and plenty of IP address spoofing to make it harder to track down where the traffic is coming from – but even short bursts from 100,000+ bots can flood a pipe.

Quite a few specific techniques have been developed for volumetric attacks, but most look like some kind of flood. In a network context, attackers focus on overfilling the pipes. Floods target specific protocols (SYN, ICMP, UDP, etc.) and work by sending requests to a target using the chosen protocol but never acknowledging the responses. Enough of these outstanding requests limits the target's ability to communicate (a toy sketch of spotting this symptom follows below). But attackers need to stay ahead of Moore's Law, because targets' ability to handle floods has improved with processing power. So network-based attacks may include encrypted traffic, forcing the target to devote additional computational resources to processing massive amounts of SSL traffic. Given the resource-intensive nature of encryption, this type of attack can melt firewalls and even IPS devices unless they are configured specifically for large-scale SSL support. We also see some malformed protocol attacks, but these aren't as effective nowadays, as even unsophisticated network security perimeter devices drop bad packets at wire speed.

These volume-based attacks are climbing the stack as well, targeting web servers by actually completing connection requests, making a simple GET request, and resetting the connection, over and over again – with approximately the same impact as a volumetric attack: over-consumption of resources that effectively knocks down servers. These attacks may also include a large payload to further consume bandwidth. The now famous Low Orbit Ion Cannon, a favorite tool of the hacktivist crowd, has undergone a similar evolution, first targeting network resources and now targeting web servers as well.
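As promised above, a toy illustration of one observable symptom of a SYN flood: half-open connections pile up because the attacker never completes the handshake. This is a minimal monitoring sketch, not a defense – it assumes a Linux host with netstat available, and the alert threshold is an arbitrary placeholder:

```python
import subprocess

# Toy check: count half-open (SYN_RECV) TCP connections, which pile up
# during a SYN flood because the attacker never completes the handshake.
ALERT_THRESHOLD = 500  # arbitrary illustrative threshold

def half_open_count() -> int:
    out = subprocess.run(
        ["netstat", "-tn"], capture_output=True, text=True, check=True
    ).stdout
    return sum(1 for line in out.splitlines() if "SYN_RECV" in line)

count = half_open_count()
if count > ALERT_THRESHOLD:
    print(f"Possible SYN flood: {count} half-open connections")
else:
    print(f"{count} half-open connections (normal)")
```

None of this stops an attack, but it can tell you one is underway before your users do.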
It gets even better – these attacks can be magnified to increase their impact, by simultaneously spoofing the target's IP address and requesting sessions from thousands of other sites, which then bury the target in a deluge of misdirected replies, further consuming bandwidth and resources. Fortunately, defending against these network-based tactics isn't overly complicated, as we will discuss in the next post. But without a sufficiently large network device at the perimeter to block these attacks, or an upstream service provider/traffic scrubber to dump offending traffic, devices fall over in short order.

Overwhelming the Application

But attackers don't only attack the network – they increasingly attack applications as well, following the rest of the attacker community up the stack. Your typical n-tier web application has a termination point (usually a web server), an application server to handle application logic, and a database to store the data. Attackers can target every tier of the stack to impair application availability, so let's dig into each layer to see how these attacks work.

The termination point is usually the first target in application DoS attacks. These started with simple GET floods, as described above, but quickly evolved to additional attack vectors. The best known application DoS attack is probably RSnake's Slowloris, which consumes web server resources by sending partial HTTP requests – effectively opening connections and then holding the sessions open by sending additional headers at regular intervals (a toy sketch of the timeout defense that blunts this tactic appears at the end of this post). This approach is far more efficient than a GET flood, requiring only hundreds of requests at regular intervals rather than a constant stream of thousands, and a single device is enough to knock down a large site. These application attacks have evolved over time, and now send complete HTTP requests to evade IDS and WAF devices looking for incomplete HTTP requests, while tampering with payloads to confuse applications and consume resources. As defenders learn the attack vectors and deploy defenses, attackers evolve their attacks. The cycle continues.

Web server attacks can also target weaknesses in the web server platform itself. For example, the Apache Killer attack sends a malformed HTTP Range request to take advantage of an Apache vulnerability. The Apache folks quickly patched the code to address the issue, but it shows how attackers target weaknesses in the underlying application stack to knock the server over. And of course unpatched Apache servers remain vulnerable today at many organizations. Similarly, the RefRef attack leverages SQL injection to plant a rogue .js file on a server, which then hammers a backend database into submission with seemingly legitimate traffic originating from an application server. Again, application and database server patches are available for the underlying infrastructure, but the vulnerability remains if either patch is missing.

Attackers can also target legitimate application functionality. One example of such an attack targets the search capability within a web site. If an attacker scripts a series of overly broad
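As promised above, here is a minimal sketch of the absolute-deadline defense against slow HTTP attacks, using plain Python sockets and a toy single-threaded server. Real web servers implement the same idea in features like Apache's mod_reqtimeout; the port and deadline here are arbitrary:

```python
import socket
import time

# Toy server enforcing an absolute deadline for receiving complete HTTP
# headers. Slowloris works against servers that wait indefinitely; note a
# per-read timeout alone is not enough, because trickling one header every
# few seconds would reset it. Hence the absolute deadline.
HEADER_DEADLINE_SECS = 10  # arbitrary illustrative deadline

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))
server.listen(16)

while True:
    conn, addr = server.accept()
    deadline = time.monotonic() + HEADER_DEADLINE_SECS
    data = b""
    try:
        while b"\r\n\r\n" not in data:       # blank line ends the headers
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                raise socket.timeout          # absolute deadline exceeded
            conn.settimeout(remaining)
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    except socket.timeout:
        pass                                  # slow client: drop it
    finally:
        conn.close()
```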


Incite 9/27/2012: They Own the Night

Our days just keep getting longer and longer. When the kids were younger, afternoons and early evenings were a blur of activities, homework, hygiene, meals, reading, and then bed. Most nights the kids were in bed by 8:30, and the Boss and I could eat in peace, watch a little TV, catch up, and basically take a breath. But since XX1 entered middle school, things have changed. The kids have adapted fine. The Boss and me, not so much.

Now it's all about dividing and conquering. I handle the early shift and get the twins ready for school. They are on the bus by 7:20, and then I usually head over to some coffee shop and start working. The Boss handles XX1 and has her on the bus at 8:10, and then she starts her day of working through all the crap that has to happen to keep the trains running. The twins get off the bus at 3pm or so. Then it's homework time and shuttling them off to activities. XX2 isn't home until 4:30; then some days she can get an hour or two of work in, and other days she can't. Inevitably she gets home from dance and has to start her homework. She usually wraps up around 10, but I usually get enlisted to help with the writing or math. And there are nights when XX1 is up until 11 or even later trying to get everything done.

So there is no peace and quiet. Ever. We find ourselves staying up past midnight, because those 90 minutes after all the kids go to bed are the only time we have to catch up and figure out the logistics for the next day. Which assumes I don't have work I need to get done. I know Rich has it harder right now with his 2 (and soon to be 3) kids under 4. I remember those days, and don't miss the sleep deprivation. And I'm sure he misses sleeping in on weekends. At least I get to do that – our kids want us to sleep as late as possible, so they can watch more crappy shows on Nick Jr. But I do miss the quiet evenings after the kids were sleeping. Those are likely gone for a little while. For the next 9 years or so, the kids own the night.

–Mike

Photo credits: We Own The Night originally uploaded by KJGarbutt

Heavy Research

We're back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Defending Against Denial of Service (DoS) Attacks: Introduction
  • Securing Big Data: Architectural Issues
  • Securing Big Data: Security Issues with Hadoop

Incite 4 U

Responsible is in the eye of the beholder: My personal views on disclosure have changed a lot over the years. If you haven't changed your views in the last 10 years, you are either a hermit or a religious zealot – the operating environment has changed a lot. And the longer I have watched (and participated in) the debate, the more it seems to be about egos rather than the good of the public. And I fully mean this on all sides – researchers, vendors, users (but less), government, and pundits. Take Richard Bejtlich's latest post on vendors or researchers going public when they find command and control servers. He expresses the legitimate concern that whoever finds and publicizes this information may often be blowing a law enforcement or intelligence operation. On the other hand, law enforcement and intelligence agencies sure don't make it easy to report these findings, and researchers might be sitting there watching people get compromised (including their customers). This is a hard problem to solve – if we even can. Just ask the Stratfor guys, who were materially damaged while the FBI was not only watching, but 'assisting' the attack via their confidential informant. Better communication and cooperation is probably the answer, but I have absolutely no confidence that can happen at scale, even if some companies (including Richard's employer) have those ties. No, I don't have an answer, but we all need open minds, and probably a bit less ego and dogma. – RM

The mark of a mature market: You can joke about the SC Magazine reviews operation – how they rarely actually test products, but instead sit through WebEx demos run by experienced SEs who make every product seem totally awesome. That may be true, but it's not the point. It's about relative ratings as an indicator of a mature market. If you look at SC Mag's recent group test of email security devices, you'll see 9 out of 10 products graded higher than 4 1/4 stars (out of 5). That 10th product must really suck, at 3 stars. But even if you deflate the ratings by a star (or two), you'll see very little outward differentiation. Which means the product category has achieved a lowest common denominator around a base set of features. So how do you decide between largely undifferentiated offerings? Price, of course… – MR

Progress, at a glacial pace: I disagree with Mike Mimoso about the Disconnect Between Application Development and Security Getting Wider. We have been talking about this problem for almost a decade with not much improvement, so it certainly can feel that way. But I can say from personal experience that 10 years ago even the companies that developed security software knew nothing about secure code development, while now there is a better than even chance that someone on the team knows a little security. Have their processes changed to embrace security? Only at a handful of firms. The issue, in my opinion, is and has been the invisible boundary around the dev team to shield them from outside influence. Developers are largely isolated to keep


Friday Summary: September 28, 2012 (A weird security week)

There was a lot of big news this week in the security world, most of it bad. Even if you skip the intro, make sure you read the Top News section.

Rich here.

Growing up I was – and this might shock some of you – a bit of a nerd. I glommed onto computers and technology pretty much as soon as I had access to them, and when I didn't I was reading books and watching shows that painted wonderful visions of the future. I was a hacker before I ever heard the word, constantly taking things apart to see how they worked, then building my own versions. Technology is thus very intuitive to me. I never had to learn it the way people coming to computers and electronics later in life do. I began programming so early in life that it keyed into the same (maybe) brain pathways that allow children to learn multiple languages with far more facility than adults. While my generational peers are far more comfortable with technology and computers than our parents, I generally still have a leg up due to my early immersion.

I naturally assumed that the generations following me would grow up closer to my experiences than to those of my less geeky peers. But much to my surprise, although they are very comfortable with computers, they don't have the damnedest idea of how they work or how to bend them to their own will. Unless it involves cats and PowerPoint. Lacking teachers who understand tech, they grow up learning how to use Office, not how to program or dig into technology beyond the shallowest surface levels.

As I have started raising my own kids, I worry about how to get them interested in technology, and algorithmic thinking, in a world where iPads put the entire Disney repository a few taps away. I'm not talking about forcing them to become programmers, but taking advantage of their brain plasticity to reinforce logical thinking and problem solving, and at least convey a sense of deeper exploration. This really did worry me, but over the past few months I have realized that as a parent I have the opportunity to engage my children to degrees my parents couldn't possibly imagine.

It was a big deal when I got my first Radio Shack electronics kit. It was an even bigger deal when I made my first radio. My kids? This past weekend my 3.5- and 2-year-olds got to play with their first home-built LEGO robot. Yes, I did most of the building and all the programming, but I could see them learning the foundation of how it worked and what we could make it do. Building a robot to play with our cat is a hell of a lot more exciting than putting a picture of a cat in a PowerPoint.

This is barely the start. I grew up pushing ASCII pixels on screens. They will grow up programming, and perhaps designing, autonomous flying drones with high-definition video feeds. I grew up making simple electric candles that would turn on in a dark room. They will be able to create wonderful microcontroller-based objects they then embed in 3-D printed housings. There's no guarantee they will actually be interested in these things, but social engineering isn't just for pen testing. Hopefully I can manipulate the crap out of them so they at least get the basics. And if not, it means more stock fab material for me.

I'm biased. I think most of my success in life is due to a combination of logical thinking, the exploratory drive of a hacker, and a modest facility with the written word. As a parent I now have tools to teach these skills to my children in ways our parents could only dream about.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted on the myth of cyberinsurance.
  • Mike's Security Intelligence post at Dark Reading.

Favorite Securosis Posts

  • Adrian Lane: My Security Fail (and Recovery) for the Week. Gave me a moment of panic.
  • Mike Rothman: Securing Big Data: Architectural Issues. This series is critical for you to learn what's coming. If it hasn't already arrived.
  • Rich: David Mortman's Another Inflection Point. The more we let go of, the more we can do.

Other Securosis Posts

  • Defending Against DoS Attacks: The Attacks.
  • Incite 9/27/2012: They Own the Night.
  • New Research Paper: Pragmatic WAF Management.

Favorite Outside Posts

  • Adrian Lane: OAuth 2.0 – Google Learns to Crawl. For someone learning just how much I don't know about authorization, this is a good overview of the high points of the OAuth security discussion.
  • Mike Rothman: 25 Great Quotes from the Princess Bride. 25 YEARS! WTF? I don't feel that old, but I guess I am. Take a trip down memory lane and remember one of the better movies ever filmed. IMHO, anyway.
  • Rich: Connect with your inner grey hat. The title is a bit misleading, but the content is well stated. You need to change up your thinking constantly.

Research Reports and Presentations

  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • Understanding and Selecting Data Masking Solutions.
  • Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks.
  • Implementing and Managing a Data Loss Prevention Solution.
  • Defending Data on iOS.
  • Malware Analysis Quant Report.
  • Report: Understanding and Selecting a Database Security Platform.

Top News and Posts

  • The big news this week is the compromise and use of an Adobe code signing certificate in targeted attacks. Very serious indeed.
  • Banks still fighting off the Iranian DDoS attacks.
  • OpenBTS on Android. This is the software you use to fake a cell phone base tower.
  • Smart grid control vendor hacked. Yes, they had deep access to their clients, why do you ask?
  • An interview with the author of XKCD. Sudo read this article.
  • PHPMyadmin backdoored.
  • PPTP now really and truly dead.
  • More Java 0day. Seriously, what the hell is going on this week?
  • And to top everything off, a Sophos post


New Research Paper: Pragmatic WAF Management

We are proud to announce a new research paper on Pragmatic Web Application Firewall (WAF) Management. This paper has been a long time coming – we have been researching this topic for three years, looking for the right time to discuss the issues surrounding WAFs.

Our key finding is that Web Application Firewalls can genuinely raise the bar on application security. Properly set up, they block many attacks, such as SQL injection, and, just as importantly, 'virtually' patch applications faster than code fixes can be implemented (a toy sketch of the virtual patching idea appears at the end of this post). There is ample evidence that building security into applications from the get-go is more efficient, but unfortunately it may not be practical or even realistic. Most firms already have dozens – if not thousands – of vulnerable web apps that will take years to fix. So the real answer is both: "build security in" and "bolt security on". And that is how WAFs help protect web applications when timely code fixes are not an option.

During our research we heard a lot of negative feedback from various security practitioners – specifically pen testers – about how WAFs barely slow skilled attackers down. We heard many true horror stories, but they were not due to any specific deficiency in WAF technology. The common theme among critics was that the problems stemmed from customers' ineffective management practices in WAF deployment and rule tuning. We also heard many stories about push-back from development teams who refused to wade through the reams of vulnerability output generated by WAFs. Some of this was due to the poor report quality of WAF products, and some was due to internal politics and process issues. But in both cases we concluded, from hundreds of conversations, that WAFs provide unique value, and their issues can be mitigated through effective management.

For more detailed information on our recommendations, as well as how we reached our conclusions, we encourage you to grab a copy of the white paper.

Finally, Securosis provides the vast bulk of our research for free and without user registration. Our goal, as always, is to help organizations understand security issues and products, and to help get your job done with as little headache as possible. But it's community support that enables us to produce our research, so we want to make special mention of the firms that sponsored this paper: Alert Logic, Barracuda Networks, and Fortinet. We thank our sponsors, as well as those of you who took the time to discuss your WAF stories and provide feedback during this project!

The paper is available to download: Pragmatic WAF Management (PDF).
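As promised above, a purely illustrative sketch of virtual patching (not from the paper): the idea reduces to blocking a known-bad input pattern in front of the vulnerable code until a real fix ships. Here is a minimal WSGI middleware version; the parameter name and allowed pattern are hypothetical, and a real WAF rule set is far more thorough:

```python
import re
from urllib.parse import parse_qs

# Hypothetical virtual patch: the app's "id" parameter is known to be
# injectable, so reject anything that isn't a plain integer until the
# code-level fix is deployed.
INJECTABLE_PARAM = "id"
ALLOWED = re.compile(r"^\d{1,10}$")

class VirtualPatch:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        qs = parse_qs(environ.get("QUERY_STRING", ""))
        for value in qs.get(INJECTABLE_PARAM, []):
            if not ALLOWED.match(value):
                start_response("403 Forbidden",
                               [("Content-Type", "text/plain")])
                return [b"Request blocked by virtual patch"]
        return self.app(environ, start_response)
```

Wrapping the application object (app = VirtualPatch(app)) applies the rule to every request, which is the same chokepoint position a WAF occupies on the network.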


Securing Big Data: Architectural Issues

In the previous post we went to some length to define what big data is, because the architectural model is critical to understanding how it poses different security challenges than traditional databases, data warehouses, and massively parallel processing environments. What distinguishes big data environments is their fundamentally different deployment model: highly distributed and elastic data repositories enabled by the Hadoop Distributed File System. A distributed file system provides many of the essential characteristics (distributed redundant storage across resources) and enables massively parallel computation. But specific aspects of how each layer of the stack integrates – such as how data nodes communicate with clients and resource management facilities – raise many concerns. For those of you not familiar with the model, this is the Hadoop architecture.

Architectural Issues

  • Distributed nodes: The idea that "Moving Computation is Cheaper than Moving Data" is key to the model. Data is processed anywhere processing resources are available, enabling massively parallel computation. It also creates a complicated environment with a large attack surface, and it's harder to verify consistency of security across all nodes in a highly distributed cluster of possibly heterogeneous platforms.
  • 'Sharded' data: Data within big data clusters is fluid, with multiple copies moving to and from different nodes to ensure redundancy and resiliency. This automated movement makes it very difficult to know precisely where data is located at any moment in time, or how many copies are available. This runs counter to traditional centralized data security models, where data is wrapped in various protections until it's used for processing. Big data is replicated in many places and moves as needed. The 'containerized' data security model is missing – as are many other relational database concepts.
  • Write once, read many: Big data clusters handle data differently than other data management systems. Rather than the classical "Insert, Update, Select, and Delete" set of basic operations, they focus on write (Insert) and read (Select). Some big data environments don't offer delete or update capabilities at all. It's a 'write once, read many' model, which is excellent for performance, and a great way to collect a sequence of events and track changes over time – but removing and overwriting sensitive data can be problematic. Data management is optimized for performance of insertion and query processing, at the expense of content manipulation.
  • Inter-node communication: Hadoop and the vast majority of available add-ons that extend core functions don't communicate securely. TLS and SSL are rarely available. When they are – as with HDFS proxies – they only cover client-to-proxy communication, not proxy-to-node sessions. Cassandra does offer well-engineered TLS, but it's the exception.
  • Data access/ownership: Role-based access is central to most database security schemes. Relational and quasi-relational platforms include roles, groups, schemas, label security, and various other facilities for limiting user access, based on identity, to an authorized subset of the available data set. Most big data environments offer access limitations at the schema level, but no finer granularity than that. It is possible to mimic these more advanced capabilities in big data environments, but that requires the application designer to build the functions into applications and data storage (a small sketch at the end of this post illustrates just how coarse the built-in controls are).
  • Client interaction: Clients interact with resource managers and nodes. While gateway services for loading data can be defined, clients communicate directly with both the master/name server and individual data nodes. This tradeoff limits the ability to protect nodes from clients, clients from nodes, and even name servers from nodes. Worse, the distribution of self-organizing nodes runs counter to many security tools, such as gateways, firewalls, and monitoring, which require a 'chokepoint' deployment architecture. Security gateways assume linear processing, and become clunky or overly restrictive in peer-to-peer clusters.
  • NoSecurity: Finally, and perhaps most importantly, big data stacks build in almost no security. As of this writing – aside from service-level authorization, access control integration, and web proxy capabilities from YARN – no facilities are available to protect data stores, applications, or core Hadoop features. All big data installations are built upon a web services model, with few or no facilities for countering common web threats (i.e., anything on the OWASP Top Ten), so most big data installations are vulnerable to well-known attacks.

There are a couple of other issues with securing big data at an architectural level, which are not issues specifically with big data, but with security products in general. To add security capabilities to a big data environment, they need to scale with the data. Most 'bolt-on' security does not scale this way, and simply cannot keep up. Because security controls are not built into the products, there is a classic impedance mismatch between NoSQL environments and aftermarket security tools. Most security vendors have adapted their existing offerings as well as they can – usually working at data load time – but only a handful of traditional security products can dynamically scale along with a Hadoop cluster.

The next post will go into day-to-day operational security issues.
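Before that, as promised in the data access/ownership point above, a small illustration of how coarse the built-in controls are. HDFS exposes POSIX-style owner, group, and mode bits per file or directory – and not much more – via the documented WebHDFS REST operations. The host, path, and account names below are hypothetical, and this assumes an unsecured dev cluster where the user.name parameter is trusted:

```python
import requests

# Hypothetical unsecured dev cluster; WebHDFS trusts the user.name parameter.
BASE = "http://namenode.example.com:50070/webhdfs/v1"

def set_owner_and_mode(path: str, owner: str, group: str, mode: str) -> None:
    # SETOWNER and SETPERMISSION are the documented WebHDFS operations.
    requests.put(f"{BASE}{path}", params={
        "op": "SETOWNER", "owner": owner, "group": group,
        "user.name": "hdfs",
    }).raise_for_status()
    requests.put(f"{BASE}{path}", params={
        "op": "SETPERMISSION", "permission": mode,
        "user.name": "hdfs",
    }).raise_for_status()

# File-level bits are the whole story: no rows, columns, or labels here.
set_owner_and_mode("/data/events", owner="etl", group="analytics", mode="750")
```

Anything finer – per-column masking, label security, row filtering – is up to the application layer, which is exactly the gap described above.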


My Security Fail (and Recovery) for the Week

I remember sitting at lunch with a friend and well-respected member of our security community, describing the architecture we used to protect our mail server. I'm not saying it's perfect, but this person responded with, "That's insane – I know people selling 0-days to governments who don't go that far." On another occasion I was talking with someone with vastly more network security knowledge and experience than me – someone who once protected a site attacked daily by very knowledgeable predators – and he was… confused as to why I architected the systems the way I did.

Yesterday, it saved my ass.

Now, I'm not 100% happy with our current security model for mail. There are aspects that are potentially flawed, but I've done my best to reduce the risk while still maintaining the usability we want. I actually have plans to close the last couple of holes I'm not totally comfortable with, but our risk is still relatively low even with them. Here's what happened.

Back when we first set up our mail infrastructure we hit a problem with our VPN connections. Our mail is on a fully segregated network, and we had some problems with our ISP and IPSec-based VPNs, even though I tried multiple back-end options. Timing-wise we hit a point where I had to move forward, so I set up PPTP and mandated strong passwords (as in, I reviewed or set all of them myself). Only a handful of people have VPN access anyway, and at the time a properly constructed PPTP password was still very secure. That delusion started dying this summer, and was fully buried yesterday thanks to a new cloud-based MS-CHAP cracking tool released by Moxie Marlinspike and David Hulton. The second I saw that article, I shut down the VPN.

But here's how my paranoia saved my ass. As a fan of hyper-segregation, early on I decided never to trust the VPN. I put additional security hardware behind the VPN, with extremely restrictive policies. Connecting via the VPN gave you very little access to anything, with the mail server still completely walled off (behind a UTM dedicated only to the mail server). Heck, the only two things you could try to hack behind the VPN were the VPN server itself and the UTM… nothing else is directly connected to that network, and all that traffic is monitored and filtered.

When I initially set things up, people questioned my choice to put one security appliance behind another like that. But I didn't want to have to rely on host security for the key assets if someone compromised anyone connected to our VPN. In this case, it worked out for me. I set all this up pre-cloud, but you can build a similar architecture in many VPC or private cloud environments – you need two virtual NICs and a virtual UTM, although even simple firewall rules can go a long way to help (see the sketch below). This is also motivation to finish the next part of my project, which involves yet another UTM and server.
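Purely as an illustration of that hyper-segregation pattern in a modern VPC – using today's AWS SDK for Python (boto3), with hypothetical security group IDs and ports – the rules reduce to: VPN clients may reach only the UTM, and only the UTM may reach the mail server:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical security group IDs for the three tiers.
VPN_SG, UTM_SG, MAIL_SG = "sg-0vpn0example", "sg-0utm0example", "sg-0mail0example"

def allow_from_group(target_sg: str, source_sg: str, port: int) -> None:
    """Permit TCP on one port, only from the named source security group."""
    ec2.authorize_security_group_ingress(
        GroupId=target_sg,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": source_sg}],
        }],
    )

# VPN clients may talk to the UTM (and nothing else)...
allow_from_group(UTM_SG, VPN_SG, 443)
# ...and only the UTM may reach the mail server.
allow_from_group(MAIL_SG, UTM_SG, 993)
```

With no other ingress rules defined, a compromised VPN client sees exactly two attack surfaces – the VPN server and the UTM – which is the point of the architecture.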


Another Inflection Point

Rich Mogull recently posted a great stream-of-consciousness piece about how we are at an inflection point in information security. He covers how cloud and mobility are having, and will continue to have, a huge impact on how we practice security. Rich mentions four main areas of impact:

  • Hyper-segregation
  • Incident response
  • Active defense
  • Closing the action loop

The post is short but very, very dense. Read it a couple of times, even – there's a lot there. I would add another consequence of these changes, one that has already begun and will continue to manifest over the next five to ten years: the operationalization of security. This is partially because security is increasingly becoming a feature rather than a product in itself. Over time we will see more and more of today's dedicated security jobs evolve into responsibilities of other roles. We already see this with patch and AV management, which are increasingly owned by Operations rather than Security teams. Similarly, we will see development and QA pick up functions that previously belonged to security folks. Dedicated security folks will still exist, but they will fall into a much smaller set of roles, including:

  • Advisors/subject matter experts – architects and CISOs
  • Penetration testers
  • Incident responders

And in the case of the latter two, they will increasingly be experts brought in to handle issues beyond the capabilities of regular teams, or to satisfy requirements for third-party assessment. In many ways this will be a return to how security was in the mid-90s. Yet another example of Hoff's Hamster Sine Wave of Pain…

h/t to Gene Kim for feedback as I wrote this post.


Friday Summary: September 21, 2012

Adrian here … I had a few surgical procedures over the past few weeks. They corrected some vascular defects that were causing several problems, some of which had been coming on for such a long time I was unaware there was an issue. The whole boiling frog in a beaker concept. And with the slow progression I was ignorant of the extent of the damage it was causing. The good news is that the procedures were successful, and their positive benefit was far greater than I anticipated.

This whole series of events hammered home a concept I have been intellectually aware of for a long time, but had not lived out to this degree. Many people have blogged about how and why people make bad security tradeoffs: instinct, fear, lower brain functions, and the other ways we are wired to make some decisions and not others. Bruce Schneier has been talking about this for 10 years or more. But I think for the first time I really understand it at a basic level.

When I was a kid I had a very strong vasovagal response. I get the lightheadedness, nausea, feeling of being extremely hot, and sweating. I don't get the fuzziness, inability to speak, or weakness. But I only ever got it when I went to the eye doctor and they administered the glaucoma test. Nothing else ever bugged me – until this recent surgery. For the first time I saw it in slow motion, with the internal conversation going something like this:

The upper, rational part of my brain: "I'm really looking forward to getting this stuff fixed and moving on with my life."

The lower part that's wired into all critical infrastructure: "Something's wrong. Something bad is happening to your leg. Fix it!"

Upper brain: "It's okay, the doctor's just fixing some veins. Don't …"

Lower brain: "NO, it's not! Kick that F**ker in the head! Kick him, then run like hell!"

Lower brain wins. And all these years I just thought I hated my eye doctor. Who knew? But getting that very strange sensation again was both odd and educational. Being aware of the condition and watching yourself react as an adult is a whole different experience; you consciously witness two parts of your brain at odds. And I know how to work through it without passing out, but it involves the same stomach compression techniques jet pilots learn to combat G-forces. A great way to creep out the hospital staff too, but it kept me alert through the physical manifestations of the conflict, to 'witness' the internal debate.

No wonder we're so flawed when it comes to making judgements when threats or fear are involved. I can be aware of it and still do very little about it. Your body would rather shut down than deal with it.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's Dark Reading post on Encrypted Query Processing.

Favorite Securosis Posts

  • Mike Rothman: Inflection. Rich provides some food for thought on what the future of security looks like. Read it. Think about it. See what makes sense to you.
  • Adrian Lane: It's Time for Enterprises to Support a "Backup" Browser. No way to be secure with just one. And don't get me started on browsing from mobile devices!

Other Securosis Posts

  • Incite 9/20/2012: Scabs.
  • Securing Big Data: Security Issues with Hadoop Environments.
  • Attend Gunnar's Kick-A Mobile Security and Development Class.
  • Friday Summary: September 14, 2012.

Favorite Outside Posts

  • Mike Rothman: Antivirus programs often poorly configured, study finds. In a "master of the obvious" research finding, the OPSWAT guys tell us that even if AV worked (which it doesn't), most folks misconfigure their controls anyway and don't have the right stuff turned on. And you wonder why the busiest guys in the industry are the folks tracking all the breaches?
  • Adrian Lane: Looking Inside Your Screenshots. So many good posts this week, but I thought this was the most interesting. I have never been a big fan of digital watermarking – it's too easy to detect and evade for images and music files, and we know it degrades content. But in this case it's more difficult to detect and does not degrade the content – and it gives Blizzard a hammer to use in legal cases, as they have solid user/client identity. Sneaky, and if you give it some thought, there are other practical applications of this approach.
  • Rich: Compliance lessons from Lance at EmergentChaos. As the resident Securosis cycling fan there's no way I wasn't going to pick this one. Only difference is Lance previously, clearly, stated he didn't dope… which isn't the same as his recent comments more focused on 'compliance'.

Project Quant Posts

  • Malware Analysis Quant: Index of Posts.
  • Malware Analysis Quant: Metrics – Monitor for Reinfection.
  • Malware Analysis Quant: Metrics – Remediate.
  • Malware Analysis Quant: Metrics – Find Infected Devices.

Research Reports and Presentations

  • Understanding and Selecting Data Masking Solutions.
  • Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks.
  • Implementing and Managing a Data Loss Prevention Solution.
  • Defending Data on iOS.

Top News and Posts

  • OWASP ZAP – the Firefox of web security tools.
  • Coders Behind the Flame Malware Left Incriminating Clues on Control Servers. A fun read. And if most code received this level of scrutiny, we would have much better code!
  • Attack Easily Cracks Oracle Database Passwords.
  • Internet Explorer Fix is available now.
  • Media Manipulation and Social Engineering.
  • Mobile Pwn2Own ownage.
  • Hacker Steals $140k From Lock Poker Account.
  • RSnake donated XSS filter cheat sheet to OWASP.
  • BSIMM 4 Released.
  • Petco Releases Coupon On Internet, Forgets How Internet Works.
  • Majority of companies suffered a web application security breach.
  • Massachusetts group to pay $1.5M HIPAA settlement. I would love to see the "corrective plan of action".
  • Web Cryptography API draft published.
  • Java zero-day leads to Internet Explorer zero-day.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.