
How to Detect Cloudwashing by Your Vendors

“There is nothing more deceptive than an obvious fact” – Sherlock Holmes

It’s cloud. It’s cloud-ready. It’s cloud insert-name-here. As analysts we have been running into a lot of vendors labeling traditional products as ‘cloud’. Two years ago we expected the practice to die out once customers understood cloud services. We were wrong – vendors are still doing it rather than actually building the technology. Call it cloudwashing, cloudification, or just plain BS. As an enterprise buyer, how can you tell whether the system you are thinking of purchasing is a cloud application or not? It should be easy – just look at the products branded ‘cloud’, right? But dig deeper and you see it’s not so simple.

Sherlock Holmes made a science of detection, and being an enterprise buyer today can feel like being a detective in a complex investigation. Vendors have anticipated your questions and have answers ready. What to do? Start by drilling down: what is behind the labels? Is it cloud, or just good old enterprise bloatware? Or is it a managed service offering (MSO) with a thin veneer of cloud? We pause here to state that there is nothing inherently wrong with enterprise software or MSO. There is also no reason cloud is necessarily better for you. Our goal here is to orient your thinking beyond labels and give you some tips so you can be an educated consumer.

We have seen a grab bag of cloudwashes. We offer the following questions to help you figure out what’s real:

• Does it run at a third-party provider? (not on-premise or ‘private’ cloud)
• Is the service self-service? (i.e., you can use it without other human interaction and without downloading software – nothing installed ‘on the edge’ of your IT network)
• Is the service metered? If you stopped using it tomorrow, would the bills stop?
• Can you buy it with a credit card? Can your brother-in-law sign up for the same service?
• Do you need to buy a software license?
• Does it have an API?
• Does it autoscale?
• Did the vendor start from scratch or rewrite its product?
• Is the product standalone (i.e., not a proxy-type interface on top of an existing stack)?
• Can you deploy without assistance, or does it require professional services to design, deploy, and operate?

The more of these questions that get a ‘No’ answer, the more likely your vendor is marketing ‘cloud’ instead of selling cloud services. Why does that matter? Because real cloud environments offer specific advantages in elasticity, flexibility, scalability, self-service, pay-as-you-go, and various other areas, which are not present in many non-cloud solutions.

What cloudwashing exercises have you seen? Please share in the comments below.


New Series: What CISOs Need to Know about Cloud Computing

This is the first post in a new series detailing the key differences between cloud computing and traditional security. I feel pretty strongly that, although many people are talking about the cloud, nobody has yet done a good job of explaining why and how security needs to adapt at a fundamental level. It is more than outsourcing, more than multitenancy, and definitely more than simple virtualization. This is my best stab at it, and I hope you like it. The entire paper, as I write it, is also posted and updated at GitHub for those of you who want to track changes, submit feedback, or even submit edits. Special thanks to CloudPassage for agreeing to license the paper (as always, we are following our Totally Transparent Research process – they have no more influence than you do, and can back out of licensing the paper if, in the end, they don’t like it). And here we go…

What CISOs Need to Know about Cloud Computing

Introduction

One of a CISO’s most difficult challenges is sorting the valuable wheat from the overhyped chaff, and then figuring out what it means in terms of risk to your organization. There is no shortage of technology and threat trends, and CISOs need to determine not only which of them matter, but also how they impact security. The rise of cloud computing is one of the truly transformative evolutions that fundamentally change core security practices. Far more than an outsourcing model, cloud computing alters the very fabric of our infrastructure, technology consumption, and delivery models. In the long run, the cloud and mobile computing are likely to mark a larger shift than the Internet itself. This series details the critical differences between cloud computing and traditional infrastructure for security professionals, as well as where to focus security efforts. We will show that the cloud doesn’t necessarily increase risks – it shifts them, and provides new opportunities for significant security improvement.

Different, But Not the Way You Think

Cloud computing is a radically different technology model – not just the latest flavor of outsourcing. It uses a combination of abstraction and automation to achieve previously impossible levels of efficiency and elasticity. But in the end cloud computing still relies on traditional infrastructure as its foundation. It doesn’t eliminate physical servers, networks, or storage, but allows organizations to use them in different ways, with substantial benefits. Sometimes this means building your own cloud in your own datacenter; other times it means renting infrastructure, platforms, and applications from public providers over the Internet. Most organizations will use a combination of both. Public cloud services eliminate most capital expenses and shift them to on-demand operational costs. Private clouds allow more efficient use of capital, tend to reduce operational costs, and increase the responsiveness of technology to internal needs. Between the business benefits and current adoption rates, we expect cloud computing to become the dominant technology model over the next ten to fifteen years. As we make this transition, it is the technologies that create clouds – rather than the increased use of shared infrastructure – that really matter for security. Multitenancy is more an emergent property of cloud computing than a defining characteristic.

Security Is Evolving for the Cloud

As you will see, cloud computing isn’t more or less secure than traditional infrastructure – it is different. Some risks are greater, some are new, some are reduced, and some are eliminated. The primary goal of this series is to provide an overview of where these changes occur, what you need to do about them, and when. Cloud security focuses on managing the different risks associated with abstraction and automation. Multitenancy tends to be more a compliance issue than a security problem, and we will cover both aspects. Infrastructure and applications are opened up to network-based management via Internet APIs. Everything from core network routing to creating and destroying entire application stacks is now possible using command lines and web interfaces. The early security focus has been on managing the risks introduced by highly dynamic virtualized environments, such as autoscaled servers and broad network access, including a major focus on compartmentalizing cloud management. Over time the focus is gradually shifting to hardening the cloud infrastructure, platforms, and applications, and then to adapting security to use the cloud to improve security. For example, the need for data encryption increases over time as you migrate more sensitive data into the cloud. But the complexities of internal network compartmentalization and server patching are dramatically reduced as you leverage cloud infrastructure. We expect to eventually see more security teams hook into the cloud fabric itself – bridging the existing gaps between security tools, infrastructure, and applications with Software Defined Security. The same APIs and programming techniques that power cloud computing can provide highly integrated, dynamic, and responsive security controls – this is already happening. This series will lay out the key differences, with suggestions for where security professionals should focus. Hopefully, by the end, you will look at the cloud and cloud security in a new light, and agree that the cloud isn’t just the latest type of outsourcing.
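To make the Software Defined Security idea concrete, here is a minimal, hedged sketch of programmatic incident response: a script that uses a cloud provider’s API to quarantine a suspect instance by swapping its security groups and snapshotting its volumes for forensics. This is not from the paper – it assumes AWS with the boto3 library, and the instance and group identifiers are hypothetical:

```python
# Minimal Software Defined Security sketch (assumes AWS + boto3).
# Quarantines a suspect instance by replacing its security groups with a
# lockdown group that allows no traffic, then snapshots its volumes.
# The instance and security group IDs below are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def quarantine_instance(instance_id: str, quarantine_sg: str) -> None:
    """Swap all security groups on the instance for the quarantine group."""
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[quarantine_sg],  # replaces the instance's existing group list
    )
    # Snapshot the instance's volumes for later forensic analysis.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    for vol in volumes:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"IR snapshot of {instance_id}",
        )

if __name__ == "__main__":
    quarantine_instance("i-0123456789abcdef0", "sg-0deadbeefcafe0123")
```

The point is not these specific calls, but that security response becomes a few lines of automation against the same management plane attackers target – which is also why compartmentalizing cloud management matters so much early on.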


Defending Against Application Denial of Service: Attacking the Stack

In our last post, we started digging into ways attackers target standard web servers, protocols, and common pages to impact application availability. Those attacks are surface-level, low-hanging fruit because they can be executed with widely available tools wielded by unsophisticated attackers. If you think of a web application as an onion, there always seems to be another layer you can peel back to expose additional attack surface. The next layer we will evaluate is the underlying application stack used to build the application. One of the great things about web applications is the availability of fully assembled technology stacks, which make it trivial to roll out the infrastructure to support a wide variety of applications. But anything widely available inevitably becomes an attack target. The best example of this, in the context of an availability attack, is how hash tables can be exploited to crush web servers.

Hash Collision Attacks

We won’t get into advanced programming, but you need some context to understand this attack. A hash table maps keys to values by assigning each key to a specific slot in an array. This provides a very fast way to search for things. On the downside, multiple keys may end up in the same slot, creating a hash collision the application needs to deal with, which requires significant additional processing. Hash collisions are normally minimized, so the speed trade-off is usually worthwhile. But if attackers understand the hashing function used by the application, they can deliberately cause excessive hash collisions, forcing the application to consume extra resources to compensate. If enough hash collisions occur… you guessed it: the application can’t handle the workload and goes down. This attack was weaponized as HashDoS, which leverages the fact that most web application stacks use the same hashing algorithm within their dictionary tables. With knowledge of this algorithm, an attacker can send a POST request with many colliding variables to create hash table chaos and render the application useless. Mitigating this attack requires the ability to discard messages with too many variables – typically implemented within a WAF (web application firewall) – or randomizing the hash function using application-layer logic. A good explanation of this attack using cats explains HashDoS in layperson’s terms. Remember that any capability within the application stack can be exploited – and given the open source nature of these stacks, probably will be. So diligence in selecting a stack, ensuring secure implementation, and tracking security notices and implementing patches are all critical to application security and availability.
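To see why collisions are so devastating, here is a minimal Python sketch (illustrative only – not HashDoS attack code, and it assumes CPython) that forces every key into one bucket and compares insert times:

```python
# Minimal demonstration of hash collision degradation (CPython assumed).
# All Colliding keys land in the same dict bucket, so each insert must probe
# past every previously stored key: O(n^2) total work instead of O(n).
# The colliding run may take several seconds -- that is the point.
import time

class Colliding:
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 42                    # every instance collides into one bucket
    def __eq__(self, other):
        return self.n == other.n     # forces a walk of the full probe chain

def insert_time(keys):
    start = time.perf_counter()
    _ = {k: True for k in keys}      # build a dict from the keys
    return time.perf_counter() - start

N = 5_000
print(f"well-distributed keys: {insert_time(range(N)):.4f}s")
print(f"colliding keys:        {insert_time(Colliding(i) for i in range(N)):.4f}s")
```

On a typical machine the colliding run is orders of magnitude slower, and the gap grows quadratically with the number of keys – exactly the asymmetry HashDoS exploits with a single crafted POST.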
Targeting the Database

As part of the application stack, databases tend to get overlooked as a denial of service attack target. Many attackers aim to extract the data in the database and exfiltrate it, so knocking the database down would be counterproductive. But when the mission is to impact application availability – or to use a DoS as cover for exfiltration – the database can be a soft target, because in some way, shape, or form, the web application depends on it. If you recall our Defending Against Denial of Service Attacks paper, we broke DoS attacks into network-based volumetric attacks and application-layer attacks. The issue with databases is that they can be attacked with both tactics. Application servers connect to the database over some kind of network, so a volumetric attack on that network segment can impact database availability. And if the database itself is exploited, the application is likely to go down with it. Either way the application is out of business.

Database DoS Attacks

If we dig a little deeper into the attacks, we find that one path is to crush databases with deliberately wasteful queries. Other attacks target simple vulnerabilities that have never been patched – mostly because the need for continuous database uptime interferes with patching, so it happens sporadically or not at all. Again, you don’t need to be a brain surgeon to knock a web application offline. Here are some of the attack categories:

• Abuse of Functions: Similar to the slow HTTP attacks mentioned in the last post, attackers use database functionality against you. For example, if you restrict failed logins, they may blast your database with bad password requests to lock legitimate users (or applications) out. Another example involves attackers taking advantage of database autoscaling, blasting the database with requests until so many instances are running that it falls over.
• Complex Queries: If the attacker gives the database too much work to do, it will fall over. There are many techniques – including nested queries and recursion, Cartesian joins, and the IN operator – which can overwhelm a database. The attacker needs to be able to inject a SQL query into the database directly or through the application for this to work, which also points to how these attacks can be blocked. We will talk about defenses below.
• Bugs and Defects: In these cases the attacker targets a known database vulnerability, including queries of death and buffer overflows, to take down the database. With new functionality being introduced constantly, database attack surface continues to grow. And even if the database vendor identifies the issue and produces a patch (not a sure thing), finding a maintenance window to patch remains challenging in many operational environments.
• Application Usage: Finally, the way the application uses the database can be gamed to cause an outage. The best example is SQL injection, although that attack is rarely used to knock over databases. Also consider the login and store locator page attacks we mentioned in the last post, as well as shopping cart and search engine attacks (to be covered later), as additional examples of application misuse that can impact availability.

Database DoS Defenses

The tactics used to defend against database denial of service attacks really reflect good database security practices. Go figure. Configuration: Strong


Trustwave Acquires Application Security Inc.

It has been a while since we had an acquisition in the database security space, but today Trustwave announced it has acquired Application Security Inc. – commonly known as “AppSec”. About 10 years ago, during my employment with IPLocks, I wrote my first competitive analysis paper on our principal competitor: another little-known database security company called Application Security, Inc. Every quarter for four years, I updated those competitive analysis sheets to keep pace with AppSec’s product enhancements and competitive tactics in sales engagements. Little did I know I would continue to examine AppSec’s capabilities on a quarterly basis after joining Securosis – but rather than looking solely at competitive positioning, I have been analyzing how features map to customer inquiries, and tracking customer experiences during proof-of-concept engagements. Of all the products I have tracked, I have been following AppSec the longest.

It feels odd to be writing this for a general audience, but this deal is pretty straightforward, and it needed to happen. Application Security was one of the first database security vendors, and while they were considered a leader in the 2004 timeframe, their products have not been competitive for several years. AppSec still has one of the best database assessment products on the market (AppDetectivePRO), and one of the better – possibly the best – database vulnerability research teams backing it. But Database Activity Monitoring (DAM) is now the key driver in that space, and AppSec’s DAM product (dbProtect) has not kept pace with customer demand in terms of performance, integration, ease of use, or out-of-the-box functionality. A “blinders on” focus can be both admirable and necessary for very small start-ups delivering innovative technologies to markets that don’t yet understand the technology or its value proposition, but as markets mature vendors must respond to customers and competitors. In AppSec’s early days, very few people understood why database security was important. But while the rest of the industry matured and worked to build enterprise-worthy solutions, AppSec turned a deaf ear to criticism from would-be customers and analysts. Today the platform is of reasonable quality, but it is not much more than an ‘also-ran’ in a very competitive field.

That said, I think this is a very good purchase for Trustwave. It means several things for Trustwave customers:

• Trustwave has filled a compliance gap in its product portfolio – specifically for PCI. Trustwave is focused on PCI-DSS, and data and database security are central to PCI compliance. Web and network security have been part of their product suite, but database security has not. Keep in mind that DAM and assessment are not specifically prescribed for PCI compliance the way WAF is, but the vast majority of customers I speak with use DAM to audit activity, discovery to show which data stores are in use, and assessment to prove that security controls are in place. Trustwave should have acquired this technology a while ago.
• The acquisition fits Trustwave’s model of buying decent technology companies at low prices, then selling a subset of their technology to existing customers where they already know demand exists. That could explain why they waited so long – balancing customer requirements against their ability to negotiate a bargain price.
• Trustwave knows what their customers need to pass PCI better than anyone else, so they will succeed with this technology in ways AppSec never could.
• This puts Trustwave on a more even footing with customers who care about security rather than just checking a compliance box, and gives Trustwave a partial response to Imperva’s monitoring and WAF capabilities.
• I think Trustwave is correct that AppSec’s platform can help their managed services offering – Monitoring and Assessment as a Service appeals to smaller enterprises and mid-market firms which don’t want to own or manage database security platforms.

What does this mean for AppSec customers? It is difficult to say – I have not spoken with anyone from Trustwave about this acquisition, so I cannot judge their commitment to putting engineering effort behind the AppSec products, and I cannot tell whether they intend to keep the research team which has been keeping the assessment component current. Trustwave tweeted during the official announcement that “.@Trustwave will continue to develop and support @AppSecInc products, DbProtect and AppDetectivePRO”, but that could be limited to the features compliance buyers demand, without closing the performance and data collection gaps that are problematic for DAM customers. I will blog more on this as I get more information, but expect them to provide what’s required to meet compliance and no more.

And lastly, for those keeping score at home, AppSec is the 7th Database Activity Monitoring acquisition – after Lumigent (BeyondTrust), IPLocks (Fortinet), Guardium (IBM), Secerno (Oracle), Tizor (IBM via Netezza), and Sentrigo (McAfee) – leaving Imperva and GreenSQL as the last independent DAM vendors.


Security Awareness Training Evolution [New Paper]

Everyone has an opinion about security awareness training, and most of them are negative. Waste of time! Ineffective! Boring! We have heard them all. And the criticism isn’t wrong – much of the content driving security awareness training is lame. Which is probably the kindest thing we can say about it. But it doesn’t need to be that way. Actually, it cannot remain this way – there is too much at stake. Users remain the lowest-hanging fruit for attackers, and as long as that is the case attackers will continue to target them. Educating users about security is not a panacea, but it can and does help. It’s not like a focus on security awareness training is the flavor of the day for us. We have been talking about the importance of training users for years, as unpopular as it remains.

The main argument against security training is that it doesn’t work. That’s just not true. But it doesn’t work for everyone. Like security in general, there is no 100%. Some employees will never get it – mostly because they just don’t care – but they do bring enough value to the organization that no matter what they do (short of a felony) they are sticking around. Then there is everyone else. Maybe it’s 50% of your folks, or perhaps 90%. Regardless of the number of employees who can be influenced by better security training content, wouldn’t it make your life easier if you didn’t have to clean up after them? We have seen training reduce the amount of time spent cleaning up easily avoidable mistakes.

We are pleased to announce the availability of our Security Awareness Training Evolution paper. It discusses how training needs to evolve, and presents a path to improve training content and ensure the right support and incentives are in place for training to succeed. We would like to thank our friends at PhishMe for licensing this paper. Remember, it is through the generosity of our licensees that you get to read our stuff for this nifty price. Here is another quote from the paper to sum things up:

As we have said throughout this paper, employees are clearly the weakest link in your security defenses, so without a plan to actively prepare them for battle you have a low chance of success. It is not about making every employee a security ninja – instead focus on preventing most of them from falling for simplistic attacks. You will still be exploited, but make it harder for attackers so you suffer less frequent compromise. Security-aware employees protect your data more effectively, it’s as simple as that, regardless of what you hear from naysayers.

Check out the page in our Research Library, or download the Security Awareness Training Evolution (PDF) paper directly.


How to Edit Our Research on GitHub

I am still experimenting with posting research, from drafts through the editing process, on GitHub. No promises that we will keep doing this – it depends on the reaction we get. From a workflow standpoint it isn’t much more effort for us, but I like the radical transparency it enables. I just posted a second paper, which is still very much incomplete, so I want to offer some instructions on how to edit or propose changes. This is just quick and dirty – you should review the GitHub Help to really understand the process.

GitHub is meant for code, but works for any text files. Unlike any other option we found, GitHub offers an open, transparent way to not only collect feedback, but also to solicit and manage direct edits. Once you set it up the first time it is pretty easy – you subscribe, pull down a copy of the research, make your own edits, then send us a request to incorporate your changes into our master copy. Another nice feature is that GitHub tracks the entire editing process, including our internal edits. For transparency that’s sweet. I don’t expect many people to take advantage of this. I am currently the only Securosis analyst doing it, and based on your feedback we will decide whether we should continue. Even if you don’t push changes or comments, let us know what you think. Here’s how:

• We will post all our research at https://github.com/Securosis. Right now I still need to move the Pragmatic Network Security Management project over there, because I was still learning the process when I posted that one. For now you can find the research split between those two places.
• If you only want to leave a comment, you can do so here on the blog post, or as an ‘Issue’ on GitHub. Blog comments can be anonymous, but GitHub requires an account to create or edit an issue. Click ‘Issues’ and then simply add yours.
• If you want to actually make edits, go for it! To do this you need to both create a GitHub account and install the software. For you non-command-line types, you can download official GUI versions here. If you are running Linux, git is probably already installed. If you try to use the git command under OS X 10.9 Mavericks, the system should install the software if necessary.
• Next, fork a copy of our repository: go to https://github.com/Securosis, click the Fork button, and follow the instructions.
• That fork isn’t on your computer for editing yet, so synchronize your repository. This pulls the key files down to your system. On the web page click “Clone to Desktop”; it will launch your client, and you can choose where to save the fork.
• Edit away locally. This doesn’t affect our canonical version – just your fork of it.
• When you are done, commit your changes: in your desktop GUI, click Changes, then Commit and Sync. Don’t forget to comment on your changes so we know why you submitted them.
• Then submit a pull request. This notifies us that you made changes. We will run through them and accept or decline. It is our research, after all.

This is all new to us, so we need your feedback on whether it is worth continuing. We know many of you might be interested in tracking the research but not participating, and that’s fine – but if you don’t email or send us comments, we won’t know you like it.
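For readers who prefer the command line to the GUI client described above, the equivalent fork-and-edit loop looks roughly like this. This is a sketch: the repository name is hypothetical, and you would substitute your own GitHub username after forking on the web:

```bash
# Sketch of the fork/edit/pull-request workflow from the command line.
# Assumes you already clicked Fork at github.com/Securosis;
# "cloud-paper" is a hypothetical repository name.
git clone https://github.com/YOUR-USERNAME/cloud-paper.git
cd cloud-paper

# Make your edits to the text files, then stage, commit, and push:
git add .
git commit -m "Clarify the multitenancy section"
git push origin master

# Finally, open your fork on github.com and click "Pull Request"
# to send the changes back for review.
```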


Defending Against Application Denial of Service: Attacking the Application Server

It has been a while, but it is time to jump back into the Application Denial of Service series with both feet. As we described in the introduction, application denial of service can be harder to deal with than volume-based network DDoS because it is not always obvious what is an attack and what is legitimate traffic. Unless you are running all your traffic through a scrubbing center, your applications will remain targets for attacks that exploit the architecture, application stacks, business logic, and even legitimate functionality of the application.

As we start digging into specific AppDoS tactics, we begin with attacks that target the server and infrastructure of your application. Given the popularity and proliferation of common application stacks, attackers can hit millions of sites with a standard technique, most of which have been in use for years. But not enough web sites have proper mitigations in place. Go figure. Server and infrastructure attacks are the low-hanging fruit of application denial of service, and will remain so as long as they continue to work. So let’s examine the various types of application infrastructure attacks and some basic mitigations to blunt them.

Exploiting the Server

Most attacks that directly exploit web servers capitalize on features of the underlying standards and/or protocols that run the web, such as HTTP. This makes many of these attacks very hard to detect, because they look like legitimate requests – by the time you figure out it’s an attack, your application is down. Here are a few representative attack types:

• Slowloris: This attack, originally built by Robert ‘RSnake’ Hansen, knocks down servers by slowly delivering request headers, forcing the web server to keep connections open without ever completing the requests. This rapidly exhausts the server’s connection pool.
• Slow HTTP Post: Similar to Slowloris, Slow HTTP Post delivers the message body slowly, serving the same purpose of exhausting resources on the web server. Both Slowloris and Slow HTTP Post are difficult to detect because their requests look legitimate – they just never complete. The R-U-Dead-Yet attack tool automates launching Slow HTTP Post attacks via a simple user interface. To make things easier (for your adversaries), RUDY is included in many penetration testing tool packages, making it easy to knock down vulnerable web servers.
• Slow Read: Yet another variation on the Slowloris approach, Slow HTTP Read involves shrinking the response window on the client side. This forces the server to send data to the client slowly, to stay within the response window. The server must keep connections open to ensure the data is sent, so it can quickly be overwhelmed with connections.

As with RUDY, these techniques are already weaponized and available for easy download and use. You can expect innovative attackers to combine and automate these tactics into weapons of website destruction (as XerXeS has been portrayed). Regardless of packaging, these tactics are real and need to be defended against. Mitigating these server attacks typically requires a combination of web server configuration with network-based and application-based defenses. Keep in mind that you can’t ultimately defend the application from these kinds of attacks, because they simply take advantage of web server protocols and architecture. But you can blunt their impact with appropriate controls.
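As a concrete (and hedged) illustration of the web server tuning discussed next, here is a minimal nginx configuration sketch, assuming nginx fronts the application. The directives are real nginx settings, but the values are examples to adapt, not recommendations:

```nginx
# Illustrative slow-request hardening for nginx (values are examples only).
http {
    # Drop clients that dribble out headers or bodies too slowly
    # (Slowloris, Slow HTTP Post). Shorter windows shrink the exposure.
    client_header_timeout 10s;
    client_body_timeout   10s;

    # Give up on clients that read responses too slowly (Slow Read).
    send_timeout          10s;

    # Cap concurrent connections per source IP to limit pool exhaustion.
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;

    server {
        listen 80;
        limit_conn per_ip 20;   # at most 20 simultaneous connections per IP
    }
}
```

Similar knobs exist for Apache (mod_reqtimeout) and most commercial load balancers.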
For example, mitigating Slowloris and Slow HTTP Post involves tuning the web server to increase the maximum number of connections, prevent excessive connections from the same IP address, and allow a backlog of connection requests to be stored – to avoid losing legitimate application traffic. Network-based defenses on WAFs and IPSes can be tuned to look for certain web connection patterns and block offending traffic before the server becomes overwhelmed. The best approach is actually all of the above. Don’t just tune the web server or install network-based protection in front of the application – also build web applications to limit header and body sizes, and to close connections within a reasonable timeframe, to ensure the connection pool is not exhausted. We will talk about building AppDoS protections into applications later in this series. An attack like Slow HTTP Read, which games the client side of the connection, requires similar mitigations – but instead of looking for ingress patterns of slow activity (on either the web server or other network devices), you need to look for this kind of activity on the egress side of the application. Likewise, fronting the web application with a CDN (content delivery network) service can alleviate some of these attacks, because your application server is a step removed from the clients, and insulated from slow reads. For more information on these services, consult our Quick Wins with Website Protection Services paper.

Brute Force

Another tactic is to overwhelm the application server – not with network traffic, but by overloading application features. We will cover one aspect of this later, when we discuss search engine and shopping cart shenanigans. For now let’s look at more basic features of pretty much every website, such as SSL handshakes and serving common pages like the login screen, password reset, and store locator. These attacks are so effective at overwhelming application servers because functions like the SSL handshake, and pages which require database calls, are very compute intensive. Loading a static page is easy, but checking login credentials against a hashed password database is a different animal. First let’s consider the challenges of scaling SSL. On some pages, such as the login page, you need to encrypt traffic to protect user credentials in motion, so SSL is a requirement. Why is scaling SSL handshakes such an issue? As described succinctly by Microsoft in this Tech Bulletin, there are 9 distinct steps in establishing an SSL handshake, many of which require cryptographic and key generation operations. If an attacker uses a rented botnet to establish a million or so SSL sessions at the same time, guess what happens? It is not a bandwidth issue – it is a compute problem – and the application becomes unavailable because no more SSL


Blowing Your Mind(fulness) at RSA 2014

It was kind of a joke between two friends on a journey to become better people. Jen Minella (JJ) and I compared notes over way too many drinks at last year’s RSA, and we decided our experiences would make a good talk. I doubt either of us really thought it would be interesting to anyone but us. We were wrong. At RSA we will do a session called “Neuro-hacking 101: Taming Your Inner Curmudgeon”. Here is the description:

For self-proclaimed security curmudgeons and anyone else searching for better work/life balance, this session is a how-to guide for happiness, health, and finding a path to increased productivity. Case studies, methods, and research in the science of mind and body are followed up with resources and ways to get started. From neuroscience to nutrition, there’s something for everyone.

JJ summed up her thoughts on the pitch, and I feel pretty much the same way:

And so today, I’m overjoyed, a little relieved, excited at the opportunity, and yet at the same time a big piece of me is completely mortified. This talk, although founded in science, is a big lift of the ol’ virtual skirt. It’s a talk about being happy, getting a grip on life, and using mindfulness to succeed and excel at everything you do. We do not pass go, we do not collect $200. Instead, we’re taking a nose dive into traditionally taboo topics, and exposing what many consider to be deportments of an intimate and personal nature. But we reached a mutual conclusion – how we think and communicate about the topics of mindfulness shouldn’t be secreted. There’s no shame in participating in activities (or inactivity) designed to make us better, happier, more productive people.

I don’t wear a virtual skirt, but it is a bit scary to provide a view into the inner workings of my improvement processes to be less grumpy and more content with all I have achieved. I’ve talked about some of those topics in past Incites, but never to this degree. And that’s good. No, it’s great. Hope to see you there.


Summary: Hands on

Before I dive into this week’s sermon, just a quick note that our posting will be a bit off through the end of the year. As happens from time to time, our collective workloads and travel are hitting insanity levels, which impedes our ability to push out more consistent updates. But, you know, gotta feed the kids and dogs.

A couple weeks ago I got to abandon my family for the weekend and spend my time in a classroom renewing my Emergency Medical Technician certification. I was close to letting it go, but my wife made it abundantly clear that she would rather lose me for a weekend than deal with the subsequent years of whining. I never look forward to my recert classes. It is usually 2-3 days in a classroom, followed by a written and psychomotor (practical) test. I first certified as an EMT in 1991, and then became a paramedic in 1993 (which is an insane amount of training – no comparison). I won’t say I don’t learn anything in the every-two-year refresher classes, but I have been doing this for a very long time. But this year I learned more than expected, and some of it relates directly to my current work in security.

Five or six years ago I started hearing about some new trends in CPR. A doctor here in Phoenix started a research study to try a completely nonconventional approach to CPR. The short version is that the human body, when dead, isn’t using a ton of oxygen. Even when alive we inhale air with 21% O2 and exhale air with 16% O2. Stop all muscular activity, and the brain will mostly suck out whatever O2 is circulated when you compress someone’s chest. This doc had some local fire departments use hands-only CPR: 300 compressions with no ventilations. This keeps the blood pressure up and blood circulating, and the action of pushing the chest generates more than enough air exchange. The results? Something like 3x the survival rate. The CPR you learn today probably isn’t there yet, but it definitely emphasizes compressions over mouth-to-mouth, which I suspect will be dropped completely for adults if the research holds. There’s more to it, but you get the idea.

All right, interesting enough, but what does this have to do with security? I found myself instinctively clinging to my old concepts of the ‘right’ way to do CPR despite clear evidence to the contrary. I understand the research, and immediately adopted the changes, but something felt wrong to me. I have been certified in what are basically the same essential techniques for nearly 30 years. Part of me didn’t want to let go, and that wasn’t a feeling I expected. I later had the same reaction to changes in the treatment of certain closed head injuries, but that was more due to specific cases where I used techniques now known to harm patients. I am an evidence-based guy. I roll with the times and try not to cling to convention, but somewhere in me, especially as I get older, part of my brain reacts negatively to changing old habits. Fortunately, my higher-order functions know to tell that part to shut the hell up.

We have a tendency to imprint on whatever we first learn as ‘correct’. Perhaps it is the act of discovery, or forming those brain pathways. In security we see this all the time. I once had an IT director tell me he would rather allow Windows XP on his network than iPads, because “we know XP”. Wrong answer. The rate of change in security exceeds that of nearly every other profession. Even developers can often cling to old languages and constructs, and that profession is probably the closest. I like to think of myself as an enlightened guy, capable of assimilating the latest and greatest within the context of what’s known to work, and I still found myself clinging to a convention after it was scientifically proven wrong. I don’t think any of us are in a position to blame others for “not getting it”. All of us are Luddites – you just need to hunt for the right frame of reference. That is not an excuse, but it is life.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Nada. Unless Google and Bing are both lying to me. Like I said, busy week.

Favorite Securosis Posts

• Adrian Lane: Microsoft Upends the Bug Bounty Game. This may work.
• Mike Rothman: Microsoft Upends the Bug Bounty Game. Not a lot of choice this week (yes, I have been the suck at blogging lately). But Rich does a nice job explaining the ripple effects of Microsoft extending their bounty program.
• Rich: New Series: The Executive Guide to Pragmatic Network Security Management. The post isn’t new, but I can announce that RedSeal Networks intends to license it (pending the end of our open peer review process). And don’t forget that this is the first paper we are opening up for full public change tracking on GitHub.

Other Securosis Posts

• Friday Summary: Halloween 2013 Edition.

Favorite Outside Posts

• Adrian Lane: I Love the Smell of Popcorn in the Morning. Why did I choose to never be a CIO again? This is why. You’d think this type of story would be rare, but it’s common. However, it only occurs at 2:00am or on your first day of vacation.
• Mike Rothman: Five Styles of Advanced Threat Defense. The Big G does a decent job of explaining the overlap (and synergy) of these so-called Advanced Threat product categories. I differ slightly on how to carve things up, but this is close enough for me to mention.
• Rich: IT Security from the Eyes of Data Scientists. Yep, serious job security if you head down this path.

Research Reports and Presentations

• Firewall Management Essentials.
• A Practical Example of Software Defined Security.
• Continuous Security Monitoring.
• API Gateways: Where Security Enables Innovation.
• Identity and Access Management for Cloud Services.
• Dealing with


Microsoft Upends the Bug Bounty Game

Microsoft is expanding its $100k bounty program to include incident responders who find and document Windows platform mitigation flaws. Quoting the announcement:

Today’s news means we are going from accepting entries from only a handful of individuals capable of inventing new mitigation bypass techniques on their own, to potentially thousands of individuals or organizations who find attacks in the wild. Now, both finders and discoverers can turn in new techniques for $100,000.

Our platform-wide defenses, or mitigations, are a kind of shield that protects the entire operating system and all the applications running on it. Individual bugs are like arrows. The stronger the shield, the less likely any individual bug or arrow can get through. Learning about “ways around the shield,” or new mitigation bypass techniques, is much more valuable than learning about individual bugs because insight into exploit techniques can help us defend against entire classes of attack as opposed to a single bug – hence, we are willing to pay $100,000 for these rare new techniques.

This is important because Microsoft just turned every target and victim into a potential bug hunter. The pool of people looking for these techniques just increased massively. Previously only security researchers could hunt them down and win the cash, and researchers can be motivated to sell bugs to governments or criminals for more than $100k (Windows mitigation exploits are extremely valuable). Some professional response teams like to keep exploit details and indicators of compromise as trade secrets, but not every response team is motivated that way. This alters the economics for attackers, because they now need to be much more cautious about using their most valuable 0-day exploits. If they attack the wrong target, they are more likely to lose their exploit forever.

As exciting as this is, it still requires a knowledgeable defender who isn’t financially motivated to keep the technique secret (again, some vendors and commercial IR services are). And there are plenty of lower-level attacks that still work. But even with those stipulations, the pool of hunters just increased tremendously.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.