Defending Against DDoS: Magnification

As mentioned in our last post, the predominant mechanism of network-based DDoS attacks is flooding the pipes with SYN, ICMP, DNS, and NTP traffic. But that's not enough, so attackers now take advantage of weaknesses in the protocols to magnify the impact of their floods by an order of magnitude. This makes each compromised device far more efficient as an attack platform, and allows attackers to scale attacks past 400 Gbps (as recently reported by CloudFlare). Only a handful of organizations in the world can handle an attack of that magnitude, so DDoS + reflection + amplification is a potent combination.

Fat Packets

Attackers increasingly tune the size of their packets to the desired outcome. For example, simple SYN packets can crush the compute capabilities of network/security devices. Combining small SYNs with larger SYN packets can also saturate the network pipe, so we often see them combined in today's DDoS attacks.

Reflection + Amplification

The first technique used to magnify a DDoS attack is reflection. This entails sending requests to a large number of devices (think millions), spoofing the origination IP address of the target site. The replies to those millions of requests are reflected back to the target. The UDP-based protocols used in reflection attacks don't require a handshake to establish a new session, so the source address can be spoofed.

The latest wave of DDoS attacks uses reflected DNS and NTP traffic to dramatically scale the volume of traffic hitting targets. Why those two protocols? Because they provide good leverage for amplifying attacks – responses are typically much bigger than requests. DNS provides about 50x amplification, and the number of open DNS resolvers that respond to any DNS request from any device makes this an easy and scalable attack. Until the major ISPs get rid of these open resolvers, DNS-based DDoS attacks will continue.

NTP has recently become a DDoS protocol of choice because it offers almost 200x magnification. This is thanks to a protocol feature: clients can request a list of the last 600 IP addresses to access a server. To illustrate the magnitude of magnification, the CloudFlare folks reported that the attack used 4,529 NTP servers, running on 1,298 different networks, each sending about 87 Mbps to the victim. The resulting traffic totaled about 400 Gbps. Even more troubling, all those requests (to 4,500+ NTP servers) could be sent from a single device on a single network. And other UDP-based protocols offer even greater levels of amplification: an SNMP response can be 650x the size of a request, which could theoretically be weaponized to create 1 Gbps+ DDoS attacks. Awesome.

Stacking Attacks

Of course none of these techniques exists in a vacuum. Sometimes we see them pounding a target directly, while other times attackers combine reflection and amplification to hammer a target. All the tactics in our Attacks post are in play, and magnification takes them to a new level. The underlying issue is that these attacks are enabled by sloppy network hygiene on the part of Internet service providers, who allow spoofed IP addresses for these protocols and don't block flood attacks. These issues are largely beyond the control of a typical enterprise target, leaving victims with little option but to respond with a bigger pipe to absorb the attack. We will wrap up tomorrow with a look at the options for mitigating these attacks.
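To put the leverage in perspective, here is a back-of-the-envelope sketch in Python using the figures above. The server count and per-server rate come straight from the CloudFlare report cited in the post; the 200x factor is the approximate NTP amplification mentioned, so treat the results as illustrative arithmetic rather than measurements:

```python
# Back-of-the-envelope check of the CloudFlare-reported NTP attack numbers.
ntp_servers = 4529        # reflectors used in the attack (from the post)
per_server_mbps = 87      # average traffic each sent to the victim (from the post)
amplification = 200       # approximate NTP monlist amplification factor

total_gbps = ntp_servers * per_server_mbps / 1000
print(f"Aggregate attack traffic: ~{total_gbps:.0f} Gbps")            # ~394 Gbps

# With ~200x amplification, the attacker only needs to source a tiny
# fraction of that volume as spoofed requests:
origin_gbps = total_gbps / amplification
print(f"Spoofed request traffic required: ~{origin_gbps:.1f} Gbps")   # ~2.0 Gbps
```

That last number is the real story: roughly 2 Gbps of spoofed requests – well within reach of a single well-connected device – is enough to generate a 400 Gbps flood.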


Defending Against DDoS: Attacks

As we discussed in our Introduction to Defending Against Network-based Distributed Denial of Service Attacks, DDoS is a blunt force instrument for many adversaries, so organizations need to remain vigilant against these attacks. There is not much elegance in a volumetric attack – adversaries impact network availability by consuming all the bandwidth into a site and/or by knocking down network and security devices, overwhelming their ability to handle the traffic onslaught. Today's traditional network and security devices (routers, firewalls, IPS, etc.) were not designed to handle these attacks, nor were network architectures built to easily decipher attack traffic and keep legitimate traffic flowing. So an additional layer of products and services has emerged to protect networks from DDoS attacks. But first things first: before we dig into ways to deal with these attacks, let's understand the types of attacks and how attackers assemble resources to blast networks to virtual oblivion.

The Attacks

The first category of DDoS attacks is the straightforward flood. Attackers use tools that send requests using specific protocols or packets (SYN, ICMP, UDP, and NTP are the most popular) but never acknowledge the responses. If enough attacking computers send requests to a site, its bandwidth can quickly be exhausted. Even if bandwidth is sufficient, on-site network and security devices need to maintain session state while continuing to handle additional (legitimate) inbound session requests. Despite the simplicity of the tactic, floods continue to be very effective at overwhelming targets.

Increasingly we see the DNS infrastructure targeted by DDoS attacks. This prevents the network from successfully routing traffic from point A to point B, because the map is gone. As with floods, attackers can overwhelm DNS by blasting it with traffic, especially because DNS infrastructure has not scaled to keep pace with overall Internet traffic growth. DNS has other frailties which make it an easy target for DDoS. Like the shopping cart and search attacks we highlighted for Application DoS, legitimate DNS queries can also overwhelm the DNS service and knock down a site. These attacks target weaknesses in the DNS system, where a single request for resolution can trigger 4-5 additional DNS requests. This leverage can overwhelm domain name servers – we will dig into magnification tactics later in this series. Similarly, attackers may request addresses for hosts that do not exist, causing the targeted servers to waste resources passing on the requests and polluting caches with garbage, further impairing performance.

Finally, HTTP continues to be a popular target for floods and other application-oriented attacks that take advantage of inherent protocol weaknesses. We discussed slow HTTP attacks in our Application Denial of Service series, so we won't rehash the details here, but any remediation for volumetric attacks should alleviate slow HTTP attacks as well.

Assembling the Army

To launch a volumetric attack, an adversary needs devices across the Internet to pound the victim with traffic. Where do these devices come from? If you were playing Jeopardy the correct response would be "What is a bot network, Alex?" Consumer devices continue to be compromised and monetized at an increasing rate, driven by increasingly sophisticated malware and the lack of innovation in consumer endpoint protection. These compromised devices generate the bulk of DDoS traffic.
Of course attackers need to be careful – Internet Service Providers are increasingly sensitive to consumer devices streaming huge amounts of traffic at arbitrary sites, and take devices off the network when they find violations of their terms of service. Bot masters use increasingly sophisticated algorithms to control their compromised devices, to protect them from detection and remediation. Another limitation of consumer devices is their limited bandwidth, particularly upstream. Bandwidth continues to grow around the world, but DDoS attackers still hit capacity constraints.

DDoS attackers work around these limitations of consumer devices by compromising servers to blast targets instead. Given the millions of businesses with vulnerable Internet-facing devices, it is unfortunately trivial for attackers to compromise some. Servers tend to have much higher upstream bandwidth, so they are better at serving up malware, commanding and controlling bot nodes, and launching direct attacks.

Attackers are now moving a step beyond conventional servers, capitalizing on cloud services to change their economics. Cloud servers – particularly Infrastructure as a Service (IaaS) instances – are inherently Internet-facing and often poorly configured. And of course cloud servers have substantial bandwidth. For network attacks, a cloud server is like a conventional server on steroids – DDoS attackers see major gains in both efficiency and leverage. To be fair, the better-established cloud providers take great pains to identify compromised devices and notify customers when they notice something amiss. You can check out Rich's story of how Amazon proactively notified us of a different kind of issue – they do watch for traffic patterns that indicate misuse. Unfortunately, by the time misuse is detected by a cloud provider, server owner, or other host, it may be too late. It doesn't take long to knock a site offline.

And attackers without the resources or desire to assemble and manage botnets can just rent them. Yes, a number of folks offer DDoS as a service (DDoSaaS, for the acronym hounds), so it couldn't be easier for attackers to harness the resources to knock down a victim. Nor is it expensive: McAfee recorded DDoS pricing from $2 per hour for short attacks, up to $1,000 to take a site down for a month. It is a bit scary to think you could knock someone's site down for an hour for less than the price of a cup of coffee. But when you take a step back and consider the easy availability of compromised consumer devices, servers, and cloud servers, DDoS is a very easy service to add to an attacker's arsenal.

Our next post will discuss tactics for magnifying the impact of a DDoS attack – reflection and amplification – which make attacks an order of magnitude more effective.
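To see why attackers covet servers and cloud instances, consider a rough sizing sketch. The upstream bandwidth figures below are illustrative assumptions on my part, not data from the post – the point is the ratio between device classes, not the absolute numbers:

```python
# Rough sizing: how many compromised devices does it take to fill a pipe?
# Upstream bandwidth figures are illustrative assumptions, not measurements.

target_gbps = 10    # hypothetical victim pipe to saturate

upstream_mbps = {
    "consumer bot": 1,            # assumed typical home upstream
    "compromised server": 100,    # assumed share of a datacenter uplink
    "cloud instance": 1000,       # assumed IaaS instance bandwidth
}

for device, mbps in upstream_mbps.items():
    count = target_gbps * 1000 / mbps
    print(f"{device:>20}: ~{count:,.0f} devices for a {target_gbps} Gbps flood")
```

Under these assumptions, one cloud instance does the work of a thousand consumer bots – which is exactly the economic shift described above.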


Security Sharing

I really like that some organizations are getting more open about sharing information regarding their security successes and failures. Prezi came clean about getting pwned as part of their bug bounty program. They described the bug, how they learned about it, and how they fixed it. We can all learn from this stuff.

Facebook talked about their red team exercise last year, and now they are talking about how they leverage threat intelligence. They describe their 3-tier architecture to process intel and respond to threats. Of course they have staff to track down issues as they are happening, which is what really makes the process effective. Great alerts with no response don't really help. You can probably find a retailer to ask about that…

I also facilitated a CISO roundtable where a defense sector attendee offered to share his indicators with the group via a private email list. So clearly this sharing thing is gaining steam, and that is great. So why now? What has changed to make sharing information more palatable? Many folks would say it's the only way to deal with advanced adversaries. Which is true, but I don't think that's the primary motivation. It certainly got the ball rolling, and pushed folks to want to share. But it has typically been general counsels and other paper pushers preventing discussion of security issues and sharing of threat information.

My hypothesis is that these folks finally realized they have very little to lose by sharing. Companies have to disclose breaches, so that's public information. Malware samples and the associated indicators of attack provide little to no advantage to the folks holding them close to the vest – by the time anything gets shared, the victim organization has already remediated the issue and put workarounds in place. I think security folks (and their senior management) finally understand that. Or at least are starting to, because you still see folks who will only share on 'private' fora or within very controlled groups.

Of course there are exceptions. If an organization can monetize the data, either by selling it or using it to hack someone else (yes, that happens from time to time), they aren't sharing anything. But in general we will see much more sharing moving forward. Which is great. I guess it is true that everything we need to know we learned in kindergarten.

Photo credit: "Sharing" originally uploaded by Toban Black


Friday Summary: March 28, 2014—Cloud Wars

Begun, the cloud war has. We have been talking about cloud computing for a few years now on this blog, but in terms of market maturity it is still early days. We are really entering the equivalent of the second inning of a much longer game – it won't be over for a long time, and things are just now getting really interesting. In case you missed it, the AWS Summit began this week in San Francisco, with Amazon announcing several new services and advances. But the headline of the week was Google's announced price cuts for their cloud services:

Google Compute Engine is seeing a 32 percent reduction in prices across all regions, sizes and classes. App Engine prices are down 30 percent, and the company is also simplifying its price structure. The price of cloud storage is dropping a whopping 68 percent, to just $0.026/month per gigabyte ($0.02/month per gigabyte for DRA). The new base price is lower than the discounted rate previously available only to those who stored more than 4,500TB of data in Google's cloud.

Shortly thereafter Amazon countered with their own price reductions – something we figured they were prepared to do, but didn't intend to announce during the event. Amazon has been more focused on methodically delivering new AWS functionality, outpacing all rivals by a wide margin. More importantly, Amazon has systematically removed impediments to enterprise adoption around security and compliance. But while we feel Amazon has a clear lead in the market, Google has been rapidly improving. Our own David Mortman pointed out several more interesting aspects of the Google announcement, lost in the pricing war noise:

"The thing isn't just the lower pricing. It's the lower pricing with automatic 'reserve instances' and the managed VM offering, so you can integrate Google Compute Engine (GCE) and Google App Engine. Add in free git repositories for managing the GCE infrastructure and support for doing that via GitHub – we're seeing some very interesting features to challenge AWS. GOOG is still young at offering this as an external service, but talk about giving notice…"

Competition is good! This all completely overshadowed Cisco's plans to pour $1B into an OpenStack-based "Network of Clouds". None of this is really security news, but doubling down on cloud investments, and clearly targeting DevOps teams with new services, make it clear where vendors think this market is headed. And Google's "nut shot" shows that the battle is really heating up.

On to the Summary, where several of us had more than one favorite external post:

Favorite Securosis Posts

• Adrian Lane: Incite 3/26/2014: One Night Stand. All I could think of when I read this was Rev. Horton Heat's song "Eat Steak".
• Mike Rothman: Firestarter: The End of Full Disclosure.

Other Securosis Posts

• Mike's Upcoming Webcasts.
• Friday Summary: March 21, 2014 – IAM Mosaic Edition.

Favorite Outside Posts

• Gal Shpantzer: Why Google Flu is a failure: the hubris of big data.
• Adrian Lane: Canaries are Great!
• David Mortman: Primer on Showing Empathy in the Tech Industry.
• Gunnar: Making Sure Your Security Advice and Decisions are Relevant. "Information security professionals often complain that executives ignore their advice. There could be many reasons for this. One explanation might be that you are presenting your concerns or recommendations in the wrong business context. You're more likely to be heard if you relate the risks to an economic moat relevant to your company."
• Gunnar: Cyberattacks Give Lift to Insurance. The cybersecurity market is growing: "The Target data breach was the equivalent of 10 free Super Bowl ads".
• Mike Rothman: You've Probably Been Pouring Guinness Beer The Wrong Way Your Whole Life. As a Guinness lover, this is critical information to share. Absolutely critical.
• Mike Rothman: Data suggests Android malware threat greatly overhyped. But that won't stop most security vendors from continuing to throw indiscriminate Android FUD.

Research Reports and Presentations

• Reducing Attack Surface with Application Control.
• Leveraging Threat Intelligence in Security Monitoring.
• The Future of Security: The Trends and Technologies Transforming Security.
• Security Analytics with Big Data.
• Security Management 2.5: Replacing Your SIEM Yet?
• Defending Data on iOS 7.
• Eliminate Surprises with Security Assurance and Testing.
• What CISOs Need to Know about Cloud Computing.
• Defending Against Application Denial of Service Attacks.
• Executive Guide to Pragmatic Network Security Management.

Top News and Posts

• Apple and Google's wage-fixing. Not security, but interesting.
• Google Announces Massive Price Drops for Cloud.
• Cisco plans $1B investment in global cloud infrastructure.
• Microsoft Security Advisory: Microsoft Word Under Siege.
• Chicago's Trustwave sued over Target data breach.

Blog Comment of the Week

This week's best comment goes to Marco Tietz, in response to Friday Summary: IAM Mosaic Edition.

"Thanks Adrian, it looks like you captured the essence of the problem. IAM is very fragmented and getting everything to play together nicely is quite challenging. Heck, just sorting it out corp internal is challenging enough without even going to the Interwebs. This is clearly something we need to get better at, if we are serious about 'The Cloud'."


Analysis of Visa’s Proposed Tokenization Spec

Visa, Mastercard, and Europay – together known as EMVCo – published a new specification for Payment Tokenisation this month. Tokenization is a proven security technology, which has been adopted by a couple hundred thousand merchants to reduce PCI audit costs and the security exposure of storing credit card information. That said, there has never really been a tokenization standard, for payments or otherwise. Even the PCI-DSS standard does not address tokenization, so companies have employed everything from hashed credit card (PAN) values (craptastic!) to very elaborate and highly secure random-value tokenization systems. This new specification is intended both to raise the bar on schlock home-grown token solutions and, more importantly, to address fraud in existing and emerging payment systems.

I don't expect many of you to read 85 pages of token system design to determine what it really means, whether there are significant deficiencies, or whether these are the best approaches to solving payment security and fraud issues, so I will summarize here. But I expect this specification to last, so if you build tokenization solutions for a living you had best get familiar with it. For the rest of you, here are some highlights of the proposed specification:

• As you would expect, the specification requires the token format to be similar to credit card numbers (13-19 digits) and to pass LUHN validation.
• Unlike the financial tokens used today – and at odds with the PCI specification, I might add – tokens can be used to initiate payments.
• Tokens are merchant or payment network specific, so they are only relevant within a specific domain.
• For most use cases the PAN remains private between issuer and customer. The token becomes a payment object shared between merchants, payment processors, the customer, and possibly others within the domain.
• There is an identity verification process to validate the requestor of a token, each time a token is requested.
• The type of token generated varies based on risk analysis – higher risk factors mean a lower-assurance token!
• When tokens are used as payment objects, there are "Data Elements" – think of them as metadata describing the token – to buttress security. These include a cryptographic nonce, payment network data, and the token assurance level.

Each of these points has ramifications across the entire tokenization ecosystem, so your old tokenization platform is unlikely to meet these requirements. That said, the specification is designed to work within today's payment systems while addressing near-term emerging security needs.

Don't let the misspelled title fool you – this is a good specification! Unlike the PCI's "Tokenization Guidance" paper from 2011 – rumored to have been drafted by Visa – this is a really well thought out document. It is clear that whoever wrote it has been thinking about tokenization for payments for a long time, and they have done a good job of providing functions to support all the use cases the specification needs to address. There are facilities and features to address PAN privacy, mobile payments, repayments, EMV/smartcard, and even card-not-present web transactions. And it does not address one single audience to the detriment of others – the needs of all the significant stakeholders are addressed in some way. Still, NFC payments seem to be the principal driver; the process and data elements only really gel when considered from that perspective. I expect this standard to stick.
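Since the spec requires tokens to look like PANs – 13-19 digits that pass the Luhn check – here is a minimal sketch of that validation in Python. The Luhn algorithm itself is standard; the example token value is a well-known test number, not anything from the specification:

```python
def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the Luhn checksum."""
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        doubled = digits[i] * 2
        digits[i] = doubled - 9 if doubled > 9 else doubled
    return sum(digits) % 10 == 0

def token_format_ok(token: str) -> bool:
    # Per the spec: 13-19 digits that pass the Luhn check, like a real PAN.
    return token.isdigit() and 13 <= len(token) <= 19 and luhn_valid(token)

print(token_format_ok("4111111111111111"))   # True  -- a well-known test PAN
print(token_format_ok("4111111111111112"))   # False -- fails the Luhn check
```

The format requirement is what lets tokens flow through existing payment plumbing unchanged – any system that validates PANs today will accept a well-formed token.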


Mike’s Upcoming Webcasts

After being on the road for what seems like a long time (mostly because it was), I will be doing two webcasts next week which you should check out.

Disruption Ahead: How Tectonic Technology Shifts Will Change Network Security. Next Tuesday (April 1 at 11 am ET) I will be applying our Future of Security concepts to the network security business. Tufin's Reuven Harrison will be riding shotgun, and we will have a spirited Q&A after my talk to discuss some of the trends he is seeing in the field. Register for this talk.

Security Management 2.5: Replacing Your SIEM Yet? On Wednesday, April 2 at 11 am ET I will be covering our recent SIEM 2.5 research on a webcast with our friends at IBM. I will be homing in on the forensics and security analytics capabilities of next-generation SIEM. You can register for that event as well.

See you there, right?

UPDATE: I added the links. Driver error.


Incite 3/26/2014: One Night Stand

There is no easy way to say this. I violated a vow I made years ago. It wasn't a spur of the moment thing. I had been considering how to do it, without feeling too bad, for a few weeks. The facts are the facts. No use trying to obscure my transgression. I cheated. If I'm being honest, after it happened I didn't feel bad. Not for long anyway.

This past weekend, I ate both steak and bacon – after deciding to stop eating meat and chicken almost 6 years ago. Of course there is a story behind it. Basically I was in NYC celebrating a close friend's 45th birthday, and we were going to Peter Luger's famous steakhouse. Fish isn't really an option there, and the birthday boy hadn't eaten any red meat in over 20 years. Another guy in the party had never eaten bacon. Never! So we made a pact. We would all eat the steak and bacon. And we would enjoy it. It was a one night stand. I knew it would be – it meant nothing to me.

I have to say the steak was good. The bacon was too. But it wasn't that good. I enjoyed it, but I realized I don't miss it. It didn't fulfill me in any way. And if I can't get excited about a Peter Luger steak, there isn't much chance of me going back to my carnivorous ways. Even better, my stomach was okay. I was nervously awaiting the explosive alimentary fallout that goes along with eating something like a steak after 6 years. The familiar indigestion did return during the night, which was kind of annoying – that has been largely absent for the past 6 years – but otherwise I felt good. I didn't cramp, nor did I have to make hourly trips to the loo. Yes, that's too much information, but I guess my iron stomach hasn't lost it.

To be candid, the meat was the least of my problems over the weekend. It was the Vitamin G and the Saturday afternoon visit to McSorley's Old Ale House that did the damage. My liver ran a marathon over the weekend. One of our group estimated we might each have put down 2 gallons of beer on Saturday. That may be an exaggeration, but it may not be. I have no way to tell. And that's the way it should be on Boys' Weekend. Now I get to start counting days not eating meat again. I'm up to 5 days, and I think I'll be faithful for a while…

–Mike

Photo credit: "NoHo Arts District 052309" originally uploaded by vmiramontes

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

• March 19 – An Irish Wake
• March 11 – RSA Postmortem
• Feb 21 – Happy Hour – RSA 2014
• Feb 17 – Payment Madness
• Feb 10 – Mass Media Abuse
• Feb 03 – Inevitable Doom
• Jan 27 – Government Influence
• Jan 20 – Target and Antivirus
• Jan 13 – Crisis Communications

2014 RSA Conference Guide

In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it's really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
Defending Against Network Distributed Denial of Service Attacks
• Introduction

Advanced Endpoint and Server Protection
• Quick Wins
• Detection/Investigation
• Prevention
• Assessment
• Introduction

Newly Published Papers
• Reducing Attack Surface with Application Control
• Leveraging Threat Intelligence in Security Monitoring
• The Future of Security
• Security Management 2.5: Replacing Your SIEM Yet?
• Defending Data on iOS 7
• Eliminating Surprises with Security Assurance and Testing
• What CISOs Need to Know about Cloud Computing

Incite 4 U

Palo Alto Does Endpoints: It was only a matter of time. After the big FireEye/Mandiant deal and Bit9/Carbon Black, Palo Alto Networks needed to respond. So they bought a small Israeli start-up named Cyvera for $200 million! And I thought valuations were only nutty in the consumer Internet market. Not so much. But no company can really have a comprehensive advanced malware story without technology on both the network and endpoints. So PANW made the move, and now they need to figure out how to sell endpoint agents, which are a little bit different than boxes in the perimeter… – MR

Payment Tokenization Evolution: EMVCo – the Visa, Mastercard, and Europay 'standards' organization – has released the technical architecture for a proposed Payment Tokenisation Specification, which will alter payment security around the globe over the coming years. The framework is flexible enough both to enable Near Field Communication (NFC, aka mobile payments) and to help combat Card Not Present fraud – the two publicly cited reasons for the card brands to create a tokenization standard in parallel with promotion of EMV-style "smart cards" in the US. The huge jump in recent transactional fraud rates demands some response, and this looks like a good step forward. The specification does not supersede use of credit card numbers (PAN) for payment yet, but it would enable merchants to support either PAN or tokens for payment. And this would be done either through NFC – replacing a credit card with a mobile device – or via wallet software (either a mobile or desktop application). For those of you interested in the more technical side of the solution, download the paper and look at the token format! They basically create a unique digital certificate for each transaction, which embeds merchant and payment network data, and wrap it with a signature. And somewhere in the back office the payment gateways/acquirers (merchant banks) or third-party services will manage a token vault. More to come – this warrants detailed posts. –
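In the meantime, to make the "unique digital certificate per transaction" idea concrete, here is a hypothetical sketch of what a signed token payload could look like. The field names and the use of HMAC are my illustrative assumptions – the actual specification defines its own data elements and cryptography:

```python
import hmac, hashlib, json, os

# Hypothetical sketch: a per-transaction payment token with embedded
# "data elements" (metadata) wrapped by a signature. Field names and the
# HMAC construction are illustrative assumptions, not the EMVCo format.

vault_key = os.urandom(32)   # stand-in for a key held by the token vault

data_elements = {
    "token": "4111111111111111",     # 13-19 digit, Luhn-valid surrogate PAN
    "merchant_id": "MERCHANT-0042",  # domain restriction: merchant-specific
    "payment_network": "VISA",
    "assurance_level": "high",       # driven by ID&V / risk analysis
    "nonce": os.urandom(8).hex(),    # cryptographic nonce, one per transaction
}

payload = json.dumps(data_elements, sort_keys=True).encode()
signature = hmac.new(vault_key, payload, hashlib.sha256).hexdigest()

print(payload.decode())
print("signature:", signature)
```

The key design point survives the simplification: the token only means something within its domain, and tampering with any embedded data element invalidates the signature.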


Firestarter: The End of Full Disclosure

Last week we held a wake for Windows XP. This week we continue that trend, as we discuss the end of yet another era – coincidentally linked to XP. Last week the venerable Thunderdome of security lists bid adieu, as the Full Disclosure list suddenly shut down. And yes, this discussion is about more than just one email list going bye-bye. The audio-only version is up too.


Friday Summary: March 21, 2014—IAM Mosaic Edition

Researching and writing about identity and access management over the last three years has made one thing clear: this is a horrifically fragmented market. Lots and lots of vendors assemble a bunch of pieces to form a 'vision' of how customers want to extend identity services outside the corporate perimeter – to the cloud, mobile, and whatever else they need. And for every possible thing you might want to do, there are three or more approaches. Very confusing.

I have had it in mind for several months to create a diagram that illustrates all the IAM features available out there, along with how they all link together. About a month ago Gunnar Peterson started talking about creating an "identity mosaic" to show how all the pieces fit together. As with many subjects, Gunnar and I were of one mind on this: we need a way to show the entire IAM landscape. I wanted to do something quick to show the basic data flows and demystify which protocols do what. Here is my rough cut at diagramming the current state of the IAM space.

But when I sent the rough cut over to Gunnar, he responded with:

"Only peril can bring the French together. One can't impose unity out of the blue on a country that has 265 different kinds of cheese." – Charles de Gaulle

Something as basic as 'auth' isn't simple at all. Just like the aisles in a high-end cheese shop – with all the confusing labels and mingled aromas, and the sneering cheese agent who cannot contain his disgust that you don't know Camembert from Shinola – identity products are unfathomable to most people (including IT practitioners). And no one has been able to impose order on the identity market. We have incorrectly predicted several times that recent security events would herd identity cats – er, vendors – in a single unified direction. We were wrong. We continue to swim in a market with a couple hundred features but no unified approach. Which is another way of saying it is very hard to present this market to end users in a way that makes sense.

A couple points on the diagram:

• This is a work in progress. Critique and suggestions are encouraged.
• There are many pieces to this puzzle, and I left a couple things out which I probably should not have. LDAP replication? Anyone?
• Note that I did not include authorization protocols, roles, attributes, or other entitlement approaches!
• Yes, I know I suck at graphics.

Gunnar is working on a mosaic that will be a huge four-dimensional variation on Eve Maler's identity Venn diagram, but it requires Oculus Rift virtual reality goggles. Actually he will probably have his kids build it as a science project, but I digress. Do let us know what you think.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Mort quoted in Network World.

Favorite Securosis Posts

• Mike Rothman: Firestarter: An Irish Wake.
• Most of us chose this one: Jennifer Minella Is Now a Securosis Contributing Analyst.

Other Securosis Posts

• Incite 3/18/2014: Yo Mama!
• Webinar Tomorrow: What Security Pros Need to Know About Cloud.
• Defending Against Network Distributed Denial of Service Attacks [New Series].
• Reminder: We all live in glass houses.
• New Paper: Reducing Attack Surface with Application Control.

Favorite Outside Posts

• A Few Lessons From Sherlock Holmes. Great post here about some of the wisdom of Sherlock that can help improve your own thinking.
• Gunnar: Project Loon. Cloud? Let's talk stratosphere and balloons – that's what happens when you combine the Internet with the Montgolfiers.
• Adrian Lane: It's not my birthday. I was going to pick Weev's lawyers appear in court by Robert Graham as this week's Fav, but Rik Ferguson's post on sites that capture B-Day information struck an emotional chord – this has been a peeve of mine for years. I leave the wrong date at every site, and record which is which, so I know what's what.
• Gal Shpantzer: Nun sentenced to three years, men receive five. Please read the story – it's informative and goes into the judge's sentencing considerations, based on the histories of the convicted protesters and the requests of the defense and prosecution. One of them was released in January 2012 for a previous trespass. At Y-12…
• David Mortman: Trust me: The DevOps Movement fits perfectly with ITSM. Yes, trust him. He's The Real Gene Kim!

Research Reports and Presentations

• Reducing Attack Surface with Application Control.
• Leveraging Threat Intelligence in Security Monitoring.
• The Future of Security: The Trends and Technologies Transforming Security.
• Security Analytics with Big Data.
• Security Management 2.5: Replacing Your SIEM Yet?
• Defending Data on iOS 7.
• Eliminate Surprises with Security Assurance and Testing.
• What CISOs Need to Know about Cloud Computing.
• Defending Against Application Denial of Service Attacks.
• Executive Guide to Pragmatic Network Security Management.

Top News and Posts

• 110,000 WordPress Databases Exposed.
• Whitehat Security's Aviator browser is coming to Windows.
• Missing the (opportunity of) Target.
• PWN2OWN Results.
• Symantec CEO fired. The official 'CEO Transition' press release.
• This Is Why Apple Enables Bluetooth Every Time You Update iOS.
• Threat Advisory: PHP-CGI At Your Command.
• IBM says no NSA backdoors in its products.
• Google DNS Hijack.
• 14% of Starbucks transactions are now made with a mobile device. And what the heck is a "Chief Digital Officer"?
• New Jersey Boy Climbs to Top of 1 World Trade Center.
• Are Nation States Responsible for Evil Traffic Leaving Their Networks?
• Full Disclosure shuts down.
• NSA program monitors content of all calls. Country details not provided.


Jennifer Minella Is Now a Contributing Analyst

We are always pretty happy-go-lucky around here, but some days we are really happy. Today is one of those days. As you probably grasped from the headline, we are insanely excited to announce that Jennifer 'JJ' Minella is now a Contributing Analyst here at Securosis.

JJ has some of the deepest technical and product knowledge of anyone we know, on top of a strong grounding as a security generalist. As a security engineer she has implemented countless products in various organizations. She is also a heck of a good speaker and writer, able to translate complex topics into understandable chunks for non-techie types. There is a reason she worked her way up to the executive ranks. JJ also has one of the most refined BS sensors in the industry. Seems like a good fit, eh?

This is actually a weird situation, because we always wanted to have her on the team but figured she was too busy to ask. Mike and JJ even worked together for months on their RSA presentation. It was classic over-analysis – she didn't hesitate when we finally brought it up. Okay, probably over beers at RSA, which is how a lot of our major decisions are made.

JJ joins David Mortman, Gunnar Peterson, James Arlen, Dave Lewis, and Gal Shpantzer as a contributor. Mike, Adrian, and I feel very lucky to have such an amazing group of security pros practically volunteer their time to work with us and keep the research real.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.