XP Users Twisting in the Wind

Windows XP’s recent end of life has garnered a bit of industry recognition – mostly from vendors pushing controls to lock down the ancient operating system. Folks who are stuck on XP are, well, stuck. And now there is a new exploit in the wild that takes advantage of IE, so what are XP users to do?

About 30% of all desktops are thought to still be running Windows XP, and analysts have previously warned that those users would be vulnerable to attacks from cyber-thieves. Microsoft has suggested businesses and consumers still using the system should upgrade to a newer alternative.

Twist in the wind, that’s what, at least according to Microsoft. That’s their answer: upgrade. If that’s not an option, lock down the device using a technology like privilege management or full application whitelisting. If those aren’t options either, you had better get some good forensics tools, because those devices will be owned – sooner rather than later. This is the first unpatched XP issue in the new regime. It won’t be the last.

Photo credit: “Four storms and a twister” originally uploaded by JD Hancock


NoSQL Security: Understanding NoSQL Platforms

I started this series on recommendations for securing NoSQL clusters a couple weeks ago, so sorry for the delay posting the rest of the series. I had some difficulty contacting the people I spoke with during the first part of this “big data” research project, and some vendors have been slow to respond with current product capabilities. As I hoped, launching this series “shook the tree of knowledge”, and several people responded to my inquiries. It has taken a little more time than I thought to schedule calls and parse through the data, but I am finally restarting, and should be able to quickly post the rest of the research. The first step is to describe what NoSQL is and how it differs from the other databases you have been protecting, so you can understand the challenges. Let’s get started…

NoSQL Overview

In our last research paper on this subject, we defined NoSQL platforms with a set of characteristics which differentiate them from big iron/proprietary MPP/“cloud in the box” platforms. Yes, some folks slapped “big data” stickers on the same ol’ stuff they have always sold, but most people we speak with now understand that those platforms are not Hadoop-like, so we can dispense with discussion of the essential characteristics. If you need to characterize NoSQL platforms, please review our introductory sections on securing big data clusters.

Fundamentally, NoSQL is clustered data analytics and management. And please stop calling it “big data” – we are really discussing a building-block approach to databases. Rather than the packaged relational systems we have grown accustomed to over the last two decades, we now assemble different pieces (data management, data storage, orchestration, etc.) to satisfy specific requirements. The early architects of this trend had a chip on their collective shoulders, and they were trying to escape the shadow of ubiquitous relational databases, so they called their movement ‘NoSQL’. That term nicely illustrates their opposition to relational platforms – which is not to say these platforms cannot support SQL; many in fact do support Structured Query Language syntax. Worse, the term “big data” was used by the press to describe this trend, assigning a label taken from the most obvious – but not most important – characteristic of these platforms. Unfortunately that term serves the movement poorly. ‘NoSQL’ is not much better, but it is a step in the right direction.

These databases can be tailored to focus on speed, size, analytic capabilities, failsafe operation, or some other goal, and they enable computation on a massive scale for very little money. But just as importantly, they are fully customizable to meet different needs – often simultaneously!

So what does a NoSQL database look like? The “poster child” is Hadoop. The Hadoop framework (the combination of the Hadoop File System (HDFS) with other services such as YARN, Hive, and Pig) is the general architecture employed by most NoSQL clusters, but many more are in wide use – including Cassandra, MongoDB, Couch, AWS SimpleDB, and Riak. There are over 125 known variations, but those named above account for the majority of customer usage today. NoSQL platforms scale and perform so well because of two key principles: distribution of data management and query processing across many servers (possibly thousands), combined with a modular architecture that allows different services to be swapped in as needed. Architecturally, it is useful to think of the Hadoop framework as a ‘stack’, much like the famous LAMP stack.
These pieces are normally grouped together, but you can mix and match and add to the stack as needed. For example, Sqoop and Hive are replacement data access services. You can select a big data environment specifically to support columnar, graph, document, XML, or multidimensional data – all collectively called ‘NoSQL’ because they are not constrained by relational database constructs or a relational query parser. You can install different query engines depending on the type of data being stored. Or you can extend HDFS functionality with logging tools like Scribe. The entire stack can be configured and extended as needed. I have included Lustre, GFS, and GPFS, as they are all technically alternatives to HDFS, but they are not as widely used. The point is that this modular approach offers great flexibility at the expense of more difficult security, because each component brings its own security capabilities and deficiencies. We get big, cheap, and easy data management and processing – with lagging security capabilities.

There are many variables in play, and it seems like every customer uses a slightly different setup – tweaking for performance, manageability, and programmer preference. Many of the people we spoke with are on their second or third stack architecture, having replaced components which did not scale or perform as needed. This flexibility is great for functionality, but it makes it much more difficult for third-party vendors to produce monitoring or configuration assessment tools. There are few constants (or even knowns) for NoSQL clusters, and things are too chaotic to definitively identify best or worst practices across all configurations.

For convenient reference, here are the key differences between relational and NoSQL platforms which impact security:

  • Relational platforms typically scale by replacing a server with a larger one, rather than by adding many new servers.
  • Relational systems have a “walled garden” security model: you attach to the database through well-defined interfaces, but internal workings are generally not exposed.
  • Relational platforms come with many tools such as built-in encryption, SQL validation, centralized administration, full support for identity management, built-in roles, administrative segregation of duties, and labeling capabilities.

You can add many of these features to a NoSQL cluster, but you still face the fundamental problem of securing a dynamic constellation of many servers, which makes configuration management, patching, and server validation particularly challenging. Despite a few security detractors, NoSQL facilitates data management and very fast analysis on large-scale data warehouses, at very low cost. Cheap, fast, and easy are the three pillars this movement has been built upon – data analytics for the masses.
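To make that last point about lagging security capabilities concrete, here is a minimal sketch of how a client talks to an HDFS cluster that has not been locked down – the out-of-the-box state for many Hadoop distributions. It uses the community Python hdfs (WebHDFS) client package; the NameNode address, user name, and paths are placeholder assumptions for illustration, not a real environment.

```python
# Minimal sketch (illustrative only): reading data from an HDFS cluster
# running with default "simple" authentication -- no Kerberos, no TLS.
# Requires the community WebHDFS client:  pip install hdfs
# The host, user, and paths below are placeholders, not a real cluster.
from hdfs import InsecureClient

# With simple auth, WebHDFS trusts whatever user name the client claims.
client = InsecureClient('http://namenode.example.com:50070', user='hdfs')

# List a directory and read part of a file -- no credentials required.
for entry in client.list('/data/warehouse'):
    print(entry)

with client.read('/data/warehouse/customers.csv') as reader:
    print(reader.read(1024))
```

The point is not that WebHDFS is uniquely weak, but that unless Kerberos is configured it simply trusts the claimed user name – and every other component bolted onto the stack (Hive, HBase, Sqoop, and so on) has its own separate authentication and authorization story to configure.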


Firestarter: The Verizon DBIR

After missing a week, Rich, Mike, and Adrian return to talk about birthdays, the annual Verizon Data Breach Investigations Report, and child-induced alcohol consumption. The audio-only version is up too.


Defending Against Network-based Distributed Denial of Service Attacks [New Paper]

What’s a couple hundred gigabits per second of traffic between friends, right? Because that is the magnitude of recent volumetric denial of service attacks, which means regardless of who you are, you need a plan to deal with that kind of onslaught. Regardless of motivation, attackers now have faster networks, bigger botnets, and increasingly effective tactics to magnify the impact of their DDoS attacks – organizations can no longer afford to ignore them.

In Defending Against Network-based Distributed Denial of Service Attacks we dig into the attacks, and the tactics now being used to magnify those attacks to unprecedented volumes. We also go through your options to mitigate the attacks, and the processes needed to minimize downtime. To steal our own thunder, the conclusion is pretty straightforward:

Of course there are trade-offs with DDoS defense, as with everything. Selecting an optimal mix of defensive tactics requires some adversary analysis, an honest and objective assessment of just how much downtime is survivable, and a clear understanding of what you can pay to restore service quickly.

We owe a debt of gratitude to A10 Networks for licensing this content and supporting our research. We make this point frequently, but without security companies understanding and getting behind our Totally Transparent Research model, you wouldn’t be able to enjoy our research. Check out the paper’s permanent landing page, or download it directly (PDF).


Summary: Time and Tourists

Rich here.

Travel is about as close as any of us get to a time machine. Leave home, step into an airport, and you step out of your life, even in our hyper-connected world. Sure, you are still on email, still talking to your family over the phone or Skype/FaceTime, and still surrounded by screens spewing endless worthless updates on the tragedy du jour, but fundamentally you are cut off – from your normal life, daily patterns, and state of mind. It isn’t ‘bad’, but it is unavoidable, no matter how closely you hew to your familiar habits. Can you guess I am writing this Summary on an airplane? Yeah, go figure.

Yesterday I finished the last trip in a string of travel that has kept me moving nearly every week since before the RSA Conference in February. To be honest I haven’t really had a break since sometime before Thanksgiving. On top of the travel I have finished some of the more intense yet fulfilling research and projects of my career. It is cool to go from my first little 30-minute cloud presentation four or five years ago, to advising cloud providers on their technical security architectures and controls. I now get two weeks in a row at home before I knock out my next couple trips, with no behind-schedule project deliverables hovering on the horizon.

While travel disconnects you from your life, it also spurs innovation and creativity by placing you in new environments, making new personal connections, and providing ample time for deep thoughts. Throughout this travel binge I have been speaking to tons of security and non-security IT pros around the world, getting a really good feel for what is happening at multiple levels of the industry – mostly in my focus areas of cloud and DevOps.

One thing that has popped out is that most cloud providers… aren’t. I have been seeing a ton of companies advertising themselves as Infrastructure as a Service when they are really little more than remote hosting/colo options. They don’t include any autoscaling capabilities, and they tend to define ‘elastic’ as “click a bunch of stuff to launch a new virtual machine by hand”. Digging deeper, some of them lack the fundamental technologies needed to even possibly scale to the size of an Azure, Google, Rackspace, or AWS, and a few poop their pants when I start going into the details. It is going to be interesting – do your homework.

The next tidbit is that many large enterprises are dipping their toes into the cloud, but most of them really don’t understand native cloud architectures, so they stick with these non-elastic vendors. There is nothing inherently wrong with that, but they don’t get the resiliency, agility, or economic benefits of going cloud native. I call them “cloud tourists”. Everyone needs to start someplace, and who am I to judge? But as usual there is a dark side. Non-elastic vendors are pushing not only false promises but a lot of misinformation, in hopes of knocking off AWS (mostly) – factually incorrect information that misleads clients. I think I will do a blog post on it soon, either here or at devops.com, because it is the kind of thing that can really cause enterprises headaches. Looking at the big analyst research, most of those firms fundamentally don’t understand the cloud and aren’t helping their clients.

Lastly, one of the most rewarding lessons of the past few months has been the realization that my research on the cloud and Software Defined Security is dead on target. I am working the same problems as many of the major cloud-native brands – albeit not with their scale issues.
I am coming up with the same answers, and reflecting their real practices. That is always my biggest fear as an analyst – especially going hands-on again – and it is a relief to know that the work we are publishing to help readers implement cloud security and DevOps is… you know… not analyst bullshit. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Sign up for our Cloud Security Training at Black Hat!
  • Rich quoted on Mac security in USA Today.
  • Mort quoted at DevOps.com.
  • Mort wrote more on DevOps myths.

Favorite Securosis Posts
  • Mort: Understanding Role Based Access Control: Advanced Concepts.
  • Mike: Friday Summary: The IT Dysfunction Issue. There may be something to this DevOps thing. I’m glad Adrian (and Rich) like the Phoenix Project. It offers a quick glimpse into the future of provisioning/delivering value to customers through technology.
  • Rich: Pass the Hemlock. I view it a little differently, taking survival lessons from my full-time paramedic days. The patient is the one who is sick, not you. Empathize, but maintain detachment. You invest yourself in work, but at the end of the day there are things you can’t change. If that gets to you too much, move on.
  • Rich #2: Verizon DBIR 2014: Incident Classification Patterns.

Other Securosis Posts
  • Incite 4/23/2014: New Coat of Paint.
  • DDoS-fuscation.

Favorite Outside Posts
  • Rich: It’s time for the FCC to stand up for Americans instead of ruining the internet. I realize the U.S. political system is no longer by and for the people, but this is an incredibly anti-business stance that sacrifices all businesses to help out a few.
  • Mort: On Policy in the Data Center: The policy problem.
  • Mike: Choose Your Own DBIR Adventure. Kudos to our buddy Rick Holland (congrats on the new baby, BTW), who between changing diapers managed a good summary of the DBIR. He even has a section about how to use the DBIR (that seems familiar – I wonder why…). But flattery via imitation aside, Rick’s perspectives on the DBIR are solid.

Gunnar’s World
Gunnar had a bunch of related links, so we put them all together in his words: “I have one theme with a couple of links. If you are competing with Microsoft – which is to say you are in the technology business – have a look at the track record under Satya Nadella so far: sh*t just got real.” Azure Beyond Windows Beyond


Pass the Hemlock

I can certainly empathize with folks who suffer from burnout, in any occupation. It is miserable and clinical and not to be minimized or swept under the rug. But if this whole mindfulness approach has shown me anything, it is that we control how we respond to situations. So yes, security is a tough job. Yes, you probably can’t win. Yes, your senior management has no idea what you do and can’t understand your value. But that doesn’t mean you should go reaching for the hemlock at the first opportunity. You have to be able to handle the job – good, bad, and ugly – on a daily basis. Or find something else to do. And I say that from a position of kindness, not to be a dick. If you can’t find happiness, engagement, and a sense of accomplishment in your career, get a new career.

Krypt3ia posts his perspectives on the burnout issue. I am pretty sure he isn’t coming from a place of kindness, but he delivers the facts. In order to survive and possibly even thrive doing security, you need to understand the job. And Scot has a great summary in a few bullet points:

– It is your job to inform your client/bosses of the vulnerabilities and the risks.
– It is your job ONLY to inform them of these things and to recommend solutions.
– Once you have done this it is up to them to make the decisions on what to do or not do, and to sign off on the risks.
– Your job is done (except if you are actually making changes to the environment to fix issues).

Did you get that last one? Your job is done. Remember the Serenity Prayer? I don’t care if you kneel at the altar of the Flying Spaghetti Monster or nothing at all – if you know the difference between what you control and what you don’t, you have a chance. If you don’t, then you don’t.

Photo credit: “Nice Cup of Hemlock?” originally uploaded by Kova Shostakovich


Incite 4/23/2014: New Coat of Paint

It is interesting to see the concept of mindfulness enter the vernacular. For folks who have read the Incite for a while, I haven’t been shy about my meditation practice. And next week I will present on Neuro-Hacking with Jen Minella at her company’s annual conference. I never really shied away from this discussion, but I didn’t go out of my way to discuss it either. If someone I was meeting with seemed receptive to talking about it, I would. If they weren’t, I wouldn’t. It doesn’t really matter to me either way. But it turns out I found myself engaging in interesting conversations in unexpected places once I became open to talking about my experiences.

It also turns out mindfulness is becoming mass market fodder. In our Neuro-Hacking talk we reference Search Inside Yourself, which describes Google’s internal program – now broadening into a mindfulness curriculum and a variety of other resources to kickstart a practice. These materials are hitting the market faster and faster now. When I was browsing through a brick and mortar bookstore last weekend with the Boy (they still exist!), I saw two new titles in the HOT section on these topics, from folks you wouldn’t expect. 10% Happier is from Dan Harris, a weekend anchor for ABC News, and describes his experiences embracing mindfulness and meditation. I am about 75% done with his book, and it is good to see how a skeptic overcame his preconceived notions to gain the aforementioned 10% benefit in his life. I also noticed Arianna Huffington wrote a book called Thrive, which seems to cover a lot of the same topics – getting out of our own way to find success, by drawing “on our intuition and inner wisdom, our sense of wonder, and our capacity for compassion and giving.”

At this point I start worrying that mindfulness will just be the latest in a series of fads to capture the public’s imagination, briefly. ‘Worry’ is probably the wrong word – it’s more that I have a feeling of having seen this movie before, and knowing it ends up like the Thighmaster. Like a lot of fads, many folks will try it and give up. Or learn they don’t like it. Or realize it doesn’t provide a quick fix in their life, and then go back to their $300/hr shrinks, diet pills, and other short-term fixes. And you know what? That’s okay. The nice part about really buying into mindfulness and non-judgement is that I know it’s not for everyone. How can it be? With billions of people on earth, there are bound to be many paths and solutions for people to find comfort, engagement, and maybe even happiness. And just as many paths for people to remain dissatisfied, judgmental, and striving for things they don’t have.

I guess the best thing about having some perspective is that I can appreciate that nothing I’m doing is really new. Luminaries and new-age gurus like Eckhart Tolle and Deepak Chopra have put a new coat of paint on a 2,500 year old practice. They use fancy words for a decidedly unfancy practice. That doesn’t make it new. It just makes it shiny, and perhaps accessible to a new generation of folks. And there’s nothing wrong with that.

–Mike

Photo credit: “Wet Paint II” originally uploaded by James Offer

Securosis Firestarter
Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
April 14 – Three for Five
March 24 – The End of Full Disclosure
March 19 – An Irish Wake
March 11 – RSA Postmortem
Feb 21 – Happy Hour – RSA 2014
Feb 17 – Payment Madness
Feb 10 – Mass Media Abuse
Feb 03 – Inevitable Doom
Jan 27 – Government Influence
Jan 20 – Target and Antivirus
Jan 13 – Crisis Communications

2014 RSA Conference Guide
In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.

Heavy Research
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
  • Understanding Role-based Access Control: Introduction
  • NoSQL Security 2.0: Introduction
  • Defending Against Network Distributed Denial of Service Attacks: Mitigations; Magnification; The Attacks; Introduction
  • Advanced Endpoint and Server Protection: Quick Wins; Detection/Investigation; Prevention; Assessment; Introduction

Newly Published Papers
  • Reducing Attack Surface with Application Control
  • Leveraging Threat Intelligence in Security Monitoring
  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing
  • What CISOs Need to Know about Cloud Computing

Incite 4 U
  • Questions driving the search for answers: Whatever you are doing, stop! And read Kelly White’s 3-part series on Questioning Security (Part 1, Part 2, and Part 3). Kelly’s main contention is that the answers we need to do security better are there, but only if we ask the right questions. Huh. Then he provides a model for gathering that data, contextualizing it, and using some big data technologies to analyze it – and he even works through an example or two. This echoes something we have been talking about for a long time: there is no lack of data; there is a lack of information to solve security problems. Of course a lot of this stuff is easily said but much harder to do. And even harder to do consistently. But it helps to have a model which provides a roadmap – without some examples to make the model tangible you won’t even know where to start. So thank Kelly for a piece of that. Now go read the posts. – MR
  • Bounties on open source security flaws: The Veracode blog’s latest post is thought-provoking, asking whether it is time to Crowdfund Open Source Software.


Verizon DBIR 2014: Incident Classification Patterns

[Note: Rich, Adrian, and Mike are all traveling today, so we asked Jamie Arlen to provide at least a little perspective on an aspect of the DBIR he found interesting. So thanks, Jamie, for this. We will also throw Gunnar under the bus a little, because he has been very active on our email list with all sorts of thoughts on the DBIR, but he doesn’t want to share them publicly. Maybe external shaming will work, but more likely he’ll retain his midwestern sensibilities and be too damn nice.]

As usual, the gang over at Verizon have put a lot of information and effort into the 2014 edition of their DBIR (registration required). This is both a good thing and a bad thing. The awesome part is that there are historically very few places where incident information is available – leaving all too many professionals in the position of doing risk mitigation planning based on anecdotes, prayer, and imagination. The DBIR offers some much-needed information to fill in the blanks.

This year you will note the DBIR is different. Wade, Jay, and the gang have gone back to the data to provide a new set of viewpoints. They have also done a great job with the graphics. Visualization for the win! Except that all the graphics are secondary to the high-quality data tables. Of course graphics are sexy and tables are boring – unless you have to make sense of the data, that is. So I will focus on one table in particular to illustrate my point: Figure 19 (page 15 printed, 17 of 62 in the PDF). You may need to stare at it for a while for it to even begin to make sense. I have been staring at it since Friday and I am still seeing new things.

Obvious things

Accommodation and Point of Sale Intrusion: No real surprise here. The problem of “the waiter taking the carbons” in the ’70s seems to be maintaining its strength into the future. Despite the efforts of the PCI Council, we have a whole lot of compliance but not enough security. And honestly, isn’t it time for the accommodation industry to make that number go down?

Healthcare Theft/Loss: Based on the news it is no great surprise that about half the problems in healthcare are related to the loss or theft of information. We have had pretty stringent regulation in place for years now. Is this a case of too much compliance and not enough security? It is time to take stock of what is really important (protecting the information of recipients of health care services) and build systems and staff capabilities to meet patient expectations!

Interesting things

Industry = Public: The biggest issue is “Misc. Error”. I didn’t know what a Misc. Error was either. It turns out it is due to the reporting requirements most of the public sector is under – they need to (and do) report everything. Things that would go completely unremarked in most organizations are reported. Things like “I sent this email to the wrong person” and “I lost my personal phone (which had access to company data)”. I vaguely remember something from stats class about this.

Incident = Denial of Service: The two industries reporting the largest impact are ‘Management’ and ‘Professional’. If you look at the NAICS listings for those two industry categories, you will see they are largely ‘offices’. I would love a deeper dive into those incidents to see what is going on exactly and what industries they really represent. The text of the report talks primarily about the impact of DoS on the financial industry, but doesn’t go into any detail on the effects on Management and Professional. You can read into the report to see that the issue may have been the takeover of content management systems by the QCF / Brobot DoS attacks.

Incident = Cyber Espionage: Just sounds cool. And something we have all spent lots of time talking about. It seems to affect Mining, Manufacturing, Professional, and Transportation in greater proportion than others. Again, I would love a look at the actual incidents – they are probably about 10% Sneakers and 90% Tommy Boy. If you are working in those industries you have something interesting to talk to your HR department about.

There shouldn’t be any big surprises in this data, but there are plenty of obvious and interesting things. I am still staring at the table and waiting for the magic pattern moment to jump out at me. Though if I stare at the chart long enough, I think it’s a sailboat.


Understanding Role Based Access Control: Advanced Concepts

For some of you steeped in IAM concepts, our previous post on Role Lifecycles may seem a bit basic. But many enterprises are still grappling with how to plan for, implement, and manage roles throughout the enterprise. There are many systems which contribute to roles and privileges, so what seems basic in theory is often quite complex in practice. Today’s post digs a bit deeper into more advanced RBAC concepts. Let’s roll up our sleeves and look at role engineering!

Role Engineering

Getting value from roles in real-world use cases requires spending some time analyzing what you want from the system, and deciding how to manage user roles. A common first step is to determine whether you want a flat or hierarchical role structure. A ‘flat’ structure is conceptually simple, and the easiest to manage for smallish environments. For example, you might start by setting up DBA and SysAdmin roles as peers, and then link them to the privileges they need. Flat role structures are enough to get the job done in many cases because they provide the core requirement: mapping between roles and privileges. Identity Management (IdM) and provisioning systems can then associate users with their roles, and limit users to their authorized subset of system functions.

But large, multifunction applications with thousands of users typically demand more from roles to address the privilege management problem. Hierarchies add value in some circumstances, particularly when it makes sense for roles to include several logical groups of permissions. Each level of the hierarchy has sub-classes of permissions, and the further up the hierarchy you go, the more permissions are bundled together into each logical construct. The hierarchy lets you assemble roles as coarse or granular as you need. Think of it as an access gradient, granting access based on an ascending or descending set of privileges. This modeling exercise cuts both ways – more complex management and auditing is the cost of tighter control. Lower-level roles may have access to specific items or applications, such as a single database. In other systems the high-level manager functions may use roles to facilitate moving and assigning users in a system or project.

Keep in mind that roles facilitate many features that applications rely on. For example, roles can be used to enforce session-level privileges to impose consistency in a system. A classic example is a police station, where there can only be one “officer of the watch” at any given time. While many users can fulfill this function, only one can hold it at a time. This is an edge case not found in most systems, but it nicely illustrates where RBAC can be needed and applied.

RBAC + A

Sometimes a role is not enough by itself. For example, your directory lists 100 users in the role “Doctor”, but is being a doctor enough to grant access to review a patient’s history or perform an operation? Clearly we need more than just roles to define user capabilities, so the industry is trending toward roles supplemented by attributes – what is commonly called RBAC+A (for Attributes). In our simple example above, the access management system both checks the Doctor role and queries additional attributes, such as a patient list and approved operation types, to fully resolve an access request.
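To make the RBAC+A idea concrete, here is a minimal sketch of an access check that combines a coarse role test with a dynamic attribute lookup. The role name, attribute keys, and helper function are hypothetical, invented for illustration rather than drawn from any particular product.

```python
# Sketch of an RBAC + Attributes (RBAC+A) decision: a coarse role check
# refined by per-request attributes. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)
    attributes: dict = field(default_factory=dict)   # loaded at runtime

def can_view_chart(user: User, patient_id: str) -> bool:
    # Coarse check: does the user hold the Doctor role at all?
    if "Doctor" not in user.roles:
        return False
    # Attribute check: is this patient actually assigned to this doctor?
    return patient_id in user.attributes.get("patient_list", set())

# The role assignment is static; the patient list is a dynamic attribute.
alice = User("alice", roles={"Doctor"},
             attributes={"patient_list": {"p-100", "p-203"}})

print(can_view_chart(alice, "p-100"))   # True  -- role and attribute match
print(can_view_chart(alice, "p-999"))   # False -- right role, wrong patient
```

The role stays static while the patient list can be refreshed at runtime without touching the role model, which is exactly the combination of coarse role checks and fresher attribute-based decisions discussed here.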
Adding attributes addresses another dimension of the access control equation: attributes can be closely linked to a user or resource, and then loaded into the program at runtime. The benefit is access control decisions based on dynamic data, rather than static mappings which are much harder to maintain. Roles with dynamic attributes can provide the best of both worlds: roles for coarse policy checks, refined with dynamic attributes for fresher and more precise authorization decisions.

More on Integration

We will return to integration… no, don’t go away… come back… integration is important! If you zoom out on any area of IAM you will see it is rife with integration challenges, and roles are no different. Key questions for integrating roles include the following:

What is the authoritative source of roles? Roles are a hybrid – privilege information is derived from many sources. But roles are best stored in a location with visibility into both users and resource privileges. In a monolithic system (“We just keep everything in AD.”) this is not a problem. But for distributed heterogeneous systems this is not a single problem – it is often problems #1, #2, and #3! The repository of users can usually be tracked down and managed – by hook or by crook – but the larger challenge is usually discovering and managing the privilege side. To work through this problem, security designers need to choose a starting point with the right level of resource permission granularity. A URL can be a starting point, but by itself it is usually not enough, because a single URL may offer a broad range of functionality. This gets a bit complex, so let’s walk through an example: consider setting a role for accessing an arbitrary domain like https://example.com/admin. Checking that the user has the Admin role before displaying any content makes sense. But the functionality across all Admin screens can vary widely. In that case the scope of work must be defined by more granular roles (Mail Admin, DB Admin, and so on) and/or by further role checking within the applications. Even this simple example clearly demonstrates why working with roles is often an iterative process – getting the definition and granularity right requires consideration of both the subject and the object sides. The authoritative source is not just a user repository – it should ideally be a system repository with hooks and references to both users and resources.

Where is the policy enforcement point for roles? Once the relationship between roles and privileges is defined, there is still the question of where to enforce privileges. The answer from most role checkers is simple: access either granted


DDoS-fuscation

Akamai’s research team has an interesting post on how attackers now use web proxies to shield their identities when launching DDoS attacks. Using fairly simple web-based tools they can launch attacks, and by routing the traffic through an exposed web proxy they can hide the bots or other devices performing the attacks. From the post:

234 source IP addresses is a surprisingly low number when considering the duration of the collected data (one month). Further analysis of the data revealed that out of the 234 IPs, 136 were web proxies – this explains the low number of source IPs – attackers are using web proxies to hide their true identity. In order to understand the nature of these web proxies, we analyzed the domain (WHOIS) information as well as certain HTTP headers, and discovered that 77% of all WebHive LOIC attack traffic came from behind Opera Mini proxy servers.

So the attackers are abusing Opera’s mobile browser proxy system to launch their attacks. Akamai tracked that back to the devices, which were largely in Indonesia. But were they? Were other obfuscation techniques used to further hide the attackers? Who knows? It doesn’t really matter. The Akamai researchers go on to talk about blocking attackers’ source IP addresses. Of course that requires you to be pretty nimble: able to mine those IP addresses and get blocks configured on your network gear (or within your scrubbing service). Then they talk about using WAF rules to protect applications by blocking DoS tools, and blocking HTTP from well-known DoS apps, assuming the attackers aren’t messing with headers.

Understand that blocking some of these IP addresses and applications may result in dropping legitimate sessions from legitimate (soon to be former) customers, because people who cannot complete a transaction will find a company which can. So it becomes a balance of loss, between downtime and failed transactions. Akamai doesn’t mention built-in application defenses (as discussed in our AppDoS paper), but that’s okay – when you have a hammer, everything looks like a nail.

Photo credit: “Hide & Seek” originally uploaded by capsicina
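If you do go down the path of mining source IPs and separating proxied traffic from direct traffic, the sketch below shows the general idea of inspecting the headers that proxies such as Opera Mini typically add. This is a rough illustration under stated assumptions: the header names reflect common proxy behavior, X-Forwarded-For is client-settable and cannot be trusted on its own, and nothing here is a drop-in WAF rule.

```python
# Rough sketch: flag requests that arrive via a known proxy, based on
# headers Opera Mini and similar proxies commonly add. Illustrative only --
# X-Forwarded-For can be forged, so treat it as a hint, not ground truth.

def classify_request(headers: dict) -> dict:
    h = {k.lower(): v for k, v in headers.items()}
    opera_mini = "x-operamini-phone-ua" in h
    # X-Forwarded-For may carry the claimed original client address chain.
    claimed_ips = [ip.strip() for ip in h.get("x-forwarded-for", "").split(",")
                   if ip.strip()]
    return {
        "proxied": opera_mini or bool(claimed_ips),
        "opera_mini": opera_mini,
        "claimed_client_ips": claimed_ips,
    }

# Fabricated example headers, for illustration only
sample = {
    "User-Agent": "Opera/9.80 (J2ME/MIDP; Opera Mini/9.80)",
    "X-OperaMini-Phone-UA": "SomeHandsetBrowser/1.0",
    "X-Forwarded-For": "203.0.113.7, 10.1.2.3",
}
print(classify_request(sample))
```

Whatever you key on, the trade-off above still applies: blocking everything that arrives through Opera Mini’s proxies also blocks every legitimate customer who happens to use that browser.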


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.