Open Source Development and Application Security Analysis [New Series]

Earlier this year I participated in the 2014 Open Source Development and Application Security Survey, something I have done for the last couple of years. As a developer and former development manager – and let’s face it, an overtly opinionated one – I am always interested in adding my viewpoint to these inquiries, even if I’m just one developer voice among thousands. But I have also benefitted from these surveys – looking at the stuff my peers are using, and even selecting open source distributions based on these shared data points. Crazy, I know, but it’s another way to leverage the community. I am equally interested in the survey questions themselves, as they hint at what the sponsors are most interested in learning about their community.

The organization that conducts this survey is Sonatype, and the 2014 survey was their 4th annual review of open source usage. This year’s survey was co-sponsored by Contrast Security, Rugged Software, NEA, and the Trusted Software Alliance. What piqued my interest this year was that I noticed more questions regarding security and vulnerabilities than in previous years. Even the name of the survey changed. Another interesting facet is that the survey was conducted right when OpenSSL’s Heartbleed vulnerability was discovered. It takes a lot for a security vulnerability to make mainstream news, but Heartbleed managed it. For any of you reading this who were not aware of it, OpenSSL is an open source implementation of the SSL protocol. The disclosure simultaneously illustrated that open source components are in use just about everywhere – across industries and organizations of all sizes – and disrupted IT practitioners’ blind faith in this ubiquitous cryptographic module. But Heartbleed is not the story here – the more interesting thing is how it affected people’s understanding of open source software and security. My question was “Did the vulnerability change the survey results?”

In past years Sonatype provided us with a pre-briefing before they announced the survey results, and this year was no different. After going through the survey myself I was extremely interested in the results. As we went through the data and discussed what it all meant, Sonatype said they were interested in getting someone to perform an independent analysis of the data. You don’t have to ask me twice – I jumped at the chance! As a security practitioner who has built software and managed development teams for a couple decades, I can offer some perspective. And we are beginning to see changes in developer attitudes and participation with security, not to mention a disruption of development approaches with DevOps, so I am eager to go through the data to better understand what developers are doing and what issues they face – in both security and product development.

Over the next couple days I will discuss the results, with a focus on two key areas:

• Security Trends Analysis: The survey poses several questions about security and about open source policies as they relate to security, vulnerability tracking, and responsibilities. We will examine tool usage, with trending data from prior years where applicable. Because the survey was conducted during the Heartbleed and Struts vulnerability disclosures, we can examine the data for important differences between responses before and after disclosure.

• Development Trends and Operations Management: The survey data contains several important questions on development policies around open source management and use.
These trends may not have specific security implications, but they impact how teams manage open source and the general quality of their releases. I will discuss trends in open source policy management, licensing, and security testing approaches, as well as where security testing occurs within the development process. I will highlight key takeaways and make recommendations.

Finally, for those of you in security who are not familiar with Sonatype, think Apache Maven and Nexus. Their founder built Maven, which is probably the most widely used build automation tool out there. The company also builds the Nexus repository manager, used by over 40,000 organizations for storing and organizing binary software components, including management of policies for their use and automated health checks for security vulnerabilities. As the steward of the Central Repository, which handled over 13 billion requests for open source components last year, they are in a unique position to monitor use of open source development components – including version management, license characteristics, update frequencies, and known security vulnerabilities. This perspective helped them formulate the survey and reach the 3,300+ development professionals who participated.

Next week I will cover the report’s security trend analysis. And if you’re interested, I will also do a webcast with Brian Fox of Sonatype to discuss the highlights, comparing and contrasting our views on the results. Check it out!


Cloudera acquires Gazzang

Today Cloudera announced that they have acquired Austin-based data encryption vendor Gazzang. From the press release:

While Cloudera customers will continue to have a choice of a broad range of cross-platform data protection methods available from Cloudera partners, Cloudera now offers encryption for all data-at-rest stored inside the Hadoop cluster – using an approach that is transparent to applications using the data, thereby minimizing the costs associated with enabling encryption. Cloudera plans to focus the efforts of the Gazzang team on additional security challenges in Hadoop. The team will become the heart of the Cloudera Center for Security Excellence, focusing exclusively on Hadoop security.

The “Big Data” market is growing rapidly, and for good reason. The ability to leverage inexpensive NoSQL databases like Hadoop, running atop cheap commodity/cloud hardware, means companies can do all sorts of analysis that was previously economically unfeasible. From large enterprises to small startups, companies are adopting NoSQL platforms for just about every use case imaginable. And while these companies won’t necessarily admit to the presence of sensitive data on these clusters, it’s there. This creates a genuine need for security with NoSQL platforms, something the open source community has not really delivered.

If you remember our series on securing Hadoop and NoSQL clusters in 2012, one of our principal recommendations was to use transparent encryption for NoSQL, because you get data security while still allowing big data to maintain scalability and input velocity. Those may seem like obvious requirements, but they are not givens for most security products. Gazzang is, at its core, a transparent data encryption tool with key management. Enterprises are adopting big data solutions, despite what some mainstream publications have stated, but only when they can satisfy data security and compliance requirements. Cloudera’s ability to address the enterprise’s most critical security requirement – data encryption – directly on the platform is a big win for security-sensitive customers. Even better, Gazzang’s transparent encryption scales right along with NoSQL clusters, so Cloudera customers get data security at big data scale.

Cloudera is one of a growing group – including MapR, Hortonworks, DataStax, and Zettaset – positioning security as a differentiator to enterprise customers. Bundling encryption and key management capabilities into platforms will make them faster and easier to deploy – a win for customers. I usually have a handful of risks and downsides for every acquisition, but it is hard to criticize this deal because there are not many possible downsides. This is an astute acquisition by Cloudera.


Friday Summary: The Hammock Edition

I am a pretty upbeat person, and despite my tendency towards snark I am optimistic by nature. You might find that surprising, given my profession of computer and software security, but it’s not. I have gotten a daily barrage of negative news about hacks, breaches, and broken software for well over a decade now. Like rainwater off a duck’s back, I let the bad news wash over me, and continue to educate those interested in security. Sure, I have had days where I say “Crap, security on everything is broken – and worse, nobody seems to get it.” Which is pretty much what Quinn Norton said last week with Everything is Broken. But her article was so well-written that it got to me. It is a testament to the elegance and effectiveness of her arguments that someone as calloused as I could be dragged along with her storyline, right into mild depression. It didn’t help that my morning reading consisted of that and this presentation on how the Internet and always-on connectivity may be making our lives worse. Both offer a sober look at the state of security and privacy; both were well done, with provocative imagery and text. And I admit, for the first time in a long time, I allowed them to get to me. Powerful posts.

I think most people in security reach this same point of frustration at some point in their career. Like Quinn, I try to un-frack my little corner of the world whenever possible. Perhaps unlike Quinn, I accept that this is a never-ending game. Culture is not broken – it is in its natural state, between civilization and chaos. It just pisses us off that it’s our own government spending our tax money to create so much of the chaos. Computers and electronic systems are probably a bit more secure from Joe Hacker than they were in 2001 – about when I came to this realization – but government hackers and criminals are much better too. For most folks the daily grind is a balancing act, where things are only unbroken enough to work most of the time. Those of us in security think that if you don’t control your systems, they are essentially non-functional and broken. But the people who own the systems, software, and devices have many competing priorities to worry about, so they put just enough time, effort, and money in to patch things up to their acceptable level of dysfunction. In the balancing act I can affect the momentum a bit, but not define the balance point. At least that’s what I tell myself as I swing in my hammock, shaking off the blues.

On the totally opposite end of the spectrum is Shack. And thank $DEITY for that! His post this week – A Hacker Looks at 40 – is a classic. Reading it is like surfing the Banzai Pipeline. “First, the industry we’re in. WOW. What a shit show … Yeah, it is volatile, and messy, and changes all the time. Thank goodness.” It’s all that and more. I loved Shack’s #1 takeaway: Learn Constantly. That is one of Rich Mogull’s too. You may be tired of hearing about cloud, mobile, and big data as disruptive tech, and the term DevOps makes many wince, but once you jump in it’s awesome and exciting. What a great time to be in security!

They say there is no such thing as bad press, but Ubisoft’s promotion of Watch Dogs got pretty close. Apparently they anonymously mailed a black safe to several media outlets, including Ninemsn. Locked, of course. Then they mailed an anonymous letter telling the recipients to check their voicemail. And left anonymous voicemail with the PIN to open the safe – but not before it started beeping. Cool, right?
But Homer Simpson was not there to open the safe for them, so Ninemsn called the bomb squad. After the initial panic and clearing of the building, a copy of the new Watch Dogs game was found. Ah, good times! The presence of booth schwag is unconfirmed. I am just disappointed that the bomb squad wouldn’t say whether they liked the new video game or not. I mean, getting the word out was the whole point, right?

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Mike quoted in Do you really think the CEO’s resignation from Target was due to security?

Favorite Securosis Posts
• David Mortman: What You Need to Know About Amazon’s New Volume Storage Encryption.
• Adrian Lane: What You Need to Know About Amazon’s New Volume Storage Encryption. I say “Cheap, Fast, and Easy” wins, and AWS made volume encryption just that.
• Mike Rothman: What You Need to Know About Amazon’s New Volume Storage Encryption. Amazon is pushing things forward pretty quickly. Pay attention to their new stuff. And what Rich didn’t mention is that every time Amazon changes stuff, he has to update our CCSK training screenshots. So I think he’s secretly hoping for slower innovation…

Other Securosis Posts
• Incite 5/28/2014: Auditory Dissonance.
• Translation Machine: Responding to (Uninformed) Bloggers.
• Summary: A Thousand Miles.

Favorite Outside Posts
• Dave Lewis: ISS’s View on Target Directors Is a Signal on Cybersecurity. If you are keeping score at home we have a number of firsts: CIO dismissal, credit rating downgrade, CEO dismissal, boardroom shakeup. That is a lot of firsts – this is a Sputnik moment for security.
• David Mortman: Postmortem for outage of us-east-1. <– Joyent accidentally reboots an entire data center. Not a pure security issue, but input validation (or the lack thereof) strikes again.
• James Arlen: TrueCrypt’s demise. Kees Leune nails the TrueCrypt thing in this post.
• Adrian Lane: A Hacker Looks at 40.
• Mike Rothman: Tribal organizing (right and wrong, slow and fast). It has been a while since I linked to Godin. This is a good one about building a community – the right way. I love how he calls out folks for using invented urgency. We see that every day in security. Every. Single. Day.
• Rich: Why NSA Critics Are Wrong About Internet Vulnerabilities Like ‘Heartbleed’. I don’t agree completely with Aitel,


Friday Summary: May 16, 2014

It’s odd, given the large number of security conferences I attend, how few sessions I get to see. I am always meeting with clients around events, but I rarely get to see the sessions. Secure360 is an exception, and that’s one of the reasons I like to go. I figured I’d share some of the better ones – at least sessions where I not only learned something but got to laugh along the way:

Marcus Ranum had an excellent presentation on “Directions in system log analysis”, effectively offering a superior architecture and design for log parsing – and encouraging the audience to build their own log analysis engines. What he sketched out will perform and scale as well as any existing commercial product. The analysis tree approach to making quick evaluations of log entries – which is successfully used in SQL statement analysis – can quickly isolate bad statements from good and spotlight previously unseen entries. I have a small quibble with Marcus’s assertion that you don’t need “big data” – especially given that he recommended Splunk several times: Splunk is a flavor of NoSQL, and many NoSQL platforms are open source (meaning inexpensive), can store logs longer, and provide infrastructure for forensic analysis. Parsing at the edge may work great for alerting, but once you have detected something you are likely to need the raw logs for forensic analysis – at which point you may be looking for data you already threw away. Regardless, a great preso, and I encourage you to get the slides if you can.

One of my favorite presentations on the second day was Terence Spies’ talk on “Defending the future” of payment security, covering things like PoS security, P2P encryption, and tokenization – all interwoven with a brief crypto history – and ending up with Bitcoin technology. The perspective he offered on how we got where we are today with payment security was excellent – you can see the natural progression of both payment and security technologies, and the points at which they intersect. This highlights how business and technology each occasionally overrun their dance partner, making the other look silly. Sure, I disagree with his assertion that tokenization means encryption – it doesn’t – but it was a very educational presentation on why specific security approaches are used in payment security.

David Mortman did “Oh, the PaaS-abilities: Is PaaS Securable?”, offering a realistic assessment of where you can implement security controls and – just as importantly – where you can’t. David worked his way through each layer of the PaaS stack, contrasting what people normally handle with traditional IT against what they should do in the cloud, and what needs to be done vs. what can be performed. The audience was small but they stayed throughout, despite the advanced subject matter. Advanced in the sense that not many people are using PaaS yet – but many of us here at Securosis expect the cloud to end up there in the long run, which puts David’s security concepts right at the cutting edge. David could probably keep giving this presentation for the next couple years – it’s right on the mark. If you are looking at PaaS, find a copy of his presentation.

Finally I had to choose between Rothman’s NGFW talk and Gunnar’s Mobile AppSec talk. Even though I work with Mike every day, I don’t get to see him present very often, so I watched Mike.
You can read all his blogs and download his papers, but it’s just not the same as seeing him present the material live – replete with stories decidedly unsuitable for print about some colorful pros. Good stuff!

We are all traveling again this week, so we are light on links and news, and had no comment of the week. On to the Summary!

Webcasts, Podcasts, Outside Writing, and Conferences
• Rich is presenting at Camp DevOps on Kick-aaS security.

Favorite Securosis Posts
• Adrian Lane: Firestarter: 3 for 5 – McAfee, XP, and CEOs. The well groomed edition.

Other Securosis Posts
• Incite 5/14/2014: Solo Exploration.
• Summary: Thin Air.

Favorite Outside Posts
• Mike Rothman: Undocumented Vulnerability In Enterprise Security. Look who’s now a Forbes contributor… our own Dave Lewis. Nice post on the importance of documentation.
• Adrian Lane: The Mad, Mad Dash to Update Flash. The adoption charts are worth the read.

Research Reports and Presentations
• Defending Against Network-based Distributed Denial of Service Attacks.
• Reducing Attack Surface with Application Control.
• Leveraging Threat Intelligence in Security Monitoring.
• The Future of Security: The Trends and Technologies Transforming Security.
• Security Analytics with Big Data.
• Security Management 2.5: Replacing Your SIEM Yet?
• Defending Data on iOS 7.
• Eliminate Surprises with Security Assurance and Testing.
• What CISOs Need to Know about Cloud Computing.
• Defending Against Application Denial of Service Attacks.

Top News and Posts
• What Target and Co aren’t telling you: your credit card data is still out there.
• Network Admin Allegedly Hacked Navy While on an Aircraft Carrier.
• Antivirus is Dead: Long Live Antivirus!
• Serious security flaw in OAuth, OpenID discovered.


Friday Summary: Biased Analysis Edition

Glenn Fleishman (@GlennF) tweeted “Next month’s Wired: ‘We painstakingly reconstructed Steve Jobs’ wardrobe so you can wear it, too.’” – a catty response to Wired Magazine’s recent reconstruction of Steve Jobs’ stereo system. Unlike Mr. Fleishman I was highly interested in this article, and found it relevant to current events. For people who love music and quality home music reproduction, iTunes’ disgustingly low-resolution MP3 files seem at odds with Jobs’ personal interest in HiFi. The equipment surrounding Jobs in the article’s lead picture was not just good stereo equipment – and not ‘name brand’ equipment either – but esoteric brands aimed at aficionados, indicating Jobs was very serious about music reproduction and listening. The irony is that someone so heavily invested in HiFi became the principal purveyor of what audiophiles deem unholy evil. Sure, MP3s are a great convenience – just not so great for music quality. This picture has made HiFi trade magazines over the years, and while Jobs was alive the vanishingly small population of audiophiles held out hope that we would someday get high-resolution music from iTunes. The rumor – confirmation of which would be a great surprise – is that we may finally get HiRes files from iTunes, which I suspect is why this picture was the subject of such scrutiny. The market for high-quality headphones has grown 10-fold in the last 7 years, and vinyl record sales have gone up 6-fold in the same period, showing public interest in higher quality audio even as CD sales plummet. Even piracy-paranoid, anti-consumer vendors like Sony have begun to sell HiRes DSD files, so Apple has likely noticed these trends, and we can hope they will follow suit.

Garbage in, garbage out is a basic axiom I learned when I first started programming database applications, and it remains true for any database, including NoSQL variants. Write any query you want – if the data is bad, the results are meaningless. But even if the data is completely accurate, depending on how you write your queries, you may produce results that don’t mean what you think they do. The learning curve with NoSQL is even weirder – many data scientists are still learning how to use these platforms. Consider that for many NoSQL users the starting point is often just looking for stuff – we don’t necessarily know what we are looking for, but we often discover interesting patterns in the data. And when we do, we try to make sense of them. This itself is a form of bias. In this process we may write and rewrite data queries many times over, trying to refine a hypothesis. But the quality and completeness of the data, as well as your ability to mine it effectively with queries, can lead to profound revelations – or perhaps to poop. More likely it’s somewhere in between, but both extremes are possible.

One of Gunnar’s key themes from a post earlier this year is to understand the balance between objective and subjective aspects of metrics.
He said, “I am very tired of quant debates where … the supposed quant approach beats the subjective approach.” It is not a question of whether you are subjective or not – it is there in your biases when you make the model… “To me the formula for infosec is objective measures through logging and monitoring, subjective decisions on where to place them, and what depth, a mix of subjective and objective review of the logs and data feedback from the system’s performance over time.”

I raise these points because while we examine our navels for effective uses of analytics for business, operations, and security metrics, practiced FUD-ites work their magic to make analysis irrelevant. An exaggerated example to make a point is this post on discrimination potential in big data use, where we see political opponents claiming big data is biased before it has even been put to use. A transparent attempt to kill funding based on data analysis, without any analysis to back it up! It is easier for a politician to generate fear by labeling this mysterious thing called “big data” as discriminatory in order to get their way than to discredit an actual analysis. They are feeding off audience bias (popular opinion). Many people naively believe “It’s big data so it’s evil” in response to NSA spying and corporations performing what feels like consumer espionage. It does not even matter whether the data or tools will be used effectively – bias and fear are used to kill metrics-based decisions. Ironic, right?

As a security example: in each of the last three years – always a few months after the release of the Verizon DBIR – a handful of vendors have told me the DBIR says the number one threat is from insiders! When I point out that the report says the exact opposite, they always argue that an outsider becomes an insider once they have breached your systems. And post-Snowden many enterprises are mostly worried about being Snowdened – regardless of any breach statistics. I don’t have any lesson here, or a specific safety tip to offer, but if you have metrics and data for decision support, perform your own review. It will help remove some bias from the analysis. People who are financially invested in a specific worldview deliberately misinterpret, discredit, and fund biased studies to support their position – their biased arguments drive you to conclusions that benefit them.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Rich quoted on SecDevOps.

Favorite Securosis Posts
• David Mortman: NoSQL Security: Understanding NoSQL Platforms.
• Adrian Lane: XP Users Twisting in the Wind. For the picture, if nothing else.
• Mike Rothman: NoSQL Security: Understanding NoSQL Platforms. I have long said Adrian has forgotten more about databases than most of us know. He has proven it once again with this primer on NoSQL databases…

Other Securosis Posts
• Incite 4/30/2014: Sunscreen.
• Firestarter: The Verizon DBIR.
• Defending Against Network-based Distributed Denial of Service Attacks [New Paper].
• Summary: Time and Tourists.
• Pass the Hemlock.

Favorite Outside Posts
• Mike Rothman: UltraDNS Dealing with DDoS Attack. The cyber equivalent of going up to someone and hitting them with


NoSQL Security: Understanding NoSQL Platforms

I started this series on recommendations for securing NoSQL clusters a couple weeks ago, so I am sorry for the delay posting the rest of it. I had some difficulty contacting the people I spoke with during the first part of this “big data” research project, and some vendors were slow to respond with current product capabilities. As I hoped, launching this series “shook the tree of knowledge”, and several people responded to my inquiries. It has taken a little more time than I expected to schedule calls and parse through the data, but I am finally restarting, and should be able to quickly post the rest of the research. The first step is to describe what NoSQL is and how it differs from the other databases you have been protecting, so you can understand the challenges. Let’s get started…

NoSQL Overview

In our last research paper on this subject we defined NoSQL platforms with a set of characteristics which differentiate them from big iron/proprietary MPP/”cloud in the box” platforms. Yes, some folks slapped “big data” stickers on the same ol’ stuff they have always sold, but most people we speak with now understand that those platforms are not Hadoop-like, so we can dispense with discussion of the essential characteristics. If you need to characterize NoSQL platforms please review our introductory sections on securing big data clusters.

Fundamentally, NoSQL is clustered data analytics and management. And please stop calling it “big data” – we are really discussing a building-block approach to databases. Rather than the packaged relational systems we have grown accustomed to over the last two decades, we now assemble different pieces (data management, data storage, orchestration, etc.) to satisfy specific requirements. The early architects of this trend had a chip on their collective shoulders, and were trying to escape the shadow of ubiquitous relational databases, so they called their movement ‘NoSQL’. That term nicely illustrates their opposition to relational platforms – not that these systems do not support SQL; many in fact do support Structured Query Language syntax. Worse, the press used the term “big data” to describe this trend, assigning a label taken from the most obvious – but not most important – characteristic of these platforms. Unfortunately that term serves the movement poorly. ‘NoSQL’ is not much better, but it is a step in the right direction. These databases can be tailored to focus on speed, size, analytic capabilities, failsafe operation, or some other goal, and they enable computation on a massive scale for very little money. Just as importantly, they are fully customizable to meet different needs – often simultaneously!

So what does a NoSQL database look like? The “poster child” is Hadoop. The Hadoop framework – the combination of the Hadoop Distributed File System (HDFS) with other services such as YARN, Hive, and Pig – is the general architecture employed by most NoSQL clusters, but many more platforms are in wide use, including Cassandra, MongoDB, Couch, AWS SimpleDB, and Riak. There are over 125 known variations, but those listed account for the majority of customer usage today. NoSQL platforms scale and perform so well because of two key principles: distribution of data management and query processing across many servers (possibly thousands), combined with a modular architecture that allows different services to be swapped in as needed. Architecturally, it is useful to think of the Hadoop framework as a ‘stack’, much like the famous LAMP stack.
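To make this modularity concrete, here is a minimal sketch using Hadoop’s Java client API – the cluster address and path are hypothetical – showing that the storage service behind an application is a configuration choice rather than a code choice. The same code runs whether fs.defaultFS points at HDFS or an alternative file system implementation:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ClusterWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The storage layer is selected by configuration, not code:
        // swap this URI and a different file system service is used.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        FileSystem fs = FileSystem.get(conf);
        // The application neither knows nor cares which service
        // in the stack actually stores the data.
        fs.create(new Path("/data/events/sample.log")).close();
        fs.close();
    }
}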
The pieces of this stack are normally grouped together, but you can mix and match, and add to the stack as needed. For example Sqoop and Hive are replacement data access services. You can select a big data environment specifically to support columnar, graph, document, XML, or multidimensional data – all collectively called ‘NoSQL’ because they are not constrained by relational database constructs or a relational query parser. You can install different query engines depending on the type of data being stored, or extend HDFS functionality with logging tools like Scribe. The entire stack can be configured and extended as needed. Lustre, GFS, and GPFS deserve mention as well – all are technically alternatives to HDFS, but not as widely used. The point is that this modular approach offers great flexibility at the expense of more difficult security, because each option brings its own security options and deficiencies. We get big, cheap, and easy data management and processing – with lagging security capabilities.

There are many variables in play, and it seems like every customer uses a slightly different setup – tweaking for performance, manageability, and programmer preference. Many of the people we spoke with are on their second or third stack architecture, having replaced components which did not scale or perform as needed. This flexibility is great for functionality, but it makes it much more difficult for third-party vendors to produce monitoring or configuration assessment tools. There are few constants (or even knowns) for NoSQL clusters, and things are too chaotic to definitively identify best or worst practices across all configurations.

For convenient reference, here is a list of key differences between relational and NoSQL platforms which impact security:

• Relational platforms typically scale by replacing a server with a larger one, rather than by adding many new servers.
• Relational systems have a “walled garden” security model: you attach to the database through well-defined interfaces, but internal workings are generally not exposed.
• Relational platforms come with many tools such as built-in encryption, SQL validation, centralized administration, full support for identity management, built-in roles, administrative segregation of duties, and labeling capabilities.

You can add many of these features to a NoSQL cluster, but you still face the fundamental problem of securing a dynamic constellation of many servers. This makes configuration management, patching, and server validation particularly challenging. Despite a few security detractors, NoSQL facilitates data management and very fast analysis on large-scale data warehouses, at very low cost. Cheap, fast, and easy are the three pillars this movement has been built upon – data analytics for the masses. A NoSQL


Understanding Role Based Access Control: Advanced Concepts

For some of you steeped in IAM concepts, our previous post on Role Lifecycles may have seemed a bit basic. But many enterprises are still grappling with how to plan for, implement, and manage roles throughout the enterprise. Many systems contribute to roles and privileges, so what seems basic in theory is often quite complex in practice. Today’s post digs a bit deeper into more advanced RBAC concepts. Let’s roll up our sleeves and look at role engineering!

Role Engineering

Getting value from roles in real-world use cases requires spending some time analyzing what you want from the system, and deciding how to manage user roles. A common first step is to determine whether you want a flat or hierarchical role structure. A ‘flat’ structure is conceptually simple, and the easiest to manage for smallish environments. For example you might start by setting up DBA and SysAdmin roles as peers, and then link them to the privileges they need. Flat role structures are enough to get the job done in many cases because they provide the core requirement: mapping between roles and privileges. Identity Management (IdM) and provisioning systems can then associate users with their roles, and limit users to their authorized subset of system functions.

But large, multifunction applications with thousands of users typically demand more from roles to address the privilege management problem. Hierarchies add value in some circumstances, particularly when it makes sense for roles to include several logical groups of permissions. Each level of the hierarchy has sub-classes of permissions, and the further up the hierarchy you go, the more permissions are bundled together into each logical construct. The hierarchy lets you assemble a role as coarse or granular as you need. Think of it as an access gradient, granting access based on an ascending or descending set of privileges. This modeling exercise cuts both ways – more complex management and auditing is the cost of tighter control. Lower-level roles may have access to specific items or applications, such as a single database, while higher-level manager roles may be used to move and assign users within a system or project.

Keep in mind that roles facilitate many great features that applications rely on. For example, roles can be used to enforce session-level privileges to impose consistency in a system. A classic example is a police station, where there can be only one “officer of the watch” at any given time. While many users can fulfill this function, only one can hold it at a time. This is an edge case not found in most systems, but it nicely illustrates where RBAC can be needed and applied.

RBAC + A

Sometimes a role is not enough by itself. For example, your directory lists 100 users in the role “Doctor”, but is being a doctor enough to grant access to review a patient’s history or perform an operation? Clearly we need more than just roles to define user capabilities, so the industry is trending toward roles supplemented by attributes – what is commonly called RBAC+A (for Attributes). In our simple example above, the access management system both checks the Doctor role and queries additional attributes, such as a patient list and approved operation types, to fully resolve an access request.
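A minimal sketch of what such a combined check might look like, in the style of the role check from our Role Lifecycle post – the AttributeService and its patientsOf() lookup are hypothetical stand-ins for wherever patient lists actually live:

boolean canViewRecord(Subject currentUser, String patientId) {
    // Coarse-grained RBAC check: the user must hold the role at all.
    if (!currentUser.hasRole("Doctor")) {
        return false;
    }
    // Attribute refinement: being a doctor is not enough – the patient
    // must also be on this doctor's current list, loaded at runtime.
    Set<String> patients = AttributeService.patientsOf(currentUser);
    return patients.contains(patientId);
}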
Adding attributes addresses another dimension of the access control equation: attributes can be closely linked to a user or resource, and then loaded into the program at runtime. The benefit is access control decisions based on dynamic data rather than static mappings, which are much harder to maintain. Roles with dynamic attributes can provide the best of both worlds: roles for coarse policy checks, refined with dynamic attributes for fresher and more precise authorization decisions.

More on Integration

We will return to integration… no, don’t go away… come back… integration is important! If you zoom out on any area of IAM you will see it is rife with integration challenges, and roles are no different. Key questions for integrating roles include the following:

What is the authoritative source of roles? Roles are a hybrid – privilege information is derived from many sources. But roles are best stored in a locale with visibility to both users and resource privileges. In a monolithic system (“We just keep everything in AD.”) this is not a problem. But for distributed heterogeneous systems this isn’t a single problem – it is often problems #1, #2, and #3! The repository of users can usually be tracked down and managed – by hook or by crook – but the larger challenge is usually discovering and managing the privilege side. To work through this problem, security designers need to choose a starting point with the right level of resource permission granularity. A URL can be a starting point, but by itself it is usually not enough, because a single URL may offer a broad range of functionality. This gets a bit complex, so let’s walk through an example: consider setting a role for accessing an arbitrary domain like https://example.com/admin. Checking that the user has the Admin role before displaying any content makes sense. But the functionality across all Admin screens can vary widely, so the scope of work must be defined by more granular roles (Mail Admin, DB Admin, and so on) and/or by further role checking within the applications. Even this simple example demonstrates why working with roles is often an iterative process – getting the definition and granularity right requires consideration of both the subject and the object sides. The authoritative source is not just a user repository – ideally it is a system repository with hooks and references to both users and resources.

Where is the policy enforcement point for roles? Once the relationship between roles and privileges is defined, there is still the question of where to enforce privileges. The answer from most role checkers is simple: access either granted


Friday Summary: April 18, 2014, The IT Dysfunction Issue

I just finished reading The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford. And wow, what a great book! It really captures the organizational trends and individual behaviors that screw up software & IT projects. Better yet, it offers some concrete examples of how to address these issues. The Phoenix Project is a bit like a time machine for me, because it so accurately captures the entire ecosystem of dysfunction at one of my former companies that it could have been based on that organization. I have worked with these people and witnessed those behaviors – but my Brent was a guy named Yudong, who was very bright and well-intentioned, but without a clue how to operate. Those weekly emergency hair-on-fire sessions were typically caused by him.

Low-quality software and badly managed deployments make productivity go backwards. Worse, repeat failures and lack of reliability create tension and distrust between all the groups in a company, to the point where they become rival factions. Not a pleasant work environment – everyone thinks everyone else is bad at their job! The Phoenix Project does a wonderful job of capturing these situations, and why companies fall into these behavioral patterns. Had this book been written 10 years ago it might have saved a different firm I worked for. A certain CEO – who did things like mandate a waterfall development process shorter than the development cycle, commit to features without specifications and forget to tell development, and only allow user features (not scalability, reliability, management, or testing infrastructure improvements) into development – might not have failed so spectacularly. Look at blog posts from Facebook, Twitter, Netflix, and Google – companies which have succeeded at building products during explosive growth. They don’t talk about fancy UI or customer-centric features – they talk about how to advance their infrastructure while making their jobs easier over the long term. Steady improvement. In some of my previous firms more money went into prototype apps to show off a technology than into the technology and supporting infrastructure.

Anyway, as an ex-VP of Engineering and CTO, I like this book a lot and think it would be very helpful for anyone who needs to manage technology or technical people. We all make mistakes, and it is valuable for executive management to have the essential threads of dysfunction exposed this way. When you are in the middle of the soup it is hard to explain why certain actions are disastrous, especially when they come from, say, the CEO. And no, I am not getting paid for this, and no, I did not get a free copy of the book. This enthusiastic endorsement is because I think it will help managers avoid some misery. Well, that, and I am enjoying the mental image of the looks on some people’s faces when they each receive a highlighted copy anonymously in the mail. Regardless, highly recommended, especially if you manage technology efforts. It might save your bacon!

We have not done the Summary in a couple weeks, so there is a lot of news! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Mort speaking next week at Thotcon.

Favorite Securosis Posts
• David Mortman: NoSQL Security 2.0 [New Series] updated.
• Adrian Lane: Can’t Unsee. “It was funny … also because it didn’t happen to me.” Sometimes that Rothman guy really cracks me up!
• Mike Rothman: NoSQL Security 2.0 [New Series]. Looking forward to this series from Adrian.
I know barely enough database security to be dangerous, and it’s a great opportunity for all of us to learn.

Other Securosis Posts
• Incite 4/16/2014: Allergies.
• Understanding Role Based Access Control: Role Lifecycle.
• Responsibly (Heart)Bleeding.
• Firestarter: Three for Five.
• FFIEC’s Rear-View Mirror.
• Understanding Role Based Access Control [New Series].
• Defending Against DDoS: Mitigations.

Favorite Outside Posts
• David Mortman: Security of Things: An Implementers’ Guide to Cyber-Security for Internet of Things. Devices and Beyond! <– a PDF, but read it anyway.
• Adrian Lane: Manhattan: real-time, multi-tenant distributed database for Twitter scale. Having just finished the excellent The Phoenix Project, I particularly see success factors in how companies like Twitter, Facebook, and Netflix approach development.
• Gunnar Peterson: The Heartbleed Hit List. They took the time to go through all the major web services to show who is affected. Good reference.
• Mike Rothman: NSS Labs Hits Back at FireEye ‘Untruths’. There was quite a dust-up last week when NSS published their “Breach Detection” tests. FireEye didn’t do very well and responded. And then the war of words began. Here is Channelomics’ perspective.
• Gal Shpantzer: Moving Forward. I think this will be my FS link every week.
• Dave Lewis: Security on-call nightmares.
• Pepper: iptables rules to block all heartbeat queries.

Research Reports and Presentations
• Reducing Attack Surface with Application Control.
• Leveraging Threat Intelligence in Security Monitoring.
• The Future of Security: The Trends and Technologies Transforming Security.
• Security Analytics with Big Data.
• Security Management 2.5: Replacing Your SIEM Yet?
• Defending Data on iOS 7.
• Eliminate Surprises with Security Assurance and Testing.
• What CISOs Need to Know about Cloud Computing.
• Defending Against Application Denial of Service Attacks.
• Executive Guide to Pragmatic Network Security Management.

Top News and Posts
• Heartbleed Update (v3) via @CISOAndy.
• DuckDuckGo is the Anonymous Alternative to Google.
• What Edward Snowden Used to Evade the NSA.
• FBI warns businesses of VC IP scams. Soon to be a movie snort.
• Aereo Streaming-TV Service Wins Big Ruling Against Broadcasters.
• Staying ahead of OpenSSL vulnerabilities.
• Don’t Shoot The Messenger.
• One of World’s Largest Websites Hacked.
• Brendan Eich Steps Down as Mozilla CEO. A series of strange decisions at Mozilla make you wonder what’s up over there.
• Companies track more than credit scores.
• Whitehat Security’s Aviator browser is coming to Windows.

Blog Comment of the Week
This week’s best comment goes to Marco Tietz, in response to Responsibly (Heart)Bleeding.

Agreed. a bit of bumpy road pre-disclosure (why only a few groups etc pp, you guys covered that in the firestarter), but responsible handling from akamai along the way. maybe I’m too optimistic but it seems to be happening more often than it used to.


Understanding Role Based Access Control: Role Lifecycle

Role-based access control (RBAC) has earned a place in the access control architectures of many organizations. Companies have many questions about how to use roles effectively, including “How can I integrate role-based systems with my applications? How can I build a process around roles? How can I manage roles on a day-to-day basis? And by the way, how does this work?” It is difficult to distinguish between the different options on the market – they all claim equivalent functionality. Our goal for this post is to provide a simple view of how all the pieces fit together, what you do with them, and how each piece helps provide and/or support role-based access.

Role Lifecycle in a real-world enterprise

Roles make access control policy management easier. The concept is simple: perform access control based on a role assigned to one or more users. Users are grouped by job function, so a single role can define access for all users who perform that function – simplifying access control policy development, management, and deployment. The security manager does not need to set permissions for every user, but can simply grant access to the necessary functions to a single shared role. Like many simple concepts, what is easy to understand can be difficult to achieve in the real world. We begin our discussion of real-world usage of roles and role-based access control (RBAC) by looking at practices and pitfalls for using roles in your company.

Role definition

For a basic definition, we will start with roles as a construct for managing the application of security policy across the separation between users and the system’s resources. A role is a way to group similar users. On the resource side, access is granted via a set of permissions – such as Create, Read, Update, and Delete – which are assigned to the roles that need them. This simple definition is the way roles are commonly used: as a tool for management convenience. If you have many users and a great many applications – each with many features and functions – it quickly becomes untenable to manage them individually. Roles provide an abstraction layer to ease administration.

Roles and groups are often lumped together, but there is an important difference. Users are added to Groups – such as the Finance Group – to club them together. Roles go one step further – the association is bi-directional: users are members of roles, which are then associated with permissions. Permissions allow a user, through a role, to take action (such as Create, Read, Update, or Delete) on an application and/or resources.

Enforcing access control policy with roles

What roles should you create? What are your company’s rules for which users get access to which application features? Most firms start with their security policies, if they are documented. But this is where things get interesting: some firms don’t have documented policies – or at least not at the right level to unambiguously specify technical access control policy. Others have information security policies which are tens or even hundreds of pages long. But as a rule those are not really read by IT practitioners, and sometimes not even by their authors. Information security policies are full of moldy old chestnuts like “principle of least privilege” – which sounds great, but what does it mean in practice? How do you actually use it? Another classic is “Separation of Duties” – privileged users should not have unfettered access, so you divide capabilities across several people.
Again the concept makes sense, but there is no clear roadmap to take advantage of it. One of the main values of RBAC is that it lets you enforce a specific set of policies for a specific set of users. Only a user acting in the role of Department X can access Department X’s resources. In addition, RBAC can enforce a hierarchy of roles. A user with the Department X manager role can add or disable users in the Department X worker bee roles.

Our recommendation is clear: start simple. It is very effective to start with a small set of roles, perhaps 20-30. Do not feel obliged to create more roles initially – instead ensure that your initial small set of roles is integrated end-to-end, to users on the front end, and to permissions and resources on the back end. Roles open up ways to enforce important access control policies – including separation of duties. For example your security policy might state that users in a Finance role cannot also be in an IT role. Role-based access control gives you a way to enforce that policy.

Implementation

Building on our simple definition, a permission checker could perform this role check:

Subject currentUser = SecurityService.getSubject();
if (currentUser.hasRole("CallCenter")) {
    // show the Call Center screen
} else {
    // access denied
}

In this simple example an application does not make an access control decision per user, but instead bases it on the user’s role. Most application servers contain some form of RBAC support, and it is often better to rely on server configuration than to hard-code permission checks. For example:

<web-app>
  <security-role>
    <role-name>CallCenter</role-name>
  </security-role>
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>Call Center pages</web-resource-name>
      <url-pattern>/CCFunctions/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
      <role-name>CallCenter</role-name>
    </auth-constraint>
  </security-constraint>
</web-app>

Notice that both the code and configuration examples map the role – and its permission set – to the resource (screen and URL). This accomplishes a key RBAC concept: the programmer does not need specific knowledge about any user – they are abstracted away from user accounts, and deal only with permissions and roles. Making this work in the real world raises the question of integration: where do you deploy the roles that govern access? Do you do it in code, configuration, or a purpose-built tool?

Integration

RBAC systems raise both first-mile and last-mile integration considerations. For the first mile what you do is straightforward: role assignment is tied to user accounts. Each user has one or more assigned roles. Most enterprises use Active Directory, LDAP, and other systems to store and manage users, so role mapping conveniently takes place in collaboration with


Understanding Role Based Access Control [New Series]

Identity and Access Management (IAM) is a marathon rather than a sprint. Most enterprises begin their IAM journey by strengthening authentication, implementing single sign-on, and enabling automated provisioning. These are excellent starting points for an enterprise IAM foundation, but what happens next? Once users are provisioned, authenticated, and signed on to multiple systems, how are they authorized? Enterprises need to answer crucial questions very quickly: How is access managed for large groups of users? How will you map business roles to technology and applications? How is access reviewed for security and auditing? What level of access granularity is appropriate?

Many enterprises have gotten over the first hurdle for IAM programs, with sufficient initial capabilities in authentication, single sign-on, and provisioning. But focusing on access is only half the challenge; the key to establishing a durable IAM program for the long haul is tying it to an effective authorization strategy. Roles are not just a management concept to make IT administration easier; they are also fundamental to defining how work in an enterprise gets done. Role-based access control (RBAC) has been around for a while and has a proven track record, but key questions remain for enterprise practitioners. How can roles make management easier? Where is the IAM industry going? What pitfalls exist with current role practices? How should an organization get started setting up a role-based system? This series will explore these questions in detail.

Roles are special to IAM. They can answer certain critical access management problems, but they require careful consideration. Their value is easy to see, but several things are essential to realizing that value: identifying authoritative sources, managing the business-to-technology mapping, integrating with applications, and mastering the art and science of access granularity. The paper will provide context, explore each of these questions in detail, and provide the critical bits enterprises need to choose between role-based access control products:

• The role lifecycle in a real-world enterprise – how to use roles to make management easier: This post will focus on three areas: defining roles and how they work, enforcing access control policies with roles, and using roles in real-world systems. We will also cover identification of sources, integration, and access reviews.

• Advanced concepts – where is the industry going? This section will talk about role engineering – rolling up your sleeves to get work done. But we will also cover more advanced concepts such as using attributes with roles, dynamic ‘risk-based’ access, scalability, and dealing with legacy systems.

• Role management: This is the section many of you will be most interested in: how to manage roles. We will examine access control reviews, scaling across the enterprise, metrics, logging, error handling, and handling key audit & compliance chores.

• Buyer’s guide: As with most of our series, not all vendors and services are equal, so we will offer a buyer’s guide. We will examine the criteria for the major use cases, help you plan and run the evaluation, and decide on a product. We will offer a set of steps to ensure success, and finally a buyer’s checklist for features and proofs-of-concept.

Our goal is to address the common questions from enterprises regarding role-based access controls, with a focus on techniques and technologies that address these concerns.
The content for this paper will be developed and posted to the Securosis blog, and as always we welcome community feedback on the blog and via Twitter.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.