Friday Summary: April 18, 2014, The IT Dysfunction Issue

I just finished reading The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford. And wow, what a great book! It really captures the organizational trends and individual behaviors that screw up software and IT projects. Better yet, it offers some concrete examples of how to address these issues. The Phoenix Project is a bit like a time machine for me, because it so accurately captures the entire ecosystem of dysfunction at one of my former companies that it could have been based on that organization. I have worked with these people and witnessed those behaviors – but my Brent was a guy named Yudong, who was very bright and well-intentioned, but without a clue how to operate. Those weekly emergency hair-on-fire sessions were typically caused by him. Low-quality software and badly managed deployments make productivity go backwards. Worse, repeat failures and lack of reliability create tension and distrust between all the groups in a company, to the point where they become rival factions. Not a pleasant work environment – everyone thinks everyone else is bad at their job!

The Phoenix Project does a wonderful job of capturing these situations, and why companies fall into these behavioral patterns. Had this book been written 10 years ago it might have saved a different firm I worked for. A certain CEO – one who did things like mandate a waterfall development process shorter than the development cycle, commit to features without specifications and forget to tell development, and only allow user features (not scalability, reliability, management, or testing infrastructure improvements) into development – might not have failed so spectacularly. Look at blog posts from Facebook, Twitter, Netflix, and Google – companies that have succeeded at building products during explosive growth. They don't talk about fancy UI or customer-centric features – they talk about how to advance their infrastructure while making their jobs easier over the long term. Steady improvement. In some of my previous firms more money went into prototype apps to show off a technology than into the technology and supporting infrastructure.

Anyway, as an ex-VP of Engineering and CTO, I like this book a lot, and think it would be very helpful for anyone who needs to manage technology or technical people. We all make mistakes, and it is valuable for executive management to have the essential threads of dysfunction exposed this way. When you are in the middle of the soup it is hard to explain why certain actions are disastrous, especially when they come from, say, the CEO. And no, I am not getting paid for this, and no, I did not get a free copy of the book. This enthusiastic endorsement is because I think it will help managers avoid some misery. Well, that, and I am enjoying the mental image of the looks on some people's faces when they each receive a highlighted copy anonymously in the mail. Regardless, highly recommended, especially if you manage technology efforts. It might save your bacon!

We have not done the Summary in a couple weeks, so there is a lot of news! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mort is speaking next week at Thotcon.

Favorite Securosis Posts

  • David Mortman: NoSQL Security 2.0 [New Series] *updated*.
  • Adrian Lane: Can't Unsee. "It was funny … also because it didn't happen to me." Sometimes that Rothman guy really cracks me up!
  • Mike Rothman: NoSQL Security 2.0 [New Series]. Looking forward to this series from Adrian. I know barely enough database security to be dangerous, and it's a great opportunity for all of us to learn.

Other Securosis Posts

  • Incite 4/16/2014: Allergies.
  • Understanding Role Based Access Control: Role Lifecycle.
  • Responsibly (Heart)Bleeding.
  • Firestarter: Three for Five.
  • FFIEC's Rear-View Mirror.
  • Understanding Role Based Access Control [New Series].
  • Defending Against DDoS: Mitigations.

Favorite Outside Posts

  • David Mortman: Security of Things: An Implementers' Guide to Cyber-Security for Internet of Things. Devices and Beyond! <– a PDF, but read it anyway.
  • Adrian Lane: Manhattan: real-time, multi-tenant distributed database for Twitter scale. Having just finished the excellent The Phoenix Project, I particularly see success factors in how companies like Twitter, Facebook, and Netflix approach development.
  • Gunnar Peterson: The Heartbleed Hit List. They took the time to go through all the major web services to show who is affected. Good reference.
  • Mike Rothman: NSS Labs Hits Back at FireEye 'Untruths'. There was quite a dust-up last week when NSS published their "Breach Detection" tests. FireEye didn't do very well and responded. And then the war of words began. Here is Channelnomics' perspective.
  • Gal Shpantzer: Moving Forward. I think this will be my FS link every week.
  • Dave Lewis: Security on-call nightmares.
  • Pepper: iptables rules to block all heartbeat queries.

Research Reports and Presentations

  • Reducing Attack Surface with Application Control.
  • Leveraging Threat Intelligence in Security Monitoring.
  • The Future of Security: The Trends and Technologies Transforming Security.
  • Security Analytics with Big Data.
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7.
  • Eliminate Surprises with Security Assurance and Testing.
  • What CISOs Need to Know about Cloud Computing.
  • Defending Against Application Denial of Service Attacks.
  • Executive Guide to Pragmatic Network Security Management.

Top News and Posts

  • Heartbleed Update (v3) via @CISOAndy.
  • DuckDuckGo is the Anonymous Alternative to Google.
  • What Edward Snowden Used to Evade the NSA.
  • FBI warns businesses of VC IP scams. Soon to be a movie snort.
  • Aereo Streaming-TV Service Wins Big Ruling Against Broadcasters.
  • Staying ahead of OpenSSL vulnerabilities.
  • Don't Shoot The Messenger.
  • One of World's Largest Websites Hacked.
  • Brendan Eich Steps Down as Mozilla CEO. A series of strange decisions at Mozilla make you wonder what's up over there.
  • Companies track more than credit scores.
  • Whitehat Security's Aviator browser is coming to Windows.

Blog Comment of the Week

This week's best comment goes to Marco Tietz, in response to Responsibly (Heart)Bleeding.

Agreed. A bit of a bumpy road pre-disclosure (why only a few groups, etc. – you guys covered that in the Firestarter), but responsible handling from Akamai along the way. Maybe I'm too optimistic, but it seems to be happening more often than it used to.


Incite 4/16/2014: Allergies

It was a crummy winter. Cold. Snowy. Whiplash temperature swings. Over the past few weeks, when ATL finally seemed to warm up for spring (and I was actually in town), I rejoiced. One of the advantages of living a bit south is the temperate weather from mid-February to late November. But there is a downside. The springtime blooming of the flowers and trees is beautiful, and brings the onslaught of pollen. For a couple weeks in the spring, everything is literally green. It makes no difference what color your car is – if it's outside for a few minutes it's green. Things you leave outside (like your deck furniture and grill): green. Toys and balls the kids forget to put back in the garage when they are done? Yup, those are green too. And not a nice green, but a fluorescent green that reminds you breathing will be a challenge for a few weeks.

Every so often we get some rain to wash the pollen away. And the streams and puddles run green. It's pretty nasty. Thankfully I don't have bad allergies, but for those few weeks even I get some sniffles and itchy eyes. But XX2 has allergies, bad. It's hard for her to function during the pollen season. Her eyes are puffy (and last year swelled almost shut). She can't really breathe. She's hemorrhaging mucus; we can't seem to send her to school with enough Sudafed, eye drops, and tissues to keep her even barely comfortable. It's brutal for her. But she's a trooper. And for the most part she doesn't play outside (no recess, no phys ed, and limited sports activities) until the pollen is mostly gone.

Unless she does. Last night, when we were celebrating Passover with a bunch of friends, we lost track of XX2. With 20+ kids at Seder that was easy enough to do. When it was time to leave we found her outside, where she had been playing for close to an hour. Yes, it rained yesterday and gave her a temporary respite from the pollen. But that lulled her into a false sense of security. So when she started complaining about her eyes itching a bit and wanted some Benadryl to get to sleep, we didn't want to hear about it.

Yes, it's hard seeing your child uncomfortable. It's also brutal to have her wake you up in the middle of the night when she can't breathe and can't get back to sleep. But we make it clear to all the kids that they have the leeway to make choices for themselves. With that responsibility, they need to live with the consequences of their choices. Even when those consequences are difficult for all of us.

But this will pass soon enough. The pollen will be gone and XX2 will be back outside playing every day. Which means she'll need to learn the same lesson during next year's pollen onslaught. Wash, rinse, repeat. It's just another day in the parenting life.

–Mike

Photo credit: "I Heart Pollen!" originally uploaded by Brooke Novak

See Mike Speak

Mike will be moderating a webcast this coming Thursday at 2pm ET, discussing how to Combat the Next Generation of Advanced Malware with folks from Critical Assets and WatchGuard. Register here: http://secure.watchguard.com/how-to-survive-an-apt-attack-social.html

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
  • April 14 – Three for Five
  • March 24 – The End of Full Disclosure
  • March 19 – An Irish Wake
  • March 11 – RSA Postmortem
  • Feb 21 – Happy Hour – RSA 2014
  • Feb 17 – Payment Madness
  • Feb 10 – Mass Media Abuse
  • Feb 03 – Inevitable Doom
  • Jan 27 – Government Influence
  • Jan 20 – Target and Antivirus
  • Jan 13 – Crisis Communications

2014 RSA Conference Guide

In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it's really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Understanding Role-based Access Control
  • Introduction

NoSQL Security 2.0
  • Introduction

Defending Against Network Distributed Denial of Service Attacks
  • Mitigations
  • Magnification
  • The Attacks
  • Introduction

Advanced Endpoint and Server Protection
  • Quick Wins
  • Detection/Investigation
  • Prevention
  • Assessment
  • Introduction

Newly Published Papers

  • Reducing Attack Surface with Application Control
  • Leveraging Threat Intelligence in Security Monitoring
  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing
  • What CISOs Need to Know about Cloud Computing

Incite 4 U

Traitors are the new whistleblowers: A good thought-provoking post by Justine Aitel on how security needs to change and evolve, given some of the architectural and social disruptions impacting technology. She makes a bunch of points about how the cloud and the "compete now/share first/think later" mentality impact risk. It comes back to some tried and true tactics folks have been talking about for years (yes, Pragmatic CSO reference): things like communications and getting senior folks on board with the risks they are taking – and ignorance is no excuse. She also makes good points about new roles as these changes take root, and that's where the traitors and whistleblowers in the title come from. Overall her conclusion – "This game is no longer just for us nerds" – rings true. But that's not new. Security has been the purview of business folks for years. It's just that now the stakes are higher. – MR

A glimpse of DBSec's future: From a database design perspective, the way Facebook is customizing databases to meet their performance needs is a fascinating look at what's possible with modular, open source NoSQL platforms. Facebook's goals are performance related,


Understanding Role Based Access Control: Role Lifecycle

Role-based access control (RBAC) has earned a place in the access control architectures of many organizations. Companies have many questions about how to effectively use roles, including: How can I integrate role-based systems with my applications? How can I build a process around roles? How can I manage roles on a day-to-day basis? And by the way, how does this all work? It is difficult to distinguish between the different options on the market – they all claim equivalent functionality. Our goal for this post is to provide a simple view of how all the pieces fit together, what you do with them, and how each piece helps provide and/or support role-based access.

Role Lifecycle in a real-world enterprise

Roles make access control policy management easier. The concept is simple: perform access control based on a role assigned to one or more users. Users are grouped by job function, so a single role can define access for all users who perform that function – simplifying access control policy development, management, and deployment. The security manager does not need to set permissions for every user, but can simply grant access to the necessary functions to a single shared role. Like many simple concepts, what is easy to understand can be difficult to achieve in the real world. We begin our discussion of real-world usage of roles and role-based access control (RBAC) by looking at practices and pitfalls for using roles in your company.

Role definition

For a basic definition, we will start with roles as a construct for managing the application of security policy at the separation between users and the system's resources. A role is a way to group similar users. On the resource side, resources are accessed via a set of permissions – such as Create, Read, Update, and Delete – which are assigned to the roles that need them.

This simple definition reflects the way roles are commonly used: as a tool for management convenience. If you have many users and a great many applications – each with many features and functions – it quickly becomes untenable to manage them individually. Roles provide an abstraction layer to ease administration.

Roles and groups are often lumped together, but there is an important difference. Users are added to groups – such as the Finance group – to lump them together. Roles go one step further – the association is bi-directional: users are members of roles, which are then associated with permissions. Permissions allow a user, through a role, to take action (such as Create, Read, Update, or Delete) on an application and/or resources.

Enforcing access control policy with roles

What roles should you create? What are your company's rules for which users get access to which application features? Most firms start with their security policies, if they are documented. But this is where things get interesting: some firms don't have documented policies – or at least not at the right level to unambiguously specify technical access control policy. Others have information security policies which are tens or even hundreds of pages long. But as a rule those are not really read by IT practitioners, and sometimes not even by their authors. Information security policies are full of moldy old chestnuts like "principle of least privilege" – which sounds great, but what does it mean in practice? How do you actually use it? Another classic is "separation of duties" – which means privileged users should not have unfettered access, so you divide capabilities across several people.
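To make the group/role distinction and a separation-of-duties constraint concrete, here is a minimal sketch of an in-memory role store. It is an illustration only – the names (RbacStore, grantRole, separateDuties) are hypothetical, not any product's API:

    import java.util.*;

    // Minimal illustrative RBAC model: users -> roles -> permissions.
    class RbacStore {
        private final Map<String, Set<String>> userRoles = new HashMap<>();
        private final Map<String, Set<String>> rolePermissions = new HashMap<>();
        // Role pairs one user may not hold together (separation of duties).
        private final Set<List<String>> exclusiveRoles = new HashSet<>();

        void separateDuties(String roleA, String roleB) {
            exclusiveRoles.add(List.of(roleA, roleB));
        }

        void grantRole(String user, String role) {
            for (String held : userRoles.getOrDefault(user, Set.of())) {
                if (exclusiveRoles.contains(List.of(held, role)) ||
                    exclusiveRoles.contains(List.of(role, held))) {
                    throw new IllegalStateException(
                        user + " cannot hold both " + held + " and " + role);
                }
            }
            userRoles.computeIfAbsent(user, k -> new HashSet<>()).add(role);
        }

        void allow(String role, String permission) {
            rolePermissions.computeIfAbsent(role, k -> new HashSet<>()).add(permission);
        }

        // The permission check references roles, never individual users.
        boolean isPermitted(String user, String permission) {
            return userRoles.getOrDefault(user, Set.of()).stream()
                    .anyMatch(r -> rolePermissions.getOrDefault(r, Set.of()).contains(permission));
        }
    }

With a model like this, a rule such as "no user may hold both the Finance and IT roles" is enforced at grant time – separateDuties("Finance", "IT") – rather than remaining a paper policy.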
Again, the concept makes sense, but there is no clear roadmap to take advantage of it. One of the main values of RBAC is that it lets you enforce a specific set of policies for a specific set of users. Only a user acting in the role of Department X can access Department X's resources. In addition, RBAC can enforce a hierarchy of roles. A user with the Department X manager role can add or disable users in the Department X worker bee roles.

Our recommendation is clear: start simple. It is very effective to start with a small set of roles, perhaps 20-30. Do not feel obliged to create more roles initially – instead ensure that your initial small set of roles is integrated end-to-end: to users on the front end, and to permissions and resources on the back end.

Roles open up ways to enforce important access control policies – including separation of duties. For example your security policy might state that users in a Finance role cannot also be in an IT role. Role-based access control gives you a way to enforce that policy.

Implementation

Building on our simple definition, a permission checker could perform this role check:

    Subject currentUser = SecurityService.getSubject();
    if (currentUser.hasRole("CallCenter")) {
        // show the Call Center screen
    } else {
        // access denied
    }

In this simple example an application does not make an access control decision per user, but based on the user's role. Most application servers contain some form of RBAC support, and it is often better to rely on server configuration than to hard-code permission checks. For example:

    <web-app>
      <security-role>
        <role-name>CallCenter</role-name>
      </security-role>
      <security-constraint>
        <web-resource-collection>
          <web-resource-name>Call Center pages</web-resource-name>
          <url-pattern>/CCFunctions/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
          <role-name>CallCenter</role-name>
        </auth-constraint>
      </security-constraint>
    </web-app>

Notice that both the code and configuration examples map the role – the permission set – to the resource (screen and URL). This accomplishes a key RBAC concept: the programmer does not need specific knowledge about any user – they are abstracted away from user accounts, and deal only with permissions and roles. Making this work in the real world raises the question of integration: Where do you deploy the roles that govern access? Do you do it in code, configuration, or a purpose-built tool?

Integration

RBAC systems raise both first-mile and last-mile integration considerations. For the first mile the work is straightforward: role assignment is tied to user accounts. Each user has one or more assigned roles. Most enterprises use Active Directory, LDAP, and other systems to store and manage users, so role mapping conveniently takes place in collaboration with


Can’t Unsee (and the need for better social media controls)

I have to admit the USAirways porno tweet had me cracking up. Business Insider has good coverage (even including the NSFW link, if you are a glutton for, well, whatever). It was funny not because of the picture, but as an illustration of how a huge corporation could have its brand and image impacted by the mistake of one person. Also because it didn't happen to me. I assure you the executive suite at the company did not think this was funny, at all.

But it highlights the need for much greater control of social media. With advertising there are multiple layers of approval before anything ever hits the airwaves – and we still have branding fiascos. Social media changes the rules. One person can control a very highly followed account, and that person's device can be attacked and compromised – giving attackers free rein to behave badly and impact the brand. Or a malicious insider could do the same. Or just plain old human error. It happens all the time, but not like the USAir tweet. That went viral fast, and the damage was done even faster. It's like Pandora's Box. Once it's open, you shouldn't try to put a plane in it. (Sorry, had to…)

I know you have to move fast with social media. But folks will be lampooning USAirways for years over this. I don't think their real-time response to the customer outweighs the downside, or that a little check and balance would be a terrible thing – if only to make sure you have multiple eyes on the corporate social media accounts.

Photo credit: "Cannot Unsee" originally uploaded by Lynn Williams


Responsibly (Heart)Bleeding

Yeah, we hit on the Heartbleed vulnerability in this week's Firestarter, but I wanted to call attention to how Akamai handled the vulnerability. They first came out with an announcement that their networks (and their customers) were safe because their systems were already patched. Big network service providers tend to get an early heads-up when stuff like this happens, so they can get a head start on patching. They were also very candid about whether they have proof of compromise:

Do you have any evidence of a data breach? No. And unfortunately, this isn't "No, we have evidence that there was no breach of data;" rather, "we have no evidence at all." We doubt many people do – and this leaves data holders in the uncomfortable position of not knowing what, if any, data breaches might have happened. Sites using Akamai were not measurably safer – or less safe – than sites not using Akamai.

So kudos are due Akamai for explaining the issue in understandable terms, discussing their home-grown way of issuing and dealing with certs, discussing the potential vulnerability window before they started patching, and owning up to the fact that they (like everyone else) have no idea what (if anything) was compromised. Then they assured customers they were protected.

Unless they weren't. Over the weekend a researcher pointed out a bug in Akamai's patch. Ruh roh. But again, to Akamai's credit, they came clean. They posted an update explaining the specifics of the buggy patch and why they were still exposed. Then they made it clear that all the certs will be re-issued – just to be sure.

As a result, we have begun the process of rotating all customer SSL keys/certificates. Some of these certificates will quickly rotate; some require extra validation with the certificate authorities and may take longer.

It is okay to be wrong. As long as an organization works diligently to make it right, and keeps customers updated and in the loop. Preferably without requiring an NDA to figure out what's going on…


FFIEC’s Rear-View Mirror

You have to love compliance mandates, especially when they are anywhere from 18 months to 3 years behind the threat. Recently the FFIEC (the body that regulates financial institutions) published some guidance for financials to defend against DDoS attacks. Hat tip to Techworld.

It's not like the guidance is bad. Assessing risk, monitoring inbound traffic, and having a plan to move traffic to a scrubber are all good. And I guess some organizations still don't know that they should even perform that simple level of diligence. But a statement in the FFIEC guidance sums up rear-view mirror compliance:

"In the latter half of 2012, an increased number of DDoS attacks were launched against financial institutions by politically motivated groups," the FFIEC statement says. "These DDoS attacks continued periodically and increased in sophistication and intensity. These attacks caused slow website response times, intermittently prevented customers from accessing institutions' public websites, and adversely affected back-office operations."

Uh, right on time. 18 months later. It's not that DDoS is going away, but mandating such obvious stuff at this point is a beautiful illustration of solving yesterday's problem tomorrow. Which I guess is what most compliance mandates are about. Sigh.

Photo credit: "mtcook" originally uploaded by Jim Howard


Firestarter: Three for Five

In this week's Firestarter the team makes up for last week and picks three different stories, each with a time limit. It's like one of those ESPN shows, but with less content and personality. The audio-only version is up too.


Understanding Role Based Access Control [New Series]

Identity and Access Management (IAM) is a marathon rather than a sprint. Most enterprises begin their IAM journey by strengthening authentication, implementing single sign-on, and enabling automated provisioning. These are excellent starting points for an enterprise IAM foundation, but what happens next? Once users are provisioned, authenticated, and signed on to multiple systems, how are they authorized? Enterprises need to very quickly answer crucial questions: How is access managed for large groups of users? How will you map business roles to technology and applications? How is access reviewed for security and auditing? What level of access granularity is appropriate?

Many enterprises have gotten over the first hurdle for IAM programs with sufficient initial capabilities in authentication, single sign-on, and provisioning. But focusing on access is only half the challenge; the key to establishing a durable IAM program for the long haul is tying it to an effective authorization strategy. Roles are not just a management concept to make IT administration easier; they are also fundamental to defining how work in an enterprise gets done.

Role-based access control (RBAC) has been around for a while and has a proven track record, but key questions remain for enterprise practitioners. How can roles make management easier? Where is the IAM industry going? What pitfalls exist with current role practices? How should an organization get started setting up a role-based system? This series will explore these questions in detail.

Roles are special to IAM. They can answer certain critical access management problems, but they require careful consideration. Their value is easy to see, but several elements are essential to realizing that value: identifying authoritative sources, managing the business-to-technology mapping, integrating with applications, and the art and science of access granularity. The paper will provide context, explore each of these questions in detail, and provide the critical bits enterprises need to choose between role-based access control products:

  • The role lifecycle in a real-world enterprise – how to use roles to make management easier: This post will focus on three areas: defining roles and how they work, enforcing access control policies with roles, and using roles in real-world systems. We will also cover identification of sources, integration, and access reviews.
  • Advanced concepts – where is the industry going? This section will talk about role engineering – rolling up your sleeves to get work done. But we will also cover more advanced concepts, such as using attributes with roles, dynamic 'risk-based' access, scalability, and dealing with legacy systems.
  • Role management: This is the section many of you will be most interested in: how to manage roles. We will examine access control reviews, scaling across the enterprise, metrics, logging, error handling, and handling key audit & compliance chores.
  • Buyer's guide: As with most of our series, not all vendors and services are equal, so we will offer a buyer's guide. We will examine the criteria for the major use cases, help you plan and run the evaluation, and decide on a product. We will offer a set of steps to ensure success, and finally a buyer's checklist for features and proofs-of-concept.

Our goal is to address the common questions from enterprises regarding role-based access controls, with a focus on techniques and technologies that address these concerns.
The content for this paper will be developed and posted to the Securosis blog, and as always we welcome community feedback on the blog and via Twitter.


Defending Against DDoS: Mitigations

Our past two posts discussed network-based Distributed Denial of Service (DDoS) attacks and the tactics used to magnify those attacks to unprecedented scale and volume. Now it's time to wrap up this series with a discussion of defenses. To understand what you're up against, let's take a small excerpt from our Defending Against Denial of Service Attacks paper.

First the obvious: you cannot just throw bandwidth at the problem. Your adversaries likely have an unbounded number of bots at their disposal, and are getting smarter at using shared virtual servers and cloud instances to magnify the traffic at their disposal. So you can't just hunker down and ride it out. They likely have a bigger cannon than you can handle. You need to figure out how to deal with a massive amount of traffic, and separate good traffic from bad, while maintaining availability.

Your first option is to leverage existing network/security products to address the issue. As we discussed in our introduction, that is not a good strategy, because those devices aren't built to withstand the volumes or tactics involved in a DDoS. Next, you could deploy a purpose-built device on your network to block DDoS traffic before it melts your networks. This is certainly an option, but if your inbound network pipes are saturated, an on-premise device cannot help much – applications will still be unavailable. Finally, you can front-end your networks with a service to scrub traffic before it reaches your network. But this approach is no panacea either – it takes time to move traffic to a scrubbing provider, and during that window you are effectively down. So the answer is likely a combination of these tactics, deployed in a complementary fashion to give you the best chance to maintain availability.

Do Nothing

Before we dig into the different alternatives, we need to acknowledge one other choice: doing nothing. The fact is that many organizations have to go through an exercise after being hit by a DDoS attack, to determine what protections are needed. Given the investment required for any of the alternatives listed above, you have to weigh the cost of downtime against the cost of potentially stopping the attack. This is another security tradeoff. If you are a frequent or high-profile target then doing nothing isn't an option. If you got hit with a random attack – which happens when attackers are testing new tactics and code – and you have no reason to believe you will be targeted again, you may be able to get away with doing nothing. Of course you could be wrong, in which case you will suffer more downtime. You need to both make sure all the relevant parties are aware of this choice, and manage expectations so they understand the risk you are accepting in case you do get attacked again. We will just say we don't advocate this do-nothing approach, but we understand that tough decisions need to be made with scarce resources. Assuming you want to put some defenses in place to mitigate the impact of a DDoS, let's work through the alternatives.

DDoS Defense Devices

These appliances are purpose-built to deal with DoS attacks, and include both optimized IPS-like rules to prevent floods and other network anomalies, and simple web application firewall capabilities to protect against application layer attacks. Additionally, they feature anti-DoS capabilities such as session scalability and embedded IP reputation, in order to discard traffic from known bots without full inspection.
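To make that reputation fast path concrete, here is a minimal sketch of a filter that caches known-bad sources, so the expensive lookup happens at most once per IP per cache window. The TTL and the cloudReputationSaysBad() call are illustrative assumptions, not any vendor's API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative fast path: cache known-bad IPs locally so most sessions
    // are dropped or passed without a slow cloud reputation lookup.
    class ReputationFilter {
        private static final long TTL_MILLIS = 5 * 60 * 1000; // re-check after 5 minutes
        private final Map<String, Long> badUntil = new ConcurrentHashMap<>();

        boolean shouldDrop(String sourceIp) {
            Long expiry = badUntil.get(sourceIp);
            if (expiry != null && expiry > System.currentTimeMillis()) {
                return true; // cached verdict: drop without full inspection
            }
            if (cloudReputationSaysBad(sourceIp)) { // the expensive remote lookup
                badUntil.put(sourceIp, System.currentTimeMillis() + TTL_MILLIS);
                return true;
            }
            return false; // hand off to rate limits and full inspection
        }

        private boolean cloudReputationSaysBad(String ip) {
            return false; // placeholder for the cloud reputation query described below
        }
    }

The caching matters because, as discussed next, a remote reputation lookup on every inbound session would itself become a bottleneck under flood conditions.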
To understand the role of IP reputation, recall how email connection management devices enabled anti-spam gateways to scale up to handle spam floods. It is computationally expensive to fully inspect every inbound email, so immediately dumping messages from known bad senders focuses inspection on email that might be legitimate, to keep mail flowing. The same concept applies here. Keep the latency inherent in checking a cloud-based reputation database in mind – you will want the device to aggressively cache bad IPs to avoid a lengthy cloud lookup for every incoming session.

For kosher connections which pass the reputation test, these devices additionally enforce limits on inbound connections, govern the rate of application requests, control clients' request rates, and manage the total number of connections allowed to hit the server or load balancer sitting behind them. Of course these limits must be defined incrementally to avoid shutting down legitimate traffic during peak usage.

Speed is the name of the game for DDoS defense devices, so make sure yours has sufficient headroom to handle your network pipe. Over-provision to ensure it can handle bursts and keep up with the increasing bandwidth you are sure to bring in over time.

CDN/Web Protection Services

Another popular option is to front-end web applications with a content delivery network or web protection service. This tactic only protects the web applications you route through the CDN, but can scale to handle very large DDoS attacks in a cost-effective manner. But if the attacker targets other addresses or ports on your network, you're out of luck – they aren't protected. DNS servers, for instance, aren't protected. We find CDNs effective for handling network-based DDoS in smaller environments with a small external web presence. There are plenty of other benefits to a CDN, including caching and shielding your external IP addresses. But for stopping DDoS attacks a CDN is a limited answer.

External Scrubbing

The next level up the sophistication (and cost) scale is an external scrubbing center. These services allow you to redirect all your traffic through their network when you are attacked. The switch-over tends to be based on either a proprietary switching protocol (if your perimeter devices or DDoS defense appliances support the carrier's signaling protocol) or a BGP request. Once the determination has been made to move traffic to the scrubbing center, there will be a delay while the network converges, before you start receiving clean traffic through a tunnel from the scrubbing center. The biggest question with a scrubbing center is when to move the traffic. Do it too soon and your resources stay


NoSQL Security 2.0 [New Series] *updated*

NoSQL, both the technology and the industry, has taken off. We are past the point where we can call big data a fad, and we recognize that we are staring straight into the face of the next generation of data storage platforms. About 2 years ago we started the first Securosis research project on big data security, and a lot has changed since then. At that point many people had heard of Hadoop, but could not describe what characteristics made big data different from relational databases – other than storing a lot of data. Now there is no question that NoSQL – as a data management platform – is here to stay; enterprises have jumped into large-scale analysis projects with both feet, and people understand the advantages of leveraging analytics for business, operations, and security use cases.

But as with all types of databases – and make no mistake, big data systems are databases – high quality data produces better analysis results. Which is why in the majority of cases we have witnessed, a key ingredient is sensitive data. It may be customer data, transactional data, intellectual property, or financial information, but it is a critical ingredient. It is not really a question of whether sensitive data is stored within the cluster – more one of which sensitive data it contains. Given broad adoption, rapidly advancing platforms, and sensitive data, it is time to re-examine how to secure these systems and the data they store.

But this paper will be different than the last one. We will offer much more on big data security strategies, in addition to tools and technologies. We will spend less time defining big data and more looking at trends. We will offer more explanation of security building blocks, including data encryption, logging, network encryption, and access controls/identity management in big data ecosystems. We will discuss the types of threats to big data and look at some of the use cases driving security discussions. And just like last time, we will offer a frank discussion of limitations in platforms and vendor offerings, which leave holes in security or fail to mesh with the inherent performance and scalability of big data.

I keep getting one question from enterprise customers and security vendors: people ask repeatedly for a short discussion of data-centric security, so this paper provides one. I have gotten far fewer questions in the last year on how to protect a NoSQL cluster, and far more on how to protect data before it is stored into the cluster. This was a surprise, and it is not clear from my conversations whether it is because users simply don't trust the big data technology, due to worries about data propagation, because they don't feel they can meet compliance obligations, or because they are worried about the double whammy of big data atop cloud services – all these explanations are plausible, and they have all come up. But regardless of driver, companies are looking for advice around encryption, and wondering whether tokenization and masking are viable alternatives for their use cases. The nature of the questions tells me that is where the market is looking for guidance, so I will cover both cluster security and data-centric security approaches.
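Since the tokenization question comes up so often, here is a minimal sketch of the data-centric idea – swapping a sensitive field for a surrogate token before it is written to the cluster. This is an illustration under simple assumptions (an in-memory vault, a random hex token format), not a recommendation of any particular product:

    import java.security.SecureRandom;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative tokenizer: replace a sensitive value with a random token
    // before insertion, keeping the real value in a separate vault.
    class Tokenizer {
        private final Map<String, String> vault = new ConcurrentHashMap<>(); // token -> real value
        private final SecureRandom random = new SecureRandom();

        String tokenize(String sensitiveValue) {
            byte[] bytes = new byte[16];
            random.nextBytes(bytes);
            StringBuilder token = new StringBuilder("tok_");
            for (byte b : bytes) token.append(String.format("%02x", b));
            vault.put(token.toString(), sensitiveValue);
            return token.toString(); // store this surrogate in the cluster
        }

        String detokenize(String token) {
            return vault.get(token); // only callers with vault access recover the value
        }
    }

A real deployment would back the vault with a hardened, separately access-controlled store; the point is that the cluster itself only ever sees surrogate values.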
Here is our current outline:

  • Big Data Overview and Trends: This post will provide a refresher on what big data is, how it differs from relational databases, and how companies are leveraging its intrinsic advantages. We will also provide references on how the market has changed and matured over the last 24 months, as this bears on how to approach security.
  • Big Data Security Challenges: We will discuss why big data is different architecturally and operationally, and how its platform bundles and approaches differ from traditional relational databases. We will discuss which traditional tools, technologies, and security controls are present, and how usage of these tools differs in big data environments.
  • Big Data Security Approaches: We will outline the approaches companies take when implementing big data security programs, as reference architectures. We will outline walled-garden models, cluster security approaches, data-centric security, and cloud strategies.
  • Cluster Security: An examination of how to secure a big data cluster. This will be a threat-centric examination of how to secure a cluster from attackers, rogue admins, and application programmers.
  • Data (Centric) Security: We will look at tools and technologies that protect data regardless of where it is stored or moved, for use when you don't trust the database or its repository.
  • Application Security: An executive summary of application security controls and approaches.
  • Big Data in Cloud Environments: Several cloud providers offer big data as part of Platform or Infrastructure as a Service offerings. Intrinsic to these environments are security controls offered by the cloud vendor, offering optional approaches to securing the cluster and meeting compliance requirements.
  • Operational Considerations: Day-to-day management of the cluster is different than management of relational databases, so the focus of security efforts changes too. This post will examine how daily security tasks change, and how to adjust operational controls and processes to compensate. We will also offer advice on integration with existing security systems such as SIEM and IAM.

As with all our papers, you have a voice in what we cover. So I would like feedback from readers, particularly on whether you want a short section on application layer security as well. It is (tentatively) included in the current outline. Obviously this would be a brief overview – application security itself is a very large topic. That said, I would like input on that and any other areas you feel need addressing.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.