Wednesday, April 23, 2014

Incite 4/23/2014: New Coat of Paint

By Mike Rothman

It is interesting to see the concept of mindfulness enter the vernacular. For folks who have read the Incite for a while, I haven’t been shy about my meditation practice. And next week I will present on Neuro-Hacking with Jen Minella at her company’s annual conference. I never really shied away from this discussion, but I didn’t go out of my way to discuss it either.

Looks like Banksy strikes again

If someone I was meeting with seemed receptive to talking about it, I would. If they weren’t, I wouldn’t. It doesn’t really matter to me either way. Turns out I found myself engaging in interesting conversations in unexpected places once I became open to talking about my experiences.

It turns out mindfulness is becoming mass market fodder. In our Neuro-Hacking talk we reference Search Inside Yourself, which describes Google’s internal program; that program is broadening into a mindfulness curriculum and a variety of other resources to kickstart a practice. These materials are hitting the market faster and faster now. When I was browsing through a brick and mortar bookstore last weekend with the Boy (they still exist!), I saw two new titles in the HOT section on these topics. From folks you wouldn’t expect.

10% Happier is from Dan Harris, a weekend anchor for ABC News. He describes his experiences embracing mindfulness and meditation. I am about 75% done with his book, and it is good to see how a skeptic overcame his preconceived notions to gain the aforementioned 10% benefit in his life. I also noticed Arianna Huffington wrote a book called Thrive, which seems to cover a lot of the same topics – getting out of our own way to find success, by drawing “on our intuition and inner wisdom, our sense of wonder, and our capacity for compassion and giving.”

At this point I start worrying that mindfulness will just be the latest in a series of fads to capture the public’s imagination, briefly. ‘Worry’ is probably the wrong word – it’s more that I have a feeling of having seen this movie before and knowing it ends up like the Thighmaster. Like a lot of fads, many folks will try it and give up. Or learn they don’t like it. Or realize it doesn’t provide a quick fix in their life, and then go back to their $300/hr shrinks, diet pills, and other short-term fixes.

And you know what? That’s okay. The nice part about really buying into mindfulness and non-judgement is that I know it’s not for everyone. How can it be? With billions of people on earth, there are bound to be many paths and solutions for people to find comfort, engagement, and maybe even happiness. And just as many paths for people to remain dissatisfied, judgmental, and striving for things they don’t have.

I guess the best thing about having some perspective is that I can appreciate that nothing I’m doing is really new. Luminaries and new-age gurus like Eckhart Tolle and Deepak Chopra have put a new coat of paint on a 2,500-year-old practice. They use fancy words for a decidedly unfancy practice. That doesn’t make it new. It just makes it shiny, and perhaps accessible to a new generation of folks. And there’s nothing wrong with that.

–Mike

Photo credit: “Wet Paint II” originally uploaded by James Offer


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


2014 RSA Conference Guide

In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Understanding Role-based Access Control

NoSQL Security 2.0

Defending Against Network Distributed Denial of Service Attacks

Advanced Endpoint and Server Protection

Newly Published Papers


Incite 4 U

  1. Questions driving the search for answers: Whatever you are doing, stop! And read Kelly White’s 3-part series on Questioning Security (Part 1, Part 2, and Part 3). Kelly’s main contention is that the answers we need to do security better are there, but only if we ask the right questions. Huh. Then he provides a model for gathering that data, contextualizing it, using some big data technologies to analyze it, and even works through an example or two. This echoes something we have been talking about for a long time. There is no lack of data. There is a lack of information to solve security problems. Of course a lot of this stuff is easily said but much harder to do. And even harder to do consistently. But it helps to have a model which provides a roadmap. Without some examples to make the model tangible you won’t even know where to start. So thank Kelly for a piece of that. Now go read the posts. – MR

  2. Bounties on open source security flaws: The Veracode blog’s latest post is thought-provoking, asking whether it is time to Crowdfund Open Source Software. The post hits the key points on both sides of the open source vs. proprietary software debate, discussed for almost a decade without resolution so far. While I consider the statement “Heartbleed vulnerability puts the lie to the idea of the ‘thousands of eyes’ notion” total BS – software will always have flaws which are not readily apparent – it is good they threw in that point, balanced against Andy Ellis’s “Our lesson of the last few days is that proprietary products are not stronger…” This is the core issue! Enterprise IT never fully trusted open source code, and it would be a lie to say otherwise. But that is more an emotional response than based on fact – they say they don’t trust it but (often unwittingly) use lots of it. Look at it this way: how many major web sites, many of which include substantial proprietary code, rely on OpenSSL? And OpenSSL was in use for years, with this bug undetected. So I throw in a hearty ‘Yes!’. We definitely need to crowdfund open source software security for critical components. This software can benefit from additional scrutiny, the same way we have proven proprietary code does. – AL

  3. Botnet innovation latte: Our pals at Malcovery identified an interesting phishing message targeting Starbucks customers/aficionados (I wouldn’t know any of those). Targeting a large consumer brand with a phishing attack isn’t interesting. But the phishing site can deliver “the GameOver Zeus variant adding the victim’s machine to a large peer-to-peer botnet and deploy rootkiting tools from the Necurs rootkit to hamper detection and removal of this trojan–all without downloading additional files or contacting a static command and control.” [emphasis mine] That’s interesting. No additional files, and no need to contact a C&C network, because it’s a peer-to-peer botnet. So much for that cool callback detection widget you just deployed, eh? Actually it’s just another opportunity for defenders to take another step to keep pace with attackers. And the beat goes on… – MR

  4. The shape of things to go: Have you noticed all the new security positions listed on job boards? Retail is just now seeing The Rise of the CSO, and this article captures the mindset of those grappling with security for the first time: “We should not be having any breaches …”. Yeah, right. Finance and regulated industries have placed C-level executives in IT security and compliance for the better part of the last decade, and understand that breaches will happen, necessitating a balancing act between prevention and detection/response. Retail? On the technology adoption curve, the retail data security vertical is decidedly in the ‘laggard’ category. It is ironic that an industry at the forefront of customer analytics, driven by sensitive data and monetized via just-in-time sales programs, is at the tail end of data security. But clearly the Target breach prompted a collective “Oh crap, am I vulnerable too?!” gasp. While other firms are evolving to distribute security responsibility across different business centers, retail is trying to buy a clue through CSO/CISO hires. – AL

  5. Security lemonade: Not that I’m a fan of Schneier, but every so often he finds a metaphor that makes sense for security folks. He recently wrote on his blog that Security is a Market for Lemons, pointing out that, like the used car market, the best offerings price themselves out of the market because typical buyers can’t tell the difference between options, and so opt for the average or even below-average (priced) solution. It is hard to tell real security from snake oil, so we need someone to vouch for a product to help unsuspecting consumers know the difference. Kind of like Consumer Reports. The problem, as Schneier points out, is that there is no real market for this. Product testing labs tend to focus on the stuff they can measure, and as nicely demonstrated by the NSS/FireEye dust-up, they can all too easily get swamped in a messy he-said/she-said deal. And the media can no longer pay for real product testing like in the old days. So what to do? Rely on your friends, of course. They tend to be the most reliable source of information. – MR

–Mike Rothman

Tuesday, April 22, 2014

Understanding Role Based Access Control: Advanced Concepts

By Adrian Lane

    For some of you steeped in IAM concepts, our previous post on Role Lifecycles seems a bit basic. But many enterprises are still grappling with how to plan for, implement, and manage roles throughout the enterprise. There are many systems which contribute to roles and privileges, so what may seem basic in theory is often quite complex in practice. Today’s post will dig a bit deeper into more advanced RBAC concepts. Let’s roll up our sleeves to look at role engineering!

    Role Engineering

    Getting value from roles in real-world use cases requires spending some time analyzing what you want from the system, and deciding how to manage user roles. A common first step is to determine whether you want a flat or hierarchical role structure. A ‘flat’ structure is conceptually simple, and the easiest to manage for smallish environments. For example you might start by setting up DBA and SysAdmin roles as peers, and then link them to the privileges they need.

    Flat structure

    Flat role structures are enough to get the job done in many cases because they provide the core requirement of mapping between roles and privileges. Then Identity Management (IdM) and Provisioning systems can associate users with their roles, and limit users to their authorized subset of system functions.

    But large, multifunction applications with thousands of users typically demand more from roles to address the privilege management problem.

    Hierarchies add value in some circumstances, particularly when it makes sense for roles to include several logical groups of permissions. Each level of the hierarchy has sub-classes of permissions, and the further up the hierarchy you go the more permissions are bundled together into each logical construct. The hierarchy lets you assemble a role as coarse or granular as you need. Think of it as an access gradient, granting access based on an ascending or descending set of privileges.

    Hierarchical

    This modeling exercise cuts both ways – more complex management and auditing is the cost of tighter control. Lower-level roles may be scoped to specific items or applications, such as a single database. Higher up the hierarchy, manager-level roles may be used to move and assign users within a system or project.
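
    To make the hierarchy concrete, here is a minimal sketch, assuming hypothetical role and permission names (nothing here comes from a specific product): each role may inherit from a parent, and its effective permissions are the union of everything up the chain.

    // Minimal sketch of a role hierarchy (all names hypothetical)
    import java.util.*;

    class Role {
        final String name;
        final Role parent;                          // null for roles in a flat structure
        final Set<String> permissions = new HashSet<>();

        Role(String name, Role parent) { this.name = name; this.parent = parent; }

        // Effective permissions: this role's own grants plus everything inherited
        Set<String> effectivePermissions() {
            Set<String> all = new HashSet<>(permissions);
            if (parent != null) all.addAll(parent.effectivePermissions());
            return all;
        }
    }

    public class HierarchyDemo {
        public static void main(String[] args) {
            Role itStaff = new Role("ITStaff", null);
            itStaff.permissions.add("ticket:read");

            Role dba = new Role("DBA", itStaff);    // DBA inherits ITStaff's permissions
            dba.permissions.add("db:backup");

            System.out.println(dba.effectivePermissions());  // [db:backup, ticket:read]
        }
    }

    The flat model above is just the degenerate case: every role has a null parent.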

    Keep in mind that roles facilitate many great features that applications rely on. For example roles can be used to enforce session-level privileges to impose consistency in a system. A classic example is a police station, where there can only be one “officer of the watch” at any given time. While many users can fulfill this function, only one can hold it at a time. This is an edge case not found in most systems, but it nicely illustrates where RBAC can be needed and applied.
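
    As a sketch of how that session-level constraint might be enforced (the class and method names are ours, purely for illustration), many users can be eligible for the role while only one holds it at a time:

    // Hypothetical "officer of the watch" constraint: one active holder at a time
    import java.util.concurrent.atomic.AtomicReference;

    public class WatchRole {
        private static final AtomicReference<String> holder = new AtomicReference<>();

        // Returns true if the user acquired the role; false if someone else holds it
        public static boolean activate(String userId) {
            return holder.compareAndSet(null, userId);
        }

        public static void release(String userId) {
            holder.compareAndSet(userId, null);     // only the current holder may release
        }

        public static void main(String[] args) {
            System.out.println(activate("alice"));  // true
            System.out.println(activate("bob"));    // false: alice has the watch
            release("alice");
            System.out.println(activate("bob"));    // true
        }
    }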

    RBAC + A

    Sometimes a role is not enough by itself. For example, your directory lists 100 users in the role “Doctor”, but is being a doctor enough to grant access to review a patient’s history or perform an operation? Clearly we need more than just roles to define user capabilities, so the industry is trending toward a combination of roles supplemented by attributes.

    Roles can be further refined by adding attributes – what is commonly called RBAC+A (for Attributes). In our simple example above the access management system both checks the Doctor role and queries additional attributes such as a patient list and approved operation types to fully resolve an access request.

    Adding attributes addresses another dimension of the access control equation: attributes can be closely linked to a user or resource, and then loaded into the program at runtime. The benefit is access control decisions based on dynamic data rather than static mappings, which are much harder to maintain. Roles with dynamic attributes can provide the best of both worlds: roles for coarse policy checks, refined with dynamic attributes for fresher and more precise authorization decisions.
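
    Here is an illustrative RBAC+A check, assuming a hypothetical attribute source (in practice a directory, database, or claims in a token): the coarse role gate is static, while the attributes are resolved per request.

    // Sketch of a role check refined by runtime attributes (names hypothetical)
    import java.util.Set;

    public class AccessDecision {
        // Stand-in for wherever attributes actually live
        interface AttributeSource {
            Set<String> patientListFor(String userId);
        }

        static boolean canViewChart(String userId, Set<String> userRoles,
                                    String patientId, AttributeSource attrs) {
            if (!userRoles.contains("Doctor")) return false;          // coarse role check
            return attrs.patientListFor(userId).contains(patientId);  // dynamic attribute check
        }

        public static void main(String[] args) {
            AttributeSource attrs = user -> Set.of("patient-42");
            System.out.println(canViewChart("drsmith", Set.of("Doctor"), "patient-42", attrs)); // true
            System.out.println(canViewChart("drsmith", Set.of("Doctor"), "patient-99", attrs)); // false
        }
    }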

    More on Integration

    We will return to integration… no, don’t go away… come back… integration is important! If you zoom out on any area of IAM you will see it is rife with integration challenges, and roles are no different.

    Key questions for integrating roles include the following:

    What is the authoritative source of roles?

    Roles are a hybrid – privilege information is derived from many sources. But roles are best stored in a location with visibility into both users and resource privileges. In a monolithic system (“We just keep everything in AD.”) this is not a problem. But for distributed heterogeneous systems this isn’t a single problem – it is often problems #1, #2, and #3! The repository of users can usually be tracked down and managed – by hook or by crook – but the larger challenge is usually discovering and managing the privilege side.

    To work through this problem, security designers need to choose a starting point with the right level of resource permission granularity. A URL can be a starting point but by itself is usually not enough, because a single URL may offer a broad range of functionality. This gets a bit complex so let’s walk through an example:

    Consider setting a role for accessing an arbitrary domain like https://example.com/admin. Checking that the user has the Admin role before displaying any content makes sense. But the functionality across all Admin screens can vary widely. In that case the scope of work must be defined by more granular roles (Mail Admin, DB Admin, and so on) and/or by further role checking within the applications. Even this simple example clearly demonstrates why working with roles is often an iterative process – getting the definition and granularity right requires consideration of both the subject and the object sides. The authoritative source is not just a user repository – it should ideally be a system repository for hooks and references to both users and resources.
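
    A small sketch of that two-step refinement (role names hypothetical): a coarse Admin check at the URL boundary, then a granular check before each sensitive function.

    // Coarse gate at /admin, granular gates inside the application
    import java.util.Set;

    public class AdminGate {
        static boolean canReachAdminArea(Set<String> roles) {
            return roles.contains("Admin");          // checked before any /admin content renders
        }

        static boolean canRunDbConsole(Set<String> roles) {
            return canReachAdminArea(roles) && roles.contains("DBAdmin");  // finer-grained check
        }

        public static void main(String[] args) {
            Set<String> mailAdmin = Set.of("Admin", "MailAdmin");
            System.out.println(canReachAdminArea(mailAdmin));  // true
            System.out.println(canRunDbConsole(mailAdmin));    // false: needs DBAdmin too
        }
    }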

    Where is the policy enforcement point for roles?

    Once the relationship between roles and privileges is defined, there is still the question of where to enforce privileges. The answer from most role checkers is simple: access is either granted or denied – but figuring out where to place the checks can be considerably more complicated.

    Role checking can be done at the UI level, such as in a gateway or proxy; it can be in code in the middle tier; and/or it can be performed in the data tier. Notice that “and/or” – role checks can, and often should, occur in several places inside an application. Or – typically for management convenience – it may make sense to centralize all role checks in one place.

    The next question is: Should role enforcement be embedded in the application container, or should the application call out to the role engine? All else being equal, the performance and simplicity of an embedded role checker makes sense to us, but your mileage may vary. Even embedded in an application, a role checker should be clearly auditable – preferably implemented as a set of easily updated rules, rather than hardcoded in an application, scattered across a million lines of spaghetti code.
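
    One way to keep a role checker auditable is to drive it from data rather than code. A minimal sketch, assuming a simple properties-style rule format of our own invention:

    // Role-to-permission rules loaded from reviewable data, not hardcoded checks
    import java.io.StringReader;
    import java.util.*;

    public class RuleTable {
        private final Properties rules = new Properties();

        RuleTable(String ruleText) throws Exception {
            rules.load(new StringReader(ruleText));   // in practice, a managed config file
        }

        boolean isAllowed(String role, String permission) {
            String granted = rules.getProperty(role, "");
            return Arrays.asList(granted.split(",")).contains(permission);
        }

        public static void main(String[] args) throws Exception {
            RuleTable t = new RuleTable("CallCenter=screen:callcenter,ticket:update\n");
            System.out.println(t.isAllowed("CallCenter", "ticket:update"));  // true
            System.out.println(t.isAllowed("CallCenter", "db:drop"));        // false
        }
    }

    Updating access then means editing (and auditing) a rule file, not redeploying the application.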

    Dealing with legacy systems

    Legacy systems – the justification for and mechanisms behind your paycheck – present their own challenges. Very often the only option is to “front end” a legacy system with an authorization proxy or gateway that intercepts and validates access requests.

    If an access-checking “front door” won’t work the next option is to use provisioning systems to synchronize or replicate roles from an authoritative source, translate them into the legacy system language such as SAP, and then let the legacy system check roles based on your external data feed. This can work well but involves up-front data cleanup, role/access translation, and an automated provisioning feed.

    Otherwise most authorization changes are handled through invasive surgery inside the application. Like most invasive surgery, it is extremely painful at best in the short term, and life (career) threatening at worst. When you are wondering whether to rewrite an internal legacy authorization system, the answer is ‘no’ 99 times out of 100.

    Scalability considerations

    It is critical to keep in mind the number of access checks in a real-world system. If your system makes heavy use of roles, as it probably should, then your role system will be taxed in a serious way. Plan for scale and test accordingly. We have seen systems that weren’t, and the results were not pretty – in one memorable case, 3-minute call center interactions turned into 9-minute affairs, thanks to external role checkers running across a large network. In that case the responsible VP was fired. Ensure your RBAC is efficient and scalable, and meets your business’ performance requirements. We understand this is not easy, but if you do your homework up front, roles can pay big dividends down the road.
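
    As a sketch of the kind of caching that keeps external role checks off the critical path (the remote service below is a stand-in we made up; the TTL bounds how long a revocation can lag):

    // Cache role-check verdicts so each session doesn't pay a network round trip
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CachedRoleChecker {
        private static final long TTL_MILLIS = 60_000;   // tune to your revocation tolerance
        private record Entry(boolean allowed, long expiresAt) {}
        private final Map<String, Entry> cache = new ConcurrentHashMap<>();

        // Hypothetical remote call: the expensive hop we want to avoid repeating
        private boolean remoteCheck(String user, String role) { return true; }

        public boolean hasRole(String user, String role) {
            String key = user + "|" + role;
            Entry e = cache.get(key);
            if (e != null && e.expiresAt() > System.currentTimeMillis()) return e.allowed();
            boolean allowed = remoteCheck(user, role);
            cache.put(key, new Entry(allowed, System.currentTimeMillis() + TTL_MILLIS));
            return allowed;
        }

        public static void main(String[] args) {
            CachedRoleChecker checker = new CachedRoleChecker();
            System.out.println(checker.hasRole("alice", "DBA"));  // remote check, then cached
            System.out.println(checker.hasRole("alice", "DBA"));  // served from cache
        }
    }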

    –Adrian Lane

    Verizon DBIR 2014: Incident Classification Patterns

    By James Arlen

    [Note: Rich, Adrian, and Mike are all traveling today, so we asked Jamie Arlen to provide at least a little perspective on an aspect of the DBIR he found interesting. So thanks Jamie for this. We will also throw Gunnar under the bus a little because he has been very active on our email list, with all sorts of thoughts on the DBIR, but he doesn’t want to share them publicly. Maybe external shaming will work, but more likely he’ll retain his midwestern sensibilities and be too damn nice.]

    As usual, the gang over at Verizon have put a lot of information and effort into the 2014 edition of their DBIR (registration required). This is both a good thing and a bad thing. The awesome part is that there are historically very few places where incident information is available – leaving all too many professionals in the position of doing risk mitigation planning, based on anecdotes, prayer, and imagination. The DBIR offers some much-needed information to fill in the blanks.

    This year you will note the DBIR is different. Wade, Jay, and the gang have gone back to the data to provide a new set of viewpoints. They have also done a great job with the graphics. Visualization for the win! Except that all the graphics are secondary to the high-quality data tables. Of course graphics are sexy and tables are boring. Unless you have to make sense of the data, that is. So I will focus on one table in particular to illustrate my point.

    Figure 19

    This is Figure 19 (page 15 printed, 17 of 62 in the PDF) – click it to see a larger version. You may need to stare at it for a while for it to even begin to make sense. I have been staring at it since Friday and I’m still seeing new things.

    Obvious things

    • Accommodation and Point of Sale Intrusion: No real surprise here. The problem of “the waiter taking the carbons” in the 70’s seems to be maintaining its strength into the future. Despite the efforts of the PCI Council, we have a whole lot of compliance but not enough security. And honestly, isn’t it time for the accommodation industry to make that number go down?
    • Healthcare Theft/Loss: Based on the news it is no great surprise that about half the problems in healthcare are related to the loss or theft of information. We have pretty stringent regulation in place (and for years now). Is this a case of too much compliance and not enough security? It is time to take stock of what is really important (protecting the information of recipients of health care services) and build systems and staff capabilities to meet patient expectations!

    Interesting things

    • Industry = Public: Biggest issue is “Misc. Error”. I didn’t know what a Misc Error was either. It turns out that it is due to the reporting requirements most of the public sector is under – they need to (and do) report everything. Things that would go completely unremarked in most organizations are reported. Things like, “I sent this email to the wrong person,” “I lost my personal phone (which had access to company data),” etc. I vaguely remember something from stats class about this.
    • Incident = Denial of Service: The two industries reporting the largest impact are ‘Management’ and ‘Professional’. If you look at the NAICS listings for those two industry categories, you will see they are largely ‘offices’. I would love a deeper dive into those incidents to see what’s going on exactly and what industries they really represent. The text of the report talks primarily about the impact of DoS on the financial industry, but doesn’t go into any detail on the effects on Management and Professional. You can read into the report to see that the issue may have been the takeover of content management systems by the QCF / Brobot DoS attacks.
    • Incident = Cyber Espionage: Just sounds cool. And something we have all spent lots of time talking about. It seems to affect Mining, Manufacturing, Professional and Transportation in greater proportion than others. Again, I’d love a look at the actual incidents – they are probably about 10% Sneakers and 90% Tommy Boy. If you are working in those industries you have something interesting to talk to your HR department about.

    There shouldn’t be any big surprises in this data, but there are plenty of obvious and interesting things. I am still staring at the table and waiting for the magic pattern moment to jump out at me.

    Though if I stare at the chart long enough, I think it’s a sailboat.

    –James Arlen

    Sunday, April 20, 2014

    DDoS-fuscation

    By Mike Rothman

    Akamai’s research team has an interesting post on how attackers now use web proxies to shield their identities when launching DDoS attacks. Using fairly simple web-based tools they can launch attacks, and by routing the traffic through an exposed web proxy they can hide the bots or other devices performing the attacks.

    234 source IP addresses is a surprisingly low number when considering the duration of the collected data (one month), further analysis into the data revealed that out of the 234 IPs, 136 were web proxies – this explains the low number of source IPs – attackers are using web proxies to hide their true identity. In order to understand the nature of these web proxies, we analyzed the domain (WHOIS) information as well as certain HTTP headers and discovered that 77% of all WebHive LOIC attack traffic came from behind Opera Mini proxy servers.

    So the hackers are abusing Opera’s mobile browser system to launch their attacks. Akamai tracked that back to the devices, which were largely in Indonesia. But were they? Were other obfuscation techniques used to further hide the attackers? Who knows? It doesn’t really matter.

    The Akamai researchers go on to talk about blocking attackers’ source IP addresses. Of course that requires you to be pretty nimble, able to mine those IP addresses, and to get blocks configured on your network gear (or within your scrubbing service). Then they talk about using WAF rules to protect applications by blocking DoS tools. And blocking HTTP from well-known DoS apps, assuming the attackers aren’t messing with headers.

    peekaboo

    Understand that blocking some of these IP addresses and applications may result in dropping legitimate sessions – from customers who will quickly become former customers, because people who cannot complete a transaction will find a company which can. So it becomes a balance of loss, between downtime and failed transactions.

    Akamai doesn’t mention built-in application defenses (as discussed in our AppDoS paper), but that’s okay – when you have a hammer, everything looks like a nail.

    Photo credit: “Hide & Seek” originally uploaded by capsicina

    –Mike Rothman

    Thursday, April 17, 2014

    Friday Summary: April 18, 2014, The IT Dysfunction Issue

    By Adrian Lane

    I just finished reading The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford. And wow, what a great book! It really captures the organizational trends and individual behaviors that screw up software & IT projects. And, better yet, it offers some concrete examples for how to address these issues. The Phoenix Project is a bit like a time machine for me, because it so accurately captures the entire ecosystem of dysfunction at one of my former companies that it could have been based on that organization. I have worked with these people and witnessed those behaviors – but my Brent was a guy named Yudong who was very bright and well-intentioned, but without a clue how to operate. Those weekly emergency hair-on-fire sessions were typically caused by him. Low-quality software and badly managed deployments make productivity go backwards. Worse, repeat failures and lack of reliability create tension and distrust between all the groups in a company, to the point where they become rival factions. Not a pleasant work environment – everyone thinks everyone else is bad at their jobs! The Phoenix Project does a wonderful job of capturing these situations, and why companies fall into these behavioral patterns.

    Had this book been written 10 years ago it would have saved a different firm I worked for. A certain CEO who did things like mandate a waterfall development process shorter than the development cycle, commit to features without specifications and forget to tell development, and only allow user features – not scalability, reliability, management, or testing infrastructure improvements – into development might not have failed so spectacularly. Look at blog posts from Facebook and Twitter and Netflix and Google – companies that have succeeded at building products during explosive growth. They don’t talk about fancy UI or customer-centric features – they talk about how to advance their infrastructure while making their jobs easier over the long term. Steady improvement. In some of my previous firms more money went into prototype apps to show off a technology than into the technology and supporting infrastructure itself.

    Anyway, as an ex-VP of Engineering & CTO, I like this book a lot and think it would be very helpful for anyone who needs to manage technology or technical people. We all make mistakes, and it is valuable for executive management to have the essential threads of dysfunction exposed this way. When you are in the middle of the soup it is hard to explain why certain actions are disastrous, especially when they come from, say, the CEO. And no, I am not getting paid for this and no, I did not get a free copy of the book. This enthusiastic endorsement is because I think it will help managers avoid some misery. Well, that, and I am enjoying the mental image of the looks on some people’s faces when they each receive a highlighted copy anonymously in the mail. Regardless, highly recommended, especially if you manage technology efforts. It might save your bacon!

    We have not done the Summary in a couple weeks, so there is a lot of news!


    On to the Summary:

    Webcasts, Podcasts, Outside Writing, and Conferences

    Favorite Securosis Posts

    Other Securosis Posts

    Favorite Outside Posts

    Research Reports and Presentations

    Top News and Posts

    Blog Comment of the Week

    This week’s best comment goes to Marco Tietz, in response to Responsibly (Heart)Bleeding.

    Agreed. a bit of bumpy road pre-disclosure (why only a few groups etc pp, you guys covered that in the firestarter), but responsible handling from akamai along the way. maybe I’m too optimistic but it seems to be happening more often than it used to.

    –Adrian Lane

    Wednesday, April 16, 2014

    Incite 4/16/2014: Allergies

    By Mike Rothman

    It was a crummy winter. Cold. Snowy. Whiplash temperature swings. Over the past few weeks, when ATL finally seemed to warm up for spring (and I was actually in town), I rejoiced. One of the advantages of living a bit south is the temperate weather from mid-February to late November.

    But there is a downside. The springtime blooming of the flowers and trees is beautiful, and brings the onslaught of pollen. For a couple weeks in the spring, everything is literally green. It makes no difference what color your car is – if it’s outside for a few minutes it’s green. Things you leave outside (like your deck furniture and grill), green. Toys and balls the kids forget to put back in the garage when they are done. Yup, those are green too. And not a nice green, but a fluorescent type green that reminds you breathing will be a challenge for a few weeks.

    Love is not a strong enough word when discussing pollen

    Every so often we get some rain to wash the pollen away. And the streams and puddles run green. It’s pretty nasty.

    Thankfully I don’t have bad allergies, but for those few weeks even I get some sniffles and itchy eyes. But XX2 has allergies, bad. It’s hard for her to function during the pollen season. Her eyes are puffy (and last year swelled almost shut). She can’t really breathe. She’s hemorrhaging mucus; we can’t seem to send her to school with enough Sudafed, eye drops, and tissues to make it even barely comfortable.

    It’s brutal for her. But she’s a trooper. And for the most part she doesn’t play outside (no recess, phys ed, and limited sports activities) until the pollen is mostly gone. Unless she does. Last night, when we were celebrating Passover with a bunch of friends, we lost track of XX2. With 20+ kids at Seder that was easy enough to do. When it was time to leave we found her outside, and she had been playing for close to an hour. Yeah, it rained yesterday and gave her a temporary respite from the pollen. But that lulled her into a false sense of security.

    So when she started complaining about her eyes itching a bit and wanted some Benadryl to get to sleep, we didn’t want to hear about it. Yes, it’s hard seeing your child uncomfortable. It’s also brutal to have her wake you up in the middle of the night if she can’t breathe and can’t get back to sleep. But we make it clear to all the kids that they have the leeway to make choices for themselves. With that responsibility, they need to live with the consequences of their choices. Even when those consequences are difficult for all of us.

    But this will pass soon enough. The pollen will be gone and XX2 will be back outside playing every day. Which means she’ll need to learn the same lesson during next year’s pollen onslaught. Wash, rinse, repeat. It’s just another day in the parenting life.

    –Mike

    Photo credit: “I Heart Pollen!” originally uploaded by Brooke Novak


    See Mike Speak

    Mike will be moderating a webcast this coming Thursday at 2pm ET, discussing how to Combat the Next Generation of Advanced Malware with folks from Critical Assets and WatchGuard. Register here: http://secure.watchguard.com/how-to-survive-an-apt-attack-social.html


    Securosis Firestarter

    Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


    2014 RSA Conference Guide

    In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.


    Heavy Research

    We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

    Understanding Role-based Access Control

    NoSQL Security 2.0

    Defending Against Network Distributed Denial of Service Attacks

    Advanced Endpoint and Server Protection

    Newly Published Papers


    Incite 4 U

    1. Traitors are the new whistleblowers: A good thought-provoking post by Justine Aitel on how security needs to change and evolve, given some of the architectural and social disruptions impacting technology. She makes a bunch of points about how the cloud and the “compete now/share first/think later” mentality impact risk. It comes back to some tried and true tactics folks have been talking about for years (yes, Pragmatic CSO reference). Things like communications and getting senior folks on board with the risks they are taking – and ignorance is no excuse. She also makes good points about new roles as these changes take root, and that’s where the traitors and whistleblowers in the title come from. Overall her conclusion: “This game is no longer just for us nerds” rings true. But that’s not new. Security has been the purview of business folks for years. It’s just that now the stakes are higher. – MR

    2. A glimpse of DBSec’s future: From a database design perspective, the way Facebook is customizing databases to meet their performance needs is a fascinating look at what’s possible with modular, open source NoSQL platforms. Facebook’s goals are performance related, but these approaches can also be leveraged for security. For example you can implement tokenization or encryption where FB leveraged compression. And the same way Facebook swapped Corona for Hadoop’s job manager, you could implement identity controls prior to resource grants from the cluster manager. You can install what you want – most anything is possible here! Security can be woven into the platform, without being beholden to platform vendors to design and develop the security model. Granted, most customers want someone else to provide off-the-shelf security solutions, but their modular approach to Hadoop nicely illustrates what is possible. – AL

    3. ‘Marketing’ attacks: The Kalzumeus blog has a really interesting point about how the stickiness of any attack tends to be based on how it is merchandised. Remember Melissa? Or the I Love You virus? Or SQL Slammer? Of course you do – these high-profile attacks got a ton of press coverage and had catchy names. The Heartbleed name and logo were genius. Yes, it is a big issue and worthy of note and remembrance. But will we really remember Kaminsky’s DNS discovery years from now? I probably will because I am a security historian of sorts, but you might not – it doesn’t have a cool name. As an industry we pooh-pooh marketing, but it is integral to many things. But only if you want them to be memorable and drive action. – MR

    4. Helpful ignorance: The question Why should passwords be encrypted if they’re stored in a secure database? makes security professionals go into uncontrollable spasms, but it is a good question! For those new to security, the implicit assumptions underscore areas they don’t understand, and which pieces they need to be educated on. There is no single answer to this question, but “Secured from what?” is a good starting point. Is it secured from malicious DBAs? SQL injection? Direct file examination? The point here is to open a dialog to educate DBAs – and application developers, for that matter – to other types of threats not directly addressed by passwords, user roles, and encrypted backup tapes. – AL

    5. You can’t fight city hall: Actually you can, but it probably won’t work out very well. Case in point: Barrett Brown of allegedly Anon and Stratfor hack fame. He recently agreed to a sealed plea bargain for being an accessory after the fact to posting the credit card numbers (and other stuff). What he pled to wasn’t even part of the original indictment, and he has already done 2 years in custody. With today’s forensicators and their ability to parse digital trails, it is really hard to get away with hacking. At least over a sustained period of time, and at some point the authorities (or Krebs – whoever gets there first) will find you with a smoking digital gun. So what to do? I know it sounds novel, but try to do the right thing – don’t steal folks’ stuff or be a schmuck. – MR

    –Mike Rothman

    Tuesday, April 15, 2014

    Understanding Role Based Access Control: Role Lifecycle

    By Adrian Lane

    Role-based access control (RBAC) has earned a place in the access control architectures at many organizations. Companies have many questions about how to effectively use roles, including “How can I integrate role-based systems with my applications? How can I build a process around roles? How can I manage roles on a day-to-day basis? And by the way, how does this work?” It is difficult to distinguish between the different options on the market – they all claim equivalent functionality. Our goal for this post is to provide a simple view of how all the pieces fit together, what you do with them, and how each piece helps provide and/or support role-based access.

    Role Lifecycle in a real-world enterprise

    Roles make access control policy management easier. The concept is simple: perform access control based on a role assigned to one or more users. Users are grouped by job functions so a single role can define access for all users who perform a function – simplifying access control policy development, management, and deployment. The security manager does not need to set permissions for every user, but can simply grant access to the necessary functions through a single shared role.

    Like many simple concepts, what is easy to understand can be difficult to achieve in the real world. We begin our discussion of real-world usage of roles and role-based access control (RBAC) by looking at practices and pitfalls for using roles in your company.

    Role definition

    For a basic definition we will start with roles as a construct for applying security policy at the boundary between users and a system’s resources. A role is a way to group similar users. On the resource side, access is granted through a set of permissions – such as Create, Read, Update, and Delete – which are assigned to the roles that need them.

    Roles defined

    This simple definition is the way roles are commonly used: as a tool for management convenience. If you have many users and a great many applications – each with many features and functions – it quickly becomes untenable to manage them individually. Roles provide an abstraction layer to ease administration.

    Roles and groups are often lumped together, but there is an important difference. Users are added to Groups – such as the Finance Group – simply as a way to collect similar users. Roles go one step further – the association is bi-directional: users are members of roles, which are then associated with permissions. Permissions allow a user, through a role, to take action (such as Create, Read, Update, or Delete) on an application and/or resources.
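
    A tiny sketch of that bi-directional association (the structure and names are ours, for illustration): users map to roles, and roles map to permissions, so a user's rights are always resolved through a role.

    // Users -> roles -> permissions: the role is the pivot point
    import java.util.*;

    public class RoleStore {
        static final Map<String, Set<String>> userRoles = new HashMap<>();
        static final Map<String, Set<String>> rolePermissions = new HashMap<>();

        static boolean isPermitted(String user, String permission) {
            for (String role : userRoles.getOrDefault(user, Set.of()))
                if (rolePermissions.getOrDefault(role, Set.of()).contains(permission))
                    return true;
            return false;
        }

        public static void main(String[] args) {
            userRoles.put("pat", Set.of("Finance"));
            rolePermissions.put("Finance", Set.of("invoice:create", "invoice:read"));
            System.out.println(isPermitted("pat", "invoice:create"));  // true, via the Finance role
            System.out.println(isPermitted("pat", "invoice:delete"));  // false
        }
    }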

    Enforcing access control policy with roles

    What roles should you create? What are your company’s rules for which users get access to which application features? Most firms start with their security policies, if they are documented. But this is where things get interesting: some firms don’t have documented policies – or at least not at the right level to unambiguously specify technical access control policy. Others have information security policies which are tens or even hundreds of pages long. But as a rule those are not really read by IT practitioners, and sometimes not even by their authors. Information security policies are full of moldy old chestnuts like “principle of least privilege” – which sounds great, but what does it mean in practice? How do you actually use that? Another classic is “Separation of Duties” – which means privileged users should not have unfettered access, so you divide capabilities across several people. Again the concept makes sense, but there is no clear roadmap to take advantage of it.

    One of the main values of RBAC is that it lets you enforce a specific set of policies for a specific set of users. Only a user acting in the role of Department X can access Department X’s resources. In addition, RBAC can enforce a hierarchy of roles. A user with the Department X manager role can add or disable users in the Department X worker bee roles.

    Our recommendation is clear: start simple. It is very effective to start with a small set of roles, perhaps 20-30. Do not feel obliged to create more roles initially — instead ensure that your initial small set of roles is integrated end-to-end, to users on the front end, and to permissions and resources on the back end.

    Roles open up ways to enforce important access control policies – including separation of duties. For example your security policy might state that users in a Finance role cannot also be in an IT role. Role-Based Access Control gives you a way to enforce that policy.
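
    As a sketch of what enforcing that policy might look like at role-assignment time (all names hypothetical), a grant fails if it conflicts with a role the user already holds:

    // Separation of duties: mutually exclusive roles checked when a role is granted
    import java.util.*;

    public class SeparationOfDuties {
        // Pairs of mutually exclusive roles, per the security policy
        static final Set<Set<String>> conflicts = Set.of(Set.of("Finance", "IT"));

        static void assignRole(Set<String> currentRoles, String newRole) {
            if (currentRoles.contains(newRole)) return;   // already granted
            for (String held : currentRoles)
                if (conflicts.contains(Set.of(held, newRole)))
                    throw new IllegalStateException(newRole + " conflicts with " + held);
            currentRoles.add(newRole);
        }

        public static void main(String[] args) {
            Set<String> roles = new HashSet<>(Set.of("Finance"));
            assignRole(roles, "Audit");                   // fine
            try {
                assignRole(roles, "IT");
            } catch (IllegalStateException e) {
                System.out.println(e.getMessage());       // IT conflicts with Finance
            }
        }
    }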

    Implementation

    Building on our simple definition, a permission checker could perform this role check:

    Subject currentUser = SecurityService.getSubject();  // obtain the calling user's security context

    if (currentUser.hasRole("CallCenter")) {
        // role check passed: show the Call Center screen
    } else {
        // access denied
    }
    

    In this simple example the application does not make an access control decision per user; it decides based on the user’s role.

    Most application servers contain some form of RBAC support, and it is often better to rely on server configuration than to hard-code permission checks. For example:

    <web-app>
        <security-role>
            <role-name>CallCenter</role-name>
        </security-role>
        <security-constraint>
            <web-resource-collection>
                <web-resource-name>Call Center pages</web-resource-name>
                <url-pattern>/CCFunctions/*</url-pattern>
            </web-resource-collection>
            <auth-constraint>
                <role-name>CallCenter</role-name>
            </auth-constraint>
        </security-constraint>
    </web-app>
    

    Notice that both code and configuration examples map the role and permission set to the resource (screen and URL). This accomplishes a key RBAC concept: the programmer does not need specific knowledge about any user – code is abstracted from user accounts, and deals only with permissions and roles.

    Making this work in the real world raises the question of integration: Where do you deploy the roles that govern access? Do you do it in code, configuration, or a purpose-built tool?

    Integration

    RBAC systems raise both first-mile and last-mile integration considerations. For the first mile what you do is straightforward: role assignment is tied to user accounts. Each user has one or more assigned roles. Most enterprises use Active Directory, LDAP, and other systems to store and manage users, so role mapping conveniently takes place in collaboration with the user directory.

    Role Touchpoints

    The second integration point (the last mile) is defined by an application’s ‘container’. The container is the place where you manage resources: it could be a registry, repository, server configuration, database, or any of various other places. Linking permissions to roles may be performed through configuration management, or in code, or in purpose-built tools such as access management products. The amount of work you have varies by container type, as does who performs it. With some solutions it is as simple as checking a box, while others require coding.

    Using roles in real-world systems

    This introduction has provided a simple illustration of roles. Our simple system shows both the power of roles and their value as a central control point for access control. Taking advantage of roles requires a plan of action, so here are some key considerations to get started:

    • Identify and establish authoritative source(s) for roles: where and how to define and manage the user-to-role mapping
    • Identify and establish authoritative source(s) for permissions: where and how to define and manage resource permissions
    • Link roles to permissions: the RBAC system must have a way to bind roles and permissions. This can be static in an access management system or a directory, or dynamic at runtime
    • Role assignment: Granting roles to users should be integrated into identity provisioning processes
    • Permission assignment: Configuration management should include a step to provision new applications and services with access rights for each interface
    • Make access control decisions in code, configuration, and services
    • Use roles to conduct access reviews: large organizations adopt roles to simplify access review during audit

    Our next post will build on our simple definition of roles, drilling down into role engineering, management, and design issues.

    –Adrian Lane

    Can’t Unsee (and the need for better social media controls)

    By Mike Rothman

    I have to admit the USAirways porno tweet had me cracking up. Business Insider has good coverage (even including the NSFW link, if you are a glutton for, well, whatever). It was funny not because of the picture, but as an illustration of how a huge corporation could have its brand and image impacted by the mistake of one person. Also because it didn’t happen to me. I assure you the executive suite at the company did not think this was funny, at all.

    Need eye bleach NOW

    But it highlights the need for much greater control of social media. With advertising there are multiple layers of approval before anything ever hits the airwaves – and we still have branding fiascos. Social media changes the rules. One person can control a very highly followed account, and that person’s device can be attacked and compromised – giving attackers free rein to behave badly and impact the brand. Or a malicious insider could do the same. Or just plain old human error. It happens all the time, but not like the USAir tweet. That went viral fast, and the damage was done even faster.

    It’s like Pandora’s Box. Once it’s open, you shouldn’t try to put a plane in it. (Sorry, had to…)

    I know you have to move fast with social media. But folks will be lampooning USAirways for years over this. I don’t think the benefit of real-time response to customers outweighs this downside, and a little check and balance wouldn’t be a terrible thing – if only to make sure you have multiple eyes on the corporate social media accounts.

    Photo credit: “Cannot Unsee” originally uploaded by Lynn Williams

    –Mike Rothman

    Monday, April 14, 2014

    Responsibly (Heart)Bleeding

    By Mike Rothman

    Yeah, we hit on the Heartbleed vulnerability in this week’s FireStarter, but I wanted to call attention to how Akamai handled the vulnerability. They first came out with an announcement that their networks (and their customers) were safe because their systems were already patched. Big network service providers tend to get an early heads-up when stuff like this happens, so they can get a head start on patching.

    They were also very candid about whether they have proof of compromise:

    Do you have any evidence of a data breach?

    No. And unfortunately, this isn’t “No, we have evidence that there was no breach of data;” rather, “we have no evidence at all.” We doubt many people do – and this leaves data holders in the uncomfortable position of not knowing what, if any, data breaches might have happened. Sites using Akamai were not measurably safer – or less safe – than sites not using Akamai.

    So kudos are due Akamai for explaining the issue in understandable terms, discussing their home-grown way of issuing and dealing with certs, discussing the potential vulnerability window before they started patching, and owning up to the fact that they (like everyone else) have no idea what (if anything) was compromised.

    Then they assured customers they were protected. Unless they weren’t. Over the weekend a researcher pointed out a bug in Akamai’s patch. Ruh Roh. But again, to Akamai’s credit, they came clean. They posted an update explaining the specifics of the buggy patch and why they were still exposed. Then they made it clear that all the certs will be re-issued – just to be sure.

    As a result, we have begun the process of rotating all customer SSL keys/certificates. Some of these certificates will quickly rotate; some require extra validation with the certificate authorities and may take longer.

    It is okay to be wrong. As long as an organization works diligently to make it right, and they keep customers updated and in the loop. Preferably without requiring an NDA to figure out what’s going on…

    –Mike Rothman

    Sunday, April 13, 2014

    Firestarter: Three for Five

    By Rich

    In this week’s Firestarter the team makes up for last week and picks three different stories, each with a time limit. It’s like one of those ESPN shows, but with less content and personality.

    The audio-only version is up too.

    –Rich

    FFIEC’s Rear-View Mirror

    By Mike Rothman

    You have to love compliance mandates, especially when they are anywhere from 18 months to 3 years behind the threat. Recently the FFIEC (the body that regulates financial institutions) published some guidance for financials to defend against DDoS attacks. Hat tip to Techworld.

    Hindsight is right, but the impact is from looking at the beauty in front of you

    It’s not like the guidance is bad. Assessing risk, monitoring inbound traffic, and having a plan to move traffic to a scrubber is all good. And I guess some organizations still don’t know that they should even perform that simple level of diligence. But a statement in the FFIEC guidance sums up rear-view mirror compliance:

    “In the latter half of 2012, an increased number of DDoS attacks were launched against financial institutions by politically motivated groups,” the FFIEC statement says. “These DDoS attacks continued periodically and increased in sophistication and intensity. These attacks caused slow website response times, intermittently prevented customers from accessing institutions’ public websites, and adversely affected back-office operations.”

    Uh, right on time. 18 months later. It’s not that DDoS is going away, but to mandate such obvious stuff at this point is a beautiful illustration of solving yesterday’s problem tomorrow. Which I guess is what most compliance mandates are about.

    Sigh.

    Photo credit: “mtcook” originally uploaded by Jim Howard

    –Mike Rothman

    Wednesday, April 09, 2014

    Understanding Role Based Access Control [New Series]

    By Adrian Lane

    Identity and Access Management (IAM) is a marathon rather than a sprint. Most enterprises begin their IAM journey by strengthening authentication, implementing single sign-on, and enabling automated provisioning. These are excellent starting points for an enterprise IAM foundation, but what happens next? Once users are provisioned, authenticated, and signed on to multiple systems, how are they authorized? Enterprises need to very quickly answer crucial questions: How is access managed for large groups of users? How will you map business roles to technology and applications? How is access reviewed for security and auditing? What level of access granularity is appropriate?

    Many enterprises have gotten over the first hurdle for IAM programs with sufficient initial capabilities in authentication, single sign-on, and provisioning. But focusing on access is only half the challenge; the key to establishing a durable IAM program for the long haul is tying it to an effective authorization strategy. Roles are not just a management concept to make IT management easier; they are also fundamental to defining how work in an enterprise gets done.

    Role based access control (RBAC) has been around for a while and has a proven track record, but key questions remain for enterprise practitioners. How can roles make management easier? Where is the IAM industry going? What pitfalls exist with current role practices? How should an organization get started setting up a role based system? This series will explore these questions in detail.

    Roles are special to IAM. They can answer certain critical access management problems, but they require careful consideration. Their value is easy to see, but several essentials must be addressed to realize that value. These include identifying authoritative sources, managing the business-to-technology mapping, integration with applications, and the art and science of access granularity. The paper will provide context, explore each of these questions in detail, and provide the critical bits enterprises need to choose between role-based access control products:

    • The role lifecycle in a real world enterprise – how to use roles to make management easier: This post will focus on three areas: defining roles and how they work, enforcing access control policies with roles, and using roles in real-world systems. We will also cover identification of sources, integration, and access reviews.
    • Advanced concepts – where is the industry going? This section will talk about role engineering – rolling up your sleeves to get work done. But we will also cover more advanced concepts such as using attributes with roles, dynamic ‘risk-based’ access, scalability, and dealing with legacy systems.
    • Role management: This is the section many of you will be most interested in: how to manage roles. We will examine access control reviews, scaling across the enterprise, metrics, logging, error handling, and handling key audit & compliance chores.
    • Buyer’s guide: As with most of our series, not all vendors and services are equal, so we will offer a buyer’s guide. We will examine the criteria for the major use cases, help you plan and run the evaluation, and decide on a product. We will offer a set of steps to ensure success, and finally, a buyer’s checklist for features and proofs-of-concept.

    Our goal is to address the common questions from enterprises regarding role-based access controls, with a focus on techniques and technologies that address these concerns. The content for this paper will be developed and posted to the Securosis blog, and as always we welcome community feedback on the blog and via Twitter.

    –Adrian Lane

    Monday, April 07, 2014

    Defending Against DDoS: Mitigations

    By Mike Rothman

    Our past two posts discussed network-based Distributed Denial of Service (DDoS) attacks and the tactics used to magnify those attacks to unprecedented scale and volume. Now it’s time to wrap up this series with a discussion of defenses. To understand what you’re up against let’s take a small excerpt from our Defending Against Denial of Service Attacks paper.

    First the obvious: you cannot just throw bandwidth at the problem. Your adversaries likely have an unbounded number of bots at their disposal and are getting smarter at using shared virtual servers and cloud instances to magnify the traffic they can generate. So you can’t just hunker down and ride it out. They likely have a bigger cannon than you can handle. You need to figure out how to deal with a massive amount of traffic, and separate good traffic from bad while maintaining availability.

    Your first option is to leverage existing network/security products to address the issue. As we discussed in our introduction, that is not a good strategy because those devices aren’t built to withstand the volumes or tactics involved in a DDoS. Next, you could deploy a purpose-built device on your network to block DDoS traffic before it melts your networks. This is certainly an option, but if your inbound network pipes are saturated, an on-premise device cannot help much – applications will still be unavailable. Finally, you can front-end your networks with a service to scrub traffic before it reaches your network. But this approach is no panacea either – it takes time to move traffic to a scrubbing provider, and during that window you are effectively down.

    So the answer is likely a combination of these tactics, deployed in a complementary fashion to give you the best chance to maintain availability.

    Do Nothing

    Before we dig into the different alternatives, we need to acknowledge one other choice: doing nothing. The fact is that many organizations have to go through an exercise after being hit by a DDoS attack, to determine what protections are needed. Given the investment required for any of the alternatives listed above, you have to weigh the cost of downtime against the cost of potentially stopping the attack.

    This is another security tradeoff. If you are a frequent or high-profile target then doing nothing isn’t an option. If you got hit with a random attack – which happens when attackers are testing new tactics and code – and you have no reason to believe you will be targeted again, you may be able to get away with doing nothing. Of course you could be wrong, in which case you will suffer more downtime. You need to both make sure all the relevant parties are aware of this choice, and manage expectations so they understand the risk you are accepting in case you do get attacked again.

    We will just say we don’t advocate this do-nothing approach, but we do understand that tough decisions need to be made with scarce resources. Assuming you want to put some defenses in place to mitigate the impact of a DDoS, let’s work through the alternatives.

    DDoS Defense Devices

    These appliances are purpose-built to deal with DoS attacks, and include both optimized IPS-like rules to prevent floods and other network anomalies, and simple web application firewall capabilities to protect against application layer attacks. They also include anti-DoS capabilities such as session scalability and embedded IP reputation, in order to discard traffic from known bots without full inspection.

    To understand the role of IP reputation, let’s recall how email connection management devices enabled anti-spam gateways to scale up to handle spam floods. It is computationally expensive to fully inspect every inbound email, so immediately dumping messages from known bad senders focuses inspection on email that might be legitimate to keep mail flowing. The same concept applies here. Keep the latency inherent in checking a cloud-based reputation database in mind – you will want the device to aggressively cache bad IPs to avoid a lengthy cloud lookup for every incoming session.
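
    To illustrate the caching point, here is a minimal sketch of a local TTL cache in front of a cloud reputation lookup. The function names and TTL value are made up for illustration; a real device would do this in hardware or highly optimized code:

        import time

        CACHE_TTL_SECONDS = 300            # how long to trust a cached verdict
        _reputation_cache = {}             # ip -> (verdict, expiry timestamp)

        def cloud_reputation_lookup(ip: str) -> str:
            """Stand-in for a slow, remote reputation service call."""
            return "bad" if ip.startswith("203.0.113.") else "good"

        def check_reputation(ip: str) -> str:
            entry = _reputation_cache.get(ip)
            if entry and entry[1] > time.time():
                return entry[0]                       # cache hit: no network round trip
            verdict = cloud_reputation_lookup(ip)     # cache miss: pay the lookup once
            _reputation_cache[ip] = (verdict, time.time() + CACHE_TTL_SECONDS)
            return verdict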

    For kosher connections which pass the reputation test, these devices additionally enforce limits on inbound connections, govern the rate of application requests, control clients’ request rates, and manage the number of total connections allowed to hit the server or load balancer sitting behind it. Of course these limits must be defined incrementally to avoid shutting down legitimate traffic during peak usage.
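
    Rate limits like these are commonly implemented with token buckets. As a rough illustration of the mechanism, not any particular vendor’s implementation, consider this sketch; the rate and burst numbers are arbitrary placeholders, not recommendations:

        import time

        class TokenBucket:
            """Allow `rate` requests per second, with bursts up to `burst`."""
            def __init__(self, rate: float, burst: float):
                self.rate, self.capacity = rate, burst
                self.tokens, self.last = burst, time.monotonic()

            def allow(self) -> bool:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return True
                return False              # over the limit: drop, delay, or challenge

        per_client = TokenBucket(rate=50, burst=100)   # e.g. one bucket per source IP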

    Speed is the name of the game for DDoS defense devices, so make sure yours have sufficient headroom to handle your network pipe. Over-provision to ensure they can handle bursts and keep up with the increasing bandwidth you are sure to bring in over time.

    CDN/Web Protection Services

    Another popular option is to front-end web applications with a content delivery network or web protection service. This tactic only protects the web applications you route through the CDN, but it can scale to handle very large DDoS attacks in a cost-effective manner. But if the attacker targets other addresses or ports on your network, you’re out of luck – they aren’t protected. DNS servers, for instance, aren’t covered.

    We find CDNs effective for handling network-based DDoS in smaller environments with a modest external web presence. There are plenty of other benefits to a CDN, including caching and shielding your external IP addresses. But for stopping DDoS attacks a CDN is a limited answer.

    External Scrubbing

    The next level up the sophistication (and cost) scale is an external scrubbing center. These services allow you to redirect all your traffic through their network when you are attacked. The switch-over tends to be based on either a proprietary switching protocol (if your perimeter devices or DDoS Defense appliances support the carrier’s signaling protocol) or a BGP request. Once the determination has been made to move traffic to the scrubbing center, there will be a delay while the network converges, before you start receiving clean traffic through a tunnel from the scrubbing center.

    The biggest question with a scrubbing center is when to move the traffic. Do it too soon and your resources stay up, but at significant cost. Do it too late and you suffer additional downtime. Finding that balance is a company-specific decision, based on the perceived cost of downtime compared to the cost and value of the service.

    Another blind spot for scrubbing is hit-and-run attacks, where an attacker blasts a site briefly to take it down. Once the victim moves traffic over to a scrubbing center, the attacker stops, without even trying to take on the scrubber. But the attack has already achieved its goals: disrupted availability and increased latency.

    These factors have pushed scrubbing centers to advocate an always-on approach, where the customer runs all traffic through the scrubbing center all the time. Obviously there is a cost, but if you are a frequent DDoS target or cannot afford downtime for any reason, it may be worth it.

    All of the above

    As we stated in Defending Against DoS Attacks, the best answer is often all of the above. Your choice of network-based DoS mitigations inevitably involves trade-offs. It is not good to over-generalize, but most organizations are best served by a hybrid approach, involving both an on-premise appliance and a contract with a CDN or anti-DoS service provider to handle more severe volumetric attacks. It is rarely cost-effective to run all traffic through a scrubbing center constantly, and many DoS attacks target the application layer – in which case you need a customer premises device anyway.

    Other Protection Tactics

    Given that many DDoS attacks also target DNS (as described in the Attacks post), you will want to make sure your internal DNS infrastructure is protected by front-ending your DNS servers with a DDoS defense device. You will also want to perform due diligence on your external DNS provider, to ensure they have sufficient protections against DDoS – they will be targeted along with you, and you could be impacted if they fall over.

    You don’t want to contribute to the problem yourself, so as a matter of course you should make sure your NTP servers aren’t answering requests from the public Internet (as described by US-CERT). You will also want to remediate compromised devices as quickly as practical, for many reasons – not least to ensure they don’t blast others with your resources and bandwidth.
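
    For a quick spot check, the following sketch sends a standard NTP client query and reports whether the server answers, which is a rough indicator of public exposure. Run it only against your own servers, from a vantage point outside your network; the host name below is hypothetical:

        import socket

        def answers_public_ntp(host: str, timeout: float = 2.0) -> bool:
            """Send a standard NTPv3 client query and report whether `host` replies."""
            packet = b"\x1b" + 47 * b"\x00"    # minimal mode-3 (client) request
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)
                try:
                    s.sendto(packet, (host, 123))
                    s.recvfrom(512)
                    return True                # got a reply: reachable from here
                except OSError:                # timeout or ICMP unreachable
                    return False

        print(answers_public_ntp("ntp.example.com"))   # hypothetical host name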

    The Response Process

    A strong underlying process is your best defense against a DDoS attack. Tactics change as attack volumes increase, but if you don’t know what to do when your site goes down, it will be out for a while.

    The good news is that the DoS defense process is quite similar to general incident response. We have already published a ton of research on this topic, so check out both our Incident Response Fundamentals series and our React Faster and Better paper. If your incident handling process isn’t where it needs to be yet, start there.

    Building off your existing IR process, think about what you need to do as a set of activities: before, during, and after an attack:

    • Before: Before an attack, spend time figuring out attack indicators, and make sure you perform sufficient monitoring to provide both adequate warning and enough information to identify the root cause of attacks. You might see increasing bandwidth volumes or a spike in DNS traffic. Perhaps your applications get flaky and fall down, you see server performance issues, or your CDN alerts you to a possible attack. Unfortunately many DDoS attacks come out of nowhere, so you may not know you are under attack until you are down. Even a simple traffic baseline, like the sketch after this list, can buy you warning time.
    • During: How can you restore service as quickly as possible? By identifying the root cause accurately and remediating effectively. So you need to notify the powers that be, assemble your team, and establish responsibilities and accountability. Then focus on identifying root cause, attack vectors, and adversaries to figure out the best way to get the site back up. Restoring service depends on the mitigations in place, discussed above. Optimally your contracted CDN and/or anti-DoS service provider already has a team working on the problem by this point. In case you don’t have one, you can hope the attack doesn’t last long or your ISP can help you. Good luck.
    • After: Once the attack has been contained focus shifts to restoring normal operations, moving traffic back from the scrubbing center, and perhaps loosening anti-DoS/WAF rules. Keep monitoring for trouble. Try to make sure this doesn’t happen again. This involves asking questions… What worked? What didn’t? Who needs to be added to the team? Who just got in the way? This analysis needs to objectively identify the good, the bad, and the ugly. Dig into the attack as well. What controls would have blunted its impact? Would running all your traffic through a scrubbing provider have helped? Did network redirection work quickly enough? Did you get the right level of support from your service provider? Then update your process as needed and implement new controls if necessary.
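
    To make the ‘before’ stage concrete, here is the kind of simple baseline check mentioned above: it compares current inbound traffic against an exponentially weighted average and flags a large deviation. All numbers and thresholds are placeholders; real monitoring would feed this from your flow or SNMP data and tune for your environment:

        def make_spike_detector(alpha: float = 0.1, multiplier: float = 3.0):
            """Flag samples that exceed an exponentially weighted baseline."""
            baseline = None
            def check(mbps: float) -> bool:
                nonlocal baseline
                if baseline is None:
                    baseline = mbps                    # seed from the first sample
                    return False
                spiking = mbps > baseline * multiplier
                baseline = (1 - alpha) * baseline + alpha * mbps
                return spiking
            return check

        check = make_spike_detector()
        for sample in (120, 130, 125, 118, 900):       # inbound Mbps, e.g. per minute
            if check(sample):
                print("possible DDoS: inbound traffic spike")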

    As we wrap up this series on network-based DDoS, let’s revisit a few key points.

    • Today’s DoS attacks encompass network attacks, application attacks, and magnification techniques to confuse defenders and exploit weaknesses in defenses.
    • Organizations need a multi-faceted approach to defend against DDoS, which likely involves both deploying DDoS defense equipment on-site and contracting with a service provider (either a scrubbing center or a content delivery network) to handle excessive traffic.
    • DoS mitigations do not work in isolation – on-premise devices and services are interdependent for adequate protection, and should communicate with each other to ensure an efficient and transparent transition to the scrubbing service when necessary.

    Of course there are trade-offs with DDoS defense, as with everything. Selecting an optimal mix of defensive tactics requires some adversary analysis, an honest and objective assessment of just how much downtime is survivable, and a decision about what you are willing to pay to restore service quickly. If a few hours of downtime are survivable, defensive tactics can be much different than in situations where no downtime is ever acceptable – which demands more expenditure and much more sophisticated defenses.

    –Mike Rothman

    Friday, April 04, 2014

    NoSQL Security 2.0 [New Series] *updated*

    By Adrian Lane

    NoSQL – both the technology and the industry – has taken off. We are past the point where we can call big data a fad, and we recognize that we are staring straight into the face of the next generation of data storage platforms. About two years ago we started the first Securosis research project on big data security, and a lot has changed since then. At that point many people had heard of Hadoop, but could not describe what made big data different from relational databases – other than storing a lot of data.

    Now there is no question that NoSQL – as a data management platform – is here to stay; enterprises have jumped into large-scale analysis projects with both feet, and people understand the advantages of leveraging analytics for business, operations, and security use cases. But as with all types of databases – and make no mistake, big data systems are databases – high-quality data produces better analysis results. That is why, in the majority of cases we have witnessed, a key ingredient is sensitive data. It may be customer data, transactional data, intellectual property, or financial information, but it is a critical ingredient. It is not really a question of whether sensitive data is stored within the cluster – more one of which sensitive data it contains. Given broad adoption, rapidly advancing platforms, and sensitive data, it is time to re-examine how to secure these systems and the data they store.

    But this paper will be different from the last one. We will offer much more on big data security strategies, in addition to tools and technologies. We will spend less time defining big data and more time looking at trends. We will offer more explanation of security building blocks, including data encryption, logging, network encryption, and access controls/identity management in big data ecosystems. We will discuss the types of threats to big data and look at some of the use cases driving security discussions. And just like last time, we will offer a frank discussion of limitations in platforms and vendor offerings, which leave holes in security or fail to mesh with the inherent performance and scalability of big data.

    One question keeps coming up with enterprise customers and security vendors: people ask repeatedly for a short discussion of data-centric security, so this paper provides one. Over the last year I have gotten far fewer questions on how to protect a NoSQL cluster, and far more on how to protect data before it is stored in the cluster. This was a surprise, and it is not clear from my conversations whether it is because users simply don’t trust big data technology, due to worries about data propagation, because they don’t feel they can meet compliance obligations, or because they are worried about the double whammy of big data atop cloud services – all these explanations are plausible, and they have all come up. But regardless of driver, companies are looking for advice on encryption, and wondering whether tokenization and masking are viable alternatives for their use cases. The nature of the questions tells me that is where the market is looking for guidance, so I will cover both cluster security and data-centric security approaches.
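
    To show what that looks like in miniature, here is a deliberately simplified tokenization sketch: a random surrogate goes into the cluster while the real value stays in a separate vault. Everything here – names, token format, the in-memory vault – is hypothetical; a real system would persist and protect the vault, handle scale, and likely preserve data formats:

        import secrets

        _vault = {}     # token -> original value (would be persisted and protected)
        _issued = {}    # original value -> token, so repeats tokenize consistently

        def tokenize(value: str) -> str:
            if value in _issued:
                return _issued[value]
            token = "tok_" + secrets.token_hex(8)      # random surrogate, no math link
            _vault[token] = value
            _issued[value] = token
            return token

        def detokenize(token: str) -> str:
            return _vault[token]     # only callers with vault access can reverse it

        record = {"name": "J. Smith", "ssn": tokenize("078-05-1120")}
        # `record` is safe to store in the cluster; the vault holds the real SSN.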

    Here is our current outline:

    • Big Data Overview and Trends: This post will provide a refresher on what big data is, how it differs from relational databases, and how companies are leveraging its intrinsic advantages. We will also provide references on how the market has changed and matured over the last 24 months, as this bears on how to approach security.
    • Big Data Security Challenges: We will discuss why it is different architecturally and operationally, and also how the platform bundles and approaches differ from traditional relational databases. We will discuss what traditional tools, technologies and security controls are present, and how usage of these tools differs in big data environments.
    • Big Data Security Approaches: We will outline the approaches companies take when implementing big data security programs, as reference architectures. We will outline walled-garden models, cluster security approaches, data-centric security, and cloud strategies.
    • Cluster Security: An examination of how to secure a big data cluster. This will be a threat-centric examination of how to secure a cluster from attackers, rogue admins, and application programmers.
    • Data (Centric) Security: We will look at tools and technologies that protect data regardless of where it is stored or moved, for use when you don’t trust the database or its repository.
    • Application Security: An executive summary of application security controls and approaches.
    • Big data in cloud environments: Several cloud providers offer big data as part of Platform or Infrastructure as a Service offerings. Intrinsic to these environments are security controls from the cloud vendor, which provide optional approaches to securing the cluster and meeting compliance requirements.
    • Operational Considerations: Day-to-day management of the cluster is different than management of relational databases, so the focus of security efforts changes too. This post will examine how daily security tasks change and how to adjust operational controls and processes to compensate. We will also offer advice on integration with existing security systems such as SIEM and IAM.

    As with all our papers, you have a voice in what we cover. So I would like feedback from readers, particularly on whether you want a short section on application layer security as well. It is (tentatively) included in the current outline. Obviously this would be a brief overview – application security itself is a very large topic. That said, I would like input on that and any other areas you feel need addressing.

    –Adrian Lane

    Thursday, April 03, 2014

    Booth Babes Be Gone

    By Mike Rothman

    OK. I have changed my tune. I have always had a laissez-faire attitude toward booth babes. I come from the school of what works. And if booth babes generate leads, of which some statistically result in deals, I’m good. Mr. Market says that if something works, you keep doing it. And when it stops working you move on to the next tactic. Right?

    Not so much. Chenxi Wang and Zenobia Godschalk posted a thought-provoking piece about why it’s time to grow up. As people and as a business. This quote from Sonatype’s Debbie Rosen sums it up pretty well,

    …this behavior is a “lazy way of marketing”, Debbie Rosen of Sonatype said, “this happens when you do not have any creative or otherwise more positive ways of getting attention.”

    I agree with Debbie. But there are a lot of very bad marketers in technology and security. Getting attention for these simpletons is about getting a louder bullhorn. Creativity is hard. Hiring models is easy.

    Not only is he interesting, he is smart

    What’s worse is that I have had attractive technical product managers and SEs, who happen to be female, working at my company, and they were routinely asked to bring over a technical person to do the demo. It was just assumed that an attractive female wouldn’t have technical chops. And that’s what is so troubling about continuing to accept this behavior.

    I have daughters. And I’m teaching my girls they can be anything they want. I would be really happy if they pursued technical careers, and I am confident they will be attractive adults (yes, I’ll own my bias on that). Should they have to put up with this nonsense? I say not.

    Even better, the post calls for real change. Not bitching about it on Twitter.

    Writing blog posts and expressing outrage on social media alone won’t work. We need to make this issue a practical, rather than a rhetorical one. Those of us who are in positions of power, those of us in sales, marketing, and executive positions, need to do something real to effect changes.

    I still pray at the Temple of Mr. Market. And that means until the tactic stops working, there will be no real change. So if you work for a vendor, make it clear that booth babes make you uncomfortable, and that it’s just wrong. Take a stand within your own company. And if they don’t like it, leave. I will personally do whatever I can to get you a better job if it comes to that.

    If you work for an end user, don’t get scanned at those booths. And don’t buy products from those companies. Vote with your dollars. That is the only way to effect real, sustainable change. Money talks.

    We live in an age of equality. It is time to start acting that way. If a company wants to employ a booth babe, at least provide babes of both genders. I’m sure there are a bunch of lightly employed male actors and models in San Francisco who would be happy to hand out cards and put asses in trade show theater seats.

    –Mike Rothman