By Mike Rothman
I was in the car the other day with my oldest daughter. She’s 9 (going on 15, but that’s another story) and blurted out: “Dad, I don’t want to go to Georgia Tech.” Huh? Now she is the princess of non-sequiturs, but even this one was surprising to me. Not only does she have an educational plan (at 9), but she knows that GA Tech is not part of it.
So I figured I’d play along. First off, I studied to be an engineer. So I wasn’t sure if she was poking at me, or what the deal was. Second, her stance towards a state school is problematic because GA residents can go to a state school tuition-free, thanks to the magic of the Hope Scholarship, funded by people who don’t understand statistics – I mean the GA Lottery. Next I figured she was going to blurt out something about going to MIT or Harvard, and I saw my retirement fund dwindle to nothing. Looks like I’ll be eating Beef-a-Roni in my twilight years.
But it wasn’t that. She then went on to explain that one of her friends made the point that GA Tech teaches engineering and she didn’t want to be an engineer. Now things were coming more into focus for me. I then asked why she didn’t want to be an engineer. Right – it’s more about her friend’s opinions than about what she wants. Good, she is still 9.
I then proceeded to go through all the reasons that being an engineer could be an interesting career choice, especially for someone who likes math, and that GA Tech would be a great choice, even if she didn’t end up being an engineer. It wasn’t about pushing her to one school or another – it was about making sure she kept an open mind.
I take that part of parenting pretty seriously. Peer and family pressure is a funny thing. I thought I wanted to be a doctor growing up. I’m not sure whether medicine actually interested me, or whether I just knew that culturally that was expected. I did know being a lawyer was out of the question. (Yes, that was a zinger directed at all my lawyer friends.) Ultimately I studied engineering and then got into computers way back when. I haven’t looked back since.
Which is really the point. I’m not sure what any of my kids’ competencies and passions will be. Neither do they. But it’s my job (working with The Boss) to make sure they get exposed to all sorts of things, keep an open mind, and hopefully find their paths.
Photo credit: “Open Minds” originally uploaded by gellenburg
Incite 4 U
Things are a little slow on the blog this week. Rich, Adrian, and I are sequestered plotting world domination. Actually, we are finalizing our research agendas & upcoming reports, and starting work on a new video initiative. Thus I’m handling the Incite today, so Adrian and Rich can pay attention to our clients. Toward the end of the week, we’ll also start posting a “Securosis Guide to RSAC 2010” here, to give those of you attending the conference a good idea of what’s going to be hot, and what to look for.
I also want to throw a grenade at our fellow bloggers. Candidly, most of you security bloggy types have been on an extended vacation. Yes, you are the suxor. We talked about doing the Incite twice a week, but truth be told, there just isn’t enough interesting content to link to.
Yes, we know many of you are enamored with Twitter and spend most of your days there. But it’s hard to dig into a discussion in 140 characters. And your collective ADD kicked in, so you got tired of blogging after a couple of years. But keep in mind it’s the community interaction that makes all the difference. So get off your respective asses and start blogging again. We need some link fodder.
Baiting the Risk Modeling Crowd – Given my general frustration with the state of security metrics and risk quantification, I couldn’t pass up the opportunity to link to a good old-fashioned beat down from Richard Bejtlich and Tim Mullen discussing risk quantification. Evidently some windbag puffed his chest out with all sorts of risk quantification buffoonery and Tim (and then Richard) jumped on. They are trying to organize a public debate in the near future, and I want a front row seat. If only to shovel some dirt on the risk quantification model. Gunnar weighed in on the topic as well. – MR
Meaningful or Accurate: Pick One – I like Matthew Rosenquist’s attempts to put security advice in a fortune cookie, and this month’s is “Metrics show the Relevance of Security.” Then Matthew describes how immature metrics are at this point, and how companies face an awful decision: using meaningful or accurate metrics, but you only get to pick one. The root of the issue is “The industry has not settled on provable and reliable methodologies which scale with any confidence.” I know a lot of folks are working on this, and the hope is for progress in the near term. – MR
Wither virtual network appliances? – Exhibit #1 of someone who now seems to think in 140 characters is Chris Hoff. But every so often he does blog (or record a funny song) and I want to give him some positive feedback, so maybe he blogs some more. In this post, Chris talks about the issues of network virtual appliances – clearly they are not ready for prime time, and a lot of work needs to be done to get them there, especially if the intent is to run them in the cloud. Truth be told, I still don’t ‘get’ the cloud, but that’s why I hang out with Rich. He gets it and at some point will school me. – MR
Getting to the CORE of Metasploit – Normally vendor announcements aren’t interesting (so $vendor, stop asking if we are going to cover your crappy 1.8 release on the blog), but every so often you look at one and figure “I can work with that.” In a nutshell, CORE Security is moving toward interoperability with the open source pen testing tool Metasploit (which was acquired by Rapid7 late last year). This takes a page from Microsoft’s “Embrace and Extend” playbook. CORE isn’t fighting Metasploit, although it’s a competitor. Instead they’re embracing the fact that a lot of folks use it to get started with pen testing tools, and extending it with their commercial-grade technology. Just as I beat down crappy marketing, we need to applaud a good strategic move by CORE. – MR
Who’s the dope now? – So evidently Floyd Landis doesn’t give up easily. To be a world class cyclist means he’s persistent and will work through the pain. So I guess we shouldn’t be overly surprised that he (or his peeps) hired a hacker to compromise the testing lab where his allegedly doped blood sample results were stored. If he’s willing to cheat to win in the first place, why wouldn’t he bend the rules to make test results disappear? I guess from a security professional’s standpoint, we’ve hit the big time. Folks have been using cyber-attacks for espionage purposes for years. But now it’s on the front page of the newspaper. Cool. – MR
It’s not about the money… – Toward the end of last year, I was including a more career-centric link in each Incite to get you all thinking. This post on Don Dodge’s blog is a good thought generator. He asks: What do Mark Cuban, Dan Farber, Steve Ballmer, and Mary Jo Foley all have in common? Not to spoil the fun, but the answer is they love what they do. Two folks on that list are billionaires, yet they still work hard. Why? Would you even work if you had that much money? If what you did every day didn’t feel like work, you probably would. And that’s something I keep having to learn the hard way by going back into corporate jobs every couple years. – MR
Posted at Wednesday 17th February 2010 5:15 am
(4) Comments •
By Adrian Lane
The Securosis team is proud to announce the availability of our latest white paper: Understanding and Selecting a Database Assessment Solution.
We’re very excited to get this one published – not just because we have been working on it for six months, but also because we feel that with a couple new vendors and a significant evolution in product function, the entire space needed a fresh examination. This is not the same old vulnerability assessment market of 2004 that revolved around fledgling DBA productivity tools! There are big changes in the products, but more importantly there are bigger changes in the buying requirements and users who have a vested interest in the scan results. Our main goal was to bridge the gap between technical and non-technical stakeholders. We worked hard to provide enough technical information for customers to differentiate between products, while giving non-DBA stakeholders – including audit, compliance, security, and operations groups – an understanding of what to look for in any RFI/proof-of-concept.
We want to especially thank our sponsors, Application Security Inc. (AppSec), Imperva, and Qualys. Without them, we couldn’t produce free research like this. As with all our papers, the content was developed independently and completely out in the open using our Totally Transparent Research process. We also want to thank our readers for helping review all our public research, and Chris Pepper for editing the paper.
This is version 1.0 of the document, and we will continue to update it (and acknowledge new contributions) over time, so keep the comments coming if you think we’ve missed anything or gotten something wrong.
Posted at Monday 15th February 2010 7:00 pm
(1) Comments •
By Mike Rothman
To state the obvious (as I tend to do), we all have too much to protect. No one gets through their list every day, which means perhaps the most critical skill for any professional is the ability to prioritize. We’ve got to focus on the issues that present the most significant risk to the organization (whatever you mean by risk) and act accordingly. I haven’t explicitly said it, but the key to network security fundamentals is figuring out how to prioritize. And to be clear, though I’m specifically talking about network security in this series, the tactics discussed can (and need to) be applied to all the other security domains.
To recap how the fundamentals enable this prioritization, first we talked about implementing default deny on your perimeter. Next we discussed monitoring everything to provide a foundation of data for analysis. In the last post, correlation was presented to start analyzing that data.
By the way, I agree with Adrian, who is annoyed with having to do correlation at all. But it is what it is, and maybe someday we’ll get all the context we need to make a decision based on log data, but we certainly can’t wait for that. So to the degree you do correlate, you need to do it effectively.
Going hand in hand with prioritization is the ability to match patterns. Most of the good security folks out there do this naturally: they consume a number of data points, understand how they fit together, and then decide what it all means, how it will change things, and what action is required. The patterns help you understand what you need to focus on at any given time. The first fundamental step in matching patterns is knowing your current state. Let’s call that the baseline. The baseline gives you perspective on what is happening in your environment. The good news is that a “monitor everything” approach gives you sufficient data to establish the baseline.
Let’s just take a few examples of typical data types and what their baselines look like:
- Firewall Logs: You’ll see attacks in the firewall logs, so your baseline consists of the normal number/frequency of attacks, time distribution, and origin. So if all of a sudden you are attacked at a different time from a different place, or much more often than normal, it’s time to investigate.
- Network Flows: Network flows show network traffic dynamics on key segments, so your baseline tells you which devices communicate with which other devices – both internal and external to your network. So if you suddenly start seeing a lot of flow from an internal device (on a sensitive network) to an external FTP site, it could be trouble.
- Device Configurations: If a security device is compromised, there will usually be some type of configuration and/or policy change. The baseline in this case is the last known good configuration. If something changes, and it’s not authorized or in the change log, that’s a problem.
Again, these examples are not meant to be exhaustive or comprehensive, just to give an idea about the types of data you are already collecting and what the baseline could look like.
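To make that concrete, here is a minimal sketch (in Python) of the firewall log baseline above – the numbers, function names, and three-sigma threshold are illustrative assumptions, not from any particular product:

```python
from statistics import mean, stdev

def build_baseline(daily_attack_counts):
    """Summarize 'normal' as the mean and standard deviation of
    historical daily attack counts from the firewall logs."""
    return mean(daily_attack_counts), stdev(daily_attack_counts)

def is_not_normal(today_count, baseline, sigma=3.0):
    """Flag a day whose attack count deviates more than `sigma`
    standard deviations from the historical mean."""
    avg, sd = baseline
    return abs(today_count - avg) > sigma * sd

# Thirty days of roughly stable attack volume...
history = [40, 45, 38, 42, 50, 44, 41] * 4 + [43, 39]
baseline = build_baseline(history)

print(is_not_normal(44, baseline))   # a typical day -> False
print(is_not_normal(400, baseline))  # a spike worth investigating -> True
```

The same shape works for any of the baselines above: compute “normal” from your history, then alert when today falls outside it.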
Next you set up initial alerts to detect the attacks you deem important. Each management console for every device (or class of devices) gives you the ability to set alerts. There is leverage in aggregating all this data (see the correlation post), but it’s not necessary.
Now I’ll get back to something discussed in the correlation post, and that’s the importance of planning your use cases before implementing your alerts. You need to rely on those thresholds to tell you when something is wrong. Over time, you tune the thresholds to refine how and when you get alerted. Don’t expect this tuning process to go quickly or easily. Getting this right really is an art, and you’ll need to iterate for a while to get there – think months, not days.
You can’t look for everything, so the use cases need to cover the main data sources you collect and set appropriate alerts for when something is outside normal parameters. I call this looking for not normal, and yes it’s really anomaly detection.
But most folks don’t think favorably of the term “anomaly detection”, so I use it sparingly.
Learning from Mistakes
You can learn something is wrong in a number of ways. Optimally, you get an alert from one of your management consoles. But that is not always the case. Perhaps your users tell you something is wrong. Or (worst case) a third party informs you of an issue. How you learn you’ve been pwned is less important than what you do once you are pwned.
Once you jump into action, you’re looking at the logs, jumping into management consoles, and isolating the issues. How quickly you identify the root cause has everything to do with the data you collect, and how effectively you analyze it. We’ll talk more about incident response later this year, but suffice it to say your only job is to contain the damage and remediate the problem.
Once the crisis ends, it’s time to learn from experience. The key, in terms of “looking for not normal”, is to make sure it doesn’t happen again. The attackers do their jobs well and you will be compromised at some point. Make sure they don’t get you the same way twice. The old adage, “Fool me once, shame on you – fool me twice, shame on me,” is very true.
So part of the post-mortem process is to define what happened, but also to look for that pattern again. Remember that attackers are fairly predictable. Like the direct marketers that fill your mailbox with crap every holiday season, if something works, they’ll keep doing it.
Thus, when you see an attack, you should expect to see it again. Build another set of rules/policies to make sure that same attack is detected quickly and accurately. Yes, I know this is a black list mindset, and there are limitations to this approach since you can’t build a policy for every possible attack (though the AV vendors are trying). That means you need to evaluate and clean up your alerting rules periodically – just like you prune firewall rules.
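As a sketch of that black list mindset, a post-mortem might distill an incident into detection rules like these – the signatures and names here are invented for illustration, not drawn from a real rule set:

```python
import re

# Patterns distilled from the post-mortem of a previous incident.
KNOWN_ATTACK_PATTERNS = {
    "sql-injection-probe": re.compile(r"(?i)\bunion\s+select\b|\bor\s+1=1"),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def match_known_attacks(log_line):
    """Return the names of known attack patterns seen in a log line."""
    return [name for name, pattern in KNOWN_ATTACK_PATTERNS.items()
            if pattern.search(log_line)]

print(match_known_attacks("GET /login?user=admin' OR 1=1--"))  # ['sql-injection-probe']
print(match_known_attacks("GET /images/logo.png"))             # []
```

Like firewall rules, every entry in a list like this needs periodic review, or it just grows stale.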
So between looking for not normal and learning from mistakes, you can put yourself in a position to be alerted to attacks when you actually have time to intervene. And given the reactive nature of the security job, that’s what we’re trying to do.
Posted at Monday 15th February 2010 5:00 pm
(0) Comments •
By Adrian Lane
Chris was kind enough to forward me Game Development in a Post-Agile World this week. What I know about game development could fit on the head of a pin. Still, one of the software companies I worked for was incubated inside a much larger video game development company. I was always very interested in watching the game team dynamics, and how they differed from the teams I ran. The game developers did not have a lot of overlapping skills, and the teams were – whether they knew it or not – built around the classical “surgical team” structure. There was always a single, clear leader of the team, and that person was usually both technically and creatively superior. The teams were small, and if they had a formalized process, I was unaware of it. It appeared that they figured out their task, built the tools they needed to support the game, and then built the game. There was consistency across the teams, and they appeared to be very successful in their execution.
Regardless, back to the post. When I saw the title I thought this would be a really cool examination of Agile in a game development environment. After the first 15 pages or so, I realized there is not a damned thing about video game development in the post. What is there, though, is a really well-done examination of the downsides of Agile development. I wrote what I thought to be a pretty fair post on the subject this week, but this post is better! While I was focused on the difficulties of changing an entrenched process, and their impact on developing secure code, this one takes a broader perspective and looks at different Agile methodologies along a continuum of how people-oriented the variations are. The author then looks at how moving along the continuum alters creativity, productivity, and stakeholder involvement. If you are into software development processes, you’re probably a little odd, but you will very much enjoy this post!
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
It’s the week of Rich Mogull, Media Giant:
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to Rich’s Counterpoint: Admin Rights Don’t Matter the Way You Think They Do:
I think that this post is dangerous. While many will understand the difference between removing admin rights from a desktop for the user and restricting/managing admin rights for sysadmins, the distinction isn’t explicitly stated, and some may take this to mean dealing with admin rights isn’t necessary as a blanket statement.
Posted at Friday 12th February 2010 6:00 am
(0) Comments •
By Adrian Lane
It’s tough to talk about securing database access methods in a series designed to cover database security basics, because the access attacks are not basic. They tend to exploit either communications media or external functions – taking advantage of subtleties or logic flaws – capitalizing on trust relationships, or just being very unconventional and thus hard to anticipate. Still, some of the attacks are right through an open front door, like forgetting to set a TNS Listener password on Oracle. I will cover the basics here, as well as a few more involved things which can be addressed with a few hours and minimal third party tools.
Relational platforms are chock full of features and functions, and many have been in use for so long that people simply forget to consider their security ramifications. Or worse, some feature came with the database, and an administrator (who did not fully understand it) was afraid that disabling it would cause side effects in applications. In this post I will cover the communications media and external services provided by the database, and their proper setup to thwart common exploits. Let’s start with the basics:
- Network Connections: Databases can support multiple network connections over multiple ports. I have two recommendations here. First, to reduce complexity and avoid possible inconsistency with network connection settings, I advise keeping the number of listeners to a minimum: one or two. Second, as many automated database attacks go directly after default network ports, I recommend moving listeners to non-standard port numbers. This will annoy application developers and complicate their setup somewhat, but more importantly it will both help stop automated attacks and highlight connection attempts to the default ports, which then indicate either misconfiguration or hardwired attacks.
- Network Facilities: Some databases use add-on modules to support network connections, and like the database itself are not secure out of the box. Worse, many vulnerability assessment tools omit the network from the scan. Verify that the network facility itself is set up properly, that administrative access requires a password, and that the password is not stored in clear text on the file system.
- Transport Protocols: Databases support multiple transport protocols. While features such as named pipes are still supported, they are open to spoofing and hijacking. I recommend that you pick a single reliable protocol (such as TCP/IP) and disable the rest to prevent insecure connections.
- Private Communication: Use SSL. If the database contains sensitive data, use SSL. This is especially true for databases in remote or virtual environments. The path between the user or application and the database is not guaranteed to be safe, so use SSL to ensure privacy. If you have never set up SSL before, get some help – otherwise connecting applications can choose to ignore SSL.
- External Procedure Lockdown: All database platforms have external procedures that are very handy for performing database administration. They enable DBAs to run OS commands, or to run database functions from the OS. These procedures are also a favorite of attackers – once they have hacked either an OS or a database, external procedures (if enabled) make it trivial to leverage that access into a compromise of the other half. Disable them unless you genuinely need them – this one is not optional. If you are part of a small IT organization and responsible for both IT and database administration, it will make your day-to-day job just a little harder.
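On the ‘use SSL’ point above, here is what a non-negotiable client-side setup can look like using Python’s standard ssl module – the function name and parameters are my own for illustration, not any particular database driver’s API:

```python
import ssl

def make_db_tls_context(ca_file=None):
    """Build a client-side TLS context for a database connection that
    always verifies the server certificate and hostname, so the
    connecting application cannot silently ignore SSL."""
    ctx = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH,
                                     cafile=ca_file)
    ctx.check_hostname = True            # the default, but be explicit
    ctx.verify_mode = ssl.CERT_REQUIRED  # refuse unverified servers
    return ctx

ctx = make_db_tls_context()
# Hand `ctx` to the driver's connect call (driver-specific), so every
# session to the database is both encrypted and verified.
```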
Checking these connection methods can be completed in under an hour, and enables you to close off the most commonly used avenues of attack and privilege escalation.
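One way to get alerting value from the non-standard-port recommendation above: flag any connection attempt that still targets a well-known default database port. A quick sketch, where the port list and record format are assumptions:

```python
# Well-known default listener ports for common database platforms.
DEFAULT_DB_PORTS = {
    1521: "Oracle TNS Listener",
    1433: "Microsoft SQL Server",
    3306: "MySQL",
    5432: "PostgreSQL",
}

def flag_default_port_attempts(connection_records):
    """Given (source_ip, dest_port) records, return those aimed at a
    default database port. Once listeners run on non-standard ports,
    these indicate either misconfiguration or an automated attack."""
    return [(src, port, DEFAULT_DB_PORTS[port])
            for src, port in connection_records
            if port in DEFAULT_DB_PORTS]

records = [("10.1.1.5", 11521), ("10.1.1.9", 1521), ("192.0.2.7", 1433)]
for src, port, label in flag_default_port_attempts(records):
    print(src, port, label)  # prints the two attempts on default ports
```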
A little more advanced:
- Basic Connection Checks: Many companies, as part of their security policy, do not allow ad hoc connections to production databases. Handy administrative tools like Quest’s Toad are not allowed because they do not enforce change control processes. If you are worried about this issue, you can write a login trigger that detects the application, user, and source location of inbound connections – and then terminates unauthorized sessions.
- Trusted Connections & Service Accounts: All database platforms offer some form of trusted connections. The intention is to allow the calling application to verify user credentials, and then pass the credentials or verification token through the service account to the database. The problem is that if the calling application or server has been compromised, all the permissions granted to the calling application – and possibly all the permissions assigned to any user of the connection – are available to an attacker. You should review these trust relationships and remove them for high-risk applications.
Posted at Thursday 11th February 2010 9:15 pm
(0) Comments •
By Mike Rothman
As a security practitioner, I have always found it difficult to select the ‘right’ product. You (kind of) know what problem needs to be solved, yet you often don’t have any idea how any particular product will work and scale in your production environment. Sometimes it is difficult to identify the right vendors to bring in for an evaluation. Even when you do, no number of vendor meetings, SE demos, or proof of concept installations can tell you what you need to know.
So it’s really about assembling a number of data points and trying to do your homework to the greatest degree possible. Part of that research process has always been product reviews by ‘independent’ media companies. These folks line up a bunch of competitors, put them through the paces, and document their findings. Again, this doesn’t represent your environment, but it gives you some clues on where to focus during your hands-on tests and can help winnow down the short list.
Unfortunately, the patient (the product review) has died. The autopsy is still in process, but I suspect the product review died of starvation. There just hasn’t been enough food around to sustain this kind of media output. And what has been done recently subsists on a diet of suspicion, media bias, and shoddy work.
The good news is that tech media has evolved with the times. Who doesn’t love to read tweets about regurgitated press releases? Thankfully Brian Krebs is still out there actually doing something useful.
Seeing Larry Suto’s recent update of his web application scanner test (PDF) and the ensuing backlash was the final nail in the coffin for me. But this patient has been sick for a long time. I first recognized the signs years ago when I was in the anti-spam business. NetworkWorld commissioned a bake-off of 40 mail security gateways and published the results. In a nutshell, the test was a fiasco for several reasons:
- Did not reflect reality: The test design was flawed from the start. The reviewer basically resent his mail to each device. This totally screwed up the headers (by adding another route) and dramatically impacted effectiveness. This isn’t how the real world works.
- Too many vendors: To really test these products, you have to put them all through their paces. That means at least a day of hands-on time to barely scratch the surface. So to really test 40 devices, it would take 40-60+ man-days of effort. Yeah, I’d be surprised if a third of that was actually spent on testing.
- Uneven playing field: The reviewers let my company send an engineer to install the product and provide training. We did that with all enterprise sales, so it was standard practice for us, but it also gave us a definite advantage over competitors who didn’t have a resource there. If every review presents a choice – a) fly someone to the lab for a day, or b) suffer by comparison to richer competitors – how fair and comprehensive can reviews really be?
- Not everyone showed: There is always great risk in doing a product review. If you don’t win, and win handily, it is a total train wreck internally. Our biggest competitor didn’t show up for that review, so we won, but it didn’t help in most of our head-to-head battles.
Now let’s get back to Suto’s piece to see how things haven’t changed, and why reviews are basically useless nowadays. By the way, this has nothing to do with Larry or his efforts. I applaud him for doing something, especially since evidently he didn’t get compensated for his efforts.
In the first wave, the losing vendors take out their machetes and start hacking away at Larry’s methodology and findings. HP wasted no time, nor did a dude who used to work for SPI. Any time you lose a review, you blame the reviewer. It certainly couldn’t be a problem with the product, right? And the ‘winner’ does its own interpretation of the results. So this was a lose-lose for Larry. Unless everyone wins, the methodology will come under fire.
Suto tested 7 different offerings, and that probably was too many. These are very complicated products and do different things in different ways. He also used the web applications put forth by the vendors in a “point and shoot” type of methodology for the bulk of the tests. Again, this doesn’t reflect real life or how the product would stand up in a production environment. Unless you actually use the tool for a substantial amount of time in a real application, there is no way around this limitation.
I used to love the reviews Network Computing did in their “Real-World Labs.” That was the closest we could get to reality. Too bad there is no money in product reviews these days – that means whoever owns Network Computing and Dark Reading can’t sponsor these kinds of tests anymore, or at least not objective tests. The wall between the editorial and business teams has been gone for years. At the end of the day it gets back to economics.
I’m not sure what level of help Larry got from the vendors during the test, but unless it was nothing from nobody, you’re back to the uneven playing field. But even that doesn’t reflect reality, since in most cases (for an enterprise deployment anyway) vendor personnel will be there to help, train, and refine the process. And in most cases, craftily poison the process for other competitors, especially during a proof of concept trial. This also gets back to the complexity issue. Today’s enterprise environments are too complicated to expect a lab test to reflect how things work. Sad, but true.
Finally, WhiteHat Security didn’t participate in the test. Jeremiah explained why, and it was probably the right answer. He’s got something to tell his sales guys, and he doesn’t have results that he may have to spin. If we look at other tests, when was someone like Cisco last involved in a product review? Right, it’s been a long time because they don’t have to be. They are Cisco – they don’t have to participate, and it won’t hurt their sales one bit.
When I was in the SIEM space, ArcSight didn’t show up for one of the reviews. Why should they? They had nothing to gain and everything to lose. And without representation of all the major players, again, the review isn’t as useful as it needs to be.
Which all adds up to the untimely death of product reviews. So raise your drinks and remember the good times with our friend. We laughed, we cried, and we bought a bunch of crappy products. But that’s how it goes.
What now for the folks in the trenches? Once the hangover from the wake subsides, we still need information and guidance in making product decisions. So what to do? That’s a topic for another post, but it has to do with structuring the internal proof of concept tests to reflect the way the product will be used – rather than how the vendor wants to run the test.
Posted at Thursday 11th February 2010 8:15 pm
(12) Comments •
We are in the process of finalizing some research planning for the next few months, so I want to see if there are any requests for research out there.
First, here are some papers we anticipate completing over the next 3 months:
- Understanding and Selecting a Database Encryption or Tokenization Solution
- Understanding and Selecting a Database Assessment Solution
- Project Quant for Database Security
- Quick Wins with DLP
- Pragmatic Data Security
- Network Security Fundamentals
- Endpoint Security Fundamentals
- Understanding and Selecting a SIEM/Log Management Product
- Understanding and Implementing Network Segregation
- Data Security for the Cloud
Some of these are sponsored, some aren’t, and all will be released for free under a Creative Commons license.
But we’d also like to know if there are any areas you’d like to see us develop. What the heck – since we give it away for free, you might as well take advantage of us. The one area we aren’t ready to cover yet is identity management, but anything else is open.
Seriously – use us. We like it. Oh, yeah.
Posted at Wednesday 10th February 2010 9:00 pm
(14) Comments •
By Mike Rothman
You may not know it, but lots of folks you know are zombies. It seems that life has beaten them down, and miraculously two weeks later they don’t say ‘hi’ – they just give you a blank stare and grin as the spittle drips out of the corners of their mouths. Yup, a sure sign they’ve been to see Dr. Feelgood, who heard for an hour how hard their lives are, and instead of helping them deal with the pain, got their friends Prozac, Lexapro, and Zoloft to numb it. These billion dollar drugs build on the premise that life is hard, so it’s a good idea to take away the ability to feel because it hurts too much. Basically we, as a society, are increasingly becoming comfortably numb.
I’m not one to be (too) judgmental about the personal decisions that some folks make, but this one gets in my craw. My brother once said to me “Life is Pain,” and there is some truth to that statement. Clearly life is hard right now for lots of folks and I feel for them. But our society values quick fixes over addressing the fundamental causes of issues. Just look at your job. If someone came forward with a widget that would get you compliant, you’d buy it. Maybe you already have. And then you realize: there are no short cuts. You’ve got to do the work. Seems to me we don’t do the work anymore.
Now, to be clear, some folks are ill and they need anti-depressants. I’ve got no issue with that – in fact I’m thankful that these folks have some options to lead normal lives and not hurt themselves and/or others. It’s the soccer mom (or dad) who is overwhelmed with having to get the kid’s homework done and getting them to baseball practice. That doesn’t make sense to me. I know it’s easier to take a pill than to deal with the problem, but that doesn’t make the problem go away.
I guess that’s easy for me to say because thankfully I don’t suffer from depression. Yet, to come clean, I spent most of my 20s medicating in my own way. I got hammered every weekend and sometimes during the week. If I had invested in the market half of what I spent on booze, I wouldn’t be worrying about the mortgage. But I guess the fact that I worry about anything at all is a good sign. Looking back, I was trying to be someone different – the “party guy,” who can drink beer funnels until he pukes and then drink some more. I was good at that. Then I realized how unfulfilling that lifestyle was for me, especially when the doctor informed me I had the liver of a 50-year-old. Which is not real good when you are 30.
Ten years later, I actually enjoy the ups and downs. OK, I like the ups more than the downs, but I understand that without feeling bad, I can’t appreciate when things are good. I’m getting to the point where I’m choosing what to get pissed off about. And I do still get pissed. But it’s not about everything and I get past my anger a lot faster. Basically, I’m learning how to let it go. If I can’t control it and I didn’t screw it up, there isn’t much I can do – so being pissed off about it isn’t helping anyone.
By the way, that doesn’t mean I’m a puritan. I still tip back a few per week and kick out the jams a few times a year. The funnel is still my friend. The difference is I’m not running away from anything. I’m not trying to be someone else. I’m getting into the moment and having fun. There is a big difference.
Photo credit: “Comfortably Numb” originally uploaded by Olivander
Incite 4 U
One of the advantages of working on a team is that we cover for each other and we are building a strong bench. This week contributor David Mortman put together a couple of pieces. Mort went and got a day job, so he’s been less visible on Securosis, but given his in-depth knowledge of all things (including bread making), we’ll take what we can get.
I also want to highlight a post by our “intern” Dave Meier on Misconceptions of a DMZ, in which he dismantles a thought balloon put out there regarding virtualized web browser farms. Meier lives in the trenches every day, so he brings a real practitioner’s perspective to his work for Securosis.
It’s About the Boss, Not the Org Chart – My buddy Shack goes on a little rampage here listing the reasons why security shouldn’t report to IT. I’m the first to think in terms of absolutes (the only gray in my life is my hair), but Dave is wrong here. I’m not willing to make a blanket statement about where security should report because it’s more about being empowered than it is about the org chart. If the CIO gets it and can persuade the right folks to do the right thing and support the mission, then it’s all good. If that can happen in the office of the CFO or even CEO, that’s fine too. Dave brings up some interesting points, but unless you have support from the boss, none of it means a damn thing. – MR
Rock Stars Are a Liability – It looks like Forrester Research now requires all analysts to shut down their personal blogs, and only blog on the Forrester platform. I started Securosis (the blog) back when I was still working at Gartner, and took advantage of the grey area until they adopted an official policy banning any coverage of IT in personal blogs. That wasn’t why I left the company, but I fully admit that the reception I received while blogging gave me the confidence to jump out there on my own. In a big analyst firm the corporate brand is more important than personal brands, since personal brands represent a risk to the company. The rock star analyst wants more pay & more freedom, and most of them then start believing their own hype and forget how to be a good analyst (which is why so few succeed on their own). The company also needs to maintain their existing business model, and can’t give away too much for free. From that perspective, the Forrester (and Gartner) policies make a lot of sense. Where they fail is that it will eventually be very difficult to attract and retain talent without letting them blog, since that’s where many thought leaders are now incubated. I also think it reduces trust, since blogs are powerful platforms to build personal connections with a wide audience. We have a totally different business model, but I fully respect and understand the reasoning behind the large firms. They’ll change when they have to, and not one second sooner. – RM
Just a Little Tap (on the Noggin) – I wish I had gone to Black Hat in DC this year, as it appears there were half a dozen really cool presentations. One was Christopher Tarnovsky demonstrating how to crack TPM smartcard encryption through a hard-wire attack on the chip. By interrogating the data bus he was able to tap into the unencrypted data stream. Pretty cool, and it looks very complicated. While the scientist in me finds this interesting, I am betting people who really need to know what is going on will employ ‘lead pipe’ cryptography instead. Yes, thumping the owner of the device on the noggin with a lead pipe. This type of brute force attack is generally easier than breaking into the hardware. Sure, not as elegant as interrogating the system bus, but faster and more cost effective. – AL
APT – Risk Management by a New Name – An awesome rant by Greebo on why APT isn’t new, and also a great primer on how to design a security program. This says it all: ‘I hate APT and all the FUD surrounding it. Scaring the punters is chicken little or crying wolf. Get with the “do something” program. If you’re a news org, instead of talking about folks who got pwned, let’s talk about folks who through good management and effective IT Security programs have survived such “advanced persistent threats”.’ – DMort
Is Application White Listing Coming of Age? – There is still significant resistance to application white listing in the minds of security professionals. Personally, I think the concept makes tremendous sense, especially given the fatal flaws in the way we detect malware today. But the risk of breaking applications is real and must be managed effectively. Another issue is the entire weight of the status quo (that means you, big AV vendors) has a vested interest in keeping AWL down. SCMag considers both sides of the equation and decides…well…nothing. Most organizations are starting small and that’s the right approach. I’m starting the Endpoint Security Fundamentals series next week, and I’ll be talking a lot about how malware detection needs to evolve – to be clear, it involves changing the way we look at the problem. – MR
A “No Show” at Your Funeral – I was joking with a vendor today that participation at RSA is sometimes a must for small companies. Even if you don’t realize value and generate leads, not attending can create all sorts of speculation, rumor mongering, and competitive slurs. “They must not be doing very well” whispered over coffee to prospects clearly hurts sales. It’s a fact. While reading Larry Suto’s “Analyzing the Accuracy and Time Costs of Web Application Security Scanners”, which I found to be a nice overview of issues with app scanners, I could not help but wonder why WhiteHat had declined to participate. What was going on? Having been in the startup community for so long, I could not help but speculate (in the negative) before I caught myself. Jeremiah Grossman’s responses made me laugh out loud because I was guilty of this unfortunate trait. So I understand the post as saying: you must respond to these issues or FUD will fill the void for you. Logical or not, a response is not optional. And I am glad he responded, because the second half of the post references some discussion points and history of the web application scanning space I was frankly unfamiliar with. He does a good job of documenting the issues with comparing web application scanners – not just questions of product functionality, but some of the surrounding issues of the craft in general. If you are considering an investment, his list of references should help augment your evaluation process. – AL
Take Your Patent and Shove It – I get a lot of stuff in my inbox from lots of vendors about why they are great and why their product is innovative, disruptive, game changing, next-generation, and the like. It’s all crap, but the releases that make me laugh the hardest are patent announcements. Listen, I’m a patent author from my days in vendor-land and I know what a joke it is. So when I see nebulous patents from start-ups (LogRhythm and NetWitness, for example), it’s more of the “enrich BusinessWire” conspiracy. The reality is none of these folks are going to enforce their patents, so it’s really just a waste of time. And I’ve wasted enough of yours ranting about this crap. – MR
Are You Ready for the Risk of Mobile Malware? – This article on BankInfoSecurity is asking the completely wrong question. It doesn’t matter if you are ready or not. Either the risk exists or it doesn’t. Regardless, we have to assume that our users are going to continue to invest in mobile computing and we have to figure out a way to deal with securing those devices. Fortunately, there’s not a lot of mobile malware out there yet, largely because there isn’t a large enough footprint to warrant investing the time and effort when you can instead go after lower hanging fruit, like desktop browsers. But that will change soon enough. Wouldn’t it be nice to be ahead of the curve for a change? – DMort
Prius Clouds – Websense announced their new “Triton” platform to combine their web, email, and DLP platforms, plus offer hybrid cloud/on-premise solutions (triton makes me think of irradiated gun sights for some reason). I’ll wait for some customer testing before I render an opinion on how well it works, but conceptually these models make a lot of sense for the mid-market. These days it doesn’t always make sense to pump all remote users and locations through a central pipe via VPN, so using the cloud to cover remote users and branch offices when you don’t want to install boxes seems pretty reasonable. But we are still in the early days, and when you are evaluating these approaches make sure you understand which policies work where, since all is not equal in the cloud. (Note, I’m a little out of it today, so I can’t think of a good stuck accelerator joke. Make up your own). – RM
Marlboro Man Visits AppSec Land – Josh Corman is a big thinker. He, David Rice, and Jeff Williams posted a thought balloon about a concept called Rugged Software, ostensibly to appeal to the he-man developers out there. It’s a bunch of statements about what secure software should be. And it’s as yummy as blue skies and apple pie. Unfortunately it’s also irrelevant until there is a verifiable economic advantage for companies in supporting secure software development. From what I’m hearing, it’s still pretty hard to make a buck selling tools to help companies build secure software, and that’s not surprising. In this case, inertia is powerful and no amount of Marlboro Man positioning is going to change that in the short term. So I applaud the Rugged dudes. I look forward to saddling up and riding our horses off into the sunset… of continued insecure code. – MR
Posted at Wednesday 10th February 2010 5:15 am
(9) Comments •
By Adrian Lane
So it’s probably apparent that Mike and I have slightly different opinions on some security topics, such as Monitoring Everything (or not). But sometimes we have exactly the same viewpoint, for slightly different reasons. Correlation is one of the latter.
I don’t like correlation. Actually, I am bitter that I have to do correlation at all. But we have to do it because log files suck. Please, I don’t want log management and SIEM vendors to get all huffy with that statement: it’s not your fault. Pretty much all data sources for forensic information lack sufficient detail, and are deficient in the tuning and filtering options that could make them better. Application developers did not have security in mind when they created the log data, and I have no idea what the inventors of Event Log had in mind when they spawned that useless stream of information, but it’s just not enough.
I know that this series is on network fundamentals, but I want to raise an example outside of the network space to clearly illustrate the issue. With database auditing, the database audit trail is the most accurate reflection of the database transactional history and database state. It records exactly what operations are performed. It is a useful centerpiece of auditing, but it is missing critical system events not recorded in the audit trail, and it does not have the raw SQL queries sent to the database. The audit trail is useful to an extent, but to enforce most policies for compliance or to perform a meaningful forensic investigation you must have additional sources of information (There are a couple vendors out there who, at this very moment, are tempted to comment on how their platform solves this issue with their novel approach. Please, resist the temptation). Relational database platforms do a better job of creating logs than most networks, platforms, or devices.
Log file forensics are a little like playing a giant game of 20 questions, and each record is the answer to a single question. You find something interesting in the firewall log, but you have to look elsewhere to get a better idea of what is going on. You look at an access control log file, and now it really looks like something funny is going on, but now you need to check the network activity files to try to estimate intent. But wait, the events don’t match up one for one, and activity against the applications does not map one-to-one with the log file, and the time stamps are skewed. Now what? Content matching, attribute matching, signatures, or heuristics?
Which data sources you select depends on the threats you are trying to detect and, possibly, react to. The success of correlation is largely dependent on how well you size up threats and figure out which combination of log events is needed. And which data sources you choose. Oh, and then how well you develop the appropriate detection signatures. And then how well you maintain those policies as threats morph over time. All these steps take serious time and consideration.
So do you need correlation? Probably. Until you get something better. Regardless of the security tool or platform you use for threat detection, the threat assessment is critical to making it useful. Otherwise you are building a giant Cartesian monument to the gods of useless data.
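The multi-source matching problem described above can be sketched in a few lines. Here is a toy Python example of pairing firewall and access-control events by source IP within a tolerance window, to absorb the timestamp skew mentioned earlier. The log layouts and field names are hypothetical, purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical, pre-parsed log records. Real logs rarely share field names
# or clock sources, which is exactly the problem described above.
firewall = [
    {"ts": datetime(2010, 2, 10, 3, 40, 12), "src": "10.1.2.3", "action": "DENY"},
    {"ts": datetime(2010, 2, 10, 3, 41, 55), "src": "10.1.2.3", "action": "ALLOW"},
]
access = [
    {"ts": datetime(2010, 2, 10, 3, 42, 9), "src": "10.1.2.3", "event": "login_failed"},
]

def correlate(a, b, skew=timedelta(seconds=30)):
    """Pair events from two sources by source IP when their timestamps fall
    within a skew window -- a crude stand-in for real clock normalization."""
    for x in a:
        for y in b:
            if x["src"] == y["src"] and abs(x["ts"] - y["ts"]) <= skew:
                yield (x, y)

pairs = list(correlate(firewall, access))
```

Even this toy shows why the real thing is hard: the join key, the skew window, and which sources to join are all judgment calls, and every additional data source multiplies the combinations.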
Posted at Wednesday 10th February 2010 3:48 am
(2) Comments •
By Mike Rothman
In the last Network Security Fundamentals post, we talked about monitoring (almost) everything and how that drives a data/log aggregation and collection strategy. It’s great to have all that cool data, but now what?
That brings up the ‘C word’ of security: correlation. Most security professionals have tried and failed to get sufficient value from correlation relative to the cost, complexity, and effort involved in deploying the technology. Understandably, trepidation and skepticism surface any time you bring up the idea of real-time analysis of security data. As usual, it comes back to a problem with management of expectations.
First we need to define correlation – which is basically using more than one data source to identify patterns because the information contained in a single data source is not enough to understand what is happening or not enough to make a decision on policy enforcement. In a security context, that means using log records (or other types of data) from more than one device to figure out whether you are under attack, what that attack means, and the severity of attack.
The value of correlation is obvious. Unfortunately networks typically generate tens of thousands of data records an hour or more, which cannot be analyzed manually. So sifting through potentially millions of records and finding the 25 you have to worry about represents tremendous time savings. It also provides significant efficiencies when you understand threats in advance, since different decisions require different information sources. The technology category for such correlation is known as SIEM: Security Information and Event Management.
Of course, vendors had to come along and screw everything up by positioning correlation as the answer to every problem in security-land. Probably the cure for cancer too, but that’s beside the point. In fairness, end users enabled this behavior by hearing what they wanted. A vendor said (and still says, by the way) they could set alerts which would tell the user when they were under attack, and we believed. Shame on us.
Ten years later, correlation is achievable. But it’s not cheap, easy, or comprehensive. If you implement correlation with awareness and realistic expectations, you can achieve real value.
Making Correlation Work 4 U
I liken correlation to how an IPS can and should be used. You have thousands of attack signatures available to your IPS. That doesn’t mean you should use all of them, or block traffic based on thousands of alerts firing. Once again, Pareto is your friend. Maybe 20% of your signatures should be implemented, focusing on the most important and common use cases that concern you and are unlikely to trigger many false positives. The same goes for correlation. Focus on the use cases and attack scenarios most likely to occur, and build the rules to detect those attacks. For the stuff you can’t anticipate, you’ve got the ability to do forensic analysis, after you’ve been pwned (of course).
There is another more practical reason for being careful with the rules. Multi-factor correlation on a large dataset is compute intensive. Let’s just say a bunch of big iron was sold to drive correlation in the early days. And when you are racing the clock, performance is everything. If your correlation runs a couple days behind reality, or if it takes a week to do a forensic search, it’s interesting but not so useful. So streamlining your rule base is critical to making correlation work for you.
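To make the “focus on a few use cases” point concrete, here is a minimal sketch of one narrow correlation rule: repeated login failures followed by a success from the same source. The event format and thresholds are hypothetical, not any particular SIEM’s rule language:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical auth events: (source IP, timestamp, outcome).
events = [
    ("10.0.0.9", datetime(2010, 2, 10, 2, 0, s), "failure")
    for s in (1, 5, 9, 14, 20)
] + [("10.0.0.9", datetime(2010, 2, 10, 2, 0, 25), "success")]

def brute_force_then_success(events, threshold=5, window=timedelta(minutes=5)):
    """Alert when at least `threshold` failed logins from one source are
    followed by a success within `window`. One narrow, tunable use case,
    rather than trying to correlate everything at once."""
    failures = defaultdict(list)
    alerts = []
    for src, ts, outcome in sorted(events, key=lambda e: e[1]):
        if outcome == "failure":
            failures[src].append(ts)
        else:
            recent = [t for t in failures[src] if ts - t <= window]
            if len(recent) >= threshold:
                alerts.append((src, ts))
    return alerts

alerts = brute_force_then_success(events)
```

The point isn’t the code – it’s that each rule encodes one attack scenario with explicit thresholds you can tune, which is exactly why a streamlined rule base beats firing everything at once.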
Defining Use Cases
Every SIEM/correlation platform comes with a bunch of out-of-the-box rules. But before you ever start fiddling with a SIEM console, you need to sit down in front of a whiteboard and map out the attack vectors you need to watch. Go back through your last 4-5 incidents and lay those out. How did the attack start? How did it spread? What data sources would have detected the attack? What kinds of thresholds need to be set to give you time to address the issue?
If you don’t have this kind of data for your incidents, then you aren’t doing a proper post-mortem, but that’s another story. Suffice it to say 90% of the configuration work of your correlation rules should be done before you ever touch the keyboard. If you haven’t had any incidents, go and buy a lottery ticket – maybe you’ll hit it big before your number comes up at work and you are compromised.
A danger of not properly defining use cases is the inability to quantify the value of the product once implemented. Given the amount of resources required to get a correlation initiative up and running, you need all the justification you can get. The use cases strictly define what problem you are trying to solve, establish success criteria (in finding that type of attack) and provide the mechanism to document the attack once detected. Then your CFO will pipe down when he/she wants to know what you did with all that money.
Also be wary of vendor ‘consultants’ hawking lots of professional service hours to implement your SIEM. As part of the pre-sales proof of concept process, you should set up a bunch of these rules. And to be clear, until you have a decent dataset and can do some mining using your own traffic, paying someone $3,000 per day to set up rules isn’t the best use of their time or your money.
Once you have an initial rule set, you need to start analyzing the data. Regardless of the tool, there will be tuning required, and that tuning takes time and effort. When the vendor says their tool doesn’t need tuning or can be fully operational in a day or week, don’t believe them.
First you need to establish your baselines. You’ll see patterns in the logs coming from your security devices and this will allow you to tighten the thresholds in your rules to only fire alerts when needed. A few SIEM products analyze network flow traffic and vulnerability data as well, allowing you to use that data to make your rules smarter based on what is actually happening on your network, instead of relying on generic rules provided as a lowest common denominator by your vendor.
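One simple way to turn a baseline into a threshold is the classic mean-plus-N-standard-deviations approach, so the rule fires on genuine spikes instead of normal variation. A minimal sketch, with made-up hourly event counts standing in for your real baseline data:

```python
import statistics

# Hypothetical hourly counts of one event type from a quiet baseline period.
hourly_counts = [120, 135, 128, 142, 118, 131, 125, 140, 133, 127]

def baseline_threshold(counts, sigmas=3):
    """Set the alert threshold at mean + N standard deviations of the
    observed baseline, so normal daily variation stays below the line."""
    mu = statistics.mean(counts)
    sd = statistics.stdev(counts)
    return mu + sigmas * sd

threshold = baseline_threshold(hourly_counts)
```

This is deliberately crude – real traffic has daily and weekly cycles that a single mean won’t capture – but it illustrates why the baseline period matters: tune against a noisy week and your threshold is useless.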
For a deeper description of making correlation work, you should check out Rocky DeStefano’s two posts (SIEM 101 & SIEM 201) on this topic. Rocky has forgotten more about building a SOC than you probably know, so read the posts.
Putting the Security Analyst in a Box
I also want to deflate this idea that a SIEM/correlation product can provide a “security analyst in a box.” That is an old wives’ tale created by SIEM vendors in an attempt to justify their technology, versus adding a skilled human security analyst. Personally, I’ll take a skilled human who understands how things should look over a big honking correlation engine every time. To be clear, the data reduction and simple correlation capabilities of a SIEM can help make a human better at what they do – but cannot replace them. And any marketing that makes you think otherwise is disingenuous and irresponsible.
All that said, analysis of collected network security data is fundamental to any kind of security management, and as I dig into my research agenda through the rest of this year I’ll have lots more to say about SIEM/Log Management.
Posted at Wednesday 10th February 2010 12:09 am
(1) Comments •
By David J. Meier
A recent post tying segmented web browsing to DMZs by Daniel Miessler got me thinking more about the network segmentation that is lacking in most organizations. The concept behind that article is to establish a browser network in a DMZ, wherein nothing is trusted. When a user wants to browse the web, the article implies that the user fires up a connection into the browser network for some kind of proxy out onto the big, bad Internet. The transport for this connection is left to the user’s imagination, but it’s easy to envision something along the lines of Citrix XenApp filling this gap. Fundamentally this may offset some risk initially, but don’t get too excited just yet.
First let’s clear up what a DMZ should look like conceptually. From the perspective of almost every organization, you don’t want end users going directly into a DMZ. This is because, by definition, a DMZ should be as independent and self-contained as possible. It is a segment of your network that is Internet facing and allows specific traffic to (presumably) external, untrusted parties. If something were to go wrong in this environment, such as a server being compromised, the breach shouldn’t expose the internal organization’s networks, servers, and data; and doesn’t provide the gold mine of free rein most end users have on the inside.
The major risk of the DMZ network architecture is an attacker poking (or finding) holes and building paths back into your enterprise environment. Access from the inside into the primary DMZ should be restricted solely to some level of bastion hosts, user access control, and/or screened transport regimens. While Daniel’s conceptual diagram might be straightforward, it leaves out a considerable amount of magic that’s required to scale in the enterprise, given that the browser network needs to be segregated from the production environments. This entails pre-authenticating a user before he/she reaches the browser network, requiring a repository of internal user credentials outside the protected network. There must also be some level of implied trust between the browser network and that pre-authentication point, because passing production credentials from the trusted to untrusted network would be self-defeating. Conversely, maintaining a completely different user database (necessarily equivalent to the main non-DMZ system, and possibly including additional accounts if the main system is not complete) is out of the question in terms of scalability and cost (why build it once when you can build it twice, at twice the price?), so at this point we’re stuck in an odd place: either assuming more risk of untrusted users or creating more complexity – and neither is a good answer.
Assuming we can get past the architecture, let’s poke at the browser network itself. Organizations like technologies they can support, which costs money. Since we assume the end user already has a supported computer and operating system, the organization now has to take on another virtual system to allow them to browse the Internet. Assuming a similar endpoint configuration, this roughly doubles the number of supported systems and required software licenses. It’s possible we could deploy this for free (as in beer) using an array of open source software, but that brings us back to square one for supportability. Knock knock. Who’s there? Mr. Economic Reality here – he says this dog won’t hunt.
What about the idea of a service provider basically building this kind of browser network and offering access to it as part of a managed security service, or perhaps as part of an Internet connectivity service? Does that make this any more feasible? If an email or web security service is already in place, the user management trust issue is eliminated since the service provider already has a list of authorized users. This also could address the licensing/supportability issue from the enterprise’s perspective, since they wouldn’t be licensing the extra machines in the browser network. But what’s in it for the service provider? Is this something an enterprise would pay for? I think not. It’s hard to make the economics work, given the proliferation of browsers already on each desktop and the clear lack of understanding in the broad market of how a proxy infrastructure can increase security.
Supportability and licensing aside, what about the environment itself? Going back to the original post, we find the following:
- Browsers are banks of virtual machines
- Constantly patched
- Constantly rebooted
- No access to the Internet except through the browser network
- Untrusted, just like any other DMZ
Here’s where things start to fall apart even further. The first two don’t really mean much at this point. Sandboxing doesn’t buy us anything because we’re only using the system for web browsing anyway, so protecting the browser from the rest of the system (which does nothing but support the browser) is a moot point. To maintain a patch set makes sense, and one could argue that since the only thing on these systems is web browsing, patching cycles could be considerably more flexible. But since the patching is completely different than inside our normal environment, we need a new process and a new means of patch deployment. Since we all know that patching is a trivial slam-dunk which everyone easily gets right these days, we don’t have to worry, right? Yes, I’m kidding. After all, it’s one more environment to deal with. Check out Project Quant for Patch Management to understand the issues inherent to patching anything.
But we’re not done yet. You could mandate access to the Internet only through the browser network for the general population of end users, but what about admins and mobile users? How do we enforce any kind of browser network use when they are at Starbucks? You cannot escape exceptions, and exceptions are the weakest link in the chain. My personal favorite aspect of the architecture is that the browser network should be considered untrusted. Basically this means anything that’s inside the browser network can’t be secured. We therefore assume something in the browser network is always compromised. That reminds me of those containment units the Ghostbusters used to put Slimer and other nefarious ghosts in. Why would I, by architectural construct, force my users to use an untrusted environment? Yes, you can probably make a case that internal networks are untrusted as well, but not untrustworthy by design.
In the end, the browser network won’t work in today’s enterprise, and Lord knows a mid-sized business doesn’t have the economic incentive to embrace such an architecture. The concept of moving applications to a virtualized context to improve control has merit – but this deployment model isn’t feasible, doesn’t increase the security posture much (if at all), creates considerable rework, and entails significant economic impact. Not a recipe for success, eh? I recommend you allocate resources to more thorough network segmentation, accelerated patch management analysis, and true minimal-permission user access controls. I’ll be addressing these issues in upcoming research.
—David J. Meier
Posted at Tuesday 9th February 2010 8:44 pm
(0) Comments •
By Adrian Lane
During Black Hat last week, David Litchfield disclosed that he had discovered an 0-day in Oracle 11G which allowed him to acquire administrative level credentials. Until today, I was unaware that the attack details were made available as well, meaning anyone can bounce the exploit off your database server to see if it is vulnerable.
From the NetworkWorld article, the vulnerability is …
… the way Java has been implemented in Oracle 11g Release 2, there’s an overly permissive default grant that makes it possible for a low privileged user to grant himself arbitrary permissions. In a demo of Oracle 11g Enterprise Edition, he showed how to execute commands that led to the user granting himself system privileges to have “complete control over the database.” Litchfield also showed how it’s possible to bypass Oracle Label Security used for managing mandatory access to information at different security levels.
As this issue allows arbitrary escalation of privileges in the database, it’s pretty much a complete compromise. Oracle 11G R2 is affected at a minimum, and I have heard but not confirmed that 10G R2 is as well. This is serious and you will need to take action ASAP, especially for installations that support web applications. And if your web applications are leveraging Oracle’s Java implementation, you may want to take the servers offline until you have implemented the workaround.
From what I understand, this is an issue with the Public user having access to the Java services packaged with Oracle. I am guessing that the appropriate workaround is to revoke the Public user permissions granted during the installation process, or lock that account out altogether. There is no patch available at this time, but that should serve as a temporary workaround. Actually, it should be a permanent workaround – after all, you didn’t really leave the ‘Public’ user account enabled on your production server, did you?
I have been saying for several years that there is no such thing as public access to your database. Ever! You may have public content, but the public user should not merely have its password changed – it should be fully locked out. Use a custom account with specific grant statements. Public execute permission on anything is ill-advised, though in some cases it can be done safely. Running default ‘Public’ permissions is flat-out irresponsible. You will want to review all other accounts with access to Java and ensure that none have public access – or access provided by default credentials – until a patch is available.
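To make that review concrete, here is a minimal sketch of turning a list of PUBLIC grants – the kind of rows you might pull from a privileges view such as DBA_TAB_PRIVS – into REVOKE statements. This is an assumption-laden illustration, not vendor-blessed remediation: the package names are examples, not a definitive list of what Oracle grants to PUBLIC by default, and any revocation should be tested on a non-production instance first.

```python
# Sketch: build REVOKE statements for PUBLIC execute grants on
# Java-related packages. The input rows mimic what a query against a
# privileges view might return; the package name prefixes below are
# illustrative assumptions, not an authoritative list.

JAVA_PACKAGE_PREFIXES = ("DBMS_JAVA", "DBMS_JVM")

def revoke_statements(grants):
    """Return REVOKE statements for each PUBLIC execute grant on a
    Java-related package. `grants` is an iterable of
    (grantee, object_name, privilege) tuples."""
    stmts = []
    for grantee, obj, privilege in grants:
        if (grantee.upper() == "PUBLIC"
                and privilege.upper() == "EXECUTE"
                and obj.upper().startswith(JAVA_PACKAGE_PREFIXES)):
            stmts.append(f"REVOKE EXECUTE ON {obj} FROM PUBLIC")
    return stmts

# Example rows, as if fetched from a privileges view:
rows = [
    ("PUBLIC", "DBMS_JAVA", "EXECUTE"),
    ("PUBLIC", "DBMS_JVM_EXP_PERMS", "EXECUTE"),
    ("APPUSER", "ORDERS", "SELECT"),
]
for stmt in revoke_statements(rows):
    print(stmt)
```

The point is simply to separate the inventory step (what does PUBLIC actually hold?) from the remediation step, so the revocations can be reviewed before anyone runs them.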
A couple of database assessment vendors were kind enough to contact me with more details on the hack, confirming what I had heard. Application Security Inc. has published more specific information on this attack and on workarounds. They recommend removing the execute permissions as a satisfactory workaround. That is the most up-to-date information I can find.
Posted at Monday 8th February 2010 10:20 pm
(3) Comments •
Update – Based on feedback, I should have made clear that I’m referring to normal users running as admin. Sysadmins and domain admins definitely shouldn’t be running with their admin privileges except when they need them. As you can read in the comments, that’s a huge risk.
When I was reviewing Mike’s FireStarter on yanking admin rights from users, it got me thinking about whether admin rights really matter at all.
Yes, I realize this is a staple of security dogma, but I think the value of yanking admin rights is completely overblown, for two reasons:
- There are plenty of bad things an attacker can do in userland without needing admin rights. You can still install malware and access everything the user can.
- Lack of admin privileges is little more than a speed bump (if even that) for many kinds of memory corruption attacks. Certain buffer overflows and other attacks that directly manipulate memory can get around rights restrictions and run as root, admin, or worse. For example, if you exploit a kernel flaw with a buffer overflow (including flaws in device drivers) you are running in Ring 0 and fully trusted, no matter what privilege level the user was running as. If you read through the vulnerability updates on various platforms (Mac, PC, whatever), there are always a bunch of attacks that still work without admin rights.
I’m also completely ignoring privilege escalation attacks, but we all know they tend to get patched at a slower pace than remote exploitation vulnerabilities.
This isn’t to say that removal of admin rights is completely useless – it’s very useful to keep users from mucking up your desktop images – but from a defensive standpoint, I don’t think restricting user rights is nearly as effective as is often claimed.
My advice? Do not rely on standard user mode as a security defense. It’s useful for locking down users, but has only limited effectiveness for stopping attacks. When you evaluate pulling admin rights, don’t think it will suddenly eliminate the need for other standard endpoint security controls.
Posted at Monday 8th February 2010 10:11 pm
(8) Comments •
By Adrian Lane
In our last task in the Protect phase of Quant for Database Security, we’ll cover the discrete tasks for implementing data masking. In a nutshell, masking is applying a function to data in order to obfuscate sensitive information, while retaining its usefulness for reporting or testing. Common forms of masking include randomly re-assigning first and last names, and creating fake credit card and Social Security numbers. The new values retain the format expected by applications, but are not sensitive in case the database is compromised.
Masking has evolved into two different models: the traditional Extract, Transform, Load (ETL) model, which alters copies of the data; and the dynamic model, which masks data in place. The conventional ETL functions are used to extract real data and provide an obfuscated derivative to be loaded into test and analytics systems. Dynamic masking is newer, and available in two variants: the first overwrites the sensitive values in place, and the second provides a new database ‘view’. With views, authorized users may access either the original or obfuscated data, while regular users always see the masked version. Which masking model to use is generally determined by security and compliance requirements.
- Time to confirm data security & compliance requirements. What data do you need to protect and how?
- Time to identify preservation requirements. Define precisely what reports and analytics are dependent upon the data, and what values must be preserved.
- Time to specify masking model (ETL, Dynamic).
- Time to generate baseline test data. Create sample test cases and capture results with expected return values and data ranges.
- Variable: Time to evaluate masking tools/products.
- Optional: Cost to acquire masking tool. This function may be built in-house or provided by free tools.
- Time to acquire access & permissions. Access to data and databases required to extract and transform.
- Optional: Time to install masking tool.
- Variable: Time to select appropriate obfuscation function for each field, to both preserve necessary values and address security goals.
- Time to configure. Map rules to fields.
Deploy & Test
- Time to perform transformations. Time to extract or replace, and generate new data.
- Time to verify value preservation and test application functions against baseline. Run functional test and analytics reports to verify functions.
- Time to collect sign-offs and approval.
- Time to document specific techniques used to obfuscate.
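To illustrate the kind of obfuscation function the tasks above refer to, here is a minimal sketch of format-preserving masking for Social Security numbers: every digit is replaced with a random one, but the layout applications expect is preserved. This is a toy, and the function and field names are my own; a production ETL masking job would also need referential consistency across tables, which this sketch ignores.

```python
import random
import re

def mask_ssn(ssn, rng=random):
    """Replace every digit in an SSN with a random digit, preserving
    the original formatting (dashes, spacing) so downstream
    applications still see a well-formed value."""
    return re.sub(r"\d", lambda _: str(rng.randint(0, 9)), ssn)

def mask_rows(rows, fields, rng=random):
    """Apply masking to the named fields of each row dict --
    a toy stand-in for the ETL 'transform' step."""
    masked = []
    for row in rows:
        clean = dict(row)
        for field in fields:
            clean[field] = mask_ssn(clean[field], rng)
        masked.append(clean)
    return masked

rows = [{"name": "Alice", "ssn": "123-45-6789"}]
out = mask_rows(rows, ["ssn"])
print(out[0]["ssn"])  # e.g. "904-12-3375" – same format, random digits
```

The baseline test data called for above is what lets you verify that a transform like this preserved the values and formats your reports depend on.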
Posted at Monday 8th February 2010 10:00 pm
(2) Comments •
By Adrian Lane
My mentors in engineering management used to define their job as managing people, process, and technology. Those three realms, and how they interact, are a handy way to conceptualize organizational management responsibilities. We use process to frame how we want people to behave – trying to promote productivity, foster inter-group cooperation, and minimize mistakes. The people are the important part of the equation, and the process is there to make them better as a group. How you set up process directly impacts productivity, sets priorities, and creates or reduces friction. Subtle adjustments to process are needed to account for individuals, group dynamics, and project specifics.
I got to thinking about this when reading Microsoft’s Simple Implementation of SDL. I commented on some of the things I liked about the process, specifically the beginning steps of (paraphrased):
- Educate your team on the ground rules.
- Figure out what you are trying to secure.
- Commit to gate insecure code.
- Figure out what’s busted.
Sounds simple, and conceptually it is, but in practice this is really hard. The technical analysis of the code is difficult, but implementing the process is a serious challenge. Getting people to change their behavior is hard enough, but with diverging company goals in the mix, it’s nearly impossible. Adding the SDL elements to your development cycle is going to cause some growing pains and probably take years. Even if you agree with all the elements, there are several practical considerations that must be addressed before you adopt the SDL – so you need more than the development team to embrace it.
The Definition of Insanity
I heard Marcus Ranum give a keynote last year at Source Boston on the Anatomy of The Security Disaster, and one of his basic points was that merely identifying a bad idea rarely adjusts behavior, and when it does it’s generally only because failure is imminent. When initial failure conditions are noticed, as much effort is spent on finger-pointing and “Slaughter of the Innocents” as on learning and adjusting from mistakes. With fundamental process re-engineering, even with a simplified SDL, progress is impossible without wider buy-in and a coordinated effort to adapt the SDL to local circumstances. To hammer this point home, let’s steal a page from Mike Rothman’s pragmatic series, and imagine a quick conversation:
CEO to shareholders: “Subsequent to the web site breach we are reacting with any and all efforts to ensure the safety of our customers and continue trouble-free 24x7 operation. We are committed to security … we have hired one of the great young minds in computer security: a talented individual who knows all about web applications and exploits. He’s really good and makes a fine addition to the team! We hired him shortly after he hacked our site.”
Project Manager to programmers: “OK guys, let’s all pull together. The clean-up required after the web site hack has set us back a bit, but I know that if we all focus on the job at hand we can get back on track. The site’s back up and most of you have access to source code control again, and our new security expert is on board! We freeze code two weeks from now, so let’s focus on the goal and …”
Did you see that? The team was screwed before they started. Management’s covered as someone is now responsible for security. And project management and engineering leadership must get back on track, so they begin doing exactly what they did before, but will push for project completion harder than ever. Process adjustments? Education? Testing? Nope. The existing software process is an unending cycle. That unsecured merry-go-round is not going to stop so you can fix it before moving on. As we like to say in software development: we are swapping engines on a running car. Success is optional (and miraculous, when it happens).
Break the Process to Fix It
The Simplified SDL is great, provided you can actually follow the steps. While I have not employed this particular secure development process yet, I have created similar ones in the past. As a practical matter, to make changes of this scope, I have always had to do one of three things:
- Recreate the code from scratch under the new process. Old process habits die hard, and the code evaluation sometimes makes it clear that a retrofit would require more work than a complete rewrite. This makes other executives very nervous, but in my experience it has been the most efficient path. You may not have this option.
- Branch off the code, with the sub-branch in maintenance while the primary branch lives on under the new process. I halted new feature development until the team had a chance to complete the first review and education steps. Much more work and more programming time (meaning more programmers committed), but better continuity of product availability, and less executive angst.
- Move responsibility for the code to an entirely new team trained on security and adhering to the new process. There is a learning curve while the engineers become familiar with the old code, but weaknesses found during review tend to be glaring, and no one’s ego gets hurt when you rip the expletive out of it. Also, the new engineers have no investment in the old code, so they can be more objective about it.
If you don’t break out of the old process and behaviors, you will generally end up with a mixed bag of … stuff.
As in the first post, I assume the goal of the simplified version of the process is to make this effort more accessible and understandable for programmers. Unfortunately, it’s much tougher than that. As an example, when you interview engineering candidates and discuss their roles, their skill level is immediately obvious. The more seasoned and advanced engineers and managers talk about big picture design and architecture, they talk about tools and process, and they discuss the tradeoffs of their choices. Most newbies are not even aware of process. Here is Adrian’s handy programmer evolution chart to show what I mean:
Many programmers take years to evolve to the point where they can comprehend a secure software development lifecycle, because it is several years into their career before they even have a grasp of what a software development lifecycle really means. Sure, they will learn tactics, but it just takes a while to put all the pieces together. A security process that is either embedded in or wrapped around an existing development process adds complexity that takes more time to grasp and become comfortable with. For the first couple years, programmers are trying to learn the languages and tools they need to do their jobs. A grasp of the language and programming style comes with experience. Understanding other aspects of code development, such as design, architecture, assurance, and process, takes more time still.
One of the well-thought-out aspects of the SDL is appointing knowledgeable people to champion the effort, which gets around some of the skills barrier and helps compensate for turnover. Still, getting the team educated and up to speed will take time and money.
A Tree without Water
Every CEO I have ever spoken with is all for security! Without exception, they claim they are constantly improving it. The real question is whether they will fund it. As with any engineering project where it is politically incorrect to say ‘no’, security improvement is most often killed through inaction. If your firm will not send programming team members for education, any security process fails. If your firm will not commit the time and expense to change the development process by adding security testing tasks, the security process fails. If your firm will not purchase fuzzing tools or take the time for proper code review, the entire security effort will wither and die. The tendrils of the development process, and of any security augmentation effort, must extend far beyond the development organization itself.
Posted at Monday 8th February 2010 5:00 pm
(5) Comments •