Securosis Research

What Do DLP and Condoms Have in Common?

They both work a heck of a lot better if you use them ahead of time.

I just finished reading the Trustwave Global Security Report, which summarizes their findings from incident response and penetration tests during 2009. In over 200 breach investigations, they encountered only one case where the bad guy encrypted the data during exfiltration. That’s right, only once. 1. The big uno. This makes it highly likely that a network DLP solution would have detected, if not prevented, the other 199+ breaches.

Since I started covering DLP, one of the biggest criticisms has been that it can’t detect sensitive data if the bad guys encrypt it. That’s like telling a cop to skip the body armor since the bad guy can just shoot them someplace else. Yes, we’ve seen cases where data was encrypted. I’ve been told that in the recent China hacks the outbound channel was encrypted. But based on the public numbers available, more often than not (in a big way) encryption isn’t used. This will probably change over time, but we also have other techniques for detecting those other exfiltration methods.

Those of you currently using DLP also need to remember that if you are only using it to scan employee email, it won’t help much either. You need to deploy it promiscuously and scan all outbound TCP/IP to get full value. Also make sure it really is configured in true promiscuous mode, and isn’t locked to specific ports and protocols. This might mean adding boxes, depending on which product you are using. Yes, I know I just used the words ‘promiscuous’ and ‘condom’ in a blog post, which will probably get us banned (hopefully our friends at the URL filtering companies will at least give me a warning).

I realize some of you will be thinking, “Oh, great, but now the bad guys know and they’ll start encrypting.” Probably, but that’s not a change they’ll make until their exfiltration attempts fail – no reason to change until then.
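To make the detection point concrete, here is a minimal sketch of the kind of pattern matching a network DLP policy applies to unencrypted outbound traffic. It is not a stand-in for a real DLP product (no session reassembly, no tuned content definitions, no enforcement), and it makes several assumptions: the scapy library is installed, you have privileges to sniff, the interface is named eth0, and the card-number pattern is deliberately loose, with a Luhn check to trim false positives.

```python
# Minimal sketch of pattern-based outbound monitoring; not a real network DLP.
# Assumes scapy is installed (pip install scapy), root privileges, and interface eth0.
import re
from scapy.all import sniff, TCP, Raw

# Loose card-number pattern; a real DLP policy uses tuned content definitions.
CARD_RE = re.compile(rb"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum to cut down on false positives from random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def inspect(pkt):
    # Only raw TCP payloads are inspected; encrypted sessions will (by design) not match.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        for match in CARD_RE.finditer(bytes(pkt[Raw].load)):
            candidate = re.sub(rb"[ -]", b"", match.group()).decode()
            if luhn_ok(candidate):
                print(f"ALERT: possible card number in cleartext, dst port {pkt[TCP].dport}")

# Promiscuous capture of all outbound TCP, not just mail ports.
sniff(iface="eth0", filter="tcp", prn=inspect, store=False)
```

The point is simply that plaintext exfiltration of structured data is trivially visible to anything watching the wire – which is exactly why the Trustwave numbers are so damning.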


Incite 2/2/2010: The Life of the Party

Good Morning:

I was at dinner over the weekend with a few buddies of mine, and one of my friends asked (again) which AV package is best for him. It seems a few of my friends know I do security stuff, and inevitably that means when they do something stupid, I get the call. You can be this guy… Seriously… This guy’s wife contracted one of the various Facebook viruses about a month ago and his machine still wasn’t working correctly – it was slow and sluggish and just didn’t run like it used to. I delivered the bad news that he needed to rebuild the machine. But then we got to talking about the attack vectors and how the bad guys do their evil bidding, and the entire group was spellbound.

Then it occurred to me: security folks could be the most interesting guy (or gal) in the room at any given time. Like that dude in the Tecate beer ads. Being a geek – especially one with security cred – could be used to pick up chicks and become the life of the party. Of course, the picking up chicks part isn’t too interesting to me, but I know lots of younger folks with skillz who are definitely interested in those mating rituals young people and divorcees partake in.

Basically, like every other kind of successful pick-up artist, it’s about telling compelling (and funny) stories. At least that’s what I’ve heard. You have to establish common ground and build upon that. Social media attacks are a good place to start. Everyone (that you’d be interested in anyway) uses some kind of social media, so start by telling stories of how spooky and dangerous it is. How they can be owned and their private data distributed all over China and Eastern Europe. And of course, you ride in on your white horse to save the day with your l33t hacker skillz.

Then you go for the kill by telling compelling stories about your day job. Like finding pr0n on the computer of the CEO (and having to discreetly clean it off). Or any of the million other stupid things people in your company do, which would actually be funny if you weren’t the one with the pooper scooper expected to clean it up. You don’t break confidences, but we are security people – we can tell anecdotes with the best of them. Like AndyITGuy, who shares a few humorous ones in this post. And those are probably the worst stories I’ve heard from Andy. But he’s a family guy and not going to tell his “good” stories on the blog.

Now to be clear (and to make sure I don’t sleep in the dog house for the rest of the week), I’m making this stuff up. I haven’t tried this approach, nor did I have any kind of rap when I was on the prowl 17 years ago. But maybe this will work for you, Mr. (or Ms.) L33t Young Hacker trying to find a partner, or at least a warm place to hang for a night or three. Or you could go back to your spot on the wall and do like pretty much every generation of hacker that’s come before you. P!nk pretty much summed that up (video)…

– Mike

Photo credit: “Tecate Ad” covered by psfk.com

Incite 4 U

I’m not one to really toot my own horn, but I’m constantly amazed at the amount and depth of the research we are publishing (and giving away) on the Securosis blog. Right now we have 3-4 different content series underway (DB Quant, Pragmatic Data Security, Network Security Fundamentals, Low Hanging Fruit, etc.), resulting in 3-4 high quality posts hitting the blog every single day. You don’t get that from the big research shops with dozens of analysts. Part of this is the model, and a bigger part is focusing on doing the right thing. We believe the paywall should be on the endangered species list, and we are putting our money where our mouths are. If you do not read the full blog feed, you are missing out.

Prognosis for PCI in 2010: Stagnation… – Just so I can get my periodic shout-out to Shimmy’s new shop, let me point to a post on his corporate blog (BTW, who the hell decided to name it security.exe?) about what we’ll see in PCI-land for 2010. Basically nothing, which is great news, because we all know most of the world isn’t close to being PCI compliant. So let’s give them another year to get it done, no? Oh yeah, I heard the bad guys are taking a year off. No, not really. I’m sure all 12 requirements are well positioned to deal with threats like APT and even newfangled web application attacks. Uh-huh. I understand the need to let the world catch up, but remember that PCI (and every other regulation) is a backward-looking indicator. It does not reflect what kinds of attacks are currently being launched your way, much less what’s coming next. – MR

This Ain’t Your Daddy’s Cloud – One of the things that annoys me slightly when people start bringing up cloud security is the tendency to reinforce the same old security ideas and models we’ve been using for decades, as opposed to, you know, innovating. Scott Lupfer offers some clear, basic cloud security advice, but he sticks with classic security terminology, when it’s really the business audience that needs to be educated. Back when I wrote my short post on evaluating cloud risks I very purposely avoided using the CIA mantra so I could provide better context for the non-security audience, as opposed to educating them on our fundamentals. With the cloud transition we have an opportunity to bake in security in ways we missed with the web transition, but we can’t be educating people about CIA or Bell-LaPadula. – RM

Cisco milking the MARS cash cow? – Larry


Pragmatic Data Security: Discover

In the Discovery phase we figure out where the heck our sensitive information is, how it’s being used, and how well it’s protected. If performed manually, or with too broad an approach, Discovery can be quite difficult and time consuming. In the pragmatic approach we stick with a very narrow scope and leverage automation for greater efficiency. A mid-sized organization can see immediate benefits in a matter of weeks to months, and usually finish a comprehensive review (including all endpoints) within a year or less.

Discover: The Process

Before we get into the process, be aware that your job will be infinitely harder if you don’t have a reasonably up-to-date directory infrastructure. If you can’t figure out your users, groups, and roles, it will be much harder to identify misuse of data or build enforcement policies. Take the time to clean up your directory before you start scanning and filtering for content. Also, the odds are very high that you will find something that requires disciplinary action. Make sure you have a process in place to handle policy violations, and work with HR and Legal before you start finding things that will get someone fired (trust me, those odds are pretty darn high).

You have a couple of choices for where to start – depending on your goals, you can begin with applications/databases, storage repositories (including endpoints), or the network. If you are dealing with something like PCI, stored data is usually the best place to start, since avoiding unencrypted card numbers on storage is an explicit requirement. For HIPAA, you might want to start on the network, since most of the violations in organizations I talk to relate to policy violations over email/web/FTP due to bad business processes. For each area, here’s how you do it:

Storage and Endpoints: Unless you have a heck of a lot of bodies, you will need a Data Loss Prevention tool with content discovery capabilities (I mention a few alternatives in the Tools section, but DLP is your best choice). Build a policy based on the content definition you built in the first phase. Remember, stick to a single data/content type to start. Unless you are in a smaller organization and plan on scanning everything, you need to identify your initial target range – typically major repositories or endpoints grouped by business unit. Don’t pick something too broad or you might end up with too many results to do anything with. Also, you’ll need some sort of access to the server – either by installing an agent or through access to a file share. Once you get your first results, tune your policy as needed and start expanding your scope to scan more systems.

Network: Again, a DLP tool is your friend here, although unlike with content discovery you have more options to leverage other tools for some sort of basic analysis. They won’t be nearly as effective, and I really suggest using the right tool for the job. Put your network tool in monitoring mode and build a policy to generate alerts using the same data definition we talked about when scanning storage. You might focus on just a few key channels to start – such as email, web, and FTP – with a narrow IP range/subnet if you are in a larger organization. This will give you a good idea of how your data is being used, identify some bad business processes (like unencrypted FTP to a partner), and show which users or departments are the worst abusers. Based on your initial results you’ll tune your policy as needed.
Right now our goal is to figure out where we have problems – we will get to fixing them in a different phase.

Applications & Databases: Your goal is to determine which applications and databases have sensitive data, and you have a few different approaches to choose from. This is the part of the process where a manual effort can be somewhat effective, although it’s not as comprehensive as using automated tools. Simply reach out to different business units, especially the application support and database management teams, to create an inventory. Don’t ask them which systems have sensitive data – ask them for an inventory of all systems. The odds are very high your data is stored in places you don’t expect, so to check these systems perform a flat file dump and scan the output with a pattern matching tool. If you have the budget, I suggest using a database discovery tool – preferably one with built-in content discovery (there aren’t many on the market, as we’ll mention in the Tools section). Depending on the tool you use, it will either sniff the network for database connections and then identify those systems, or scan based on IP ranges. If the tool includes content discovery, you’ll usually give it some level of administrative access to scan the internal database structures.

I just presented a lot of options, but remember we are taking the pragmatic approach. I don’t expect you to try all this at once – pick one area, with a narrow scope, knowing you will expand later. Focus on wherever you think you might have the greatest initial impact, or where you have known problems. I’m not an idealist – some of this is hard work and takes time, but it isn’t an endless process and you will have a positive impact.

We aren’t necessarily done once we figure out where the data is – for approved repositories, I really recommend you also re-check their security. Run at least a basic vulnerability scan, and for bigger repositories I recommend a focused penetration test. (Of course, if you already know it’s insecure you probably don’t need to beat a dead horse with another check.) Later, in the Secure phase, we’ll need to lock down the approved repositories, so it’s important to know which security holes to plug.

Discover: Technologies

Unlike the Define phase, here we have a plethora of options. I’ll break this into
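The technology discussion is cut off above, but the flat file dump approach mentioned under Applications & Databases is easy to illustrate. Below is a rough, hypothetical sketch of scanning a directory of dump files for card-number-like patterns – a stand-in for the pattern matching step only, not for a real DLP or database discovery tool. The path and regular expression are placeholders you would swap for your own content definition from the Define phase.

```python
# Hypothetical pattern-matching pass over flat file dumps; tune before trusting the results.
import os
import re

PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # placeholder pattern; use your real content definition
DUMP_DIR = "/data/dumps"                          # hypothetical location of the flat file exports
MAX_BYTES = 10 * 1024 * 1024                      # cap how much of each file we read on a first pass

findings = []
for root, _dirs, files in os.walk(DUMP_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "rb") as fh:
                text = fh.read(MAX_BYTES).decode("utf-8", errors="ignore")
        except OSError:
            continue                              # unreadable file: skip it and note it for follow-up
        hits = PAN_RE.findall(text)
        if hits:
            findings.append((path, len(hits)))

# Review the noisiest files first, tune the pattern, then expand scope.
for path, count in sorted(findings, key=lambda item: item[1], reverse=True):
    print(f"{count:6d} potential matches  {path}")
```

In practice you would feed findings like these back into the inventory and your DLP policy, rather than rely on a one-off script.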


FireStarter: Agile Development and Security

I am a big fan of the Agile project development methodology, especially Agile with Scrum. I love the granularity and focus the approach requires. I love that at any given point in time you are working on the most important feature or function. I love the derivative value of communication and the subtle form of peer pressure that Scrum meetings produce. I love that if mistakes are made you do not go too far in the wrong direction, resulting in higher productivity and fewer software projects that are total disasters. I think Agile is the biggest advancement in code development in the last decade, as it addresses issues of complexity, scalability, focus, and bureaucratic overhead.

But it comes with one huge caveat: Agile hurts secure code development. There, I said it. Someone had to.

The Agile process, and even the Scrum leadership model, hamstrings development in the area of building secure products. Security is not a freakin’ task card. Logic flaws are not well-documented, discrete tasks to be assigned. Project managers (and unfortunately most ScrumMasters) learned security by skimming a ‘For Dummies’ book at Barnes & Noble while waiting for their lattes, but these are the folks making the choices as to what security should make it into the iterations. Just like general IT security, we end up wrapping the Agile process in a security blanket or bolting on security after the code is complete, because the process as we know it is not well suited to secure development.

I know there will be several of you out there saying “Prove it! Show us a study or research evidence that supports your theory.” I can’t. I don’t have meaningful statistical data to back up my claim. But that does not mean it’s not true, and there is anecdotal evidence to support what I am saying. For example:

  • The average Sprint duration of two weeks is simply too short for meaningful security testing. Fuzzing & black box testing are infeasible with nightly builds or pre-release sanity checks.
  • Trust assumptions between code modules or system functions where multiple modules process requests cannot be fully exercised and tested within the Agile timeline. White box testing can be effective, but face it – security assessments don’t fit into neat 4-8 hour windows.
  • In the same way Agile products deviate from design and architecture specifications, they deviate from systemic analysis of trust and code dependencies. It’s a classic forest-for-the-trees problem: the efficiency and focus gained by skipping over big picture details necessarily come at the expense of understanding how the system and data are used as a whole. Agile’s great at dividing and conquering what you know, but not so great for dealing with the abstract.
  • Secure code development is not like fixing bugs where you have a stack trace to follow. Secure code development is more about coding principles that lead to better security. In the same way Agile can’t help enforce code ‘style’, it won’t help with secure coding guidelines. (Secure) style verification is an advantage of pair programming and inherent in code reviews, but not intrinsic to Agile.
  • The person on the Scrum team with the least knowledge of security, the Product Manager, prioritizes what gets done. Project managers as a general rule don’t track security testing, and they are not incented to get security right. They are incented to get the software over the finish line. If they track bugs on the product backlog, they probably have a task card buried somewhere, but don’t understand the threats.
  • Security personnel are chickens in the project and do not gate code acceptance the way they traditionally were able to in waterfall testing, and may have limited exposure to developers.
  • The fact that major software development organizations are modifying or wrapping Agile with other frameworks to compensate for security is evidence of the difficulties in applying security practices directly.
  • The forms of testing that fit within Agile are more likely to get done. If they don’t fit, they are usually skipped (especially at crunch time), or they have to be scheduled outside the development cycle.

It’s not just that the granular focus on tasks makes it harder to verify security at the code and system levels. It’s not just that the features are the focus, or that the wrong person is making security decisions. It’s not just that the quick turnaround in code production precludes some forms of testing known to be effective at identifying security issues. It’s not just that it’s hard to bucket security into discrete tasks. It’s all that and more.

We’re not going to see a study that compares Waterfall with Agile for security benefits. Putting together similar development teams to create similar products under two development methodologies to prove this point is not practical. I have run Agile and Waterfall projects of a similar nature in parallel, and while Agile had overwhelming advantages in a number of areas, security was not one of them. If you are moving to Agile, great – but you will need to evolve your Agile process to accommodate security.

What do you think? How have you successfully integrated secure coding practices with Agile? This is a FireStarter, so discuss in the comments.


You Have to Buy Data Security Tools

When Mike was reviewing the latest Pragmatic Data Security post he nailed me on being too apologetic for telling people they need to spend money on data-security-specific tools. (The line isn’t in the published post.) Just so you don’t think Mike treats me any nicer in private than he does in public, here’s what he said:

Don’t apologize for the fact that data discovery needs tools. It is what it is. They can be like almost everyone else and do nothing, or they can get some tools to do the job. Now helping to determine which tools they need (which you do later in the post) is a good thing. I just don’t like the apologetic tone.

As someone who is often a proponent of tools that aren’t in the typical security arsenal, I’ve found myself apologizing for telling people to spend money. Partially it’s because it isn’t my money… and I think analysts all too often forget that real people have budget constraints. Partially it’s because certain users complain or look at me like I’m an idiot for recommending something like DLP. I have a new answer for the next time someone asks me if there’s a free tool to replace whatever data security tool I recommend: did you build your own Linux box running ipfw to protect your network, or did you buy a firewall?

The important part is that I only recommend these purchases when they will provide you with clear value in terms of improving your security over the alternatives. Yep, this is going to stay a tough sell until some regulation or PCI-like standard requires them. Thus I’m saying, here and now, that if you need to protect data you likely need DLP (the real thing, not merely a feature of some other product) and Database Activity Monitoring. I haven’t found any reasonable alternatives that provide the same value. There. I said it. No more apologies – if you have the need, spend the money. Just make sure you really have the need, and that the tool you are looking at really delivers the value, since not all solutions are created equal.


Network Security Fundamentals: Monitor Everything

As we continue on our journey through the fundamentals of network security, network monitoring must be integral to any discussion. Why? Because we don’t know where the next attack is coming from, so we need to get better at compressing the window between successful attack and detection, which then drives remediation activities. It’s a concept I coined back at Security Incite in 2006 called React Faster, which Rich subsequently improved upon by advocating Reacting Faster and Better.

React Faster (and Better)

I’ve written extensively on the concept of React Faster, so here’s a quick description I penned back in 2008 as part of an analysis of Security Management Platforms, which hits the nail on the head:

New attacks are happening at a fast and furious pace. It is a fool’s errand to spend time trying to anticipate where the issues are. REACT FASTER first acknowledges that all attacks cannot be stopped. Thus, focus remains on understanding typical traffic and application usage trends and monitoring for anomalous behavior, which could indicate an attack. By focusing on detecting attacks earlier and minimizing damage, security professionals both streamline their activities and improve their effectiveness.

Rich’s corollary made the point that it’s not enough to just react faster – you need to have a plan for how to react:

Don’t just react – have a response plan with specific steps you don’t jump over until they’re complete. Take the most critical thing first, fix it, move to the next, and so on until you’re done. Evaluate, prioritize, contain, fix, and clean.

So monitoring done well compresses the time between compromise and detection, and also accelerates root cause analysis to determine what the response should involve.

Network Security Data Sources

It’s hard to argue with the concept of reacting faster and collecting data to facilitate that activity. But with an infinite amount of data to collect, where do we start? What do we collect? How much of it? For how long? All of these are reasonable questions that need answers as you construct your network monitoring strategy. The major data sources from your network security infrastructure include:

  • Firewall: Every monitoring strategy needs to correspond to the most prevalent attack vectors, and that means from the outside in. Yes, the insider threat is real, but script kiddies are alive and well, and that means we need to start by looking at our Internet-facing devices. First we pull log and activity information from the firewalls and UTM devices on the perimeter. We look for strange patterns, which usually indicate something is wrong. We want to keep this data long enough to ensure we have sufficient history in the event of a well-executed low-and-slow attack, which means months rather than days.
  • IPS: The next layer in tends to be IPS, looking for patterns of traffic that indicate a known attack. We want the alerts first and foremost, but we also want to collect the raw IPS logs. Just because the IPS doesn’t think specific traffic is an attack doesn’t mean it isn’t. It could be a dreaded 0-day, so we want to pull all the data we can off this box as well, since forensic analysis can pinpoint when attacks first surfaced and also provide guidance as to the extent of the compromise.
  • Vulnerability scans: Are those devices vulnerable to a specific attack? Vulnerability scan data is one of the key inputs to SIEM/correlation products. The best way to reduce false positives is not to fire an alert if the target is not vulnerable. Thus we keep scan data on hand, and use it both for real-time analysis and for forensics. If an attack happens during a window of vulnerability (like while you debate the merits of a certain patch with the ops guys), you need to know that.
  • Network Flow Data: I’ve always been a big fan of network flow analysis and continue to be mystified that the market never took off, given the usefulness of understanding how traffic flows within and out of a network. All is not lost, since a number of security management products use flow data in their analyses, and a few lower-end management products do as well. Each flow record is small, so there is no reason not to keep a lot of them. Again, we use this data both to pinpoint potential badness and to replay attacks to understand how they spread within the organization.
  • Device Change Logs: If your network devices get compromised, it’s pretty much game over. Traffic can be redirected, logging suppressed, and lots of other badness can result. So keep track of device configurations and, more importantly, when those changes happen – which helps isolate the root causes of breaches. Yes, if the logs are turned off you lose visibility, but that can itself indicate an issue. Through the wonders of SNMP, you should collect data from all your routers, switches, and other pipes.
  • Content security: Now we can climb the stack a bit to pull information off the content security gateways, since a lot of attacks still show up via phishing emails and malware-laden web links. Again, we aren’t necessarily trying to pull this data in to stop an attack (hopefully the anti-spam box will figure out you aren’t interested in the little blue pill), but rather to gather more information about the attack vectors and how an attack proliferates through your environment. Reacting faster is about learning all we can about what is compromised and responding in the most efficient and effective manner.

Keeping things focused and pragmatic, you’d like to gather all this data, all the time, across all your networks. Of course, Uncle Reality comes to visit, and understandably collection of everything everywhere isn’t an option. So how do you prioritize? The objective is to use the data you already have. Most organizations have all of the devices listed above. So all the data sources exist, and should be prioritized based on importance to the


The Network Forensics (Full Packet Capture) Revival Tour

I hate to admit that of all the various technology areas, I’m probably best known for my work covering DLP. What few people know is that I ‘fell’ into DLP, as one of my first analyst assignments at Gartner was network forensics. Yep – the good old-fashioned “network VCRs”, as we liked to call them in those pre-TiVo days. My assessment at the time was that network forensics tools like Niksun, Infinistream, and SilentRunner were interesting, but really only viable in certain niche organizations. These vendors usually had a couple of really big clients, but were never able to grow adoption in the broader market. The early DLP tools were sort of lumped into this monitoring category, which is how I first started covering them (long before the term DLP was in use).

Full packet capture devices haven’t really done that well since my early analysis. SilentRunner and Infinistream both bounced around various acquisitions and re-spin-offs, and some even tried to rebrand themselves as something like DLP. Many organizations decided to rely on IDS as their primary network forensics tool, mostly because they already had the devices. We also saw Network Behavior Analysis, SIEM, and deep packet inspection firewalls offer some of the value of full capture, but focused more on analysis to provide actionable information to operations teams. This offered a clearer value proposition than capturing all your network data just to hold onto it.

Now the timing might be right for full capture to make a comeback, for a few reasons. Mike mentioned full packet capture in Low Hanging Fruit: Network Security, and underscored the need to figure out how to deal with these newer, more subtle and targeted attacks. Full packet capture is one of the only ways we can prove some of these intrusions even happened, given the patience and skills of the attackers and their ability to prey on the gaps in existing SIEM and IPS tools. Second, the barriers between inside and outside aren’t nearly as clean as they were 5+ years ago, especially once the bad guys get their initial foothold inside our ‘walls’. Where we once were able to focus on gateway and perimeter monitoring, we now need ever greater ability to track internal traffic. Additionally, given the increase in processing power (thank you, Moore!), improvements in algorithms, and the decreasing price of storage, we can actually leverage the value of the full captured stream. Finally, the packet capture tools are also playing better with existing enterprise capabilities. For instance, SIEM tools can analyze content from the capture tool, using the packet captures as a secondary source if a behavioral analysis tool, DLP, or even a ping off a server’s firewall from another internal system kicks off an investigation. This dramatically improves the value proposition.

I’m not claiming that every organization needs, or has sufficient resources to take advantage of, full packet capture network forensics – especially those on the smaller side. Realistically, even large organizations only have a select few segments (with critical/sensitive data) where full packet capture makes sense. But driven by APT hype, I strongly suspect we’ll see adoption start to rise again, and a ton of vendors with parallel technologies – such as NBA and network monitoring – starting to market their tools in this space.
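As a toy illustration of the capture-then-replay workflow described above, here is a sketch using the scapy library – assuming it is installed, you have capture privileges, you only care about a single monitored segment on an interface named eth1, and the suspect address is hypothetical. Real network forensics products handle sustained line-rate capture, indexing, and retention policies; this only shows why having the full stream around pays off once another control kicks off an investigation.

```python
# Toy full-capture workflow with scapy; not a production forensics tool.
from scapy.all import sniff, wrpcap, rdpcap, IP

# 1. Capture everything on the monitored segment and keep it (stop after N packets for the sketch).
packets = sniff(iface="eth1", count=100000, store=True)
wrpcap("segment-capture.pcap", packets)

# 2. Later, when DLP, NBA, or a host firewall ping kicks off an investigation,
#    pull the stored traffic for the suspect host out of the capture.
SUSPECT = "10.1.2.3"  # hypothetical internal host under investigation
capture = rdpcap("segment-capture.pcap")
related = [p for p in capture if p.haslayer(IP) and SUSPECT in (p[IP].src, p[IP].dst)]
print(f"{len(related)} packets to/from {SUSPECT} retained for analysis")
wrpcap("suspect-host.pcap", related)
```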


Network Security Fundamentals: Default Deny (UPDATED)

(Update: Based on a comment, I added some caveats regarding business critical applications.)

Since I’m getting my coverage of Network and Endpoint Security, as well as Security Management, off the ground, I’ll be documenting a lot of fundamentals. The research library is bare from the perspective of infrastructure content, so I need to build that up, one post at a time. As we start talking about the fundamentals of network security, we’ll first zero in on the perimeter of your network: the Internet-facing devices accessible to the bad guys, and usually one of the most prevalent attack vectors. Yeah, yeah, I know most of the attacks target web applications nowadays. Blah blah blah. Most, but not all, so we have to revisit how our perimeter network is architected and what kind of traffic we allow into that web application in the first place.

Defining Default Deny

Which brings us to the first topic in the fundamentals series: Default Deny, which implements what is known in the trade as a positive security model. Basically it means that unless you specifically allow something, you deny it. It’s the network version of whitelisting. In your perimeter device (most likely a firewall), you define the ports and protocols you allow, and turn everything else off. Why is this a good idea? Lots of attacks target unused and strange ports on your firewalls. If those ports are shut down by default, you dramatically reduce your attack surface. As mentioned in Low Hanging Fruit: Network Security, many organizations have out-of-control firewall and router rules, so this also provides an opportunity to clean those rules up. As simple as this idea sounds, it’s surprising how many organizations either don’t have default deny as a policy, or don’t enforce it tightly enough because developers and other IT folks need their special ports opened up.

Getting to Default Deny

One of the more contentious low hanging fruit recommendations, as evidenced by the comments, was the idea to just blow away your overgrown firewall rule set and wait for folks to complain. A number said that wouldn’t work in their environments, and I can understand that. So let’s map out a few ways to get to default deny:

  • One Fell Swoop: In my opinion, we should all be working to get to default deny as quickly as possible. That means taking a management-by-complaint approach for most of your traffic: blowing away the rule set and waiting for the help desk phone to start ringing. Prior to blowing up your rule base, make sure to define the handful of applications that will get you fired if they go down. Management by complaint doesn’t work when the complaint is attached to a 12-gauge pointed at your head. Support for those applications needs to go into the base firewall configuration.
  • Consensus: This method involves working with senior network and application management to define the minimal set of allowed protocols and ports. Then the onus falls on the developers and ops folks to work within those parameters. You’ll also want a specific process for exceptions, since you know those pesky folks will absolutely positively need at least one port open for their 25-year-old application. If that won’t work, there is always the status quo approach…
  • Case by Case: This is probably how you do things already. Basically you go through each rule in the firewall and try to remember why it’s there and whether it’s still necessary. If you do remember who owns the rule, go to them and confirm it’s still relevant. If you don’t, you have a choice: turn it off and risk breaking something (the right choice), or leave it alone and keep supporting your overgrown rule set.

Regardless of how you get to default deny, communication is critical. Folks need to know when you plan to shut down a bunch of rules, and they need to know the process to get the rules re-established.

Testing Default Deny

We at Securosis are big fans of testing your defenses. Just because you think your firewall configuration enforces default deny doesn’t mean it does – you need to be sure. So try to break it. Use vulnerability scanners and automated pen testing tools to find exposures that can be exploited. And make this kind of testing a standard part of your network security practice. Things change, including your firewall rule set. Mistakes are made and defects are introduced. Make sure you are finding them – not the bad guys.

Default Deny Downside

OK, as simple and clean as default deny is as a concept, you do have to understand this policy can break things, and broken stuff usually results in grumpy users. Sometimes they want to play that multi-player version of Doom with their college buddies and it uses a blocked port. Oh well, it’s now broken and the user will be grumpy. You also may break some streaming video applications, which could become a productivity boost during March Madness. But a lot of the video guys are getting more savvy and use port 80, so this rule won’t impact them.

As mentioned above, it’s important to ensure the handful of business critical applications still run after the firewall rule set rationalization. So do an inventory of your key applications and what’s required to support them. Integrate those rules into your base set and then move on. Of course, mentioning that your trading applications probably shouldn’t need ports 38-934 open for all protocols is reasonable, but ultimately the business users have to balance the cost to re-engineer the application against the impact of the status quo on your security posture. That’s not the security team’s decision to make.

Also understand default deny is not a panacea. As just mentioned, lots of application traffic uses port 80 or 443 (SSL), and will be largely invisible to your firewall. Sure, some devices claim “deep packet inspection” and others talk about application awareness, but most don’t. So more sophisticated attacks require additional layers of defense. Understand default deny for what it is: a coarse filter for your perimeter, which


Friday Summary: January 29, 2010

I really enjoy making fun of marketing and sales pitches. It’s a hobby. At my previous employer, I kept a book of stupid and nonsensical sales sayings I heard salespeople make – kind of my I Ching by sociopaths. I would even parrot back nonsense slogans and jargon at opportune moments. Things like “No excuses,” “Now step up to the plate and meet your commitments,” “Hold yourself accountable,” “The customer is first, don’t forget that,” “We must find ways to support these efforts,” “The hard work is done, now you need to complete a discrete task,” “All of your answers are YES YES YES!” and “Allow us to position for success!” Usually these were thrown out in a desperate attempt to get the engineering team to spend $200k to close a $40k deal.

Mainstream media marketing uses a similar ham-fisted belligerence in its messaging – trying to tie all your hopes, dreams, and desires to their product. My wife and I used to sit in front of the TV and call out all the overt and subliminal messages in commercials, like how buying a certain waffle iron would get you laid, or how a vacuum cleaner created marital bliss and made you the envy of your neighbors. Some of the pharmaceutical ads are the best, as you can turn off the sound altogether and just gaze at the imagery and try to guess whether they are selling Viagra, allergy medicine, or eternal happiness. But playing classical music and having a cute cartoon figure tell people, in a reassuring voice, just how smart they are is surprisingly effective at getting them to pay an extra $.25 per gallon for gasoline.

But I must admit I occasionally find myself swayed by marketing when I thought I was more or less impervious. Worse, when it happens, I can’t even figure out what triggered the reaction. This week was one of those rare occasions. Why the heck is it that I need an iPad? More to the point, what void is this device filling, and why do I think it will make my life better? And that stupid little video was kind of condescending and childish … but I still watched it. And I still want one. Was it the design? The size? Maybe it’s because I know my newspaper is dead and I want some new & better way to get information electronically at the breakfast table? Maybe I want to take a browser with me when I travel, and not a phone trying to pretend to display web pages? Maybe it’s because this is a much more appropriate design for a laptop? I don’t know, and I don’t care. This thing looks cool and useful in a way the Kindle just cannot compare to. I want to rip Apple for calling this thing ‘magical’ and ‘revolutionary’, but dammit, I want one.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich, Martin, and Zach on this week’s Network Security Podcast.

Favorite Securosis Posts

  • Rich: Adrian’s start to the database security fundamentals series.
  • Mike: Rich’s FireStarter on APT. I’m so over APT at this point, but Rich provides some needed rationality in the midst of all the media frenzy.
  • Adrian: Rich’s series on Pragmatic Data Security is getting interesting with the Define Phase.
  • Mort: Low Hanging Fruit: Security Management takes Adam’s posts on the topic and fleshes them out.
  • Meier: Security Strategies for Long-Term, Targeted Threats. “Advanced Persistent Threat” just does not cut it.

Other Securosis Posts

  • Pragmatic Data Security: Define Phase
  • Incite 1/27/2010: Depending on the Kids
  • Network Security Fundamentals: Default Deny
  • The Certification Myth
  • Pragmatic Data Security: Groundwork
  • FireStarter: Security Endangered Species List

Favorite Outside Posts

  • Rich: Who doesn’t love a good cheat sheet? How about a couple dozen all compiled together into a nicely organized list?
  • Mike: Daniel Miessler throws a little thought experiment bomb on pushing everyone through a web proxy farm for safer browsing. An interesting concept, and I need to analyze this in more depth next week.
  • Adrian: Stupid: A Metalanguage For Cryptography. Very cool idea. Very KISS!
  • Mort: Managing to the biggest risk. More awesomeness from shrdlu. I particularly love the closer: “So I believe politics can affect both how you assess and prioritize your security risks, and how you go about mitigating them. If you had some kind of magic Silly String that you could spray into your organization to highlight the invisible political tripwires, you’d have a much broader picture of your security risk landscape.”
  • Meier: I luvs secwerity. I also like Tenable’s post on Understanding the New Massachusetts Data Protection Law.

Project Quant Posts

  • Project Quant: Database Security – Encryption
  • Project Quant: Project Comments
  • Project Quant: Database Security – Protect through Monitoring
  • Project Quant: Database Security – Audit

Top News and Posts

  • Krebs’ article on the Texas bank preemptively suing a customer.
  • Feds boost breach fines.
  • Politics and Security.
  • Groundspeed: a Firefox add-on for web application pen testers.
  • PCI QSAs, certifications to get new scrutiny.
  • It’s The Adversaries Who Are Advanced And Persistent.
  • The EFF releases a tool to see how private/unique your browser is.
  • Intego releases their 2009 Mac security report. It’s pretty balanced.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Yeah, I am awarding myself a consolation prize for my comment in response to Mike’s post on Security Management, but I have to award this week’s best comment to Andre Gironda, in response to Mike’s post on The Certification Myth.

I usually throw up some strange straw-man and other kinds of confusing arguments like in my first post. But for this one, I’ll get right to the point: Does anyone know if China{|UK|AU|NZ|Russia|Taiwan|France} has a military directive similar to Department of Defense Directive 8570, thus requiring CISSP and/or GIAC certifications in various information assurance roles? Does anyone disagree that China has information superiority compared to the US, and potentially due in part to the existence of DoDD 8570? If China only hires the best (and not just the brown-nosers), then this would stand to achieve them a significant advantage, right? Could it be that instead


Database Security Fundamentals: Introduction

I have been part of 5 different startups, not including my own, over the last 15 years. Every one of them has sold, or attempted to sell, enterprise software. So it is not surprising that when I provide security advice, by default it is geared toward an enterprise audience. And oddly, when it comes to security, large enterprises are a little further ahead of the curve. They have more resources and people dedicated to the subject than small and medium sized businesses, and their coverage is much more diverse. But security advice does not always transfer well from one audience to the other.

The typical SMB IT security team is one person. Or in the case of database security, the DBA and the security practitioner are one and the same. The time they have to spend on learning and performing security tasks is significantly less, and the money they have to spend on security tools and automation is typically minimal. To remedy that issue I am creating a couple of posts covering pragmatic, hands-on tasks for database security. I’ll provide clear and actionable steps to protect your database and the data it stores. This series is geared to small IT shops that just need a straightforward checklist for database security. We’re not covering advanced security here, and we’re not talking about huge database installations with thousands of users, but rather the everyday security stuff you can do in an afternoon. And to keep costs low, I will focus on the security functions built into the database.

  • Access: User and administrative security, and security on the avenues into and out of the database.
  • Configuration: Database settings and setup that affect security and protect database functions from subversion or unauthorized alteration. I’ll go into the issue of reliance on the operating system as well.
  • Audit: An examination of activity, transactions, and anomalous events.
  • Data Protection: In cases where the database cannot protect access to information, we will cover techniques to prevent information from being lost or stolen.

The goal here is to protect the data stored within the database. We often lose sight of this goal, as we spend so much time focusing on the container (i.e., the database) and less on the data and how it is used. Of course I will cover database security – much of which will be discussed as part of the access control and configuration sections – but I will include security around the data and database functions as well.
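As a taste of the kind of built-in Access check the series will walk through, here is a small sketch that uses the database’s own catalog to review who holds administrative rights. The post doesn’t name a DBMS, so this assumes PostgreSQL and the psycopg2 driver with a hypothetical DBA connection; other databases expose equivalent system views.

```python
# Review administrative accounts using PostgreSQL's built-in catalog (pg_roles).
# Connection parameters are hypothetical; adjust for your environment.
import psycopg2

conn = psycopg2.connect(dbname="postgres", user="dba", host="localhost")
cur = conn.cursor()

# Which accounts hold superuser rights, can they log in, and do their passwords expire?
cur.execute("""
    SELECT rolname, rolsuper, rolcanlogin, rolvaliduntil
    FROM pg_roles
    ORDER BY rolsuper DESC, rolname
""")
for name, is_super, can_login, valid_until in cur.fetchall():
    flag = "SUPERUSER" if is_super else ""
    expiry = valid_until or "no password expiry"
    print(f"{name:20} login={can_login} {flag:10} {expiry}")

cur.close()
conn.close()
```

An afternoon spent reviewing output like this – and demoting or disabling accounts that don’t need superuser or login rights – is the sort of low-cost task this series has in mind.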


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.