
FireStarter: Agile Development and Security

I am a big fan of the Agile project development methodology, especially Agile with Scrum. I love the granularity and focus the approach requires. I love that at any given point in time you are working on the most important feature or function. I love the derivative value of communication and the subtle form of peer pressure that Scrum meetings produce. I love that if mistakes are made you do not go too far in the wrong direction, resulting in higher productivity and fewer software projects that are total disasters. I think Agile is the biggest advancement in code development in the last decade, as it addresses issues of complexity, scalability, focus, and bureaucratic overhead. But it comes with one huge caveat: Agile hurts secure code development. There, I said it. Someone had to.

The Agile process, and even the Scrum leadership model, hamstrings development in the area of building secure products. Security is not a freakin’ task card. Logic flaws are not well-documented, discrete tasks to be assigned. Project managers (and unfortunately most ScrumMasters) learned security by skimming a ‘For Dummies’ book at Barnes & Noble while waiting for their lattes, but these are the folks making the choices as to what security should make it into the iterations. Just like general IT security, we end up wrapping the Agile process in a security blanket or bolting on security after the code is complete, because the process as we know it is not well suited to secure development.

I know there will be several of you out there saying “Prove it! Show us a study or research evidence that supports your theory.” I can’t. I don’t have meaningful statistical data to back up my claim. But that does not mean it’s not true, and there is anecdotal evidence to support what I am saying. For example:

  • The average Sprint duration of two weeks is simply too short for meaningful security testing. Fuzzing & black box testing are infeasible with nightly builds or pre-release sanity checks.
  • Trust assumptions between code modules or system functions, where multiple modules process requests, cannot be fully exercised and tested within the Agile timeline. White box testing can be effective, but face it – security assessments don’t fit into neat 4-8 hour windows.
  • In the same way Agile products deviate from design and architecture specifications, they deviate from systemic analysis of trust and code dependencies. It’s a classic forest for the trees problem: the efficiency and focus gained by skipping over big-picture details necessarily come at the expense of understanding how the system and data are used as a whole. Agile is great at dividing and conquering what you know, but not so great for dealing with the abstract.
  • Secure code development is not like fixing bugs, where you have a stack trace to follow. Secure code development is more about coding principles that lead to better security. In the same way Agile can’t help enforce code ‘style’, it won’t help with secure coding guidelines. (Secure) style verification is an advantage of pair programming and inherent in code reviews, but not intrinsic to Agile.
  • The person on the Scrum team with the least knowledge of security, the Product Manager, prioritizes what gets done. Project managers as a general rule don’t track security testing, and they are not incented to get security right. They are incented to get the software over the finish line. If they track bugs on the product backlog, they probably have a task card buried somewhere, but don’t understand the threats.
Security personnel are chickens in the project and do not gate code acceptance the way they traditionally could in waterfall testing, and they may have limited exposure to developers. The fact that major software development organizations are modifying or wrapping Agile with other frameworks to compensate for security is evidence of the difficulties in applying security practices directly. The forms of testing that fit within Agile are more likely to get done. If they don’t fit, they are usually skipped (especially at crunch time), or they have to be scheduled outside the development cycle.

It’s not just that the granular focus on tasks makes it harder to verify security at the code and system levels. It’s not just that the features are the focus, or that the wrong person is making security decisions. It’s not just that the quick turnaround in code production precludes some forms of testing known to be effective at identifying security issues. It’s not just that it’s hard to bucket security into discrete tasks. It’s all that and more.

We’re not going to see a study that compares Waterfall with Agile for security benefits. Putting together similar development teams to create similar products under two development methodologies to prove this point is not practical. I have run Agile and Waterfall projects of a similar nature in parallel, and while Agile had overwhelming advantages in a number of areas, security was not one of them. If you are moving to Agile, great – but you will need to evolve your Agile process to accommodate security.

What do you think? How have you successfully integrated secure coding practices with Agile? This is a FireStarter, so discuss in the comments.


You Have to Buy Data Security Tools

When Mike was reviewing the latest Pragmatic Data Security post, he nailed me on being too apologetic for telling people they need to spend money on data-security-specific tools. (The line isn’t in the published post.) Just so you don’t think Mike treats me any nicer in private than he does in public, here’s what he said:

Don’t apologize for the fact that data discovery needs tools. It is what it is. They can be like almost everyone else and do nothing, or they can get some tools to do the job. Now helping to determine which tools they need (which you do later in the post) is a good thing. I just don’t like the apologetic tone.

As someone who is often a proponent of tools that aren’t in the typical security arsenal, I’ve found myself apologizing for telling people to spend money. Partially it’s because it isn’t my money… and I think analysts all too often forget that real people have budget constraints. Partially it’s because certain users complain or look at me like I’m an idiot for recommending something like DLP. I have a new answer for the next time someone asks me if there’s a free tool to replace whatever data security tool I recommend: Did you build your own Linux box running ipfw to protect your network, or did you buy a firewall?

The important part is that I only recommend these purchases when they will provide you with clear value in terms of improving your security over alternatives. Yep, this is going to stay a tough sell until some regulation or PCI-like standard requires them. Thus I’m saying, here and now, that if you need to protect data you likely need DLP (the real thing, not merely a feature of some other product) and Database Activity Monitoring. I haven’t found any reasonable alternatives that provide the same value.

There. I said it. No more apologies – if you have the need, spend the money. Just make sure you really have the need, and that the tool you are looking at really delivers the value, since not all solutions are created equal.


Network Security Fundamentals: Monitor Everything

As we continue on our journey through the fundamentals of network security, the idea of network monitoring must be integral to any discussion. Why? Because we don’t know where the next attack is coming from, so we need to get better at compressing the window between successful attack and detection, which then drives remediation activities. It’s a concept I coined back at Security Incite in 2006 called React Faster, which Rich subsequently improved upon by advocating Reacting Faster and Better.

React Faster (and better)

I’ve written extensively on the concept of React Faster, so here’s a quick description I penned back in 2008 as part of an analysis of Security Management Platforms, which hits the nail on the head:

New attacks are happening at a fast and furious pace. It is a fool’s errand to spend time trying to anticipate where the issues are. REACT FASTER first acknowledges that all attacks cannot be stopped. Thus, focus remains on understanding typical traffic and application usage trends and monitoring for anomalous behavior, which could indicate an attack. By focusing on detecting attacks earlier and minimizing damage, security professionals both streamline their activities and improve their effectiveness.

Rich’s corollary made the point that it’s not enough to just react faster – you need to have a plan for how to react:

Don’t just react – have a response plan with specific steps you don’t jump over until they’re complete. Take the most critical thing first, fix it, move to the next, and so on until you’re done. Evaluate, prioritize, contain, fix, and clean.

So monitoring done well compresses the time between compromise and detection, and also accelerates root cause analysis to determine what the response should involve.

Network Security Data Sources

It’s hard to argue with the concept of reacting faster and collecting data to facilitate that activity. But with an infinite amount of data to collect, where do we start? What do we collect? How much of it? For how long? All of these are reasonable questions that need answers as you construct your network monitoring strategy. The major data sources from your network security infrastructure include:

  • Firewall: Every monitoring strategy needs to correspond to the most prevalent attack vectors, and that means from the outside in. Yes, the insider threat is real, but script kiddies are alive and well, and that means we need to start by looking at our Internet-facing devices. First we pull log and activity information from our firewalls and UTM devices on the perimeter. We look for strange patterns, which usually indicate something is wrong (a minimal example of this kind of analysis appears at the end of this post). We want to keep this data long enough to ensure we have sufficient history in the event of a well-executed low and slow attack, which means months rather than days.
  • IPS: The next layer in tends to be IPS, looking for patterns of traffic that indicate a known attack. We want the alerts first and foremost, but we also want to collect the raw IPS logs. Just because the IPS doesn’t think specific traffic is an attack doesn’t mean it isn’t. It could be a dreaded 0-day, so we want to pull all the data we can off this box as well, since forensic analysis can pinpoint when attacks first surfaced and also provide guidance as to the extent of the compromise.
  • Vulnerability scans: Are those devices vulnerable to a specific attack? Vulnerability scan data is one of the key inputs to SIEM/correlation products. The best way to reduce false positives is not to fire an alert if the target is not vulnerable. Thus we keep scan data on hand, and use it both for real-time analysis and for forensics. If an attack happens during a window of vulnerability (like while you debate the merits of a certain patch with the ops guys), you need to know that.
  • Network Flow Data: I’ve always been a big fan of network flow analysis and continue to be mystified that the market never took off, given the usefulness of understanding how traffic flows within and out of a network. All is not lost, since a number of security management products use flow data in their analyses, as do a few lower-end management products. Each flow record is small, so there is no reason not to keep a lot of them. Again, we use this data both to pinpoint potential badness and to replay attacks to understand how they spread within the organization.
  • Device Change Logs: If your network devices get compromised, it’s pretty much game over. Traffic can be redirected, logging suppressed, and lots of other badness can result. So keep track of device configurations, and more importantly when those changes happen – which helps isolate the root causes of breaches. Yes, if the logs are turned off you lose visibility, but that can itself indicate an issue. Through the wonders of SNMP, you should collect data from all your routers, switches, and other pipes.
  • Content security: Now we can climb the stack a bit to pull information off the content security gateways, since a lot of attacks still show up via phishing emails and malware-laden web links. Again, we aren’t necessarily pulling this data to stop an attack (hopefully the anti-spam box will figure out you aren’t interested in the little blue pill), but rather to gather more information about the attack vectors and how an attack proliferates through your environment. Reacting faster is about learning all we can about what is compromised and responding in the most efficient and effective manner.

Keeping things focused and pragmatic, you’d like to gather all this data all the time across all the networks. Of course, Uncle Reality comes to visit, and understandably, collection of everything everywhere isn’t an option. So how do you prioritize? The objective is to use the data you already have. Most organizations have all of the devices listed above. So all the data sources exist, and should be prioritized based on importance to the
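To make “look for strange patterns” a little more concrete, here is a minimal sketch of one such analysis, assuming a hypothetical CSV export of firewall logs (columns src_ip, dst_port, action). In practice this belongs in your SIEM or log management platform, and the threshold here is purely illustrative.

```python
# Flag destination ports that carry a tiny sliver of allowed traffic,
# a crude way to surface the "strange patterns" worth investigating.
import csv
from collections import Counter

LOG_FILE = "firewall.csv"  # hypothetical export: src_ip,dst_port,action

port_hits = Counter()
with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        if row["action"] == "allow":  # only traffic the firewall let through
            port_hits[row["dst_port"]] += 1

total = sum(port_hits.values())
if total == 0:
    raise SystemExit("no allowed connections found in log")

for port, hits in sorted(port_hits.items(), key=lambda kv: kv[1]):
    if hits / total < 0.001:  # tune this threshold to your environment
        print(f"Rare destination port {port}: {hits} of {total} allowed connections")
```

The same shape of analysis (baseline the common, flag the rare) applies to most of the data sources above.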


The Network Forensics (Full Packet Capture) Revival Tour

I hate to admit that, of all the various technology areas, I’m probably best known for my work covering DLP. What few people know is that I ‘fell’ into DLP, as one of my first analyst assignments at Gartner was network forensics. Yep – the good old fashioned “network VCRs” as we liked to call them in those pre-TiVo days. My assessment at the time was that network forensics tools like Niksun, Infinistream, and Silent Runner were interesting, but really only viable in certain niche organizations. These vendors usually had a couple of really big clients, but were never able to grow adoption in the broader market. The early DLP tools were sort of lumped into this monitoring category, which is how I first started covering them (long before the term DLP was in use).

Full packet capture devices haven’t really done that well since my early analysis. SilentRunner and Infinistream both bounced around various acquisitions and re-spin-offs, and some even tried to rebrand themselves as something like DLP. Many organizations decided to rely on IDS as their primary network forensics tool, mostly because they already had the devices. We also saw Network Behavior Analysis, SIEM, and deep packet inspection firewalls offer some of the value of full capture, but focused more on analysis to provide actionable information to operations teams. This offered a clearer value proposition than capturing all your network data just to hold onto it.

Now the timing might be right to see full capture make a comeback, for a few reasons. First, Mike mentioned full packet capture in Low Hanging Fruit: Network Security, and underscored the need to figure out how to deal with these new, more subtle and targeted attacks. Full packet capture is one of the only ways we can prove some of these intrusions even happened, given the patience and skills of the attackers and their ability to prey on the gaps in existing SIEM and IPS tools. Second, the barriers between inside and outside aren’t nearly as clean as they were 5+ years ago, especially once the bad guys get their initial foothold inside our ‘walls’. Where we once were able to focus on gateway and perimeter monitoring, we now need ever greater ability to track internal traffic. Additionally, given the increase in processing power (thank you, Moore!), improvements in algorithms, and the decreasing price of storage, we can actually leverage the value of the fully captured stream. Finally, the packet capture tools are playing better with existing enterprise capabilities. For instance, SIEM tools can analyze content from the capture tool, using the packet captures as a secondary source if a behavioral analysis tool, DLP, or even a ping off a server’s firewall from another internal system kicks off an investigation. This dramatically improves the value proposition.

I’m not claiming that every organization needs, or has sufficient resources to take advantage of, full packet capture network forensics – especially those on the smaller side. Realistically, even large organizations only have a select few segments (with critical/sensitive data) where full packet capture would make sense. But driven by APT hype, I strongly suspect we’ll see adoption start to rise again, and a ton of parallel-technology vendors starting to market tools such as NBA and network monitoring in this space.
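To show the mechanics (though certainly not the scale) of full packet capture, here is a minimal ring-buffer sketch using Scapy. The interface name, capture directory, and retention figures are assumptions; real network forensics products use far faster capture paths and indexed storage so the captured stream is actually searchable.

```python
# Capture traffic in fixed-length segments and overwrite the oldest one:
# a crude ring buffer, which is the core idea behind "network VCRs".
import os
from scapy.all import sniff, wrpcap  # requires scapy and capture privileges

CAPTURE_DIR = "/var/captures"  # hypothetical storage location
SEGMENT_SECONDS = 300          # rotate files every five minutes
MAX_SEGMENTS = 48              # keep roughly 4 hours; disk budget drives this

def capture_forever(iface="eth0"):
    segment = 0
    while True:
        # Capture one time-bounded segment from the monitored interface.
        packets = sniff(iface=iface, timeout=SEGMENT_SECONDS)
        path = os.path.join(CAPTURE_DIR, f"seg-{segment % MAX_SEGMENTS:03d}.pcap")
        wrpcap(path, packets)  # overwriting the oldest segment ages data out
        segment += 1

if __name__ == "__main__":
    capture_forever()
```

The retention math is the real design decision: how many segments you keep, on which segments, is what separates a toy from a forensics capability.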


Network Security Fundamentals: Default Deny (UPDATED)

(Update: Based on a comment, I added some caveats regarding business critical applications.)

Since I’m getting my coverage of Network and Endpoint Security, as well as Security Management, off the ground, I’ll be documenting a lot of fundamentals. The research library is bare from the perspective of infrastructure content, so I need to build it up, one post at a time. As we start talking about the fundamentals of network security, we’ll first zero in on the perimeter of your network: the Internet-facing devices accessible to the bad guys, and usually one of the prevalent attack vectors. Yeah, yeah, I know most of the attacks target web applications nowadays. Blah blah blah. Most, but not all, so we have to revisit how our perimeter network is architected and what kind of traffic we allow into that web application in the first place.

Defining Default Deny

Which brings us to the first topic in the fundamentals series: Default Deny, which implements what is known in the trade as a positive security model. Basically it means unless you specifically allow something, you deny it. It’s the network version of whitelisting. In your perimeter device (most likely a firewall), you define the ports and protocols you allow, and turn everything else off. Why is this a good idea? Lots of attacks target unused and strange ports on your firewalls. If those ports are shut down by default, you dramatically reduce your attack surface. As mentioned in Low Hanging Fruit: Network Security, many organizations have out-of-control firewall and router rules, so this also provides an opportunity to clean those rules up. As simple as this idea sounds, it’s surprising how many organizations either don’t have default deny as a policy, or don’t enforce it tightly enough because developers and other IT folks need their special ports opened up.

Getting to Default Deny

One of the more contentious low hanging fruit recommendations, as evidenced by the comments, was the idea to just blow away your overgrown firewall rule set and wait for folks to complain. A number said that wouldn’t work in their environments, and I can understand that. So let’s map out a few ways to get to default deny:

  • One Fell Swoop: In my opinion, we should all be working to get to default deny as quickly as possible. That means taking a management-by-complaint approach for most of your traffic: blowing away the rule set and waiting for the help desk phone to start ringing. Prior to blowing up your rule base, make sure to define the handful of applications that will get you fired if they go down. Management by complaint doesn’t work when the complaint is attached to a 12-gauge pointed at your head. Support for those applications needs to go into the base firewall configuration.
  • Consensus: This method involves working with senior network and application management to define the minimal set of allowed protocols and ports. Then the impetus falls on the developers and ops folks to work within those parameters. You’ll also want a specific process for exceptions, since you know those pesky folks will absolutely positively need at least one port open for their 25-year-old application. If that won’t work, there is always the status quo approach…
  • Case by Case: This is probably how you do things already. Basically you go through each rule in the firewall and try to remember why it’s there and whether it’s still necessary. If you do remember who owns the rule, go to them and confirm it’s still relevant. If you don’t, you have a choice: turn it off and risk breaking something (the right choice), or leave it alone and keep supporting your overgrown rule set.

Regardless of how you get to default deny, communication is critical. Folks need to know when you plan to shut down a bunch of rules, and they need to know the process to get rules re-established.

Testing Default Deny

We at Securosis are big fans of testing your defenses. Just because you think your firewall configuration enforces default deny doesn’t mean it does – you need to be sure. So try to break it. Use vulnerability scanners and automated pen testing tools to find exposures that can be exploited (a small verification sketch appears at the end of this post), and make this kind of testing a standard part of your network security practice. Things change, including your firewall rule set. Mistakes are made and defects are introduced. Make sure you are finding them – not the bad guys.

Default Deny Downside

OK, as simple and clean as default deny is as a concept, you do have to understand this policy can break things, and broken stuff usually results in grumpy users. Sometimes they want to play that multi-player version of Doom with their college buddies and it uses a blocked port. Oh well – it’s now broken and the user will be grumpy. You also may break some streaming video applications, which could become a productivity boost during March Madness. But a lot of the video guys are getting more savvy and use port 80, so this rule won’t impact them.

As mentioned above, it’s important to ensure the handful of business critical applications still run after the firewall ruleset rationalization. So do an inventory of your key applications and what’s required to support them. Integrate those rules into your base set and then move on. Of course, mentioning that your trading applications probably shouldn’t need ports 38-934 open for all protocols is reasonable, but ultimately the business users have to balance the cost to re-engineer the application against the impact to security posture of the status quo. That’s not the security team’s decision to make.

Also understand default deny is not a panacea. As just mentioned, lots of application traffic uses port 80 or 443 (SSL), and will largely be invisible to your firewall. Sure, some devices claim “deep packet inspection” and others talk about application awareness, but most don’t. So more sophisticated attacks require additional layers of defense. Understand default deny for what it is: a coarse filter for your perimeter, which
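As a concrete illustration of the “try to break it” advice above, here is a minimal sketch that checks a perimeter address for listening ports outside the allowed list. The target address and allowed set are hypothetical, and a real assessment would use a proper vulnerability scanner or nmap rather than raw sockets.

```python
# Connect to each port in the scan range and report anything listening
# that is not explicitly permitted by policy: default deny, verified.
import socket

TARGET = "192.0.2.10"        # hypothetical perimeter address (TEST-NET)
ALLOWED = {80, 443}          # ports your policy explicitly permits
SCAN_RANGE = range(1, 1025)  # well-known ports; widen for a real test

def is_open(host, port, timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the connect succeeded

for port in SCAN_RANGE:
    if is_open(TARGET, port) and port not in ALLOWED:
        print(f"Port {port} accepts connections but is not in the allowed list")
```

Run something like this (or the commercial equivalent) on a schedule, and treat any new finding as a rule-set defect to be investigated.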


Friday Summary: January 29, 2010

I really enjoy making fun of marketing and sales pitches. It’s a hobby. At my previous employer, I kept a book of the stupid and nonsensical sales sayings I heard salespeople use – kind of my I Ching by sociopaths. I would even parrot back nonsense slogans and jargon at opportune moments. Things like “No excuses,” “Now step up to the plate and meet your commitments,” “Hold yourself accountable,” “The customer is first, don’t forget that,” “We must find ways to support these efforts,” “The hard work is done, now you need to complete a discrete task,” “All of your answers are YES YES YES!” and “Allow us to position for success!” Usually these were thrown out in a desperate attempt to get the engineering team to spend $200k to close a $40k deal.

Mainstream media marketing uses a similar ham-fisted belligerence in its messaging – trying to tie all your hopes, dreams, and desires to their product. My wife and I used to sit in front of the TV and call out all the overt and subliminal messages in commercials, like how buying a certain waffle iron would get you laid, or a vacuum cleaner that created marital bliss and made you the envy of your neighbors. Some of the pharmaceutical ads are the best, as you can turn off the sound altogether and just gaze at the imagery and try to guess whether they are selling Viagra, allergy medicine, or eternal happiness. But playing classical music and, in a reassuring voice, having a cute cartoon figure tell people just how smart they are is surprisingly effective at getting them to pay an extra $.25 per gallon for gasoline.

But I must admit I occasionally find myself swayed by marketing when I thought I was more or less impervious. Worse, when it happens, I can’t even figure out what triggered the reaction. This week was one of those rare occasions. Why the heck is it that I need an iPad? More to the point, what void is this device filling, and why do I think it will make my life better? And that stupid little video was kind of condescending and childish… but I still watched it. And I still want one. Was it the design? The size? Maybe it’s because I know my newspaper is dead and I want some new & better way to get information electronically at the breakfast table? Maybe I want to take a browser with me when I travel, and not a phone trying to pretend to display web pages? Maybe it’s because this is a much more appropriate design for a laptop? I don’t know, and I don’t care. This thing looks cool and useful in a way the Kindle just cannot compare to. I want to rip Apple for calling this thing ‘magical’ and ‘revolutionary’, but dammit, I want one.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich, Martin, and Zach on this week’s Network Security Podcast.

Favorite Securosis Posts

  • Rich: Adrian’s start to the database security fundamentals series.
  • Mike: Rich’s FireStarter on APT. I’m so over APT at this point, but Rich provides some needed rationality in the midst of all the media frenzy.
  • Adrian: Rich’s series on Pragmatic Data Security is getting interesting with the Define Phase.
  • Mort: Low Hanging Fruit: Security Management takes Adam’s posts on the topic and fleshes them out.
  • Meier: Security Strategies for Long-Term, Targeted Threats. “Advanced Persistent Threat” just does not cut it.
Other Securosis Posts

  • Pragmatic Data Security: Define Phase
  • Incite 1/27/2010: Depending on the Kids
  • Network Security Fundamentals: Default Deny
  • The Certification Myth
  • Pragmatic Data Security: Groundwork
  • FireStarter: Security Endangered Species List

Favorite Outside Posts

  • Rich: Who doesn’t love a good cheat sheet? How about a couple dozen all compiled together into a nicely organized list?
  • Mike: Daniel Miessler throws a little thought-experiment bomb on pushing everyone through a web proxy farm for safer browsing. An interesting concept, and I need to analyze this in more depth next week.
  • Adrian: Stupid: A Metalanguage For Cryptography. Very cool idea. Very KISS!
  • Mort: Managing to the biggest risk. More awesomeness from shrdlu. I particularly love the closer: “So I believe politics can affect both how you assess and prioritize your security risks, and how you go about mitigating them. If you had some kind of magic Silly String that you could spray into your organization to highlight the invisible political tripwires, you’d have a much broader picture of your security risk landscape.”
  • Meier: I luvs secwerity. I also like Tenable’s post on Understanding the New Massachusetts Data Protection Law.

Project Quant Posts

  • Project Quant: Database Security – Encryption
  • Project Quant: Project Comments
  • Project Quant: Database Security – Protect through Monitoring
  • Project Quant: Database Security – Audit

Top News and Posts

  • Krebs’ article on the Texas bank preemptively suing a customer.
  • Feds boost breach fines.
  • Politics and Security.
  • Groundspeed: a Firefox add-on for web application pen testers.
  • PCI QSAs, certifications to get new scrutiny.
  • It’s The Adversaries Who Are Advanced And Persistent.
  • The EFF releases a tool to see how private/unique your browser is.
  • Intego releases their 2009 Mac security report. It’s pretty balanced.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Yeah, I am awarding myself a consolation prize for my comment in response to Mike’s post on Security Management, but I have to award this week’s best comment to Andre Gironda, in response to Mike’s post on The Certification Myth:

I usually throw up some strange straw-man and other kinds of confusing arguments like in my first post. But for this one, I’ll get right to the point: Does anyone know if China{|UK|AU|NZ|Russia|Taiwan|France} has a military directive similar to Department of Defense Directive 8570, thus requiring CISSP and/or GIAC certifications in various information assurance roles? Does anyone disagree that China has information superiority compared to the US, and potentially due in part to the existence of DoDD 8570? If China only hires the best (and not just the brown-nosers), then this would stand to achieve them a significant advantage, right? Could it be that instead


Database Security Fundamentals: Introduction

I have been part of 5 different startups, not including my own, over the last 15 years. Every one of them has sold, or attempted to sell, enterprise software. So it is not surprising that when I provide security advice, by default it is geared toward an enterprise audience. And oddly, when it comes to security, large enterprises are a little further ahead of the curve. They have more resources and people dedicated to the subject than small and medium sized businesses, and their coverage is much more diverse. But security advice does not always transfer well from one audience to the other. The typical SMB IT security team is one person. Or in the case of database security, the DBA and the security practitioner are one and the same. The time they have to spend learning and performing security tasks is significantly less, and the money they have to spend on security tools and automation is typically minimal.

To remedy that, I am creating a couple of posts covering pragmatic, hands-on tasks for database security. I’ll provide clear and actionable steps to protect your database and the data it stores. This series is geared to small IT shops that just need a straightforward checklist for database security. We’re not covering advanced security here, and we’re not talking about huge database installations with thousands of users, but rather the everyday security stuff you can do in an afternoon. And to keep costs low, I will focus on the security functions built into the database itself:

  • Access: User and administrative security, and security on the avenues into and out of the database (a small example appears below).
  • Configuration: Database settings and setup that affect security and protect database functions from subversion or unauthorized alteration. I’ll go into the issue of reliance on the operating system as well.
  • Audit: An examination of activity, transactions, and anomalous events.
  • Data Protection: In cases where the database cannot protect access to information, we will cover techniques to prevent information from being lost or stolen.

The goal here is to protect the data stored within the database. We often lose sight of this goal, as we spend so much time focusing on the container (i.e., the database) and less on the data and how it is used. Of course I will cover database security – much of which will be discussed in the access control and configuration sections – but I will include security around the data and database functions as well.
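As a small taste of the Access bucket, here is a sketch of two quick checks against MySQL, assuming the mysql-connector-python package and hypothetical audit credentials. Equivalent queries exist for every major platform; the point is that useful checks are built in and free.

```python
# Two built-in "Access" checks for MySQL: anonymous accounts and
# remote root logins, both common findings in small shops.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="localhost", user="audit",
                               password="...")  # hypothetical credentials
cur = conn.cursor()

# Anonymous accounts let anyone log in, so they should not exist.
cur.execute("SELECT host FROM mysql.user WHERE user = ''")
for (host,) in cur.fetchall():
    print(f"Anonymous account allowed from host: {host!r}")

# Root logins from anywhere but the local machine widen the attack surface.
cur.execute("SELECT host FROM mysql.user "
            "WHERE user = 'root' AND host NOT IN ('localhost', '127.0.0.1', '::1')")
for (host,) in cur.fetchall():
    print(f"root may log in remotely from: {host!r}")

conn.close()
```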


Pragmatic Data Security- Define Phase

Now that we’ve described the Pragmatic Data Security Cycle, it’s time to dig into the phases. As we roll through each of them I’m going to break the discussion into three parts: the process, the technologies, and a case study. For the case study we’re going to follow a fictional organization through the entire process. Instead of showing you every single data protection option at each phase, we’ll focus on a narrow project that better represents what you will likely experience.

Define: The Process

From a process standpoint, this is both the easiest and hardest of the phases. Easy, since there’s only one thing you need to do and it isn’t very technical or complex; hard, since it may involve coordination across multiple business units and the quest for executive sponsorship.

  • Identify an executive sponsor to support your efforts. Without management support, the rest of the process will be extremely difficult.
  • Identify the one piece of information/content/data you want to protect. The definition shouldn’t be too broad. For example, “engineering plans” is too broad, but “engineering plans for project X” is acceptable. Using “PCI/NPI/HIPAA” is acceptable, assuming you narrow it down in the next step.
  • Define and model the information you identified in the step above. For totally unstructured content like engineering plans, identify a repository to use for your definition, or any watermarking/labels you are certain will be available to identify and protect the information. For PCI/NPI/HIPAA determine the exact fields/pieces of data to protect. For PCI it might be only the credit card number; for NPI it might be names and addresses; and for HIPAA it might be ICD9 billing codes. If you are protecting data from a database, also identify the source repository.
  • Identify key business units with a stake in the information, and contact them to verify the priority, structure, and repositories for this information. It’s no fun if you think you’re going to protect a database of customer data, only to find out halfway through that it’s not really the important one from a business perspective.

That’s it: find a sponsor, identify the category, identify the data/repository, and confirm with the business folks.

Define: Technologies

None. This is a manual business process, and the only technology you need is something to take notes with… or maybe email to communicate.

Define: Case Study

Billy Bob’s Bait Shop and Sushi Outlet is a mid-sized, multi-site retail organization that specializes in “The freshest seafood, for your family or aquatic friends”. Billy Bob’s consists of a corporate headquarters and a few dozen retail outlets in three states. There are about 1,000 employees, and a growing web business thanks to their capability to ship fresh bait or sushi to any location in the US overnight. Billy Bob’s is struggling with PCI compliance and wants to avoid a major security breach after seeing the damage caused to their major competitor (John Boy’s Worms and Grub) during a breach. They do not have a dedicated security team, but their CIO designated one of their top network administrators (the former firewall manager) to head up security operations. Frank has a solid history as a network administrator and is familiar with security (including some SANS training and a CISSP class). Due to problems with their first PCI assessment, Frank has the backing of the CIO.

The category of data is PCI. After some research, Frank decides to go with a multilevel definition – at the top is credit card numbers. Since they are (supposedly) not storing them in a database they could feed to any data protection tools, Frank is starting with a regular expression to identify credit card numbers (sketched below), and then plans on refining it using customer names (which are stored in the database). He is hoping that whatever tools he picks can use a generic credit card number definition for low-priority alerts, and a credit card (generic) tied to a customer name to trigger higher-priority alerts. Frank also plans on using violation counts to help find real problem areas. Frank now has a generic category (PCI), a specific definition (generic regex and customer name from a database), and the repository location (the customer database itself).

From the heads of customer relations and billing, he learned that there are really two databases he needs to worry about: the main transaction processing/records system for the web outlet, and the point of sale transaction processing system for the retail outlets. The web outlet does not store unencrypted credit card numbers, but the retail outlets currently do, and they are working with the transaction processor to fix that. Thus he is adding credit card numbers from the retail database to his list of data sources. Fortunately, they are only stored in the central processing database, and not at the individual retail outlets.

That’s the setup – in our next post we will cover the Discovery process, to figure out where the heck all that data is.
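As a sketch of the kind of generic definition Frank starts with, the following pairs a simple credit card regex with a Luhn checksum to weed out obvious false positives. The pattern is an assumption for illustration; commercial tools add per-brand patterns plus the proximity matching against customer names Frank is counting on.

```python
# Find 13-16 digit sequences (allowing spaces/dashes) and keep only those
# that pass the Luhn checksum: a generic, low-priority PCI definition.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(candidate: str) -> bool:
    digits = [int(c) for c in candidate if c.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str):
    return [m.group() for m in CARD_PATTERN.finditer(text) if luhn_valid(m.group())]

# The classic Visa test number passes; most random 16-digit runs will not.
print(find_card_numbers("order 4111 1111 1111 1111 shipped overnight"))
```

Luhn filtering alone cuts false positives substantially, which is why even a “generic” definition like Frank’s is better than a bare digit pattern.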


Incite 1/27/2010: Depending on the Kids

Good Morning:

Maybe it’s the hard-wired pessimist in me, but I never thought I’d live a long life. I know that’s kind of weird to think about, but with my family history of health badness (lots of the Big C), I didn’t give myself much of a chance. At the time, I must have forgotten that 3 out of my 4 grandparents lived past 85, and my paternal grandma is over 100 now (yes, still alive). But when considering your own mortality, logic doesn’t come into play.

I also think my lifestyle made me think about my life expectancy. Three years ago I decided I needed an attitude adjustment. I was fat and stressed out. Yes, I was running my own business and happy doing that, but it was pretty stressful (because I made it that way) and it definitely took a toll. Then I decided I was tired of being a fat guy. Literally in a second the decision was made. So I joined a gym and actually went. I started eating better, and it kind of worked. I’m not where I want to be yet, but I’m getting there.

I’m the kind of guy who needs a goal, so I decided I want to live to 90. I guess 88 would be OK. Or maybe even 92. Much beyond that, I think I’ll be intolerably grumpy. I want to be old enough that my kids need to change my adult diapers. Yes, I’m plotting my revenge. Even if it takes 50 years, the tables will be turned.

So how am I going to get there? I stopped eating red meat and chicken. I’m eating mostly plants, and I’m exercising consistently and intensely. That’s my plan for now, but I’m also monitoring information sources to figure out what else I can be doing. That’s when I stumbled upon an interesting video from a TED conference featuring Dan Buettner (the guy from National Geographic), who talked about 9 ways to live to 100, based upon his study of a number of “Blue Zones” around the world where folks have great longevity. It’s interesting stuff, and Dan is an engaging speaker. Check it out.

Wish me luck on my journey. It’s a day by day thing, but the idea of depending on my kids to change my diaper in 50 years is pretty motivating. And yes, I probably need to talk to my therapist about that. – Mike

Photo credit: “and adult diapers” originally uploaded by &y

Incite 4 U

It seems everyone still has APT on the brain. The big debate seems to be whether it’s an apt description of the attack vector. Personally, I think it’s just ridiculous vibrations from folks trying to fathom what the adversary is capable of. Rich did a great FireStarter on Monday that goes into how we are categorizing APT and deflates this ridiculous “cyber-war” mumbo jumbo.

  • Looking at everything through politically colored glasses – We have a Shrdlu admiration society here at Securosis. If you don’t read her stuff whenever she finds the time to write, you are really missing out. Like this post, which delves into how politics impacts the way we do security. As Rich says, security is about psychology and economics, which means we have to figure out what scares our customers the most. In a lot of cases, it’s auditors and lawyers – not hackers. So we have to act accordingly and “play the game.” I know, you didn’t get into technology to play the game, but too bad. If you want to prosper in any role, you need to understand how to read between the lines, how to build a power base, and how to get things done in your organization. And no, they don’t teach that in CISSP class. – MR
  • I can haz your cloud in compliance – Even the power of cloud computing can’t evade its cousin, the dark cloud of compliance that ever looms over the security industry. As Chris Hoff notes in Cloud: Security Doesn’t Matter, organizations are far more concerned with compliance than security, and it’s even forcing structural changes in the offerings from cloud providers, who are being forced to reduce multi-tenancy to create islands of compliance within their clouds. I spent an hour today talking with a (very very big) company about exactly this problem – how can they adopt public cloud technologies while meeting their compliance needs? Oh sure, security was also on the list – but as on many of these calls, compliance is the opener. The reality is you need to either select a cloud solution that meets your compliance needs (good luck) or implement compensating controls on your end, like virtual private storage – and you also need to get your regulator/auditor to sign off on it. – RM
  • It’s just a wafer thin cookie, Mr. Creosote – Nice job by Michael Coates, both on discovering and on illustrating a Cookie Forcing attack. In a nutshell, an attacker can alter cookies already set, regardless of whether they are encrypted. Imitating the user in a man-in-the-middle attack, the attacker finds an unsecured HTTP conversation, requests an unencrypted meta refresh, and then sends “set cookie” to the browser, which accepts the evil cookie. To be clear, this attack can’t view existing cookies, but can replace them. I was a little shocked by this, as I was of the opinion that meta refresh had not been considered safe for some time, and because the browser happily conflated encrypted and unencrypted session information. One of the better posts of the last week and worth a read! – AL
  • IT not as a business, huh? – I read this column on not running IT as a business on infoworld.com and I was astounded. In the mid-90s, running IT as a business was all the rage, and it hasn’t subsided since then. It’s about knowing your customers and treating them like they have a choice in service providers (which they do). In fact, a big part of the Pragmatic CSO is to think about security like a business, with a business plan and everything.


Security Strategies for Long-Term, Targeted Threats

After writing up the Advanced Persistent Threat in this week’s FireStarter, a few people started asking for suggestions on managing the problem. Before I lay out some suggestions, it’s important to understand what we are dealing with here. APT isn’t some sort of technical term – in this case the threat isn’t a type of attack, but a type of attacker. They are advanced – possessing strong skills and capabilities – and persistent, in that if you are a target they will continue to attempt attacks until they succeed or the costs are greater than the potential rewards. You don’t just have to block them once so they move on – they will continue to probe and strike until they achieve their goal. Thus my recommendations will by no means “eliminate” APT. I can make a jazillion recommendations on different technology solutions to block this or that attack technique, but in the end a persistent threat actor will just shift tactics in response. Rather, these suggestions will help detect, contain, and mitigate successful attacks.

I also highly suggest you read Andrew Jaquith’s post, with this quote:

If you fall into the category of companies that might be targeted by a determined adversary, you probably need a counter-espionage strategy – assuming you didn’t have one already. By contrast, thinking just about “APT” in the abstract medicalizes the condition and makes it treatable by charlatans hawking miracle tonics. Customers don’t need that, because it cheapens the threat.

If you believe you are a target, I recommend the following:

  • Segregate your networks and information. The more internal barriers an attacker needs to traverse, the greater your chance of detection. Network segregation also improves your ability to tailor security controls (especially monitoring) to the needs of each segment. It may also assist with compartmentalization, but if you allow VPN access across these barriers, segregation won’t help nearly as much. The root cause of many breaches has been a weak endpoint connecting over VPN to a secured network.
  • Invest heavily in advanced monitoring. I don’t mean only simple signature-based solutions, although those are part of your arsenal. Emphasize two categories of tools: those that detect unusual behavior/anomalies, and those with extensive collection capabilities to help in investigations once you detect something (one flavor of anomaly detection is sketched at the end of this post). Advanced monitoring changes the playing field! We always say the reason you will eventually be hacked is that when you are on defense only, the attacker needs only a single mistake to succeed. Advanced monitoring gives you the same capability – now the attacker needs to execute with greater perfection, over a sustained period of time, or you have a greater chance of detection.
  • Upgrade your damn systems. Internet Explorer 6 and Windows XP were released in 2001; these technologies were not designed for today’s operating environment, and are nearly impossible to defend. The anti-exploitation technologies in current operating systems aren’t a panacea, but they do raise the barrier to entry significantly. This is costly, and I’ll leave it to you to decide if the price is worth the risk reduction. When possible, select 64-bit options, as they include even stronger security capabilities. No, new operating systems won’t solve the problem, but we might as well stop making it so damn easy for the attackers. Longer term, we also need to pressure our application vendors to update their products to utilize the enhanced security capabilities of modern operating systems. For example, those of you in Windows environments could require all applications you purchase to enable ASLR and DEP (sorry Adobe).

By definition, advanced persistent threats are as advanced as they need to be, and they won’t be going away. Compartmentalization and monitoring will help you better detect and contain attacks, and are fairly useful no matter what tactics your opponent deploys. They are also pretty darn hard to implement comprehensively in current operating environments. But again, nothing can “solve” APT, since we’re talking about determined humans with time and resources, who are out to achieve the specific goal of breaking into your organization.
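As one example of the anomaly side of advanced monitoring, here is a minimal sketch that flags suspiciously regular outbound connections (a common sign of command-and-control beaconing) from a hypothetical CSV of flow records. Production NBA and SIEM tools build far richer baselines, but the shape of the analysis is the same.

```python
# Hosts that phone home on a near-constant interval look automated, not
# human; low variance in connection timing is the tell.
import csv
import statistics
from collections import defaultdict

FLOW_LOG = "flows.csv"  # hypothetical: timestamp,src_ip,dst_ip,dst_port

timestamps = defaultdict(list)
with open(FLOW_LOG, newline="") as f:
    for row in csv.DictReader(f):
        timestamps[(row["src_ip"], row["dst_ip"])].append(float(row["timestamp"]))

for (src, dst), times in timestamps.items():
    if len(times) < 10:
        continue  # too few samples to judge regularity
    times.sort()
    intervals = [b - a for a, b in zip(times, times[1:])]
    mean = statistics.mean(intervals)
    # Coefficient of variation under 10% suggests scheduled check-ins.
    if mean > 0 and statistics.pstdev(intervals) / mean < 0.1:
        print(f"{src} -> {dst}: {len(times)} connections every ~{mean:.0f}s")
```

Note this is exactly the kind of analysis that rewards keeping flow data for months: beacons with long sleep intervals only stand out over long observation windows.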


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.