Applied Network Security Analysis: The Forensics Use Case

Most organizations don’t really learn about the limitations of event logs until forensic investigators hold up their hands and explain they know what happened, but aren’t really sure how. Huh? How could that happen? It’s pretty simple: logs are a backward-looking indicator. They can help you piece together what happened, but you can only infer how. In a forensic investigation inferring anything is suboptimal. You want to know, especially given the need to isolate the root cause of the attack and to establish remediations to ensure it doesn’t happen again. So we need to look at additional data sources to fill in the gaps in what the logs tell you. Let’s take a look at a simplified scenario to illuminate the issues. We’ll look at the scenario both from the standpoint of a log-only analysis and then with a few other data sources added. For a more detailed incident response scenario, check out our React Faster and Better paper.

The Forensic Limitations of Logs

It’s the call you never want to get. The Special Agent on the other end of the line called to give you a heads-up: they found some of your customer data as part of another investigation into some cyber-crime activity that helps fund a domestic terrorist ring. Normally the Feds aren’t interested in giving you a heads-up until their investigation is done, but you have a relationship with this agent from your work together in the local InfraGard chapter. So he did you a huge favor. The first thing you need to do is figure out what was lost and how. To the logs!

You aren’t sure how it happened, but you see some strange log records indicating changes on an application server in the DMZ. Given the nature of the data your agent friend passed along, you check the logs on the database server where that data resides as well. Interestingly enough, you find a gap in the logs on the database server, where your system collected no log records for a five-minute period a few days ago. You aren’t sure exactly what happened, but you know with reasonable certainty that something happened. And it probably wasn’t good.

Now you work backwards and isolate the additional systems compromised as the attackers made their way through the infrastructure to reach their target. It’s pretty resource-intensive, but by searching in the log manager you can isolate devices with gaps in their logs during the window you identified. The attackers were pretty effective, taking advantage of unpatched vulnerabilities (Damn, Ops!) and covering their tracks by turning off logging where necessary. At this point you know the attack path, and at least part of what was stolen, thanks to the FBI. Beyond that you are blind.

So what can you do to make sure you aren’t similarly surprised somewhere down the line? You can set the logging system to alert if you don’t get any log records from critical assets in any 2-minute period. Again, this isn’t perfect and will result in a bunch more alerts, but at least you’ll know something is amiss before the FBI calls. With only log data you can identify what was attacked, but probably not how the attack happened.
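
That kind of “critical asset went quiet” alert is straightforward to prototype. Here is a minimal sketch, not from the original post, that scans parsed log records for silent periods on critical systems; the host names, record format, and two-minute threshold are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical list of assets whose logs should never go silent.
CRITICAL_ASSETS = {"db-prod-01", "dmz-app-01"}
MAX_SILENCE = timedelta(minutes=2)

def find_log_gaps(records):
    """records: iterable of (host, timestamp) tuples, sorted by time per host.
    Returns (host, gap_start, gap_end) for silent periods on critical assets."""
    last_seen = {}
    gaps = []
    for host, ts in records:
        if host not in CRITICAL_ASSETS:
            continue
        prev = last_seen.get(host)
        if prev is not None and ts - prev > MAX_SILENCE:
            gaps.append((host, prev, ts))
        last_seen[host] = ts
    return gaps

# Example: a five-minute hole in the database server's logs gets flagged.
records = [
    ("db-prod-01", datetime(2011, 10, 24, 2, 0)),
    ("db-prod-01", datetime(2011, 10, 24, 2, 1)),
    ("db-prod-01", datetime(2011, 10, 24, 2, 6)),  # five minutes of silence
]
for host, start, end in find_log_gaps(records):
    print(f"ALERT: no logs from {host} between {start} and {end}")
```
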
Forensics Driven by Broader Data

Let’s take a look at an alternative scenario with a few other data sources such as full network packet capture, network flow records, and configuration files. It is of course still a bad day when you get the call from your pal the Special Agent, and Applied Network Security Analysis cannot magically make you omniscient, but how you investigate the breach changes.

You still start with the logs on the perimeter server and identify the device that served as the attacker’s initial foothold. But you’ve implemented the Full Packet Capture Sandwich architecture described in the last post, so you are capturing the network traffic in your DMZ. You proceed to the network analysis console (using the full packet capture stream) and search all the traffic to and from the compromised server. Most sessions to that server are typical – standard application traffic. But you find some reconnaissance, and then something pretty strange: an executable injected into the server via faulty field validation on the web app (Damn, Developers!). Okay, this confirms the first point of exploit.

Next we go to the target (keeping in mind what data was compromised) and do a similar analysis. Again, with our full packet capture sandwich in place, we captured traffic to/from the database server as well. As in the log-only scenario, we pinpoint the time period when logging was turned off, then perform a search in our analysis console to figure out what happened during that five-minute period on that segment. Yep, a privileged account turned off logging on the database server and added an admin account to the database. Awesome. Using that account, the attacker dumped the database table and moved the data to a staging server elsewhere on your network.

Now you know which data was taken, but how? You aren’t capturing all the traffic on your network (infeasible), so you have some blind spots, but with your additional data sources you are able to pinpoint the attack path. The NetFlow records coming from the compromised database server show the path to the staging server. The configuration records from the staging server indicate what executables were installed, which enabled the attacker to package and encrypt the payload for exfiltration. Further analysis of the NetFlow data shows the exfiltration, presumably to yet another staging server on another compromised network elsewhere. It’s not perfect, because you are figuring out what already happened. But now you can get back to your FBI buddy with a lot more information about what tactics the attacker used, and maybe even evidence that might be helpful in prosecution.

Can’t Everyone Get Along?

Clearly this is a simplified scenario that perfectly demonstrates the need to collect additional data sources to isolate the root cause and attack path of any


Friday Summary: October 28, 2011

I really enjoyed Marco Arment’s I finally cracked it post, both because he captured the essence of Apple TV here and now, and because his views on media – as a consumer – are exactly in line with mine. Calling DVRs “a bad hack” is spot-on. I went through this process 7 years ago when I got rid of television. I could not accept a 5 minute American Idol segment in the middle of the 30 minute Fox ‘news’ broadcast. Nor the other 200 channels of crap surrounding the three channels I wanted. At the time people thought I was nuts, but now I run into people (okay – only a handful) who have pulled the plug on the broadcast media of cable and satellite. Most people are still frustrated with me when they say “Hey, did you see SuperJunk this weekend?” and I say “No, I don’t get television.” They mutter something like ‘Luddite’ and wander off.

Don’t get me wrong – I have a television. A very nice one in fact, but I have been calling it a ‘monitor’ for the last few years because it’s not attached to broadcast media. But not getting broadcast television does not make me a Luddite – quite the contrary, I am waiting for the future. I am waiting for the day when I can get the rest of the content I want just as I get streaming Netflix today. And it’s not just the content, but the user experience as well. I don’t want to be boxed into some bizarre set of rules the content owners think I should follow. I don’t want half-baked DRM systems or advertising thrust at me – and believe me, this is what many of the other streaming boxes are trying to do. I don’t want to interact with a content provider because I am not interested – it was a bad idea proven foul a long time ago. Just let me watch what I want to watch when I want to watch it. Not so hard.

But I wanted to comment on Marco’s point about Apple and their ability to be disruptive. My guess is that Apple TV will go fully a la carte: show by show, game by game, movie by movie. But the major difference is we would get first run content, not just stuff from 2004. Somebody told me the other day that HBO stands for “Hey, Beastmaster’s On!”, which is how some of the streaming services and many of the movie channels feel. SOS/DD. The long tail of the legacy television market. The major gap in today’s streaming is first run programming. All I really want that I don’t have today is the Daily Show and… the National Football League (cue Monday Night Football soundtrack).

And that’s the point where Mr. Arment’s analysis and mine diverge – the NFL. I agree that whatever Apple offers will likely be disruptive because the technology will simplify how we watch, rather than tiptoeing around legacy businesses and perverse contracts. But today there is only one game in town: the NFL. That’s why all those people pay $60 (in many cases it’s closer to $120) a month – to watch football. You placate kids with DVDs; you subscribe to cable for football! Just about every man I know, and 30% of the women, want to watch their NFL home team on Sunday. It’s the last remaining reason people still pay for cable or satellite in this economy. Make no mistake – the NFL is the 600 lb. gorilla of television. They currently hold sway over every cable and satellite network in the US. And the NFL makes a ridiculous amount of money because networks must pay princely sums for NFL games to be in the market. Which is why the distributors are so persnickety about not having NFL games on the Internet.
Why else would they twist the arm of the federal government to shut down a guy relaying NFL games onto the Internet? (Thanks a ton for that one, you a-holes – metropolitan areas broadcast over-the-air for free but it’s illegal to stream? WTF?) Nobody broadcasts live games over the Internet!?! Why not?!? The NFL could do it directly – they are already set up with “Game Pass” and “Game Rewind” – but likely can’t because fat network contracts prohibit it. Someone would need to spend the $$$ to get Internet distribution rights. Someone should, because there is huge demand, but there are only a handful of firms which could ante up a billion dollars to compete with DirecTV.

But when this finally happens it will be seriously disruptive. Cable boxes will be (gleefully) dumped. Satellite providers will actually have competition, forcing them to alter their contracts and rates, and go back to delivering a quality picture. ISPs will be pressured to actually deliver the bandwidth they claim to be selling. Consumers will get what they want at lower cost and with greater convenience. Networks will scramble to license the rest of their content to any streaming service provider they can, increasing content availability and pushing prices lower. If Apple wants to be disruptive, they will stream NFL games over the Internet on demand. If they can get rights to broadcast NFL for a reasonable price, they win. The company that gets the NFL for streaming wins. If Apple doesn’t, bet that Amazon will.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted on SaaS security services.
  • Adrian quoted in SearchSOA.
  • Compliance Holds Up Los Angeles Google Apps Deployment. Mike plays master of the obvious. Ask the auditor before you commit to something that might be blocked by compliance. Duh!

Favorite Securosis Posts

  • Adrian Lane: A Kick-Ass Cloud Database Security Automation Example. And most IaaS cloud providers have the hooks to do most of this today. You can even script the removal of base database utilities you don’t want. Granted, you still have to set permissions on data and users, but the


Next Generation != (Always) Better

It all started with a simple tweet from The Mogull, which succinctly summed up a lot of the meat grinder of high tech marketing. You see, the industry is based on upgrades and refreshes, largely driven by planned obsolescence. Let’s just look at Microsoft Word. I haven’t really used any new functionality since Office 2003. You? They have overhauled the UI and added some cloudiness (which they call Office Live), but it’s really moving deck chairs around. A word processor is a word processor for 95% of the folks out there.

Rich was reacting to the constant barrage of “next generation” this and “next generation” that we get pitched, while most organizations can’t even make the current generation work. It is becoming rare to survive a vendor briefing without hearing about how their product is NextGen (only their product, of course). This is rampant in the spaces I cover: network and endpoint security. Who hasn’t heard of a next generation firewall? Now we have next generation IPS, and it’s just a matter of time before we see next generation TBD promising to make security easy. We know how this movie ends.

To be fair, some innovations really are next generation, and they make a difference to leading edge companies that can take advantage of them. I mentioned NGFW in a tongue-in-cheek fashion, but the reality is that moving away from ports and protocols, to application awareness, is fundamentally different and can be better. But only if the customer can take advantage and build these new application-oriented policies. A NGFW is no better than a CGFW (current generation firewall) without a next-generation rule base to take advantage of the additional capabilities.

I guess what I find most frustrating about the rush to the next generation is the arbitrary nature of what is called “next generation”. Our pals at the Big G (that’s Gartner for you Securosis n00bs) recently published a note on NGIPS (next generation IPS), which you can get from SourceFire (behind a reg wall). As the SourceFire folks kindly point out, they have offered many of these so-called next generation functions since 2003 – they just couldn’t tell a coherent story about it. Can something over 6 years old really be next generation? So next generation monikers are crap. Driven by backwards-looking indicators – like most big IT research. SourceFire did a crappy job of communicating why their IPS was different back in the day, and it wasn’t until some other companies (notably the NGFW folks) started offering application-aware IPS capabilities that the infinite wisdom in Stamford decided it was suddenly time for NGIPS. And now this will start a vendor hump-a-thon where every other IPS vendor (yeah, the two left) will need to spin their positioning to say ‘NGIPS’ a lot. Whether they really do NGIPS is beside the point. You can’t let the truth get in the way of a marketing campaign, can you?

What’s lost in all the NextGen quicksand? What customers need. Most folks don’t need a next generation word processor, but one shows up every 2-3 years like clockwork. Our infrastructure security markets are falling in line with this model. Do we need NextGen key management? NextGen endpoint security? NextGen application protection? Given how well the current generation works, I’d say yes. But here’s the problem. I know this is largely a marketing exercise, so let’s be clear about what we are looking for. Something that works.
Call it what you want, but if it’s the same old crap that we couldn’t use before, rebranded as next generation… I’m not interested. And no one else will be either.


A Kick-Ass Cloud Database Security Automation Example

Yesterday I was in Vegas to participate in a panel at IBM’s Information on Demand Conference. To my amusement and frustration, I was already in Vegas that weekend, drove 4.5 hours home to Phoenix on Sunday, then flew back Monday evening (4 hours door to door). The panel was on database security in the cloud, and at one point I came up with an example to show how this sh*t is seriously different than how we do security today. The example below would be nearly impossible in a non-cloud environment. It’s fictional, but there are no technical obstacles to implementing it right now. There is, however, one limitation I will mention at the end.

Imagine a world where you have a robust internal cloud to support business units in a large enterprise. This is in contrast to current environments where, if a business unit wants an application or database resource, they submit a request, things are approved (maybe), then physical or virtual assets are acquired, configured, and assigned. You are one of those forward-thinking orgs which stood up a private cloud with a self-service portal where approved managers can dynamically provision a pre-established set of resources. No, this probably isn’t how most of you use the cloud today, but it will be. Now imagine that some of these resource stacks include databases. You are, obviously, concerned with the security and compliance of these databases. This is the sort of thing that used to constantly bite you in the ass, as teams ranging from developers to sub-departments installed their own stuff, loaded sensitive data, and then failed to secure it. But you now sleep soundly at night because…

  • When the user requests the application stack, all operating systems and software are automatically patched to current levels using mandatory installation scripts.
  • The installation scripts also configure the resources to a secure-by-default state, doing things like inserting user credentials, locking down ports, setting appropriate file permissions, configuring application defaults, and so on. You can even automate service account management and cross-link them between application components (heck, we do this in the CCSK Plus training class).
  • All application components instantiate themselves in different, locked-down network security groups. Only required internal ports are open. This can be much more granular and restrictive than current application stacks which require physical hardware to protect.
  • When the database spins up it registers itself with your Database Activity Monitoring (DAM) and assessment tools via their APIs. The DAM tool performs an initial database vulnerability assessment and registers the database for future scans. (Other stack components do similar things, but we’re focusing on the database for this example.) Thanks to those cloud APIs, it knows where to look for the database and who created it, and the necessary firewall ports are opened.
  • After the initial DAM scan is complete and passed, the DAM tool makes an API call to the cloud’s network controller to open up any additional ports needed for internal access. Depending on the script, this may be restricted to subnets, individual IPs, and so on.
  • Similar processes are followed for the application and web server components and their various security tools (vulnerability assessment, asset registration, configuration management, etc.). Assuming everything is hunky dory, any last required ports to access the application can be opened up. The user won’t pick this – it will be handled automatically via API and policy scripts.
  • The DAM tool will have installed its monitoring agent at initial launch. The agent connects back to the DAM server and activity is now monitored (including administrative SQL queries).
  • On a specified schedule, the database is scanned for ongoing configuration compliance and vulnerabilities. It is also scanned for sensitive data, using the content discovery feature of your DAM tool and policies tied to the type of application stack deployed and the business unit assigned. If it isn’t supposed to have credit card numbers, but they start appearing, security gets an alert.

Think about this for a moment – today people try to spin stuff up all over the place and it’s nearly impossible to find, never mind configure securely. In the example above we completely automate the configuration and security of the application stack (including the database) on a dynamic basis using APIs and policy scripts. The database spins up with secure settings in a secure network; it is centrally registered, actively monitored, and scanned for both problems and sensitive (read ‘regulated’) data on an ongoing basis. Today’s limitation is that very few security tools, by default, support the automation I described above. But things like initialization scripts and dynamic network management via APIs are fundamental to all cloud platforms. Cool, eh? And heck, I’m probably missing a bunch of things.
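
To make the DAM registration step a bit more concrete, here is a hedged sketch of the kind of policy script described above: it registers a freshly provisioned database with a DAM tool, runs the initial assessment, and only asks the cloud’s network controller to open a port if the scan passes. The API endpoints, paths, and parameters are hypothetical stand-ins – no vendor’s actual interface – and a real script would obviously handle authentication and errors.

```python
import requests  # assumes the requests library is available

# Hypothetical endpoints standing in for a DAM tool's API and the
# internal cloud's network controller.
DAM_API = "https://dam.internal.example.com/api/v1"
NET_API = "https://cloud-controller.internal.example.com/api/v1"

def bootstrap_database(instance_id, db_host, owner):
    """Register a new database with DAM, run the initial assessment,
    and open application access only if the scan passes."""
    # 1. Register the database for monitoring and scheduled scans.
    requests.post(f"{DAM_API}/databases",
                  json={"instance_id": instance_id, "host": db_host, "owner": owner})

    # 2. Kick off the initial vulnerability assessment.
    scan = requests.post(f"{DAM_API}/databases/{instance_id}/scan").json()
    if scan.get("status") != "passed":
        # Leave the security group locked down and alert a human instead.
        requests.post(f"{DAM_API}/alerts",
                      json={"instance_id": instance_id,
                            "reason": "initial assessment failed"})
        return False

    # 3. Scan passed: ask the network controller to open the database port,
    #    restricted to the application tier's security group.
    requests.post(f"{NET_API}/security-groups/{instance_id}/rules",
                  json={"port": 5432, "source": "app-tier-sg"})
    return True
```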


Applied Network Security Analysis: Collection and Analysis = A Fighting Chance

In the introduction to our Applied Network Security Analysis series, we talked about monitoring everything and the limitations of a log-centric data collection approach, in our battle to improve security operational processes. Now let’s dig in a little deeper and understand what kind of data collection foundation makes sense, given the types of analysis we need to deal with our adversaries.

Let’s define the critical data types for our analysis. First are the foundational elements, which were covered ad nauseam in our Monitoring Up the Stack paper. These include event logs from the network, security, databases, and applications. We have already pointed out that log data is not enough, but you still need it. The logs provide a historical view of what happened, as well as the basis for the rule base needed for actionable alerts. Next we’ll want to add additional data commonly used by SIEM devices – that includes network flows, configuration data, and some identity information. These additional data types provide increased context to detect patterns of potential badness. But this is not enough – we need to look beyond these data types for more detail.

Full Packet Capture

As we wrote in the React Faster and Better paper: One emerging advanced monitoring capability – the most interesting to us – is full packet capture. These devices basically capture all traffic on a given network segment. Why? The only way you can really piece together exactly what happened is to use the actual traffic. In a forensic investigation this is absolutely crucial, providing detail you cannot get from log records.

Going back to a concept we call the Data Breach Triangle, you need three components for a real breach: an attack vector, something to steal, and a way to exfiltrate it. It’s impossible to stop all potential attacks, and you can’t simply delete all your data, so we advocate heavy perimeter egress filtering and monitoring, to (hopefully) prevent valuable data from escaping your network. So why is having the packet stream so important? It is a critical facet of heavy perimeter monitoring. The full stream can provide a smoking gun for an actual breach, showing whether data actually left the organization, and which data. If you look at ingress traffic, the network capture enables you to pinpoint the specific attack vector(s) as well. We will discuss both these use cases, and more, in additional detail later in this series, but for now it’s enough to say that full network packet capture data is the cornerstone of Applied Network Security Analysis.

Intelligence and Context

Two additional data sources bear mentioning: reputation and malware. Both these data types provide extra context to understand what is happening on your networks and are invaluable for refining alerts.

  • Reputation: Wouldn’t it be great if you knew some devices and/or destinations were up to no good? If you could infer some intent from just an IP address or other identifying characteristics? Well you can, at least a bit. By leveraging some of the services that aggregate data on command and control networks, and on other known bad actors, you can refine your alerts and optimize your packet capture based on behavior, not just on luck. Reputation made a huge difference in both email and web security, and we expect a similar impact on more general network security. This data helps focus monitoring and investigation on areas likely to cause problems.
  • Malware samples: A log file won’t tell you that a packet carried a payload with known malware. But samples of known malware are invaluable when scrutinizing traffic as it enters the network, before it has a chance to do any damage. Of course nothing is foolproof, but we are trying to get smarter and optimize our efforts. Recognizing something that looks bad as it enters the network would provide a substantial jump for blocking malware. Especially compared to other folks, whose game is all about cleaning up the messes after they fail to block it.

We will dive into how to leverage these data types by walking through the actual use cases where this data pays dividends later in the series. But for now our point is that more data is better than less, and without building a foundation of data collection, analysis is likely futile.

Digesting Massive Amounts of Data

The challenge of collecting and analyzing a multi-gigabit network stream is significant, and each vendor is likely to have its own special sauce to collect, index, and analyze the data stream in real time. We won’t get into specific technologies or approaches – after all, beauty is in the eye of the beholder – but there are a couple things to look for:

  • Collection Integrity: A network packet capture system that drops packets isn’t very useful, so the first and foremost requirement is the ability to collect network traffic at your speeds. Given that you are looking to use this data for investigation, it is also important to maintain traffic integrity to prove packets weren’t dropped.
  • Purpose-built data store: Unfortunately MySQL won’t get it done as a data store. The rate of insertions required to deal with 10 Gbps traffic demands something built specifically for that purpose. Again, there will be lots of puffery about this data store or that one. Your objective is simply to ensure the platform or product you choose will scale to your needs.
  • High-speed indexing: Once you get the data into the store you need to make sense of it. This is where indexing and deriving metadata become critical. Remember this has to happen at wire speeds, is likely to involve identifying applications (like an application-aware firewall or IDS/IPS), and enriching the data with geolocation and/or identity information.
  • Scalable storage: Capturing high-speed network traffic demands a lot of storage. And we mean a lot. So you need to calibrate onboard storage against archiving approaches, optimizing the amount of storage on the capture devices based on the number of days of traffic to keep. Keep in mind that the metadata
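
To put a rough number on the storage point, here is a back-of-the-envelope sketch that estimates raw capture storage for a given link speed, average utilization, and retention window. The utilization and metadata-overhead figures are illustrative assumptions, not measurements.

```python
def capture_storage_tb(link_gbps, avg_utilization, retention_days, metadata_overhead=0.1):
    """Rough full-packet-capture storage estimate, in decimal terabytes.
    link_gbps:         link speed in gigabits per second
    avg_utilization:   average fraction of the link carrying traffic (0-1)
    retention_days:    days of raw traffic to keep on the capture device
    metadata_overhead: extra fraction for indexes and derived metadata"""
    bytes_per_second = link_gbps * 1e9 / 8 * avg_utilization
    raw_bytes = bytes_per_second * 86_400 * retention_days
    return raw_bytes * (1 + metadata_overhead) / 1e12

# A 10 Gbps segment at 30% average utilization, keeping 7 days of traffic,
# lands around 250 TB before compression -- which is why onboard storage
# gets calibrated against archiving, and why metadata is often kept far
# longer than the raw packets themselves.
print(f"{capture_storage_tb(10, 0.3, 7):.0f} TB")
```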


Incite 10/26/2011: The Curious Case of Flat Stanley

Flat Stanley has it pretty good. If you have elementary school age kids, you probably know all about him. Flat Stanley is a cute story about a kid who gets flattened, and then spends most of the book trying to regain his natural form. Many teachers have kids do a Flat Stanley project, where they color a picture and send it to a friend or relative. The recipient then takes pictures of Flat Stanley doing something from their daily routine and writes a letter to send back with the photo. The kids learn a bit about someone else, and they have to read the letter. Win/win.

Last week, XX2 gave me her Flat Stanley to take on a trip. I started at SecTor CA up in Toronto, so Flat Stanley got to take a picture by the CN Tower. While I’m on this topic, I need to shout out for the folks behind SecTor CA. It’s a great conference, with great speakers and a great community. If you are in or around Toronto, you need to get to SecTor CA. They even invited Stanley to get up on stage and talk about his curious life (picture below). The audience was enthralled. Evidently Stanley doesn’t make too many high-profile keynote speeches, so XX2’s teacher showed the class the picture. It was a big hit. Turns out the wonderful Arlen clan also has lots of experience with Flat Stanley. So we traded stories of what they did with Flat Stanley. They even heard tales of Flat Stanley going to London and attending the Royal Wedding. That dude gets around.

Then I took Flat Stanley on my annual golf trip with the boys. Why not? That keynote speech business is hard work, and Stanley needed a bit of R&R. I’m pretty sure I should have had Stanley hit a few drives for me – he couldn’t have done worse. Let’s just say I should stick to writing and pontificating. I did get some good photos of Stanley in the golf cart, and putting in a birdie. Stanley is a child, so I put him to bed before the evening festivities. And that’s all I’ll say about that. But all told, Flat Stanley has a pretty good gig. He travels around the world and experiences interesting stuff. Which, when I come to think about it, is kind of what I do. And I’m not flat either. That would be a win for me.

-Mike

Photo credits: Mike Rothman on his rockin’ iPhone 4S

Incite 4 U

Getting Binary on Risk Assessment: If there is one thing I can say with a high level of confidence, it’s that math guys will defend math. Alex Hutton doesn’t disappoint, as he critiques Ben Sapiro’s Binary Risk Assessment thought balloon (presented at SecTor CA). Alex is balanced but objects to calling Ben’s approach risk assessment; instead he calls it a way to assess vulnerability severity. Vernacular and semantics – the tools of lawyers and, seemingly, math guys. What I like about Ben’s approach is that it’s simple and quick. Most real risk assessment methods are neither. And given the need to prioritize actions in real time, it’s better to be quick than right to 5 decimal places. So I like Ben’s approach – read it and use it. That doesn’t mean you shouldn’t still push toward true risk quantification (if you have that kind of threshold for pain), but understand that there is a time and place for each approach. – MR

NoSQL on NoCloud: I am not surprised that Oracle launched a NoSQL database at OpenWorld. NoSQL threatens the relational DB status quo with cheaper, more agile capabilities and greater data capacity. What does surprise me is their release of NoSQL on a big-ass big data appliance. So new, yet so old school.
This is especially interesting in light of the news that Oracle’s acquiring RightNow while talkin’ smack about how Salesforce.com is the roach motel of cloud. I think some of this puffery is because Oracle was late to adopt the cloud, much as Microsoft was with the Internet, but they are certainly making a concerted cloudy push now. Regardless, the big appliance deployment could really work. It’s anti-cloud, but wears like a comfortable old jacket. And it’s so self-contained that it’s generic storage, like a SAN, and you’ll likely be able to outsource security and maintenance and just worry about pushing data. I think this will be very popular for small enterprises who just need to get work done without worrying too much about new technologies. – AL

Security small guy syndrome: I think I have ranted about this one before, but one of my pet peeves is people in security talking about how “We have to educate the users/developers/business/whatever.” Because, more often than not, when they say ‘educate’ they really mean ‘indoctrinate’. To me it always sounds like small guy syndrome – you know, the kid who has all the answers if the stupid world would just listen! Chris Eng pokes at a recent presentation that sounds like it falls into this category. It isn’t that security shouldn’t talk to development or try to work with them, but we will never succeed if we don’t understand their priorities in the context of our own bias. Even then their priorities will never completely align with ours because we have different jobs. So my advice is try to work with developers, but don’t expect to change them – instead assume you will be adding whatever else you need to improve the end product (secure code, right?). – RM

Cyber-insurance: Win or Futility? We are starting to see better analyses of whether cyber-insurance makes sense. I have been pretty negative because it wasn’t clear to me that the underwriting was based on any real loss data – which means the environment has been rife with Ouija board pricing. There is a good primer on NetworkWorld explaining how to maybe use cyber-insurance effectively, and I have seen a


New Series: Understanding and Selecting a Database Activity Monitoring Solution 2.0

Back in 2007 we – it was actually just Rich back then – published Understanding and Selecting Database Activity Monitoring – the first in-depth examination of what was then a relatively new security technology. That paper is, and remains, the definitive guide for DAM, but a lot has happened in the past 4 years. The products – and the vendors who sell them – have all changed. The reasons customers bought four years ago are not the reasons they buy today. Furthermore, the advanced features of 2007 are now part of the baseline. Given the technology’s increased popularity and maturity, it is time to take a fresh look at Database Activity Monitoring – reassessing the technology, use cases, and market drivers.

So we are launching Understanding and Selecting a Database Activity Monitoring Solution Version 2.0. We will update the original content to reflect our current research, and share what we hear now from customers. We’ll include some of the original content that remains pertinent, but largely rewrite the supporting trends, use cases, and deployment models, to reflect today’s market. A huge proportion of the original paper was influenced by vendors and the user community. I know because I commented on every post during development – a year or so before I joined the company. As with that first version, in accordance with our Totally Transparent Research process, we encourage users and vendors to comment during this series. It does change the resulting paper, for the better, and really helps the community understand what’s great and what needs improvement. All pertinent comments will be open for public review, including any discussion on Twitter, which we will reflect here.

The areas we know need updating are:

  • Architecture & Deployment: Basic architectures remain constant, but hardware-based deployments are slowly giving way to software and virtual appliances. Data collection capabilities have evolved to provide new options to capture events, and inline use has become commonplace. DAM “in the Cloud” requires a fresh examination of platforms to see who has really modified their products and who simply markets their products as “Cloud Ready”.
  • Analytics: Content and query structure analysis now go hand in hand with rule- and attribute-based analysis. SQL injection remains a top problem but there are new methods to detect and block these attacks.
  • Blocking: When the original paper was written blocking was a dangerous proposition. With better analytics and varied deployment models, and much-improved integration to react to ongoing threats, blocking is being adopted widely for critical databases.
  • Platform Bundles: DAM is seldom used standalone – instead it is typically bundled with other technologies to address broad security, compliance, and operational challenges far beyond the scope of our 2007 paper. We will cover a handful of the ways DAM is bundled with other technologies to address more inclusive demands. SIEM, WAF, and masking are all commonly used in conjunction with assessment, auditing, and user identity management.
  • Trends: When it comes to compliance, data is data – relational or otherwise. The current trend is for DAM to be applied to many non-relational sources, using the same analytics while casting a wider net for sensitive information housed in different formats. Adoption of File Activity Monitoring, particularly in concert with user and database monitoring, is growing. DAM for data warehouse platforms has been a recent development, which we expect to continue, along with DAM for non-relational databases (NoSQL).
  • Use cases and market drivers: DAM struggled for years, as users and vendors sought to explain it and justify budget allocations. Compliance has been a major factor in its success, but we now see the technology being used beyond basic security and compliance – even playing a role in performance management.

In our next post we will delve into architecture and deployment model changes – and discuss how this changes performance, scalability, and real-time analysis.
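
As a toy illustration of what query structure analysis means, the sketch below reduces SQL statements to skeletons (literals replaced with placeholders) and flags structures the application has never issued before – the kind of drift an injected clause creates. Commercial DAM products do this far more rigorously; the regexes and the learned-skeleton set here are illustrative assumptions only.

```python
import re

def fingerprint(sql):
    """Reduce a SQL statement to its structural skeleton by replacing
    string and numeric literals with placeholders."""
    skeleton = re.sub(r"'[^']*'", "?", sql)        # string literals
    skeleton = re.sub(r"\b\d+\b", "?", skeleton)   # numeric literals
    return re.sub(r"\s+", " ", skeleton).strip().lower()

# Skeletons learned from the application's normal query traffic.
known_good = {fingerprint("SELECT * FROM orders WHERE customer_id = 42")}

def check(sql):
    if fingerprint(sql) not in known_good:
        print(f"ALERT: unfamiliar query structure: {sql}")

check("SELECT * FROM orders WHERE customer_id = 17")          # same skeleton, silent
check("SELECT * FROM orders WHERE customer_id = 17 OR 1=1")   # injected clause, alerts
```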


Friday Summary: October 21, 2011

My wife and I are pretty big Jimmy Buffett fans. I first got hooked way back in high school, working as a lifeguard. The summer of my freshman year in college I went with a group of friends down to the Orange Bowl, and we snuck off for a day trip to Key West and a short visit to the very first Margaritaville. I really got hooked when I was deep into paramedic school. In our program you worked or attended classes 80+ hours a week – bouncing around between a bunch of hospitals, fire stations, and ambulance bays throughout the entire Denver Metro area. In the middle of winter I survived all those hours on the road thanks only to a Buffett tape serenading me with sweet visions of beaches and beer. Later, it didn’t hurt that I met my wife at a Buffett show.

While he tours consistently year after year, he only hits Phoenix every 2-3 years now. So when we didn’t see our home town on the schedule, a bunch of us decided to get tickets to the Vegas show. Then he added the Denver show. I lived in Boulder for 16 years and still have a big chunk of friends there who convinced me to pop over for the show – especially since I hadn’t seen some of them in 2 years, and Buffett hadn’t played Denver in 8. Then he added the Phoenix show. And that, my friends, is how I managed to sign up for three Jimmy Buffett shows, in three different cities, in three different states in one week. One of which is tonight, and I have to go assemble our new portable grill. So… On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Quiet week. Guess even media whores need some time off.

Favorite Securosis Posts

  • Adrian Lane: Tokenization Guidance: Merchant Advice.
  • Rich: Applied Network Security Analysis. Because Mike writes much better section headings than I do.

Other Securosis Posts

  • Incite 10/19/2011: The Inquisition.
  • Database Security Market Sizing and Guesstimation.

Favorite Outside Posts

  • Adrian Lane: Secret iOS business; what you don’t know about your apps. There are scarier threats to all mobile platforms than what’s mentioned here, but the post does a great job of underscoring that security is only as good as the app developer. And if they want to spy on you… they will.
  • Mike Rothman: The forever recession (and the coming revolution). Seth Godin is the philosopher king of the Internet age. This is a great post about how every recession gives way to unbounded growth. If you can figure out how to deal with the next thing. Read this. Read his stuff. Adapt.
  • Pepper: Georgia Tech Turns iPhone into spiPhone. Fortunately not suitable for even half-decent passwords, but a very clever hack to eavesdrop via an accelerometer. Should work on Android phones too – for now.
  • Rich: Michael Winslow gets the Led out. I know this has nothing to do with security. And I know it’s been all over Twitter. But it’s still the awesomest thing I’ve seen in a while.

Research Reports and Presentations

  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.

Top News and Posts

  • Venafi’s take on Duqu.
  • W32.Duqu: The Precursor to the Next Stuxnet. Supposedly from the Stuxnet authors.
  • New Jersey Transit Embraces Google Wallet. And so it begins.
  • Oracle publishes major patch release. Many database and Java patches.
  • Cloud Security in Datacenter Terms.
  • Google embraces HTTPS.
  • Social Security kept silent about private data breach. We missed this last week.
  • APT – The Plain Hard Truth.
  • RSA blames breach on two hacker clans working for China. I didn’t get to see the talk, and so am still slightly skeptical, but expect more info to come out at RSA this year.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Patrick, in response to Database Security Market Sizing and Guesstimation.

This post raises an interesting issue for me – And that is, what is the purpose of measurement and estimation? Of anything, really – a market, an effect, a potential risk or loss magnitude? In my mind, it’s a matter of accuracy vs precision, bounded by the contextual requirements of how much reduction in uncertainty is required by the subject/decision at hand. Single point estimates, like the one referenced above – are usually not as informative as we might wish. A range, or even an estimated probability distribution, is much more useful, and not that hard to do quickly. How big is the database security market? I don’t know – but that doesn’t mean I couldn’t come up with something useful if I needed to make a decision. The key here is useful, not precise – just about measurement carries some uncertainty.


Applied Network Security Analysis: Introduction

Today we launch our next blog series, on a topic we believe is critical to success in today’s threat environment. It is network security analysis, a rather grand and nebulous term, but consider this the next step on the path which started with Incident Response Fundamentals and continued with React Faster and Better. The issues are pretty straightforward. We cannot assume we can stop the attackers, so we have to plan for a compromise. The difference between success and failure breaks down to how quickly you can isolate the attack, contain the damage, and then remediate the issue. So we build our core security philosophy around monitoring critical networks and devices, facilitating our ability to find the root cause of any attack.

Revisiting Monitor Everything

Back in early 2010, we published a set of Network Security Fundamentals, one of which was Monitor Everything. If you read the comments at the bottom of the post, you’ll see some divergent opinions of what everything means to different folks, but nobody really disagrees with broad monitoring as a core tenet of security nowadays. We can thank the compliance gods for that. To understand the importance of monitoring everything, let’s excerpt some research I published back in early 2008 that is still relevant today.

New attacks are happening at a fast and furious pace. It is a fool’s errand to spend time trying to anticipate where the issues are. REACT FASTER first acknowledges that all attacks cannot be stopped. Thus, focus remains on understanding typical traffic and application usage trends and monitoring for anomalous behavior, which could indicate an attack. By focusing on detecting attacks earlier and minimizing damage, security professionals both streamline their activities and improve their effectiveness.

That post then discusses some data sources you can (and should) monitor, including firewalls, IDS/IPS, vulnerability scans, network flows, device configurations, and content security devices. But we are still looking at this data in terms of profiling what has happened and using that as a baseline. Then watch for variations beyond tolerance and alert when you see them. We still fundamentally believe in this approach. It’s clearly the place to start for most organizations, for which any data is more than they have now. But for maturing security organizations, let’s examine why logs are only the start.

Logs are not enough

Back when I was in the SIEM space, it was clear that event logs are a great basis for compliance reporting, because they effectively substantiate implemented controls. As long as the logs are not tampered with, at least. But when you are working to isolate a security issue, the logs tell you what happened, but lack the depth to truly understand how it happened. Isolating a security attack using log data requires having logs from all points in the path between attacker and target. If you aren’t capturing information from the application servers, databases, and applications themselves, visibility is severely impaired.

Contrast that against the ability to literally replay an attack from a full network packet capture. You could follow along as the attacker broke your stuff. See the path they took to traverse your network, the exploits they used to compromise devices, the data they exfiltrated, and how they covered their tracks by tampering with the logs. Of course this assumes you are capturing the right network traffic along the attacker’s path, and it might not be feasible to capture all traffic all the time.
But still, if you look to implement a full network packet capture sandwich (as we described in the React Faster and Better series), incident responders have much more information to work with. We’ll discuss how to deploy the technology to address some of these issues later in this series. Given that you need additional data to do your job, where should you look?

The Network Doesn’t Lie

For the purposes of this discussion, let’s assume time starts at the moment an attacker gains a foothold in your network. That could be by compromising a device (through whatever means) already on the network, or by having a compromised device connect to the internal network. At that point the attacker is in the house, so the clock is ticking. What do they do next? An attacker will try to move through your environment to achieve their ultimate goal, whether that be compromising a specific data store or adding to their bot army, or whatever. There are about a zillion specific things the attacker could do, and 99% of them depend on the network in some way. They can’t find other targets without using the network to locate them. They can’t attack the target without trying to connect to it, right? Furthermore, even if they are able to compromise the ultimate target, the attackers must then exfiltrate the data. So they will try to use the network to move the data. They need the network, pure and simple. Which means they will leave tracks, but only if you are looking.

This is why we favor (as described in React Faster and Better) capturing as much of the full network packet data as possible. Attackers could compromise network devices and delete log records. They could generate all sorts of meaningless traffic to confuse network behavioral analysis. But they can’t alter the packet stream as it’s captured, which becomes the linchpin of the data you’ll collect to perform this advanced network security analysis.

Data is not information

But just collecting data isn’t enough. You need to use the data to draw conclusions about what’s happening in your environment. That requires indexing the data, supplementing and enriching it with additional context, alerting on the data, and then searching through the data to pursue an investigation. This is all technically demanding. Just capturing the full network packet stream requires a purpose-built data store, which does some black magic to digest and index network traffic at sufficient speed to provide usable, actionable information to shorten the exploit window. To get an idea of the magnitude of this challenge, note
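
As a toy illustration of the “profile typical traffic, then alert on variations beyond tolerance” idea referenced above, the sketch below baselines outbound bytes per host from flow records and flags large deviations. The host names, byte counts, and three-standard-deviation tolerance are assumptions for illustration, not a recommendation for production monitoring.

```python
from statistics import mean, stdev

def build_baseline(flow_history):
    """flow_history: {host: [daily outbound byte counts]} from a flow collector.
    Returns {host: (mean, stdev)} -- the 'typical traffic' profile."""
    return {h: (mean(v), stdev(v)) for h, v in flow_history.items() if len(v) >= 2}

def check_anomalies(baseline, today, tolerance=3.0):
    """Flag hosts whose outbound volume today exceeds their historical mean
    by more than `tolerance` standard deviations."""
    for host, sent in today.items():
        if host not in baseline:
            continue
        mu, sigma = baseline[host]
        if sigma and (sent - mu) / sigma > tolerance:
            print(f"ALERT: {host} sent {sent:.2e} bytes, baseline {mu:.2e} +/- {sigma:.2e}")

# Hypothetical flow data: the database server suddenly pushes out far more
# data than usual -- the kind of exfiltration pattern the series describes.
history = {"db-prod-01": [1.2e9, 1.1e9, 1.3e9, 1.2e9, 1.25e9]}
check_anomalies(build_baseline(history), {"db-prod-01": 9.0e9})
```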


Incite 10/19/2011: The Inquisition

As my kids get older, fundamental aspects of their personalities become more apparent. XX1 won the “most inquisitive” award in kindergarten. 5 years later, she still asks questions. Lots of questions. A seemingly endless stream of questions. The Inquisition went into full effect when we went to the Falcons game last weekend. This is the 4th year we’ve had tickets, so it is now becoming more about the game, rather than just about the ice cream and other snacks. From the opening kickoff until the last touchdown in the 4th quarter, I got a steady stream of questions. Which direction are they going? Why was that a penalty? Who would you root for if the Giants played the Falcons? Should I get a Dippin’ Dots or frozen lemonade? What’s pass interference? Questions, questions, questions.

Now I like watching my football. I don’t like to talk during the game. If I do talk, it’s about soft zones, off tackles, and shot plays. I felt myself getting a bit frustrated under the constant barrage of questions. Then I remembered this was my evil plan in the first place. I want the kids to love watching football. I want them to have memories of going to NFL games. If they don’t understand the game they won’t want to go with me, and I’ll be sad. So I spent the time and tried to explain a few easy concepts. Like possessions (the Falcons have the ball, and they are going for that end zone), first downs, and kickoffs/punts. And she started to understand. We had a great time and that’s what it’s all about.

I love that she asks questions. She wants to learn and when she doesn’t understand, she asks questions until she does. That’s a lot better than nodding like you get it, but being too proud to admit you don’t. This is a great skill, and over time we’ll work on her figuring some stuff out herself and then asking the remaining questions. But I need to keep in mind that it’s a patience thing for me as well. I don’t have all the answers – certainly not to an endless stream of questions. So I have to get better about admitting I don’t know, and (given all the devices in our house) walking up to one of my magic boxes to figure it out. So as uncomfortable as the Inquisition may be at times, I wouldn’t have it any other way.

-Mike

Photo credits: “Spanish Inquisition torture method: the rack” originally uploaded by un_owen

Incite 4 U

Love and Hate, version 1: I never met Dennis Ritchie, but he certainly had a major impact on my life. As a computer science undergrad at Cal, UNIX and C were everything to me. I lived with The C Programming Language. Literally. Along with The UNIX Programming Environment – neither book ever left my backpack. They remain on my bookshelf to this day. And I hated both. I thought C was a miserable language. Pointer issues, memory leaks, awkward syntax, hard-to-find information. The FAQ for proper uses of the null pointer was 100 pages long. Clearly a language is screwed if it takes 100 pages to describe just one aspect of the language (mostly things you must not do). When I read Creators Admit UNIX, C Hoax, I laughed my ass off because I thought it was true – C was a freakin’ prank. Only years later did a couple UNIX experts really teach me C and UNIX (no, they don’t teach you languages at Cal, they just assume you’re plugged into The Matrix and will imprint them into your brain as needed). Only when they handed me a copy of Using C on the UNIX System did I really start to admire the power of the C language and the beauty of UNIX’s architecture.
Both are incredibly powerful, and the essence of flexibility and extensibility. Ritchie’s passing is a good time to reflect on their landmark achievements and celebrate all the things that we use almost every minute of the day, which have been built on those two standards. – AL

If there are so many detection techniques, why do they still suck? Lenny Z highlights the current state of the art for malware detection in a couple of articles at SearchSecurity: How antivirus software works: Virus detection technique, and in the deeper Antimalware product suites: Understanding capabilities and limitations, on full endpoint suites. But he begs the question: with all this technology, why can’t we stop the bad guys? Because they have changed tactics. They are going after users and applications, preying on those who haven’t updated their devices and the simply stupid (or ignorant, which is just as good for their purposes). Yes, there are plenty of easy targets. But whining about what we can’t do isn’t my style, so let’s step back to fundamentals. Assume that devices (at least some of them) are compromised. The ones that must not get compromised (high value assets) should be locked down – even if users squeal like stuck pigs. Monitor the hell out of everything, and do some egress filtering and/or DLP monitoring to make sure stuff doesn’t get out. But we cannot assume that anti-malware provides any security. – MR

You already had to do it: There has been a lot of hubbub this week over recent guidance from the SEC that public companies should report on cyber-security risk. This is interesting, because my understanding has been that companies have always been required to report any potentially material risk, no matter its origin. We have seen companies report major breach losses for a while, and in rare cases they report some of the cyber risk (usually as an add-on to a public breach). That the SEC felt they needed to issue additional guidance means that companies were either confused (I don’t see what’s confusing – a loss is a loss), trying to play games, or simply not reporting. So I don’t


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.