Friday Summary: November 4, 2011

I wouldn’t say I’m a control freak, but I am definitely “control aligned”. If something is important to me I like to know what’s going on under the hood. I also hate to depend on someone else for something I’m capable of doing myself. So I have no problem trusting my accountant to keep me out of tax jail, or hiring a painter for the house, but there is a long list of things I tend to overanalyze and have trouble letting go of. Pretty damn high up that list is the Securosis Nexus.

I have been programming as a hobby since third grade, and for a while there in the early days of web applications it was my full-time profession. I don’t know C worth a darn, but I was pretty spiffy with database design and my (now antiquated) toolset for building web apps. I still code when I can, but it’s more like home repair than being a general contractor.

When Mike, Adrian, and I came up with the idea for the Nexus I did all the design work, from the UI sketches we sent to the visual designers to the features and logic flow. Not that I did it all alone, but I took point, and I’m the one who interfaces with our contractors. Which is where I’m learning how to let go. The hard way.

I have managed (small) programming teams before, but this is my first time on the hiring side of the contractor relationship. It’s also the first time I haven’t written any significant amount of code for something I’m pretty much betting my future on (and the future of my partners and our families). Our current contractor team is great. Among other things they suggested an entirely new architecture for the backend that is far better than my initial plans and our PoC code. I wish they would QA a little better (hi guys!), and we don’t always see things the same way, but I’m damn happy with the product.

But it’s extremely hard for me to rely on them. For example, today I wanted to change how a certain part of the system functions (how we handle internal links). I know what needs to be done, and even know generally what needs to happen within the code, but I realized I would probably just screw it up. And it would take me a few hours (to screw up), while they can sort it all out in a fraction of the time. I don’t know why this bothers me. Maybe it’s knowing that I’ll see a line item on an invoice down the road. But it’s probably some deep-seated need to feel I’m in control and not dependent on someone else for something so important. But I am. And I need to get used to it.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Me (Rich) in a DLP video I did for Trend Micro. I really liked the video crew on this one and the quality shows. I may need to get myself a Canon DSLR for our future Securosis videos instead of our current HD camcorder.
  • I also wrote up how to recover lost iCloud data based on my own serious FAIL this week.

Favorite Securosis Posts

  • Mike Rothman: Virtual USB? Not. Adrian has it right here. Even though it’s more secure to carry (yet another) device, users won’t do it. They want everything on their smartphone, and they will get it. It’s just a matter of when, and at what cost (in terms of security or data loss).
  • Adrian Lane: How Regular Folks See Online Safety. Lately news items are right out of Theater of the Absurd: Security Tragicomedy.
  • Rich: Tokenization Guidance: Audit Advice. Adrian is really building the most definitive guide out there.

Other Securosis Posts

  • Incite 11/2/2011: Be Yourself.
  • Conspiracy Theories, Tin Foil Hats, and Security Research.
  • Applied Network Security Analysis: The Advanced Security Use Case.
  • Applied Network Security Analysis: The Forensics Use Case.

Favorite Outside Posts

  • Mike Rothman: 3 Free Tools to Fake DNS Responses for Malware Analysis. This is a good tip for testing, but also critical for understanding the tactics adversaries will use against you.
  • Adrian Lane: The Chicago Way. Our own Dave Lewis does the best job in the blogosphere at explaining what the heck is going on with the Anonymous / Los Zetas gang confrontation.
  • James Arlen: Harvard Stupid. Two posts in one – an interesting financial story tailed by an excellent example of how security should be implemented from a big-picture view. If you run IT security for your company, read this!
  • Rich: Kevin Beaver on why users violate policies. I don’t agree with the lazy comment, though – it’s not laziness if your goal is to get your job done and something is standing in the way.

Research Reports and Presentations

  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.

Top News and Posts

  • UK Cops Using Fake Mobile Phone Tower to Intercept Calls, Shut Off Phones.
  • Malaysian CA Digicert Revokes Certs With Weak Keys, Mozilla Moves to Revoke Trust. Four CAs Have Been Compromised Since June.
  • Hackers attacked U.S. government satellites.
  • How Visa Protects Your Data.
  • Exposing the Market for Stolen Credit Cards Data.
  • ‘Nitro’ Cyberespionage Attack Targets Chemical, Defense Firms.

Blog Comment of the Week

This week we are redirecting our donation to support Brad “theNurse” Smith. This week’s best comment goes to Zac, in response to Conspiracy Theories, Tin Foil Hats, and Security Research.

I personally think that the problem with the media hype is that it seems to distract more than inform. The overall result being that you end up with “experts” arguing over inconsequential


Incite 11/2/2011: Be Yourself

Last week I was invited to speak at Kennesaw State University’s annual cybersecurity awareness day. They didn’t really give me much direction on the topic, so I decided to give my Happyness presentation. I figured there would be students and other employees who could benefit from my journey from total grump to fairly infrequent grump, and a lot of the stuff I’ve learned along the way.

One of my key lessons is to accept the way I am and stop trying to be someone else. Despite my public persona I like (some) people. Just not many, and in limited doses. I value and need my solitary time, and I have designed a lifestyle to embrace that. I say what I think, and know I can be blunt. Don’t ask me a question if you don’t want the answer. Sure, I have mellowed over the years, but ultimately I am who I am, and my core personality traits are unlikely to change.

The other thing I have realized is the importance of managing expectations. For example, I was at SecTor CA a few weeks back, and at the beginning of my presentation on Cloud Security (WMV file), I mentioned the Internet with a snarky, “You know, the place where pr0n is.” (h/t to Rich – it’s his deck). There was a woman sitting literally in the front row who blurted out, “That’s totally inappropriate.” I immediately stopped my pitch, because this was a curious comment. I asked the woman what she meant. She responded that she didn’t think it was appropriate to mention pr0n on a slide in a conference presentation. Yeah, I guess she doesn’t get to many conferences. But it wasn’t something I was going to gloss over. So I responded: “Oh you think so, then this may not be the session for you.” Yes, I really said that, much to the enjoyment of everyone else in the room. I figured given the rest of the content and my presentation style that this wasn’t going to end well. There was no reason for her to spend an hour and be disappointed. To her credit, she got up and found another session, which was the best outcome for both of us.

Earlier in my career, I would have let it go. I would probably have adapted my style a bit to be less, uh, offensive. I would have gotten the session done, but it wouldn’t have been my best effort. Now I just don’t worry about it. If you don’t like my style, leave. If you don’t think I know what I’m talking about, leave. If you don’t like my blog posts, don’t read them. It’s all good. I’m not going to feel bad about who I am. That philosophy comes directly from Steve Jobs: “Your time is limited, so don’t waste it living someone else’s life.” I have got lots of problems, but trying to be someone else isn’t one of them. For that I’m grateful.

So just be yourself, not who they want you to be. That’s the only path to make those fleeting moments of happiness less fleeting.

-Mike

Photo credits: “Just be Yourself” originally uploaded by Akami

Incite 4 U

  • Keeping tabs on theNurse: I know Brad “theNurse” Smith isn’t familiar to most of you, but if you have been to a major security conference, odds are you have seen him and perhaps met him. I first met Brad 5+ years ago when we worked as Black Hat room proctors together, and have since seen him all over the place. Last week Brad suffered a serious stroke while delivering a presentation at the Hacker Halted conference in Miami, and he still hasn’t regained consciousness. You can get updates on Brad over at the social-engineer.org site, and can leave donations if you want. Maybe I’m identifying a bit too much after my recent health scare on the road, but we feel terrible for Brad and his wife, and all of us at Securosis wish them the best. We are also putting our money where our mouths are, and directing (and increasing) our Friday Summary donation his way this week. – RM
  • The weakest link? Your people… I just love stories of social engineering. Yes, there are some very elegant technical attacks, but they seem so much harder than just asking for access to the stuff you need. Like a wiring closet or conference room. Why pick the lock on the door when they’ll just open it when you knock? Kai Axford had a great video (WMV) of actually putting his own box into a pen test client’s wiring closet – with help from the network admin – in his SecTor CA presentation. And NetworkWorld has a good story on social engineering, including elegant use of a tape measure. But it’s not like we haven’t seen this stuff before. On my golf trip, we stumbled across Beverly Hills Cop on a movie channel, and Axel Foley is one of the best social engineers out there. – MR
  • Token gesture: 403 Labs QSA and PCI columnist Walt Conway noted a major change to the PCI Special Interest Groups (SIGs) this year. The “participating organizations” – a group comprised mostly of the merchants who are part of the PCI Council – will get the deciding vote on which SIGs get to provide the PCI Council advice. Yes, they get a vote on what topics get the option of community guidance. The SIGs do a lot of the discovery and planning work that goes into the guidance ultimately published by the PCI Council – end-to-end encryption is one example. Unless, of course, someone like Visa objects to the SIG’s guidance, in which case the PCI Council squashes it like a bug – as they did with tokenization. This olive branch is nice, but it’s a token, minuscule gesture. – AL
  • Job #1: Keep head attached to body: I joke a lot during presentations about the importance of a public execution


How Regular Folks See Online Safety, and What It Says about Us

I remember very clearly the day I vowed to stop watching local news. I was sitting at home cooking dinner or something, when a teaser report of a toddler who died after being left in a car in the heat aired during that “what we’re covering tonight” opening to the show. It wasn’t enough to report the tragedy – the reporter (a designation she surely didn’t deserve) seemed compelled to illustrate the story by locking a big thermometer in the car, to be pulled out during the actual segment. Frankly, I wanted to vomit. I have responded to more than a few calls involving injured or dead children, and I was disgusted by the sensationalism and desperate bid for ratings.

With rare exceptions, I haven’t watched local news since then; I can barely handle cable news (CNN being the worst – I like to say Fox is right, MSNBC left, and CNN stupid). But this is how a large percentage of the population learns what’s going on outside their homes and work, so ‘news’ shows frame their views. Local news may be crap, but it’s also a reflection of the fears of society. Strangers stealing children, drug assassins lurking around every corner, and the occasional cancer-causing glass of water.

So I wasn’t surprised to get this email from a family member (who found it amusing):

Maybe you have seen this, but thought I would send it on anyway. SCARY.. This is a MUST SEE/ READ. If you have children or grandchildren you NEED to watch this. I had no idea this could happen from taking pictures on the blackberry or cell phone. It’s scary. http://www.youtube.com/watch?v=N2vARzvWxwY

Crack open a cold beer and enjoy the show… it’s an amusing report on how frightening geotagged photos posted online are. I am not dismissing the issue. If you are, for example, being stalked or dealing with an abusive spouse, spewing your location all over the Internet might not be so smart. But come on people, it just ain’t hard to figure out where someone lives. And if you’re a stalking victim, you need better sources for guidance on protecting yourself than stumbling on a TV special report or the latest chain mail.

But there are two reasons I decided to write this up (aside from the lulz). First, it’s an excellent example of framing. Despite the fact that there is probably not a single case of a stranger kidnapping due to geotagging, that was the focus of this report. Protecting your children is a deep-seated instinct, which is why so much marketing (including local news, which is nothing but marketing by dumb people) leverages it. Crime against children has never been less common, but plenty of parents won’t let their kids walk to school “because the world is different” than when they grew up. Guess what: we are all subject to the exact same phenomenon in IT security. Email is probably one of the least important data loss channels, but it’s the first place people install DLP. Not a single case of fraud has ever been correlated with a lost or stolen backup tape, but many organizations spend multiples more on those tapes than on protecting web applications.

Second, when we are dealing with non-security people, we need to remember that they always prioritize security based on their own needs and frame of reference. Policies and boring education about them never make someone care about what you care about as a security pro. This is why most awareness training fails. To us this report is a joke. To the chain of people who passed it on, it’s the kind of thing that freaks them out. They aren’t stupid (unless they watch Nancy Grace) – they just have a different frame of reference.
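For anyone curious just how little effort “figuring out where someone lives” takes, here is a minimal sketch that reads the GPS coordinates embedded in a geotagged photo. It is an illustration only – it assumes the Pillow imaging library and a JPEG with intact EXIF data, and the file name is made up.

```python
# Minimal sketch: pull GPS coordinates out of a geotagged JPEG.
# Assumes the Pillow library (pip install Pillow); "vacation.jpg" is a made-up example.
from PIL import Image

GPS_TAG = 34853  # EXIF tag ID for the GPSInfo block

def to_degrees(values):
    """Convert EXIF (degrees, minutes, seconds) rationals to decimal degrees."""
    d, m, s = (float(v[0]) / float(v[1]) if isinstance(v, tuple) else float(v) for v in values)
    return d + m / 60.0 + s / 3600.0

img = Image.open("vacation.jpg")
exif = img._getexif() or {}
gps = exif.get(GPS_TAG)

if gps:
    lat = to_degrees(gps[2]) * (-1 if gps[1] == "S" else 1)  # keys 1/2: latitude ref/value
    lon = to_degrees(gps[4]) * (-1 if gps[3] == "W" else 1)  # keys 3/4: longitude ref/value
    print(f"This photo was taken at roughly {lat:.5f}, {lon:.5f}")
else:
    print("No geotag found in this photo")
```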


Tokenization Guidance: Audit Advice

In this portion of our Tokenization Guidance series I want to offer some advice to auditors. I am addressing both internal auditors going through one of the self-assessment questionnaires and external auditors validating adherence to PCI requirements. For the most part auditors follow PCI DSS for the systems that process credit card information, just as they always have. But I will discuss how tokenization alters the environment, and how to adjust the investigation process in the select areas where a tokenization system supplants PAN processing. At the end of this paper I will go section by section through the PCI DSS specification and talk about specifics, but here I just want to provide an overview.

So what does the auditor need to know? How does it change discovery processes? We have already set the ground rules: anywhere PAN data is stored, applications that make tokenization or de-tokenization requests, and all on-premise token servers require thorough analysis. For those systems, here is what to focus on:

  • Interfaces & APIs: At the integration points (APIs and web interfaces) for tokenization and de-tokenization, you need to review security and patch management – regardless of whether the server is in-house or hosted by a third party. The token server vendor should provide the details of which libraries are installed, and how the systems integrate with authentication services. But not every vendor is great with documentation, so ask for this data if they failed to provide it. And merchants need to document all applications that communicate with the token server. This encompasses all communication, including token-for-PAN transactions, de-tokenization requests, and administrative functions.
  • Tokens: You need to know what kind of tokens are in use – each type carries different risks.
  • Token Storage Locations: You need to be aware of where tokens are stored, and merchants need to designate at least one storage location as the ‘master’ record repository to validate token authenticity. In an on-premise solution this is the token server; but for third-party solutions, the vendor needs to keep accurate records within their environment for dispute resolution. This system needs to comply fully with PCI DSS to ensure tokens are not tampered with or swapped.
  • PAN Migration: When a tokenization service or server is deployed for the first time, the existing PAN data must be removed from where it is stored, and replaced with tokens. This can be a difficult process for the merchant and may not be 100% successful! You need to know what the PAN-to-token migration process was like, and review the audit logs to see if there were issues during the replacement process. If you have the capability to distinguish between tokens and real PAN data, audit some of the tokens as a sanity check (a minimal sketch of one such check appears at the end of this post). If the merchant hired a third-party firm – or the vendor – then the service provider supplies the migration report.
  • Authentication: This is key: any attacker will likely target the authentication service, the critical gateway for de-tokenization requests. As with the ‘Interfaces’ point above: pay careful attention to separation of duties, the least privilege principle, and limiting the number of applications that can request de-tokenization.
  • Audit Data: Make sure that the token server, as well as any API or application that performs tokenization/de-tokenization, complies with PCI Requirement 10. This is covered under PCI DSS, but these log files become a central part of your daily review, so it is worth repeating here.
  • Deployment & Architecture: If the token server is in-house or managed on-site you will need to review the deployment and system architecture. You need to understand what happens in the environment if the token server goes down, and how token data is synchronized between multi-site installations. Weaknesses in the communications, synchronization, and recovery processes are all areas of concern; the merchant and/or vendors must document these facilities, and the auditor needs to review them.
  • Token Server Key Management: If the token server is in-house or managed on-site, you will need to review key management facilities, because every token server encrypts PAN data. Some solutions offer embedded key management while others use external services, but you need to ensure this meets PCI DSS requirements.

For non-tokenization usage, and systems that store tokens but do not communicate with the token server, auditors need to conduct basic checks to ensure the business logic does not allow tokens to be used as currency. Tokens should not be used to initiate financial transactions! Make certain that tokens are merely placeholders or surrogates, and don’t act as credit card numbers internally. Review select business processes to verify that tokens don’t initiate a business process or act as currency themselves. Repayment scenarios, chargebacks, and other monetary adjustments are good places to check. The token should be a transactional reference – not currency or a credit proxy. These uses invite fraud; in the event of a compromised system, they might be used to initiate fraudulent payments without credit card numbers.

The depth of these checks varies – merchants filling out self-assessment questionnaires tend to be more liberal in interpreting the standard than top-tier merchants who have external auditors combing through their systems. But these audit points are the focus for either group. In the next post, I will provide tables which go point by point through the PCI requirements, noting how tokenization alters PCI DSS checks and scope.
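To illustrate the PAN migration sanity check mentioned above: one low-tech way to spot values that may still be live card numbers is to run stored ‘tokens’ through a Luhn check, since real PANs always pass it and many token formats do not. This is a minimal sketch under that assumption – some token schemes deliberately generate Luhn-valid values, so treat hits as leads to investigate, not findings. The sample data is hypothetical.

```python
# Minimal sketch: flag stored values that still look like live PANs.
# Real card numbers always pass the Luhn checksum; many (not all) token
# formats fail it, so a hit is worth a closer look during the audit.

def luhn_valid(value: str) -> bool:
    digits = [int(c) for c in value if c.isdigit()]
    if len(digits) < 13 or len(digits) > 19:   # outside plausible PAN lengths
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:                         # double every second digit from the right
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

# Hypothetical sample pulled from a column that should contain only tokens.
sample = ["4111111111111111", "4999990123456781", "TKN-83650021"]
for value in sample:
    if luhn_valid(value):
        print(f"{value[:6]}... passes Luhn - possible live PAN, investigate")
```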


Conspiracy Theories, Tin Foil Hats, and Security Research

It seems far too much of security research has become like Mel Gibson in “Conspiracy Theory.” Unbalanced, mostly crazy, but not necessarily wrong. But we created this situation, so we have to deal with it.

I’m reacting to the media cycle around the Duqu virus, or Son of Stuxnet, identified by F-Secure (among others). You see, no one is interested in product news anymore. No one cares about the incremental features of a vendor widget. They don’t care about success stories. The masses want to hear about attacks. Juicy attacks that take down nuclear reactors. Or steal zillions of dollars. Or result in nudie pictures of celebrities stolen from their computers or cell phones. That’s news today, and that’s why vendor research teams focus on giving the media news, rather than useful information.

It started with F-Secure claiming that Duqu was written by someone with access to the Stuxnet source code. Duqu performs reconnaissance rather than screwing with centrifuges, but their message was that this is a highly sophisticated attack, created by folks with Stuxnet-like capabilities. The tech media went bonkers. F-Secure got lots of press, and the rest of the security vendors jumped on – trying to credit, discredit, expand, or contract F-Secure’s findings – anything that would get some press attention. Everyone wanted their moment in the sun, and Duqu brought light to the darkness.

But here’s the thing. Everyone saying Duqu and Stuxnet were related in some way might have been wrong. The folks at SecureWorks released research a week later, making contrary claims and disputing any relation beyond some coarse similarities in how the attacks inject code (using a kernel driver) and obscure themselves (encryption and signing using compromised certificates). The media went bonkers again. Nothing like a spat between researchers to drive web traffic to the media.

So who is right? That is actually the wrong question. It really doesn’t matter who is right. Maybe Duqu was done by the Stuxnet guys. Maybe it wasn’t. Ultimately, though, to everyone aside from page-whoring beat reporters who benefit from another media cycle, who’s right and who’s wrong about Duqu’s parentage aren’t relevant. The only thing that matters is that you, as a security professional, understand the attack; and have controls in place to protect against it. Or perhaps not – analyzing the attack and accepting its risk is another legitimate choice.

This is how the process is supposed to work. A new threat comes to light, and the folks involved early in the cycle draw conclusions about the threat. Over time other researchers do more work and either refute or confirm the original claims. The only thing different now is that much of this happens in public, with the media showing how the sausage is made. And it’s not always pretty.

But success in security is about prioritizing effectively, which means shutting out the daily noise of media cycles and security research. Not that most security professionals do anything but fight fires all day anyway. Which means they probably don’t read our drivel either…

Photo credit: “Tin Foil Hat” originally uploaded by James Provost


Applied Network Security Analysis: The Advanced Security Use Case

The forensics use case we discussed previously is about taking a look at something that already happened. You presume the data is already lost, the horse is out of the barn, and Pandora’s Box is open. But what if we tried to look at some of these additional data types in terms of making security alerts better, with the clear goal of reducing the window between exploit and detection: reacting faster? Can we leverage something like network full packet capture to learn sooner when something is amiss and to improve security?

Yes, but this presents many of the same challenges as using log-based analysis to detect what is going on. You still need to know what you are looking for, and an analysis engine that can not only correlate behavior across multiple types of logs, but also analyze a massive amount of network traffic for signs of attack. So when we made the point in Collection and Analysis that these Network Security Analysis platforms need to be better SIEMs than a SIEM, this is what we were talking about.

Pattern Matching and Correlation

Assuming that you are collecting some of these additional data sources, the next step is to turn said data into actionable information, which means some kind of alerting and correlation. We need to be careful when using the ‘C’ word (correlation), given the nightmare most organizations have when they try to correlate data on SIEM platforms. Unfortunately the job doesn’t get any easier when extending the data types to include network traffic, network flow records, etc. So we continue to advocate a realistic and incremental approach to analysis. Much of this approach was presented (in gory detail) in our Network Security Operations Quant project.

  • Identify high-value data: This is key – you probably cannot collect from every network, nor should you. So figure out the highest profile targets and start with them.
  • Build a realistic threat model: Next put on your hacker hat and build a threat model for how you’d attack that high-value data. It won’t be comprehensive, but that’s okay. You need to start somewhere. Figure out how you would attack the data if you needed to.
  • Enumerate those threats in the tool: With the threat models, design rules to trigger based on the specific attacks you are looking for.
  • Refine the rules and thresholds: The only thing we can know for certain is that your rules will be wrong. So you will go through a tuning process to home in on the types of attacks you are looking for.
  • Wash, rinse, repeat: Add another target or threat and build more rules as above.

With the additional traffic analysis you can look for specific attacks. Whether it’s looking for known malware (which we will talk about in the next post), traffic destined for a known command and control network, or tracking a buffer overflow targeted at an application residing in the DMZ, you get a lot more precision in refining rules to identify what you are looking for. Done correctly this reduces false positives and helps to zero in on specific attacks. Of course the magic words are “done correctly”. It is essential to build the rule base incrementally – test the rules and keep refining the alerting thresholds – especially given the more granular attacks you can look for.

Baselining

The other key aspect of leveraging this broader data collection capability is understanding how baselines change from what you may be used to with SIEM. Using logs (or more likely NetFlow), you can get a feel for what is normal behavior and use that to kickstart your rule building. Basically, you assume what is happening when you first implement the system is what should be happening, and alert if something varies too far from that normal. That’s not actually a safe assumption, but you need to start somewhere. As with correlation this process is incremental. Your baselines will be wrong when you start, and you adjust them over time based on operational experience responding to alerts. But the most important step is the start, and baselines help to get things going.

Revisiting the Scenario

Getting back to the scenario presented in the Forensics use case, how would some of this more pseudo-real-time analysis help reduce the window between attack and detection? To recap that scenario briefly, a friend at the FBI informed you that some of your customer data showed up as part of a cybercrime investigation. Of course by the time you get that call it is too late. The forensic analysis revealed an injection attack enabled by faulty field validation on a public-facing web app. If you were looking at network full packet capture, you might find that attack by creating a rule to look for executables entered into the form fields of POST transactions, or some other characteristic signature of the attack (see the sketch at the end of this post). Since you are capturing the traffic on the key database segment, you could establish a content rule looking for content strings you know are important (as a poor man’s DLP), and alert when you see that type of data being sent anywhere but the application servers that should have access to it. You could also, for instance, set up alerts on seeing an encrypted RAR file on an egress network path. There are multiple places you could detect the attack if you know what to look for.

Of course that example is contrived and depends on your ability to predict the future, figuring out the vectors before the attack hits. But a lot of this discipline is based on a basic concept: “Fool me once, shame on you. Fool me twice, shame on me.” Once you have seen this kind of attack – especially if it succeeds – make sure it doesn’t work again. It’s a bit of solving yesterday’s problems tomorrow, but many security attacks use very similar tactics. So if you can enumerate a specific attack vector based on what you saw, there is an excellent
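As a hedged illustration of the kind of rule described in the scenario above, here is a minimal sketch that flags HTTP POST form fields carrying executable content. It assumes some upstream capture tool has already reassembled POST bodies into (source IP, field name, value) tuples – the function name and sample data are illustrative, not from any particular product.

```python
# Minimal sketch: flag POST form fields whose values begin with a known
# executable file signature. Field extraction from packet capture is assumed
# to happen upstream; the tuples below are illustrative only.

EXEC_MAGIC = {
    b"MZ": "Windows PE",
    b"\x7fELF": "Linux ELF",
    b"\xcf\xfa\xed\xfe": "Mach-O 64-bit",
}

def flag_executable_fields(post_fields):
    """Yield an alert string for each form field that starts with an executable signature."""
    for src_ip, field_name, value in post_fields:
        for magic, label in EXEC_MAGIC.items():
            if value.startswith(magic):
                yield f"ALERT: {label} payload in POST field '{field_name}' from {src_ip}"

# Toy usage:
sample = [
    ("10.1.1.50", "comment", b"great post, thanks"),
    ("198.51.100.7", "avatar", b"MZ\x90\x00\x03..."),   # looks like a PE header
]
for alert in flag_executable_fields(sample):
    print(alert)
```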


Virtual USB? Not.

Secure USB devices – ain’t they great? They offer us the ability to bring trusted devices into insecure networks, and perform trusted operations on untrusted computers. If I could drink out of one, maybe it would be the holy grail. Services like cryptographic key management, identity certificates and mutual authentication, sensitive document storage, and a pr0nsafe web browser platform.

But over the last year, as I look at the mobile computing space – the place where people will want to use secure USB features – the more I think the secure USB market is in trouble. How many of you connect a USB stick to your Droid phone? How about your iPad? My point is that when you carry your smart device with you, you are unlikely to carry a secure USB device with you as well. The security services mentioned above are necessary, but there has been little integration of these functions into the devices we carry. USB hardware does offer some security advantages, but USB sticks are largely part of the laptop model (era) of mobile computing, which is being marginalized by smartphones.

Secure online banking, go-anywhere data security, and “The Key to the Cloud” are clever marketing slogans. Each attempts to reposition the technology to gain user preference – and fails. USB sticks are going the way of the zip drive and the CD – the need remains, but they are rapidly being marginalized by more convenient media. That’s really the key: the security functions are strategic but the medium is tactical.

So where does the Secure USB market segment go? It should go where the users are: embrace the new platforms. And smart device users should look for these security features embedded in their mobile platforms. Just because the medium is fading does not mean the security features aren’t just as important as we move on to the next big thing. These things all tend to run in cycles, but the current strong fashion is to get “an app for that” rather than carry another device. Lack of strong authentication won’t make users carry and use laptops rather than phones. It is unclear why USB vendors have been so slow to react, but they need to untie themselves from their fading medium to support user demand. I am not saying secure USB is dead, but I am saying the vendors need to provide their core value on today’s relevant platforms.


Applied Network Security Analysis: The Forensics Use Case

Most organizations don’t really learn about the limitations of event logs until forensic investigators hold up their hands and explain they know what happened, but aren’t really sure how. Huh? How could that happen? It’s pretty simple: logs are a backward-looking indicator. They can help you piece together what happened, but you can only infer how. In a forensic investigation inferring anything is suboptimal. You want to know, especially given the needs to isolate the root cause of the attack and to establish remediations to ensure it doesn’t happen again. So we need to look at additional data sources to fill in gaps in what the logs tell you.

Let’s take a look at a simplified scenario to illuminate the issues. We’ll look at the scenario both from the standpoint of a log-only analysis and then with a few other data sources added. For a more detailed incident response scenario, check out our React Faster and Better paper.

The Forensic Limitations of Logs

It’s the call you never want to get. The Special Agent on the other end of the line called to give you a heads-up: they found some of your customer data as part of another investigation into some cyber-crime activity that helps fund a domestic terrorist ring. Normally the Feds aren’t interested in giving you a heads-up until their investigation is done, but you have a relationship with this agent from your work together in the local InfraGard chapter. So he did you a huge favor.

The first thing you need to do is figure out what was lost and how. To the logs! You aren’t sure how it happened, but you see some strange log records indicating changes on an application server in the DMZ. Given the nature of the data your agent friend passed along, you check the logs on the database server where that data resides as well. Interestingly enough, you find a gap in the logs on the database server, where your system collected no log records for a five-minute period a few days ago. You aren’t sure exactly what happened, but you know with reasonable certainty that something happened. And it probably wasn’t good.

Now you work backwards and isolate the additional systems compromised as the attackers made their way through the infrastructure to reach their target. It’s pretty resource intensive, but by searching in the log manager you can isolate devices with gaps in their logs during the window you identified. The attackers were pretty effective, taking advantage of unpatched vulnerabilities (Damn, Ops!) and covering their tracks by turning off logging where necessary. At this point you know the attack path, and at least part of what was stolen, thanks to the FBI. Beyond that you are blind.

So what can you do to make sure you aren’t similarly surprised somewhere down the line? You can set the logging system to alert if you don’t get any log records from critical assets in any 2-minute period (a minimal sketch of this kind of gap check appears at the end of this post). Again, this isn’t perfect and will result in a bunch more alerts, but at least you’ll know something is amiss before the FBI calls. With only log data you can identify what was attacked, but probably not how the attack happened.

Forensics Driven by Broader Data

Let’s take a look at an alternative scenario with a few other data sources such as full network packet capture, network flow records, and configuration files. Of course it is still a bad day when you get the call from your pal the Special Agent. Of course Applied Network Security Analysis cannot magically make you omniscient, but how you investigate breaches changes.

You still start with the logs on the perimeter server and identify the device that served as the attacker’s initial foothold. But you’ve implemented the Full Packet Capture Sandwich architecture described in the last post, so you are capturing the network traffic in your DMZ. You proceed to the network analysis console (using the full packet capture stream) and search all the traffic to and from the compromised server. Most sessions to that server are typical – standard application traffic. But you find some reconnaissance, and then something pretty strange: an executable injected into the server via faulty field validation on the web app (Damn, Developers!). Okay, this confirms the first point of exploit.

Next we go to the target (keeping in mind what data was compromised) and do a similar analysis. Again, with our full packet capture sandwich in place, we captured traffic to/from the database server as well. As in the log-only scenario, we pinpoint the time period when logging was turned off, then perform a search in our analysis console to figure out what happened during that 5-minute period on that segment. Yep, a privileged account turned off logging on the database server and added an admin account to the database. Awesome. Using that account, the attacker dumped the database table and moved the data to a staging server elsewhere on your network.

Now you know which data was taken, but how? You aren’t capturing all the traffic on your network (infeasible), so you have some blind spots, but with your additional data sources you are able to pinpoint the attack path. The NetFlow records coming from the compromised database server show the path to the staging server. The configuration records from the staging server indicate what executables were installed, which enabled the attacker to package and encrypt the payload for exfiltration. Further analysis of the NetFlow data shows the exfiltration, presumably to yet another staging server on another compromised network elsewhere. It’s not perfect, because you are figuring out what already happened. But now you can get back to your FBI buddy with a lot more information about what tactics the attacker used, and maybe even evidence that might be helpful in prosecution.

Can’t Everyone Get Along?

Clearly this is a simplified scenario that perfectly demonstrates the need to collect additional data sources to isolate the root cause and attack path of any
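Going back to the log-only scenario above, here is a minimal sketch of the “missing logs” alert described there – a check that fires when a critical asset goes quiet for more than two minutes. The event format, hostnames, and times are hypothetical; a real log manager would do this internally.

```python
# Minimal sketch: alert when a critical asset produces no log records for
# more than two minutes. Events are (timestamp, hostname) pairs pulled from
# a hypothetical log store; hostnames and times are illustrative.
from datetime import datetime, timedelta

CRITICAL_ASSETS = {"db-server-01", "dmz-app-01"}
MAX_GAP = timedelta(minutes=2)

def find_log_gaps(events, now):
    """Return alerts for critical assets whose latest log record is too old."""
    last_seen = {}
    for ts, host in events:
        if host in CRITICAL_ASSETS and (host not in last_seen or ts > last_seen[host]):
            last_seen[host] = ts
    alerts = []
    for host in CRITICAL_ASSETS:
        latest = last_seen.get(host)
        if latest is None or now - latest > MAX_GAP:
            alerts.append(f"ALERT: no log records from {host} since {latest}")
    return alerts

# Toy usage:
now = datetime(2011, 11, 4, 12, 10)
events = [(datetime(2011, 11, 4, 12, 9), "dmz-app-01"),
          (datetime(2011, 11, 4, 12, 4), "db-server-01")]   # quiet for 6 minutes
print(find_log_gaps(events, now))
```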


Friday Summary: October 28, 2011

I really enjoyed Marco Arment’s I finally cracked it post, both because he captured the essence of Apple TV here and now, and because his views on media – as a consumer – are exactly in line with mine. Calling DVRs “a bad hack” is spot-on. I went through this process 7 years ago when I got rid of television. I could not accept a 5 minute American Idol segment in the middle of the 30 minute Fox ‘news’ broadcast. Nor the other 200 channels of crap surrounding the three channels I wanted. At the time people thought I was nuts, but now I run into people (okay – only a handful) who have pulled the plug on the broadcast media of cable and satellite. Most people are still frustrated with me when they say “Hey, did you see SuperJunk this weekend?” and I say “No, I don’t get television.” They mutter something like ‘Luddite’ and wander off.

Don’t get me wrong, I have a television. A very nice one in fact, but I have been calling it a ‘monitor’ for the last few years because it’s not attached to broadcast media. But not getting broadcast television does not make me a Luddite – quite to the contrary, I am waiting for the future. I am waiting for the day when I can get the rest of the content I want just as I get streaming Netflix today. And it’s not just the content, but the user experience as well. I don’t want to be boxed into some bizarre set of rules the content owners think I should follow. I don’t want half-baked DRM systems or advertising thrust at me – and believe me, this is what many of the other streaming boxes are trying to do. I don’t want to interact with a content provider because I am not interested – it was a bad idea proven foul a long time ago. Just let me watch what I want to watch when I want to watch it. Not so hard.

But I wanted to comment on Marco’s point about Apple and their ability to be disruptive. My guess is that Apple TV will go fully a la carte: show by show, game by game, movie by movie. But the major difference is we would get first-run content, not just stuff from 2004. Somebody told me the other day that HBO stands for “Hey, Beastmaster’s On!”, which is how some of the streaming services and many of the movie channels feel. SOS/DD. The long tail of the legacy television market. The major gap in today’s streaming is first-run programming. All I really want that I don’t have today is the Daily Show and… the National Football League (cue Monday Night Football soundtrack).

And that’s the point where Mr. Arment’s analysis and mine diverge – the NFL. I agree that whatever Apple offers will likely be disruptive because the technology will simplify how we watch, rather than tiptoeing around legacy businesses and perverse contracts. But today there is only one game in town: the NFL. That’s why all those people pay $60 (in many cases it’s closer to $120) a month – to watch football. You placate kids with DVDs; you subscribe to cable for football! Just about every man I know, and 30% of the women, want to watch their NFL home team on Sunday. It’s the last remaining reason people still pay for cable or satellite in this economy.

Make no mistake – the NFL is the 600 lb. gorilla of television. They currently hold sway over every cable and satellite network in the US. And the NFL makes a ridiculous amount of money because networks must pay princely sums for NFL games to be in the market. Which is why the distributors are so persnickety about not having NFL games on the Internet. Why else would they twist the arm of the federal government to shut down a guy relaying NFL games onto the Internet? (Thanks a ton for that one, you a-holes – metropolitan areas broadcast over-the-air for free but it’s illegal to stream? WTF?)

Nobody broadcasts live games over the Internet!?! Why not?!? The NFL could do it directly – they are already set up with “Game Pass” and “Game Rewind” – but likely can’t because fat network contracts prohibit it. Someone would need to spend the $$$ to get Internet distribution rights. Someone should, because there is huge demand, but there are only a handful of firms which could ante up a billion dollars to compete with DirecTV. But when this finally happens it will be seriously disruptive. Cable boxes will be (gleefully) dumped. Satellite providers will actually have competition, forcing them to alter their contracts and rates, and go back to delivering a quality picture. ISPs will be pressured to actually deliver the bandwidth they claim to be selling. Consumers will get what they want at lower cost and with greater convenience. Networks will scramble to license the rest of their content to any streaming service provider they can, increasing content availability and pushing prices lower.

If Apple wants to be disruptive, they will stream NFL games over the Internet on demand. If they can get rights to broadcast NFL for a reasonable price, they win. The company that gets the NFL for streaming wins. If Apple doesn’t, bet that Amazon will.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted on SaaS security services.
  • Adrian quoted in SearchSOA.
  • Compliance Holds Up Los Angeles Google Apps Deployment. Mike plays master of the obvious. Ask the auditor before you commit to something that might be blocked by compliance. Duh!

Favorite Securosis Posts

  • Adrian Lane: A Kick-Ass Cloud Database Security Automation Example. And most IaaS cloud providers have the hooks to do most of this today. You can even script the removal of base database utilities you don’t want. Granted, you still have to set permissions on data and users, but the


Next Generation != (Always) Better

It all started with a simple tweet from The Mogull, which succinctly summed up a lot of the meat grinder of high tech marketing. You see, the industry is based on upgrades and refreshes, largely driven by planned obsolescence. Let’s just look at Microsoft Word. I haven’t really used any new functionality since Office 2003. You? They have overhauled the UI and added some cloudiness (which they call Office Live), but it’s really moving deck chairs around. A word processor is a word processor for 95% of the folks out there.

Rich was reacting to the constant barrage of “next generation” this and “next generation” that we constantly get pitched, while most organizations can’t even make the current generation work. It is becoming rare to survive a vendor briefing without hearing about how their product is NextGen (only their product, of course). This is rampant in the spaces I cover: network and endpoint security. Who hasn’t heard of a next generation firewall? Now we have next generation IPS, and it’s just a matter of time before we see next generation TBD promising to make security easy. We know how this movie ends.

To be fair, some innovations really are next generation, and they make a difference to leading edge companies that can take advantage of them. I mentioned NGFW in a tongue-in-cheek fashion, but the reality is that moving away from ports and protocols, to application awareness, is fundamentally different and can be better. But only if the customer can take advantage and build these new application-oriented policies. A NGFW is no better than a CGFW (current generation firewall) without a next-generation rule base to take advantage of the additional capabilities.

I guess what I find most frustrating about the rush to the next generation is the arbitrary nature of what is called “next generation”. Our pals at the Big G (that’s Gartner for you Securosis n00bs) recently published a note on NGIPS (next generation IPS), which you can get from SourceFire (behind a reg wall). As the SourceFire folks kindly point out, they have offered many of these so-called next generation functions since 2003 – they just couldn’t tell a coherent story about it. Can something over 6 years old really be next generation?

So next generation monikers are crap, driven by backwards-looking indicators – like most big IT research. SourceFire did a crappy job of communicating why their IPS was different back in the day, and it wasn’t until some other companies (notably the NGFW folks) started offering application-aware IPS capabilities that the infinite wisdom in Stamford decided it was suddenly time for NGIPS. And now this will start a vendor hump-a-thon where every other IPS vendor (yeah, the two left) will need to spin their positioning to say ‘NGIPS’ a lot. Whether they really do NGIPS is beside the point. You can’t let the truth get in the way of a marketing campaign, can you?

What’s lost in all the NextGen quicksand? What customers need. Most folks don’t need a next generation word processor, but one shows up every 2-3 years like clockwork. Our infrastructure security markets are falling in line with this model. Do we need NextGen key management? NextGen endpoint security? NextGen application protection? Given how well the current generation works, I’d say yes. But here’s the problem. I know this is largely a marketing exercise, so let’s be clear about what we are looking for. Something that works. Call it what you want, but if it’s the same old crap that we couldn’t use before, rebranded as next generation… I’m not interested. And no one else will be either.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.