Securosis

Research

Applied Network Security Analysis: The Breach Confirmation Use Case

As our last use case in Applied Network Security Analysis, it’s time to consider breach confirmation: confirming and investigating a breach that has already happened. There are clear similarities to the forensics use case, but breach confirmation takes forensic analysis to the next level: you need to learn the extent of the breach, determining exactly what was taken and from where. So let’s revisit our Forensics scenario to see how it can be extended to confirm a breach.

In that scenario, a friend at the FBI gave you a ring to let you know they found some of your organization’s private data during a cybercrime investigation. In the initial analysis you found the compromised devices and the attack paths, as part of isolating the attack and containing the damage. You cleaned the devices and configured IPS rules to ensure those particular attacks will be blocked in the future. You also added some rules to the network security analysis platform to ensure you are watching for those attack traffic patterns moving forward – just in case the attackers evade the IPS. But you still can’t answer the question “What was stolen?” with any precision.

We know the attackers compromised a device and used it to pivot within your environment to get to the target: a database of sensitive information. We can’t assume that was the only target, and if your attack was like any other involving a persistent attacker, they found multiple entry points and have a presence on multiple devices. So it’s time to head back into your analysis console to figure out a few more things.

What other devices were impacted: Start by figuring out how many other machines were compromised via the perimeter device. By searching all traffic originating from the compromised DMZ server, you can see which devices were scanned and possibly owned. Then you can confirm using either the configuration data you’ve been collecting, or by analyzing each machine with an endpoint forensics tool.
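A first pass at that search can be sketched as a simple filter over captured flow records. This is purely illustrative – the addresses and field names are invented, and real analysis platforms expose their own query languages:

```python
# Hypothetical sketch: find internal hosts contacted by a compromised
# DMZ server in captured flow records. Field names and addresses are
# illustrative assumptions, not any particular product's schema.
import ipaddress

COMPROMISED_DMZ_HOST = "203.0.113.10"            # assumed address of the owned server
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")  # assumed internal address space

def suspect_targets(flows):
    """Return internal hosts the compromised server connected to."""
    targets = set()
    for flow in flows:
        if flow["src"] != COMPROMISED_DMZ_HOST:
            continue
        if ipaddress.ip_address(flow["dst"]) in INTERNAL_NET:
            targets.add(flow["dst"])
    return sorted(targets)

flows = [
    {"src": "203.0.113.10", "dst": "10.1.2.3", "dport": 445},
    {"src": "203.0.113.10", "dst": "10.1.2.7", "dport": 22},
    {"src": "198.51.100.9", "dst": "10.1.2.3", "dport": 80},
]
print(suspect_targets(flows))  # hosts to check with config data or endpoint forensics
```

In practice you would run this kind of query against days or weeks of collected traffic, then confirm each hit using the configuration data or an endpoint forensics tool.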
You may already have some or all of this information from trying to isolate the attack path.

What was taken: Next you need to figure out what was taken. You already know at least one egress path, identified during the initial forensic analysis. Now you need to dig deeper into the egress captures to see whether there were any other connections or file transfers to unauthorized sites. Attackers continue to improve their exfiltration techniques, so it’s likely they used both encrypted protocols and encrypted files to make it hard to figure out what was actually stolen. Having the full packet stream allows you to analyze the actual files, though depending on the sophistication of your attacker, you might need specialized help (from third-party experts or law enforcement) to break the crypto.

Remember that the first wave of forensic investigation focuses on determining the attack paths and gathering enough information to do an initial damage assessment and remediation. From there it’s all about containing the immediate damage as best you can. This next level of forensic analysis is more comprehensive: determine the true extent of the compromise and inventory what was taken. As you can imagine, without the network packet capture it’s impossible to do this analysis. You would be stuck with log files telling you that something happened, but not what, how, or how much was taken. That’s why we keep harping on the need for more comprehensive data on which to base network security analysis. Clearly you can’t capture all the data flowing around your networks, so it’s likely you’ll miss something. But you will have a lot more useful information for responding to attacks than organizations which do not capture traffic.

Summary

To wrap up our Applied Network Security Analysis series, let’s revisit some of the key concepts we’ve covered. First of all, in today’s environment, you can’t assume you will be able to stop a targeted attacker.
It is much smarter and more realistic to assume your devices are compromised, and to act accordingly. This assumption puts a premium on detecting successful attacks, preventing breaches, and containing damage. All those functions require data collection to understand what is happening in your environment.

Log Data Is Not Enough: Most organizations start by collecting their logs, which is clearly necessary for compliance purposes. But it’s not sufficient – not by a long shot. Additional data – including configuration, network flow, and even the full network packet stream – is key to understanding what is happening in your environment.

Forensics: We walked through how these additional data sources can be instrumental in a forensic investigation, ultimately resulting in a breach confirmation (or not). The key objective of forensics is to figure out what happened, and the ability to replay attacks and monitor egress points (using the actual traffic) replaces forensic interpretation with tested fact. And forensics folks like facts much better.

Security: These additional data sources are not just useful after an attack has happened, but also help you recognize issues earlier. By analyzing the traffic in your perimeter and critical network segments, you can spot anomalous behavior and investigate preemptively. To be fair, the security use cases are predicated on knowledge of what to look for, which is never perfect. But given the number of less sophisticated attackers using known attacks, making sure you don’t get hit by the same attack twice is very useful.

We all know there are always more projects than your finite resources and funding will allow. So why should network security analysis bubble up to the top of your list? The answer is about what we don’t know – we cannot be sure what the next attack will be, but we know it will be hard to detect, and it will steal critical information.
We built the Securosis security philosophy on Reacting Faster and Better, by focusing on gathering as much data as possible, which requires an institutional commitment to data collection and analysis.


A Public Call for eWallet Design Standards

Last week StorefrontBacktalk ran an article on Mobile Wallets. It underscored my personal naivete in assuming that anyone who designed and built a digital wallet for ecommerce would first and foremost protect customer payment data and other private information. Reading this post I had one of those genuine “Oh $&!#” moments – what if the wallet provider was not interested in my security or privacy? Duh!

A wallet is a small data store for your financial, personal, and shopping information. Think about that for a minute. If you buy stuff on your computer or from your phone via an eWallet app, over time it will collect a ton of information. Transaction receipts. Merchant lists. Merchant relationship information such as passwords. Buying history. Digital coupons. Pricing information. Favorites and wish lists. Private keys. This is all in addition to “payment instruments” such as credit cards, PayPal accounts, and bank account information. Along with personal data including phone number, address, and (possibly) Social Security Number for antitheft/identity verification. It’s everything about you and your buying history all in one spot – effectively a personal data warehouse, on you. And it’s critical that you control your own data. This is a really big deal!

To underscore why, let me provide a similar example from an everyday security product. For those of you in security, wallets are effectively personal equivalents to key management servers and Hardware Security Modules (HSMs). Key management vendors do not have full access to their customers’ encryption keys. You do not and would not give them a backdoor to the keys that secure your entire IT infrastructure. The whole point of an HSM is to secure the data from everyone who is not authorized to use it. And only the customer who owns the HSM gets to decide who gets keys. For those of you not in security, think of the eWallet as a combination wallet and keychain.
It’s how you gain access to your home, your car, your mailbox, your office, and possibly your neighbors’ houses for when you catsit. And it holds your cash (technically more like a blank checkbook, along with your electronic signature), credit cards, debit card, pictures of your kids, and that Post-It with your passwords. You don’t hand this stuff out to third parties! Heck, when your kid wants to borrow the car, you only give them one key and forty bucks for gas – they don’t get everything!

But the eWallet systems described in that article don’t belong to you – they are the property of third parties, who would naturally want the ability to rummage through them for useful (marketing and sales) data – what you might consider your data. Human history clearly shows that if someone can abuse your trust for financial gain, they will. Seriously, people – don’t give your wallet to strangers.

Let’s throw a couple design principles out there for people who are building these apps:

If the wallet does not secure all the user’s content – not just credit card data – it’s insecure and the design is a failure.

If the wallet’s author does not architect and implement controls for the user to select what they wish to share with third parties, they have failed.

If the wallet does not programmatically protect one ‘pocket’, or compartment inside the wallet, from other compartments, it is untrustworthy (as is its creator).

If the wallet has a vendor backdoor, it has failed.

If the wallet does not use secure and publicly validated communications protocols, it has failed.

Wallet designers need to consider the HSM / key management security model. It must protect user data from all outsiders first and foremost. If sharing data/coupons/trends/transaction receipts, easy shopping, “loyalty points”, providing location data, or any other objective supersedes security: the wallet needs to be scrapped and re-engineered.
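To make the ‘pocket’ principle concrete, here is a toy sketch – every class, party, and token name is invented for illustration – of compartments that each carry their own reader allow-list, with no implicit vendor access:

```python
# Toy sketch of the "one pocket per party" principle: each compartment in
# the wallet holds its own data plus an allow-list of who may read it.
# Names are invented for illustration, not any real wallet API.
class Compartment:
    def __init__(self, owner, allowed_readers):
        self.owner = owner
        self.allowed = set(allowed_readers)
        self._items = {}

    def put(self, key, value):
        self._items[key] = value

    def get(self, key, requester):
        # No vendor backdoor: even the wallet provider must be on the list.
        if requester != self.owner and requester not in self.allowed:
            raise PermissionError(f"{requester} may not read {self.owner}'s pocket")
        return self._items[key]

payments = Compartment(owner="user", allowed_readers={"payment_processor"})
payments.put("card", "tok_4f8a")                   # store a token, not the PAN
print(payments.get("card", "payment_processor"))   # allowed party succeeds
# payments.get("card", "marketing_partner")        # raises PermissionError
```

The point of the sketch is the default: nothing is shared unless the user’s allow-list says so, which is exactly the opposite of a wallet built for its vendor’s marketing data.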
Security models like iOS compartmentalization could be adapted, but any intra-wallet communication must be tightly controlled – likely forcing third parties to perform various actions outside the wallet, if the wallet cannot enable them with sufficient security and privacy. I’ll follow up by considering the critical components of a wallet, as a general design framework; things like payment protocols, communications protocols, logging, authentication, and digital receipts should all be standardized. But more important: the roles of buyer, seller, and any mediators should be defined publicly. Just because some giant company creates an eWallet does not mean you should trust it.


Understanding and Selecting DAM 2.0: Market Drivers and Use Cases

I was going to begin this series talking about some of the architectural changes, but I’ve reconsidered. Since our initial coverage of Database Activity Monitoring technology in 2007, the products have fully matured into enterprise-worthy platforms. What’s more, they’ve demonstrated significant security and compliance benefits, as evidenced by market growth from $40M to revenues well north of $100M per year. This market is no longer dominated by small vendors; large vendors have acquired six of the DAM startups, and DAM is being integrated with other security products into a blended platform. Because of this, I thought it best to go back and define what DAM is, and discuss market evolution first, as it better frames the remaining topics we’ll cover in the rest of this series.

Defining DAM

Our longstanding definition is: Database Activity Monitors capture and record, at a minimum, all Structured Query Language (SQL) activity in real time or near real time, including database administrator activity, across multiple database platforms, and can generate alerts on policy violations.

While a number of tools can monitor various levels of database activity, Database Activity Monitors are distinguished by five features:

The ability to independently monitor and audit all database activity, including administrator activity and SELECT transactions. Tools can record all SQL transactions: DML, DDL, DCL (and sometimes TCL) activity.

The ability to store this activity securely outside of the database.

The ability to aggregate and correlate activity from multiple, heterogeneous Database Management Systems (DBMS). Tools can work with multiple DBMSs (e.g., Oracle, Microsoft, IBM) and normalize transactions from different DBMSs despite differences in their flavors of SQL.

The ability to enforce separation of duties on database administrators.
Auditing activity must include monitoring of DBA activity, and solutions should prevent DBA manipulation of and tampering with logs and activity records.

The ability to generate alerts on policy violations. Tools don’t just record activity; they provide real-time monitoring, analysis, and rule-based alerting. For example, you might create a rule that generates an alert every time a DBA performs a SELECT query on a credit card column that returns more than 5 results.

DAM tools are no longer limited to a single data collection method; they offer network, OS-layer, memory-scanning, and native audit support. Users can tailor their deployment to their security and performance requirements, and collect data from the sources that best fit those requirements.

Platforms

Reading that, you’ll notice few differences from what we discussed in 2007. Further, we predicted the evolution of Applications and Database Security & Protection (ADMP) on the road to Content Monitoring and Protection, stating “DAM will combine with application firewalls as the center of the applications and database security stack, providing activity monitoring and enforcement within databases and applications.” But where it gets interesting is the different routes vendors are taking to achieve this unified model. It’s how vendors bundle DAM into a solution that distinguishes one platform from another.

The Enterprise Data Management Model – In this model, DAM features are generically extended to many back-office applications. Data operations, such as a file read or SAP transaction, are treated just like a database query. As before, operations are analyzed to see if a rule was violated, and if so, a security response is triggered. In this model DAM does more than alert and block; it leverages masking, encryption, and labeling technologies to address security and compliance requirements.
This model relies heavily on discovery to help administrators locate data and define usage policies in advance. While in many respects similar to SIEM, this model leans more toward real-time analysis of data usage. There is some overlap with DLP, but this model lacks endpoint capabilities and full content awareness.

The ADMP Model – Sometimes called the Web AppSec model; here DAM is linked with web application firewalls to provide activity monitoring and enforcement within databases and applications. DAM protects content in a structured application and database stack, WAF shields application functions from misuse and injection attacks, and File Activity Monitoring (FAM) protects data as it moves in and out of documents or unstructured repositories. This model is more application-aware than the others, reducing false positives through transactional awareness. The ADMP model also provides advanced detection of web-borne threats.

The Policy Driven Security Model – The classic database security workflow of discovery, assessment, monitoring, and auditing, with each function overlapping the next to pre-generate rules and policies. In this model DAM is just one of many tools to collect and analyze events, and not necessarily central to the platform. What’s common among vendors who offer this model is policy orchestration: policies are abstracted from the infrastructure, with the underlying database – and even non-database – tools working in unison to fulfill the security and compliance requirements. How work gets done is somewhat hidden from the user. This model is great for reducing the pain of creating and managing policies, but because the technologies are pre-bundled, it lacks the flexibility of other platforms.

The Proxy Model – Here DAM sits in front of the database, filtering inbound requests and acting as a proxy server. What’s different is what the proxy does with inbound queries.
In some cases the query is blocked because it fits a known attack signature, and DAM acts as a firewall to protect the database – a method sometimes called ‘virtual patching’. In other cases the query is not forwarded to the database because the DAM proxy has recently seen the same request, and it returns query results directly to the calling application – DAM in essence acting as a cache to speed up performance. Some platforms also provide the option of rewriting inbound queries, either to optimize performance or to minimize the risk that an inbound query is malicious. DAM tools have expanded into other areas of data, database, and application security.

Market Drivers

DAM tools are extremely flexible and often deployed for what may appear to be totally unrelated reasons. Deployments are typically driven by one of three drivers: Auditing for


Tokenization Guidance: PCI Requirement Checklist

So far in this series on tokenization guidance for protecting payment data, we have covered deficiencies in the PCI supplement, offered specific advice for merchants to reduce audit scope, and provided specific tips on what to look for during an audit. In this final post we will provide a checklist of each PCI requirement affected by tokenization, with guidance on how to modify compliance efforts in light of tokenization. I have tried to be as brief as possible while still covering the important areas of compliance reporting you need to adjust. Here is our recommended PCI requirements checklist for tokenization:

1.2 Firewall Configuration: The token server should restrict all IP traffic – to and from – to the systems specified under the ‘ground rules’, specifically:

* Payment processor
* PAN storage server (if separate)
* Systems that request tokens
* Systems that request de-tokenization

This is no different than the PCI requirements for the CDE, but it’s recommended that these systems communicate only with each other. If the token server is on site, Internet and DMZ access should be limited to communications with the payment processor.

2.1 Defaults: Implementation for most of requirement 2 will be identical, but section 2.1 is most critical in the sense that there should be no ‘default’ accounts or passwords for the tokenization server. This is especially true for systems that are remotely managed or have remote customer-care options. All PAN security hinges on effective identity and access management, so establishing unique accounts with strong pass-phrases is essential.

2.2.1 Single function servers: 2.2.1 bears mention both from a security standpoint and for protection from vendor lock-in. For security, consider an on-premise token server as a standalone function, separate and distinct from the applications that make tokenization and de-tokenization requests.
To reduce vendor lock-in, make sure the token service API calls or vendor-supplied libraries used by your credit card processing applications are sufficiently abstracted to facilitate switching token services without significant modifications to your PAN processing applications.

2.3 Encrypted communication: You’ll want to encrypt non-console administration, per the specification, but also all API calls to the token service. It’s also important, when using multiple token servers to support failover, scalability, or multiple locations, to ensure all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.

3.1 Minimize PAN storage: The beauty of tokenization is that it’s the most effective solution available for minimizing PAN storage. By removing credit card numbers from every system other than the central token server, you cut the scope of your PCI audit. Look to tokenize or remove every piece of cardholder data you can, keeping it only in the token server. You’ll still meet business, legal, and regulatory requirements while improving security.

3.2 Authentication data: Tokenization does not circumvent this requirement; you must remove sensitive authentication data per sub-sections 3.2.x.

3.3 Masks: Technically you are allowed to preserve the first six (6) digits and the last four (4) digits of the PAN. However, we recommend you examine your business processing requirements to determine whether you can fully tokenize the PAN, or at a minimum preserve only the last four digits for customer verification. The number of possible tokens you can generate with the remaining six digits is too small for many merchants to generate quality random numbers. Please refer to ongoing public discussions on this topic for more information. When using a token service from your payment processor, ask for single-use tokens to avoid possible cases of cross-vendor fraud.
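As a rough illustration of the masking recommendation, here is a hedged sketch of generating a random token that preserves only the last four PAN digits. It is a toy: a real token server must also guarantee token uniqueness, avoid collisions with real PANs, and protect the token-to-PAN mapping.

```python
# Illustrative sketch only: replace all but the last four PAN digits with
# cryptographically random digits, per the "preserve only the last four"
# recommendation above. Not a production tokenization scheme.
import secrets

def tokenize(pan):
    """Return a same-length token keeping only the last four digits."""
    keep = pan[-4:]
    random_part = "".join(str(secrets.randbelow(10)) for _ in pan[:-4])
    return random_part + keep

token = tokenize("4111111111111111")
print(token[-4:])  # the last four digits survive for customer verification
```

Note how little survives: with twelve randomized positions there is far more token space than the six middle digits left over by a first-six/last-four mask, which is the point of the recommendation.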
3.4 Render PAN unreadable: One of the principal benefits of tokenization is that it renders PAN unreadable. However, auditing environments with tokenization requires two specific changes:

1. Verify that PAN data is actually swapped for tokens in all systems.
2. For on-premise token servers, verify that the token server adequately encrypts stored PAN, or offers an equivalent form of protection, such as not storing PAN data*.

We do not recommend hashing, as it offers poor PAN data protection. Many vendors store hashed PAN values in the token database as a means of speedy token lookup, but while common, it’s a poor choice. Our recommendation is to encrypt PAN data, and as most solutions use databases to store information, we believe table-, column-, or row-level encryption within the token database is your best bet. Full database or file-layer encryption can be highly secure, but most solutions offer no failsafe protection when database or token admin credentials are compromised. We acknowledge our recommendations differ from most, but experience taught us to err on the side of caution when it comes to PAN storage.

*Select vendors offer one-time pad and codebook options that don’t require PAN storage.

3.5 Key management: Token servers encrypt the PAN data stored internally, so you’ll need to verify the supporting key management system as best you can. Some token servers offer embedded key management, while others are designed to leverage your existing key management services. Very few people can adequately evaluate key management systems to ensure they are really secure, but at the very least you can check that the vendor is using industry-standard components, or has validated their implementation with a third party. Just make sure they are not storing the keys in the token database unencrypted. It happens.
4.1 Strong crypto on Internet communications: As with requirement 2.3, when using multiple token servers to support failover, scalability, and/or multi-region support, ensure that all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.

6.3 Secure development: On-site or third-party token servers will both introduce new libraries and API calls into your environment. It’s critical that your development process validates that what you put into production is secure. You can’t take the vendor’s word for it – you’ll need to verify that all defaults, debugging code, and API calls are secured. Ultimately


Applied Network Security Analysis: The Malware Analysis Use Case

As we resume our tour of advanced use cases for Network Security Analysis, it’s time to consider malware analysis. Of course most successful attacks involve some kind of malware at some point during the attack. If only to maintain a presence on the compromised device, some kind of bad stuff is injected. And once the bad stuff is on a device, it’s very, very hard to get rid of it – and even harder to be sure you have. Most folks (including us) recommend you just re-image the device, rather than trying to clean the malware. This makes it even more important to detect malware as quickly as possible and (hopefully) block it before a user does something stupid to compromise their device.

There are many ways to detect malware, depending on the attack vector, but a lot of what we see today is snuck through port 80 as web traffic. Sure, dimwit users occasionally open a PDF or ZIP file from someone they don’t know, but more often it’s a drive-by download, which means it comes in with all the other web traffic. So we have an opportunity to detect this malware when it enters the network. Let’s examine two situations: one with a purpose-built device to protect against web malware, and another where we’re analyzing malware directly on the network analysis platform.

Detecting Malware at the Perimeter

As we’ve been saying throughout this series, extending data collection and capture beyond logs is essential to detecting modern attacks. One advantage of capturing the full network packet stream at the ingress point of your network is that you can check for known malware and alert as it enters the network. This approach is better than nothing, but it has two main issues:

Malware sample accuracy: This approach requires accurate and comprehensive malware samples already loaded into the device to detect the attack. We all know that approach doesn’t work well for endpoint anti-virus and is completely useless against zero-day attacks, and this approach has the same trouble.
No blocking: Additionally, once you detect something on the analysis platform, your options to remediate are pretty limited. Alerting on malware entering the network is useful, but blocking it is much better.

Alternatively, a new class of network security device has emerged to deal with this kind of sneaky malware, by ‘exploding’ the malware as it enters the network to understand the behavior of inbound sessions. Again, given the prevalence of unknown zero-day attacks, the ability to classify known bad behavior and see how a packet stream actually behaves can be very helpful. Of course no device is foolproof, but these devices can provide earlier warning of impending problems than traditional perimeter network security controls. Using these devices you can also block the offending traffic at the perimeter if it is detected in time, reducing the likelihood of device compromise. But you can’t guarantee you will catch all malware, so you must figure out the extent of the compromise.

There is also a more reactive approach: analyzing outbound traffic to pinpoint known command and control behavior and targets, which usually indicates a compromised device. At this point the device is already pwned, so you need to contain the damage. Either way, you must figure out exactly what happened and whether you need to sound the alarm.

Containing the Damage

Based on the analysis at the perimeter, we know both the target device and the originating network address. With our trusty analysis platform we can then figure out the extent of the damage. Let’s walk through the steps:

Evaluate the device: First you need to figure out whether the device is compromised. Your endpoint protection suite might not be able to catch an advanced attack, so search your analysis platform (and logging system) for any configuration changes made on the device, and look for any strange behavior – typically through network flow analysis. If the device is still clean, all the better.
But we will assume it’s not.

Profile the malware: Now that you know the device is compromised, you need to figure out how. Sure, you could just wipe it, but that eliminates the best opportunity to profile the attack. The network traffic and device information enable your analysts to piece together exactly what the malware does, replay the attack to confirm, and profile its behavior. This helps you figure out how many other devices have been compromised, because you know what to look for.

Determine the extent of the damage: The next step is to track malware proliferation. You can search the analysis platform for the malware profile you built in the last step. This might mean looking for communication with the external addresses you identified, identifying command and control patterns, or watching for the indicative configuration changes; but however you proceed, having all that data in one place facilitates identifying compromised devices.

Watch for the same attack: You know the saying: “Fool me once, shame on you. Fool me twice, shame on me.” Shame on you if you let the same attack succeed on your network. Add rules to detect and block the attacks you have already seen. We have acknowledged repeatedly that security professionals get no credit for blocking attacks, but you certainly look like a fool if you get compromised repeatedly by the same attack.

You are only as good as your handling of the latest attack. So learn from these attacks; the additional data collection capabilities of network security analysis platforms can give you an advantage, both for containing the damage and for ensuring it doesn’t happen again. As we wrap up this Applied Network Security Analysis series early next week, we will examine the use case of confirming a breach actually happened, and then revisit the key points to solidify our case for capturing network traffic as a key facet of your detection capabilities.
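The proliferation sweep described under “determine the extent of the damage” can be sketched as a search over collected flow records for the profiled command and control indicators. All addresses, ports, and field names below are invented for illustration:

```python
# Illustrative sketch: sweep collected flow records for the C&C indicators
# profiled from the first compromised device, to find other victims.
# Indicator values and record fields are assumptions, not real data.
CC_ADDRESSES = {"203.0.113.200"}   # external hosts from the malware profile
BEACON_PORT = 8443                 # assumed C&C beacon port

def compromised_hosts(flows):
    """Return internal hosts whose traffic matches the malware profile."""
    hits = set()
    for f in flows:
        if f["dst"] in CC_ADDRESSES and f["dport"] == BEACON_PORT:
            hits.add(f["src"])
    return sorted(hits)

flows = [
    {"src": "10.2.0.4", "dst": "203.0.113.200", "dport": 8443},
    {"src": "10.2.0.9", "dst": "203.0.113.200", "dport": 8443},
    {"src": "10.2.0.4", "dst": "198.51.100.1", "dport": 443},
]
print(compromised_hosts(flows))  # candidate devices to investigate and re-image
```

The same pattern then becomes the “watch for the same attack” rule: keep the indicator set loaded and alert on any future match instead of sweeping after the fact.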


Friday Summary: November 4, 2011

I wouldn’t say I’m a control freak, but I am definitely “control aligned”. If something is important to me I like to know what’s going on under the hood. I also hate to depend on someone else for something I’m capable of doing myself. So I have no problem trusting my accountant to keep me out of tax jail, or hiring a painter for the house, but there is a long list of things I tend to overanalyze and have trouble letting go of. Pretty damn high up that list is the Securosis Nexus.

I have been programming as a hobby since third grade, and for a while there in the early days of web applications it was my full-time profession. I don’t know C worth a darn, but I was pretty spiffy with database design and my (now antiquated) toolset for building web apps. I still code when I can, but it’s more like home repair than being a general contractor. When Mike, Adrian, and I came up with the idea for the Nexus, I did all the design work, from the UI sketches we sent to the visual designers to the features and logic flow. Not that I did it all alone, but I took point, and I’m the one who interfaces with our contractors. Which is where I’m learning how to let go. The hard way.

I have managed (small) programming teams before, but this is my first time on the hiring side of the contractor relationship. It’s also the first time I haven’t written any significant amount of code for something I’m pretty much betting my future on (and the future of my partners and our families). Our current contractor team is great. Among other things, they suggested an entirely new architecture for the backend that is far better than my initial plans and our PoC code. I wish they would QA a little better (hi guys!), and we don’t always see things the same way, but I’m damn happy with the product. But it’s extremely hard for me to rely on them. For example, today I wanted to change how a certain part of the system functioned (how we handle internal links).
I know what needs to be done, and even know generally what needs to happen within the code, but I realized I would probably just screw it up. And it would take me a few hours (to screw up), while they can sort it all out in a fraction of the time. I don’t know why this bothers me. Maybe it’s knowing that I’ll see a line item on an invoice down the road. But it’s probably some deep-seated need to feel I’m in control and not dependent on someone else for something so important. But I am. And I need to get used to it. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Me (Rich) in a DLP video I did for Trend Micro. I really liked the video crew on this one and the quality shows. I may need to get myself a Canon DSLR for our future Securosis videos instead of our current HD camcorder. I also wrote up how to recover lost iCloud data based on my own serious FAIL this week.

Favorite Securosis Posts

Mike Rothman: Virtual USB? Not.. Adrian has it right here. Even though it’s more secure to carry (yet another) device, users won’t do it. They want everything on their smartphone, and they will get it. It’s just a matter of when, and at what cost (in terms of security or data loss).

Adrian Lane: How Regular Folks See Online Safety. Lately news items are right out of Theater of the Absurd: Security Tragicomedy.

Rich: Tokenization Guidance: Audit Advice. Adrian is really building the most definitive guide out there.

Other Securosis Posts

Incite 11/2/2011: Be Yourself. Conspiracy Theories, Tin Foil Hats, and Security Research. Applied Network Security Analysis: The Advanced Security Use Case. Applied Network Security Analysis: The Forensics Use Case.

Favorite Outside Posts

Mike Rothman: 3 Free Tools to Fake DNS Responses for Malware Analysis. This is a good tip for testing, but also critical for understanding the tactics adversaries will use against you.

Adrian Lane: The Chicago Way.
Our own Dave Lewis does the best job in the blogosphere at explaining what the heck is going on with the Anonymous / Los Zetas gang confrontation. James Arlen: Harvard Stupid. Two posts in one – an interesting financial story trailed by an excellent example of how security should be implemented from a big-picture view. If you run IT security for your company, read this! Rich: Kevin Beaver on why users violate policies. I don’t agree with the lazy comment though – it’s not being lazy if your goal is to get your job done and you have to deal with something standing in the way. Research Reports and Presentations Fact-Based Network Security: Metrics and the Pursuit of Prioritization. Tokenization vs. Encryption: Options for Compliance. Security Benchmarking: Going Beyond Metrics. Understanding and Selecting a File Activity Monitoring Solution. Database Activity Monitoring: Software vs. Appliance. React Faster and Better: New Approaches for Advanced Incident Response. Measuring and Optimizing Database Security Operations (DBQuant). Network Security in the Age of Any Computing. Top News and Posts UK Cops Using Fake Mobile Phone Tower to Intercept Calls, Shut Off Phones. Malaysian CA Digicert Revokes Certs With Weak Keys, Mozilla Moves to Revoke Trust. Four CAs Have Been Compromised Since June. Hackers attacked U.S. government satellites. How Visa Protects Your Data. Exposing the Market for Stolen Credit Cards Data. ‘Nitro’ Cyberespionage Attack Targets Chemical, Defense Firms. Blog Comment of the Week This week we are redirecting our donation to support Brad “theNurse” Smith. This week’s best comment goes to Zac, in response to Conspiracy Theories, Tin Foil Hats, and Security Research. I personally think that the problem with the media hype is that it seems to distract more than inform. The overall result being that you end up with “experts” arguing over inconsequential


Incite 11/2/2011: Be Yourself

Last week I was invited to speak at Kennesaw State University’s annual cybersecurity awareness day. They didn’t really give me much direction on the topic, so I decided to give my Happyness presentation. I figured there would be students and other employees who could benefit from my journey from total grump to fairly infrequent grump, and a lot of the stuff I’ve learned along the way. One of my key lessons is to accept the way I am and stop trying to be someone else. Despite my public persona I like (some) people. Just not many, and in limited doses. I value and need my solitary time and I have designed a lifestyle to embrace that. I say what I think, and know I can be blunt. Don’t ask me a question if you don’t want the answer. Sure, I have mellowed over the years, but ultimately I am who I am, and my core personality traits are unlikely to change. The other thing I have realized is the importance of managing expectations. For example, I was at SecTor CA a few weeks back, and at the beginning of my presentation on Cloud Security (WMV file), I mentioned the Internet with a snarky, “You know, the place where pr0n is.” (h/t to Rich – it’s his deck). There was a woman sitting literally in the front row who blurted out, “That’s totally inappropriate.” I immediately stopped my pitch, because this was a curious comment. I asked the woman what she meant. She responded that she didn’t think it was appropriate to mention pr0n on a slide in a conference presentation. Yeah, I guess she doesn’t get to many conferences. But it wasn’t something I was going to gloss over. So I responded: “Oh you think so, then this may not be the session for you.” Yes, I really said that, much to the enjoyment of everyone else in the room. I figured given the rest of the content and my presentation style that this wasn’t going to end well. There was no reason for her to spend an hour and be disappointed. 
To her credit, she got up and found another session, which was the best outcome for both of us. Earlier in my career, I would have let it go. I would probably have adapted my style a bit to be less, uh, offensive. I would have gotten the session done, but it wouldn’t have been my best effort. Now I just don’t worry about it. If you don’t like my style, leave. If you don’t think I know what I’m talking about, leave. If you don’t like my blog posts, don’t read them. It’s all good. I’m not going to feel bad about who I am. That philosophy comes directly from Steve Jobs: “Your time is limited, so don’t waste it living someone else’s life.” I have got lots of problems, but trying to be someone else isn’t one of them. For that I’m grateful. So just be yourself, not who they want you to be. That’s the only path to make those fleeting moments of happiness less fleeting. -Mike Photo credits: “Just be Yourself” originally uploaded by Akami Incite 4 U Keeping tabs on theNurse: I know Brad “theNurse” Smith isn’t familiar to most of you, but if you have been to a major security conference, odds are you have seen him and perhaps met him. I first met Brad 5+ years ago when we worked as Black Hat room proctors together, and have since seen him all over the place. Last week Brad suffered a serious stroke while delivering a presentation at the Hacker Halted conference in Miami, and he still hasn’t regained consciousness. You can get updates on Brad over at the social-engineer.org site, and can leave donations if you want. Maybe I’m identifying a bit too much after my recent health scare on the road, but we feel terrible for Brad and his wife and all of us at Securosis wish them the best. We are also putting our money where our mouths are, and directing (and increasing) our Friday Summary donation his way this week. – RM The weakest link? Your people… I just love stories of social engineering. 
Yes, there are some very elegant technical attacks, but they seem so much harder than just asking for access to the stuff you need. Like a wiring closet or conference room. Why pick the lock when the door opens as soon as you knock? Kai Axford had a great video (WMV) of actually putting his own box into a pen test client’s wiring closet – with help from the network admin – in his SecTor CA presentation. And NetworkWorld has a good story on social engineering, including elegant use of a tape measure. But it’s not like we haven’t seen this stuff before. On my golf trip, we stumbled across Beverly Hills Cop on a movie channel and Axel Foley is one of the best social engineers out there. – MR Token gesture: 403 Labs QSA and PCI columnist Walt Conway noted a major change to the PCI Special Interest Groups (SIGs) this year. The “participating organizations” – a group made up mostly of the merchants who are part of the PCI Council – will get the deciding vote on which SIGs get to provide the PCI Council advice. Yes, they get a vote on what topics get the option of community guidance. The SIGs do a lot of the discovery and planning work that goes into the guidance ultimately published by the PCI Council – end-to-end encryption is one example. Unless, of course, someone like Visa objects to the SIG’s guidance, in which case the PCI Council squashes it like a bug – as they did with tokenization. This olive branch is nice, but it’s a token gesture at best, and a minuscule one at that. – AL Job #1: Keep head attached to body: I joke a lot during presentations about the importance of a public execution


How Regular Folks See Online Safety, and What It Says about Us

I remember very clearly the day I vowed to stop watching local news. I was sitting at home cooking dinner or something, when a teaser report of a toddler who died after being left in a car in the heat aired during that “what we’re covering tonight” opening to the show. It wasn’t enough to report the tragedy – the reporter (a designation she surely didn’t deserve) seemed compelled to illustrate the story by locking a big thermometer in the car, to be pulled out during the actual segment. Frankly, I wanted to vomit. I have responded to more than a few calls involving injured or dead children, and I was disgusted by the sensationalism and desperate bid for ratings. With rare exceptions, I haven’t watched local news since then; I can barely handle cable news (CNN being the worst – I like to say Fox is right, MSNBC left, and CNN stupid). But this is how a large percentage of the population learns what’s going on outside their homes and work, so ‘news’ shows frame their views. Local news may be crap, but it’s also a reflection of the fears of society. Strangers stealing children, drug assassins lurking around every corner, and the occasional cancer-causing glass of water. So I wasn’t surprised to get this email from a family member (who found it amusing): Maybe you have seen this, but thought I would send it on anyway. SCARY.. This is a MUST SEE/ READ. If you have children or grandchildren you NEED to watch this. I had no idea this could happen from taking pictures on the blackberry or cell phone. It’s scary. http://www.youtube.com/watch?v=N2vARzvWxwY Crack open a cold beer and enjoy the show… it’s an amusing report on how frightening geotagged photos posted online are. I am not dismissing the issue. If you are, for example, being stalked or dealing with an abusive spouse, spewing your location all over the Internet might not be so smart. But come on people, it just ain’t hard to figure out where someone lives. 
And if you’re a stalking victim, you need better sources for guidance on protecting yourself than stumbling on a TV special report or the latest chain mail. But there are two reasons I decided to write this up (aside from the lulz). First, it’s an excellent example of framing. Despite the fact that there is probably not a single case of a stranger kidnapping due to geotagging, that was the focus of this report. Protecting your children is a deep-seated instinct, which is why so much marketing (including local news, which is nothing but marketing by dumb people) leverages it. Crime against children has never been less common, but plenty of parents won’t let their kids walk to school “because the world is different” than when they grew up. Guess what: we are all subject to the exact same phenomenon in IT security. Email is probably one of the least important data loss channels, but it’s the first place people install DLP. Not a single case of fraud has ever been correlated with a lost or stolen backup tape, but many organizations spend several times more on those tapes than on protecting web applications. Second, when we are dealing with non-security people, we need to remember that they always prioritize security based on their own needs and frame of reference. Policies and boring education about them never make someone care about what you care about as a security pro. This is why most awareness training fails. To us this report is a joke. To the chain of people who passed it on, it’s the kind of thing that freaks them out. They aren’t stupid (unless they watch Nancy Grace) – they just have a different frame of reference.
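For the curious, the mechanics behind the scare are trivial: cameras store location in EXIF GPS tags as degree/minute/second rational pairs, and turning those into a mappable decimal coordinate is a couple of lines of arithmetic. Here is a minimal sketch – the tag values below are hypothetical, not pulled from any real photo:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style (numerator, denominator) DMS rationals
    to signed decimal degrees."""
    def ratio(pair):
        num, den = pair
        return num / den
    value = ratio(degrees) + ratio(minutes) / 60 + ratio(seconds) / 3600
    # South latitudes and West longitudes are negative
    return -value if ref in ("S", "W") else value

# Hypothetical GPSLatitude / GPSLongitude tag contents
lat = dms_to_decimal((40, 1), (26, 1), (4614, 100), "N")   # ≈ 40.44615
lon = dms_to_decimal((79, 1), (58, 1), (5640, 100), "W")   # ≈ -79.98233
```

Paste the resulting pair into any mapping site and you have where the photo was taken – which is the entire “scary” mechanism the report dressed up with dramatic music.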


Tokenization Guidance: Audit Advice

In this portion of our Tokenization Guidance series I want to offer some advice to auditors. I am addressing both internal auditors working through one of the self-assessment questionnaires and external auditors validating adherence to PCI requirements. For the most part auditors follow PCI DSS for the systems that process credit card information, just as they always have. But I will discuss how tokenization alters the environment, and how to adjust the investigation process in the select areas where tokenization systems supplant PAN processing. At the end of this paper, I will go section by section through the PCI DSS specification and talk about specifics, but here I just want to provide an overview. So what does the auditor need to know? How does tokenization change the discovery process? We have already set the ground rules: anywhere PAN data is stored, applications that make tokenization or de-tokenization requests, and all on-premise token servers require thorough analysis. For those systems, here is what to focus on: Interfaces & APIs: At the integration points (APIs and web interfaces) for tokenization and de-tokenization, you need to review security and patch management – regardless of whether the server is in-house or hosted by a third party. The token server vendor should provide the details of which libraries are installed, and how the systems integrate with authentication services. But not every vendor is great with documentation, so ask for this data if they failed to provide it. And merchants need to document all applications that communicate with the token server. This encompasses all communication, including token-for-PAN transactions, de-tokenization requests, and administrative functions. Tokens: You need to know what kind of tokens are in use – each type carries different risks. 
Token Storage Locations: You need to be aware of where tokens are stored, and merchants need to designate at least one storage location as the ‘master’ record repository to validate token authenticity. In an on-premise solution this is the token server; but for third-party solutions, the vendor needs to keep accurate records within their environment for dispute resolution. This system needs to comply fully with PCI DSS to ensure tokens are not tampered with or swapped. PAN Migration: When a tokenization service or server is deployed for the first time, the existing PAN data must be removed from where it is stored, and replaced with tokens. This can be a difficult process for the merchant and may not be 100% successful! You need to know what the PAN-to-token migration process was like, and review the audit logs to see if there were issues during the replacement process. If you have the capability to distinguish between tokens and real PAN data, audit some of the tokens as a sanity check. If the merchant hired a third-party firm – or the vendor – then the service provider supplies the migration report. Authentication: This is key – any attacker will likely target the authentication service, the critical gateway for de-tokenization requests. As with the ‘Interfaces’ point above: pay careful attention to separation of duties, the principle of least privilege, and limiting the number of applications that can request de-tokenization. Audit Data: Make sure that the token server, as well as any API or application that performs tokenization/de-tokenization, complies with PCI DSS Requirement 10. This is covered under PCI DSS, but these log files become a central part of your daily review, so this is worth repeating here. Deployment & Architecture: If the token server is in-house or managed on-site you will need to review the deployment and system architecture. 
You need to understand what happens in the environment if the token server goes down, and how token data is synchronized between multi-site installations. Weaknesses in the communications, synchronization, and recovery processes are all areas of concern; so the merchant and/or vendors must document these facilities, and the auditor needs to review that documentation. Token Server Key Management: If the token server is in-house or managed on-site, you will need to review key management facilities, because every token server encrypts PAN data. Some solutions offer embedded key management while others use external services, but you need to ensure this meets PCI DSS requirements. For non-tokenization usage, and systems that store tokens but do not communicate with the token server, auditors need to conduct basic checks to ensure the business logic does not allow tokens to be used as currency. Tokens should not be used to initiate financial transactions! Make certain that tokens are merely placeholders or surrogates, and don’t act as credit card numbers internally. Review select business processes to verify that tokens don’t initiate a business process or act as currency themselves. Repayment scenarios, chargebacks, and other monetary adjustments are good places to check. The token should be a transactional reference – not currency or a credit proxy. Such uses invite fraud; in the event of a compromised system, tokens could be used to initiate fraudulent payments without credit card numbers. The depth of these checks varies – merchants filling out self-assessment questionnaires tend to interpret the standard more liberally than top-tier merchants, who have external auditors combing through their systems. But these audit points are the focus for either group. In the next post, I will provide tables which go point by point through the PCI requirements, noting how tokenization alters PCI DSS checks and scope.
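The “distinguish tokens from real PANs” sanity check above can be partially automated with a Luhn checksum: valid card numbers always pass it, while randomly generated tokens usually fail. (A caveat: some format-preserving tokenization schemes deliberately emit Luhn-valid tokens, so treat a pass as a flag for manual review, not proof of a live PAN.) A minimal sketch in Python, using the standard Visa test number and made-up token values:

```python
def luhn_valid(value: str) -> bool:
    """True if the digit string passes the Luhn checksum used by PANs."""
    digits = [int(c) for c in value if c.isdigit()]
    if len(digits) < 13:          # too short to be a card number
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:            # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def flag_possible_pans(stored_values):
    """Return stored values that look like live PANs rather than tokens."""
    return [v for v in stored_values if luhn_valid(v)]

# "4111111111111111" is the well-known Visa test number; the other two
# are made-up token values for illustration
sample = ["4111111111111111", "4111870229341002", "9999123400005679"]
print(flag_possible_pans(sample))   # → ['4111111111111111']
```

Running a check like this across a sample of the post-migration data store is a cheap way to catch PANs that survived the token replacement process.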


Conspiracy Theories, Tin Foil Hats, and Security Research

It seems far too much of security research has become like Mel Gibson in “Conspiracy Theory.” Unbalanced, mostly crazy, but not necessarily wrong. But we created this situation, so we have to deal with it. I’m reacting to the media cycle around the Duqu virus, or Son of Stuxnet, identified by F-Secure (among others). You see, no one is interested in product news anymore. No one cares about the incremental features of a vendor widget. They don’t care about success stories. The masses want to hear about attacks. Juicy attacks that take down nuclear reactors. Or steal zillions of dollars. Or result in nudie pictures of celebrities stolen from their computers or cell phones. That’s news today, and that’s why vendor research teams focus on giving the media news, rather than useful information. It started with F-Secure claiming that Duqu was written by someone with access to the Stuxnet source code. Duqu performs reconnaissance rather than screwing with centrifuges, but their message was that this is a highly sophisticated attack, created by folks with Stuxnet-like capabilities. The tech media went bonkers. F-Secure got lots of press, and the rest of the security vendors jumped on – trying to credit, discredit, expand, or contract F-Secure’s findings – anything that would get some press attention. Everyone wanted their moment in the sun, and Duqu brought light to the darkness. But here’s the thing. Everyone saying Duqu and Stuxnet were related in some way might have been wrong. The folks at SecureWorks released research a week later, making contrary claims and disputing any relation beyond some coarse similarities in how the attacks inject code (using a kernel driver) and obscure themselves (encryption and signing using compromised certificates). The media went bonkers again. Nothing like a spat between researchers to drive web traffic to the media. So who is right? That is actually the wrong question. It really doesn’t matter who is right. 
Maybe Duqu was done by the Stuxnet guys. Maybe it wasn’t. Ultimately, though, to everyone aside from page-whoring beat reporters who benefit from another media cycle, who’s right and who’s wrong about Duqu’s parentage aren’t relevant. The only thing that matters is that you, as a security professional, understand the attack; and have controls in place to protect against it. Or perhaps not – analyzing the attack and accepting its risk is another legitimate choice. This is how the process is supposed to work. A new threat comes to light, and the folks involved early in the cycle draw conclusions about the threat. Over time other researchers do more work and either refute or confirm the original claims. The only thing different now is that much of this happens in public, with the media showing how the sausage is made. And it’s not always pretty. But success in security is about prioritizing effectively, which means shutting out the daily noise of media cycles and security research. Not that most security professionals do anything but fight fires all day anyway. Which means they probably don’t read our drivel either… Photo credit: “Tin Foil Hat” originally uploaded by James Provost


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.