Securosis Research

Friday Summary: November 11, 2011

Coupons. Frequent flyer miles. Rebates. Loyalty programs. Member specials. Double coupon days. Frequent buyer programs. Weekly drawings. Big sales events. Seasonal sales. Presidents Day sales. Sales tax holiday sales. Going out of business sales. Private clearance sales. 2 for 1 sales. Buy 2 get 1 free. Sometimes it strikes me just how weird commercial promotions are. It’s a sport where nothing is as it seems. We don’t just buy things – we have to make a game out of it. A game slanted against those who don’t follow the rules, don’t care to play, or just plain can’t do math. We don’t base most of our buying decisions on price vs. quality – instead we are always looking for an angle or a deal. We want to “game the system”, so businesses provide games to feed our habit. ‘Exclusive’ Internet deals. ‘Sticker’ books. Rewards programs. Receipt Bingo. Discount ‘accelerators’. Friends fly free. Nights and weekend minutes. Family plans. Price match guarantee. All while playing classical music (or country music here in the South) and telling you how smart you are.

It’s not just retail merchants either. We made mortgages into a game: mortgage brokers, mortgage ‘points’, marketing fund indexes, teaser rates, interest rate buy-backs, variable interest, no-interest, balloon notes, FHA programs, tax credit programs, no-doc, and any other combination of variables that can be shuffled to squeeze you into a deal. Heck, we even get games from our government. Our tax system is essentially a game. There is absolutely no such thing as a straight formula. We are incentivized to find ways to bend the rules without a violation or penalty – especially with the new tax codes – to tweak what you pay. If you know how to leverage the code in your favor, you pay far less. And if you don’t know the rules of the game you pay more.

We get distractions like “Secret codes” announced over the radio. Cute reptiles with Cockney accents which equate buying their product with drinking tea and eating cake. Preferred memberships. Free shipping on orders over $25. Double-discount Wednesdays. Your tenth cup of coffee free. Free gift with purchase. Free credit reports. Trade-ins. Trade-ups. Free upgrades. Get more. Pay less. Bring the kids! You are so very smart to take advantage of our one-time-only 9-year auto lease program with a 70% residual cap! Because, after all, you deserve it! Hey, do I hear Mozart?

Our healthcare system is even more of a game than our tax system, but it’s much less obvious, except to people who try to avoid playing by the rules. Pre-existing conditions? Preferred provider networks? Anyone? Ever have a hospital say they can’t tell you what you owe so you have to wait for your bill? That’s because they don’t know. Nobody does. Price is an illusion that only comes into focus when the medical provider determines what your insurance provider(s) will swallow. It’s a game within a game. Don’t believe me? Try paying for medication or a simple office visit without providing health insurance details. The price quintuples after the fact. And people who don’t play, aka those without health insurance, know they pay a premium when they get services. It’s a giant shell game, and your motivation to play comes through cheap copays and the lure of the pre-tax spending set-aside. And you will play the game. After all, you want to be healthy, don’t you? Pay the premiums, follow the process, and nobody gets hurt! I know the basic scam is selling a dream while masking the truth.
What I have not figured out is whether all these games are just a by-product of sales people trying to sell the unpalatable – and how they prefer to sell it – or if people have genuinely come to enjoy the game so much they no longer care. Who knows? Maybe it’s both. I know some people who won’t buy if they don’t have a coupon, but the more serious problem is people who always buy when they have a coupon – regardless of need. But people like to play, and it all feels so much more virtuous than roulette or poker. How many of you have a free set of pots from the supermarket? Or a knife set? Or buy gas across the street because they accept your grocery reward card? How many of you shop on double-coupon days? How many loyalty cards are in your wallet?

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted on SaaS security services.

Favorite Securosis Posts

  • Mike Rothman: A Public Call for eWallet Design Standards. Everyone wants a free lunch, even if it’s not even remotely free. Folks will eventually learn the evil plans of these marketing companies (offering said eWallets) the hard way. And I’ll be happy I pay for 1Password to protect all my important info.
  • Adrian Lane: Managed Services in a Security Management 2.0 World. When adopting complex solutions, managed services are a pretty attractive option in terms of risk reduction and skills management.

Other Securosis Posts

  • Sucking less is not a brand position.
  • Incite 11/9/11: Childlike Wonder.
  • Breakdown of Trust and Privacy.
  • Applied Network Security Analysis: The Breach Confirmation Use Case.
  • Tokenization Guidance: PCI Requirement Checklist.
  • Friday Summary: November 4, 2011.

Favorite Outside Posts

  • Mike Rothman: End of year predictions. One of the only guys who can rival my curmudgeonly ways, Jack Daniel offers some end of year perspective. Like ‘Admitting that “life is a crap shoot” doesn’t get you the respect it should.’ Amen, brother.
  • Adrian Lane: Jobs Was Right: Adobe Abandons Mobile Flash, Backs HTML5. Big news with big security ramifications (i.e., this is good for security too)!

Project Quant Posts

  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.
  • NSO Quant: Manage Metrics–Signature Management.

Research Reports and Presentations

  • Fact-Based Network Security: Metrics and


Sucking less is not a brand position

I guess if you have been around long enough, you have seen everything over and over again. I felt my age today when I saw yet another (lame) attempt to Move Security from a Cost Center to a Brand Differentiator. How many times have we security folks wished for the day we could get project funding because it helped the business either to make more money or to spend less money? Gosh, that would make life a lot easier. The holy grail has always been to position security as an enabling technology. Unfortunately it just isn’t. The only thing security enables is…uh…nothing. It gets back to assurances, and we security folks can’t make assurances either way. If you spend $X on $widget, maybe it will stop an attack. Maybe it won’t. If you don’t have $widget maybe you won’t even be attacked, so you might as well light a bag of money on fire. It’s like building a house on quicksand.

To be fair, in some cases security is table stakes. For example, you expect your private data to be protected. In many cases you will be disappointed, but we don’t really see organizations positioning security as a differentiator. They make those pronouncements to allay our fears and eliminate an obstacle to purchase – not as a buying catalyst.

But the most offensive part of the article comes later, in a section that at first seemed kind of logical. Then this quote from some guy named Alan Wlasuk almost made me fall out of my chair: “But any company can shine in an industry environment where the majority of their competitors have suffered from confidence destroying security attacks.” Shine? Really? Your suggestion is that companies tell customers to do business with them because they suck less?? That’s how I read Alan’s statement. I’ll admit I clearly didn’t learn too much as a VP of Marketing, but I do know it’s a bad idea to position and build campaigns around attributes with little to no longevity. So we should build our brands on being more secure? Unbreakable much? Thanks to our pals at LiquidMatrix for that little chuckle this morning.

I thump vendors regularly for trying to run campaigns based on competitor breaches. Like when a token vendor (okay – all of them) tried to capitalize on the RSA token breach by positioning their tokens as more secure, whatever that means. Kicking the competition when they are down comes back to haunt you – we all live in glass houses. Sure enough, some of those very vendors had high profile issues with their own certificate authorities. Karma is a bitch, isn’t it?

Take it from someone who has tried to position security as anything but a cost center for close to a decade. It doesn’t work. Your best bet is to realistically show the risk of not doing something, and let business people make their business decisions. And if your marketing folks tell you about this brand spanking new campaign to be launched based on a breach at your competitor, give them my number. I have a clue bat for them.

Photo credit: “VISI Black Hat” originally uploaded by delta407


Managed Services in a Security Management 2.0 World

As we posted the Security Management 2.0 series, we focused heavily on replacing an on-premise option with another on-premise option. We paid a bit of lip service to the managed SIEM/Log Management option, but not enough – the reality is that, under the proper circumstances, a managed service presents an interesting alternative to racking and stacking another set of appliances. So consider this a primer for managed services in the context of our Security Management 2.0 discussion. We will go through the drivers, use cases, and deployment architectures for those considering managed services. And we will provide cautions for areas where a service offering might not meet your expectations.

Drivers for Managed Services

We have no illusions about the amount of effort required to get a security management platform up and running, or what it takes to keep one current and useful. Many organizations have neither the time nor the resources to implement technology to help automate some of these key functions. So they are trapped on the hamster wheel of pain, reacting without sufficient visibility, but without time to invest in gaining that much-needed visibility into threats without diving deep into raw log files. A suboptimal situation for sure, and one that usually triggers discussions of managed services in the first place. Let’s be a bit more specific about situations where it’s worth a look at managed services.

  • Lack of internal expertise: Even having people to throw at security management may not be enough. They need to be the right people – with expertise in confirming exposures, closing simple issues, and knowing when to pull the alarm and escalate to the investigations team. Reviewing events, setting up policies, and managing the system all take skills that come with training and time with the product. Clearly this is not a skill set you can just pick up anywhere – finding and keeping talented people is hard – so if you don’t have sufficient sophistication internally, that’s a good reason to check out a service alternative.
  • Scalability of existing platform: You may have a decent platform, but perhaps it can’t scale to what you need for real-time analysis. As we discussed in the Platform Evaluation post, this is common for those deploying first generation database-based SIEM products, who then face a significant and costly upgrade to scale the system. This can also happen to acquisitive organizations, who bring on significant assets and need to integrate management capabilities quickly to get sufficient leverage. With a managed service offering, scale is not an issue – any sizable provider is handling billions of events per day.
  • Risk transference: You have been burned before – that’s why you are looking at alternatives, right? You’re not sure what solution to select for the long haul. Why risk the investment when you can drop that monkey on someone else’s back? This allows you to focus on the functionality you need instead of vendor hyperbole and sniping. Ultimately you only need to be concerned with the application and the user experience, and all that other stuff is the provider’s problem. So selecting a provider becomes effectively an insurance policy to minimize your investment risk. Similarly, if you are worried about your ops team’s ability to keep a broad security management platform up and running, you can transfer operational risk to a safer outside team. Once again, that operational risk goes to the provider, who assumes responsibility for uptime and performance.
  • Geographically dispersed small sites: Managed services also interest organizations which need to support many small locations. Think retail or other distribution-centric organizations, where the central site may have sufficient expertise but there is very little capability at the remote sites. That might work well – particularly if event traffic can be centrally aggregated. But if not, this presents a good opportunity for a service provider who can monitor the remote sites.
  • Round-the-clock monitoring: Some organizations need to move from an 8-hour/5-day monitoring schedule to a round-the-clock approach. Whether this is driven by a breach, a new regulatory requirement, or some kind of religious awakening in the executive suite, staffing a security operations center (SOC) 24/7 is a huge undertaking. But a service provider can leverage that 24/7 staffing investment across many customers, and might be in a much better position to deliver round-the-clock services.

Of course you can’t outsource thinking or accountability, so ultimately the buck stops with the internal team, but under the right circumstances managed services can address skills and capabilities gaps. So let’s dig into a few of the use cases that provide a good fit for managed SIEM or Log Management.

Favorable Use Cases

Many providers offer a managed SIEM/Log Management platform as the equal of an in-house solution, and that may be the case. Or it might not – depending on the sophistication of the implementation, as well as the capabilities of the provider’s technology and internal processes. Under the right circumstances you can get a managed SIEM offering to do (almost) everything you could with an in-house option, but in reality we very rarely see that. More often we see the following use cases when considering a service alternative:

  • Device Monitoring: You have a ton of network and security devices and you don’t have the resources to properly monitor them. That’s a key situation where managed security management can help. These services are generally architected to aggregate data on your site and ship it to the service provider for analysis and alerting. The provider should have a correlation system to identify issues, and a bunch of analysts who can verify issues quickly and then give you a heads-up.
  • Compliance Reporting: Another no-brainer for a services alternative is basic log aggregation and reporting – typically driven by a compliance requirement. This isn’t a very complicated use case, and it fits well with service offerings. It also gets you out of the business of managing storage and updating reports when a requirement changes. The provider should take care of all that for you.


Incite 11/9/11: Childlike Wonder

Heading down into Atlanta last week for the BSides ATL conference, I got into my car and the magic began. I whipped out my magic box and pulled up the address in the Maps app, just to make sure I remembered where it was. Then I fired up Pandora, which dutifully streamed rocking music to my Bluetooth-equipped car stereo. I checked out the NaviGAtor mobile site for real-time traffic data; then I was set and on my way.

Wait. What? Think about this for a second. None of what I just described was even possible 4 years ago. I normally just take all this rapid technology evolution for granted, but that day I reflected a bit on how surreal that entire trip was. The idea of having a personalized radio station streaming from the Internet and playing through my car stereo? Ha! Having a fairly accurate map and an idea of traffic before I stumbled into bumper to bumper mayhem? Maybe in a science fiction movie, or something. But no, this stuff happens every day on a variety of smartphones, enabled by fairly ubiquitous wireless Internet connectivity. As another example, Rich just texted me on Monday to let me know he deposited my monthly commission check to our bank from his device, while taking a potty break during a strategy day. Yeah, that’s probably TMI. My bad.

Our recently departed leader talked about the sense of “childlike wonder” you get when discovering these applications that enable totally different ways of communicating and living. And it’s true. As I drove down the highway, jamming to my music, with no traffic because I routed around the congestion, I could only marvel at how things have changed. It’s a far cry from my first bag phone. Or that ancient StarTac, which was state of the art, what, five years ago? How can you not be excited by the future? We have only just scratched the surface on how these little computers will change the way we do things. Bandwidth will get broader. Devices will get smarter. Apps will get more capable. And we’ll all benefit. Maybe.

It takes a lot of self-control to just enjoy the music while I’m driving. The inclination is to multi-task, at all times. You know, checking Twitter, texting, and catching up on email, in a metal projectile traveling about 70mph, surrounded by other metal projectiles traveling just as fast. That can’t end well. As with everything, there is a downside to this connectivity. It’s hard to just shut down the distractions and think, or to focus enough to stay on the road. It seems the only place I can get some peace is on a plane, and even there I can get WiFi (though I tend not to connect on most flights). The good news is that nothing I do is really that urgent. My Twitter can wait 15 minutes until I stop moving. But that doesn’t mean I don’t have to make a conscious effort to stay focused on the road. I do, and you probably do as well.

I guess what is most amazing to me is that my kids have no idea that there was a time when all this stuff didn’t exist. The idea of not being able to text whenever they wanted? Madness. A world without Words with Friends? A time when they could only listen to 10 CDs because that’s all they could carry in their travel bag? They can hardly remember what a CD is. Nor should they. It’s not like when I was a kid I had any concept of a world where we hung out by the radio to get news, sports, entertainment – basically everything. But that’s how my folks grew up. I wonder if someday SkyNet will look back and wonder what things were like before it was self-aware? Oy, that’s a slippery slope.
-Mike

Photo credits: “Childlike Wonder” originally uploaded by SashaW

Incite 4 U

  • Peeking into Dan’s brain: There are a select few folks who really make me think. Like every time I talk to them (which isn’t enough), I have to bring my A game, just to hold a conversation. Dan Geer is one of those folks. So when the Threatpost folks asked Dan about the research agenda in security, he didn’t disappoint. He starts by proposing that we’d need a lot less research if we put into practice what we already know, and that we should research why we don’t do that. Yeah, Dan makes recursive thinking cool. Then there are other nuggets about building systems too complex to effectively manage, the strategic importance of traffic analysis, and the security implications of IPv6. He may not have all those research-grade answers yet, but Dan certainly knows the questions to ask. – MR
  • Johnny doesn’t care: Carnegie Mellon released a research paper called Why Johnny Can’t Opt Out, an examination of tools to thwart online behavioral monitoring, and how users use them. I recommend downloading the paper and taking a quick look at the study – it contains some interesting stuff, but I am a bit disappointed by several aspects. First, the executive summary makes it sound like the tools they surveyed are ineffective, when that’s clearly not the case. They found users were confused by the UIs of the respective products and failed to configure the products correctly. OK, that’s reasonable – most utilities leave a bit to be desired from a user experience standpoint. But not all offerings are like that; for example Ghostery’s setup wizard is dead simple to use, but the data is the data. The other thing that bothered me was not testing NoScript (a fantastic tool!) as another privacy tactic. The final annoyance was their assumption that users do not want privacy tools to hinder usability! WTF? They do understand behavioral advertising is woven into the web’s fabric, right? That “no hindrance” requirement eliminates NoScript, and stymies any effective product, because there’s no way to eliminate certain risks


Breakdown of Trust and Privacy

I try not to cover data privacy much anymore, despite being an advocate, because we have already crossed the point of no return. We have allowed just about every piece of our personal data to be available on the Internet, making privacy effectively a dead issue – but in most cases the user makes the choice. Many very large public firms, however, have been promising consumers that they carefully protect customer information and fully anonymize any data before it is sold. This is bull$&!#.

As an example, Visa and MasterCard have been in the news lately because of the sale of ‘anonymized’ data to marketing firms. True to form, “MasterCard told the Journal that customers have nothing to worry about.” But most firms that collect customer data – MasterCard included – know full well that their marketing partners can and do link purchase histories to specific individuals. Especially when you leave breadcrumbs to follow: something like customer ID, or last name and age – either of which serves as a surefire way of pinpointing user identity. And the third party firms can do this because Visa leaves enough information to accommodate linking.

We know MasterCard is speaking out of both sides of its mouth on this – their own corporate sales presentations to marketing organizations tout it as an advantage: “We have extensive experience partnering with third parties to link anonymized purchased attributes to consumer names and addresses (owned by third party)”. This sort of thing may bother you; it may not. But let’s be clear that MasterCard is lying about the practice because they know the majority of the public feels selling their personal data is a betrayal of trust. These slides clearly demonstrate that this isn’t just a simple lie or mistake – it’s a bold-faced lie. They have been marketing their concern for user data privacy to consumers, while marketing de-anonymization to third party marketers, for years. And third party marketing firms pay a lot of money for the data because they know it can be linked to specific card holders.

I am especially aggravated by this compromising of user data because MasterCard and Visa don’t just facilitate electronic fund transfers – they also actively market the trustworthiness of their brands to consumers. Turning around and selling this data, obviously intending for it to be reverse engineered, betrays that trust. As I mentioned recently in a post on Payment Trends and Security Ramifications, the card brands are eager to increase revenue through these third party relationships for targeted ads and affinity marketing. I fully expect to see coupons available via smart cards in the next two years, in an attempt to disintermediate companies like Groupon. And in their rush to profit from profiling, they seem to have forgotten that users are tired of these shenanigans.

Of course their legal teams say customer privacy comes first, then get defensive when people like me say otherwise, touting their ‘opt-out’ options. But customers can’t really opt out. Not just because the options are hidden on their various sites where no one can find them all. And not just because you’re automatically opted in when you get each card. The deeper problem is that this data is always collected, no matter what. It’s hard coded into the systems that process the transactions. Always. It’s simply a question of whether MasterCard chooses to sell customer data – and in light of the above quote it is difficult to trust them. If they want to earn our trust, they should show us sample data and how it is anonymized. I am willing to bet it cannot stand up to scrutiny.
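To make the linking claim concrete, here is a minimal sketch in Python using entirely made-up data: an ‘anonymized’ transaction feed that still carries last name and age can be joined against a marketing list containing the same fields plus full identity. Nothing here reflects any card brand’s actual data format – it simply shows why quasi-identifiers defeat anonymization.

```python
# Hypothetical illustration: an "anonymized" transaction feed that still carries
# quasi-identifiers (last name + age) is trivially re-identified by joining it
# against a marketing list holding the same fields plus full identity. All data
# below is invented for the example.

anonymized_transactions = [
    {"last_name": "Smith", "age": 42, "merchant": "Acme Hardware", "amount": 118.40},
    {"last_name": "Jones", "age": 37, "merchant": "Corner Pharmacy", "amount": 23.10},
]

marketing_list = [  # purchased separately by the marketing firm
    {"last_name": "Smith", "age": 42, "full_name": "John Smith", "address": "12 Oak St"},
    {"last_name": "Jones", "age": 37, "full_name": "Mary Jones", "address": "9 Elm Ave"},
]

def reidentify(transactions, identities):
    """Join the two datasets on the quasi-identifiers left in the 'anonymized' feed."""
    index = {(p["last_name"], p["age"]): p for p in identities}
    for t in transactions:
        person = index.get((t["last_name"], t["age"]))
        if person:
            yield {**person, **t}

for match in reidentify(anonymized_transactions, marketing_list):
    print(f'{match["full_name"]} ({match["address"]}) spent '
          f'${match["amount"]:.2f} at {match["merchant"]}')
```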


Applied Network Security Analysis: The Breach Confirmation Use Case

As our last use case in Applied Network Security Analysis, it’s time to consider breach confirmation: confirming and investigating a breach that has already happened. There are clear similarities to the forensics use case, but breach confirmation takes forensic analysis to the next level: you need to learn the extent of the breach, determining exactly what was taken and from where. So let’s revisit our Forensics scenario to look at how it can be extended to confirm a breach.

In that scenario, a friend at the FBI gave you a ring to let you know they found some of your organization’s private data during a cybercrime investigation. In the initial analysis you found the compromised devices and the attack paths, as part of isolating the attack and containing the damage. You cleaned the devices and configured IPS rules to ensure those particular attacks will be blocked in the future. You also added some rules to the network security analysis platform to ensure you are watching for those attack traffic patterns moving forward – just in case the attackers evade the IPS. But you still can’t answer the question “What was stolen?” with any precision. We know the attackers compromised a device and used it to pivot within your environment to get to the target: a database of sensitive information. We can’t assume that was the only target, and if your attack was like any other involving a persistent attacker, they found multiple entry points and have a presence on multiple devices. So it’s time to head back into your analysis console to figure out a few more things.

  • What other devices were impacted: Start by figuring out how many other machines were compromised by the perimeter device. By searching all traffic originating from the compromised DMZ server, you can see which devices were scanned and possibly owned. Then you can confirm using either the configuration data you’ve been collecting, or by analyzing the machine using an endpoint forensics tool. You may have gathered some or all of this information while trying to isolate the attack path.
  • What was taken: Next you need to figure out what was taken. You already know at least one egress path, identified during the initial forensic analysis. Now you need to dig deeper into the egress captures to see if there were any other connections or file transfers to unauthorized sites. The attackers continue to improve their exfiltration techniques, so it’s likely they’ll use both encrypted protocols and encrypted files to make it hard to figure out what was actually stolen. Having the full packet stream allows you to analyze the actual files, though depending on the sophistication of your attacker, you might need specialized help (from third party experts or law enforcement) to break the crypto.

Remember that the first wave of forensic investigation focuses on determining the attack paths and gathering enough information to do an initial damage assessment and remediation. From there it’s all about containing the immediate damage as best you can. This next level of forensic analysis is more comprehensive: determine the true extent of the compromise and inventory what was taken. As you can imagine, without the network packet capture it’s impossible to do this analysis. You would be stuck with log files telling you that something happened, but not what, how, or how much was taken. That’s why we keep harping on the need for more comprehensive data on which to base network security analysis.
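To make the lateral movement and egress searches concrete, here is a minimal sketch in Python. It operates on hypothetical flow records – the field names, addresses, and structure are stand-ins, not any particular product’s export format – and simply separates the compromised server’s internal connections from its external ones.

```python
# Minimal sketch over hypothetical flow records (not a specific product's export
# format): given connections captured by the analysis platform, separate the
# compromised DMZ server's internal touches (possible lateral movement) from its
# external destinations (possible exfiltration paths worth pulling packets for).

from collections import defaultdict

COMPROMISED = "10.1.20.15"          # the owned DMZ server (made-up address)
INTERNAL    = ("10.", "192.168.")   # crude internal-address test for the sketch

flows = [  # (src_ip, dst_ip, dst_port, bytes_sent) -- illustrative records only
    ("10.1.20.15", "10.2.5.40",    1433, 18_000_000),    # pivot toward a database server
    ("10.1.20.15", "10.2.5.41",    445,  120_000),
    ("10.1.20.15", "203.0.113.77", 443,  2_400_000_000),  # large encrypted egress
]

touched, egress = defaultdict(int), defaultdict(int)
for src, dst, port, sent in flows:
    if src != COMPROMISED:
        continue
    bucket = touched if dst.startswith(INTERNAL) else egress
    bucket[(dst, port)] += sent

print("Internal hosts scanned or possibly owned:", dict(touched))
print("Egress destinations to inspect in the packet capture:", dict(egress))
```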
Clearly you can’t capture all the data flowing around your networks, so it’s likely you’ll miss something. But you will have a lot more useful information for responding to attacks than organizations which do not capture traffic.

Summary

To wrap up our Applied Network Security Analysis series, let’s revisit some of the key concepts we’ve covered. First of all, in today’s environment you can’t assume you will be able to stop a targeted attacker. It is much smarter and more realistic to assume your devices are compromised, and to act accordingly. This assumption puts a premium on detecting successful attacks, preventing breaches, and containing damage. All those functions require data collection to understand what is happening in your environment.

  • Log Data Is Not Enough: Most organizations start by collecting their logs, which is clearly necessary for compliance purposes. But it’s not sufficient – not by a long shot. Additional data – including configuration, network flow, and even the full network packet stream – is key to understanding what is happening in your environment.
  • Forensics: We walked through how these additional data sources can be instrumental in a forensic investigation, ultimately resulting in a breach confirmation (or not). The key objective of forensics is to figure out what happened, and the ability to replay attacks and monitor egress points (using the actual traffic) replaces forensic interpretation with tested fact. And forensics folks like facts much better.
  • Security: These additional data sources are not just useful after an attack has happened, but also for recognizing issues earlier. By analyzing the traffic in your perimeter and critical network segments, you can spot anomalous behavior and investigate preemptively. To be fair, the security use cases are predicated on knowledge of what to look for, which is never perfect. But in light of the number of less sophisticated attackers using known attacks, making sure you don’t get hit by the same attack twice is very useful.

We all know there are always more projects than your finite resources and funding will allow. So why should network security analysis bubble up to the top of your list? The answer is about what we don’t know – we cannot be sure what the next attack will be, but we know it will be hard to detect, and it will steal critical information. We built the Securosis security philosophy on Reacting Faster and Better, by focusing on gathering as much data as possible, which requires an institutional commitment to data collection and analysis.


A Public Call for eWallet Design Standards

Last week StorefrontBacktalk ran an article on Mobile Wallets. It underscored my personal naivete in assuming that anyone who designed and built a digital wallet for ecommerce would first and foremost protect customer payment data and other private information. Reading this post I had one of those genuine “Oh $&!#” moments – what if the wallet provider was not interested in my security or privacy? Duh!

A wallet is a small data store for your financial, personal, and shopping information. Think about that for a minute. If you buy stuff on your computer or from your phone via an eWallet app, over time it will collect a ton of information. Transaction receipts. Merchant lists. Merchant relationship information such as passwords. Buying history. Digital coupons. Pricing information. Favorites and wish lists. Private keys. This is all in addition to “payment instruments” such as credit cards, PayPal accounts, and bank account information – along with personal data including phone number, address, and (possibly) Social Security Number for antitheft/identity verification. It’s everything about you and your buying history all in one spot – effectively a personal data warehouse, on you. And it’s critical that you control your own data. This is a really big deal!

To underscore why, let me provide a similar example from an everyday security product. For those of you in security, wallets are effectively personal equivalents to key management servers and Hardware Security Modules (HSMs). Key management vendors do not have full access to their customers’ encryption keys. You do not and would not give them a backdoor to the keys that secure your entire IT infrastructure. The whole point of an HSM is to secure the data from everyone who is not authorized to use it. And only the customer who owns the HSM gets to decide who gets keys. For those of you not in security, think of the eWallet as a combination wallet and keychain. It’s how you gain access to your home, your car, your mailbox, your office, and possibly your neighbors’ houses for when you catsit. And it holds your cash (technically more like a blank checkbook, along with your electronic signature), credit cards, debit card, pictures of your kids, and that Post-It with your passwords. You don’t hand this stuff out to third parties! Heck, when your kid wants to borrow the car, you only give them one key and forty bucks for gas – they don’t get everything!

But the eWallet systems described in that article don’t belong to you – they are the property of third parties, who would naturally want the ability to rummage through them for useful (marketing and sales) data – what you might consider your data. Human history clearly shows that if someone can abuse your trust for financial gain, they will. Seriously, people – don’t give your wallet to strangers.

Let’s throw a couple of design principles out there for people who are building these apps:

  • If the wallet does not secure all the user’s content – not just credit card data – it’s insecure and the design is a failure.
  • If the wallet’s author does not architect and implement controls for the user to select what they wish to share with third parties, they have failed.
  • If the wallet does not programmatically protect one ‘pocket’, or compartment inside the wallet, from other compartments, it is untrustworthy (as is its creator).
  • If the wallet has a vendor backdoor, it has failed.
  • If the wallet does not use secure and publicly validated communications protocols, it has failed.
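To illustrate the compartment principle above, here is a minimal sketch – hypothetical class and method names, not any real wallet API – of per-compartment encryption with explicit, user-controlled sharing. It uses the Python cryptography library’s Fernet primitive for symmetric encryption.

```python
# Sketch of the "one pocket per compartment" principle: each compartment holds
# its own key, and sharing with a third party is an explicit, per-compartment
# user decision. Hypothetical structure and names, not a real wallet API.

from cryptography.fernet import Fernet  # pip install cryptography

class WalletCompartment:
    def __init__(self, name: str):
        self.name = name
        self._box = Fernet(Fernet.generate_key())  # key held for the user, never the vendor
        self._items: dict[str, bytes] = {}
        self.shareable = False                     # user opts in per compartment

    def put(self, label: str, secret: str) -> None:
        self._items[label] = self._box.encrypt(secret.encode())

    def get(self, label: str) -> str:
        return self._box.decrypt(self._items[label]).decode()

    def share_with(self, party: str) -> dict:
        if not self.shareable:
            raise PermissionError(f"{self.name}: user has not authorized sharing with {party}")
        return {label: self.get(label) for label in self._items}

payments = WalletCompartment("payment instruments")
payments.put("visa", "4111111111111111|12/25")

history = WalletCompartment("purchase history")
history.put("last_purchase", "Acme Hardware / $118.40")
history.shareable = True                           # explicit, user-controlled choice

print(history.share_with("coupon-network"))        # allowed: user opted this compartment in
try:
    payments.share_with("coupon-network")          # blocked: the payment pocket stays sealed
except PermissionError as err:
    print(err)
```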
Wallet designers need to consider the HSM / key management security model. The wallet must protect user data from all outsiders first and foremost. If sharing data/coupons/trends/transaction receipts, easy shopping, “loyalty points”, providing location data, or any other objective supersedes security, the wallet needs to be scrapped and re-engineered. Security models like iOS compartmentalization could be adapted, but any intra-wallet communication must be tightly controlled – likely forcing third parties to perform various actions outside the wallet, if the wallet cannot enable them with sufficient security and privacy. I’ll follow up by considering the critical components of a wallet as a general design framework; things like payment protocols, communications protocols, logging, authentication, and digital receipts should all be standardized. But more important: the roles of buyer, seller, and any mediators should be defined publicly. Just because some giant company creates an eWallet does not mean you should trust it.


Understanding and Selecting DAM 2.0: Market Drivers and Use Cases

I was going to begin this series by talking about some of the architectural changes, but I’ve reconsidered. Since our initial coverage of Database Activity Monitoring technology in 2007, the products have fully matured into enterprise-worthy platforms. What’s more, they’ve demonstrated significant security and compliance benefits, as evidenced by market growth from $40M to revenues well north of $100M per year. This market is no longer dominated by small vendors – it is now dominated by large vendors, who have acquired six of the DAM startups. As such, DAM is being integrated with other security products into a blended platform. Because of this, I thought it best to go back and define what DAM is, and discuss market evolution first, as that better frames the topics we’ll cover in the rest of this series.

Defining DAM

Our longstanding definition is: Database Activity Monitors capture and record, at a minimum, all Structured Query Language (SQL) activity in real time or near real time, including database administrator activity, across multiple database platforms, and can generate alerts on policy violations.

While a number of tools can monitor various levels of database activity, Database Activity Monitors are distinguished by five features:

  • The ability to independently monitor and audit all database activity, including administrator activity and SELECT transactions. Tools can record all SQL transactions: DML, DDL, DCL, and sometimes TCL activity.
  • The ability to store this activity securely outside of the database.
  • The ability to aggregate and correlate activity from multiple, heterogeneous Database Management Systems (DBMS). Tools can work with multiple DBMS (e.g., Oracle, Microsoft, IBM) and normalize transactions from different DBMS despite differences in their flavors of SQL.
  • The ability to enforce separation of duties on database administrators. Auditing must include monitoring of DBA activity, and solutions should prevent DBA manipulation of and tampering with logs and activity records.
  • The ability to generate alerts on policy violations. Tools don’t just record activity – they provide real-time monitoring, analysis, and rule-based alerting. For example, you might create a rule that generates an alert every time a DBA performs a SELECT query on a credit card column that returns more than 5 results.

DAM tools are no longer limited to a single data collection method – they offer network, OS layer, memory scanning, and native audit layer support. Users can tailor their deployment to their security and performance requirements, and collect data from the sources that best fit those requirements.

Platforms

Reading that, you’ll notice few differences from what we discussed in 2007. Further, we predicted the evolution of Application and Database Monitoring and Protection (ADMP) on the road to Content Monitoring and Protection, stating “DAM will combine with application firewalls as the center of the applications and database security stack, providing activity monitoring and enforcement within databases and applications.” Where it gets interesting is the different routes vendors are taking to achieve this unified model. It’s how vendors bundle DAM into a solution that distinguishes one platform from another.

  • The Enterprise Data Management Model – In this model, DAM features are generically extended to many back-office applications. Data operations, such as a file read or SAP transaction, are treated just like a database query. As before, operations are analyzed to see if a rule was violated, and if so, a security response is triggered. In this model DAM does more than alerting and blocking – it leverages masking, encryption, and labeling technologies to address security and compliance requirements. This model relies heavily on discovery to help administrators locate data and define usage policies in advance. While similar to SIEM in many respects, the model leans more toward real-time analysis of data usage. There is some overlap with DLP, but this model lacks endpoint capabilities and full content awareness.
  • The ADMP Model – Sometimes called the web AppSec model: here DAM is linked with web application firewalls to provide activity monitoring and enforcement within databases and applications. DAM protects content in a structured application and database stack, WAF shields application functions from misuse and injection attacks, and File Activity Monitoring (FAM) protects data as it moves in and out of documents or unstructured repositories. This model is more application-aware than the others, reducing false positives through transactional awareness. The ADMP model also provides advanced detection of web-borne threats.
  • The Policy-Driven Security Model – The classic database security workflow of discovery, assessment, monitoring, and auditing, with each function overlapping the next to pre-generate rules and policies. In this model, DAM is just one of many tools to collect and analyze events, and not necessarily central to the platform. What’s common among vendors who offer this model is policy orchestration: policies are abstracted from the infrastructure, with the underlying database – and even non-database – tools working in unison to fulfill the security and compliance requirements. How work gets done is somewhat hidden from the user. This model is great for reducing the pain of creating and managing policies, but as the technologies are pre-bundled, it lacks the flexibility of other platforms.
  • The Proxy Model – Here DAM sits in front of the database, filtering inbound requests and acting as a proxy server. What’s different is what the proxy does with inbound queries. In some cases the query is blocked because it fits a known attack signature, and DAM acts as a firewall to protect the database – a method sometimes called ‘virtual patching’. In other cases the query is not forwarded to the database because the DAM proxy has recently seen the same request, and returns query results directly to the calling application – DAM acting, in essence, as a cache to improve performance. Some platforms also provide the option of rewriting inbound queries, either to optimize performance or to minimize the risk that an inbound query is malicious.

DAM tools have expanded into other areas of data, database, and application security.

Market Drivers

DAM tools are extremely flexible and often deployed for what may appear to be totally unrelated reasons. Deployments are typically driven by one of three drivers: Auditing for
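Circling back to the alerting feature in the definition above, here is a hedged sketch of what such a rule looks like in practice. The event fields are hypothetical stand-ins for whatever a DAM collector normalizes out of the audit stream or network capture – not any vendor’s schema.

```python
# Hedged sketch of the alerting rule described in the DAM definition above:
# flag any SELECT by a DBA against a credit card column that returns more than
# 5 rows. Event fields are hypothetical stand-ins for normalized collector
# output, not a vendor schema.

MONITORED_COLUMNS = {"credit_card_number", "pan"}

def dba_card_select_rule(event: dict) -> bool:
    return bool(
        event["statement_type"] == "SELECT"
        and event["user_is_dba"]
        and event["rows_returned"] > 5
        and MONITORED_COLUMNS & set(event["columns"])
    )

event = {
    "user": "sys_admin_01",
    "user_is_dba": True,
    "statement_type": "SELECT",
    "columns": ["customer_id", "credit_card_number"],
    "rows_returned": 250,
}

if dba_card_select_rule(event):
    print(f"ALERT: {event['user']} returned {event['rows_returned']} rows from a monitored column")
```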


Tokenization Guidance: PCI Requirement Checklist

So far in this series on tokenization guidance for protecting payment data, we have covered deficiencies in the PCI supplement, offered specific advice for merchants to reduce audit scope, and provided specific tips on what to look for during an audit. In this final post we will provide a checklist of each PCI requirement affected by tokenization, with guidance on how to modify compliance efforts in light of tokenization. I have tried to be as brief as possible while still covering the important areas of compliance reporting you need to adjust. Here is our recommended PCI requirements checklist for tokenization:

  • 1.2 Firewall Configuration: The token server should restrict all IP traffic – to and from – to the systems specified under the ‘ground rules’, specifically: the payment processor, the PAN storage server (if separate), systems that request tokens, and systems that request de-tokenization. This is no different from the PCI requirements for the CDE, but it’s recommended that these systems communicate only with each other. If the token server is on site, Internet and DMZ access should be limited to communications with the payment processor.
  • 2.1 Defaults: Implementation for most of requirement 2 will be identical, but section 2.1 is most critical in the sense that there should be no ‘default’ accounts or passwords for the tokenization server. This is especially true for systems that are remotely managed or have remote customer-care options. All PAN security hinges on effective identity and access management, so establishing unique accounts with strong pass-phrases is essential.
  • 2.2.1 Single function servers: 2.2.1 bears mention both from a security standpoint and for protection from vendor lock-in. For security, consider an on-premise token server as a standalone function, separate and distinct from the applications that make tokenization and de-tokenization requests. To reduce vendor lock-in, make sure the token service API calls or vendor-supplied libraries used by your credit card processing applications are sufficiently abstracted to facilitate switching token services without significant modifications to your PAN processing applications.
  • 2.3 Encrypted communication: You’ll want to encrypt non-console administration, per the specification, as well as all API calls to the token service. It’s also important, when using multiple token servers to support failover, scalability, or multiple locations, that all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.
  • 3.1 Minimize PAN storage: The beauty of tokenization is that it’s the most effective solution available for minimizing PAN storage. By removing credit card numbers from every system other than the central token server, you cut the scope of your PCI audit. Look to tokenize or remove every piece of cardholder data you can, keeping PAN only in the token server. You’ll still meet business, legal, and regulatory requirements while improving security.
  • 3.2 Authentication data: Tokenization does not circumvent this requirement; you must remove sensitive authentication data per sub-sections 3.2.x.
  • 3.3 Masks: Technically you are allowed to preserve the first six (6) digits and the last four (4) digits of the PAN. However, we recommend you examine your business processing requirements to determine whether you can fully tokenize the PAN, or at a minimum preserve only the last four digits for customer verification. With the first six and last four digits preserved, the remaining digits offer too small a space for many merchants to generate quality random tokens. Please refer to ongoing public discussions on this topic for more information. When using a token service from your payment processor, ask for single-use tokens to avoid possible cases of cross-vendor fraud.
  • 3.4 Render PAN unreadable: One of the principal benefits of tokenization is that it renders PAN unreadable. However, auditing environments with tokenization requires two specific changes: first, verify that PAN data is actually swapped for tokens in all systems; second, for on-premise token servers, verify that the token server adequately encrypts stored PAN, or offers an equivalent form of protection, such as not storing PAN data at all (select vendors offer one-time pad and codebook options that don’t require PAN storage). We do not recommend hashing, as it offers poor PAN data protection. Many vendors store hashed PAN values in the token database as a means of speedy token lookup, but while common, it’s a poor choice. Our recommendation is to encrypt PAN data, and as many use databases to store information, we believe table, column, or row level encryption within the token database is your best bet. Full database or file layer encryption can be highly secure, but most solutions offer no failsafe protection when database or token admin credentials are compromised. We acknowledge our recommendations differ from most, but experience has taught us to err on the side of caution when it comes to PAN storage.
  • 3.5 Key management: Token servers encrypt the PAN data stored internally, so you’ll need to verify the supporting key management system as best you can. Some token servers offer embedded key management, while others are designed to leverage your existing key management services. Very few people can adequately evaluate key management systems to ensure they are really secure, but at the very least you can check that the vendor is using industry standard components, or has validated their implementation with a third party. Just make sure they are not storing the keys in the token database unencrypted. It happens.
  • 4.1 Strong crypto on Internet communications: As with requirement 2.3, when using multiple token servers to support failover, scalability, and/or multi-region support, ensure all synchronization occurs over encrypted communications, preferably a dedicated VPN with bi-directional authentication.
  • 6.3 Secure development: On-site or third party token servers will both introduce new libraries and API calls into your environment. It’s critical that your development process includes validation that what you put into production is secure. You can’t take the vendor’s word for it – you’ll need to validate that all defaults, debugging code, and API calls are secured. Ultimately
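As a concrete (and heavily simplified) illustration of the vault concept behind requirements 3.1, 3.3, and 3.4 above, here is a hypothetical sketch in Python: the application keeps only a random token that preserves the last four digits, while the PAN lives encrypted solely in the token server’s store. It ignores uniqueness checks, access control, and external key management, all of which a real implementation needs.

```python
# Minimal sketch of the token-vault idea behind requirements 3.1, 3.3, and 3.4:
# applications keep only a random token (preserving the last four digits for
# customer verification), while the PAN is encrypted and stored solely in the
# token server. Hypothetical code -- no uniqueness checks, access control, or
# external key management, all of which a real implementation needs (3.5, 1.2).

import secrets
from cryptography.fernet import Fernet  # pip install cryptography

class TokenVault:
    def __init__(self):
        self._box = Fernet(Fernet.generate_key())  # real deployments use external key management
        self._store: dict[str, bytes] = {}         # token -> encrypted PAN

    def tokenize(self, pan: str) -> str:
        token = "".join(secrets.choice("0123456789") for _ in range(12)) + pan[-4:]
        self._store[token] = self._box.encrypt(pan.encode())
        return token

    def detokenize(self, token: str) -> str:       # callers restricted per requirement 1.2
        return self._box.decrypt(self._store[token]).decode()

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                  # random digits with the same last four: safe to store downstream
print(vault.detokenize(token))
```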


Applied Network Security Analysis: The Malware Analysis Use Case

As we resume our tour of advanced use cases for Network Security Analysis, it’s time to consider malware analysis. Most successful attacks involve some kind of malware at some point – even if only to maintain a presence on the compromised device, some kind of bad stuff gets injected. And once the bad stuff is on a device, it’s very, very hard to get rid of it – and even harder to be sure you have. Most folks (including us) recommend you just re-image the device rather than trying to clean the malware. This makes it even more important to detect malware as quickly as possible and (hopefully) block it before a user does something stupid to compromise their device.

There are many ways to detect malware, depending on the attack vector, but a lot of what we see today is snuck through port 80 as web traffic. Sure, dimwit users occasionally open a PDF or ZIP file from someone they don’t know, but more often it’s a drive-by download, which means it comes in with all the other web traffic. So we have an opportunity to detect this malware when it enters the network. Let’s examine two situations: one with a purpose-built device to protect against web malware, and another where we’re analyzing malware directly on the network analysis platform.

Detecting Malware at the Perimeter

As we’ve been saying throughout this series, extending data collection and capture beyond logs is essential to detecting modern attacks. One advantage of capturing the full network packet stream at the ingress point of your network is that you can check for known malware and alert as it enters the network. This approach is better than nothing, but it has two main issues:

  • Malware sample accuracy: This approach requires accurate and comprehensive malware samples already loaded into the device to detect the attack. We all know that approach doesn’t work well with endpoint anti-virus and is completely useless against zero-day attacks, and this approach has the same trouble.
  • No blocking: Additionally, once you detect something on the analysis platform, your options to remediate are pretty limited. Alerting on malware entering the network is useful, but blocking it is much better.

Alternatively, a new class of network security device has emerged to deal with this kind of sneaky malware, by exploding the malware as it enters the network to understand the behavior of inbound sessions. Given the prevalence of unknown zero-day attacks, the ability to classify known bad behavior and see how a packet stream actually behaves can be very helpful. Of course no device is foolproof, but these devices can provide earlier warning of impending problems than traditional perimeter network security controls. Using these devices you can also block the offending traffic at the perimeter if it is detected in time, reducing the likelihood of device compromise. But you can’t guarantee you will catch all malware, so you must figure out the extent of the compromise.

There is also a more reactive approach: analyzing outbound traffic to pinpoint known command and control behavior and targets, which usually indicates a compromised device. At this point the device is already pwned, so you need to contain the damage. Either way, you must figure out exactly what happened and whether you need to sound the alarm.

Containing the Damage

Based on the analysis at the perimeter, we know both the target device and the originating network address. With our trusty analysis platform we can then figure out the extent of the damage.
Let’s walk through the steps:

  • Evaluate the device: First you need to figure out whether the device is compromised. Your endpoint protection suite might not be able to catch an advanced attack, so search your analysis platform (and logging system) for any configuration changes made on the device, and look for any strange behavior – typically through network flow analysis. If the device is still clean, all the better. But we will assume it’s not.
  • Profile the malware: Now that you know the device is compromised, you need to figure out how. Sure, you could just wipe it, but that eliminates the best opportunity to profile the attack. The network traffic and device information enable your analysts to piece together exactly what the malware does, replay the attack to confirm, and profile its behavior. This helps figure out how many other devices have been compromised, because you know what to look for.
  • Determine the extent of the damage: The next step is to track malware proliferation. You can search the analysis platform for the malware profile you built in the last step. This might mean looking for communication with the external addresses you identified, identifying command and control patterns, or watching for the indicative configuration changes; but however you proceed, having all that data in one place facilitates identifying compromised devices.
  • Watch for the same attack: You know the saying: “Fool me once, shame on you. Fool me twice, shame on me.” Shame on you if you let the same attack succeed on your network. Add rules to detect and block the attacks you have already seen. We have acknowledged repeatedly that security professionals get no credit for blocking attacks, but you certainly look like a fool if you get compromised repeatedly by the same attack.

You are only as good as your handling of the latest attack. So learn from these attacks; the additional data collection capabilities of network security analysis platforms can give you an advantage, both for containing the damage and for ensuring it doesn’t happen again. As we wrap up this Applied Network Security Analysis series early next week, we will examine the use case of confirming a breach actually happened, and then revisit the key points to solidify our case for capturing network traffic as a key facet of your detection capabilities.
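As a companion to the “determine the extent of the damage” step, here is a hedged sketch in Python that scans hypothetical outbound flow records for hosts beaconing to the same external address at near-constant intervals – a common command-and-control tell. The addresses and record format are illustrative only.

```python
# Hedged sketch: scan hypothetical outbound flow records for internal hosts
# beaconing to the same external address at near-constant intervals -- a common
# command-and-control tell. Addresses and record format are illustrative only.

from collections import defaultdict
from statistics import pstdev

# (timestamp_seconds, internal_src, external_dst) exported from the analysis platform
flows = [(t, "10.3.7.22", "198.51.100.9") for t in range(0, 3600, 300)] + [
    (125, "10.3.7.40", "93.184.216.34"),
    (900, "10.3.7.40", "93.184.216.34"),
]

by_pair = defaultdict(list)
for ts, src, dst in flows:
    by_pair[(src, dst)].append(ts)

for (src, dst), times in by_pair.items():
    if len(times) < 5:
        continue                       # too few connections to call it a beacon
    gaps = [b - a for a, b in zip(times, times[1:])]
    if pstdev(gaps) < 5:               # near-constant interval => suspicious beacon
        print(f"{src} beacons to {dst} every ~{gaps[0]}s; hunt for the same pattern on other hosts")
```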


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.