Securosis

Research

Friday Summary – October 30, 2009

This week’s Friday Summary is sponsored by Evilsquirrel Enterprises, your World Domination Specialists. My absolute favorite holiday of the year is Halloween. More than Christmas (possibly because I’m a non-practicing Jew), more than my birthday, and even more than Talk Like a Pirate Day. Halloween is the ultimate geek holiday. It’s the one time of year we have an excuse to pull out our table saws, microcontrollers, and pneumatics as we build wonderful devices to soil the underwear of all the neighborhood children. I knew I was finally getting it right the first year a group of kids carefully approached our home, then ran off screaming as the motion sensor tripped and the effects kicked in. Between the business and the baby I haven’t really had time to build anything new this year, but I did finally invest in some commercial-grade fog machines. Fog, light, and sound are absolutely essential for setting a good scene, and go a lot further than any actual decorations. I’ve previously used the cheap foggers from Party City or the Halloween stores, but never managed to get them to last more than two years in a row. I’m hoping this commercial unit will be a bit more reliable… and the 20,000 cubic feet per minute of fog it kicks out can’t hurt. This is the 13th year, 4th location, and 2nd state for our annual Evilsquirrel party. It’s a bit smaller than the “Squirrel Wars” year where we had 300 people show up and 4 live bands, but that’s what happens when everyone runs off and starts careers and families. Needless to say, my friends and I are all tremendously amused that the whole “squirrel” meme is so big these days. Now we don’t seem quite as weird.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted in The Register on Microsoft’s new anti-exploitation tool.
  • Adrian on The ABCs of DAM at Dark Reading.
  • The Security and Privacy Conundrum: David Mortman spoke last week to the Ohio CIO Forum about security and privacy risks in the cloud.
  • Rich and Martin on The Network Security Podcast, Episode 171.

Favorite Securosis Posts

  • Rich: Mort’s post on IDM.
  • Adrian, Meier, and Mort: Most developers don’t know what anti-exploitation measures are, which in an odd way is why Rich’s post Add Anti-Exploitation to Applications You Didn’t Write is important. We’ve got to start somewhere…

Other Securosis Posts

  • Penetration Testing Market Grows and Matures, but Faces Challenges
  • Penetration Testing Market Update, Part 2
  • Amazon RDS Announced
  • IDM: Identity?

Favorite Outside Posts

  • Rich: This Wired article on the anti-vaccination movement. It’s an extremely important article, but here’s the money quote for us security folks: “Looking back over human history, rationality has been the anomaly. Being rational takes work, education, and a sober determination to avoid making hasty inferences, even when they appear to make perfect sense. Much like infectious diseases themselves – beaten back by decades of effort to vaccinate the populace – the irrational lingers just below the surface, waiting for us to let down our guard.”
  • Adrian: Jeremiah’s post on Black Box vs. White Box. QA professionals have used this ‘threshold of stability’ approach for years to gate software releases, but it seems counter-intuitive to security professionals.
  • Mortman: Detecting Malice released. Only halfway through and it is completely awesome. Best tech book I’ve read in ages. (I second that. -Rich) (Meier thirds it: “Anyone I bring it up to first complains about the $40 eBook, but it’s the best technical book I’ve bought in a while.”)
  • Meier: Amazon Lets Shoppers Pay With a Phrase. This is just dumb. First, we have a phrase that’s verifiably known to be taken; second, I bet if someone did research on any web authentication mechanism identified as a “PIN”, you could map the majority of those users’ bank PINs to their other PINs. I don’t get it. Oh, and to change your PayPhrase you have to log in anyway. Way to go, Amazon.
  • Rich (2): I can’t help myself, I had a tie this week. This article from Ivan Arce at Core Security is a month old, but well worth the read.

Special – Worst Link of the Week

“Women In IT Security Project Management”. This paper is beyond terrible. Not only is it poorly written (which it is), but it doesn’t make a lick of sense. Case in point – check out this bit from the first page:

In this study, I have tried to determine if IT security project management is a viable career choice for women. If so, do they have what it takes to be a successful IT Security Project Manager? I would like to emphasize that IT profession cannot be generalized based on gender. No conclusion has been drawn to indicate if one sex is better than the other in any of the subsets within IT field.

Isn’t it great how the author, Gurdeep Kaur, simultaneously tells us that she’s going to investigate whether one gender has the ability to do a job, and then claims that you can’t generalize on the basis of gender? You really shouldn’t read the paper, but if you do, it goes downhill from there. The analysis is shallow and suffers largely from citing lots of studies that demonstrate the problem while providing little in the way of solutions. The few suggestions provided are insulting, to say the least. I’d quote more but I can’t bring myself to do it. I am amazed that SANS actually posted this to their reading room and granted the author a “Gold Certification”.

Top News and Posts

  • China expands cyberspying. Duh… I hope we are too.
  • Is Your Data Really Secured? by Nati Shalom. Some overlap with our Cloud Data Security series, and worth a read.
  • Cisco acquires ScanSafe.
  • Threat Level’s story on the 2006 Walmart hack. Hackers foiled by their own installation of L0phtcrack!
  • Nice post on Threat Modeling from the Matasano team.
  • Indeed, software would be great if it wasn’t for the users! Microsoft’s response: Engineers vs. Ninjas on the Microsoft SDL Blog.
  • AV researcher publishes AV Tracker tool.
  • NSA to


Penetration Testing Market Update, Part 2

This is part 2 of a series; click here for Part 1.

Penetration testing solution and market changes

I’m not exactly sure when Core Security Technologies and Immunity started business, but before then there were no dedicated commercial penetration testing tools. There were a number of vulnerability scanners, and plenty of different “micro” tools to help with different parts of a pen test, but no dedicated exploitation tools. Metasploit later changed this on the non-commercial side. For those who aren’t experts in this area, it’s important to remember that a vulnerability assessment is not a penetration test – a vulnerability assessment determines whether a system may be vulnerable to an attack, while a penetration test determines whether that vulnerability is actually exploitable. (Update: Ivan from Core emailed that they started as consulting in 1996, and the first version of Impact was released in 2002.)

Rather than repeating Nick Selby’s excellent market summary of the three penetration testing tool providers over at IANS, I’ll focus on the changes we’re seeing in the overall market:

  • The market is still dominated by services, with quality ranging from excellent to absolute snake oil. Even using a tool like Core, by far the most user-friendly, you still need a certain skill level to perform a reasonable test.
  • The tools market is growing: Core and Immunity have experienced reasonable growth, with extensive growth of the Metasploit user community.
  • Partnerships between vulnerability assessment vendors and penetration testing solution providers have grown. This was pretty much completely driven by Core until the Metasploit acquisition by Rapid7. Core partners with Tenable, Qualys, nCircle, IBM, Lumension, GFI, and eEye. (Update: Immunity also partners with Tenable; I missed that in my initial research.)
  • Web application vulnerability assessment tools (and services) almost always include some level of penetration testing capability. This is a technology requirement for effective results, since it is extremely difficult to accurately validate many web application vulnerability types without some degree of exploitation. VA tools tend to restrict themselves to avoid damaging the application being tested, and (as with nearly any vulnerability assessment) can normally be run against non-production targets with fewer safeties, in order to produce deeper and more accurate results. Any penetration test worth its salt includes web applications within the scope, and pen testing tools are increasing their support for web application testing.

I expect to see greater blurring of the lines between vulnerability assessment and penetration testing in the web application area, which will spill over into the infrastructure assessment space. We’ll also see increasing demand for internal penetration testing, especially for web applications.

Core will increase its partnerships and integration on the VA side, and could see an acquisition if larger VA vendors (a small list) see growing customer demand for penetration testing – which I do not expect in the short term. The VA market is larger, and if those vendors see pen testing demand from clients, or greater competition from Rapid7, they can leverage their Core partnerships.

Core’s Impact Essential tool is the first to target individuals who aren’t full-time security professionals or penetration testers, and it can run on an automated schedule. While it doesn’t have nearly the depth of the Pro product, it could be interesting for continuous testing. The real question is whether customers perceive it as either reducing their process costs for vulnerability management (via prioritization and elimination of non-exploitable vulnerabilities), or as a replacement for an existing VA solution. If Impact Essential can’t be used to cut overall costs, it will be hard to justify in the current economic environment.

As Nick concluded, Immunity will need to improve their UI to increase adoption beyond organic growth… unless they plan to stay focused on dedicated penetration testers. They should also consider some VA partnerships, as they will otherwise be the only penetration testing tool not partnered or integrated with VA. (Update: I was incorrect; Immunity also partners with Tenable. Apologies for missing that in my initial research.) I agree with Nick: Immunity is most at risk in the short term from the Metasploit commercialization. If the UI improves, Immunity could compete on cost, and some VA vendors might add them as an additional partner.

Rapid7 just jumped from being one of the less-known VA players to a household name for anyone who pays attention to penetration testing. This is a huge opportunity, but not without risks. Metasploit is an awesome tool (I’ve used it since version 1… in the lab), but not yet enterprise class. The speed, usefulness, and usability of its integration will play a major role in its long-term success and in its ability to springboard off the large amount of press and additional name recognition associated with this acquisition. HD Moore also needs to aggressively maintain the Metasploit community, or Rapid7 will lose a large fraction of Metasploit’s value and have to pay staff to replace those volunteers. Quality assurance, of the product as well as the exploits, will also be important to maintain; this could reduce the speed of exploit releases which Metasploit is famous for. Rapid7 also faces risks due to Metasploit’s BSD license: there is nothing to prevent any other vendor from taking and using the code base. This is a common risk when commercializing any free/open source software, and we’ve seen both successes and failures.

Conclusion

Here’s how I see things developing:

  • For infrastructure/non-web applications we will see growing demand for exploit testing automation. The vulnerability assessment vendors will add native capabilities, and Core (and Immunity, if they choose) will add more native VA capabilities and find themselves competing more with VA vendors. My gut feel is that VA vendors (other than Rapid7) will only add the most basic of capabilities, leaving the pen testing vendors with a technical advantage until both markets completely merge. That might not matter to most organizations, which either won’t understand the technology differentiation, or won’t care.
  • There will continue to be a need for in-depth tools to support professional penetration testers. This market will continue to grow, but will not offer the opportunities of the broader, ‘lights-out’ automated side of the market.


Penetration Testing Market Grows and Matures, but Faces Challenges

With last week’s acquisition of Metasploit by Rapid7, I thought it might be a good time to review the penetration testing market and the evolving role of pen testing in the security arsenal. We’ve seen a few different shifts over the past few years in how organizations use pen testing, and I believe this acquisition – combined with changes in enterprise infrastructure – indicates that pen testing is becoming more essential, more closely tied to vulnerability assessment, and generally more mature.

First, a bit of a disclaimer: I’m approaching this as an analyst, not a penetration tester. Although I’ve used many of the tools in demonstrations and the lab, I’ve never worked as a pen tester and don’t claim to have that skill set. I’m fairly sure my BBS hacking experience from the mid-80s doesn’t really count.

There are two important issues we need to focus on when evaluating penetration testing: changes in need and value, and changes in delivery methods and tools.

The value of penetration testing

There is sometimes a debate on the value of penetration testing. Some question its usefulness, since a test by a competent practitioner is pretty much guaranteed to succeed, but highly unlikely to find every exploit path into the organization. More comprehensive tests will find more holes, but at a much higher cost. In some verticals (particularly financials and some types of government organizations) the risk is so high that this is an accepted cost, but for less-aware and less-targeted verticals, or small and mid-sized organizations, a basic vulnerability or program assessment can find more issues at lower cost.

That’s because, until fairly recently, penetration testing was dominated by external service organizations performing broad network and host based assessments. Tests were used to:

  • Scare management into spending more on security.
  • Get a general sense of how hardened the organization was.
  • Find and fix any obvious holes that might stand out either in an untargeted scan/attack, or to an attacker willing to spend a little more time with limited resources.

Basically, a pen test would give you a good sense of how you’d withstand an attack by an opponent at the same skill level as your testing team, for the amount of time/effort you were willing to pay for. Obviously there are a lot of exceptions, and I’m only talking about general market trends. But at this stage, unless you were a big target, a vulnerability assessment (including an internal assessment) would provide sufficient value at a lower cost.

That’s still how many tests are used, but we’ve seen a shift in the past few years due to a few changes in the risk and threat landscape. Specifically:

  • An increase in highly targeted attacks.
  • Greater use of web applications, and more web application attacks (one of the single biggest sources of losses in recent major reported incidents).
  • A market and economic system for taking advantage of exploited data.
  • Evolution of technologies & vulnerabilities, coupled with much shorter exploit creation/adoption cycles than in the past. For example, zero-day attacks were extremely uncommon just 2-3 years ago, but now seem to appear monthly.

The bad guys are making serious money, are going after harder targets, and are taking advantage of our rapid adoption of web technologies. They really have to, since we’ve gotten a lot better at securing our networks and endpoints (yes, we really have, from an overall trends standpoint). These factors change the focus and requirements for penetration testing.

While this is merely one analyst’s opinion, and some of these are very early trends, here’s what I’m seeing:

  • Organizations are increasing the frequency of vulnerability assessments and penetration testing to reduce between-assessment risks. In some cases these are continuous programs.
  • Penetration tests are being more closely tied to vulnerability assessments in order to determine risk and prioritize patches and other defenses.
  • The line between a vulnerability assessment and a penetration test is almost completely blurred for web applications – especially custom web applications.
  • There is greater use of, and need for, penetration testing during development and pre-production phases, since some testing is prohibitively risky on a production system.
  • Penetration testing is being more closely tied to vulnerability assessment on non-web systems to help prioritize. A VA doesn’t necessarily tell you how exploitable a target is, and it certainly won’t tell you what the bad guy can potentially gain. A penetration test helps validate the overall risk and determine the potential impact and losses (not in financial terms – that’s for another day). A vulnerability scan can tell you that system X is vulnerable to attack Y, but you often need to go a step further with a pen test to determine whether data Z is at risk. This is especially true for web applications, but also important for other types of assets.

The overall focus is shifting away from “Can someone break in, and how long will it take them?” to “Where are we most exposed, and what are our potential losses?” Penetration testing is becoming more of a prioritization and secure development tool.

See Part 2 for how these factors change the solutions and the penetration testing market.


Add Anti-Exploitation to Applications You Didn’t Write

This morning Dan Goodin over at The Register dropped me a line to get my take on a new tool from Microsoft that lets you apply anti-exploitation controls to existing applications. Here’s Dan’s article with my quote, and more information directly from Microsoft.

This. Is. Awesome.

Here’s why EMET is so significant. Anti-exploitation technologies are incredibly powerful because they reduce the risk that any vulnerability – even a zero day – can actually be exploited to cause harm. They include a number of techniques, such as Data Execution Prevention (DEP, a software flag enforced at the hardware level), Address Space Layout Randomization (ASLR), and stack protection. As powerful as these techniques are, the software developer needs to design and build their programs to take advantage of them. Most developers don’t do this yet, which makes their software a major potential weak point in host security. This is especially problematic with web browser plugins that are leveraged by web-based client-side exploits.

EMET allows anyone to add certain anti-exploitation protections to any program without recompiling. You can now apply four anti-exploitation techniques to an existing application, no matter where you got it from or who programmed it (see Microsoft’s post for the list and explanation). Since this will break some applications, it’s not for the faint of heart, but EMET has per-process granularity which can help you lock something down while leaving open the bits that break. It’s very cool, and kudos to Microsoft. We still need to see how well it works in the real world, so hopefully we’ll get some field reports soon.
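To make the opt-in point concrete: on Windows a developer normally requests these protections at build time (for example with the /NXCOMPAT and /DYNAMICBASE linker flags) or at runtime. The sketch below is a minimal, hypothetical illustration, not part of EMET, that uses the documented kernel32 SetProcessDEPPolicy call via Python’s ctypes to opt the current process into DEP. EMET’s value is doing this kind of thing for programs whose authors never did.

```python
import ctypes
import sys

PROCESS_DEP_ENABLE = 0x00000001  # documented flag for SetProcessDEPPolicy

def enable_dep_for_this_process():
    """Opt the current process into DEP at runtime (Windows only)."""
    if not sys.platform.startswith("win"):
        raise OSError("SetProcessDEPPolicy is a Windows-only API")
    kernel32 = ctypes.windll.kernel32
    # Returns nonzero on success. The call fails on 64-bit processes,
    # where DEP is always on and cannot be changed, and once set the
    # policy cannot be reverted for the lifetime of the process.
    if not kernel32.SetProcessDEPPolicy(PROCESS_DEP_ENABLE):
        raise ctypes.WinError()

if __name__ == "__main__":
    enable_dep_for_this_process()
    print("DEP permanently enabled for this process")
```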


The First Phishing Email I Almost Fell For

Like many of you, I get a ton of spam/phishing email to my various accounts. Since my email is very public, I get a little more than most people. It’s so bad I use three layers of spam/virus filtering, and still have some messages slip through (one cloud-based filter [Postini, which will probably change soon], one on-premise UTM [Astaro], and SpamSieve on my Mac). If something gets through all of that, I still have some additional precautions I take on my desktop to (hopefully) help against targeted malware. Despite all that, I assume that someday I’ll be compromised, and it will probably be ugly.

This morning I got the first phishing email in a very long time that almost tricked me into clicking. It came from “Administrator” at one of my hosts and read:

Attention! On October 22, 2009 server upgrade will take place. Due to this the system may be offline for approximately half an hour. The changes will concern security, reliability and performance of mail service and the system as a whole. For compatibility of your browsers and mail clients with upgraded server software you should run SSl certificates update procedure. This procedure is quite simple. All you have to do is just to click the link provided, to save the patch file and then to run it from your computer location. That’s all. http://updates.[cut for safety] Thank you in advance for your attention to this matter and sorry for possible inconveniences. System Administrator

Three things tipped me off. First, that system is a private one administered by a friend. While he does send updates like this out, he always signs them with his name. Second, the URL is clearly not really that domain (but you have to read the entire thing). And finally, it leads to an Active Server Pages page, which that administrator never uses since our system is *nix based. But it was early in the morning, I hadn’t had coffee yet, and we often need to upgrade our SSL after a system update on this server, so I still almost clicked on it.

According to Twitter this is a Zbot generated message:

SecBarbie: RT @mikkohypponen ZBot malware being spammed out right now in emails starting “On October 22, 2009 server upgrade will take place” Ignore it.

Thanks Erin!

It’s interesting that despite multiple obvious markers that this was malicious, and despite being very attuned to these sorts of things, I still almost clicked on it. It just goes to show you how easy it is to screw up and make a mistake, even when you’re a paranoid freak who really shouldn’t be let out of the house.
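The second tip-off (reading the whole hostname, not just the familiar-looking prefix) is worth making concrete. Here is a quick sketch using Python’s standard urllib.parse and a made-up lookalike URL, since the real one was cut from the post: the expected domain has to be the host itself or the end of the host, not merely appear somewhere inside it.

```python
from urllib.parse import urlparse

# Hypothetical lookalike link of the sort used in the phishing message;
# the real URL was cut for safety, so this one is illustrative only.
link = "http://updates.example-host.com.evil-phisher.net/ssl/patch.asp"

host = urlparse(link).hostname
expected = "example-host.com"

# The expected domain must be the host itself or a proper suffix of it
# (".example-host.com"); merely containing the string is not enough.
legit = host == expected or host.endswith("." + expected)
print(host, "->", "looks legitimate" if legit else "does NOT belong to " + expected)
```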


Friday Summary – October 16, 2009

All last week I was out of the office on vacation down in Puerto Vallarta. It was a trip my wife and I won in a raffle at the Phoenix Zoo, which was pretty darn cool. I managed to unplug far more than I can usually get away with these days. I had to bring the laptop due to an ongoing client project, but nothing hit and I never had to open it up. I did keep up with email, and that’s where things got interesting.

Before heading down I added the international plan to my iPhone, for about $7, which would bring my per-minute costs in Mexico down from $1 per minute to around $.69 a minute. Since we talked less than 21 minutes total on the phone down there, we lose. For data, I signed up for the 20 MB plan at a wonderfully fair $25. You don’t want to know what a 50 MB plan costs. Since I’ve done these sorts of things before (like the Moscow trip where I could never bring myself to look at the bill), I made sure I reset my usage on the iPhone so I could carefully track how much I used. The numbers were pretty interesting – checking my email ranged from about 500K to 1MB per check. I have a bunch of email accounts, and might have cut that down if I disabled all but my primary accounts. I tried to check email only about 2-3 times a day, only responding to the critical messages (1-4 a day). That ate through the bandwidth so quickly I couldn’t even conceive of checking the news, using Maps, or nearly any other online action. In 4 days I ran through about 14 MB, giving me a bit more space on the last day to occupy myself at the airport.

To put things in perspective, a satellite phone (which you can rent for trips – you don’t have to buy) is only $1 per minute, although the data is severely restricted (on Iridium, unless you go for a pricey BGAN). Since I was paying $3/minute on my Russia trip, next time I go out there I’ll be renting the sat phone. So for those of you who travel internationally and want to stay in touch… good luck.

-rich

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post on Getting Around Vertical Database Security.

Favorite Securosis Posts

  • Rich, Mort, and Adrian: Which Bits Are the Right Bits. We all independently picked this one, which either means it’s really good, or everything else we did this week sucked.
  • Meier: It Isn’t Risk Management If You Can’t Lose.

Other Securosis Posts

  • Where Art Thou, Security Logging?
  • IDM: Reality Sets In
  • Barracuda Networks Acquires Purewire
  • Microsoft Security Updates for October 2009
  • Personal Information Dump

Favorite Outside Posts

  • Rich: Michael Howard’s post on the SMBv2 bug and the Microsoft SDL. This kind of analysis is invaluable.
  • Adrian: Well, the entire Protect the Data series, really.
  • Mortman: Think, over at the New School blog.
  • Meier: Security Intelligence: Attacking the Kill Chain. (Part three of a series on security principles in network defense.)

Top News and Posts

  • Mozilla Launches Plugin Checker. This is great, but needs to be automatic for Flash/QuickTime.
  • Adobe recommends turning off JavaScript in Acrobat/Reader due to major vulnerability. I recommend uninstalling Acrobat/Reader, since it’s probably the biggest single source of cross-platform 0-days.
  • Air New Zealand describes reason for outage. Not directly security, but a good lesson anyway. I’ve lost content in the past due to these kinds of assumptions.
  • Google to send information about hacked Web sites to owners.
  • Details of Wal-Mart’s major security breach.
  • Greg Young on enterprise UTM and Unicorns.
  • Microsoft fixes Windows 7 (and other) bugs.
  • Delta being sued over email hack.
  • 29 bugs fixed by Adobe.
  • California County hoarding data.
  • Mozilla Plugin Check.
  • Paychoice Data Breach.

Blog Comment of the Week

This week’s best comment comes from Rob in response to Which Bits are the Right Bits:

Perhaps it is not well understood that audit logs are generally not immutable. There may also be low awareness of the value of immutable logs: 1) to protect against anti-forensics tools; 2) in proving compliance due diligence, and; 3) in providing a deterrent against insider threats.


It Isn’t Risk Management If You Can’t Lose

I was reviewing the recent Health and Human Services guidance on medical data breach notifications, and it’s clear that HHS either was bought off, or doesn’t understand the fundamentals of risk assessment. Having a little bit of inside experience within HHS, my vote is for willful ignorance. Basically, HHS provides some good security guidance, then totally guts it. Here’s a bit from the source article with the background:

The American Recovery and Reinvestment Act of 2009 (ARRA) required HHS to issue a rule on breach notification. In its interim final rule, HHS established a harm standard: breach does not occur unless the access, use or disclosure poses “a significant risk of financial, reputational, or other harm to individual.” In the event of a breach, HHS’ rule requires covered entities to perform a risk assessment to determine if the harm standard is met. If they decide that the risk of harm to the individual is not significant, the covered entities never have to tell their patients that their sensitive health information was breached.

You have to love a situation where the entity performing the risk assessment for a different entity (the patients) is always negatively impacted by disclosure, and never impacted by secrecy. In other words, the group that would be harmed by protecting you gets to decide your risk. Yeah, that will work. This is like the credit rating agencies, many aspects of fraud and financial services, and more than a few breach notification laws. The entities involved face different sources of potential losses, but the entity performing the assessment has an inherent bias to mis-assess (usually by under-assessing) the risk faced by the target. Now, if everyone involved were altruistic and unbiased this would all work like a charm. Hell, even in Star Trek they don’t think human behavior is that perfect.


Friday Summary – October 2, 2009

I hate to admit it, but I have a bad habit of dropping administrative tasks or business development to focus on the research. It’s kind of like my programmer days – I loved coding, but hated debugging and documentation. But eventually I realize I haven’t invoiced for a quarter, or forgot to tell prospects we have stuff they can pay for. Those are the nights I don’t sleep very well. Thus I’ve spent a fair bit of time this week catching up on things. I still have more invoices to push out, and spent a lot of time editing materials for our next papers, and on my contributions to the next version of the Cloud Security Alliance Guidance report. I even updated our retainer programs for users, vendors, and investors. Not that I’ve sent it to anyone – I sort of hate getting intrusive sales calls, so I assume I’m annoying someone if I mention they can pay me for stuff. Probably not the best trait for an entrepreneur.

Thus I’m looking forward to a little downtime next week as my wife and I head off for vacation. It starts tonight at a black tie charity event at the Phoenix Zoo (the first time I’ll be in a penguin suit in something like 10 years). Then, on Monday, we head to Puerto Vallarta for a 5-day vacation we won in a raffle at… the Phoenix Zoo. It’s our first time away from the baby since we had her, so odds are instead of hanging out at the beach or diving we’ll be sleeping about 20 hours a day. We’ll see how that goes.

And with that, on to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian starts a new series on database security over at Dark Reading with a post on SQL Injection.
  • Rich and Martin on the Network Security Podcast, Episode 168.

Favorite Securosis Posts

  • Rich: Our intern kicks off his analyst career with a post on “realistic security”.
  • David Meier: IDM: It’s A Process
  • David Mortman and Adrian: Rich’s post on tokenization. And honestly, we did not place that strawman in the audience.

Other Securosis Posts

  • SQL Injection Prevention
  • Digital Ant Swarms
  • Database Encryption Benchmarking

Favorite Outside Posts

  • Adrian: On the Mozilla Security Blog: A Glimpse Into the Future of Browser Security. Cutting edge? I dunno, but interesting.
  • Rich: Jack Daniel on the Massachusetts privacy law mess. This is why I never get excited about a coming law until it’s been passed, there’s an enforcement mechanism, and it’s being enforced.
  • Meier: Wireless Network Modded to See Through Walls – this brings a whole new level of fun to the Arduino platform.
  • Mortman: Not about security, but come on, homemade ketchup!

Top News and Posts

  • Slashdot links to a bunch of articles on the rise of cybercrime against business banking accounts (usually by compromising the company’s computer and grabbing their online username/password). Much of the investigative reporting is being done by Brian Krebs at the Washington Post.
  • Competing statistics on phishing. Odds are they’re all wrong, but it’s fun to watch.
  • Judge orders deactivation of a Gmail account after a bank accidentally sends it confidential information. Yet another judge shows a complete lack of understanding of technology.
  • Brian Krebs (again) with the story of how a money mule was recruited. I don’t understand how this person could possibly believe it was legitimate work.
  • Microsoft releases their free Security Essentials antivirus.
  • New malware rewrites bank statements on the fly. This is pretty creative.
  • BreakingPoint on Cisco being a weak link in national infrastructure security.
  • Researchers break secure data storage system. Absolutely no one is surprised.
  • Using BeEF for client exploitation via XSS.
  • New NIST guidance on smart grid security.
  • Wi-Fi Security Paint. But it just doesn’t have the cachet of aluminum foil.
  • Payroll Firm Breached.
  • Does it really matter if we call it Enterprise UTM or UTM or Bunch-O-Security-Stuff in a Box? Seriously, cross $200M per year in revenue, and does anyone care? WTF?
  • Bloggers Cause Wisconsin Tourism Federation to Change Name. (Just because it’s my home state. –Meier)

Blog Comment of the Week

This week’s best comment comes from Slavik in response to SQL Injection Prevention:

Hi Adrian, good stuff. I just wanted to point out that the fact that you use stored procedures (or packages) is not in itself a protection against SQL injection. It’s enough to briefly glance at the many examples on milw0rm to see how even Oracle with their supplied built-in packages can make mistakes and be vulnerable to SQL injections that will allow an attacker to completely control the database. I agree that if you use only static queries then you’re safe inside the procedure, but it does not make your web application safe (especially with databases that support multiple commands in the same call, like SQL Server batches). Of course, if you use dynamic queries, it’s even worse. Unfortunately, there are times when dynamic queries are necessary, and that makes the code very difficult to write securely. The most important advice regarding SQL injection I would give developers is to use bind variables (parameterized queries) in their applications. There are many frameworks out there that encourage such usage, and developers should utilize them.
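Slavik’s bind-variable advice is easy to demonstrate. Here is a minimal sketch using Python’s built-in sqlite3 module (the same principle applies to any database driver): the concatenated query lets the input rewrite the WHERE clause, while the parameterized version treats the same input as an inert literal.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the WHERE
# clause, so the query matches every row instead of none.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: a bind variable treats the payload as a literal value, not SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print("concatenated query returned:", len(vulnerable), "rows")  # 1 row leaked
print("parameterized query returned:", len(safe), "rows")       # 0 rows
```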


Tokenization Will Become the Dominant Payment Transaction Architecture

I realize I might be dating myself a bit, but to this day I still miss the short-lived video arcade culture of the 1980s. Aside from the excitement of playing on “big hardware” that far exceeded my Atari 2600 or C64 back home (still less powerful than the watch on my wrist today), I enjoyed the culture of lining up my quarters or piling around someone hitting some ridiculous level of Tempest.

One thing I didn’t really like was the whole “token” thing. Rather than playing with quarters, some arcades (pioneered by the likes of that other Big Mouse) issued tokens that would only work on their machines. On the upside you would occasionally get 5 tokens for a dollar, but overall it was frustrating as a kid. Years later I realized that tokens were a parental security control – worthless for anything other than playing games in that exact location, they kept the little ones from buying gobs of candy two heartbeats after a pile of quarters hit their hands.

With the increasing focus on payment transaction security, due to the quantum-entangled forces of breaches and PCI, we are seeing a revitalization of tokenization as a security control. I believe it will become the dominant credit card transaction processing architecture until we finally dump our current plain-text, PAN-based system.

I first encountered the idea a few years ago while talking with a top-tier retailer about database encryption. Rather than trying to encrypt all credit card data in all their databases, they were exploring the possibility of concentrating the numbers in one master database, and then replacing the card numbers with “tokens” in all the other systems. The master database would be highly hardened and encrypted, and would keep track of which token matched which credit card. Other systems would send the tokens to the master system for processing, which would then interface with the external transaction processing systems. By swapping out all the card numbers, they could focus most of their security efforts on one system that is far easier to control. Sure, someone might be able to hack the application logic of some server and kick off an illicit payment, but they’d have to crack the hardened master server to get card numbers for any widespread fraud.

We’ve written about it a little bit in other posts, and I have often recommended it directly to users, but I probably screwed up by not pushing the concept on a wider basis. Tokenization solves far more problems than trying to encrypt in place, and while complex, it is still generally easier to implement than the alternatives. Well-designed tokens fit the structure of credit card numbers, which may require fewer application changes in distributed systems. The assessment scope for PCI is reduced, since card numbers are only in one location, which can reduce associated costs. From a security standpoint, it allows you to focus more effort on one hardened location. Tokenization also reduces data spillage, since there are far fewer locations which use card numbers, and fewer business units that need them for legitimate functions, such as processing refunds (one of the main reasons to store card numbers in retail environments).

Today alone we were briefed on two different commercial tokenization offerings – one from RSA and First Data Corp, the other from Voltage. The RSA/FDC product is a partnership where RSA provides the encryption/tokenization technology FDC uses in their processing service, while Voltage offers tokenization as an option to their Format Preserving Encryption technology. (Voltage is also partnering with Heartland Payment Systems on the processing side, but that deal uses their encryption offering rather than tokenization.)

There are some extremely interesting things you can do with tokenization. For example, with the RSA/FDC offering, the card number is encrypted on collection at the point-of-sale terminal with the public key of the tokenization service, then sent to the tokenization server, which returns a token that still “resembles” a card number (it passes the LUHN check and might even include the same last 4 digits – the rest is random). The real card number is stored in a highly secured database up at the processor (FDC). The token is the value stored on the merchant side, and since it’s paired with the real number on the processor side, it can still be used for refunds and such. This particular implementation always requires the original card for new purchases, but only the token for anything else. Thus the real card number is never stored in the clear (or even encrypted) on the merchant side. There’s really nothing to steal, which eliminates any possibility of a card number breach (according to the Data Breach Triangle). The processor (FDC) is still at risk, so they will need to use a different set of technologies to lock down and encrypt the plain-text numbers. The tokens still look like real card numbers, reducing retrofitting requirements for existing applications and databases, but they’re useless for most forms of fraud. This implementation won’t work for recurring payments and such, which they’ll handle differently.

Over the past year or so I’ve become a firm believer that tokenization is the future of transaction processing – at least until the card companies get their stuff together and design a stronger system. Encryption is only a stop-gap in most organizations, and once you hit the point where you have to start making application changes anyway, go with tokenization. Even payment processors should be able to expand their use of tokenization, relying on encryption to cover the (few) tokenization databases which still need the PAN. Messing with your transaction systems, especially legacy databases and applications, is never easy. But once you have to crack them open, it’s hard to find a downside to tokenization.
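To make the format-preserving idea concrete, here is a minimal sketch (my own illustration, not RSA’s, FDC’s, or Voltage’s implementation) of a token vault that keeps the last four digits, randomizes the rest, and retries until the surrogate passes the LUHN check, so the token can drop into fields and applications that expect a card number.

```python
import random

def luhn_valid(number: str) -> bool:
    """Standard LUHN check used to validate card-number formats."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

class TokenVault:
    """Stand-in for the hardened, encrypted token database at the processor."""
    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # Keep the last four digits, randomize the rest, and retry until
        # the surrogate passes LUHN so it still "looks like" a card number.
        while True:
            candidate = "".join(random.choice("0123456789")
                                for _ in range(len(pan) - 4)) + pan[-4:]
            if luhn_valid(candidate) and candidate != pan \
                    and candidate not in self._token_to_pan:
                self._token_to_pan[candidate] = pan
                return candidate

    def detokenize(self, token: str) -> str:
        # Only the payment processor ever performs this lookup.
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token, luhn_valid(token), token[-4:] == "1111")
```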


Stupid FUD: Weird Nominum Interview

We see a lot of FUD on a daily basis here in the security industry, and it’s rarely worth blogging about. But for whatever reason this one managed to get under my skin. Nominum is a commercial DNS vendor that normally targets large enterprises and ISPs. Their DNS server software includes more features than the usual BIND installation, and was originally designed to run in high-assurance environments. From what I know, it’s a decent product. But that doesn’t excuse the stupid statements from one of their executives in this interview, which has been all over the interwebs the past couple of days:

Q: In the announcement for Nominum’s new Skye cloud DNS services, you say Skye ‘closes a key weakness in the internet’. What is that weakness?

A: Freeware legacy DNS is the internet’s dirty little secret – and it’s not even little, it’s probably a big secret. Because if you think of all the places outside of where Nominum is today – whether it’s the majority of enterprise accounts or some of the smaller ISPs – they all have essentially been running freeware up until now. Given all the nasty things that have happened this year, freeware is a recipe for problems, and it’s just going to get worse. …

Q: Are you talking about open-source software?

A: Correct. So, whether it’s Eircom in Ireland or a Brazilian ISP that was attacked earlier this year, all of them were using some variant of freeware. Freeware is not akin to malware, but is opening up those customers to problems. … By virtue of something being open source, it has to be open to everybody to look into. I can’t keep secrets in there. But if I have a commercial-grade software product, then all of that is closed off, and so things are not visible to the hacker. … Nominum software was written 100 percent from the ground up, and by having software with source code that is not open for everybody to look at, it is inherently more secure. … I would respond to them by saying, just look at the facts over the past six months, at the number of vulnerabilities announced and the number of patches that had to made to Bind and freeware products. And Nominum has not had a single known vulnerability in its software.

The word “bullsh**” comes to mind. Rather than going on a rant, I’ll merely include a couple of interesting reference points:

  • Screenshot of a cross-site scripting vulnerability on the Nominum customer portal.
  • Link to a security advisory in 2008. Gee, I guess it’s older than 6 months, but feel free to look at the record of DJBDNS, which wasn’t vulnerable to the DNS vuln.

As for closed source commercial code having fewer vulnerabilities than open source, I refer you to everything from the recent SMB2 vulnerability to pretty much every proprietary platform vs. FOSS comparison in history. There are no statistics to support his position. Okay, maybe if you set the scale to 2 weeks. That might work: “over the past 2 weeks we have had far fewer vulnerabilities than any open source DNS implementation”.

Their product and service are probably good (once they fix that XSS, and any others that are lurking), but what a load of garbage in that interview…


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.