Pumping Out Noise

I kind of get a chuckle from articles like this recent series at Dark Reading on phishing, spam, and malware. First came the contradictory posts: one reporting that phishing attacks are reaching record highs, while another trumpeted that the king of spam and botnets had been shut down. I don't suppose it dawned on the editors that if the channel that conveys the phishing attacks is "shut down", we are not likely to see "record highs."

Then there is the headline that November 24th, the biggest shopping day of the year, could be a "Black Monday" in terms of malware threats: "PC Tools predicted Nov. 24 would be the most active day for malware threats after analyzing worldwide virus data on more than 500,000 machines and data from last year's holiday season". Then again, maybe not: "And while spam and malware typically surge during the holiday season, this year may actually be a little less active than in years past, says Roger Thompson, chief research officer at AVG Technologies. No one should be especially worried about Nov. 24 …". Um, yeah.

I am all for articles with interesting & topical information, and I understand the need to balance both sides of an issue, but if you are going to use attention-grabbing headlines about some huge threat, you should at least provide some links or direction on what to do about it. Missing from all of this was a singularly relevant piece of useful information that most end users could easily use to help themselves in the battle against phishing and malware attacks, namely: DON'T CLICK EMBEDDED EMAIL LINKS.


Going On The Offense

Brian Krebs posted a follow-up article on the takedown of fraudulent hosting provider McColo (facilitated by his initial reporting last week). If you think all the nasties out there are hosted in Russia or China, you should really read his article. McColo's servers weren't sending out the actual spam; they functioned as the command and control infrastructure for some of the world's biggest botnets.

For those of you who don't know, spam is rarely sent from static servers anymore; it originates from botnets scattered around the world that are directed by their control network to issue once-in-a-lifetime offers for the best possible deals on male enhancement products. (It's nice to know everyone has small weewees and lasts about 8 seconds, since otherwise this stuff wouldn't be so profitable.) Since the spam originates from tens of thousands of different systems, it's nearly impossible to blacklist based on IP address alone. McColo hosted major components of the command infrastructure for spewing out your totally legitimate university diplomas (for a small fee). All those little bots are still out there, but no one is telling them what to do. As Krebs reports, it's only a matter of time before the network owners reassert control and we can get back to purchasing discount medications and finding true love in former Soviet countries.

But what if we took control ourselves and locked out the network? Those servers are still sitting in some building in California, and the ISPs still control the IP addresses. Imagine what we could do if we sent in a research team (or law enforcement) to commandeer all those bots and lock the bad guys out. Yes folks, this is just fantasy today. We don't have the legal framework to execute such a project without creating risk for the good guys involved. Sure, we could use the botnet to patch all the compromised systems, but that's effectively breaking into someone's computer and making changes. I dream of a day when we can more effectively take the fight to the bad guys without worrying about going to jail ourselves. There's absolutely no chance we can continue this fight indefinitely if we're always on the defense. But we're a long way off from having the legal framework and institutions to effectively stand up for ourselves.
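For context on the blacklisting point, here's a minimal sketch of how an IP-based DNS blacklist lookup works, using only the Python standard library. The mechanism is real (reversed octets queried against a DNSBL such as zen.spamhaus.org), but treat it as an illustration rather than a production mail filter: it checks one address at a time, which is exactly what breaks down against tens of thousands of churning bot IPs.

# Sketch of how IP-based spam blacklisting works (and why it struggles when
# spam comes from constantly changing bot IPs): reverse the sender's octets
# and query a DNS blacklist. Standard library only; illustrative, not a
# production mail-filtering setup.
import socket

def is_blacklisted(ip, dnsbl="zen.spamhaus.org"):
    query = ".".join(reversed(ip.split("."))) + "." + dnsbl
    try:
        socket.gethostbyname(query)   # any answer means the IP is listed
        return True
    except socket.gaierror:           # NXDOMAIN: not listed (or lookup failed)
        return False

# Example: checking one sender is easy; keeping up with an entire botnet
# whose members churn by the hour is the hard part.
# print(is_blacklisted("192.0.2.1"))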


Common Applications Are Now The Weakest Link

Edited: I stupidly credited Nate Lawson for Mark Dowd's work with Sotirov. Dumb mistake, and I apologize.

Since my travel is slowing down a bit, I'm finally able to catch up a little on my reading. Two articles this week reminded me of something I've been meaning to talk about. First, Chris Wysopal talks about how we've reached an application security tipping point: the OS vendors are doing such a (relatively) good job of hardening the operating system that it's become easier and more lucrative for attackers to go after common applications. Since nearly everyone online has a reasonably common set of Internet-enabled desktop apps running, targeting them is nearly as effective as targeting the OS. Heck, in some cases these apps are cross-platform, and in a few cases we even see cross-platform exploits. To top it off, many of these applications do not activate or use anti-exploitation features like ASLR or DEP, even when enabling them is little more than a checkbox during the development process. Thus, as we saw during Alex Sotirov and Mark Dowd's demo at Black Hat this year, you can use these applications to totally circumvent host operating system security, even through the web browser.

As Chris states:

"Whoa. Millions of dollars spent on securing the most prevalent piece of software and it could be meaningless? Yes, it's true. Since attackers typically only need one vulnerability, if it isn't in the network, and it isn't in the host configuration, and it isn't in the OS, they will happily exploit a vulnerability in an application."

Mike Andrews also nails it:

"They don't just go away, they go to the next level of lowest-hanging fruit. It might be other vendors (Apple, Adobe, Google, for example) which may not have the focus that Microsoft has been forced to have, or even worse, smaller players like custom websites or things like WordPress, Movable Type, phpBB, vBulletin, etc.: software that has a huge install base, but perhaps not the resources to deal with a full-frontal attack. …snip… Think of it as security's own Hydra: cut off one head (vulns in a major vendor), two grow back (vulns in smaller vendors), and that's a worrying proposition."

As I often say on this blog, we have a term for this… it's called job security.
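Since the post above hinges on applications opting into ASLR and DEP, here's a minimal sketch of how you might audit a Windows binary for those opt-in flags. It assumes the third-party pefile package; the sample path is hypothetical, and the flag values are the documented PE DllCharacteristics bits.

# Sketch: check whether a Windows executable opts into ASLR and DEP.
# Assumes the third-party 'pefile' package (pip install pefile); the
# sample path below is made up.
import pefile

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR opt-in
IMAGE_DLLCHARACTERISTICS_NX_COMPAT = 0x0100     # DEP/NX opt-in

def check_mitigations(path):
    pe = pefile.PE(path, fast_load=True)
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    return {
        "aslr": bool(flags & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE),
        "dep": bool(flags & IMAGE_DLLCHARACTERISTICS_NX_COMPAT),
    }

if __name__ == "__main__":
    print(check_mitigations("C:\\Program Files\\SomeApp\\someapp.exe"))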


An Amusing Use For DLP

Here's a valuable lesson for you college students out there, from Dave Meizlik: if your professor is married to one of the leads at a DLP vendor, think twice before plagiarizing a published dissertation.

"We talked generally for a while about the problem and then it hit me: what if I downloaded a bunch of relevant dissertations, fingerprinted them with a DLP solution, and then sent the girl's dissertation through the system's analysis engine for comparison? Would the DLP solution be able to detect plagiarism? It almost seemed too simple. … So when we got home from lunch I started up my laptop and RDP'd into my DLP system. I had my wife download a bunch of relevant dissertations from her school's database, and within minutes I fingerprinted roughly 50 dissertation files, many of which were a couple hundred pages in length, and built a policy to block transmission of any of the data in those files. I then took her student's dissertation and emailed it from a client station to my personal email. Now, because the system was monitoring SMTP traffic, it sent the email (with the student's paper as an attachment) to the content analysis engine. I waited another second, then impatiently hit send/receive, and there it was: an automated notification telling me that my email had violated my new policy and had been blocked."

I suspect that's one grad student who's going to be serving fries soon…
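For readers curious how partial-document matching like this works under the hood, here's a toy sketch of fingerprinting by hashed word shingles. It is not any vendor's actual engine; the threshold and file names are made up for illustration.

# Toy document fingerprinting in the spirit of DLP partial-document matching:
# hash overlapping word shingles from the reference papers, then measure how
# much of a suspect document overlaps the fingerprint database.
import hashlib

def shingles(text, k=8):
    words = text.lower().split()
    return {
        hashlib.sha1(" ".join(words[i:i + k]).encode()).hexdigest()
        for i in range(max(len(words) - k + 1, 1))
    }

def build_fingerprint_db(reference_texts):
    db = set()
    for text in reference_texts:
        db |= shingles(text)
    return db

def match_ratio(suspect_text, db):
    s = shingles(suspect_text)
    return len(s & db) / max(len(s), 1)

# Usage (hypothetical files): flag anything that overlaps more than 10%.
# db = build_fingerprint_db(open(p).read() for p in dissertation_paths)
# if match_ratio(open("student_paper.txt").read(), db) > 0.10:
#     print("policy violation: possible plagiarism")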


Everything Old Is New Again In The Fog Of The Cloud

Look I understand too little too late
I realize there are things you say and do
You can never take back
But what would you be if you didn't even try
You have to try
So after a lot of thought
I'd like to reconsider
Please
If it's not too late
Make it a… cheeseburger

- "Here I Am", by Lyle Lovett

Sometimes I get a little too smart for my own good and think I can pull a fast one over you fair readers. Sure enough, Hoff called me out over my Cloud Computing Macro Layers post from yesterday. And I quote:

"I'd say that's a reasonable assertion and a valid set of additional 'layers.' They're also not especially unique, and as such, I think Rich is himself a little disoriented by the fog of the cloud because as you'll read, the same could be said of any networked technology. The reason we start with the network and usually find ourselves back where we started in this discussion is because the other stuff Rich mentions is just too damned hard, costs too much, is difficult to sell, isn't standardized, is generally platform dependent and is really intrusive. See this post (Security Will Not End Up In the Network) as an example. Need proof of how good ideas like this get mangled? How about Web 2.0 or SOA, which is, for lack of a better description, exactly what Rich described in his model above: loosely coupled functional components of a modular architecture."

My response is… well… duh. I mean we're just talking about distributed applications, which we, of course, have yet to really get right (although multiplayer games and gambling are close). But I take a little umbrage with Chris's assumptions. I'm not proposing some kind of new, modular structure a la SOA; I'm talking more about application logic and basic security than anything else, not bolt-on tools. Because the truth is, it will be impossible to add these things on after the fact; it hasn't worked well for network security, and it sure as heck won't work well for application security. These aren't add-on products, they are design principles. They aren't all new, but as everyone jumps off the cliff into the cloud they are worth repeating and putting into context for the fog/cloud environment. Thus some new descriptions for the layers. Since it's Friday and all I can think about is the Stone Brewery Epic Vertical Ale sitting in my fridge, this won't be in any great depth:

  • Network: Traditional network security, and the stuff Hoff and others have been talking about. We'll have some new advances to deal with the network aspects of remote and distributed applications, such as Chris' dream of IF-MAP, but we're still just talking about securing the tubes.
  • Service: Locking down the Internet-exposed APIs. We have a fair bit of experience with this and have learned a lot of lessons over the past few years with work in SOA, SOAP, CORBA, DCOM, RPC, and so on. We face three main issues here. First, not everyone has learned those lessons, and we see tons of flaws in implementations and even fundamental design. Second, many of the designers/programmers building these cloud services don't have a clue or a sense of history, and thus don't know those lessons. And finally, most of these cloud services build their own kinds of APIs from scratch anyway, so everything is custom, and full of custom vulnerabilities ranging from simple parsing errors, to bad parameterization, to logic flaws. Oh, and lest we forget, plenty of these services are just web applications with AJAX and such that don't even realize they are exposing APIs. Fun stuff I refer to as "job security".
  • User: This is an area I intend to talk about in much greater depth later on. Basically, right now we rely on static authentication (a single set of credentials to provide access), and I think we need to move more towards adaptive authentication, where we provide an authentication rating based on how strongly we trust that user at that time in that situation, and can then adjust the kinds of allowed transactions. This actually exists today: for example, my bank uses a username/password to let me in, but then requires an additional credential for transactions vs. basic access.
  • Transaction: As with user, this is an area we've underexplored in traditional applications, but I think it will be incredibly valuable in cloud services. We build something called adaptive authorization into our applications and enforce more controls around approving transactions. For example, if a user with a low authentication rating tries to transfer a large sum out of their bank account, a text message with a code is sent to their cell phone as a back-channel confirmation. If they have a higher authentication rating, the transaction value at which that back channel is required goes up. We build policies on a transaction basis, linking in environmental, user, and situational measurements to approve or deny transactions. This is program logic, not something you can add on. (A rough sketch follows at the end of this post.)
  • Data: All the information-centric stuff we expend endless words on in this blog.

Thus this is nearly all about design, with a smidge of framework and shared security services support we can build into common environments (e.g., an adaptive authentication engine or encryption services in J2EE). No loosely coupled functional components, just a simple focus on proper application design with awareness of the distributed environment.

But as Chris also says:

"It should be noted, however, that there is a ton of work, solutions and initiatives that exist and continue to be worked on these topics, it's just not a priority, as is evidenced by how people exercise their wallets."

Exactly. Most of what we write about cloud computing security will be ignored…

… but what would we be if we didn't even try. You have to try.
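To make the Transaction layer above a bit more concrete, here's a minimal sketch of adaptive authorization. The ratings, thresholds, and the send_sms_code callback are all invented for illustration; a real policy would also pull in the environmental and situational measurements mentioned above.

# Sketch of adaptive authorization: an authentication rating (how much we
# trust this session) sets the transaction value at which a back-channel
# confirmation (e.g., an SMS code) is required. Ratings, thresholds, and the
# send_sms_code helper are made up.
STEP_UP_THRESHOLDS = {  # auth rating -> max transfer without confirmation
    "low": 100.00,
    "medium": 1_000.00,
    "high": 10_000.00,
}

def authorize_transfer(auth_rating, amount, send_sms_code):
    """Return 'allow', 'step-up', or 'deny' for a requested transfer."""
    limit = STEP_UP_THRESHOLDS.get(auth_rating, 0.0)
    if amount <= limit:
        return "allow"
    if amount <= limit * 10:
        send_sms_code()          # out-of-band confirmation before approval
        return "step-up"
    return "deny"                # too far outside the trust envelope

# Example: a 'low' rated session moving $5,000 is denied outright, a 'medium'
# session triggers the SMS step-up, and a 'high' session is allowed.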


Friday Summary

I have to say, Moscow was definitely one of the more interesting, and difficult, places I've traveled to. The city wasn't what I expected at all: everywhere you look there's a park or a big green swath down a major street. The metro was the cleanest, most fascinating of any city (sorry, NY). I never waited more than 45 seconds for a car, and many of the stations are full of beautiful Soviet-era artwork. In other ways it was more like traveling to Japan: a different alphabet, the obvious recognition of being an outsider, and English (or any Western European language) is tough to find outside the major tourist areas. Eating was sometimes difficult, as we'd hunt for someplace with an English menu or pictures. But the churches, historical sites, and museums were simply amazing.

We did have one amusing (after the fact) incident. I was out there for the DLP Russia conference, at a Holiday Inn outside of Moscow proper. We requested a non-smoking room, which wasn't a problem. Of course we were in a country where the average 3-year-old smokes, so we expected a little bleed-over. What we didn't expect was the Philip Morris conference being held at the same hotel. So much for our non-smoking room, and don't get me started on the smoking-only restaurant. Then there was my feeble attempt to order room service, which led to the room service guy coming to our room, me pointing at things on the menu, and him bringing something entirely different. Oh well, it was a good trip anyway. Now on to the week's security summary:

Webcasts, Podcasts, Outside Writing, and Conferences:
  • I spoke at a DLP Executive Forum in Dallas (a Symantec event).
  • Over at TidBITS I explain how my iPhone rescued me from a travel disaster.
  • Although I wasn't on the Network Security Podcast this week, Martin interviewed Homeland Security Secretary Chertoff.

Favorite Securosis Posts:
  • Rich: I finally get back to discussing database encryption in Data Encryption, Option 1: Media Protection. (Adrian's follow-up is also great stuff.)
  • Adrian: Examining how hackers work and think as a model for approaching Data Discovery & Classification.

Favorite Outside Posts:
  • Adrian: Nothing has been more fascinating this week than watching the spam stories on Brian Krebs' blog.
  • Rich: Kees Leune offers up advice for budding security professionals.

Top News:
  • One small step at the ISP, one giant leap for the sanity of mankind.
  • 8e6 and Mail Marshal merge.
  • AVG flags a Windows DLL as a virus, scrambles to fix the false positive.
  • Jeremiah Grossman on how browser security evolves.
  • Apple updates Safari. Oh yeah, and Google Chrome and Firefox also issued updates this week.
  • Google also fixes a critical XSS vulnerability in its login page.
  • Microsoft patches a 7-year-old SMB flaw, which leads Chris Wysopal to talk about researcher credit.
  • Researchers hijack the Storm worm. I think it's this kind of offensive computing we'll need to manage cybercrime; you can't win on defense alone.

Blog Comment of the Week: Ted, on my Two Kinds of Security Threats post: "You get more of what you measure. It's pretty easy to measure noisy threats, but hard to measure quiet ones. Fundamentally this keeps quiet threats as a 'Fear' sell, and nobody likes those."


Brian Krebs: Ultimate Spam Filter

First he exposes the Russian Business Network and forces them to go underground; now he nearly single-handedly stops two-thirds of spam. Most tech journalists, myself included, merely comment on the latest product drivel or market trends. Brian is one of the only investigative journalists actually looking at the roots of cybercrime and fraud. On Tuesday, he contacted the major ISPs hosting McColo, a notorious hosting service whose clients include a long roster of cybercriminals. At least one of those ISPs pulled the plug, and until McColo's clients are able to relocate we should enjoy some relative quiet. Congrats Brian, awesome work.

This also shows the only way we can solve some of these problems: through proactive investigation and offense. We can't possibly win if all we do is run around and try to block things on our side.


Comments on Database Media Protection

Rich posted an article on database and media encryption (aka data at rest) earlier this week, discussing the major alternatives for keeping database media safe. Prior to posting it, he asked me to preview the contents for accuracy, which I did, and I think Rich covers the major textbook approaches one needs to consider. I did want to add a little color to this discussion in terms of threat models and motivation, regarding why these options should be considered, as well as some additional practical considerations in the use and selection of encryption for data at rest.

Media Encryption: Typically the motivation behind media encryption is to thwart people from stealing sensitive information in the event that the media is lost or stolen and falls into the wrong hands. If your backup tape falls off the back of a truck, for example, it cannot be read and the information sold. But there are a couple other reasons as well. Tampering with snapshots or media is another problem encryption helps address, as both media and file encryption resist tampering: media encryption for long-term storage, and file/folder level encryption for short-term snapshots. If a malicious insider can alter the most recent backups and then force some form of failure to the system, the altered data would be restored. As this becomes the master record of events, catching and discovering the attack would be very difficult. Encrypted backups with proper separation of duties make this at least difficult, and hopefully impossible. In a similar scenario, if someone were to gain access to the backups, or to the appliance that encrypts and performs key management, they could perform a type of denial of service attack. This might be to erase some piece of history that was recorded in the database, or as leverage to blackmail a company. Regardless of encryption method, redundancy in key management, encryption software/hardware, and backups becomes necessary; otherwise you have simply swapped one security threat for another.

External File or Folder Encryption: If you rely on smaller regional database servers, in bank branch offices for example, theft of the physical device is something to consider. In secured data centers, or where large hardware is used, the odds of this happening are slim. In the case of servers sitting in a rack in an office closet, it is not so rare. This stuff is not stored in a vault, and much in the same way file and folder encryption helps with stolen laptops, it can also help if someone walks off with a server. How and where to store keys in this type of environment needs to be considered as well, for both operational consistency and security.

Native Database Object Encryption: This is becoming the most common method for encrypting database data, and while it might sound obvious to some, there are historical reasons why this trend is only recently becoming the standard. The recent popularity is because database encryption tends to work seamlessly with most existing application processes, and it usually (now) performs quite well, thanks to optimizations by database vendors. As it becomes more common, the attacks will also become more common. Native database encryption helps address a couple specific issues. The first is that archived data is already in an encrypted state, and therefore backups are protected against privacy or tampering problems. Second, encryption helps enforce separation of duties, provided that the access controls, group privileges, and roles are properly set up.
However, there are a number of more subtle attacks on the database that need to be considered. How objects and structures are named, how they are used, and other unencrypted columns can all 'leak' information. While the primary sensitive data is encrypted, if the structures or naming conventions are poorly designed, or compound columns are used, information may be unintentionally available by inference. Also, stored procedures or service accounts that have permissions to examine these encrypted columns can be used to bypass authorization checks and access the data, so both direct and indirect access rights need to be periodically reviewed. In some rare cases I have seen read & write access to encrypted columns left open, as the admin felt that if the data was protected it was safe, but overwriting the column with zeros proved otherwise. Finally, some of the memory-scanning database monitoring technologies have access to cached data in its unencrypted state, so make sure caching does not leave a hole you thought was closed.

Hopefully this went a little beyond specific tools and their usage, and provided some food for thought. You will need to consider how encryption alters your disaster recovery strategy, both with the backups, as well as with the encryption software/hardware and key management infrastructure. It affects the whole data ecosystem, so you need to consider all aspects of the data lifecycle it touches. And some of you may consider the threats I raised above far-fetched, but if you think like a criminal or watch the news often enough, you will see examples of these sorts of attacks from time to time.
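As a small illustration of the media protection and tamper-resistance points above, here's a sketch of authenticated encryption for a database dump, assuming the third-party Python cryptography package. The file names are hypothetical, and in practice the key belongs in your key management infrastructure, not next to the backup media.

# Sketch of media-level protection for a database backup: authenticated
# encryption so a tampered or stolen dump is both unreadable and detectably
# modified. Assumes the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

def encrypt_backup(plaintext_path, encrypted_path, key):
    with open(plaintext_path, "rb") as f:
        token = Fernet(key).encrypt(f.read())
    with open(encrypted_path, "wb") as f:
        f.write(token)

def restore_backup(encrypted_path, key):
    with open(encrypted_path, "rb") as f:
        try:
            return Fernet(key).decrypt(f.read())
        except InvalidToken:
            # Wrong key, or the backup was altered after encryption.
            raise RuntimeError("backup failed integrity check")

# key = Fernet.generate_key()   # escrow via your key manager, not on the backup media
# encrypt_backup("nightly_dump.sql", "nightly_dump.sql.enc", key)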


Healthcare In The Cloud

Google is launching a cooperative program with Medicare of Arizona. They are teaming up to put patient and health care records onto Google servers so they can be shared with doctors, labs, and pharmacies.

"Arizona seniors will be pioneers in a Medicare program that encourages patients to store their medical histories on Google or other commercial Web sites as part of a government effort to streamline and improve health care. The federal agency that oversees Medicare selected Arizona and Utah for a pilot program that invites patients to store their health records on the Internet with Google or one of three other vendors."

From Google and Medicare's standpoint, this seems like a great way to reduce risk and liability while creating new revenue models. Google will be able to charge for some add-on advertisement services, and possibly for data for BI as well. It appears that to use the service you need to provide some consent, but it is unclear from the wording in the privacy policy whether that means the data can be used or shared with third parties by default; time will tell. It does appear that Google does not assume HIPAA obligations, because they are not a health care provider. And because of the voluntary nature of the program, it would be hard to get any satisfaction should the data be leaked and damages result. The same may be true for Medicare: if they are not storing the patient data, there is a grey area of applicability for measures like CA SB 1386 and HIPAA as well. Since Medicare is not outsourcing record storage, unlike other SaaS offerings, they may be able to shrug off the regulatory burden.

Is it just me, or does this kind of look like Facebook for medical records?


1 In 4 DNS Servers Still Vulnerable? More Like 4 in 4

I was reading this article over at NetworkWorld today on a study by a commercial DNS vendor that concluded 1 in 4 DNS servers is still vulnerable to the big Kaminsky vulnerability. The problem is, the number is more like 4 in 4. The new attack method Dan discovered is only slowed by the updates everyone installed; it isn't stopped. Instead of taking seconds to minutes to compromise a DNS server, it can now take hours. Thus if you don't put compensating security in place, and you're a target worth hitting, the attacker will still succeed.

This is a case where IDS is your friend: you need to be watching for the DNS traffic floods that indicate you are under attack. There are also commercial DNS solutions you can use with active protections, but for some weird reason I hate the idea of paying for something that's free, reliable, and widely available.

On that note, I'm going to go listen to my XM Radio. The irony is not lost on me.
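As a rough illustration of the "watch for DNS traffic floods" advice, here's a sketch that counts DNS responses per source over short windows and alerts on spikes, which is what a Kaminsky-style forgery flood looks like from the resolver's side. It assumes the third-party scapy package and privileges to sniff traffic; the interface name and threshold are made up and would need tuning against your own baseline.

# Sketch: alert on abnormal bursts of DNS responses per source IP.
# Assumes the 'scapy' package (pip install scapy) and root privileges.
import time
from collections import defaultdict
from scapy.all import sniff, IP, DNS

WINDOW_SECONDS = 5
ALERT_THRESHOLD = 500          # responses per source per window; tune to your baseline

counts = defaultdict(int)
window_start = time.time()

def watch(pkt):
    global window_start
    if pkt.haslayer(DNS) and pkt[DNS].qr == 1 and pkt.haslayer(IP):  # qr=1: response
        counts[pkt[IP].src] += 1
    if time.time() - window_start >= WINDOW_SECONDS:
        for src, n in counts.items():
            if n > ALERT_THRESHOLD:
                print(f"ALERT: {n} DNS responses from {src} in {WINDOW_SECONDS}s")
        counts.clear()
        window_start = time.time()

sniff(filter="udp port 53", prn=watch, store=0, iface="eth0")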


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.