
How To Tell If Your PCI Scanning Vendor Is Dangerous

I got an interesting email right before I ran off on vacation from Mark, on a PCI issue he blogged about:

13. Arrangements must be made to configure the intrusion detection system/intrusion prevention system (IDS/IPS) to accept the originating IP address of the ASV. If this is not possible, the scan should be originated in a location that prevents IDS/IPS interference.

snip…

I understand what the intention of this requirement is. If your IPS is blacklisting the scanner IPs, then ASVs don't get a full assessment, because they are a loud and proud scan rather than a targeted attack… However, blindly accepting the originating IP of the scanner leaves the hosts vulnerable to various attacks. Attackers can simply reference various public websites to see what IP addresses they need to use to bypass those detective or preventive controls.

I figured no assessor would ask their client to open up big holes just to do a scan, but lo and behold, after a little bit of research it turns out this is surprisingly common. Back to the email:

It came up when I was told by my ASV (Approved Scanning Vendor) that I had to exclude their IPs. They also provided me with the list of IPs to exclude. Both [redacted] and [redacted] have told me I needed to bypass the IDS. When I asked about the exposure they were creating, both told me that their "other customers" do this and it isn't a problem for them.

If your ASV can't perform a scan/test without having you turn off your IDS/IPS, it might be time to look for a new one. Especially if their source IPs are easy to figure out. For the record, "everyone else does it" is the dumbest freaking reason in the book. Remember the whole jumping off a bridge thing your mom taught you?
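If you have no choice but to make an exception for the scanner, at least scope it to the scan window instead of leaving a permanent hole. Below is a minimal Python sketch of that idea; the allowlist is faked as an in-memory set and the scan kickoff is a stub, since the real hook depends entirely on your IDS/IPS, so treat it as an illustration rather than a reference to any actual product API.

```python
"""Minimal sketch: scope an IDS/IPS exception to the ASV scan window only.
The allowlist and scan call below are stand-ins (assumptions), not a real API."""
from contextlib import contextmanager
from datetime import datetime

# Stand-in for whatever allowlist mechanism your IDS/IPS actually exposes.
ids_allowlist = set()

@contextmanager
def temporary_scan_exception(asv_networks):
    """Add the ASV source networks only while the scan runs, then remove them."""
    print(f"{datetime.now()}: opening exception for {sorted(asv_networks)}")
    ids_allowlist.update(asv_networks)
    try:
        yield
    finally:
        # The exception is removed even if the scan errors out, so the
        # ASV ranges are never left permanently whitelisted.
        ids_allowlist.difference_update(asv_networks)
        print(f"{datetime.now()}: exception closed")

def run_quarterly_scan():
    """Placeholder for kicking off the quarterly ASV scan."""
    print(f"scanning with allowlist: {ids_allowlist}")

if __name__ == "__main__":
    # 203.0.113.0/24 is documentation address space, not a real ASV range.
    with temporary_scan_exception({"203.0.113.0/24"}):
        run_quarterly_scan()
```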


Design for Failure

A very thought-provoking 'Good until Reached For' post over on Gunnar Peterson's site this week. Gunnar ties together a number of recent blog threads, using the current financial crisis to show how security and risk management best practices were not applied. There are many angles to this post, and Gunnar covers a lot of ground, but the concept that really resonated with me is automation of process without verification.

From a personal angle, having a wife who is a real estate broker and many friends in the mortgage and lending industries, I have been hearing quiet complaints for several years now that buyers were not meeting the traditional criteria. People with $40k a year in household income were buying half-million-dollar homes. A lot of this was attributed to the entire loan approval process being automated in order to keep up with market demand. Banks were automating the verification process to improve throughput and turnaround because there was demand for home loans. Mortgage brokers steered their clients to banks known to have the fastest turnaround, mostly because those were the institutions that were not closely scrutinizing loans. This pushed more banks to streamline further and cut corners for faster turnaround in order to stay competitive; the business was to originate loans, because that is how they made money.

The other common angle was that many mortgage brokers had learned to 'game the system' to get questionable loans through. For example, if a lender was known to have a much higher approval rate for college graduates than non-graduates with equal FICO scores, the mortgage brokers would state that the buyer had a college degree, knowing full well that no one was checking the details. Verification of 'Stated Income' was minimal and thus often fudged. Property appraisers were often pushed to come up with valuations that were not in line with reality, as banks were not independently managing this portion of the verification process. When it came right down to it, the data was simply not trustworthy.

The quote from Ian Grigg is interesting as well. I wonder if the comments are 'tongue in cheek', as I am not sure that automation killed the core skill; rather, automation detached personal supervision in some cases, and in others overwhelmed the individuals responsible because they could not stay competitive and still perform the necessary checks. As with software development, if it comes down to adding new features or being secure, new features almost always win. With competition between banks to make money in this GLBA-fueled land grab, good practices were thrown out the door as an impediment to revenue.

If you look at the loan process and the various checkpoints and verifications that occur along the way, it is very similar in nature to the goal of Sarbanes-Oxley: verification of accounting practices within IT. But rather than protecting investors from accounting oversights, these controls are in place to protect the banks from risk. Bypassing these controls is very disconcerting, as these banks understand financial history and risk exposure better than anyone. I think that captures the gist of why sanity checks in the process are so important: to make sure we are not fundamentally missing the point of the effort and destroying all the safeguards for security and risk going in.
More and more, we will see business processes automated for efficiency and timeliness; however, software not only needs to meet its functional specifications, but its risk specifications as well. Ultimately this is why I believe that securing business processes is an inside-out game. Rather than bolt security and integrity onto the infrastructure, checks and balances need to be built into the software. This concept is not all that far from what we do today with unit testing and building debugging capabilities into software, but it needs to encompass audit and risk safeguards as well. Gunnar's point of 'Design For Failure' really hits home when viewed in the context of the current crisis.
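To make the idea of checks built into the process a bit more concrete, here is a toy Python sketch of an automated approval flow with inline sanity checks. The fields, thresholds, and rules are entirely invented for illustration; the point is only that the verification lives inside the pipeline rather than being bolted on afterward.

```python
"""Toy sketch of 'design for failure': sanity checks built into an automated
approval flow. Field names and thresholds are invented, not from any real system."""
from dataclasses import dataclass

@dataclass
class LoanApplication:
    stated_income: float       # annual income as claimed by the applicant
    verified_income: float     # annual income from independent documentation
    loan_amount: float
    appraised_value: float
    independent_appraisal: bool

def sanity_check(app: LoanApplication) -> list:
    """Return reasons the application should drop out of the automated path
    for human review. An empty list means no flags."""
    flags = []
    if app.verified_income <= 0:
        flags.append("income was never independently verified")
    elif app.stated_income > app.verified_income * 1.10:
        flags.append("stated income exceeds verified income by more than 10%")
    if app.loan_amount > app.verified_income * 4:
        flags.append("loan amount is more than 4x verified income")
    if not app.independent_appraisal:
        flags.append("appraisal was not independently managed")
    if app.loan_amount > app.appraised_value:
        flags.append("loan exceeds appraised value")
    return flags

# Usage: anything flagged falls out of straight-through processing.
app = LoanApplication(stated_income=90_000, verified_income=40_000,
                      loan_amount=500_000, appraised_value=480_000,
                      independent_appraisal=False)
for reason in sanity_check(app):
    print("route to manual review:", reason)
```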


Reminder- There Are No Trusted Sites

Just a short, friendly reminder that there is no such thing as a trusted website anymore, as demonstrated by BusinessWeek. We continue to see trusted websites breached, and rather than leaving a little graffiti on the site, the attackers now use it as a platform to attack browsers. It's one reason I use Firefox with NoScript and only enable the absolute minimum needed to get a site running.


The Fallacy of Complete and Accurate Risk Quantification

Wow. The American taxpayer now owns AIG. Does that mean I can get a cheap rate?

The economic events of the past few days moved the months-long saga of financial irresponsibility past merely disturbing into the realm of truly terrifying. We've leaped past the predictable into a maelstrom of uncertainty edging on a black hole of unknowable repercussions. True, the system could stabilize soon, allowing us to rebuild before the shock waves topple the relatively stable average family. But right now it seems the global economy is so convoluted we're all moving forward like a big herd navigating K2 in a blinding snowstorm with the occasional avalanche.

Yeah, I'm scared. Frightened and furious that, yet again, the groupthink of the financial community placed the future of my family at risk. That we, as taxpayers, will have to bail them out like Chrysler in the 70's and the savings and loan institutions of the 80's. That, in all likelihood, no one responsible for the decisions will be held accountable, and they will all go back to lives of luxury.

One lesson I'm already taking to heart is that these events are disproving the myth of the reliability of risk management in financial services. On the security side, we often hold up financial services as the golden child of risk management. In that world, nearly everything is quantifiable, especially credit and market risk (operational is always a bit more fuzzy). Complex equations and tables feed intelligent risk decisions that allow financial institutions to manage their risk portfolios while maximizing profitability. All backed by an insurance industry, also using big math, big heads, and big computers, capable of accepting and distributing the financial impact of point failures.

But we are witnessing the failure of that system of risk management on an epic scale. Much of our financial system revolves around risk: distributing, transferring, and quantifying risk to fuel the economy. The simplest savings and loan bank is nothing more than a risk management tool. It provides a safe haven for our assets, and in return is allowed to use those assets for its own profitability. Banks make loans and charge interest. They do this knowing a certain percentage of those loans will default, and using risk models they decide which are safest, which are riskiest, and what interest rate to charge based on that level of risk. It's just a form of gambling, but one where they know the odds. We, the banks' customers, are protected from bad decisions through a combination of diversification (spreading the risk, rather than making one big loan to one big customer) and insurance (the FDIC here in the US). It's a system that has failed before, once spectacularly (the Depression) and again in the 80's, but overall it works well. Thus we have empirical proof that even the simplest form of financial risk management can fail.

Fast forward to today. Our system is infinitely more complex than a simple S&L, interconnected in ways that we now know no one completely understands. But we do know some of the failures:

  • Risk ratings firms knowingly under-rated risks to avoid losing the business of financial firms wanting to make those investments.
  • Insurance firms, like AIG, backed these complex financial instruments without fully understanding them.
  • Financial firms themselves traded in these complex assets without fully understanding them.
  • The entire industry engaged in massive groupthink that ignored the clear risks of relying on a single factor (the mortgage industry) to fuel other investments.
  • Lack of proper oversight (government, risk rating companies, and insurance companies) allowed this to play out to an extreme.
  • Reduced compartmentalization in the financial system allowed failures to spread across multiple sectors (possibly a deregulation failure).

Let's tie this back to information security risk management. First, please don't take this as a diatribe against security metrics, of which I'm a firm supporter. My argument is that these events show complete and accurate risk quantification isn't really possible, for two big reasons:

  • It is impossible to avoid introducing bias into the system, even a purely mathematical one. The metrics we choose, how we measure them, and how we rate them will always be biased. As with recent events, individual (or group) desires can heavily influence that bias and the resulting conclusions. We always game the system.
  • Complexity is the enemy of risk, yet everything is complex. It's nearly impossible to fully understand any system worth measuring risk on.

Which leads to my message of the day: quantified risk is no more or less valuable or effective than qualified risk. Let's stop pretending we can quantify everything, because even when we can (as in the current economic fiasco) the result isn't necessarily reliable, and won't necessarily lead to better decisions. I actually think we often abuse quantification to support bad decisions that a qualified assessment would prevent.

Now I can't close without injecting a bit of my personal politics, so stop reading here if you don't want my two sentence rant…

rant I don't see how anyone can justify voting for a platform of less regulation and reduced government oversight. Now that we own AIG and a few other companies, it seems that's just a good way to socialize big business. It didn't work in the 80's, and it isn't working now. I support free markets, but damn, we need better regulation and oversight. I'm tired of paying for big business's big mistakes, and of people pretending that this time it was just a mistake and it won't happen again if we just get the government out of the way and lower corporate taxes. Enough of the fracking corporate welfare! /rant
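To make the bias point concrete, here is a tiny toy calculation of my own (not from the post), using the classic ALE = SLE x ARO formula from security risk assessments. Two analysts with slightly different, equally defensible-looking frequency estimates reach opposite conclusions from the same "quantified" model; the rigor is in the arithmetic, but the answer lives in the assumptions.

```python
"""Toy illustration of assumption bias in quantified risk (invented numbers).
The same control decision is scored twice with slightly different inputs."""

def annual_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    # Classic ALE = SLE x ARO; the formula is trivial, the inputs carry the bias.
    return single_loss * annual_rate

control_cost = 250_000.0   # hypothetical yearly cost of the mitigating control
single_loss = 2_000_000.0  # hypothetical single loss expectancy for the event

for label, annual_rate in [("optimistic analyst", 0.05), ("pessimistic analyst", 0.20)]:
    ale = annual_loss_expectancy(single_loss, annual_rate)
    decision = "buy the control" if ale > control_cost else "skip the control"
    print(f"{label}: ALE = ${ale:,.0f} -> {decision}")
```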


Jay Beale, Kevin Johnson, and Justin Searle Join the Network Security Podcast

Boy, am I behind on my blog posts! I have a ton of stuff to get up and announce, and first up is episode 120 of the Network Security Podcast. Martin and I were joined by Justin Searle, Kevin Johnson, and Jay Beale from Intelguardians. As well as discussing the news stories of the week, the guys were here to tell us about a new LiveCD they've developed, Samurai. It was a great episode with some extremely knowledgeable guys. Full show notes are at netsecpodcast.com.

Network Security Podcast, Episode 120, for September 16, 2008
Time: 43:57


Did They Violate Breach Disclosure Laws?

There's been an extremely interesting, and somewhat surprising, development in the TJX case over the past couple weeks. No, I'm not talking about one of the defendants pleading guilty (and winning the prisoner's dilemma), but the scope of the breach. Based on the news reports and court records, it seems TJX wasn't the only victim here. From ComputerWorld:

Toey was one of 11 alleged hackers arrested last month in connection with a series of data thefts and attempted data thefts at TJX and numerous other companies. Besides TJX and BJ's, the list of publicly identified victims of the hackers includes DSW, OfficeMax, Boston Market, Barnes and Noble, Sports Authority and Forever 21.

Huh. Wacky. I don't seem to recall seeing breach notifications from anyone other than TJX. Since I've been out for a few weeks, I decided to hunt a bit and learned the Wall Street Journal beat me to the punch on this story:

That's because only four of the chains clearly alerted their customers to breaches. Two others – Boston Market Corp. and Forever 21 Inc. – say they never told customers because they never confirmed data were stolen from them. The other retailers – OfficeMax Inc., Barnes and Noble Inc., and Sports Authority Inc. – wouldn't say whether they made consumer disclosures. Computer searches of their Securities and Exchange Commission filings, Web sites, press releases and news archives turned up no evidence of such disclosures. The other companies allegedly targeted by the ring charged last week were: TJX Cos., BJ's Wholesale Club Inc., shoe retailer DSW Inc., and restaurant chain Dave and Buster's Inc. They each disclosed to customers they were breached shortly after the intrusions were discovered.

The blanket excuse from these companies for not disclosing? "We couldn't find any definite information that we'd been breached." Seems to me someone has a bit of legal exposure right now. I wonder if it is greater or less than the cost of notification? And don't forget, thanks to TJX seeing absolutely no effect on their business after the breach, we can pretty effectively kill off the reputation damage argument.


DRM In The Cloud

I have a well-publicized love-hate opinion of Digital Rights Management. DRM can solve some security problems, but it will fail outright if applied in other areas, most notably consumer media protection. I remain an advocate and believe that an Information Centric approach to data security has a future, and I am continually looking for new uses for this model. Still, few things get me started on a rant like someone claiming that DRM is going to secure consumer media, and DRM in the Cloud is predicting just that. New box, same old smelly fish.

Be it audio or video, DRM-secured content can be quite secure at rest. But when someone actually wants to watch that video, things get interesting. At some point in the process the video content must leave its protective shell of encryption, and then digital must become analog. Since this data is meaningless unless someone can view it or use it, at some point this transition must take place! It is at this transition from raw data to consumable media that the content is most vulnerable: the delivery point. DRM and Information Centric Security are fantastic for keeping information secret when the people who have access to it want to keep it secret. They are far less effective when a recipient wants to violate that trust, and they fail outright when that recipient controls the software and hardware used for presentation. I freely admit that if the vendor controls the hardware, the software, and distribution, piracy can be made economically infeasible for the average person. And I can hypothesize about how DRM and media distribution could be coupled with cloud computing, but most of these examples involve using vendor-approved software, in a vendor-approved way, over a reliable high-speed connection, using a 'virtual' copy that never resides in its entirety on the device that plays it. A vendor-approved device helps a whole lot with making piracy more difficult, but DRM in the Cloud claims universal device support, so that is probably out of the question. At the end of the day, someone with the time and inclination to pirate the data will do so. Whether they solder connections onto the system bus or reverse engineer the decoder chips, they can and will get unfettered access, quite possibly just for the fun of doing it!

The business justification for this effort is odd as well. If the goal is to re-create the success of DVD, as stated in the article, then do what DVD did: twice the audio and video quality, far more convenience, at a lower cost. Simple. Those success factors gave DVD one of the fastest adoption curves in history. So why should an "Internet eco-system that re-creates the user experience and commercial success of the DVD" actually recreate the success of DVD? The vendors are not talking about lower price, higher quality, and convenience, so what is the recipe for success? They are talking about putting their content online and addressing how confused people are about buying and downloading! This tells me the media owners think they will be successful if they simply move their stuff onto the Internet and make DRM invisible. If you think moving content onto the Internet alone makes a successful business model, tell me how much fun it would be to use Google Maps without search, directions, or aerial photos; it's just maps taken online, right? Further, I don't know anyone who is confused about downloading; in fact, I would say most people have that pretty much down cold.
I do know lots of people who are pissed off about DRM being an invasive impediment to normal use; or the fact that they cannot buy the music they want; or things like Sony's rootkit and the various underhanded and quasi-criminal tactics used by the industry; and the rising cost of, well, just about everything. Not to get all Friedrich Hayek here, but letting spontaneous market forces determine what is efficient, useful, and desirable, based upon the perceived value of the offering, is a far better way to go about this. This corporate desire to synthetically recreate the success of DVDs is missing several critical elements, most notably anything to make customers happy. The "Cloud Based DRM" technology approach may be interesting and new, but it will fail in exactly the same way, and for exactly the same reasons, as previous DRM attempts. If they want to succeed, they need to abandon DRM and provide basic value to the customer. Otherwise, DRM, along with the rest of the flawed business assumptions, looks like a spectacular way to waste time and money.
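The delivery-point argument is easy to demonstrate with a toy example. The sketch below (Python, using the third-party cryptography package) has nothing to do with any real DRM scheme; it just shows that however well content is protected at rest, the player has to hold the decrypted bytes at the moment it renders them, and whoever controls that player sees exactly what the legitimate viewer sees.

```python
"""Toy illustration of the delivery-point problem; not a real DRM scheme.
Requires the third-party 'cryptography' package (pip install cryptography)."""
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in real DRM, this key is what vendors try to hide
locked = Fernet(key).encrypt(b"the actual media bytes")

# At rest or in transit: ciphertext only, useless without the key.
print("stored form:", locked[:24], "...")

def play(token: bytes, key: bytes) -> None:
    # To 'play' the content, the client must decrypt it. Whoever controls this
    # process (debugger, modified player, hardware probe) sees the plaintext.
    plaintext = Fernet(key).decrypt(token)
    print("rendered form:", plaintext)

play(locked, key)
```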


Tumbleweed Acquired

Sopra Group, through its Axway subsidiary, has acquired Tumbleweed Communications for $143 million. The press release is here. With Tumbleweed's offerings of email security, secure file transport, and certificate validation, there were just not enough tools in that chest to build a compelling story, either for messaging security or for secure transaction processing. It provides just one more example of why Rothman is right on target. Given that Tumbleweed's stock price has been flat for the entirety of this decade, this is probably both a welcome change of scenery from the stockholders' perspective and a sign of new vision on how best to utilize these technology elements. There are lots of fine email/content security products out there having a very difficult time expanding their revenue and market share. Without some of the other pieces most of their competitors have, I am frankly impressed that Tumbleweed made it this far. Dropping this product line into the Axway suite makes sense, as it will add value to most of their solutions, from retail to healthcare, so this looks like a positive outcome.


I Don’t Get It

From the "I really don't get it" files:

First, I read that Google's new Chrome browser and Internet Explorer's modifications are threats to existing advertising models. And this is news? I have been using Firefox with NoScript and other add-ons in a VMware partition that gets destroyed after use for a couple of years now. Is there a difference? What's more, there is an interesting parallel in that both are cleansing browsing history and not allowing certain cookie types, but rather than dub these 'privacy advancements', they are being negatively marketed as 'porn mode'. What's up with that?

Perhaps I should not be puzzled by this Terror database failure, as whenever you put that many programmers on a single project you are just asking for trouble. But I have to wonder what the heck they were doing to fail this badly with the 'Terror Database Upgrade'. This is not a very big database; in fact, 500k names is puny. And they let go 800 people who were just part of the team? Even if they are cross-referencing thousands of other databases and blobs of information, the size of the data is trivial. Who the heck could have spent $500M on this? What, did they write it in Ada? Can't find enough good FoxBASE programmers? For a couple of million, I bet you could hire a herd of summer interns and re-enter the data into a new system if need be. It's a "Terror Database" all right, just not the way they intended it to be.

MIT develops a network analysis tool that "enables managers to track likely hacking routes". Wow, really? Oh, wait, don't we already have a really good tool that does this? Oh yeah, we do, it's Skybox!
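For perspective on the "500k names is puny" comment, here is a trivial, self-contained sketch (my own, with made-up data; it obviously has nothing to do with the actual government system) that loads half a million names into an in-memory SQLite database and runs an indexed lookup. On any commodity laptop this completes in a couple of seconds, which is the scale of data we are talking about.

```python
"""Back-of-the-envelope check that a 500k-name dataset is tiny.
Synthetic data only; unrelated to any real watchlist system."""
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE watchlist (id INTEGER PRIMARY KEY, name TEXT)")

start = time.perf_counter()
conn.executemany(
    "INSERT INTO watchlist (name) VALUES (?)",
    ((f"subject-{i}",) for i in range(500_000)),
)
conn.execute("CREATE INDEX idx_name ON watchlist (name)")
load_time = time.perf_counter() - start

start = time.perf_counter()
hit = conn.execute(
    "SELECT id FROM watchlist WHERE name = ?", ("subject-314159",)
).fetchone()
lookup_time = time.perf_counter() - start

print(f"loaded and indexed 500,000 rows in {load_time:.2f}s")
print(f"indexed lookup returned {hit} in {lookup_time * 1000:.3f}ms")
```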


Demobilized and Remotivated

After a hectic week of being locked away in a warehouse in Denver, I'm sitting in a hotel room in Vancouver getting ready to board a ship to Alaska. Now that it's all over I can give a few more details as to what I was up to last week.

As I've mentioned before, I'm on a federal emergency response team. I won't identify the team, otherwise I'd have to get approval to write about it, but we're one of the groups that's called in to deal with major disasters. Our team is one of a few specialized ones, and aside from regular disaster work we're dedicated to providing medical response to any incidents involving a weapon of mass destruction. We're trained to provide medical care and mass decontamination under pretty much any circumstances (thus all the hazmat training). We've never actually responded to any WMD incidents, and sometimes I wonder how much longer we'll have that mission. Back when the team was created there weren't any significant decontamination resources in the country; even the military only had one domestic team. Now pretty much every fire department has at least some decon capabilities. Still, we're the most capable team out there in terms of resources and capacity, so perhaps we'll survive a little longer. The one place we do get used is during designated National Security Events, like the DNC, where we are pre-positioned in case something happens. While it would take us up to 24 hours to travel to a random incident, when we're pre-positioned we can be there within minutes. Thus I spent a week locked up in a warehouse (and I do mean locked up) just in case something bad happened.

Since we were on the clock, rather than sitting around all day we crammed in a ton of training. Since I'm just an EMT, and no longer a paramedic, it was nice to go through some of the advanced classes I normally don't get access to any more. Nice to know I can still pass Advanced Cardiac Life Support, a class I haven't taken in over 10 years. We covered everything from driving off-road vehicles in Level A hazmat suits, to air monitoring, to disaster medicine, to pediatric advanced life support. Living in a warehouse for a week with 58 other people, spending my 12-hour shifts in training and cleaning bathrooms, was a surprisingly motivating experience. There's really nothing more motivating than working with a well-oiled team under difficult circumstances. While emergency services doesn't pay the bills any more, it definitely feeds the soul.

While on deployment I managed to miss the 1-year anniversary of Securosis, L.L.C. It's hard to believe a full year has passed; I'll write more on that later. We've got some big plans for the coming year, and I'm excited about some of the opportunities in front of us. But right now it's time to sign off for a week and enjoy my first real vacation in I can't remember how long. My wife and I aren't generally the cruising type, but we figured that's the best way to see the glaciers on a tight timeline before they all melt. The site and business are in Adrian's hands as I run off and play with bears and icebergs. I'll be checking in on email, but don't expect a response until I get back unless it's an emergency. I hope you all have as good a week as I'm expecting, and those of you down south please stay safe with all the storms.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.