
Will This Be The Next PCI Requirement Addition?

I’m almost willing to bet money on this one… Due to the nature of recent breaches such as Hannaford, where data was exfiltrated over the network, I strongly suspect we will see outbound monitoring and/or filtering in the next revision of the PCI DSS. For more details on what I mean, refer back to this post. Consider this your first warning.


A Small, Necessary, Legal Change For National Cybersecurity

I loved being a firefighter. In what other job do you get to speed around running red lights, chop someone’s door down with an axe, pull down their ceiling, rip down their walls, cut holes in their roof with a chainsaw, soak everything they own with water, and then have them stop by the office a few days later to give you the cookies they baked for you?

Now, if you try to do any of those things when you’re off duty and the house isn’t on fire, you tend to go to jail. But on duty and on fire? The police will arrest the homeowner if they get in your way. Society has long accepted that there are times when the public interest outweighs even the most fundamental private rights. I think it is long past time we applied this principle to cybersecurity and authorized appropriate intervention in support of national (and international) security.

One of the major problems we have in cybersecurity today is that the vulnerabilities of the many are the vulnerabilities of everyone. All those little unpatched home systems out there are the digital equivalent of burning houses in crowded neighborhoods. Actually, it’s probably closer to a mosquito-infested pool an owner neglects to maintain. Whatever analogy you prefer, in the physical world it’s something someone would come and legally take care of, even if the owner tried to stop them. But we know of multiple cases on the Internet where private researchers (and likely government agencies) have identified botnets or other compromised systems being used for active attacks, yet due to legal fears they can’t go clean the systems. Even when they know they control the botnet and could erase it and harden the hosts, they legally can’t. Our only option seems to be individually informing ISPs, which may or may not take action, depending on their awareness and subscriber agreements.

Here’s what I propose: we alter the law and empower an existing law enforcement agency to proactively clean or isolate compromised systems. This agency would be mandated to work with private organizations who can aid in its mission. Like anything related to the government, it needs specific budget, staff, and authority that can’t be siphoned off for other needs. When a university or other private researcher discovers a botnet they can shut down and clean out, this agency can review and authorize action, and everyone involved is shielded from being sued short of gross negligence. The same agency would also be empowered to work with international (and national) ISPs to take down malicious hosting and service providers (legally, of course). Again, this specific mission must be mandated and budgeted, or it won’t work.

Right now the bad guys operate with impunity, and law enforcement is woefully underfunded and undermandated for this particular mission. By engaging the private sector and dedicating resources to the problem, we can make life a heck of a lot harder for the bad guys. Rather than just trying to catch them, we devote as much or more effort to shutting them down. Call me an idealist.

(I don’t have any digital pics from my firefighting days, so that’s a more recent hazmat photo. The bandana is to keep sweat out of my eyes; it’s not a daily fashion choice.)


Selective Inverse Recency Bias In Security

Nate Silver is one of those rare researchers with the uncanny ability to send your brain spinning off on tangents totally unrelated to the work he’s actually documenting. His work is fascinating more for its process than its conclusions, and often generates new introspection applicable to our own areas of expertise. Take this article in Esquire, where he discusses recency bias as applied to financial risk assessments. Recency bias is the tendency to skew data and analysis towards recent events. In his economic example he compares the risk of a market crash in 2008 using data from the past 60 years vs. the past 20. The difference is staggering: one major downturn every 8 years (using 60 years of data) vs. one every 624 years (using only 20). As with all algorithms, input selection deeply skews output results, with the potential for cataclysmic conclusions.

In the information security industry I believe we just as frequently suffer from selective inverse recency bias: giving greater credence to historical data than to more recent information, while editing out the anomalous events that should drive our analysis more than the steady state. Actually, I take that back; it isn’t just information security, but safety and security in general, and it likely has a deep evolutionary psychological origin. We cut out the bits and pieces we don’t like, while pretending the world isn’t changing.

Here’s what I mean: in security we often assume that what’s worked in the past will continue to work in the future, even though the operating environment around us has completely changed. At the same time, we allow recency bias to intrude and selectively edit out our memories of negative incidents after some arbitrary time period. We assume what we’ve always done will always work, forgetting all those times it didn’t.

From an evolutionary psychology point of view (assuming you go in for that sort of thing) this makes perfect sense. For most of human history, what worked for the past 10, 20, or 100 years still worked well for the next 10, 20, or 100 years. Only relatively recently did the rate of change in society (our operating environment) accelerate enough to produce major fluctuations within a single human lifetime. On the opposite side, we’ve likely evolved to overreact to short-term threats while discounting long-term risks; I doubt many of our ancestors stood contemplating the best reaction to the tiger stalking them in the woods. Our ancestors clearly got their asses out of there at least fast enough to procreate. We tend to ignore long-term risks and environmental shifts, then overreact to short-term incidents.

This is fairly pronounced in information security, where we need to carefully balance historical data with our current environment. Over the long haul we can’t forget historical incidents, yet we also can’t assume that what worked yesterday will work tomorrow. It’s important to use the right historical data in general, and more recent data in specific. For example, we know major shifts in technology lead to major new security threats. We know that no matter how secure we feel, incidents still occur. We know that human behavior doesn’t change: people will make mistakes, and are predictably unpredictable. On the other hand, firewalls now stop only a fraction of the threats we face, application security is just as important as network security, and successful malware uses new distribution channels and propagation vectors.

Security is always a game of balance. We need to account for the past, without assuming its details are useful when defending against specific future threats.
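
To make the input-selection point concrete, here’s a minimal sketch in Python (with hypothetical incident years of my own invention, not Silver’s actual market data) showing how the choice of historical window changes an estimated incident rate:

```python
# Hypothetical 60-year incident history (1 = a major downturn that year).
# The bad years cluster early, so a short window looks deceptively calm.
history = {year: 0 for year in range(1949, 2009)}
for incident_year in (1951, 1958, 1966, 1973, 1980, 1984, 1987, 1988):
    history[incident_year] = 1

def downturn_rate(history, window):
    """Estimate incidents per year using only the most recent `window` years."""
    years = sorted(history)[-window:]
    return sum(history[y] for y in years) / len(years)

for window in (60, 20):
    rate = downturn_rate(history, window)
    if rate > 0:
        print(f"Using {window} years of data: one downturn every {1/rate:.0f} years")
    else:
        print(f"Using {window} years of data: no downturns observed; risk looks like zero")
```

Trim the window to the quiet recent years and the estimated risk collapses to zero; the inverse mistake is keeping the full history while assuming the old base rate still describes a changed environment.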


Adrian Appears on the Network Security Podcast

I can’t believe I forgot to post this, but Martin was off in Chicago for work this week and Adrian joined me as guest host for the Network Security Podcast. We recorded live at my house, so the audio may sound a little different. If you listen really carefully, you can hear an appearance by Pepper the Wonder Cat, our Chief of Everything Officer here at Securosis.

The complete episode is here: Network Security Podcast, Episode 137, February 10, 2009 (Time: 32:50)

Show Notes:

  • Arizona tracking drug prescriptions. I swear that stuff was from my shoulder surgery, officer!
  • Kaspersky hacked. We mostly talk about the response.
  • Metasploit to add an online service component. We can’t wait to learn more about what they are going to offer beyond password cracking.
  • Melissa Hathaway appointed head of White House office of cybersecurity.
  • We talk about some new info we have on the Heartland breach that isn’t in the press yet, so no link.


Recent Data Breaches- How To Limit Malicious Outbound Connections

Word is slowly coming through industry channels that the attackers in the Heartland breach exfiltrated sniffed data via an outbound network connection. While that’s not surprising, I did hear that the connection wasn’t encrypted: the bad guys sent the data out in cleartext (I’ll leave it to the person who passed this on to identify themselves if they want). Rumor from two independent sources is that the bad guys are an organized group out of St. Petersburg (yes, Russia, as cliché as that is). This is similar to a whole host of breaches, including (probably) TJX. While I’m not so naive as to think you can stop all malicious outbound connections, there’s a lot we can do to make life harder on the bad guys.

First, lock down your outbound connections using a combination of current and next-generation firewalls. You should isolate your transaction network so you can enforce tighter controls on it than on the rest of your business network. Traditional firewalls can lock down most outbound ports/protocols, but struggle with nested/stealth channels and all the stuff shoveled over port 80. Next-generation firewalls and web gateways (I hate the name, but don’t have a better one) like Palo Alto Networks or Mi5 Networks can help. Regular web gateways (Websense and McAfee/Secure Computing) are also good, but vary more in their outbound control capabilities and tend to be more focused on malware prevention (not counting their DLP products, which we’ll talk about in a second). The web gateway and next-generation firewalls cover your overall network, while you lock down the transaction side with tighter traditional firewall rules and segmentation.

Next, use DLP to sniff for outbound cardholder data. The bad guys don’t seem to be encrypting, and DLP will alert on cleartext card data in a heartbeat (and maybe block it, depending on the channel). You’ll want to proxy with your web gateway to sniff SSL (only some web gateways can do this) and set the DLP to alert on unauthorized encryption usage. That might be a real pain in the ass if you have a lot of unmanaged encryption outside of SSL. Also, to proxy outbound SSL you need to roll out a gateway certificate to all your endpoints and suppress browser alerts via group policies. I also recommend DLP content discovery to reduce where you have unencrypted stored data (yes, you do have it, even if you think you don’t).

As you’ve probably figured out by now, unless you’re starting from scratch, some of this will be very difficult to retrofit onto an existing network, especially one that hasn’t been managed tightly. Thus I suggest you focus on your processing/transaction paths and start walling those off first. In the long run, that will reduce both your risks and your compliance and audit costs.
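
As an illustration of what that kind of DLP alerting does under the hood, here’s a minimal sketch (my own simplified example, not any vendor’s implementation) that scans an outbound payload for cleartext card numbers using a digit pattern plus a Luhn checksum to cut down false positives:

```python
import re

# Candidate PANs: 13-16 digits, optionally separated by spaces or dashes.
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn mod-10 checksum used to validate card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_cleartext_pans(payload: str) -> list:
    """Return candidate card numbers found in an outbound payload."""
    hits = []
    for match in PAN_PATTERN.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

# Example: a standard test number (4111...) inside fake exfiltrated traffic.
sample = "POST /drop HTTP/1.1\r\n\r\nname=J.Doe&card=4111 1111 1111 1111&exp=0311"
for pan in find_cleartext_pans(sample):
    print("ALERT: possible cleartext PAN in outbound traffic:", pan)
```

Real DLP products layer structured-data fingerprinting, proximity analysis, and channel-aware blocking on top of this, but the core point stands: cleartext card numbers are trivial to spot on the wire, which is exactly why this kind of egress monitoring catches the lazy exfiltration we keep seeing.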


An Analyst Conundrum

Since we’ve jumped on the Totally Transparent Research bandwagon, sometimes we want to write about how we do things over here, and what leads us to make the recommendations we do. Feel free to ignore the rest of this post if you don’t want to hear about the inner turmoil behind our research…

One of the problems we often face as analysts is that we find ourselves having to tell people to spend money (and not on us, which, for the record, we’re totally cool with). Plenty of my industry friends pick on me for frequently telling people to buy new stuff, including stuff that’s sometimes considered of dubious value. Believe me, we’re not always happy heading down that particular toll road. Not only have Adrian and I worked the streets ourselves, collectively holding titles ranging from lowly PC tech and network admin to CIO, CTO, and VP of Engineering, but as a small business we maintain all our own infrastructure and don’t have any corporate overlords to pick up the tab. Besides that, you wouldn’t believe how incredibly cheap the two of us are. (Unless it involves a new toy.)

I’ve been facing this conundrum my entire career as an analyst. Telling someone to buy something is often the easy answer, but not always the best answer. Plenty of clients have been annoyed over the years by my occasional propensity to vicariously spend their money. On the other hand, it isn’t as if all our IT is free, and there really are times you need to pull out the checkbook. Even when free software or services are an option, they might cost you more in the long run, and a commercial solution may come with the lowest total cost of ownership. We figure one of the most important parts of our job is helping you figure out where your biggest bang for the buck is, but we don’t take dispensing this kind of recommendation lightly. We typically try to hammer at the problem from all angles and test our conclusions with friends still in the trenches. And keep in mind that no blanket recommendation is best for everyone and all situations; we have to write for the mean, not the deviation.

But in some areas, especially web application security, we don’t just find ourselves recommending a tool; we find ourselves recommending a bunch of tools, none of which are cheap. In our Building a Web Application Security series we’ve really been struggling to find the right balance and build a reasonable set of recommendations. Adrian sent me this email as we were working on the last part:

“I finished what I wanted to write for part 8. I was going to finish it last night but I was very uncomfortable with the recommendations, and having trouble justifying one strategy over another. After a few more hours of research today, I have satisfied my questions and am happy with the conclusions. I feel that I can really answer potential questions of why we recommend this strategy as opposed to some other course of action. I have filled out the strategy and recommendations for the three use cases as best I can.”

Yes, we ended up having to recommend a series of investments, but before doing that we tried to make damn sure we could justify those recommendations. Don’t forget, they are written for a wide audience and your circumstances are likely different. You can always call us on any bullshit, or better yet, drop us a line to either correct us or ask for advice more fitting to your particular situation (don’t worry, we don’t charge for quick advice – yet).


Do You Use DLP? We Should Talk

As an analyst, I’ve been covering DLP since before there was anything called DLP. I like to joke that I’ve talked with more people who have evaluated and deployed DLP than anyone else on the face of the planet. Yes, it’s exactly as exciting as it sounds.

But all those references were fairly self-selected. They’ve either been Gartner clients or our current enterprise clients, who were (or are) typically looking for help with product selection or dealing with some sort of problem. Many of the rest are vendor-supplied references. This combination skews the conversations towards people picking products, people with problems, or those a vendor thinks will make them look good.

I’m currently working on an article for Information Security magazine on “Real-World DLP”, and I’m hunting for some new references to expand that field a bit. If you are using DLP, successfully or not, and are willing to talk confidentially, please drop me a line. I’m looking for real-world stories, good and bad. If you are willing to go on the record, we’re also looking for good quote sources. The focus of the article is more on implementation than selection, and it will be vendor-neutral.

To be honest, one reason I’m putting this out in the open is to see if my normal reference channels are skewed. It’s time to see how our current positions and assumptions play out on the mean streets of reality. Of course, I’ll be totally pissed if I’ve been wrong this entire time and have to retract everything I’ve ever written on DLP.

Update: Oh yeah, my email address is rmogull (that’s with two ‘L’s) at securosis dot com. Please let me know.


Database Security for DBAs

I think I’ve discovered the perfect weight loss technique: a stomach virus. In 48 hours I managed to lose 2 lbs, which isn’t too shabby. Of course I’m already at something like 10% body fat, so I’m not sure how needed the loss was, but I figure if I just write a book about this and hawk it in some infomercial I can probably retire. My wife, who suffered through 3 months of so-called “morning” sickness, wasn’t all that sympathetic for some strange reason.

On that note, it’s time to shift gears and talk about database security. Or, to be more accurate, to talk about talking about database security. Tomorrow (Thursday, February 5th) I will be giving a webcast on Database Security for Database Professionals. This is the companion piece to the webinar I recently presented on Database Security for Security Professionals. This time I flip the presentation around and focus on what the DBA needs to know, presenting from their point of view. It’s sponsored by Oracle, presented by NetworkWorld, and you can sign up here.

I’ll be posting the slides after the webinar, but not for a couple of months, as we reorganize the site a bit to better handle static content. Feel free to email me if you want a PDF copy.


The Business Justification for Data Security- Version 1.0

We’ve been teasing you with previews, but rather than handing out more bits and pieces, we are excited to release the complete version of the Business Justification for Data Security. This is version 1.0 of the report, and we expect it to continue to evolve as we get more public feedback.

Based on some of that initial feedback, we’d like to emphasize something before you dig in. Keep in mind that this is a business justification tool, designed to help you align potential data security investments with business needs, and to document the justification to make a case with those holding the purse strings. It’s not meant to be a complete risk assessment model, although it shares many traits with risk management tools. We’ve also designed it to be both pragmatic and flexible: you shouldn’t need to spend months with consultants to build your business justification. For some projects you might complete it in an hour; for others, maybe a few days or weeks as you wrangle business unit heads together to force them to help value different types of information.

For those of you who don’t want to read a 38-page paper, we’re going to continue to post the guts of the model as blog posts, and we also plan on blogging additional content, such as more examples and use cases.

We’d like to especially thank our exclusive sponsor, McAfee, who also set up a landing page here with some of their own additional whitepapers and content. As usual, we developed the content completely independently, and it’s only thanks to our sponsors that we can release it for free (and still feed our families). This paper is also released in cooperation with the SANS Institute, will be available in the SANS Reading Room, and we will be delivering a SANS webcast on the topic on March 17th.

This was one of our toughest projects, and we’re excited to finally get it out there. Please post your feedback in the comments; we will credit reviewers who advance the model when we release the next version. And once again, thanks to McAfee, SANS, and (as usual) Chris Pepper, our fearless editor.


The Most Powerful Evidence That PCI Isn’t Meant To Protect Cardholders, Merchants, Or Banks

I just read a great article on the Heartland breach, which I’ll talk more about later. One quote in it really stands out:

“End-to-end encryption is far from a new approach. But the flaw in today’s payment networks is that the card brands insist on dealing with card data in an unencrypted state, forcing transmission to be done over secure connections rather than the lower-cost Internet. This approach avoids forcing the card brands to have to decrypt the data when it arrives.”

While I no longer think PCI is useless, I still stand by the assertion that its goal is to reduce the risks of the card companies first, and only peripherally to reduce the real risk of fraud. Thus cardholders, merchants, and banks carry both the bulk of the costs and the risks, and here’s more evidence of the system’s fundamental flaws. Let’s fix the system instead of just gluing on more layers that are more costly in the end. Heck, let’s bring back SET!
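
For contrast, here’s a minimal sketch of the end-to-end model the quote alludes to. This is my own illustration, using the Python cryptography library’s Fernet as a stand-in for real payment crypto: the card number is encrypted at the point of capture and stays opaque to every system in between, so only the endpoint holding the key ever sees cleartext.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In a real system this key would live in hardware security modules at the
# terminal and at the processor/card brand endpoint, never on hosts in between.
endpoint_key = Fernet.generate_key()
terminal = Fernet(endpoint_key)

# Encrypt at swipe: everything between the terminal and the endpoint
# forwards ciphertext it cannot read.
token = terminal.encrypt(b"4111111111111111")

# A sniffer on the merchant network (a la Heartland) captures only `token`.

# Only the endpoint with the key recovers the card number.
endpoint = Fernet(endpoint_key)
assert endpoint.decrypt(token) == b"4111111111111111"
```

Real payment systems would use derived per-transaction keys rather than one shared symmetric key, but the architectural point is the same: if the card brands accepted ciphertext, a compromised merchant network would have nothing worth sniffing.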


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.