Securosis

Research

Building an SSL Early Warning System

Most security professionals have long understood at least some of the risks of the current 'web' or 'chain' of trust model for SSL security. To quickly recap for those of you who aren't hip-deep in this day to day: your browser knows to trust a digital certificate because it is signed by a root certificate, generated by a certificate authority, which was included in your browser or operating system. You are trusting that your browser manufacturer properly vetted the organizations which own the roots and sign downstream certificates, and that none of them will issue 'bad' certificates. This is not a safe assumption. A new Mac trusts about 175 root certificates, and Apple hasn't audited any of them.

The root certificates are also used to sign certain intermediary certificates, which can then be used to sign other downstream certificates. It's a chain of trust: you trust the roots, along with every certificate they tell you to trust – both directly and indirectly. There is nothing to stop any trusted (root) certificate authority from issuing a certificate for any domain it chooses. It all comes down to their business practices.

To detect a rogue certificate authority, someone who receives a bogus certificate must somehow notice that it differs from the real one. If a certificate isn't signed by a trusted root or intermediary, all browsers warn the user, but they also provide an option to accept the suspicious certificate anyway. That's because many people issue their own certificates to save money – particularly for internal and private systems.

There is a great deal more to SSL security, but this is the core of the problem: we cannot personally evaluate every SSL cert we encounter, so we must trust a core set of root providers to identify (sign) legitimate certs. But the system isn't centralized, so there are hundreds of root authorities and intermediaries, each with its own business practices and security policies.
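The scale of that implicit trust is easy to check for yourself. A minimal sketch using Python's standard library (the exact count and output depend on your OS and its trust store):

```python
import ssl

# Enumerate the root certificates your local trust store ships with.
# Every one of these CAs can vouch for any domain on the Internet.
ctx = ssl.create_default_context()
ctx.load_default_certs()
roots = ctx.get_ca_certs()

print(f"Implicitly trusted root certificates: {len(roots)}")
for cert in roots[:3]:  # peek at a few
    # "subject" is a tuple of RDNs; flatten to a dict for readability
    subject = dict(x[0] for x in cert["subject"])
    print(subject.get("organizationName"), "-", subject.get("commonName"))
```

Each entry printed is an organization whose signature your system accepts without question, for any site.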
More than once we have seen certs fraudulently issued for major brands such as Google and Microsoft, and now we see attackers targeting the certificate authorities themselves. We've seen two roots hacked this year – Comodo and DigiNotar – and both times the hackers issued themselves fraudulent certs that your browser would accept as valid. There are mechanisms to revoke these things, but none of them work well – which is why after major hacks the browser manufacturers such as Microsoft, Mozilla, and Apple have to issue software updates. Research in this area has been extensive, with a variety of exploits demonstrated at recent Black Hat/Defcon conferences. I highly recommend you read the EFF's just-published summary of the DigiNotar issue. It's a mess. One that's very hard to fix, because:

  • Add-on models, such as Moxie Marlinspike's Convergence add-on and the Perspectives project, are a definite improvement, but only help those educated enough to use them (for the record, I think they are both awesome).
  • The EFF's SSL Observatory project helps identify the practices of the certificate authorities, but doesn't attempt to identify breaches or misuse of certificates in real time.
  • DNSSec with DANE could be a big help, but is still nascent and requires fundamental infrastructure changes.
  • Google's DNS pinning in Chrome is excellent for those using that browser (I don't – it leaks too much back to Google). I do think this could be a foundation for what I suggest below, but right now it only protects individual users accessing particular sites – for now, only Google.
  • The Google Certificate Catalog is another great endeavor that's still self-limiting – but again, I think it's a big piece of what we need.
  • The CA market is big business. There is a very large amount of money involved in keeping the system running (I won't say working) as it currently does.
  • The browser manufacturers (at least the 3 main ones, and maybe Google) would all have to agree to any changes to the core model, which is very deeply embedded into how we use the Internet today.
  • The costs of change would not fall only on evil businesses and browser developers, but would be shared among everyone who uses digital certs today – pretty much every website with users.
  • We don't even have a way to measure how bad the problem is. DigiNotar knew they had been hacked, and had been issuing bad certs, for more than a month before telling anyone, and reports claim that these certs were used to sniff traffic in Iran. How many other evil certs are out there? We only notice them when they are presented to someone knowledgeable and paranoid enough to notice, who then reports it. Dan Kaminsky's post shows just a small slice of how complex this all is.

To summarize: we don't yet have consensus on an alternate system, there are many strong motivations to keep the current system despite its flaws, and we don't know how bad the problem is – how many bogus certs have been created, by how many attackers, or how often they are used in real attacks. Imagine how much more confusing this would all be if the DigiNotar hacker had signed certificates in the names of many other certificate authorities.

Internally, long before the current hacks, our former intern proposed this as a research area. The consensus was "Yes, it's a problem and we are &^(%) if a CA issues bad certs". The problem was that neither he nor we had a solution to propose. But I have an idea that could help us scope out the problem. I call it a 'transitional' proposal because it doesn't solve the problem, but could help identify the issues and raise awareness. Call it an "Early Warning System for SSL" (I'd call it "IDS for SSL", but you all would burn my house down). The canary in the SSL mine. Conceptually we could build a browser feature or plugin that serves as a sensor.
It would need a local list of known certificate signatures for
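At its core, a sensor of this sort reduces to comparing the certificate a server presents against a locally stored fingerprint. A minimal sketch, with the pin-list structure and hostnames invented for illustration (a real plugin would also need distribution and update mechanics for the list):

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(hostname: str, der_cert: bytes, pins: dict) -> bool:
    """Compare an observed certificate against the local pin list.

    pins maps hostname -> set of known-good fingerprints. A mismatch
    does not prove an attack -- certs rotate legitimately -- so a
    sensor would report it for central analysis rather than hard-fail
    the connection.
    """
    known = pins.get(hostname)
    if known is None:
        return True  # no pin recorded; nothing to compare against
    return fingerprint(der_cert) in known

# Toy demonstration with an invented 'certificate':
observed = b"...DER bytes of the cert presented by the server..."
pins = {"example.com": {fingerprint(observed)}}
print(check_pin("example.com", observed, pins))           # matches the pin
print(check_pin("example.com", b"different cert", pins))  # mismatch -> report
```

The interesting part is not the comparison, which is trivial, but what you do with the mismatch reports in aggregate – that's what would give us the measurement of the problem we currently lack.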


Payment Trends and Security Ramifications

I write a lot about payment security. Mostly brief snippets embedded in our weekly Incite, but it's a topic I follow very closely and remain deeply interested in. Early in my career I developed electronic wallet and payment gateway software for Internet commerce sites, as well as application-embedded payment options. I have been closely following the technical evolution of this market for over 15 years – back in the days of CyberCash, Paymatech, and JECF. But unlike many of the articles I write, payment security affects more than just IT users – it impacts pretty much everyone. And now is a very good time to start paying attention to the payment space, because we are witnessing more changes, coming faster than ever. Most of the changes are directly attributable to the disruptive nature of mobile devices: they not only offer a convenient new medium for payment, but also threaten to reduce the revenue and brand awareness of the major payment players. So issuing banks, payment processors, card brands, and merchants are all reacting in their own ways. The following are some highlights of the trends I have been tracking:

1) Mobile Wallets: A mobile wallet is basically a payment app that authorizes payments from your phone. The app interacts with the point-of-sale terminal in one of several ways, including WiFi, image readers, and text message exchanges. While the technical approaches vary, payment is cleared without providing the merchant with a physical credit card, or even revealing a credit card or bank account number. Many credit card companies look on wallet apps as a way to 'accelerate' commerce and reduce consumer reluctance to spend money – as credit cards did in the 70s. The flip side is that many card brands are scared by all this. Some are worried about losing their brand visibility – you pay with your phone rather than their branded credit card, and your bill might come from your telephone company without a Visa or Mastercard logo or identification.
Customers can choose a payment application and provider, so churn can increase and customer 'loyalty' is reduced. Furthermore, the app need not use a credit card at all – like a debit card, it could draw funds directly from a bank account. When you think about it, as a consumer, do you really care whether it is Visa or Mastercard or iTunes or PayPal, so long as payment is accepted and you get whatever you're paying for? Sure, you may look for the Visa/Mastercard sticker on the register or door today, but when you and the merchant are both connected to the Internet, do you really care how the merchant processes your payment, so long as they accept your 'card' and your risk is no greater than today? When you buy something using PayPal you draw funds from your bank account, from your credit card, or from your PayPal balance – but you are dealing with PayPal, and your bank or credit card provider is barely visible in the transaction. The threat of diminished revenue and diminished brand stickiness – on top of a global reduction in credit card use – is pushing card brands and payment processors into this market as fast as they can go. From what I see, security is taking a back seat to market share. Most of the wallets I review are designed to work now, minimizing software and hardware PoS changes to ensure near-term availability. Basic passwords and phone-presence validations will be in place, but these systems are designed with a security-second mentality. And just like the Chip & Pin systems I will discuss in a moment, mobile wallets could be more secure than physical cards or reading numbers over the phone, but the payment schemes I have reviewed are all vulnerable to specific threats – which might compromise the transaction, the phone, or the wallet app.

2) Smart Cards: These are the Chip & Pin – or integrated circuit – systems used widely in Europe. The technical standards are specified by the Europay-Mastercard-Visa (EMV) consortium.
Merchants are being encouraged to switch to Chip & Pin with promises of reduced auditing requirements, contrasted against the threat of growing credit card fraud – but merchants know card cloning has been a problem for decades, and it has not been enough to get them to endorse smart cards. I recently discussed the issues surrounding them in Say Hello to Chip and Pin, but I will recap here briefly. Smart cards are really about three things: 1) new revenue opportunities provided by multi-app cards for affinity group sales, 2) moving liability away from the processor and merchant and onto the consumer, and 3) compatibility with the Chip & Pin hardware and software systems used elsewhere in the world. More revenue, less risk, and standardized hardware for multiple markets, which reduces costs through competition. And a merchant that invests in smart card PoS and register software is less likely to invest in payment systems that support mobile phones – creating PoS vendor and merchant lock-in. Once again, smart cards are marketed as advanced security – after all, it is harder to clone a smart card – despite ample proof that Chip & Pin is hackable. This is about revenue and brand: making more and keeping more. Incremental security benefits are just gravy for the parties behind Chip & Pin.

3) Debit Cards: Mobile wallets may change the debit card landscape. If small cash transactions are facilitated through mobile wallet payments, the need for pocket cash diminishes, as does the need to carry a branded debit card! This is important because, since the Fed cut debit card fees in half, many banks have been looking to make up lost revenue by charging debit card 'privilege' fees above and beyond ATM fees. Wells Fargo, for example, makes around 45% of its revenue on fees; this number will shrink under the new law – potentially by billions across the entire industry. Charging $3 a month for debit card usage will push consumers to look for


Incite 9/14/2011: Mike and the Terrible, Horrible, No Good, Very Bad Day

I have been looking forward to this day… well, since the Falcons’ season was abruptly cut short by a rampaging Pack last January. We had a little teaser with that great game Thursday, and although both teams couldn’t lose, having the Saints drop a tough one was pretty okay. I weathered a tumultuous lockout during the offseason. Even a bumpy pre-season for both my teams (NY Giants and ATL Falcons) couldn’t deter my optimism. Pro football started Sunday and I was fired up. The weekend was going swimmingly. I was able to survive a weekend with the Boss away with her girlfriends. With a little help from our friends, I was able to successfully get the Boy to his football practice, XX2 to her softball game, and both girls to dance practice Saturday. I got to watch a bunch of college football (including that crazy Michigan/Notre Dame game). The kids woke Sunday in a good mood when I got them ready for Sunday school. I got some work done and then got ready to watch the games at a friend’s house. Perfect. Until they started playing the games, that is. The Falcons got crushed. Ouch. They looked horrible, and after all the build-up and expectations it was rather crushing. It was terrible for sure. I do this knock-out pool, where you pick one team a week and if they win, you move on. If they lose, you are out. You can’t pick the same team twice, and it’s a lot of fun. But I’ve shown my inability to get even the easiest games right – I have been knocked out in the first week 2 of the last 3 years. Of course, I picked Cleveland because Cincinnati is just terrible, with a new QB and all. Of course Cleveland lost and I’m out. Yeah, that’s horrible. Just horrible. But things couldn’t get worse, right? The Giants were in Washington and they’ve owned the Redskins for years. Until today. The Giants have a ton of injuries, especially on defense. And it showed. They couldn’t stop a high school team. Their offense wasn’t much better. Man, tough day. 
Looking at the schedule, both teams dropping their games this week will hurt. Yup, that's a no good day. And to add insult to injury, as I'm mumbling to myself in the corner, the Boy comes downstairs with his Redskins jersey on. Just to screw with me. Seriously. I know I shouldn't let an 8-year-old get under my skin, especially the day before his birthday, but I wasn't happy. Maybe I'll laugh about it by the time you read this on Wednesday, but as I write this on Sunday night, not so much. I sent him upstairs with a simple choice: he could change his shirt, or I could insert a few metatarsals into his posterior region. It's very bad when I can't even handle a little chiding from my kids. It was a terrible, horrible, no good, very bad day. But putting everything in context, it wasn't that bad. I've got my health. I do what I love. My biggest problems are about getting everything done. Those are good problems to have. An embarrassment of good fortune, and I'll take it. Especially given how many around the world were mourning the loss of not only loved ones but their freedom, as we remember the 9/11 attacks. – Mike

Photo credits: "bad day" originally uploaded by BillRhodesPhoto

Incite 4 U

Design for FAIL: Part of the mantra of most security folks is to think like an attacker. You need to understand your adversary's mindset to be able to defend against their attacks. There is some truth to that. But do you ever wonder why more security folks and technology product vendors don't apply the same level of diligence when designing their products? Mostly because it's expensive, and it's hard to justify changing things (especially the user experience) based on an attack that may or may not happen. Lenny Z makes a good point in his post Design Information Security With Failure in Mind, where he advocates taking lessons from shipbuilders. I'd put airplane manufacturers in the same boat. They intentionally push the limits, because people die if a cascade of failures sinks a ship.
Do your folks do that with IT systems? With security? If not, you probably should. It's not about protecting against a Black Swan, but eliminating as much surprise as we can. That's what we need to do. – MR

Jackass punks: No, this isn't a diatribe against Lulzsec. Imagine you're sitting at home and you start getting weird emails from some self-proclaimed degenerate who starts talking about showing up at your house. And you get emails from motels this person stayed at, holding you responsible for damages. And the person was on the lam from the law. Heck, they even have their own MySpace page. MySpace? Okay, that's probably the first clue this is a scam, or a Toyota marketing campaign gone horribly wrong. Toyota set up a site where people could enter the personal details of their friends (or… anyone), who would then be subject to a serious Ashton Kutcher-style punking. Talk about insanely stupid. As much as we bitch about security marketing, this definitely takes the cake. While I don't think $10M in damages is reasonable, Toyota certainly earned the lawsuit. – RM

Pay-nablement: It's easy to do online payment. The trick is in doing it securely, and I am not so sure that the 'Buyster' payment system has done anything novel for security. Buyster links your phone number to a bank account. To use the service you need to enter your phone number and a password – what could go wrong? In return you get a payment token via a message, which you can then pass to a merchant. This model keeps the credit card number off the merchant site, but they would need to modify their systems to accept the token and link to the Buyster payment
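To make the token model concrete, here is a purely hypothetical sketch of a phone-linked scheme of this general shape. This is not Buyster's actual protocol – every name, field, and parameter below is invented – it just illustrates how a short-lived token can stand in for the card number at the merchant:

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical key held only by the payment service, never the merchant.
SERVER_SECRET = secrets.token_bytes(32)

def issue_token(phone: str, amount_cents: int, ttl: int = 300) -> str:
    """Issue a short-lived payment token (e.g. delivered via SMS).

    The merchant only ever sees this token -- never a card or account
    number -- which is the property the article describes.
    """
    expiry = int(time.time()) + ttl
    payload = f"{phone}|{amount_cents}|{expiry}"
    mac = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{mac}"

def redeem_token(token: str) -> bool:
    """Merchant submits the token; the service checks integrity and expiry."""
    payload, _, mac = token.rpartition("|")
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False  # tampered or forged token
    expiry = int(payload.rsplit("|", 1)[1])
    return time.time() < expiry
```

Even in this toy form the weak link is visible: everything hinges on how the token is requested and delivered, which is exactly where a phone-number-plus-password scheme invites trouble.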


Fact-Based Network Security: In Action

As we wrap up our series on Fact-Based Network Security, let's run through a simple scenario to illustrate the concepts. Remember, the idea is to figure out what on the list will provide the biggest impact for your organization, and then do it. We make trade-offs every day. Some things get done, others don't. That's the reality for everyone, so don't feel bad that you can't get everything done. Ever. But the difference between a successful security practitioner and someone looking for a job is consistently choosing the right things to get done. Some folks intuitively know what's important and seem to focus on those things. They exist – I've met them. They are rock stars, but when you try to analyze what they do, there isn't really a pattern. They just know. Sorry, but you probably aren't one of those folks. So you need a system – you know, a replicable process – to make those decisions. You may not have finely tuned intuition, but you can overcome that by consistently and somewhat ruthlessly getting the most important things done.

Scenario: WidgetCo and the Persistent Attacker

In our little story, you work for a manufacturer and your company makes widgets. They are valuable widgets, representing intellectual property that most nations of the world (friend and foe alike) would love to get their hands on. So you know your organization is a target. Your management gets it – they have a well-segmented network, with firewalls blocking access at the perimeter and another series of enclaves protecting R&D and other sensitive areas. You have IPS on those sensitive segments, as well as some full packet capture gear. Yes, you have a SIEM as well, but you are revisiting that selection. That's another story for another day. Your users are reasonably sophisticated, but human. You run the security operations team, meaning your folks do most of the management and configuration of security devices.
Knowing that you are a target means you need to assume attackers have compromised your network. But your tight egress filtering hasn't shown any significant exfiltration. Your team's task list seems infinite. There are a myriad of ports to open and close on the firewalls to support collaboration with specific business partners. Your company's sales team needs access to a new logistics application so they can update customers on their shipments of widgets. And of course you are a large customer of a certain flavor of two-factor authentication token for all those reps. Your boss lights up your phone almost daily because she gets a lot of pressure to support those business partners. Your VP of Engineering is doing some cool stuff with a pretty famous research institution in the Northeast. The sales guys are on-site and don't know what to tell the customer. And your egress filters just blocked an outbound attempt coming from the finance network, maybe due to the 2FA breach. What do you do? No one likes to be told no, but you can't get everything done. How do you choose?

Get back to the risks

If you think back to how we define risk, it's pretty straightforward. Which assets are most important? Clearly the R&D information, which you know is the target of persistent attackers. Sure, customer information is important (to them), and finance information would make some hedge fund manager another billion or two, but it would be far worse if the designs for the next-generation widget ended up in the hands of a certain nation-state. And when you think about the outcomes that matter to your business, protecting the company's IP is the first and highest priority. It supports your billion-dollar valuation, and senior management doesn't like to screw around with it. Thinking about the metrics that underlie various outcomes, you need to focus on indicators of compromise on those most sensitive networks. So gather configuration data and monitor the logs of those servers.
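The triage happening in this scenario can be reduced to a toy scoring model: weight each pending task by the criticality of the asset it touches, and work the list top-down. The asset weights and task list below are invented purely for illustration:

```python
# Invented asset weights reflecting the scenario: R&D is the crown
# jewels and the known target of persistent attackers.
ASSET_CRITICALITY = {
    "r&d": 10,
    "finance": 7,
    "sales": 4,
    "partner": 3,
}

# (task description, asset touched, urgency 1-10) -- all hypothetical
tasks = [
    ("open firewall ports for a business partner", "partner", 2),
    ("give sales access to the logistics app", "sales", 3),
    ("build the enclave for the research institution", "r&d", 6),
    ("investigate blocked egress from the finance network", "finance", 8),
]

def priority(task):
    """Score = asset criticality x urgency; higher means do it sooner."""
    _, asset, urgency = task
    return ASSET_CRITICALITY[asset] * urgency

for task in sorted(tasks, key=priority, reverse=True):
    print(f"[{priority(task):3}] {task[0]}")
```

With these made-up weights the R&D enclave work (10 × 6 = 60) edges out the finance investigation (7 × 8 = 56) – mirroring the choice in the scenario, and making the trade-off explicit and defensible when the VP of Sales complains.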
Just to be sure (and to be ready if something goes south), you'll also capture traffic on those networks, so you can React Faster and Better if and when an alert fires. It's also a good idea to pay attention to the network topology and monitor for potential exposures, usually opened by a faulty firewall change or some other change error. Your operational system gathers this data on an ongoing basis, so when alerts fire you can jump into action.

Saying No

In our scenario, the R&D networks are most critical, pure and simple. So you task your operations team to provide access to the research institution as the top priority. Not full unfettered access, of course, but access to a new enclave where the researchers will collaborate. After your team makes the changes, you do a regression analysis using your network security configuration management tool, to make sure you didn't open up any holes. No alerts fired and the report came back clean. So you are done at that point, right? We don't think so. Given the importance of this network, you keep a subset of the ops team with their eyes on the monitors collecting server logs, IDS, and full packet capture data. You have also tightened the egress filters, just in case. Sure, some folks get grumpy when they are blocked, but you can't take any chances. Without a baseline of the new traffic dynamics, and without a better feel for the log data, it's hard to know what is normal and what could be a problem. Admittedly this decision makes the VP of Sales unhappy, because his folks can't get access to the logistical information. They're forced to have a support team in HQ pull a report and email it to the reps' devices. It's horribly inefficient, as the VP keeps telling you. But that's not all. You also haven't been able to fully investigate the potential issue on the financial network, although you did install a full packet capture device on that network to start


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.