Securosis Research

Applied Network Security Analysis: Introduction

Today we launch our next blog series, on a topic we believe is critical to success in today’s threat environment: network security analysis. It is a rather grand and nebulous term, but consider this the next step on the path which started with Incident Response Fundamentals and continued with React Faster and Better. The issues are pretty straightforward. We cannot assume we can stop the attackers, so we have to plan for a compromise. The difference between success and failure comes down to how quickly you can isolate the attack, contain the damage, and then remediate the issue. So we build our core security philosophy around monitoring critical networks and devices, facilitating our ability to find the root cause of any attack.

Revisiting Monitor Everything

Back in early 2010, we published a set of Network Security Fundamentals, one of which was Monitor Everything. If you read the comments at the bottom of the post, you’ll see some divergent opinions about what ‘everything’ means to different folks, but nobody really disagrees with broad monitoring as a core tenet of security nowadays. We can thank the compliance gods for that. To understand the importance of monitoring everything, let’s excerpt some research I published back in early 2008 that is still relevant today:

New attacks are happening at a fast and furious pace. It is a fool’s errand to spend time trying to anticipate where the issues are. REACT FASTER first acknowledges that all attacks cannot be stopped. Thus, focus remains on understanding typical traffic and application usage trends and monitoring for anomalous behavior, which could indicate an attack. By focusing on detecting attacks earlier and minimizing damage, security professionals both streamline their activities and improve their effectiveness.

That post then discusses some data sources you can (and should) monitor, including firewalls, IDS/IPS, vulnerability scans, network flows, device configurations, and content security devices. But we are still looking at this data in terms of profiling what has happened and using that as a baseline, then watching for variations beyond tolerance and alerting when you see them. We still fundamentally believe in this approach. It’s clearly the place to start for most organizations, for which any data is more than they have now. But for maturing security organizations, let’s examine why logs are only the start.

Logs are not enough

Back when I was in the SIEM space, it was clear that event logs are a great basis for compliance reporting, because they effectively substantiate implemented controls. As long as the logs are not tampered with, at least. But when you are working to isolate a security issue, the logs tell you what happened, but lack the depth to truly understand how it happened. Isolating a security attack using log data requires having logs from all points in the path between attacker and target. If you aren’t capturing information from the application servers, databases, and applications themselves, visibility is severely impaired. Contrast that against the ability to literally replay an attack from a full network packet capture. You could follow along as the attacker broke your stuff. See the path they took to traverse your network, the exploits they used to compromise devices, the data they exfiltrated, and how they covered their tracks by tampering with the logs. Of course this assumes you are capturing the right network traffic along the attacker’s path, and it might not be feasible to capture all traffic all the time.
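To make that replay idea concrete, here is a minimal sketch of pulling an attacker’s sessions out of a capture by indexing packets by flow. It is purely illustrative – not any vendor’s approach – and assumes the third-party scapy library plus a hypothetical capture.pcap file and suspect host address; a real full packet capture system does this continuously, at line rate, against a purpose-built store.

```python
from collections import defaultdict

from scapy.all import rdpcap, IP, TCP  # assumes scapy is installed

# Read a (hypothetical) capture and index every TCP packet by its flow 4-tuple,
# so an analyst can pull out just the sessions that touched a suspect host.
packets = rdpcap("capture.pcap")
flows = defaultdict(list)
for pkt in packets:
    if IP in pkt and TCP in pkt:
        key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
        flows[key].append(pkt)

SUSPECT = "10.1.2.3"  # placeholder for the compromised host under investigation
for (src, sport, dst, dport), pkts in flows.items():
    if SUSPECT in (src, dst):
        total_bytes = sum(len(p) for p in pkts)
        print(f"{src}:{sport} -> {dst}:{dport}  {len(pkts)} packets, {total_bytes} bytes")
```

Even a toy like this shows why the captured packet stream is so valuable: the flows are there whether or not the attacker scrubbed the logs on the boxes they touched.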
But still, if you implement a full network packet capture sandwich (as we described in the React Faster and Better series), incident responders have much more information to work with. We’ll discuss how to deploy the technology to address some of these issues later in this series. Given that you need additional data to do your job, where should you look?

The Network Doesn’t Lie

For the purposes of this discussion, let’s assume time starts at the moment an attacker gains a foothold in your network. That could be by compromising a device (through whatever means) already on the network, or by having a compromised device connect to the internal network. At that point the attacker is in the house, so the clock is ticking. What do they do next? An attacker will try to move through your environment to achieve their ultimate goal, whether that is compromising a specific data store, adding to their bot army, or whatever. There are about a zillion specific things the attacker could do, and 99% of them depend on the network in some way. They can’t find their targets without using the network to locate them. They can’t attack a target without trying to connect to it, right? Furthermore, even if they are able to compromise the ultimate target, the attackers must then exfiltrate the data. So they will try to use the network to move the data. They need the network, pure and simple. Which means they will leave tracks, but only if you are looking. This is why we favor (as described in React Faster and Better) capturing as much of the full network packet data as possible. Attackers could compromise network devices and delete log records. They could generate all sorts of meaningless traffic to confuse network behavioral analysis. But they can’t alter the packet stream as it’s captured, which becomes the linchpin of the data you’ll collect to perform this advanced network security analysis.

Data is not information

But just collecting data isn’t enough. You need to use the data to draw conclusions about what’s happening in your environment. That requires indexing the data, supplementing and enriching it with additional context, alerting on the data, and then searching through the data to pursue an investigation. This is all technically demanding. Just capturing the full network packet stream requires a purpose-built data store, which does some black magic to digest and index network traffic at sufficient speed to provide usable, actionable information to shorten the exploit window. To get an idea of the magnitude of this challenge, note


Incite 10/19/2011: The Inquisition

As my kids get older, fundamental aspects of their personalities become more apparent. XX1 won the “most inquisitive” award in kindergarten. Five years later, she still asks questions. Lots of questions. A seemingly endless stream of questions. The Inquisition went into full effect when we went to the Falcons game last weekend. This is the 4th year we’ve had tickets, so it is now becoming more about the game than just the ice cream and other snacks. From the opening kickoff until the last touchdown in the 4th quarter, I got a steady stream of questions. Which direction are they going? Why was that a penalty? Who would you root for if the Giants played the Falcons? Should I get Dippin’ Dots or a frozen lemonade? What’s pass interference? Questions, questions, questions. Now I like watching my football. I don’t like to talk during the game. If I do talk, it’s about soft zones, off tackles, and shot plays. I felt myself getting a bit frustrated under the constant barrage of questions. Then I remembered this was my evil plan in the first place. I want the kids to love watching football. I want them to have memories of going to NFL games. If they don’t understand the game they won’t want to go with me, and I’ll be sad. So I spent the time and tried to explain a few easy concepts. Like possessions (the Falcons have the ball, and they are going for that end zone), first downs, and kickoffs/punts. And she started to understand. We had a great time, and that’s what it’s all about. I love that she asks questions. She wants to learn, and when she doesn’t understand, she asks questions until she does. That’s a lot better than nodding like you get it, but being too proud to admit you don’t. This is a great skill, and over time we’ll work on her figuring some stuff out herself and then asking the remaining questions. But I need to keep in mind that it’s a patience thing for me as well. I don’t have all the answers – certainly not to an endless stream of questions. So I have to get better about admitting I don’t know, and (given all the devices in our house) walking up to one of my magic boxes to figure it out. So as uncomfortable as the Inquisition may be at times, I wouldn’t have it any other way. -Mike

Photo credits: “Spanish Inquisition torture method: the rack” originally uploaded by un_owen

Incite 4 U

Love and Hate, version 1: I never met Dennis Ritchie, but he certainly had a major impact on my life. As a computer science undergrad at Cal, UNIX and C were everything to me. I lived with The C Programming Language. Literally. Along with The UNIX Programming Environment – neither book ever left my backpack. They remain on my bookshelf to this day. And I hated both. I thought C was a miserable language. Pointer issues, memory leaks, awkward syntax, hard-to-find information. The FAQ for proper uses of the null pointer was 100 pages long. Clearly a language is screwed if it takes 100 pages to describe just one aspect of it (mostly things you must not do). When I read Creators Admit UNIX, C Hoax, I laughed my ass off because I thought it was true – C was a freakin’ prank. Only years later did a couple of UNIX experts really teach me C and UNIX (no, they don’t teach you languages at Cal – they just assume you’re plugged into The Matrix and will imprint them into your brain as needed). Only when they handed me a copy of Using C on the UNIX System did I really start to admire the power of the C language and the beauty of UNIX’s architecture.
Both are incredibly powerful, and the essence of flexibility and extensibility. Ritchie’s passing is a good time to reflect on their landmark achievements and celebrate all the things that we use almost every minute of the day, which have been built on those two standards. – AL

If there are so many detection techniques, why do they still suck? Lenny Z highlights the current state of the art for malware detection in a couple of articles at SearchSecurity: How antivirus software works: Virus detection techniques, and the deeper Antimalware product suites: Understanding capabilities and limitations, on full endpoint suites. But that raises the question: with all this technology, why can’t we stop the bad guys? Because they have changed tactics. They are going after users and applications, preying on those who haven’t updated their devices and the simply stupid (or ignorant, which is just as good for their purposes). Yes, there are plenty of easy targets. But whining about what we can’t do isn’t my style, so let’s step back to fundamentals. Assume that devices (at least some of them) are compromised. The ones that must not get compromised (high value assets) should be locked down – even if users squeal like stuck pigs. Monitor the hell out of everything, and do some egress filtering and/or DLP monitoring to make sure stuff doesn’t get out. But we cannot assume that anti-malware provides any security. – MR

You already had to do it: There has been a lot of hubbub this week over recent guidance from the SEC that public companies should report on cyber-security risk. This is interesting, because my understanding has been that companies have always been required to report any potentially material risk, no matter its origin. We have seen companies report major breach losses for a while, and in rare cases they report some of the cyber risk (usually as an add-on to a public breach). That the SEC felt they needed to issue additional guidance means that companies were either confused (I don’t see what’s confusing – a loss is a loss), trying to play games, or simply not reporting. So I don’t


Tokenization Guidance: Merchant Advice

The goal of tokenization is to reduce the scope of PCI database security assessment. This means a reduction in the time, cost, and complexity of compliance auditing. We want to remove the need to inspect every system for security settings, encryption deployments, network security, and application security, as much as possible. For smaller merchants tokenization can make self-assessment much more manageable. For large merchants paying third-party auditors to verify compliance, the cost savings is huge. PCI DSS still applies to every system in the logical and physical network associated with the payment transaction systems, de-tokenization, and systems that store credit cards – what the payment industry calls the “primary account number”, or PAN. For many merchants this includes a major portion – if not an outright majority – of information systems under management. The PCI documentation refers to these systems as the “Cardholder Data Environment”, or CDE. Part of the goal is to shrink the number of systems encompassed by the CDE. The other goal is to reduce the number of relevant checks which must be made. Systems that store tokenized data, even if not fully isolated logically and/or physically from the token payment gateway and servers, need fewer checks to ensure compliance with PCI DSS.

The ground rules

So how do we know when a server is in scope? Let’s lay out the ground rules, first for systems that always require a full security analysis:

  • Token server: The token server is always in scope if it resides on premise. If the token server is hosted by a third party, the calling systems and the API are subject to inspection.
  • Credit card/PAN data storage: Anywhere PAN data is stored, encrypted or not, is in scope.
  • Tokenization applications: Any application platform that requests tokenized values, in exchange for the credit card number, is in scope.
  • De-tokenization applications: Any application platform that can make de-tokenization requests is in scope.

In a nutshell, anything that touches credit cards or can request de-tokenized values is in scope. It is assumed that administration of the token server is limited to a single physical location, and not available through remote network services. Also note that PAN data storage is commonly part of the basic token server functionality, but they are separated in some cases. If PAN data storage and the token generation server/services are separate but in-house (i.e., not provided as a service) then both are in scope. Always.

Determining system scope

For the remaining systems, how can you tell whether tokenization will reduce scope, and by how much? Here is how to tell: The first check to make for any system is for the capability to make requests to the token server. The focus is on de-tokenization, because it is assumed that every other system with access to the token server or its API is passing credit card numbers and is fully in scope. If this capability exists – through user interface, programmatic interface, or any other means – then PAN is accessible and the system is in scope. It is critical to minimize the number of people and programs that can access the token server or service, both for security and to reduce scope. The second decision concerns use of random tokens. Suitable token generation methods include random number generators, sequence generators, one-time pads, and unique code books. Any of these methods can create tokens that cannot be reversed back to credit cards without access to the token server.
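To illustrate why a random token is useless without the token server, here is a toy sketch of the basic idea. It is not a PCI-compliant design and not any vendor’s implementation: the in-memory dictionary, the token format, and the function names are all assumptions made for this example; a real token server wraps the same two operations in durable encrypted storage, strong authentication, and audit logging.

```python
import secrets

# Toy token vault: the mapping below is the ONLY link between token and PAN.
# Without access to it, a random token reveals nothing about the card number.
_vault = {}

def tokenize(pan: str) -> str:
    """Return a random surrogate for the PAN, keeping the last four digits so
    downstream systems (receipts, customer service lookups) still function."""
    token = "tok_" + secrets.token_hex(8) + "_" + pan[-4:]
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Any system that can call this handles PAN, and so stays in PCI scope."""
    return _vault[token]

t = tokenize("4111111111111111")
print(t)               # random surrogate, safe to store in (for example) order history
print(detokenize(t))   # only callers with access to the vault can recover the PAN
```

The scope rules above fall directly out of this structure: a system holding only tokens has nothing reversible, while any system that can reach the de-tokenization path is, for assessment purposes, handling card data.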
I am leaving hash-based tokens off this list because they are relatively insecure (reversible): providers routinely fail to salt their tokens, or salt with ridiculously guessable values (e.g., the merchant ID). Vendors and payment security stakeholders are busy debating encrypted card data versus tokenization, so it’s worth comparing them again. Format Preserving Encryption (FPE) was designed to secure payment data without breaking applications and databases. Application platforms were programmed to accept credit card numbers, not huge binary strings, so FPE was adopted to improve security with minimum disruption. FPE is entrenched at many large merchants, who don’t want the additional expense of moving to tokenization, and so are pushing for acceptance of FPE as a form of tokenization. The supporting encryption and key management systems are accessible – meaning PAN data is available to authorized users – so FPE cannot remove systems from the audit scope. Proponents of FPE claim they can segregate the encryption engine and key management, and that it is therefore just as secure as random numbers. But the premise is a fallacy. FPE advocates like to talk about logical separation between sensitive encryption/decryption systems and other systems which only process FPE-encoded data, but this is not sufficient. The PCI Council’s guidance does not exempt systems which contain PAN (even encrypted using FPE) from audit scope, and it is too easy for an attacker or employee to cross that logical separation – especially in virtual environments. This makes FPE riskier than tokenization. Finally, strive to place systems containing tokenized data outside the “Cardholder Data Environment” using network segmentation. If they are in the CDE, they need to be in scope for PCI DSS – if for no other reason than because they give an attacker a point of access to other card storage, transaction processing, and token servers. Configure firewalls, network settings, and routing to separate CDE systems from non-CDE systems which don’t directly communicate with them. Systems that are physically and logically isolated from the CDE, provided they meet the ground rules and use random tokens, are completely removed from audit scope. Under these conditions tokenization is a big win, but there are additional advantages…

Determining control scope

As above, a fully isolated system with random tokens means you can remove the system from scope. Consider the platforms which have historically stored credit card data but do not need it: customer service databases, shipping & receiving, order entry, etc. This is where you can take advantage of tokenization. For all systems which can be removed from audit scope, you can


Database Security Market Sizing and Guesstimation

I read Ericka Chickowski’s Dark Reading post on Database Security Market Growth today. While I generally agree with the estimated rate of growth, I am mystified by the market sizing. Where did this number come from? Is $755M wrong? I don’t know. But I am certain nobody else does either. I get asked about the size of the database security market every month. Simple question, impossible answer. Why? For starters, even if you agree on what constitutes database security, you need to distinguish between database-specific products and general-purpose products with some database capabilities. Once you choose the ground rules for what’s in and what’s out, it’s basically a bunch of guesses about what vendors are earning. Understanding how much money a specific product earns is difficult with small firms that only have one or two products, and giant firms bundle many products, services, and maintenance together – making it impossible to assess what goes where. Was that money for the database licenses you purchased, the app and middleware stack, the user training, the professional services for customization, or the security? For an example of what I mean, let’s look at these facets in more depth:

Security Technology: What technologies comprise DB security? What’s really in and what’s out? I consider encryption, access control, database assessment, database activity monitoring, auditing, label security, and masking as parts of the database security market. Sometimes I throw patch management in, but it’s really a more general process. Some of these are built into the database but most are third party add-ons. Your first step is to set the ground rules: which technologies will you include?

Application of Security: You need to ask, “Is it really database security, or is it generic security applied to databases?” There are many assessment tools on the market, each with limited database capabilities. However, because they don’t log into the database with database credentials, they cannot perform a thorough scan, so these products are not database assessment tools. Encryption is similar to patch management, in that the tools can be applied to more than just databases: ciphers applied to data at the application layer are not considered database encryption, but products at the OS layer are. You need to pull a large percentage of the overall products from your market sizing analysis to reflect reality. It’s like a giant series of Venn diagrams – each security technology forms an overlapping bubble, and part of each applies to databases. You need to determine their intersection.

Platform Inclusion: What do we mean when we talk about databases? Is it just the major relational platforms? Do you consider ‘open’ platforms like MySQL, PostgreSQL, and Derby? Do you include Teradata and mainframe databases? Do you include flat-file ‘databases’ and NoSQL datastores? The lines between relational and non-relational, and between non-relational database security and file security, are becoming increasingly blurry. The trend is toward a data services market, and the term ‘database’ is gradually losing its meaning. This is important because the relevant security technologies are increasingly diverse – file security tools, for example, might now be the best way to secure a flat-file database.

Revenue Calculations: To calculate revenue from any given vendor you have to figure it out the hard way: ask new customers. Vendors lie about their revenue. Even the ones who have nothing to hide still do it.
You can’t believe what they tell you. Ever. Small companies are bad, and large ones are even worse. For example, what portion of a deal was actually for DB security, and how much was for totally different stuff? Large firms frequently tell me about their million-dollar security sales, but I later find out the price was negotiated for database licenses, with auditing thrown in for free. And it’s very hard to contradict them until you speak with customers. You can’t tell from the balance sheet. Software, tools, and services get bundled at a single price, so you don’t really know the percentage spent on security unless the customer estimates it for you. Security sales reps will tell you the entirety of such a deal was for database security, which means you cannot pay attention to what they say without corroboration.

Estimating market size is a series of guesses, all added together, which is why we stopped doing it. When a market is small and the vendors are still private, you can get a very good idea of the revenue picture. For example, before the big vendors jumped into DAM, we had an excellent idea of that market’s size. If you are reading market size projections for database security, keep in mind that whoever is making them is guessing, wrong, or both. Our point is that you need a really good reason to even ask this question. If you are looking at market sizing and trends in order to predict revenue, modify your career path, or justify expenses, you need to accept that you just won’t have any accuracy. If you are looking to make investments in a particular firm, understand that some product verticals grow at 20% overall, but the majority of that growth comes from one or two firms – the rest grow at 8-10%. If you are trying to figure out specific product lines, you will need to dig in and do some serious homework to get answers with any meaning.
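To see why “a series of guesses, all added together” cannot produce a precise number, consider a toy calculation. The vendors and revenue ranges below are entirely made up; the point is only that when every input is a wide guess, the total inherits a spread that makes any single headline figure look arbitrary.

```python
# Hypothetical low/high guesses (in $M) for database security revenue at made-up vendors.
guesses = {
    "Vendor A": (40, 120),   # bundled deals make attribution a coin flip
    "Vendor B": (15, 60),
    "Vendor C": (5, 25),
    "Vendor D": (80, 200),   # "million-dollar security sale" vs. auditing thrown in free
}

low = sum(lo for lo, _ in guesses.values())
high = sum(hi for _, hi in guesses.values())
print(f"Market size: somewhere between ${low}M and ${high}M")
# Roughly a 3x spread -- which is why any precise figure is a guess, wrong, or both.
```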


Friday Summary: October 14, 2011

It started with a corn chip. I was eating corn chips – a fresh bag – and they tasted like hell. I had a tomato and some strawberries, thinking eating healthy would be good, but my body said otherwise. They made me feel poorly. I was in the airport waiting for my flight to the Bay Area, thinking “What the hell are they putting in this stuff – it’s a freakin’ corn chip?” I anticipated that my trip would be emotionally exhausting, and that I would be run down from all the work, but I ended up feeling better than I had in years. I mean, after a couple days, I was feeling really good. Part of it was being able to see good friends, and some of it was a few days without working. But it was more than that – after a week I realized that I had been eating really well and it was making me feel much better. The food we ate in Berkeley was largely locally grown fruits and vegetables, organic meats and grains. Every time I went to dinner at someone’s house it was food out of the garden. Well, the Scotch was not locally crafted, but everything else was. I mentioned this over dinner one night and I got an earful. My friends went into an entire story about how their farm animals can tell the difference between genetically engineered corn and the ‘real’ stuff, and tend to leave it uneaten. In fact it was the health of their pets – the average lifespan of their dogs extended by 35%, with fewer cases of cancer – that convinced my hosts to go on a natural food diet. They told me they had gone vegetarian, but later realized it was not a meat vs. no meat issue, but a crap food issue. They went through the process of finding raw foods and bought a place where they could have a year-round garden. They are eating meat again – but it took a long time to find food that was not totally bastardized. I have to say that their chicken tastes like chicken. That may sound stupid, but the slow degradation from eating grocery store or fast food chicken keeps you from realizing just how far off what’s being sold to you is. Taste and texture. The real stuff cooks differently as well, and it just tastes great! I guess I always knew home grown tasted better – but I was not aware how much. The pork was white meat and tasted great. The eggs tasted nothing like what I get in the supermarket. None of the bottled sauces, syrups, or seasonings – it was all homemade. I had known there is a huge difference in produce – especially tomatoes – as you can’t find a tomato that tastes like anything but water at mainstream grocery stores. But between the engineering on the tomato varieties so they remain firm for shipment, and the fact that they’re picked weeks before they are ripe and instead turned red with gas… no wonder the taste is absent. Good food makes eating more fun. This resulted in a very weird experience when I got back home – walking through the grocery store, I felt as if half the stuff on the shelves was poisonous. In fact I had trouble finding anything I wanted to eat – even if you read the label, you can’t determine what’s in these ‘products’, but it’s likely not food. And for those who know me, given my metabolism, getting enough food is usually a problem. Trying to eat healthy was compounding the issue. So I decided to do something about it, and jump in with both feet. In fact I am making up for lost time. I’ve started driving 25 miles down to the good grocery stores to get better food.
I have decided to grow more food, and in the last few days discovered fruit trees that thrive in desert heat; I ordered a half dozen peach, apricot, aprium, apple, and almond trees. I purchased Valencia ‘summer’ oranges to fill the summer gap in citrus – I already have 9 trees that ripen at different times of the year. I have replaced most of the sugar in the house with unprocessed stuff, stocking up on honey and maple syrup. I am researching beehives – I have space way out back in mind. I have replaced all the flour in the house with different grades of whole wheat and buckwheat flour. I have designed a garden enclosure – in CAD – to keep the million-and-one different varieties of critters out of the garden I will be building shortly. I have found seeds for vegetables that thrive in the desert heat. I am looking for someone in Phoenix who sells non-steroid, non-hormone, low/no-antibiotic beef. Heck, I am even considering a chicken coop. Even as I type this, it sounds radical to me. So much so that I am afraid Rich is going to come over here, place me in an arm-bar, and scream ‘Hippie!’ in my ear. But so far I am feeling better and meals taste a lot better, so what the hell. It’s more work, and in the short term will be much higher cost, but so far I think it’s worth it. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted in SC Magazine.

Favorite Securosis Posts

  • Rich: David on Architectural Limbo. Remember, folks, Mr. Mortman builds and runs things in the cloud for a living… this isn’t just theory.
  • Adrian Lane: The Securosis Nexus (and) Beta Test FAQ. Nexusey Goodness.
  • Mike Rothman: New Series: Tokenization Guidance. No hidden agendas. No vendor sniping. Just a clear focus on what you need to do. A perfect example of how Securosis research is just different. Kudos to Adrian. Read the series.
  • David Mortman: Good versus bad FAIL.

Other Securosis Posts

  • Tokenization Guidance: PCI Supplement Highlights.
  • Incite 10/12/2011: Impact and Legacy.
  • Isolated Computing.


The Securosis Nexus (and) Beta Test FAQ

We’ve been getting some questions about the beta test, so I decided to put an FAQ together, which we will also post within the system. If you have any other questions, please feel free to ask.

General

What is the Securosis Nexus? The Securosis Nexus is an online environment to help you get your job done better and faster. It provides pragmatic research on security topics that tells you exactly what you need to know, backed with industry-leading expert advice to answer your questions. The Nexus was designed to be fast and easy to use, and to get you the information you need as quickly as possible.

Who is it for? The Nexus is for anyone with security responsibilities at their organization. We know that most of the people who work on security don’t have ‘Security’ in their titles, but need to protect their business every bit as much as the Chief Information Security Officer of a Fortune 500 company. The Nexus provides pragmatic research and advice for everyone, even if security is only one of the many hats you wear.

What’s “pragmatic research”? Pragmatic research is information you can use. The Nexus doesn’t waste your time on theory and background information (although it is available for the curious). The content tells you exactly what you need to get your job done, and makes it very easy to find.

Does that mean it tells me how to configure my products? No. The Nexus provides everything except product-specific information.

What’s “expert advice”? Through our Ask an Analyst feature you can submit questions directly to our analysts, each with decades of security experience. We know sometimes reading research isn’t enough and you need direct advice from an experienced professional to get a clear answer on your specific issue.

What makes this any different from a wiki, Gartner, or something like LinkedIn? The research is specific to security, and the Nexus presents data in several different ways, making it as quick and easy to locate information as possible. Unlike a wiki, all the content is written by professional research analysts, edited by folks who know how to write, and topics are covered completely – not a hodge-podge of whatever people want to contribute. It’s far more structured and pragmatic than big analyst firms like Gartner. And unlike LinkedIn and social media sites, you are guaranteed answers to your questions. The Nexus is not just academic reference data – it is written by people who have built and deployed security products for a living.

What else is included? Research and specific answers to your questions are the core, but the Nexus includes much more. It offers videos, checklists, podcasts, templates, and other tools to help you get your job done. All the research can be rated and commented on, which helps ensure the content is useful and up to date, and helps us improve it over time based on your feedback. The system tracks your history so you never forget what you read, and enables you to build a custom library of your favorite content. Good questions are anonymized and tied back to the content, to help others with the same problems. And we are just getting going – there will be even more capabilities in the coming months.

What are the platform requirements? The Nexus should work with all current browsers, as well as Internet Explorer version 7 and later. Although we don’t have an iOS app yet, we have optimized the site to work well on the iPad.

Beta Testing

When does the beta start? If you are reading this, it has started.

Why can’t I log in yet?
We are running the beta in phases and will be adding people on an ongoing basis. We are very conservative, and really want to ensure the system is ready before we let too many people in. We will email you when your name comes up, and we plan to eventually include everyone who signs up for the beta.

What can I expect in the beta? This is a real beta test – while the entire system is functional, there will be some bugs. We have set up a forum for feedback and will directly answer system questions (but not research questions) there. During the beta, we will be adding research on a daily basis. The beta is opening with the first layer of PCI information, but we have a ton more to add before we open the system to the public (and ask people to pay for it). We will post announcements on the portal page as we add material throughout the beta. Right now, the weakest areas are multimedia and tools/templates – such as checklists and PowerPoint samples. We will be adding these along with the rest of the content throughout the beta period. Ask an Analyst is completely open for business, so please do your best to stump us.

Is the beta free? Yes. In exchange for your help testing, we provide access to all the content as we build it, plus the Ask an Analyst tool for questions.

Will I get a free membership after the beta? No. The Nexus will be competitively priced (think hundreds, not thousands), but beta testers will need to subscribe after we open it up to the public. But until then you get all the free research and advice you can eat.

Where should I leave feedback? Please use the beta forum linked on the portal page. That provides direct access to our developers and doesn’t clutter up the comments or the rest of the live system.

After the beta, will you delete my account? No – you won’t have access, but your account will stay there if you want to come back. You should also review our privacy policy.

Privacy Policy

The Securosis Nexus does not sell your information to anyone, ever. We do retain the right to sell or distribute bulk statistics (e.g., what content is most viewed, what topics create the


Incite 10/12/2011: Impact and Legacy

As has been overly reported over the past week, Steve Jobs is gone. As Rich so adroitly pointed out, “His death hit me harder than I expected. Because not only do we not have a Steve Jobs in security, we no longer have one at all.” You know, someone who seems to be the master of the universe. Perfection personified. Of course, the reality is never perfection. But what’s perfect is imperfection. Jobs failed. Jobs started over. He took chances and ultimately triumphed. Jobs had the perspective you wished you could have. This is clearly demonstrated by what I believe to be the best speech written in my lifetime (at least so far), Steve Jobs’ Stanford Commencement speech. Why? Because if you pay attention, really pay attention to the words, it’s about the human struggle. Do what you love. Follow your own path. Don’t settle for mediocrity. Live each day to the fullest. Realize we are here for a short time, and act accordingly. It’s not trite. You can and should strive for this. You see, impact and legacy work themselves out, depending on the actions you take every day. Probably none of us will have an impact like Steve Jobs. Nor should we. You don’t need to be Steve Jobs. Just be you. You don’t have to change the world. Just make it a little better. Be a giver, not a taker. Believe in some kind of karma. Pay it forward. Do the right thing. Lead by example, and hopefully people around you will do the right thing too. If that happens, we all win, collectively. I’m not going to say don’t change the world. Or don’t try. We need folks who want to change things on a massive scale, and will do the work to make it happen. My point is that it doesn’t have to be you. As Steve Jobs said, “Your time is limited, so don’t waste it living someone else’s life. Don’t be trapped by dogma – which is living with the results of other people’s thinking. Don’t let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.” Change happens in many forms. We all want to leave the world better than when we got here. That’s what I’m working for. It’s not my place to strive for a legacy or to worry about my impact. All I can do is get up every day and do something positive. Some days will go better than others. And eventually (hopefully many years from now), I’ll be gone. Then it will be up to others to figure out my impact and legacy. Since I don’t know when my time will be up, I had better get back to work. –Mike

Photo credits: “Legacy Parkway shield” originally uploaded by CountyLemonade

Incite 4 U

Take my cards, give me back my wallet: It’s always interesting to see the market value of anything. Not just what you think something is worth, but what someone is actually willing to pay. So thanks to Imperva for mining some bad sites and posting the Current Value of Credit Cards on the Black Market. If you take a look at what’s in my wallet, you’ll see about 15 bucks worth of cards (2 AmEx, a MasterCard, and a bank card). My wallet is worth at least $30, since it’s nice Corinthian leather (said in my best Ricardo Montalban voice). So take my cards, but I’ll fight you for my wallet. – MR

Free malware scans: Google announced a Free Safe Browsing Alert for Network Administrators this week, alerting IT when malware is discovered by Google on their machines.
The service leverages their malware detection capability announced last year, which discovers malware through a combination of user-generated Safe Browsing data and Google’s site indexing crawlers. IT admins can register for alerts when Google discovers malware on the public servers within their control. This free tool will be disruptive to all the security vendors positioning malware detection as a ‘must-have’ feature – so long as it works. It’s hard to see how folks can continue charging a premium for this ‘differentiating’ service. – AL

How about a tour of Alaska? We all know that no matter what you do, bad stuff still happens. As we always say around here, you will be breached at some point. The true test of your security mettle isn’t whether you keep the bad guys out, but how you respond when they get in. A lot of that is at the heart of our paper on advanced incident response. One of the main things we talk about in that paper is knowing when, and how, to escalate your incident response process and bring in the next level of experts. While we didn’t explicitly mention it, having your command and control center for air combat drones infected with a virus would be pretty high on the list. It seems the folks on the ground failed to escalate and let the cybersecurity experts get involved. The cybersecurity command learned about it by reading Wired. If a four-star general is learning that your control center for those buzzing things sometimes armed with missiles might be a staging depot for the latest warez, it might be time to break out your cold weather gear. – RM

Maybe actually do something: OK, time for some snark. I just had to see what pearls of wisdom were in the article 8 ways to become a cloud security expert. Basically it’s a list of conferences and a few blogs. So let me get this straight. Go to RSA or the CSA Congress and you are all of a sudden an expert? C’mon, man! I have a different idea. Why don’t you actually do something in the cloud and protect it? Yeah, maybe build an instance, harden it, configure some security


Tokenization Guidance: PCI Supplement Highlights

The PCI DSS Tokenization Guidelines Information Supplement – which I will refer to as “the supplement” for the remainder of this series – is intended to address how tokenization may impact Payment Card Industry (PCI) Data Security Standard (DSS) scope. The supplement is divided into three sections: a discussion of the essential elements of a tokenization system, PCI DSS scoping considerations, and new risk factors to consider when using tokens as a surrogate for credit card numbers. It’s aimed at merchants who process credit card payment data and fall under PCI security requirements. At this stage, if you have not downloaded a copy, I recommend you do so now. It will provide a handy reference for the rest of this post. The bulk of that document covers tokenization systems as a whole: technology, workflow, security, and operations management. The tokenization overview does a good job of introducing what tokenization is, what tokens look like, and the security impact of different token types. The diagrams do an excellent job of illustrating how token substitution fits within the normal payment processing flow, providing a clear picture of how an on-site tokenization system – or a tokenization service – works. The supplement stresses the need for authorization and network segmentation – the two critical security tools needed to secure a token server and reduce compliance scope. The last section of the supplement helps readers understand the risks inherent to using tokens – which are new and distinct from the issues of traditional security controls. Using tokens directly for financial exchange, instead of as simple references to the real financial data in a private token database, carries its own risk – a hacker could use the tokens to conduct transactions, without needing to crack the token database. Should they penetrate the IT systems, hackers will misuse anything that can be used as a financial instrument, even if it is not a credit card. If the token can initiate a transaction, force a repayment, or be used as money, there is risk. This section covers a couple of critical risk factors merchants need to consider, although this has little to do with the token service – it is simply an effect of how tokens are used. Those were the highlights of the supplement – now the lowlights. The section on PCI Scoping Considerations is convoluted and ultimately unsatisfying. I wanted bacon but only got half a piece of Sizzlean. Seriously, it was one of those “Where’s the beef?” moments. Okay, I am mixing my meats – if not my metaphors – but I must say that initially I thought the supplement was going to be an excellent document. They did a fantastic job answering the presales questions of tokenization buyers in section 1.3: simplification of merchant validation, verification of deployment, and unique risks to token solutions. But after my second review, I realized the document does offer “scoping considerations”, but does not provide advice, nor a definitive standard for auditing or scope reduction. That’s when I started making phone calls to others who had read the supplement – and they were as perplexed as I was. Who will evaluate the system, and what are the testing procedures? How does a merchant evaluate a solution? What if I don’t have an in-house tokenization server – can I still reduce scope? Where is the self-assessment questionnaire? The supplement does not improve user understanding of the critical questions posed in the introduction. As I waded through page after page, I was numbed by the words.
It slowly lulled me to sleep with stuff that sounded like information – but wasn’t. Here’s an example: “The security and robustness of a particular tokenization system is reliant on many factors including the configuration of the different components, the overall implementation, and the availability and functionality of the security features for each solution.” No sh&$! Does that statement – which sums up their tokenization overview – help you in any way? Wouldn’t this statement be true for every software or hardware system? I think so. Uselessly vague statements like this litter the supplement. Sadly, the first paragraph of the ‘guidance’ – a disclaimer repeated at the foot of each page, quoted from Bob Russo in the PCI press release – reflects the supplement’s true nature: “The intent of this document is to provide supplemental information. Information provided here does not replace or supersede requirements in the PCI Data Security Standard”. Tokenization should replace some security controls and should reduce PCI DSS scope. It’s not about layering. Tokenization replaces one security model with another. Technically there is no need to adjust the PCI DSS specification to account for a tokenization strategy – they can happily co-exist, with one model covering non-sensitive systems and the other covering those which store payment data. But not providing a clear definition of which is which, and what merchants will be held accountable for, demonstrates the problem. It seems clear to me that, based on this supplement, PCI DSS scope will never be reduced. For example, section 2.2 rather emphatically states “If the PAN is retrievable by the merchant, the merchant’s environment will be in scope for PCI DSS.” Section 3.1, “PCI DSS Scope for Tokenization”, starts from the premise that everything is in scope, including the tokenization server, as it should be. But what falls out of scope, and how, is not made clear in section 3.1.2, “Out-of-scope Considerations”, where one would expect to find such information. Rather than define what is out of scope, it outlines many objectives to be met, seemingly without regard for where the credit card vault resides, or the types of tokens used. Section 3.2, titled “Maximizing PCI DSS Scope Reduction”, states that “If tokens are used to replace PAN in the merchant environment, both the tokens, and the systems they reside on will need to be evaluated to determine whether they require protection and should be in scope of PCI DSS”. From this statement, how can anything then be out of


Isolated Computing

IBM, with researchers at North Carolina State University, has announced an effective way to protect information and processes in multi-tenant environments – such as cloud and virtual deployments. What they are calling the Strongly Isolated Computing Environment is installed below the hypervisor. The teaser is that the code is a mere 300 lines – a very small footprint means simplicity, which in turn implies both performance and security. A new technique called Strongly Isolated Computing Environment (SICE) aims to isolate sensitive information and workload from the rest of the functions performed by a hypervisor, which serves as gateway to a virtual, cross-platform workspace shared by users in a cloud system. This is positioned as VMM security for x86 architectures, residing in the BIOS. The code leverages the System Management Mode (SMM) of the Intel processor – think of it as something between a mini embedded OS and a hardware debugger. SMM is a general utility used for things such as power management, cryptographic subprocesses, and the occasional attack vector. The flexibility of this feature makes the approach interesting. But make no mistake: this is not ‘cloud’ security. This is quasi-hardware security for the benefit of virtual machine managers. Hijacking the overused ‘cloud’ term is purely PR. While the research is not fully public at this time, it’s clear their goal is to provide secure containers for data and processes in multi-tenant environments. I find this interesting because, despite wide use of virtualization, questions on how best to secure the hypervisor – and the partitions that run on top of it – are still open for debate. And plenty of companies are offering different ideas for how to make this work. Technically the NC State team’s proposal is not a new approach. Isolating critical functions at the OS/BIOS/hardware layer has been done before – sometimes all three at once, with each layer validating the others. Nor is reducing attack surface a novel concept. And that’s why I am skeptical – every few years we are presented with a ‘new’ approach to security, which is as a rule nothing more than cycling through the different layers of the computing infrastructure: network centric security, or host or OS security, or the application layer, or perhaps user and information centric security. For example, if you are using information centric security, you work at the data (DRM) or application (DLP) layer. The problem is that we have been cycling around for 20 years, and we never settle on a final answer. Chris Hoff has written a ton about this perpetual cycle, and suggested why we should expect virtualization and security functions to evolve directly into the CPU. I think this is the first of many efforts we will see. Placing these functions in the BIOS/SMM could be the right solution – or just the next step before it’s fully embedded in the hardware. And then we’ll find that’s not flexible enough and place protections in the OS….


Good versus bad FAIL

On reflection, I talk about failure a lot. As I look back at my own career, FAIL has commonly appeared at inopportune times – though it’s hard to say you can pinpoint a good time to fail. It’s part of both the business and human experience, so to me failure can be positive and productive, and position you for future success. But not always, and a lot depends on the form it takes. I guess when I think of the wrong kind of failure, I point to Andreas’ post on Network World, Fail a security audit already – it’s good for you. I do understand where he’s coming from. As I mentioned, failure can serve as a catalyst for action, as a good way to assess progress (ask the ATL Falcons about that), or as a way to figure out when it’s time to pack up your tent and move on. I guess my issue is with looking at an audit as a good venue for failure. Why? An audit is an awfully low bar for anything. Yes, I understand that’s a crass generalization. Many auditors are very talented and can find unseen issues and add value. But many aren’t. Many adhere blindly to their checklists and ensure your security controls fit into a clean little box, even if there isn’t much clean about security in today’s environment. Have you ever heard the story about the scorpion and the frog? I think of it because many auditors adhere to their playbooks, disregarding actual circumstances – like the scorpion in that story. To be clear, the auditor will find something. They always do, or they understand they won’t be invited back. That doesn’t mean the stuff they find really matters. So what’s a better approach? How can you leverage an audit failure to your best advantage? Script it out and use the auditor as a piece of your evil plans. It’s okay – that’s how things get done in the real world. If you are a clued-in security professional, you know where the issues are. At least some of them. You also may face some organizational resistance to fixing issues. So you might direct the audit to miraculously find the issues you want/need fixed. Don’t make it too easy, but make sure they find what you need them to find. Amazingly enough, if something shows up in an audit’s findings of fact, it forces a decision. The decision may be to do nothing, but that will at least be a conscious decision to not address the risk. Then you can move on to the next thing and stop tilting at windmills. Or get the action you need. Either way it’s a win. So I’m all for failing. But fail correctly. Fail with a purpose. Use failure to your advantage. In some cases, actually stage your failure to make a point. I guess my real point is that any failure you face shouldn’t be a total surprise, though that will happen from time to time. Surprise failure is the kind you need to avoid. But that’s another story for another day.

Photo credit: “Fail Whale Pale Ale” originally uploaded by jamesplankton


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.