Database Security Market Sizing and Guesstimation

I read Ericka Chickowski’s Dark Reading post on Database Security Market Growth today. While I generally agree with the estimated rate of growth, I am mystified by the market sizing. Where did this number come from? Is $755M wrong? I don’t know. But I am certain nobody else does either. I get asked about the size of the database security market every month. Simple question, impossible answer. Why? For starters, even if you agree on what constitutes database security, you need to distinguish between database-specific products and general-purpose products with some database capabilities. Once you choose the ground rules for what’s in and what’s out, it’s basically a bunch of guesses about what vendors are earning. Understanding how much money a specific product earns is difficult with small firms that only have one or two products; and giant firms bundle many products, services, and maintenance together – making it impossible to assess what goes where. Was that money for the database licenses you purchased, the app and middleware stack, the user training, the professional services for customization, or the security? For an example of what I mean, let’s look at these facets in more depth:

Security Technology: What technologies comprise DB security? What’s really in and what’s out? I consider encryption, access control, database assessment, database activity monitoring, auditing, label security, and masking parts of the database security market. Sometimes I throw patch management in, but it’s really a more general process. Some of these are built into the database, but most are third-party add-ons. Your first step is to set the ground rules: which technologies will you include?

Application of Security: You need to ask, “Is it really database security, or is it generic security applied to databases?” There are many assessment tools on the market with limited database capabilities, but because they don’t log into the database with database credentials, they cannot perform a thorough scan. These products are not database assessment tools. Encryption is similar to patch management, in that the tools can be applied to more than just databases: ciphers applied to data at the application layer are not considered database encryption, but products at the OS layer are. You need to pull a large percentage of the overall products from your market sizing analysis to reflect reality. It’s like a giant series of Venn diagrams – each security technology forms an overlapping bubble, and part of each applies to databases. You need to determine their intersection.

Platform Inclusion: What do we mean when we talk about databases? Is it just the major relational platforms? Do you consider ‘open’ platforms like MySQL, PostgreSQL, and Derby? Do you include Teradata and mainframe databases? Do you include flat-file ‘databases’ and NoSQL datastores? The lines between relational and non-relational, and between non-relational database security and file security, are becoming increasingly blurry. The trend is toward a data services market, and the term ‘database’ is gradually losing its meaning. This is important because the relevant security technologies are increasingly diverse – file security tools, for example, might now be the best way to secure a flat-file database.

Revenue Calculations: To calculate revenue from any given vendor you have to figure it out the hard way: ask new customers. Vendors lie about their revenue. Even the ones who have nothing to hide still do it.
You can’t believe what they tell you. Ever. Small companies are bad, and large ones are even worse. For example, what portion of a deal was actually for DB security, and how much was for totally different stuff? Large firms frequently tell me about their million-dollar security sales, but I later find out the price was negotiated for database licenses, with auditing thrown in for free. And it’s very hard to contradict them until you speak with customers. You can’t tell from the balance sheet. Software, tools, and services get bundled at a single price, so you don’t really know the percentage spent on security unless the customer estimates it for you. Security sales reps will tell you the entirety of such a deal was for database security, which means you cannot pay attention to what they say without corroboration.

Estimating market size is a series of guesses, all added together, which is why we stopped doing it. When a market is small and the vendors are still private, you can get a very good idea of the revenue picture. For example, before the big vendors jumped into DAM, we had an excellent idea of that market’s size. If you are reading market size projections for database security, keep in mind that whoever is making them is guessing, wrong, or both. Our point is that you need a really good reason to even ask this question. If you are looking at market sizing and trends in order to predict revenue, modify your career path, or justify expenses, you need to accept that you just won’t have any accuracy. If you are looking to make investments in a particular firm, understand that some product verticals grow at 20% overall, but the majority of that growth comes from one or two firms – the rest grow at 8-10%. If you are trying to figure out specific product lines, you will need to dig in and do some serious homework to get answers with any meaning.
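To make the “series of guesses, all added together” point concrete, here is a minimal sketch – with entirely hypothetical vendors and numbers – of what happens when you honestly sum per-vendor low and high estimates of database-security-attributable revenue:

```python
# Hypothetical vendors and numbers, purely to illustrate how guesses compound.
# Each entry is a (low, high) estimate, in $M, of revenue actually attributable
# to database security after stripping out bundled licenses and services.
vendor_guesses = {
    "Vendor A (pure-play DAM)":        (20, 30),
    "Vendor B (assessment + masking)": (15, 40),
    "Vendor C (mega-vendor bundle)":   (50, 250),  # bundling makes this guess huge
}

low_total = sum(low for low, high in vendor_guesses.values())
high_total = sum(high for low, high in vendor_guesses.values())
print(f"'Market size': ${low_total}M to ${high_total}M")  # $85M to $320M
```

Even with only three made-up vendors, the honest answer spans nearly a 4x range – which is why any single published number should be treated as a guess.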


Friday Summary: October 14, 2011

It started with a corn chip. I was eating corn chips – a fresh bag – and they tasted like hell. I had a tomato and some strawberries, thinking eating healthy would be good, but my body said otherwise. They made me feel poorly. I was in the airport waiting for my flight to the Bay Area, thinking “What the hell are they putting in this stuff – it’s a freakin’ corn chip?” I anticipated that my trip would be emotionally exhausting, and I would be run down from all the work, but I ended up feeling better than I had in years. I mean, after a couple days, I was feeling really good. Part of it was being able to see good friends, and some of it was a few days without working. But it was more than that – after a week I realized that I had been eating really well, and it was making me feel much better.

The food we ate in Berkeley was largely locally grown fruits and vegetables, organic meats and grains. Every time I went to dinner at someone’s house it was food out of the garden. Well, the Scotch was not locally crafted, but everything else was. I mentioned this over dinner one night and I got an earful. My friends went into an entire story about how their farm animals can tell the difference between genetically engineered corn and the ‘real’ stuff, and tend to leave it uneaten. In fact it was the health of their pets – the average lifespan of their dogs extended by 35%, with fewer incidents of cancer – that convinced my hosts to go on a natural food diet. They told me they had gone vegetarian, but later realized it was not a meat vs. no meat issue, but a crap food issue. They went through the process of finding raw foods and bought a place where they could have a year-round garden. They are eating meat again – but it took a long time to find food that was not totally bastardized.

I have to say that their chicken tastes like chicken. That may sound stupid, but the slow degradation of eating grocery store or fast food chicken prevents you from realizing just how far off what’s being sold to you is. Taste and texture. The real stuff cooks differently as well, and it just tastes great! I guess I always knew home grown tasted better – but I was not aware how much. The pork was white meat and tasted great. The eggs tasted nothing like what I get in the supermarket. None of the bottled sauces, syrups, or seasonings – it was all homemade. I had known there is a huge difference in produce – especially tomatoes – as you can’t find a tomato that tastes like anything but water at mainstream grocery stores. But between the engineering on the tomato varieties so they remain firm for shipment, and the fact that they’re picked weeks before they are ripe and instead turned red with gas… no wonder the taste is absent. Good food makes eating more fun.

This resulted in a very weird experience when I got back home – walking through the grocery store, I felt as if half the stuff on the shelves was poisonous. In fact I had trouble finding anything I wanted to eat – even if you read the label, you can’t determine what’s in these ‘products’, but it’s likely not food. And for those who know me, given my metabolism, getting enough food is usually a problem. Trying to eat healthy was compounding the issue. So I decided to do something about it, and jump in with both feet. In fact I am making up for lost time. I’ve started driving 25 miles down to the good grocery stores to get better food.
I have decided to grow more food, and in the last few days discovered fruit trees that thrive in desert heat; I ordered a half dozen peach, apricot, aprium, apple, and almond trees. I purchased Valencia ‘summer’ oranges to fill the summer gap in citrus – I already have 9 trees that ripen at different times of the year. I have replaced most of the sugar in the house with unprocessed stuff, stocking up on honey and maple syrup. I am researching beehives – I have space way out back in mind. I have replaced all the flour in the house with different grades of whole wheat and buckwheat flour. I have designed a garden enclosure – in CAD – to keep the million-and-one different varieties of critters out of the garden I will be building shortly. I have found seeds for vegetables that thrive in the desert heat. I am looking for someone in Phoenix who sells non-steroid, non-hormone, low/no-antibiotic beef. Heck, I am even considering a chicken coop. Even as I type this, it sounds radical to me. So much so that I am afraid Rich is going to come over here, place me in an arm-bar, and scream ‘Hippie!’ in my ear. But so far I am feeling better and meals taste a lot better, so what the hell. It’s more work, and in the short term will cost much more, but so far I think it’s worth it.

On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Rich quoted in SC Magazine. Favorite Securosis Posts Rich: David on Architectural Limbo. Remember, folks, Mr. Mortman builds and runs things in the cloud for a living… this isn’t just theory. Adrian Lane: The Securosis Nexus (and) Beta Test FAQ. Nexusey Goodness. Mike Rothman: New Series: Tokenization Guidance. No hidden agendas. No vendor sniping. Just a clear focus on what you need to do. A perfect example of how Securosis research is just different. Kudos to Adrian. Read the series. David Mortman: Good versus bad FAIL. Other Securosis Posts Tokenization Guidance: PCI Supplement Highlights. Incite 10/12/2011: Impact and Legacy. Isolated Computing.


Tokenization Guidance: PCI Supplement Highlights

The PCI DSS Tokenization Guidelines Information Supplement – which I will refer to as “the supplement” for the remainder of this series – is intended to address how tokenization may impact Payment Card Industry (PCI) Data Security Standard (DSS) scope. The supplement is divided into three sections: a discussion of the essential elements of a tokenization system, PCI DSS scoping considerations, and new risk factors to consider when using tokens as a surrogate for credit card numbers. It’s aimed at merchants who process credit card payment data and fall under PCI security requirements. At this stage, if you have not downloaded a copy, I recommend you do so now. It will provide a handy reference for the rest of this post.

The bulk of the document covers tokenization systems as a whole: technology, workflow, security, and operations management. The tokenization overview does a good job of introducing what tokenization is, what tokens look like, and the security impact of different token types. The diagrams do an excellent job of illustrating how token substitution fits within the normal payment processing flow, providing a clear picture of how an on-site tokenization system – or a tokenization service – works. The supplement stresses the need for authorization and network segmentation – the two critical security tools needed to secure a token server and reduce compliance scope. The last section of the supplement helps readers understand the risks inherent to using tokens – which are new and distinct from the issues of traditional security controls. Using tokens directly for financial exchange, instead of as simple references to the real financial data in a private token database, carries its own risk – a hacker could use the tokens to conduct transactions, without needing to crack the token database. If attackers penetrate the IT systems, anything that can be used as a financial instrument will be misused, even if it is not a credit card number. If the token can initiate a transaction, force a repayment, or be used as money, there is risk. This section covers a couple of critical risk factors merchants need to consider, although this has little to do with the token service – it is simply an effect of how tokens are used.

Those were the highlights of the supplement – now the lowlights. The section on PCI Scoping Considerations is convoluted and ultimately unsatisfying. I wanted bacon but only got half a piece of Sizzlean. Seriously, it was one of those “Where’s the beef?” moments. Okay, I am mixing my meats – if not my metaphors – but I must say that initially I thought the supplement was going to be an excellent document. They did a fantastic job answering the presales questions of tokenization buyers in section 1.3: simplification of merchant validation, verification of deployment, and unique risks to token solutions. But after my second review, I realized the document does offer “scoping considerations”, but does not provide advice, nor a definitive standard for auditing or scope reduction. That’s when I started making phone calls to others who had read the supplement – and they were as perplexed as I was. Who will evaluate the system, and what are the testing procedures? How does a merchant evaluate a solution? What if I don’t have an in-house tokenization server – can I still reduce scope? Where is the self-assessment questionnaire? The supplement does not improve user understanding of the critical questions posed in the introduction. As I waded through page after page, I was numbed by the words.
It slowly lulled me to sleep with stuff that sounded like information – but wasn’t. Here’s an example: “The security and robustness of a particular tokenization system is reliant on many factors including the configuration of the different components, the overall implementation, and the availability and functionality of the security features for each solution.” No sh&$! Does that statement – which sums up their tokenization overview – help you in any way? Isn’t this statement true of every software or hardware system? I think so. Uselessly vague statements like this litter the supplement.

Sadly, the first paragraph of the ‘guidance’ – a disclaimer repeated at the foot of each page, quoted from Bob Russo in the PCI press release – reflects the supplement’s true nature: “The intent of this document is to provide supplemental information. Information provided here does not replace or supersede requirements in the PCI Data Security Standard”. Tokenization should replace some security controls and should reduce PCI DSS scope. It’s not about layering. Tokenization replaces one security model with another. Technically there is no need to adjust the PCI DSS specification to account for a tokenization strategy – they can happily co-exist – with one set of systems handling non-sensitive data and the other handling payment data. But not providing a clear definition of which is which, and what merchants will be held accountable for, demonstrates the problem.

It seems clear to me that, based on this supplement, PCI DSS scope will never be reduced. For example, section 2.2 rather emphatically states “If the PAN is retrievable by the merchant, the merchant’s environment will be in scope for PCI DSS.” Section 3.1, “PCI DSS Scope for Tokenization”, starts from the premise that everything is in scope, including the tokenization server, as it should be. But what falls out of scope, and how, is not made clear in section 3.1.2, “Out-of-scope Considerations”, where one would expect to find such information. Rather than define what is out of scope, it outlines many objectives to be met, seemingly without regard for where the credit card vault resides, or the types of tokens used. Section 3.2, titled “Maximizing PCI DSS Scope Reduction”, states that “If tokens are used to replace PAN in the merchant environment, both the tokens, and the systems they reside on will need to be evaluated to determine whether they require protection and should be in scope of PCI DSS”. From this statement, how can anything then be out of scope?
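To make the scoping argument concrete, here is a minimal sketch of the separation a token server is supposed to create. It is purely illustrative – hypothetical class and method names, an in-memory vault, random tokens that keep the last four digits – and a real tokenization server adds authentication, network segmentation, and audit logging. The point is that systems holding only tokens cannot retrieve the PAN, while the vault that can retrieve it unambiguously stays in scope:

```python
import secrets

class TokenVault:
    """Illustrative on-site token server. Real systems authenticate callers,
    live on a segmented network, and log every tokenize/detokenize call."""

    def __init__(self):
        self._token_to_pan = {}   # the only place the PAN is stored

    def tokenize(self, pan: str) -> str:
        # Random surrogate with no mathematical relationship to the PAN.
        # Preserving the last four digits is a common format choice so
        # receipts and customer service keep working.
        token = "9" + "".join(str(secrets.randbelow(10)) for _ in range(11)) + pan[-4:]
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with access to the vault can ever recover the PAN.
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # what order history, CRM, and analytics store
print(vault.detokenize(token))  # possible only for systems that can reach the vault
```

Under that separation, the question the supplement never really answers is which of the merchant systems that store only the token still need to be evaluated – which is exactly where merchants were hoping for guidance.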


Isolated Computing

IBM, with researchers at North Carolina State University, has announced an effective way to protect information and processes in multi-tenant environments – such as cloud and virtual deployments – with what they are calling the Strongly Isolated Computing Environment, installed below the hypervisor. The teaser is that the code is a mere 300 lines – a very small footprint means simplicity, which in turn implies both performance and security.

A new technique called Strongly Isolated Computing Environment (SICE) aims to isolate sensitive information and workloads from the rest of the functions performed by a hypervisor, which serves as gateway to a virtual, cross-platform workspace shared by users in a cloud system. This is positioned as VMM security for x86 architectures, residing in the BIOS. The code leverages the System Management Mode (SMM) of the Intel processor – think of it as something between a mini embedded OS and a hardware debugger. SMM is a general utility used for things such as power management, cryptographic subprocesses, and the occasional attack vector. The flexibility of this feature makes the approach interesting. But make no mistake: this is not ‘cloud’ security. This is quasi-hardware security for the benefit of virtual machine managers. Hijacking the overused ‘cloud’ term is purely PR.

While the research is not fully public at this time, it’s clear their goal is to provide secure containers for data and processes in multi-tenant environments. I find this interesting because, despite wide use of virtualization, questions about how best to secure the hypervisor – and the partitions that run on top of it – are still open for debate. And plenty of companies are offering different ideas for how to make this work. Technically the NC State team’s proposal is not a new approach. Isolating critical functions at the OS/BIOS/hardware layer has been done before – sometimes all three at once, with each layer validating the others. Nor is reducing attack surface a novel concept. And that’s why I am skeptical – every few years we are presented with a ‘new’ approach to security, which is as a rule nothing more than cycling through the different layers of the computing infrastructure: network-centric security, host or OS security, the application layer, or perhaps user- and information-centric security. For example, if you are using information-centric security, you work at the data (DRM) or application (DLP) layer. The problem is that we have been cycling around for 20 years, and we never settle on a final answer. Chris Hoff has written a ton about this perpetual cycle, and suggested why we should expect virtualization and security functions to evolve directly into the CPU. I think this is the first of many efforts we will see. Placing these functions in the BIOS/SMM could be the right solution – or just the next step before it’s fully embedded in the hardware. And then we’ll find that’s not flexible enough and place protections in the OS….


New Series: Tokenization Guidance

Tokenization Guidance. I have wanted to write this post since the middle of August. Every time I started writing, another phone call came in from a merchant, payment processor, technology vendor, or someone loosely associated with a Payment Card Industry (PCI) task force or special interest group (SIG). And every conversation yielded some new sliver of information that changed what I wanted to say, or implied some research work had already been conducted that was far more interesting and useful than anything being provided to the public. This in turn prompted more calls, new conversations, more digging, and – like a good mystery novel – led me to iteratively peel back another layer of the onion. I’ve finally reached a point where I believe I have enough of the story to understand what was published and why it’s not what they should have published. But enough of the preamble: let’s back up and dive into the subject at hand.

On August 12, 2011, the PCI task force driving the study of tokenization published an “Information Supplement” called the PCI DSS Tokenization Guidelines. More commonly known as the ‘Tokenization Guidance’ document, it discusses the dos and don’ts of using token surrogates for credit card data. The only problem is that this document is sorely lacking in actual guidance. Even the section on “Maximizing PCI DSS Scope Reduction” is a collection of broad generalizations on security, rather than practical advice. After spending the better part of the last two weeks with this wishy-washy paper, a better title would be “Quasi-acceptance of Tokenization Without Guidance”. And all my conversations indicate that this opinion is universally held outside the PCI Council. “We read the guidance but we don’t know what falls out of scope!” is the universal merchant response to the tokenization information supplement. “Where are the audit guidelines?” is the second most common response. The tokenization guidelines provide an overview of the elements of a tokenization system, along with the promise of reduced compliance requirements, but they don’t provide a roadmap for how to get there.

Let’s make one thing very clear right from the start: there is very wide interest in tokenization because it promises better security, lower risk, and – potentially – significant cost reductions for compliance efforts. Merchants want to reduce the work they must do in order to comply with the PCI requirements – which is exactly why they are interested in tokenization technologies. Security and lower risk are secondary benefits. But without a concrete idea of the actual cost reduction – or worse, an understanding of how they will be audited once tokenization is deployed – they are dragging their feet on adoption. There is no good reason to omit a basic cookbook for scope reduction when using tokenization. I am going to take the guesswork out of it and provide real guidance for evaluating tokenization, and clarify how to benefit from it. This will be in the form of concrete, actionable steps for merchants deploying tokenization, with checklists for auditors reviewing tokenization systems. I’ll fill in the gaps from the PCI supplement, poke at the topics they decided it was politically unpalatable to discuss, and specify what you can reasonably omit from the scope of your assessment. Given an overview of what you can reasonably consider to be out of scope, I’ll advise you on how to approach compliance, and follow up with some checklists to make it easier.
This is more than I can cover in a single post, so I will cover these topics over the next two weeks, ultimately wrapping them into my own tokenization guidance white paper. The series will have four parts:

  • Key points from the supplement: Outline what the PCI information supplement on tokenization means and discuss the important aspects of the technology for users to focus on. We’ll discuss what is missing from the guidance and what does – and does not – help reduce PCI assessment effort.
  • Guidance for merchants: How tokenization changes PCI compliance. We’ll discuss critical areas of concern when deciding to adopt a tokenization solution, with guidance on reducing audit scope. This will encompass areas including implementation tradeoffs, integration, rollout, and vendor lock-in.
  • The audit process: How tokenization impacts the auditing process, how to work with your assessor to establish testing criteria, and where to look to reduce the scope of your audit. We’ll provide guidance for working with QSAs and for self-assessment.
  • Checklists: The guidance describes major components of the technology but lacks operational guidelines for assessors or merchants. As with the original PCI-DSS documents, I’ll include an audit checklist to supplement the PCI standard, covering what should be considered out of scope and where you can shave time from your auditing process.

I will present information I feel should have been included in the tokenization supplement. And I will advise against use of some technologies and deployment models that frankly should not have been lumped in with the supplement, as they don’t simplify or reduce risks in the way any merchant should be looking for. I am willing to bet that some of my recommendations will make many interested stakeholders quite angry. I accept this as unavoidable – my guidance is geared toward making the lives of merchants who buy tokenization solutions easier, rather than avoiding conflict with vendor products. No technology vendor or payment provider ever endorses guidance that negatively impacts their sales, so I expect blowback. As always, if you think some of my recommendations are BS, I encourage you to comment. We are open to criticism and alternate viewpoints, and we always factor relevant comments into our final research. I do ask vendors to identify themselves. I will also assume some prior knowledge of tokenization and PCI-DSS. There is a ton of research on the Securosis blog and in the Research Library on these subjects. If you are not fully up to speed on tokenization systems, or are interested in learning more about tokenization in general, I suggest you review


Friday Summary: September 23, 2011

At 20 years old, you are on a precipice of perception: you are an adult, but many adults view you as a kid. In the back of your mind you worry a bit about how adults will perceive you. It was with trepidation that I met my best friend’s Mom in college. My friend George – someone I had only known a couple of months, but felt like I had known for years – invited me to dinner. I was surprised when his truck stopped in front of my house and he was not in it – instead his mother was. The truck screeched to a halt and out popped the highest-energy person I have ever met; with a hearty “Hi there,” she was literally effervescent. I was reserved, wondering how the famous ‘Doctor’ would treat me – as a child or as an adult. She waved again, and told me to get my ass out of the street and into the truck. I obliged, somewhat taken aback, and hopped in the passenger seat. She rolled up to the red light, looked both ways, and floored it! We screeched through the intersection, oncoming traffic be damned; up the street, fraternity boys racing for the sidewalks, we headed for home. I was in the passenger seat, looking at this 50-ish Mom in utter disbelief. She was flying through the streets of Berkeley. “Oh, shit!” she said, stubbing out a cigarette. “Don’t tell George I did that. He’ll have a fit I am driving his truck like this.” Then she started telling a dirty joke, and believe me, OB/GYN doctors have some raunchy ones. It was at this point I relaxed, and I knew we were going to be friends.

And we were. We have been very close for the last 24 years. I am not afraid to say I was closer to her than I was to her son – and I consider George to be my brother. She would have adopted me had I been under 18; I know because she told me several years later that she tried. While I had my own place at Cal Berkeley during college, I lived with them. When I graduated I visited every free weekend. Even when I moved to Arizona, every Bay Area visit during the last 10 years included a mandatory stop to visit my friend and drink mocha-java coffee and talk about whatever: the stock market, politics, sailboat racing, Scotch, gardening, her crappy neighbor, broken sewer lines, etc. The details never mattered – it was always fun. She had a phenomenal intellect and a razor-sharp wit. And the food – the food – was always memorable.

My friend passed away this week from cancer. Her 5th bout in the last 11 years. She never mentioned it, as she was determined to keep all these struggles a secret. But I knew – one way or another I pieced it together. And I kept my mouth shut, because I knew she would be pissed off if I let on – there is no value in embracing such things. And at any sign of pity she would have whacked me in the head before kicking me out of the house. So I called and visited as often as I could and never said a word, never acted differently. After all, life is to be enjoyed, and she lived it exactly the way she wanted to. And I’ll always remember her as that energetic, wickedly funny person who just wanted to have fun. There will never be pity or regret, but she will be missed.

Oh, and a short summary this week. Webcasts, Podcasts, Outside Writing, and Conferences Adrian’s DR post on Segregating DBA And Admin Duties. Favorite Securosis Posts David Mortman: Security Management 2.0: Migration. Adrian Lane: Home Invasion: What would you do? Other Securosis Posts Incite 9/21/2011: Where’s Waldo? Friday Summary: September 16, 2011.
Favorite Outside Posts David Mortman: Don’t Hit the Snooze Button on DigiNotar Alarm Bells. Adrian Lane: Top 10 Most Overhyped Technology Terms. Very entertaining read by Amrit Williams. But just so that does not go to his head, he does really suck at Twitter. Rich: Criminal Hack versus FOIA request: The Showdown. Read this one and just think about it for a moment. Anonymous and Lulzsec look pretty petty and malicious. Project Quant Posts DB Quant: Index. NSO Quant: Index of Posts. NSO Quant: Health Metrics–Device Health. NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS. NSO Quant: Manage Metrics–Deploy and Audit/Validate. Research Reports and Presentations Tokenization vs. Encryption: Options for Compliance. Security Benchmarking: Going Beyond Metrics. Understanding and Selecting a File Activity Monitoring Solution. Database Activity Monitoring: Software vs. Appliance.


Security Management 2.0: Migration

As we wrap up our Security Management 2.0 series, we have completed quite a journey. You have undertaken a disciplined and objective process to determine whether it’s worth moving to a new security management platform. Assuming your decision is to move, now it gets real. You need to implement and migrate your existing environment to the new platform, while maintaining service levels and without opening your organization to any additional risk. Walk in the park, right? Let’s address these migration issues, so hopefully you can learn from some of my pain.

I started work at a previous employer two days after an IT consultancy performed a server migration. Coincidentally, at the same time I was helping a friend at a major bank review his data center migration plans. I’ll tell you that the bank had every phase of the changeover planned down to half-day increments, with backup plans in place, following months of migration rehearsals. Let’s just say the IT consultancy had less elaborate plans. Bank employees knew their systems were critical and treated the migration as such – the IT consultants, not so much. When I walked into the offices at my new job every server was down, removed from its rack, sitting in a pile by the door. The consultancy was assembling the new hardware – and had been for more than a day. Their plan was to finish the hardware in a day or so; when they finished that, they would install the operating systems. Then, once they had the identity management system working, they planned to install the applications and import customer data. Out in the hallway, a few dozen very angry salespeople paced, idle, three weeks before the close of the quarter. It was a bad day for everyone. The IT consultancy’s contract was terminated that day. After plugging the old servers back in and dispersing the lynch mob outside the server room, I planned out how to migrate to the new servers without any additional downtime. It was not just for the business’s sake, but to ensure my personal safety as well. While I did not go to the same extremes as my friend’s team at a certain giant, I acknowledged my servers were no less critical to our business, and a seamless migration of services was mandatory.

What can we learn from this somewhat transformative experience? A flash cutover never really is. We recommend you start deploying the new SIEM long before you get rid of the old one. At best, you’ll deprecate portions of the older system after newer replacement capabilities are online, but you will likely want the older system as a fallback until the new functions have been vetted and tuned. We learned the importance of this staging process the hard way. Ignore it at your own peril, keeping in mind that your security management platform sustains several key business functions. We have broken the migration process into two phases: planning and implementation. Your plan needs to be very clear and specific about when things get installed, how data gets migrated, when you cut over from the old systems to the new, and who performs the work.

Plan

The Planning step leverages much of the work you have done up to this point in the process of evaluating replacement options – you just need to tune it for the migration.

Review: Go back through the documents you created earlier in this series. First are the platform evaluation documents, which will help you understand what the current system provides, as well as the key areas of deficiency to address.
These documents become the priority list for the migration effort, and form the foundation of the migration task list. Next, leverage what you learned during the Proof of Concept (PoC). When evaluating your new security management platform provider, you conducted a mini-deployment exercise. Use the findings from that exercise – what worked and what didn’t – to feed subsequent planning and address the issues it identified.

Focus on Incremental Success: What do you install first? Do you work top down or bottom up? Do you keep both systems operational throughout the entire migration, or do you shut down portions of the old system as each node migrates? We recommend you use your deployment model as a guide. You can learn more about these models by checking out our Understanding and Selecting a SIEM paper. When using a mesh deployment model, it’s often easiest to make sure a single node/location is fully functional before moving on to the next. With ring architectures, it’s best to get the central SIEM platform operational and then gradually add nodes around it. Hierarchical models are best deployed top-down, with the central server first, followed by regional aggregation nodes in order of criticality, and then down to the collector level. The point is to break up the project so success happens incrementally, and you avoid proceeding too far down any wrong paths.

Allocate resources: Who is going to do the work? When are they going to do it? How long will it take to deploy the platform, data collectors, and/or log management support system(s)? This is also the time to engage professional services and enlist the new vendor’s assistance. The vendor presumably does these implementations all day long, so they should have expertise at estimating these timelines. You may also want to engage them to perform some (or all) of the work in tandem with your staff, at least for the first few locations, until you get the process down.

Define the Timeline: Estimate the time it will take to deploy the servers, install the collectors, and implement your policies. Add some time for testing and verification. There is likely some ‘guesstimation’ on your part, but you have some reasonable metrics to base your plan on, from the PoC and prior experience with SIEM. You did document the PoC, right? Plan the project commencement date and publish it to the team. Solicit feedback and


Friday Summary: September 16, 2011

It was the idea of a party that got me thinking about it: I loved the 1990s. It was a great decade – for me at least. I had just graduated college and pretty much everything was new. During that decade I met my wife, got married, got my first place on my own, bought my first house, got my first promotion to CTO, was finally able to buy a car that cost more than a week’s salary, made good money, was best man at four friends’ weddings, started my first company, finally got to travel the US, and made many lasting friendships. Silicon Valley was a great place to work back then – it seemed like every week there was some amazing new technology to work on, or an exciting new trend.

This last decade sucked. I closed my first company, nearly lost every penny in the tech crash, had serious doubts about what I wanted to do with my life, was uncertain whether I wanted to stay in technology, suffered health issues, avoided the news every day in case ‘W’ did something else to piss me off, worked with jerks, moved friends out of their foreclosed homes, watched other friends implode, and finally closed my wife’s real estate office. It certainly has not been all bad, but there have been an inordinate number of poop storms. It feels like I have been enduring this depression – the economic one that technically started in 2007 with the real estate collapse – since the 2001 tech collapse. Everything good of the 90s was counterbalanced by the bad of the 2000s. My attitude and optimism took a severe beating.

But things are getting much better – even though some places in Phoenix still look post-apocalyptic. I get to live at home now: no more interstate commute. I no longer work on Monkey Island. In the last 18 months or so, while the workload is staggering, this little business of ours has been growing. And I could not ask for better business partners! Technology is interesting again. I have finally gotten a life/work balance I am comfortable with. I don’t tie my entire sense of self-worth to my work any longer. The family is healthy and happy, my wife is embarking on a new career, and it feels like we have turned a corner. So my wife and I decided it was time to come out of our doldrums and do something fun. As a symbol, we chose to revive our Halloween party – which we used to throw in the Bay Area for 80-100 people. We debated it for a long time – were we really in the mood? We decided on a coming-out-of-the-depression party – a 1940s theme to commemorate the last time the US came out of a depression. We’ll arrange the living room like a scene from ‘Casablanca’, throw in some jazz & swing music, and top it off with classic cocktails. I think it should be a good time, and I feel strangely optimistic. I doubt any of the three people reading this will be in Phoenix the weekend before Halloween, but if you are, let me know and I’ll scrounge up an invite.

On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Adrian in webcast: Security Mgmt 2.0: Time to Replace Your SIEM? Favorite Securosis Posts Rich: Payment Trends and Security Ramifications. Awesome summary. Adrian Lane: Payment Trends and Security Ramifications. Yeah, I am picking my own post. Mike Rothman: Fact-Based Network Security: In Action. Someone else will link to Rich’s great SSL post, so let me highlight the kind of post I like best. Applied use of theory, even if it is a concocted scenario. You should read all the posts in this series. It’s good stuff if I do say so myself, and I do.
David Mortman: Building an SSL Early Warning System. Other Securosis Posts Incite 9/14/2011: Mike and the Terrible, Horrible, No Good, Very Bad Day. Security Management 2.0: Making the Decision. Recently on the Heavy Feed. Friday Summary: September 9, 2011. Speaking at OWASP: September 22 and 23. Security Management 2.0: Vendor Evaluation – Driving the PoC. Security Management 2.0: Negotiation. Favorite Outside Posts Rich: Criminal Hack versus FOIA request: The Showdown. Read this one and just think about it for a moment. Anonymous and Lulzsec look petty and malicious. Adrian Lane: Protecting against XSS. Good analysis of XSS and tips on how to handle it. Mike Rothman: Surviving 9/11: Ten Years Later. Haunting story from Penelope Trunk about her experience surviving the 9/11 attack. And how she learned to be OK stepping off the fast track. “I am not a person who waited until the end of my life to slow down. I’m someone who stopped competing.” Word to that. David Mortman: DigiNotar: surveying the damage with OCSP. Project Quant Posts DB Quant: Index. NSO Quant: Index of Posts. NSO Quant: Health Metrics–Device Health. NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS. NSO Quant: Manage Metrics–Deploy and Audit/Validate. NSO Quant: Manage Metrics–Process Change Request and Test/Approve. Research Reports and Presentations Tokenization vs. Encryption: Options for Compliance. Security Benchmarking: Going Beyond Metrics. Understanding and Selecting a File Activity Monitoring Solution. Database Activity Monitoring: Software vs. Appliance. React Faster and Better: New Approaches for Advanced Incident Response. Top News and Posts LexisNexis’ study on the true cost of fraud. Huh. Intel and McAfee Unveil DeepSAFE. I debated whether this qualified as news, given my first-hand knowledge that hardware-level security hooks for A/V, identity, and OS have been under serious consideration at Intel since at least 1998. But now we have a live implementation, so I am interested to see the value it provides. Apache HTTP Server 2.2.21 Released. Several important security fixes. Patch Tuesday Blocks More DigiNotar Certificates. Adobe, Windows Security Patches via Krebs. Microsoft Windows 8 will ship with built-in anti-virus. Blog Comment of the Week Remember, for every comment selected, Securosis makes a


Payment Trends and Security Ramifications

I write a lot about payment security. Mostly brief snippets embedded in our weekly Incite, but it’s a topic I follow very closely and remain deeply interested in. Early in my career I developed electronic wallet and payment gateway software for Internet commerce sites, along with application-embedded payment options. I have been closely following the technical evolution of this market for over 15 years – back in the days of CyberCash, Paymatech, and JECF. But unlike many of the articles I write, payment security affects more than just IT users – it impacts pretty much everyone. And now is a very good time to start paying attention to the payment space, because we are witnessing more changes, coming faster than ever. Most of the changes are directly attributable to the disruptive nature of mobile devices: they not only offer a convenient new medium for payment, but also threaten to reduce the revenue and brand awareness of the major payment players. So issuing banks, payment processors, card brands, and merchants are all reacting in their own ways. The following are some highlights of trends I have been tracking:

1) Mobile Wallets: A mobile wallet is basically a payment app that authorizes payments from your phone. The app interacts with the point-of-sale terminal in one of several ways, including WiFi, image readers, and text message exchanges. While the technical approaches vary, payment is cleared without providing the merchant with a physical credit card, or even revealing a credit card or bank account number. Many credit card companies look on wallet apps as a way to ‘accelerate’ commerce and reduce consumer reticence to spend money – as credit cards did in the 70s. The flip side is that many card brands are scared by all this. Some are worried about losing their brand visibility – you pay with your phone rather than their branded credit card, and your bill might come from your telephone company without a Visa or Mastercard logo or identification. Customers can choose a payment application and provider, so churn can increase and customer ‘loyalty’ is reduced. Furthermore, the app need not use a credit card at all – like a debit card, it could draw funds directly from a bank account. When you think about it, as a consumer, do you really care whether it is Visa or Mastercard or iTunes or PayPal, so long as payment is accepted and you get whatever you’re paying for? Sure, you may look for the Visa/Mastercard sticker on the register or door today, but when you and the merchant are both connected to the Internet, do you really care how the merchant processes your payment, so long as they accept your ‘card’ and your risk is no greater than today? When you buy something using PayPal you draw funds from your bank account, from your credit card, or from your PayPal balance – but you are dealing with PayPal, and your bank or credit card provider is barely visible in the transaction. The threat of diminished revenue and diminished brand stickiness – on top of a global reduction in credit card use – is pushing card brands and payment processors into this market as fast as they can go. From what I see, security is taking a back seat to market share. Most of the wallets I review are designed to work now, minimizing software and hardware PoS changes to ensure near-term availability. Basic passwords and phone-presence validations will be in place, but these systems are designed with a security-second mentality.
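To make the wallet mechanics concrete, here is a minimal sketch – hypothetical names, not any specific vendor’s protocol – of the basic exchange described above: the phone requests a single-use payment token from its wallet provider, the merchant’s terminal only ever sees that token, and the provider maps it back to the enrolled account when authorizing the charge:

```python
import secrets

# Purely illustrative wallet-provider service: hypothetical names, no real
# wallet protocol. The point is what the merchant does and does not see.
class WalletProvider:
    def __init__(self):
        self._enrolled = {}   # device_id -> PAN, held only by the provider
        self._one_time = {}   # one-time token -> (device_id, amount)

    def enroll(self, device_id, pan):
        self._enrolled[device_id] = pan

    def request_payment_token(self, device_id, amount):
        # Phone asks for a single-use token for this purchase; the PAN never
        # leaves the provider. Real schemes add device/user authentication.
        token = secrets.token_hex(8)
        self._one_time[token] = (device_id, amount)
        return token

    def authorize(self, token, amount):
        # PoS forwards the token; the provider maps it back and authorizes.
        device_id, approved_amount = self._one_time.pop(token, (None, None))
        return device_id is not None and amount == approved_amount

provider = WalletProvider()
provider.enroll("phone-123", "4111111111111111")
token = provider.request_payment_token("phone-123", 25.00)
print(provider.authorize(token, 25.00))   # True - merchant saw only the token
print(provider.authorize(token, 25.00))   # False - single use, replay fails
```

Everything interesting – enrollment, device authentication, and what happens when the phone or the token is compromised – lives with the provider, which is exactly where the security-second tradeoffs show up.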
And just like the Chip & Pin systems I will discuss in a moment, mobile wallets could be more secure than physical cards or reading numbers over the phone, but the payment schemes I have reviewed are all vulnerable to specific threats – which might compromise the transaction, the phone, or the wallet app.

2) Smart Cards: These are the Chip & Pin – or integrated circuit – systems used widely in Europe. The technical standards are specified by the Europay-Mastercard-Visa (EMV) consortium. Merchants are being encouraged to switch to Chip & Pin with promises of reduced auditing requirements, contrasted against the threat of growing credit card fraud – but merchants know card cloning has been a problem for decades, and it has not been enough to get them to endorse smart cards. I recently discussed the issues surrounding them in Say Hello to Chip and Pin, but I will recap briefly here. Smart cards are really about three things: 1) new revenue opportunities provided by multi-app cards for affinity group sales, 2) moving liability away from the processor and merchant and onto the consumer, and 3) compatibility with Chip & Pin hardware and software systems used elsewhere in the world. More revenue, less risk, and standardized hardware for multiple markets, which reduces costs through competition. And a merchant that invests in smart card PoS and register software is less likely to invest in payment systems that support mobile phones – creating PoS vendor and merchant lock-in. Once again, smart cards are marketed as advanced security – after all, it is harder to clone a smart card – despite ample proof that Chip & Pin is hackable. This is about revenue and brand: making more and keeping more. Incremental security benefits are just gravy for the parties behind Chip & Pin.

3) Debit Cards: Mobile wallets may change the debit card landscape. If small cash transactions are facilitated through mobile wallet payments, the need for pocket cash diminishes, as does the need to carry a branded debit card! This is important because, since the Fed cut debit card fees in half, many banks have been looking to make up lost revenue by charging debit card ‘privilege’ fees above and beyond ATM fees. Wells Fargo, for example, makes around 45% of its revenue on fees; this number will shrink under the new law – potentially by billions, across the entire industry. Charging $3 a month for debit card usage will push consumers to look for


Speaking at OWASP: September 22 and 23

Gunnar Peterson and I will be presenting at OWASP AppSec USA, September 20-23rd, at the Minneapolis Convention Center in – you guessed it – Minneapolis, Minnesota. This year’s theme is “Your life is in the cloud”, so there are plenty of talks on mobile app security and how to weave security into your cloud environment. Gunnar is presenting on Mobile Web Services, discussing mobile application vulnerabilities in the web services layer. I’ll be presenting CloudSec 12-Step, a look at foundational security precautions developers need to consider when building and deploying cloud applications. They have scheduled many other great talks as well. And personally, I am willing to bet autumn weather in Minnesota will be awesome! Okay, perhaps my perspective is skewed – Arizona just set a record for the hottest August in history, with some 33 days this summer over 110 degrees – but regardless, Minnesota should be very nice. Come by and check out the presentations. As always, we look forward to seeing friends – shoot us an email if you want to meet up that week.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.