Friday Summary: August 12, 2011

Believe it or not, I’m not the biggest fan of travel. Oh, I used to be, maybe 10+ years ago when I was just starting to travel as part of my career. Being in your 20s and getting paid to literally circle the globe isn’t all bad… especially when you’re single. But the truth is I got tired of travel long before I started a family. Traveling every now and then is a wonderful experience that can change the lens through which you view the world. Hitting the airport once or twice a month, on the other hand, does little more than disrupt your life (and I know plenty of people who travel even more than that).

I miss being on a routine, and I really miss the strong local social bonds I used to have. Travel killed my chances of moving up to my next Black Belt. It wrecked my fitness consistency (yes, I still work out a ton, but not so much with other people, and bad hotel gyms and strange roads aren’t great for the program). It killed my participation in mountain rescue, although for a couple years it did let me ski patrol in Colorado while I lived in Phoenix. That didn’t suck. Mostly it hurt my relationships with my “old” friends, because I just wasn’t around much. These are folks I basically grew up with, as we congregated (mostly) in Boulder when we started college and learned to rely on each other as surrogate family, complete with Crazy Uncle Wade at the head of the Thanksgiving table. (Wade is now in the Marshall Islands, after working as an electrician in Antarctica.)

On the other hand, I now have a social group that’s scattered across the country and the world. I see some of these people more than my local friends here in Phoenix, and we’re often on expense accounts without a curfew. I was sick last week at Black Hat and DefCon, but managed to spend a little quality time with folks like Chris Hoff, Alex Hutton, Martin and Zach from the Podcast, two good friends from Gartner days, Jeremiah, Ryan, Mike A., and the rest of the BJJ crew, and even some of these people’s spouses. Plus so many more that going to DefCon (in particular) now feels more like a week of summer camp than a work conference. With beer. And parties in the biggest clubs in Vegas (open bar). And… well, we’re not 13 anymore.

What’s amazing and awesome is that almost none of us work together, and most of us don’t live anywhere near each other. And it isn’t unusual to roll into some random city (for a client gig, not even a conference) and find out someone else is also in town. We live strange lives as digital nomads who combine social media and frequent flyer miles to create a personal network that’s different from seeing the same faces every weekend at the Rio (Boulder thing), but likely as strong. I don’t think this could exist without both the technical and physical components. I still miss the consistency of life with a low-travel job. But in exchange I have the kinds of adventures other people write books about, and get to share them with a group of people I consider close friends, even if I can’t invite them over for a BBQ without enough time to get through their personal gropings at the airport.

-Rich

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted on tokenization.

Favorite Securosis Posts

  • Mike Rothman: NoSQL and No Security. Nothing like poking a big security hole in an over-hyped market. Who needs DB security anyway?
  • Adrian Lane: Data Security Lifecycle 2.0: Functions, Actors, and Controls. Why? Because the standard data security lifecycle fails when applied to cloud services – you need to take location and access into account. Our goal is to make the model simple to use, so please give us your feedback.
  • David Mortman: Use THEIR data to tell YOUR story.
  • Rich: Words matter: You stop attacks, not breaches. I know, I know, we should stop thinking marketing will ever change. But everyone has their windmill.

Other Securosis Posts

  • Say Hello to Chip and Pin.
  • Incite 8/10/2011: Back to the Future.
  • Introducing the Data Security Lifecycle 2.0.
  • Data Security Lifecycle 2.0 and the Cloud: Locations and Access.
  • Fact-Based Network Security: Defining ‘Risk’.
  • Incite 8/3/2011: The Kids Are Our Future.
  • Words matter: You stop attacks, not breaches.
  • Cloud Security Training: August 16-18, Washington DC.
  • Security has always been a BigData problem.
  • New Blog Series: Fact-Based Network Security: Metrics and the Pursuit of Prioritization.

Favorite Outside Posts

  • Mike Rothman: Marcus Ranum: Dangerous Cyberwar Rhetoric. Ranum can pontificate with the best of them, but this perspective is dead on. Attribution is harder, and even more important, as the lines between “cyber” and physical war inevitably blur.
  • Adrian Lane: Comments about the $200,000 BlueHat prize. ErrataRob clarifies the security bounty program.
  • David Mortman: Metricon 6 Wrap-Up.
  • Chris Pepper: Badass of the Week: Abram A. Heller. Totally badass without being an ass.
  • Rich: Sunset of a Blog. Glenn is a good friend and one of the people who helped launch my writing career, especially on the Mac side (via TidBITS). This post shows the difference between a blogger and a writer.

Research Reports and Presentations

  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts

  • Microsoft Security Program & Vulnerability Data Now Available.
  • Did Airport Scanners Give Boston TSA Agents Cancer? TSA says that’s BS.
  • Survey Finds Smartphone Apps Store Too Much Personal Data. What? No way!
  • 22 Reasons to Patch Your Windows PC via Krebs.
  • Cameron Threatens To Shut Down UK Social Networks.


Data Security Lifecycle 2.0: Functions, Actors, and Controls

In our last post we added location and access attributes to the Data Security Lifecycle. Now let’s start digging into the data flows and controls.

To review, so far we’ve completed our topographic map for data. It illustrates, at a high level, how data moves in and out of different environments, and to and from different devices. It doesn’t yet tell us which controls to use or where to place them. That’s where the next layer comes in, as we specify locations, actors (the ‘who’), and functions.

Functions

There are three things we can do with a given datum:

  • Access: View/access the data, including copying, file transfers, and other exchanges of information.
  • Process: Perform a transaction on the data: update it, use it in a business processing transaction, etc.
  • Store: Store the data (in a file, database, etc.).

The table below shows which functions map to which phases of the lifecycle. Each of these functions is performed in a location, by an actor (person).

Controls

Essentially, a control is what we use to restrict a list of possible actions down to allowed actions. For example, encryption can be used to restrict access to data, application controls to restrict processing via authorization, and DRM storage to prevent unauthorized copies/accesses. To determine the necessary controls, we first list all possible functions, locations, and actors, then decide which ones to allow. Finally we determine what controls (technical or process) we need to make that happen. Controls can be either preventative or detective (monitoring), but keep in mind that monitoring controls which don’t tie back into some sort of alerting or analysis merely provide an audit log, not a functional control.

This might be a little clearer for some of you as a table: list a function, the actor, and the location, then mark whether the combination is allowed. Any time you have a ‘no’ in the allowed column, you implement and document a control.

Tying It Together

In essence what we’ve produced is a high-level version of a data flow diagram (albeit not using standard programming taxonomy). We start by mapping the possible data flows, including devices and different physical and virtual locations, and identifying at which phases of its lifecycle data can move between those locations. Then, for each phase of the lifecycle in a location, we determine which functions, people/systems, and more granular locations for working with the data are possible. We then figure out which we want to restrict, and what controls we need to enforce those restrictions.

This looks complex, but keep in mind that you aren’t likely to do it for all data across an entire organization. For given data in a given application/implementation you’ll be working with a much more restrictive subset of possibilities. This clearly becomes more involved with bigger applications, but practically speaking you need to know where data flows, what’s possible, and what should be allowed, to design your security. In a future post we’ll show you an example, and down the road we also plan to produce a controls matrix which will show you where the different data security controls fit in.
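In the meantime, here is a minimal sketch of how the allowed/denied matrix works in code. The actors, locations, and allowed combinations below are hypothetical examples of ours, not part of the formal model.

```python
# A minimal sketch of the function/actor/location matrix described above.
# All actor and location names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    function: str   # access | process | store
    actor: str      # the person or system performing the function
    location: str   # the environment where it happens

# Everything not explicitly allowed is a 'no' in the allowed column,
# and therefore needs a documented preventative or detective control.
ALLOWED = {
    Action("access",  "claims_app_user", "internal_app"),
    Action("process", "claims_app_user", "internal_app"),
    Action("store",   "backup_service",  "private_cloud"),
}

def controls_needed(candidates):
    """Return the candidate actions that must be restricted by a control."""
    return [a for a in candidates if a not in ALLOWED]

candidates = [
    Action("access", "claims_app_user", "internal_app"),  # allowed
    Action("access", "contractor",      "public_cloud"),  # denied -> control
]
for action in controls_needed(candidates):
    print(f"Control required: {action.function} by {action.actor} in {action.location}")
```

The design point is that the deny list is derived rather than enumerated: anything outside the allowed set is exactly where a control belongs.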


Say Hello to Chip and Pin

No, it’s not a Penn & Teller rip-off act – it’s a new credit card format. On August 9th Visa announced that it will aggressively encourage merchants to switch over to Chip and Pin (CAP) ‘smart’ credit cards. Europay-Mastercard-Visa (EMV) developed a smart credit card format standard many years ago, and the technology was adopted by many other countries over the next decade. In the US adoption never really happened. That’s about to change, because Visa will give merchants a pass on PCI compliance if they adopt smart cards, or let them assume 100% of fraud liability if they don’t.

Why the new push? Because it helps Visa’s and Mastercard’s bottom lines. There are a couple specific reasons Visa wants this changeover, and security is not at the top of the list. The principal benefit is that CAP cards allow applications to be installed and run on the card. This opens up new revenue opportunities for card issuers as they bolster affinity programs and provide additional card functionality: card co-branding, recurring payments, coupons, discounted pricing from merchants, card-to-card gifting, and pre-paid transit tokens are all examples. Second, they feel that CAP opens up new markets and will engender broader use of the cards. The smart card industry in general is worried about losing market share to smart phones that can provide the same features as CAP-based smart cards. In fact we see payment applications of all types popping up, many of which are (now) sponsored by credit card companies to avoid market share erosion. Finally, the card companies want to issue a single card type, standardizing cards and systems across all markets.

Don’t get me wrong – security absolutely is a benefit of CAP. ‘Smart’ credit cards are much harder to forge, offering much better security for ‘card present’ transactions, because the point-of-sale terminal can electronically validate the card. And the card can encrypt data locally, making it much easier to support (true) end-to-end encryption so sensitive data is not exposed while processing payments. But most smart cards do not help secure Internet purchases or card-not-present transactions over the phone.

What scares me about this announcement is that Visa is willing to waive PCI DSS compliance for merchants that switch 75% or more of their transactions to CAP-based smart cards! Visa is offering this as an incentive for large merchants to make the change. The idea is that the savings on security, audit preparation, and remediation will offset the costs of the new hardware and software. Visa has not specified whether the waiver will be limited to the POS portion of the audit, or cover all parts of the security specification, but the press release suggests the former.

Merchants have resisted this change because the terminals are expensive! To support CAP you need to swap out terminals at a hefty per-terminal cost, upgrade supporting point-of-sale software, and alter some payment processing systems. Even small businesses – gas stations, fast food, grocery stores, etc. – would require sizable investment to support CAP. Pricing obviously varies, but tends to run about $1,000 to $1,600 per terminal. Small merchants who are not subject to external audits will not benefit from the audit waiver that can save larger merchants so much, so they are expected to keep dragging their feet on adoption.

One last nugget for thought: if EMV can enforce end-to-end encryption, from terminal to payment processor, will they eventually disallow merchants from seeing any card or payment data? Will Visa fundamentally disrupt the existing card application space?


Incite 8/10/2011: Back to the Future

Getting old just sucks. OK, I’m not really old, but I feel that way. I think I’m suffering from the fundamental problem Rich described a few weeks ago. I think I’m 20, so I do these intense exercise programs and athletic pursuits. Lo and behold, I get hurt.

First it was the knees. My right knee has bothered me for years, and I was looking at the end of my COBRA health benefits from my last employer, so I figured I’d get it checked out before my crap insurance kicked in (don’t get me started on health insurance). Sure enough there was some inflammation – technically they called it patellar tendinitis. My trusty doctor prescribed some anti-inflammatories and suggested a few weeks of physical therapy to strengthen my quads and alleviate the issue. But that took me out of my pretty intense routines for a few months. That wouldn’t normally be a huge problem, but softball season was starting, and I had lost some of my fitness. OK, a lot of my fitness. I was probably still ahead of most of the old dudes I play with, but all the same – when you aren’t in shape, you are more open to injury.

You know how this ends. I plant the wrong way crossing home and feel the jolt through my entire body. My middle back and shoulder tighten up right away. It wouldn’t be the first time I’ve pinched a nerve, so I figure within a day or two with some good stretching it’ll be fine. I take a trip and three days later try some yoga. Yeah, that didn’t work out too well. I made it through 10 minutes of that workout before saying No Mas. Since when did I become Roberto Duran? Oh crap, this may be a bit more serious than I figured. It probably didn’t help that the next day we loaded up the family truckster and drove 7 hours to see the girls at camp. When I woke up the next day I could hardly move. I’m not one to complain, but I was pretty well immobile.

Once we got to Maryland, I got a deep tissue massage. No change. My doctor called in some relaxants. I tried to persevere. No dice. I flew home to see my doc, who thought there was a disc problem. An MRI would confirm. And confirm it did: I have a degenerative disc (C5-6 for the orthopedists out there). It took about two weeks, but it finally settled down. I’m going to try to rehab it with more PT, more stretching, and less impact. I don’t want to do shots. I definitely don’t want to do surgery. So I’ve got to adapt. P90X may not be the best idea – not 6 days a week, anyway. I can build up a good sweat doing yoga, and maybe I’ll even buy that bike the Boss has been pushing for. Or perhaps take a walk. How novel!

I’m not going to settle for a sedentary existence. I like beer and food too much for that to end well. But I don’t need to kill myself either. So I’m searching for the middle ground. I know, for most of my life I didn’t even know there was a thing called middle ground. But as I get older I need to find it. Because life is a marathon, not a sprint. I can’t go back to the future in a broken down DeLorean, now can I?

-Mike

Photo credits: “Lateral X-Ray of Neck Showing Flexion | Donald Corenman, MD | Spine Surgery Colorado” originally uploaded by neckandback

Incite 4 U

Long live speeds and feeds: Coming from a networking background, I have a serious disdain for vendors differentiating based on speed. Or how many signatures something ships with. Or any other aspect of a device with little bearing on real-world performance. After the 40Gbps IPS rhetoric died down a few years ago, I hoped we were past the “my tool is bigger than yours” marketing. Yeah, not so much. Our pals at Check Point dive back into the speeds/feeds muck with their new big box, and NetworkWorld needs a visit from the clue bat for buying into the 1Tbps firewall. Check Point did map out a path to 1Tbps, but it will take them 4 years to get there. But hey, a 1Tbps firewall generates some page views. By the way, there are maybe a handful of customers who even need 100Gbps of perimeter throughput. But long live the speeds and feeds! – MR

I guess the parents were also in the room: When I worked for the State of Colorado, my first real IT job was as a systems and network administrator at the University of Colorado in Boulder. I had a wacky boss who wasn’t the most stable of individuals. When I got my Classified Staff status he informed me that I no longer had to worry about being fired. I quote: “even if you have sex with a student on a desk in front of the class, they’ll just suspend you with pay”. (When he finally went off the deep end it took them years of demotions to finally get him to quit.) I’ve always thought of PCI QSA (assessment) companies that way. It has always seemed that no matter what they did, there weren’t any consequences. I wouldn’t say that’s changing, but a company called Chief Security Officers is the first to have its QSA status revoked. No one is saying why, but I suspect less than satisfactory performance, with consistency. – RM

CSA Vendor Guide: CSA is offering a Security Registry for cloud providers, to help customers compare cloud security offerings across vendors. The CSA has a questionnaire for each provider – basically an RFP/RFI – and will publish the results for customers. The good news is that this will provide some of the high-level information that is really hard to find – or entirely absent


NoSQL and No Security

Of all the presentations at Black Hat USA 2011, I found Bryan Sullivan’s “Server-Side JavaScript Injection: Attacking NoSQL and Node.js” the most startling. While I was aware of the poor security of most NoSQL database installations – especially their lack of support for authorization and authentication – I was not aware of their susceptibility to injection of both commands and code. Apparently Mongo and many of the NoSQL databases are little more than JavaScript processing engines, without the burden of authentication. Most of these products are subject to several classes of attack, including injection, XSS, and CSRF. Sullivan demonstrated blind NoSQL injection scripts that can both discover database contents and run arbitrary commands. He cataloged an entire Mongo database with a couple lines of PHP.

Node.js is a commonly used platform for lightweight web servers – it’s simple to deploy. It’s also insecure as hell! Node and NoSQL are essentially new JavaScript-based platforms – with both server and client functionality – which makes them susceptible to both client-side and server-side attacks. These attacks are very similar to the classic browser, web server, and relational database attacks we have observed over the past decade. When you mix in facilities like JSON (to objectify data elements) you get a bunch of methods which give attackers an easy way to inject code onto the server. Sullivan demonstrated the ability to inject persistent changes to the Node server, writing an executable to the file system using Node.js calls and then running it.

But it got worse from there: JSONP – essentially JSON with padding – is intended to provide cross-origin resource sharing. Yes, it’s a tool to bypass the same-origin policy. By wrapping query results in a callback, you can take action based upon the result set without end user participation. Third-party code can make requests and process the results – easily hijacking the session – without the user being aware of what’s going on.

These are exactly the same vulnerabilities we saw with browsers, web servers, and database servers 10 years ago. Only the syntax is different. What’s worrisome is the rapid adoption rate of these platforms – cheap, fast, and easy is always attractive for developers looking to get their applications running quickly. But it’s clear the platforms are not ready for production applications – due to their complete lack of security controls, they should be reserved for proofs of concept. I’ll update this post with a link when the slide deck is posted. It’s worth your time to review just how easy these compromises are, and Sullivan also provides a few hints at the end of the presentation on how to protect yourself.
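To make the injection pattern concrete, here is a minimal sketch in Python against MongoDB’s $where operator, which evaluates server-side JavaScript. The collection and field names are hypothetical, and this illustrates the general technique rather than reproducing Sullivan’s demo.

```python
# Illustrative only: string-concatenated server-side JavaScript in a
# MongoDB $where clause is an injection point. Names are hypothetical.

from pymongo import MongoClient

db = MongoClient()["shop"]

def find_user_vulnerable(username: str):
    # DANGEROUS: user input is spliced into JavaScript executed on the
    # server. Input like  x' || '1'=='1  makes the predicate always true,
    # dumping every document - the NoSQL analogue of classic SQL injection.
    js = "this.username == '" + username + "'"
    return list(db.users.find({"$where": js}))

def find_user_safer(username: str):
    # Safer: a plain equality query, with no server-side JavaScript at all.
    return list(db.users.find({"username": username}))
```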


Introducing the Data Security Lifecycle 2.0

Four years ago I wrote the initial Data Security Lifecycle and a series of posts covering its constituent technologies. In 2009 I updated it to better fit cloud computing, and it was incorporated into the Cloud Security Alliance Guidance, but I was never happy with that work. It was rushed, and didn’t address cloud specifics nearly sufficiently. Adrian and I just spent a bunch of time updating the cycle, and it is now a much better representation of the real world. Keep in mind that this is a high-level model to help guide your decisions, but we think this time around we were able to identify places where it can more specifically guide your data security endeavors.

(As a side note, you might notice I use “data security” and “information-centric security” interchangeably. I think infocentric is more accurate, but data security is more recognized, so that’s what I tend to use.)

If you are familiar with the previous model you will immediately notice that this one is much more complex. We hope it’s also much more useful. The old model really only listed controls for data in different phases of the lifecycle – it didn’t account for location, ownership, access methods, and other factors. This update should better reflect the more complex environments and use cases we tend to see these days.

Due to its complexity, we need to break the new Lifecycle into a series of posts. In this first post we revisit the basic lifecycle, and in the next post we will add locations and access. The lifecycle includes six phases, from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed).

  • Create: This is probably better named Create/Update, because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
  • Store: Storing is the act of committing digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
  • Use: Data is viewed, processed, or otherwise used in some sort of activity.
  • Share: Data is exchanged between users, customers, and partners.
  • Archive: Data leaves active use and enters long-term storage.
  • Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).

These high-level activities describe the major phases of a datum’s life, and in a future post we will cover security controls for each phase. But before we discuss controls we need to incorporate two additional aspects: locations and access devices.
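As a rough illustration (ours, not part of the model itself), the six phases are simple to represent in code; the only constraint we assume beyond the text above is that destruction is terminal.

```python
# A minimal sketch of the six phases. Per the model, data can bounce
# between phases without restriction and need not pass through all of
# them; the one assumption we add is that Destroy has no next phase.

from enum import Enum

class Phase(Enum):
    CREATE = "create"    # generate new content, or alter/update existing
    STORE = "store"      # commit to a storage repository
    USE = "use"          # view, process, or otherwise employ the data
    SHARE = "share"      # exchange with users, customers, and partners
    ARCHIVE = "archive"  # leave active use for long-term storage
    DESTROY = "destroy"  # permanent physical or digital destruction

def can_transition(current: Phase, nxt: Phase) -> bool:
    # No ordering constraint, except that destroyed data is gone.
    return current is not Phase.DESTROY

print(can_transition(Phase.ARCHIVE, Phase.USE))  # True: archives get restored
```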


Data Security Lifecycle 2.0 and the Cloud: Locations and Access

In our last post we reviewed the Data Security Lifecycle, but other than some minor wording changes (and a prettier graphic, thanks to PowerPoint SmartArt) it was the same as our four-year-old original version. As we mentioned, though, quite a bit has changed since then, exemplified by the emergence and adoption of cloud computing and increased mobility. Although the Lifecycle still applies to basic, traditional infrastructure, we will focus on these more complex use cases, which better reflect what most of you are dealing with on a day-to-day basis.

Locations

One gap in the original Lifecycle was that it failed to adequately address movement of data between repositories, environments, and organizations. A large amount of enterprise data now transitions between a variety of storage locations, applications, and operating environments. Even data created in a locked-down application may find itself backed up someplace else, replicated to alternate standby environments, or exported for processing by other applications. And all of this can happen at any phase of the Lifecycle.

We can illustrate this by thinking of the Lifecycle not as a single linear operation, but as a series of smaller lifecycles running in different operating environments. At nearly any phase, data can move into, out of, and between these environments – the key for data security is identifying these movements and applying the right controls at the right security boundaries. As with cloud deployment models, these locations may be internal, external, public, private, hybrid, and so on. Some may be cloud providers, others traditional outsourcers, or perhaps multiple locations within a single data center.

For data security, at this point there are four things to understand:

  • Where are the potential locations for my data?
  • What are the lifecycles and controls in each of those locations?
  • Where in each lifecycle can data move between locations?
  • How does data move between locations (via what channel)?

Access

Now that we know where our data lives and how it moves, we need to know who is accessing it and how. There are two factors here:

  • Who accesses the data?
  • How can they access it (device & channel)?

Data today is accessed from all sorts of different devices. The days of employees only accessing data through restrictive applications on locked-down desktops are quickly coming to an end (with a few exceptions). These devices have different security characteristics and may use different applications – especially the applications we’ve moved to SaaS providers, who often build custom applications for mobile devices that offer different functionality than PCs.

Later in the model we will deal with the who, but this can get complex quickly: a variety of data locations (and application environments), each with its own data lifecycle, all accessed by a variety of devices in different locations. Some data lives entirely within a single location, while other data moves in and out of various locations… and sometimes directly between external providers.

This completes our “topographic map” of the Lifecycle. In our next post we will dig into mapping data flow and controls, finish covering the background material, and then show you how to use all of this to pragmatically evaluate and design security controls.
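One way to make those questions actionable is to capture the answers in a simple inventory. Here is a minimal sketch of that idea; the locations, channels, and actors below are entirely hypothetical examples of ours.

```python
# A minimal sketch of the "topographic map": each location runs its own
# mini-lifecycle, and every movement between locations is a security
# boundary worth examining. All names here are hypothetical.

DATA_MAP = {
    "locations": {
        "internal_dc":  ["create", "store", "use", "archive"],
        "saas_crm":     ["create", "store", "use", "share"],
        "backup_cloud": ["store", "archive", "destroy"],
    },
    # Where in the lifecycle data moves between locations, and via what channel.
    "movements": [
        {"from": "internal_dc", "to": "backup_cloud",
         "phase": "store", "channel": "nightly_backup"},
        {"from": "internal_dc", "to": "saas_crm",
         "phase": "share", "channel": "https_api"},
    ],
    # Who accesses the data, from which device and channel.
    "access": [
        {"actor": "sales_rep", "location": "saas_crm",
         "device": "smartphone", "channel": "vendor_mobile_app"},
    ],
}

# Each movement crosses a boundary where a control may be needed.
for move in DATA_MAP["movements"]:
    print(f"Boundary: {move['from']} -> {move['to']} "
          f"during '{move['phase']}' via {move['channel']}")
```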


Use THEIR data to tell YOUR story

I’m in the air (literally) on the way to Metricon 6, so I’m thinking a lot about metrics, quantification, and the like. Of course most of the discussion at Metricon will focus on how practitioners can build metrics programs to make their security programs more efficient, maybe more effective, and certainly more substantiated (with data, as opposed to faith). Justifiably so – to mature the practice of security we need to quantify it better. But I can’t pass up the opportunity to poke a bit at the type of quantification that comes from the vendor community: surveys and analyses that always end up building a business case for security products and services.

The latest masterpiece from the king of vendor-sponsored quantification, Larry Ponemon, is the 2nd annual cost of cyber-crime survey – sponsored by HP/ArcSight. To be clear, I’m not picking (too much) on Dr. Larry, but I want to put the data he presents in the report (PDF) in the proper context, and talk briefly about how a typical end user should use reports like this.

First of all, Ponemon interviewed 50 end users to derive his data. It’s been a long time since I’ve done the math to determine statistical significance, but I can’t imagine a sample size of 50 qualifies. When you look at some of the results, his findings are all over the map. The high-level sound bites include a median annualized cost of $5.9 million from “cyber crime,” whatever that means. The range of annualized losses goes from $1.5 to $36.5 million. That’s a pretty wide range, eh? His numbers are up fairly dramatically from last year, which plays into the story that things are bad and getting worse. Unsurprisingly, that’s good for generating FUD (Fear, Uncertainty, and Doubt). And that’s what we need to keep in mind about these surveys: being right is less important than telling a good story, but we’ll get to that. Let’s contrast that against Verizon Business’s 2011 DBIR, which used 761 data points from their own work, data from the US Secret Service, and additional data from Dutch law enforcement as a supplement. 761 vs. 50. I’m no mathematician, but which data set sounds more robust and representative of the overall population to you?

Even better is one of Larry’s other findings, which I include in its entirety because it must be seen to be believed:

“The most costly cyber crimes are those caused by malicious code, denial of service, stolen or hijacked devices and malicious insiders. These account for more than 90 percent of all cyber crime costs per organization on an annual basis. Mitigation of such attacks requires enabling technologies such as SIEM and enterprise GRC solutions.”

Really? Mitigation of malicious code attacks requires SIEM and GRC? Maybe I’m splitting hairs here, but this kind of absolute statement makes me nuts. The words matter. I understand the game: Ponemon needs to create some urgency for ArcSight’s prospects to justify the report, so he throws a little love at SIEM and GRC. Rock on. Yeah, the cynic is in the house. This statement is then justified by data showing that surveyed customers using SIEM lost on average 25% less than those without SIEM. Those folks with SIEM were able to detect faster and contain more effectively. Which is true in my experience – but only if the company makes a significant and ongoing investment. Right – to the tune of millions of dollars. I wonder if any of those 50 companies had, let’s say, a failed SIEM implementation? Were they counted in the SIEM bucket?
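For a rough sense of why n=50 is shaky here, consider a back-of-the-envelope illustration. The numbers below are synthetic, chosen only to roughly match the reported $1.5 to $36.5 million range; none of this data comes from the actual survey.

```python
# Back-of-the-envelope only: with skewed losses spanning roughly the
# reported range, the uncertainty around a 50-company mean stays wide.
# All values are made up for illustration.

import math
import random
import statistics

random.seed(1)
losses = [random.lognormvariate(1.5, 1.0) for _ in range(50)]  # in $M

mean = statistics.mean(losses)
sem = statistics.stdev(losses) / math.sqrt(len(losses))
print(f"mean ~ ${mean:.1f}M, 95% CI ~ +/- ${1.96 * sem:.1f}M")
```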
Again, let’s not confuse correctness of the data with the story you need to tell to do your job. That’s the value of these reports. They provide data that is not your own, allowing you to tell a story internally. Lord knows our organizations want to see hard costs, showing real losses, to justify continued spending on security. This is the same message I deliver with our Data Breaches presentation. The data doesn’t matter – the story does.

A key skill for any management position is the ability to tell a story. In the security business, our stories must paint a picture of what can happen if the organization takes its eyes off the ball – if the money is spent elsewhere and the flanks are left unprotected. Understand that your VP of Sales is telling his/her story about how further investment in sales is important. VPs of Manufacturing tell stories about the need to upgrade equipment in the factories, and so on and so forth. So your story needs to be good. Not all of us are graced with a breach to create instant urgency for continued security investment – though if you believe Ponemon’s data, fewer and fewer escape unscathed each year. So you need to create your own story – preferably leveraging another organization’s pain rather than your own. In this case the empirical correctness of the data isn’t important. What matters is how the data allows you to make the points you need.


Fact-Based Network Security: Defining ‘Risk’

As we mentioned when introducing this series on fact-based network security, we increasingly need to use data to determine our priorities. This lets us focus on the activities with the greatest business impact. But that raises the question: how do you determine what’s important?

The place to start is with your organization’s assets. Truth be told, importance and beauty are both in the eye of the beholder, so this process challenges even the most clued-in security professionals. You will need to deal with subjectivity and the misery of building consensus (about what’s important), and the answer will continue to evolve given the dynamic nature of business. But you still need to do it. You can’t spend a bunch of time protecting devices no one cares about. And it’s always good to start conversations with a good idea of the answer, so we recommend you begin by defining relative asset value.

We have long held that estimating value as (purchase price + some number you make up – depreciation) is ridiculous. That hasn’t stopped many folks from doing it, but we’ll just say there isn’t a lot of precision in that approach, and leave it at that. So what to do? Let’s get back to the concept of relative value, which is the key. A reasonable approach is to categorize assets into a handful of buckets (think 3-4) by their importance to the business. For argument’s sake we’ll call them critical, important, and not so important. Then spend time looking through the assets and sorting them into those categories. You can use the quick and dirty method of defining relative value which I first proposed in the Pragmatic CSO: ask a few simple questions, of both yourself and business leadership, about the assets.

  • What does it cost us if this system goes down? This is the key question, and it’s very hard to get a precise answer, but try. Whether it’s lost revenue, brand impact, customer satisfaction, or whatever – push executives to really help you understand what happens to the business if that system is not available.
  • Who uses this system? This is linked to the first question, but can yield different and interesting perspectives. If five people in Accounting use the system, that’s one thing. If every employee on the shop floor does, that’s another. And if every customer you have uses the system, that’s something else entirely. A feel for the user community gives you an idea of the system’s criticality.
  • How easy are the assets to replace? Of course having a system fail is a bad thing, but how bad depends on replacement cost. If your CRM system goes down, you can go online to something like Salesforce.com and be up and running in an hour or two. Obviously that doesn’t include data migration, etc. But some systems are literally irreplaceable – or would require so much customization as to be effectively irreplaceable – and you need to know which are which.

Understand you will need to abstract assets into something bigger. Your business leadership doesn’t have an opinion about server #3254 in the data center. But if you discuss things like the order management system or the logistics system, they’ll be able to help you figure out (or at least confirm) the relative importance of assets. With answers to those questions, you should be able to dump each group of assets into an importance bucket.

The next step involves evaluating the ease of attacking these critical assets. We do this to understand the negative side of the equation – asset value to the business is the positive side. If an asset has few security controls, or resides in an area that is easy to get to (such as Internet-facing servers), the criticality of its issues increases. So when we prioritize efforts we can factor in not just the value to the business, but also the likelihood of something bad happening if we don’t address an issue. By the way, try to keep delusion out of this calculation. It’s no secret that some parts of your infrastructure receive a lot of attention and protection, and some don’t. Be brutally honest about that, because it will let you focus on brittle areas as needed. As with the asset side, focus on relative ease of attack and the associated threat models. You can use categories like Swiss cheese, home safe, bank vault, and Fort Knox. And yes, we are joking about the category names.

You should be left with a basic understanding of your ‘risk’. But don’t confuse this idea of risk with an economic quantification, which is how most organizations define risk. Instead this understanding provides an idea of where to find the biggest steaming pile of security FAIL. This is helpful as you weigh the inflow of events, alerts, and change requests in terms of their importance to your organization. And keep in mind that these mostly subjective assessments of value and ease of attack change – frequently. That’s why it’s so important to keep things simple. If you need to go back and revisit the priority list every time you install a new server, the list won’t be useful for more than a day. So keep it high-level, and plan to revisit these ratings every month or so.

At this point we need to start thinking about the operational metrics we can and should gather to guide operations, based on outcomes important to your business. That’s the subject of our next post.
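As a toy illustration of how the two ratings combine into a priority order, here is a minimal sketch. The asset names and the numeric weights are ours, purely hypothetical.

```python
# A minimal sketch of the relative prioritization described above: weight
# business value against ease of attack and rank. All values are made up.

ASSET_VALUE = {"critical": 3, "important": 2, "not so important": 1}
EASE_OF_ATTACK = {"swiss cheese": 4, "home safe": 3, "bank vault": 2, "fort knox": 1}

assets = [
    {"name": "order_management", "value": "critical",         "exposure": "swiss cheese"},
    {"name": "logistics",        "value": "important",        "exposure": "bank vault"},
    {"name": "intranet_wiki",    "value": "not so important", "exposure": "home safe"},
]

def priority(asset):
    # Higher score = more valuable to the business AND easier to attack.
    return ASSET_VALUE[asset["value"]] * EASE_OF_ATTACK[asset["exposure"]]

for asset in sorted(assets, key=priority, reverse=True):
    print(f"{asset['name']}: priority {priority(asset)}")
```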


Incite 8/3/2011: The Kids Are Our Future

The Boss and I have been getting into Falling Skies lately. Yeah, it’s another sci-fi show with aliens trying to take down the human race and loot our planet for its resources. They’d better hurry up, since there may not be much left when the real aliens show up, but that’s another story. In the last episode we saw, the main guy (Noah Wyle of ER) made the point that our kids are our future, and we need to keep them safe. That thought resonates with me, and thankfully I’m not dealing with aliens trying to make my kids into drugged-out slaves. But we are dealing with a lot of bad stuff that can happen online.

The severity of the issue became very apparent to me in the spring, when XX1 made a comment about playing some online card game. My spidey sense started tingling, and I went into full interrogation mode. Turns out she clicked on an ad on one of her approved websites, which took her to some kind of card game. So she clicked on that and started playing. And so on, and so on. Instantly I checked out the machine. Thankfully it’s a Mac and she can’t install software. I did a full cleaning of the stuff that could be problematic, and then had to have that talk about why it’s bad to click ads on the Internet. We then talked a bit about Google searches, checking out images, and the like. But in reality, I didn’t have much clue where to start or what to teach her. So I asked a few friends what they’ve done to prepare their kids for the online world. Yep – I got the same quizzical stare I saw in the mirror.

That’s why I’m getting involved in the HacKid conference. Chris Hoff (yes, @beaker himself) started the conference in Boston last October, and there will be conferences in San Jose (Sept 17/18) and Atlanta (Oct 1/2) this year. HacKid is not just about security, by the way. It’s about getting kids (ages 5-17) excited about technology, with lots of intro material on things like programming, robotics, soldering, and a bunch of other stuff. Truth be told, orchestrating HacKid is a huge amount of work. Thankfully we’ve got a great board of advisors in ATL to help out, and I know it will be time well spent. I’m confident all the kids will gain some appreciation for technology beyond the latest game to play on the iPad. I also have no doubt they’ll learn about how to protect themselves online, which is near and dear to my heart. But most of all, I can’t wait to see that look of wonder. You know, the one when you think you’ve just seen the coolest, most amazing thing in the world. Hoff said there was a lot of that look in Boston, and I can’t wait to see it in Atlanta.

Remember, the kids are our future, and this is a great place to start teaching them about the role technology will play in it. Registration is open for the Atlanta conference, so check it out, bring your kids, get involved, and reap the benefits. See you there.

-Mike

Photo credits: “Play, kids, learn, Mill Park Library, Yarra Plenty Library service” originally uploaded by Kathyrn Greenhill

Incite 4 U

Shopping list next: I can imagine it now. I’ll get the grocery list via text from the Boss, and then the follow-up: “Don’t forget the DDoS, that neighbor is pissing me off again.” According to Krebs, it’s getting easier to buy all sorts of cyber attacks – even down to a kit to build your own bot army. Can you imagine the horse trading that will happen on the playground with our kids? It’ll be like real-life Risk, with the kids trading 10,000 bots in India for 300 credit card numbers. Law enforcement seems to be getting better at finding and stopping these perps, but it’s still amazing how rapidly the cybercrime ecosystem evolves. – MR

Don’t call it a comeback. Call it Back to FUD: Stuxnet is making a comeback? Seriously, Mr. McGurk? Does this mean we need to disconnect our uranium centrifuges from that Windows 98 machine I use to fuel my personal reactor? So if you see me at Black Hat, don’t hesitate to tell me I’m glowing. Does this mean we patch our OS and update our AV signatures? Or are you predicting 4 new 0-days we need to prepare for? Does it mean pissed-off US government employees (er, foreign governments) are going to attack the US infrastructure? Or are you asking for all public infrastructure to be rearchitected and redeployed? Oh, wait, it’s budget time – we need to get our FUD on. – AL

Do they offer gardening in the big house? Looks like the good guys bagged one of Anonymous/LulzSec’s top dogs, Topiary. This 18-year-old plant was hiding out in his folks’ basement in rural Scotland. Of course the spin unit of Anonymous has jumped into gear and is talking about the inability to arrest an idea. That’s true, but a few more high-profile arrests (and they are coming) and we’ll see how willing these cloistered kids are to give up their freedom. Rich tweeted what a lot of us think: these are a bunch of angry kids, who probably got bullied in school and are now turning the tables. But they barked up the wrong tree by antagonizing governments and law enforcement. We’ll see how well they do in jail, where the bullies are much different. – MR

Who’s afraid of the big, bad (cloud security) wolf?: Vivek Kundra is saying that cloud security fears are overblown and that the US government is not afraid of public cloud infrastructure. From our research I believe both statements are absolutely correct! Cloud infrastructure is neither more nor less secure than traditional IT infrastructure – it all depends upon how you deploy,


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, it is in our view the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be unbiased and valuable to the user community, to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.