Incite 8/17/2011: Back to School

What would you do if you could go back to school? Seriously. If you could turn back the clock and go back to grade school or even high school? No real responsibility. No one depending on you for food and/or shelter. Gosh, I’d do so many things differently. I’d buy a few shares of Microsoft when they went public (and I’d also send a note to my 1999 self to sell it). Ah, the magic of hindsight. What I wouldn’t do is bitch about it.

It’s funny that my kids were actually excited to go back to school. We figured they’d be bitching a lot more, especially given how much fun they have over the summer. Thankfully, they aren’t at the stage where they dread the end of summer vacation and the return to the structure and routine of the school year. The Boss is clearly doing something right, because the girls jumped right in. The Boy, not so much. Not because he doesn’t like school, but because time he’s working is time he’s not outside playing ball with his buddies.

The biggest thing we try to get across every year is the importance of a strong work ethic. Unless there is an activity right after school, the kids grab a snack and jump right into their homework, which must be done, to The Boss’s satisfaction, before they can do anything else. We’re constantly harping on the fact that hard work can overcome a lot of mistakes and issues. It’s okay to get something wrong and to make mistakes, but it’s not okay not to give it proper effort.

The most gratifying thing about it all? Seeing one of the kids “get it.” Last year XX1 spent countless hours preparing for a big test, and she aced it. She saw the direct correlation between hard work and positive results.

Rich and I were joking the other day that we both did the bare minimum for as long as we could throughout public school. We got by on our charming personalities. Okay, maybe not… All the same, if we had applied our current work ethic to our school endeavors? Who knows what we’d have accomplished. But we would also have missed out on a number of great parties and saved some liver damage. Okay, a lot of liver damage. Oh yeah, the balance discussion. That’s one secret we won’t share until the kids graduate from college. So don’t ruin it for us, okay?

-Mike

Note: Yes, I’m kidding. All work and no play is not the way to go through childhood.

Photo credits: “Back to School Bong Sale” originally uploaded by designwallah

Incite 4 U

Fixing is the hard part: I’m kind of surprised at the tepid response to Microsoft’s $250k prize for advancing exploit mitigation. Imagine that: folks get paid a bit for finding a bug and being able to exploit it, but can now get paid a lot for actually fixing the issue. I think this is great, and we should all applaud Microsoft. First for finally understanding that for the price of one engineer (fully loaded), they could put in place a meaningful economic incentive for a researcher. But also for starting to drive toward a culture of fixing things instead of just breaking them. Stormy did a great job of making that case as well. – MR

And you thought your network was tough…: We often call the DefCon network “The World’s Most Hostile Network” because you can assume at least a few hundred – possibly thousands – of hackers are on it, testing their latest software toys. What not everyone knows is that there are actually multiple networks at DefCon, some of which are probably reasonably secure, but that isn’t what I’m going to talk about today.
Ryan Barnett over at Tactical Web Application Security wrote a great post on what web apps can learn from casino surveillance. I’m a huge fan of monitoring at all levels, and when it comes to web apps we definitely aren’t doing enough (in most cases). Ryan’s post does a good job of keying in on the main difference between apps and networks (spoiler: it has to do with who is allowed in). As a side note, back in my Gartner days Ray Wagner (still there) and I were proponents of using slot machine security standards for voting machines. But it seems the price of democracy won’t cover the same security used for nickel slots. Then again, the payout of the voting machines usually isn’t 97% either. – RM

DAM market maturing: The Database Activity Monitoring market continues to see activity, with GreenSQL receiving another $2.2 million in venture funding from Atlantic Capital partners. Like children, most startups are not very interesting until they are a couple years old. Companies need to mature both product functionality and vision. GreenSQL is reaching that point: their first product was an open source reverse proxy for SQL statements. Now they offer core SQL statement blocking like other DAM vendors, plus a performance boost through a database caching service. Like the rest of the DAM players, they are morphing into something else – adding masking, usage profiles, and application-specific rule sets, and integrating a number of previously separate functions into a more unified offering. Yet another sign of an increasingly mature market. DAM(n) funny how that happens. With Imperva slated for IPO and lots of interest in basic monitoring capabilities, expect continued M&A activity. I expect we’ll need to think about DAM in a larger database security context by this time next year. – AL

A different kind of hacking: Most of us were taught that two wrongs don’t make a right. The consistent attacks on law enforcement do nothing but endanger folks who make significant sacrifices. Our own Adrian provided some context about the situation in Arizona for this story about the continued posting of personal information about law


Proxies and the Cloud (Public and Private)

Recently I had a conversation with a security vendor offering a proxy-based solution for a particular problem (yes, I’m being deliberately obscure). Their technology is interesting, but fundamental changes in how we consume IT resources challenge the very idea that a proxy can effectively address this problem.

The two most disruptive trends in information technology today are mobility and the cloud. With mobility we gain (and demand) anywhere access as the norm, redistributing access across varied devices. At the same time, cloud computing redefines both the data center and the architectures within data centers. Even a private internal cloud dramatically changes the delivery of IT resources. So both delivery and consumption models change simultaneously and dramatically – both distributing and consolidating resources.

What does this have to do with proxies? Generally they have been a great solution to a tough problem. It’s a royal pain to distribute security controls across all endpoints, for both performance and management reasons. For example, there is no DLP or URL filtering solution on the market that can fully enforce the same sorts of rules on an endpoint as on a server. Fortunately for us, our traditional IT architectures naturally created chokepoints. Even mobile users needed to pipe back into the core for normal business/access reasons – quite aside from security. But we’ve all seen this eroding over time. That erosion now reminds me of those massive calving glaciers that sank the Titanic – not the slow movers that created all those lovely fjords. From the networking issues inherent to private clouds, to users accessing SaaS resources directly without going through an enterprise gateway, the proxy model is facing challenges. In some cloud deployments you can’t use proxies at all.

There are many things I still like proxies for, but here are some rough rules I use to figure out when they make sense:

• If you have a bunch of access devices in a bunch of locations, you either need to switch to an agent or reroute everything to the proxy (not always easy to do).
• Proxies don’t need to be in your core network – they can be in the cloud (like our VPN server, which we use for browsing on public WiFi). This means putting more trust in your cloud provider, depending on what you are doing.
• Proxies in private cloud and virtualization (e.g., encryption or network traffic analysis) need to account for (potentially) mobile virtual machines within the environment. This requires carefully architecting both physical and virtual networks, and considering how to define provisioning rules for the cloud.
• With a private cloud, unless you move to agents, you’ll need to build inline virtual proxies, bounce traffic out of the cloud, or find a hypervisor-level proxy (not many today – more coming). Performance varies.

But the reality is that the more we adopt the cloud, the fewer fixed chokepoints we’ll have, and the more we will have to evolve our definition of ‘proxy’ away from its current meaning.


Hammers and Homomorphic Encryption

Researchers at Microsoft are presenting a prototype system that performs computation on encrypted data without decrypting it. Called homomorphic encryption, the idea is to keep data in a protected state (encrypted) yet still useful. It may sound like Star Trek technobabble, but this is a real working prototype. The set of operations you can perform on encrypted data is limited to a few things like addition and multiplication, but most analytics systems are limited as well. If this works, it would offer a new way to approach data security for publicly available systems.

The research team is looking for a way to reduce encryption operations, as they are computationally expensive – encryption and decryption demand a lot of processing cycles. Performing calculations and updates on large data sets becomes very expensive, as you must decrypt the data set, find the data you are interested in, make your changes, and then re-encrypt the altered items. The ultimate performance impact varies with the storage system and method of encryption, but overhead and latency might typically range from 2x to 10x compared to unencrypted operations. It would be a major advance if they could dispense with the encryption and decryption operations while still enabling reporting on secured data sets.

The promise of homomorphic encryption is predictable alteration without decryption. The possibility of being able to modify data without sacrificing security is compelling. Running basic operations on encrypted data might remove the threat of exposing data in the event of a system breach or user carelessness. And given that every company even thinking about cloud adoption is looking at data encryption and key management deployment options, there is plenty of interest in this type of encryption. But like a lot of theoretical lab work, practicality has an ugly way of pouring water on our security dreams. There are three very real problems for homomorphic encryption and computation systems:

• Data integrity: Homomorphic encryption does not protect data from alteration. If I can add, multiply, or change a data entry without access to the owner’s key, that becomes an avenue for an attacker to corrupt the database. Alteration of pricing tables, user attributes, stock prices, or other information stored in a database is just as damaging as leaking information. An attacker might not know what the original data values were, but that’s not enough to provide security.

• Data confidentiality: Homomorphic encryption can leak information. If I can add two values together and come up with a consistent value, it’s possible to reverse engineer the values. The beauty of encryption is that when you make a very minor change to the plaintext – the data you are encrypting – you get radically different output. With CBC variants of encryption, the same plaintext even produces different encrypted values each time. The question with homomorphic encryption is whether it can be used while still maintaining confidentiality – it might well leak data to determined attackers.

• Performance: Performance is poor and will likely remain no better than classical encryption. As homomorphic performance improves, so do more common forms of encryption. This is important when considering the cloud as a motivator for this technology, as acknowledged by the researchers. Many firms are looking to “The Cloud” not just for elastic pay-as-you-go services, but also as a cost-effective tool for handling very large databases.
As databases grow, the performance impact grows super-linearly – layering on a security tool with poor performance is a non-starter.

Not to be a total buzzkill, but I want to point out that there are practical alternatives that work today. For example, data masking obfuscates data but allows computational analytics. Masking can be done in such a way as to retain aggregate values while masking individual data elements. Masking – like encryption – can be poorly implemented, enabling the original data to be reverse engineered. But good masking implementations keep data secure, perform well, and facilitate reporting and analytics.

Also consider the value of private clouds on public infrastructure. In one of the many possible deployment models, data is locked into the cloud as a black box, and only approved programmatic elements ever touch the data – not users. You import data and run reports, but do not allow direct access to the data. As long as you protect the management and programmatic interfaces, the data remains secure. There is no reason to look for isolinear plasma converters or quantum flux capacitors when a hammer and some duct tape will do.
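To see the property in action, here is a toy sketch using textbook RSA, which happens to be multiplicatively homomorphic – multiply two ciphertexts and you get a ciphertext of the product. To be clear, this is not the Microsoft scheme, just an illustration of computing on data you never decrypt mid-stream; the tiny key and lack of padding make it insecure by design.

```python
# Toy demonstration of a homomorphic property using textbook RSA,
# which is multiplicatively homomorphic: E(a) * E(b) mod n == E(a * b).
# Tiny key, no padding -- for illustration only, not secure.

p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
c = (encrypt(a) * encrypt(b)) % n   # multiply ciphertexts only
assert decrypt(c) == a * b          # decrypts to 42 -- nothing was decrypted in between

# Note the two problems called out above: anyone can perform this
# multiplication WITHOUT the key (integrity), and textbook RSA is
# deterministic, so equal plaintexts yield equal ciphertexts (confidentiality).
```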


Friday Summary: August 12, 2011

Believe it or not, I’m not the biggest fan of travel. Oh, I used to be, maybe 10+ years ago when I was just starting to travel as part of my career. Being in your 20s and getting paid to literally circle the globe isn’t all bad… especially when you’re single. But the truth is I got tired of travel long before I started a family. Traveling every now and then is a wonderful experience that can change the lens through which you view the world. Hitting the airport once or twice a month, on the other hand, does little more than disrupt your life (and I know plenty of people who travel even more than that).

I miss being on a routine, and I really miss the strong local social bonds I used to have. Travel killed my chances of moving up to my next Black Belt. It wrecked my fitness consistency (yes, I still work out a ton, but not so much with other people, and bad hotel gyms and strange roads aren’t great for the program). It killed my participation in mountain rescue, although for a couple years it did let me ski patrol in Colorado while I lived in Phoenix. That didn’t suck. It mostly hurt my relationships with my “old” friends, because I just wasn’t around much. Folks I basically grew up with, as we all congregated in Boulder (mostly) when we started college, and learned to rely on each other as surrogate family. Complete with Crazy Uncle Wade at the head of the Thanksgiving table (Wade is now in the Marshall Islands, after working as an electrician in Antarctica).

On the other hand, I now have a social group that’s scattered across the country and the world. I see some of these people more than my local friends here in Phoenix, and we’re often on expense accounts without a curfew. I was sick last week at Black Hat and DefCon, but managed to spend a little quality time with folks like Chris Hoff, Alex Hutton, Martin and Zach from the Podcast, two good friends from Gartner days, Jeremiah, Ryan, Mike A., and the rest of the BJJ crew, and even some of these people’s spouses. Plus so many more – enough that going to DefCon (in particular) now feels more like a week of summer camp than a work conference. With beer. And parties in the biggest clubs in Vegas (open bar). And… well, we’re not 13 anymore.

What’s amazing and awesome is that almost none of us work together, and most of us don’t live anywhere near each other. And it isn’t unusual to roll into some random city (for a client gig, not even a conference) and find out someone else is also in town. We live strange lives as digital nomads who combine social media and frequent flyer miles to create a personal network that’s different from seeing the same faces every weekend at the Rio (Boulder thing), but likely just as strong. I don’t think this could exist without both the technical and physical components.

I still miss the consistency of life with a low-travel job. But in exchange I have the kinds of adventures other people write books about, and get to share them with a group of people I consider close friends, even if I can’t invite them over for a BBQ without enough time for them to get through their personal gropings at the airport.

-Rich

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Adrian quoted on tokenization.

Favorite Securosis Posts

• Mike Rothman: NoSQL and No Security. Nothing like poking a big security hole in an over-hyped market. Who needs DB security anyway?
• Adrian Lane: Data Security Lifecycle 2.0: Functions, Actors, and Controls. Why? Because the standard data security lifecycle fails when applied to cloud services – you need to take location and access into account. Our goal is to make the model simple to use, so please give us your feedback.
• David Mortman: Use THEIR data to tell YOUR story.
• Rich: Words matter: You stop attacks, not breaches. I know, I know, we should stop thinking marketing will ever change. But everyone has their windmill.

Other Securosis Posts

• Say Hello to Chip and Pin.
• Incite 8/10/2011: Back to the Future.
• Introducing the Data Security Lifecycle 2.0.
• Data Security Lifecycle 2.0 and the Cloud: Locations and Access.
• Fact-Based Network Security: Defining ‘Risk’.
• Incite 8/3/2011: The Kids Are Our Future.
• Words matter: You stop attacks, not breaches.
• Cloud Security Training: August 16-18, Washington DC.
• Security has always been a BigData problem.
• New Blog Series: Fact-Based Network Security: Metrics and the Pursuit of Prioritization.

Favorite Outside Posts

• Mike Rothman: Marcus Ranum: Dangerous Cyberwar Rhetoric. Ranum can pontificate with the best of them, but this perspective is dead on. Attribution is harder, and even more important, as the lines between “cyber” and physical war inevitably blur.
• Adrian Lane: Comments about the $200,000 BlueHat prize. ErrataRob clarifies the security bounty program.
• David Mortman: Metricon 6 Wrap-Up.
• Chris Pepper: Badass of the Week: Abram A. Heller. Totally badass without being an ass.
• Rich: Sunset of a Blog. Glenn is a good friend and one of the people who helped launch my writing career, especially on the Mac side (via TidBITS). This post shows the difference between a blogger and a writer.

Research Reports and Presentations

• Security Benchmarking: Going Beyond Metrics.
• Understanding and Selecting a File Activity Monitoring Solution.
• Database Activity Monitoring: Software vs. Appliance.
• React Faster and Better: New Approaches for Advanced Incident Response.
• Measuring and Optimizing Database Security Operations (DBQuant).
• Network Security in the Age of Any Computing.
• The Securosis 2010 Data Security Survey.
• Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts

• Microsoft Security Program & Vulnerability Data Now Available.
• Did Airport Scanners Give Boston TSA Agents Cancer? TSA says that’s BS.
• Survey Finds Smartphone Apps Store Too Much Personal Data. What? No way!
• 22 Reasons to Patch Your Windows PC, via Krebs.
• Cameron Threatens To Shut Down UK Social Networks.


Data Security Lifecycle 2.0: Functions, Actors, and Controls

In our last post we added location and access attributes to the Data Security Lifecycle. Now let’s start digging into the data flows and controls. To review: so far we’ve completed our topographic map for data. It illustrates, at a high level, how data moves in and out of different environments, and to and from different devices. It doesn’t yet tell us which controls to use or where to place them. That’s where the next layer comes in, as we specify locations, actors (‘who’), and functions.

Functions

There are three things we can do with a given datum:

• Access: View/access the data, including copying, file transfers, and other exchanges of information.
• Process: Perform a transaction on the data: update it, use it in a business processing transaction, etc.
• Store: Store the data (in a file, database, etc.).

The table below shows which functions map to which phases of the lifecycle. Each of these functions is performed in a location, by an actor (person).

Controls

Essentially, a control is what we use to restrict a list of possible actions down to allowed actions. For example, encryption can be used to restrict access to data, application controls to restrict processing via authorization, and DRM storage to prevent unauthorized copies/accesses. To determine the necessary controls, we first list all possible functions, locations, and actors, and then decide which ones to allow. We then determine what controls (technical or process) we need to make that happen. Controls can be either preventative or detective (monitoring), but keep in mind that monitoring controls which don’t tie back into some sort of alerting or analysis merely provide an audit log, not a functional control.

This might be a little clearer for some of you as a table: you list a function, the actor, and the location, and then check whether it is allowed or not. Any time you have a ‘no’ in the allowed box, you implement and document a control.

Tying It Together

In essence what we’ve produced is a high-level version of a data flow diagram (albeit not using standard programming taxonomy). We start by mapping the possible data flows, including devices and different physical and virtual locations, and at which phases in its lifecycle data can move between those locations. Then, for each phase of the lifecycle in a location, we determine which functions, people/systems, and more granular locations for working with the data are possible. We then figure out which we want to restrict, and what controls we need to enforce those restrictions.

This looks complex, but keep in mind that you aren’t likely to do it for all data within an entire organization. For given data in a given application/implementation you’ll be working with a much more restrictive subset of possibilities. This clearly becomes more involved with bigger applications, but practically speaking you need to know where data flows, what’s possible, and what should be allowed, to design your security. In a future post we’ll show you an example, and down the road we also plan to produce a controls matrix showing where the different data security controls fit in.
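If it helps to see that worksheet as code, here is a minimal sketch of the table just described. The three functions come from the model; the actors, locations, and example entries are hypothetical, since the original table is an image.

```python
# Sketch of the functions/actors/locations worksheet. Every 'no' in the
# allowed column implies a control that must be implemented and documented.

FUNCTIONS = ("access", "process", "store")

# (function, actor, location) -> allowed?  (entries are illustrative)
matrix = {
    ("access",  "employee",  "internal-app"):  True,
    ("access",  "employee",  "mobile-device"): False,
    ("process", "batch-job", "private-cloud"): True,
    ("store",   "partner",   "public-saas"):   False,
}

def required_controls(matrix):
    """Return every disallowed combination, each of which needs a control."""
    return [combo for combo, allowed in matrix.items() if not allowed]

for function, actor, location in required_controls(matrix):
    print(f"Control needed: prevent '{function}' by '{actor}' in '{location}'")
```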


Say Hello to Chip and Pin

No, it’s not a Penn & Teller rip-off act – it’s a new credit card format. On August 9th Visa announced that they are going to aggressively encourage merchants to switch over to Chip and Pin (CAP) ‘smart’ credit cards. Europay-Mastercard-Visa (EMV) developed a smart credit card format standard many years ago, and the technology was adopted by many other countries over the following decade. In the US adoption has never really happened. That’s about to change, because Visa will give merchants a pass on PCI compliance if they adopt smart cards, or let them assume 100% of fraud liability if they don’t.

Why the new push? Because it helps Visa’s and Mastercard’s bottom lines. There are a couple specific reasons Visa wants this changeover, and security is not at the top of their list. The principal benefit is that CAP cards allow applications to be installed and run on the card. This opens up new revenue opportunities for card issuers, as they bolster affinity programs and provide additional card functionality: card co-branding, recurring payments, coupons, discounted pricing from merchants, card-to-card gifting, and pre-paid transit tokens are all examples. Second, they feel that CAP opens up new markets and will engender broader use of the cards. The smart card industry in general is worried about losing market share to smart phones, which can provide the same features as CAP-based smart cards. In fact we see payment applications of all types popping up, many of which are (now) sponsored by credit card companies to avoid market share erosion. Finally, the card companies want to issue a single card type, standardizing cards and systems across all markets.

Don’t get me wrong – security absolutely is a benefit of CAP. ‘Smart’ credit cards are much harder to forge, offering much better security for ‘card present’ transactions, as the point-of-sale terminal can electronically validate the card. And the card can encrypt data locally, making it much easier to support (true) end-to-end encryption, so sensitive data is not exposed while processing payments. But most smart cards do not help secure Internet purchases or card-not-present transactions over the phone.

What scares me about this announcement is that Visa is willing to waive PCI DSS compliance for merchants that switch 75% or more of their transactions to CAP-based smart cards! Visa is offering this as an incentive for large merchants to make the change. The idea is that the savings on security, audit preparation, and remediation will offset the costs of the new hardware and software. Visa has not specified whether this will be limited to the POS part of the audit, or cover all parts of the security specification, but the press release suggests the former.

Merchants have resisted this change because the terminals are expensive! To support CAP you need to swap out terminals at a hefty per-terminal cost, upgrade supporting point-of-sale software, and alter some payment processing systems. Even small businesses – gas stations, fast food, grocery stores, etc. – will require sizable investment to support CAP. Pricing obviously varies, but tends to run about $1,000 to $1,600 per terminal. Small merchants who are not subject to external auditing will not benefit from the audit waiver that can save larger merchants so much, so they are expected to continue dragging their feet on adoption.

One last nugget for thought: If EMV can enforce end-to-end encryption, from terminal to payment processor, will they eventually disallow merchants from seeing any card or payment data? Will Visa fundamentally disrupt the existing card application space?


Incite 8/10/2011: Back to the Future

Getting old just sucks. OK, I’m not really old, but I feel that way. I think I’m suffering from the fundamental problem Rich described a few weeks ago. I think I’m 20, so I do these intense exercise programs and athletic pursuits. Lo and behold, I get hurt.

First it was the knees. My right knee has bothered me for years, and I was looking at the end of my COBRA health benefits from my last employer, so I figured I’d get it checked out before my crap insurance kicked in (don’t get me started on health insurance). Sure enough there was some inflammation. Technically they called it patellar tendinitis. My trusty doctor prescribed some anti-inflammatories and suggested a few weeks of physical therapy to strengthen my quads to alleviate the issue. But that took me out of my pretty intense routines for a few months. That wouldn’t normally be a huge problem, but softball season was starting, and I had lost some of my fitness. OK, a lot of my fitness. I was probably still ahead of most of the old dudes I play with, but all the same – when you aren’t in shape, you are more open to injury.

You know how this ends. I plant the wrong way crossing home and I can feel the jolt through my entire body. My middle back and shoulder tighten up right away. It wouldn’t be the first time I’ve pinched a nerve, so I figure within a day or two with some good stretching it’ll be fine. I take a trip and three days later try some yoga. Yeah, that didn’t work out too well. I made it through 10 minutes of that workout before saying No Mas. Since when did I become Roberto Duran? Oh crap, this may be a bit more serious than I figured. It probably didn’t help that the next day we loaded up the family truckster and drove 7 hours to see the girls at camp. When I woke up the next day I could hardly move. I’m not one to complain, but I was pretty well immobile. Once we got to Maryland, I got a deep tissue massage. No change. My doctor called in some relaxants. I tried to persevere. No dice. I flew home to see my doc, who thought there was a disc problem. An MRI would confirm. And confirm it did. I have a degenerative disc (C5-6, for those orthopedists out there). It took about two weeks, but it finally settled down.

I’m going to try to rehab it with more PT, more stretching, and less impact. I don’t want to do shots. I definitely don’t want to do surgery. So I’ve got to adapt. P90X may not be the best idea. Not 6 days a week, anyway. I can build up a good sweat doing yoga, and maybe I’ll even buy that bike the Boss has been pushing for. Or perhaps take a walk. How novel! I’m not going to settle for a sedentary existence. I like beer and food too much for that to end well. But I don’t need to kill myself either. So I’m searching for the middle ground. I know, for most of my life I didn’t even know there was such a thing as middle ground. But as I get older I need to find it. Because life is a marathon, not a sprint. I can’t go back to the future in a broken-down DeLorean, now can I?

-Mike

Photo credits: “Lateral X-Ray of Neck Showing Flexion | Donald Corenman, MD | Spine Surgery Colorado” originally uploaded by neckandback

Incite 4 U

Long live speeds and feeds: Coming from a networking background, I have a serious disdain for vendors differentiating based on speed. Or how many signatures something ships with. Or any other aspect of the device with little bearing on real-world performance. After the 40Gbps IPS rhetoric died down a few years ago, I hoped we were past the “my tool is bigger than yours” marketing. Yeah, not so much. Our pals at Check Point dive back into the speeds/feeds muck with their new big box, and NetworkWorld needs a visit from the clue bat for buying into the 1Tbps firewall. Check Point did map out a path to 1Tbps, but it’ll take them 4 years to get there. But hey, a 1Tbps firewall generates some page views. By the way, there are maybe a handful of customers that even need 100Gbps of perimeter throughput. But long live the speeds and feeds! – MR

I guess the parents were also in the room: When I worked for the State of Colorado, my first real IT job was as a systems and network administrator at the University of Colorado in Boulder. I had a wacky boss who wasn’t the most stable of individuals. When I got my Classified Staff status he informed me that I no longer had to worry about being fired. I quote: “even if you have sex with a student on a desk in front of the class, they’ll just suspend you with pay”. (When he finally went off the deep end it took them years of demotions to finally get him to quit.) I’ve always thought of PCI QSA (assessment) companies that way. It has always seemed that no matter what they did, there weren’t any consequences. I wouldn’t say that’s changing, but a company called Chief Security Officers is the first to have its QSA status revoked. No one is saying why, but I suspect less than satisfactory performance, with consistency. – RM

CSA Vendor Guide: The CSA is offering a Security Registry for cloud providers, to help customers compare cloud security offerings across vendors. The CSA has a questionnaire for each provider – basically an RFP/RFI – and will publish the results for customers. The good news is that this will provide some of the high-level information that is really hard to find – or entirely absent


NoSQL and No Security

Of all the presentations at Black Hat USA 2011, I found Brian Sullivan’s “Server-Side JavaScript Injection: Attacking NoSQL and Node.js” the most startling. While I was aware of the poor security of most NoSQL database installations – especially their lack of support for authorization and authentication – I was not aware of their susceptibility to injection of both commands and code. Apparently Mongo and many of the NoSQL databases are nothing more than JavaScript processing engines, without the stigma of authentication. Most of these products are subject to several classes of attack, including injection, XSS, and CSRF. Brian demonstrated blind NoSQL injection scripts that can both discover database contents and run arbitrary commands. He cataloged an entire Mongo database with a couple lines of PHP.

Node.js is a commonly used web server – it’s lightweight and simple to deploy. It’s also insecure as hell! Node and NoSQL are basically new JavaScript-based platforms – with both server and client functionality – which makes them susceptible to both client-side and server-side attacks. These attacks are very similar to the classic browser, web server, and relational database attacks we have observed over the past decade. When you mix in facilities like JSON (to objectify data elements) you get a bunch of methods which provide an easy way for attackers to inject code onto the server. Brian demonstrated the ability to inject persistent changes to the Node server, writing an executable to the file system using Node.js calls and then running it.

But it got worse from there: JSONP – essentially JSON with padding – is intended to provide cross-origin resource sharing. Yes, it’s a tool to bypass the same-origin policy. By wrapping query results in a callback, you can take action based upon the result set without end user participation. Third-party code can make requests and process the results – easily hijacking the session – without the user being aware of what’s going on.

These are exactly the same vulnerabilities we saw with browsers, web servers, and database servers 10 years ago. Only the syntax is different. What’s worrisome is the rapid adoption rate of these platforms – cheap, fast, and easy is always attractive for developers looking to get their applications running quickly. But it’s clear that the platforms are not ready for production applications – they should be reserved for proofs of concept due to their complete lack of security controls. I’ll update this post with a link when the slide deck is posted. It’s worth your time to review just how easy these compromises are, and he also provides a few hints on how to protect yourself at the end of the presentation.
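To give a flavor of this class of attack – a generic sketch, not Brian’s exact exploit, with hypothetical collection and field names – consider a lookup that concatenates user input into a MongoDB $where clause, which the server happily evaluates as JavaScript:

```python
# Generic illustration of server-side JavaScript injection against MongoDB.
# Requires the pymongo driver and a reachable MongoDB instance.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]

def lookup_unsafe(username: str):
    # User input is concatenated into a $where clause, which the server
    # evaluates as JavaScript. Input such as:  ' || '1'=='1
    # turns the predicate into "always true" and dumps the collection --
    # and a blind injection can walk the database from there.
    js = "this.user == '" + username + "'"
    return list(db.accounts.find({"$where": js}))

def lookup_safe(username: str):
    # Structured query: input is treated as data, never evaluated as code.
    return list(db.accounts.find({"user": username}))
```

The safe version is the same parameterization lesson we learned with SQL injection a decade ago: keep user input out of anything the server will execute.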


Introducing the Data Security Lifecycle 2.0

Four years ago I wrote the initial Data Security Lifecycle and a series of posts covering the constituent technologies. In 2009 I updated it to better fit cloud computing, and it was incorporated into the Cloud Security Alliance Guidance, but I have never been happy with that work. It was rushed and didn’t address cloud specifics nearly sufficiently. Adrian and I just spent a bunch of time updating the cycle, and it is now a much better representation of the real world. Keep in mind that this is a high-level model to help guide your decisions, but we think this time around we were able to identify places where it can more specifically guide your data security endeavors.

(As a side note, you might notice I use “data security” and “information-centric security” interchangeably. I think infocentric is more accurate, but data security is more recognized, so that’s what I tend to use.)

If you are familiar with the previous model you will immediately notice that this one is much more complex. We hope it’s also much more useful. The old model really only listed controls for data in different phases of the lifecycle – it didn’t account for location, ownership, access methods, and other factors. This update should better reflect the more complex environments and use cases we tend to see these days. Due to its complexity, we need to break the new Lifecycle into a series of posts. In this first post we will revisit the basic lifecycle, and in the next post we will add locations and access.

The lifecycle includes six phases, from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed).

• Create: This is probably better named Create/Update because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
• Store: Storing is the act of committing digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
• Use: Data is viewed, processed, or otherwise used in some sort of activity.
• Share: Data is exchanged between users, customers, and partners.
• Archive: Data leaves active use and enters long-term storage.
• Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).

These high-level activities describe the major phases of a datum’s life, and in a future post we will cover security controls for each phase. But before we discuss controls we need to incorporate two additional aspects: locations and access devices.
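For readers who prefer code to diagrams, here is a minimal sketch of the six phases; the trace is a hypothetical example of the non-linear movement described above, not something from the model itself.

```python
# Minimal sketch of the lifecycle phases. Transitions are deliberately
# unconstrained: as noted above, data can bounce between phases and need
# not pass through all of them.
from enum import Enum

class Phase(Enum):
    CREATE = "create/update"
    STORE = "store"
    USE = "use"
    SHARE = "share"
    ARCHIVE = "archive"
    DESTROY = "destroy"

# A record that is archived, pulled back into active use, then shared --
# a legal path even though the diagram draws the phases linearly.
trace = [Phase.CREATE, Phase.STORE, Phase.USE, Phase.ARCHIVE,
         Phase.USE, Phase.SHARE]
```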


Data Security Lifecycle 2.0 and the Cloud: Locations and Access

In our last post we reviewed the Data Security Lifecycle, but other than some minor wording changes (and a prettier graphic thanks to PowerPoint SmartArt) it was the same as our four-year-old original version. But as we mentioned, quite a bit has changed since then, exemplified by the emergence and adoption of cloud computing and increased mobility. Although the Lifecycle itself still applies to basic, traditional infrastructure, we will focus on these more complex use cases, which better reflect what most of you are dealing with on a day-to-day basis.

Locations

One gap in the original Lifecycle was that it failed to adequately address movement of data between repositories, environments, and organizations. A large amount of enterprise data now transitions between a variety of storage locations, applications, and operating environments. Even data created in a locked-down application may find itself backed up someplace else, replicated to alternative standby environments, or exported for processing by other applications. And all of this can happen at any phase of the Lifecycle.

We can illustrate this by thinking of the Lifecycle not as a single linear operation, but as a series of smaller lifecycles running in different operating environments. At nearly any phase data can move into, out of, and between these environments – the key for data security is identifying these movements and applying the right controls at the right security boundaries. As with cloud deployment models, these locations may be internal, external, public, private, hybrid, and so on. Some may be cloud providers, others traditional outsourcers, or perhaps multiple locations within a single data center.

For data security, at this point there are four things to understand:

• Where are the potential locations for my data?
• What are the lifecycles and controls in each of those locations?
• Where in each lifecycle can data move between locations?
• How does data move between locations (via what channel)?

Access

Now that we know where our data lives and how it moves, we need to know who is accessing it and how. There are two factors here:

• Who accesses the data?
• How can they access it (device & channel)?

Data today is accessed from all sorts of different devices. The days of employees only accessing data through restrictive applications on locked-down desktops are quickly coming to an end (with a few exceptions). These devices have different security characteristics and may use different applications, especially with applications we’ve moved to SaaS providers – who often build custom applications for mobile devices, which offer different functionality than PCs.

Later in the model we will deal with who, but the diagram below shows how complex this can be – with a variety of data locations (and application environments), each with its own data lifecycle, all accessed by a variety of devices in different locations. Some data lives entirely within a single location, while other data moves in and out of various locations… and sometimes directly between external providers.

This completes our “topographic map” of the Lifecycle. In our next post we will dig into mapping data flow and controls. In the next few posts we will finish covering the background material, and then show you how to use this to pragmatically evaluate and design security controls.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.