Incite 12/19/2012: Celebration

As we say goodbye to Old Man 2012 and get ready to welcome Baby New Year 2013, it is time for some downtime and reflection. This will be the last Incite of the year. My focus over the next two weeks will be enjoying the accomplishments of the past 12 months. Which, by the way, is very hard for me. I came into the world with the unsatisfied gene. No matter how good it is, it can be better. No matter how much got done, I could have done more. With every accomplishment, I have already started looking towards the next goal because there are always more things to do, different windmills to tilt at, and another mountain to climb. But not this year. I will make a concerted effort to acknowledge where I’ve been and how far I’ve come, both personally and professionally. And it’s been a long time coming.

The Boss and I were talking last night and she mentioned that we need to enjoy things a bit more. To have more fun. We have a great lifestyle and comforts I couldn’t have imagined growing up in a much more modest situation, but it always seems we’re running from one place to the next. Fighting yet another fire, working on the next project, or filling up our social calendar. She is exactly right.

I also need to celebrate life. We keep being reminded how fleeting it is. I will take some time to appreciate my good health and the health of my family. I will enjoy the quality time I get to spend with the people I care about. And I’ll be thankful for every day I get. At some point in the future, I will get an invitation to stop playing this game called life. Until then I plan to make the most of it.

When I say ‘celebrate’, don’t expect a big blowout bash or any other ostentatious showing of prosperity. I like to celebrate in a low-key fashion. I’m not into material things, so I don’t celebrate a good year by buying things I don’t need. I’m also painfully aware that it’s still tough out there. Good fortune has overlooked many folks who have more talent and work harder than I do. These folks continue to struggle as the global economy continues its slow, arduous recovery. More to the point, I know success is fleeting, and I have personally been down a lot more than I have been up. I’ll smile a bit thinking back on the last year, but I am all too aware there is more work to be done, and on January 1 the meter resets to 0. See? There I go again, moving forward even when I’m trying to stay in one place.

We have wrapped up our 2013 planning at Securosis, and we have a good plan. As good as 2012 has been, it can get better. We will launch the Nexus, we will continue investing in our cloud security curriculum, and we will continue researching, using our unique Totally Transparent Research model. And there will also be a surprise or two out of us next year. It will all be a lot of work, and I look forward to it. If it were easy, everyone would be doing it.

And I would be remiss if I didn’t thank all of you for reading our stuff, adding comments to our posts, telling us when we’re wrong, and tipping one (or ten) back when we see each other in person. Every company is built on relationships, and we at Securosis are very, very fortunate to have great relationships with great folks at all levels and functions within the security ecosystem. I wake up some days and pinch myself that I get to pontificate all day, every day. Yup, that calls for a celebration. It must be beer o’clock somewhere.

From all of us at Securosis, have a great holiday, be safe, and we’ll see you in 2013.
–Mike

Photo credits: Celebrate You – Celebrate Life! originally uploaded by Keith Davenport

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Building an Early Warning System
      • Deploying the EWS
      • Determining Urgency
  • Understanding and Selecting an Enterprise Key Manager
      • Management Features

Newly Published Papers

  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

Title fail: I got pretty excited – any article called “Information Security as a Business Enabler” is bound to give me fodder to lampoon for hours. I mean, a business enabler? C’mon, man! Normally security is a business disabler. I remember trying to position digital certificates as an enabler of new business processes back in the day, and getting laughed out of the customer’s office. So you can imagine how disappointed I was to see the article is really about doing an impact analysis and post-mortem after a breach. I mean, the article is solid and makes points we have been talking about for years. But how this has anything to do with business enablement is beyond me. So some editor is either trolling for views or didn’t read the article. Either way it’s title FAIL. – MR

HP contracts small guy syndrome: In last week’s Incite, both Mike and I commented on Gartner’s criticism of Amazon’s and HP’s service level agreements (SLAs) for their respective clouds. Lo and behold, HP responded with an amusing blog post this week. Remember that the only real cloud security controls you have are those guaranteed in the contract, so it’s amusing that HP first felt the need to ‘educate’ us all on what an


The CloudSec Chicken or the DevOps Egg?

I am on a plane headed home after a couple days of business development meetings in Northern California, and I am starting to notice a bit of a chasm in the cloud security world. Companies, for good reason, tend to be wary of investing in new products and features before they smell customer demand (the dream-build-pray contingent exempted). The winners of the game invest just enough ahead of the curve that they don’t lose out too badly to a competitor, but they don’t pay too much for shiny toys on the shelf. Or they wait and buy a startup.

This is having an interesting inhibiting effect on the security industry, particularly in terms of the cloud. Security companies tend to sell to security buyers. Security buyers, as a group, are not overly informed on the nitty-gritty details of cloud security and operations. So demand from traditional security buying centers is somewhat limited.

Dev and Ops, however, are deep in the muck of cloud. They are the ones tasked with building and maintaining this infrastructure. They buy from different vendors and have different priorities, but are often still tasked with meeting security policy requirements (if they exist). They have the knowledge and tools, and in many cases (such as identity, access, and entitlement management), the implementation ball is in their court.

The result is that Dev and Ops are the ones spending on cloud management tools, many of which include security features. Security vendors aren’t necessarily seeing these deals, and thus the demand. Also, their sales forces are poorly aligned to talk to the right buying centers, in the right language, which inhibits opportunities. Because they don’t see the opportunity they don’t have the motivation to build solutions. It’s better to cloudwash, spin the marketing brochures, and wait.

My concern is that we see more security functionality being pushed into the DevOps tool sets. Not that I care who is selling and buying as long as the job gets done, but my suspicion is that this is inhibiting at least some of the security development we need, as cloud adoption continues and we start moving into more advanced deployment scenarios. There are certainly some successes out there, but especially on the public cloud and the programmatic/software defined security side, advancement is lacking (specifically more API support for security automation and orchestration).

There are reasonable odds that both security teams and security vendors will fall behind, and there are some things DevOps simply will not do, which may result in a cloud security gap – as we have seen in application security and other fast-moving areas that broke ‘traditional’ models. It will probably also mean missed opportunities for some security vendors, especially as infrastructure vendors eat their lunch.

This isn’t an easy problem for the vendors to solve – they need to tap into the right buying centers and align sales forces before they will see enough demand – and their tools will need to offer more than pure security, to appeal to the DevOps problem space. The problem is easier for security pros – educate yourself on cloud, understand the technical nuances and differences from traditional infrastructure and operating models, and get engaged with DevOps beyond setting policies that break with operational realities.

Or maybe I’m just bored on an airplane, and spent too much time driving rental cars the past few days.


Friday Summary: December 13, 2012—You, Me, and Twitter

I have an on again / off again, love/hate relationship with Twitter. Those of you who follow me might have noticed I suddenly went from barely posting to fully re-engaging with the community. Sometimes I find myself getting fed up with the navel gazing of the echo chamber, as we seem to rehash the same issues over and over again, looking for grammatical and logical gotchas in 140 characters. Twitter lacks context and nuance, and so all too easily degrades into little more than a political talk show. When I’m in a bad mood, or am drowning at work, it’s one of the first things to go.

But Twitter also plays a powerful, positive role in my life. It connects me to people in a unique manner unlike any other social media. As someone who works at home alone, Twitter is my water cooler, serving up personal and professional interactions across organizational and geographic boundaries. It isn’t a substitute for human proximity, but satisfies part of that need while providing a stunning scope and scale. Twitter, for me, isn’t a substitute for physical socialization, but is instead an enhancer that extends and augments our reach. When a plane disgorges me in some foreign city, any city, it is Twitter that guarantees I can find someone to have a beer or coffee with. It’s probably good that it wasn’t invented until I was a little older, a little more responsible, and a lot married.

As a researcher it is also one of the most powerful tools in the arsenal. Need a contact at a company? Done. Have a question on some obscure aspect of security or coding? Done. Need to find some references using a product? Done. It’s a real-time asynchronous peer network – which is why it is so much better for this stuff than LinkedIn or Facebook.

But as a professional, and technically an executive (albeit on a very small scale), Twitter challenges me to decide where to draw the line between personal and professional. Twitter today is as much, or more, a media tool as a social network. It is an essential outlet for our digital personas, and plays a critical role in shaping public perceptions. This is as true for a small-scale security analyst as for the Hollywood elite or the Pope. What we tweet defines what people think of us, like it or not.

For myself I made the decision a long time ago that Twitter should reflect who I am. I decided on honesty instead of a crafted facade. This is a much bigger professional risk than you might think. I regularly post items that could offend customers, prospects, or anyone listening. It also reveals more about me than I am sometimes comfortable with in public. For example, I know my tweet stream is monitored by PR and AR handlers from companies of all sizes. They now know my propensity for foul language, the trials and tribulations of my family life, my favorite beers, health and workout histories, travel schedules, and more. I don’t put my entire life up there, but that’s a lot more than I want in an analyst database (yes, they exist). One day Twitter will help me fill a cancelled meeting on a business development trip, and the next it will draw legal threats or lose me a deal.

Tweets also have a tendency to reflect what’s on my mind at a point in time, but completely out of context. Take this morning for example: I tweeted out my frustration at the part of the industry and community that spends inordinate time knocking others down in furtherance of their own egos and agendas. But I failed to capture the nuance of my thought, and the tweet unfortunately referred to the entire industry.
That wasn’t my intention, and I tried to clarify, but additional context is a poor substitute for initial clarity. My choice was to be honest or crafted. Either Twitter reflects who I am, or I create a digital persona not necessarily aligned with my real self. I decided I would rather reveal too much about who I am than play politician and rely on a ‘managed’ image. Twitter is never exactly who I am, but neither is any form of writing or public interaction.

This explains my relationship with Twitter. It reflects who I am, and when I’m down and out I see (and use) Twitter as an extension of my frustration. When I’m on top Twitter is a source of inspiration and connection. It really isn’t any different than physical social interaction. As an introvert, when I’m in a bad mood, the last thing I want is to sit in a crowded room listening to random discussions. When I’m flying high (metaphorically – I’m not into that stuff despite any legalization) I have no problem engaging in spirited debate on even the most inane subjects, without concern for the consequences. For me, Twitter is an extension of the real world, bringing the same benefits and consequences.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post on Big Data Security Recommendations.
  • Rich quoted on DLP at TechTarget.

Favorite Securosis Posts

  • Mike Rothman (and David Mortman): The CloudSec Chicken or the DevOps Egg. I had a very similar conversation regarding the impact of SDN on network security this week. It’s hard to balance being ahead of the market and showing ‘thought leadership’ against building something the market won’t like. Most of the network security players are waiting for VMware to define the interfaces and interactions before they commit to much of anything.
  • Adrian Lane: Can we effectively monitor big data? Yes, it’s my post, but I think DAM needs to be re-engineered to accommodate big data.
  • Rich: Building an Early Warning System: Deploying the EWS. Mike is taking a very cool approach with this series.

Other Securosis Posts

  • Selecting an Enterprise Key Manager.
  • Incite 12/12/2012: Love the Grind.
  • Building an Early Warning System: Determining


Selecting an Enterprise Key Manager

Now that you have a better understanding of major key manager features and options we can spend some time outlining the selection process. This largely comes down to understanding your current technical and business requirements (including any pesky compliance requirements), and trying to plan ahead for future needs. Yes, this is all incredibly obvious, but with so many different options out there – from HSMs with key management to cloud key management services – it’s too easy to get distracted by the bells and whistles and miss some core requirements.

Determine current project requirements

Nobody buys a key manager for fun. It isn’t like you find yourself with some spare budget and go, “Hey, I think I want a key manager”. There is always a current project driving requirements, so that’s the best place to start. We will talk about potential future requirements, but never let them interfere with your immediate needs. We have seen projects get diverted or derailed as the selection team tries to plan for every contingency, then buys a product that barely meets current needs, which quickly causes major technical frustration.

  • List existing systems, applications, and services involved: The best place to start is to pull together a list of the different systems, application stacks, and services that need key management. This could be as simple as “Oracle databases” or as complex as a list of different databases, programming languages used in applications, directory servers, backup systems, and other off-the-shelf application stacks.
  • Determine required platform support: With your list of systems, applications, and services in hand, determine exactly what platform support you need. This includes which programming languages need API support, any packaged applications needing support (including versions), and even potentially the operating systems and hardware involved. This should also include encryption algorithms to support. Be as specific as possible because you will send this list to prospective vendors to ensure their product can support your platforms.
  • Map out draft architectures: Not only is it important to know which technical platforms need support, you also need an idea of how you plan on connecting them to the key manager. For example, there is a big difference between connecting a single database to a key manager in the same data center, and supporting a few hundred cloud instances connected back to a key manager in a hybrid cloud scenario. If you plan to deploy one or more key managers in an enterprise deployment, with multiple different platforms, in different locations within your environment, you will want to be sure you can get all the appropriate bits connected – and you might need features such as multiple network interfaces.
  • Calculate performance requirements: It can be difficult to accurately calculate performance needs before you start testing, but do your best. The two key pieces to estimate are the number of key operations per second and your network requirements (both speed and number of concurrent network connections/sockets). If you plan to use a key manager that also performs cryptographic operations, include those performance requirements.
  • List any additional encryption support required: We have focused almost exclusively on key management, but as we mentioned, some key managers also support a variety of other crypto operations. If you plan to use the tool for encryption, decryption, signing, certificate management, or other functions, make a list and include any detailed requirements like the ones above (platform support, performance, etc.).
  • Determine compliance, administration, reporting, and certification requirements: This is the laundry list of any compliance requirements (such as which operations require auditing), administrative and systems management features (backup, administrator separation of duties, etc.), reporting needs (such as pre-built PCI reports), and certifications (nearly always FIPS 140). We have detailed these throughout this series, and you can use it as a guide when you build your checklist.
  • List additional technical requirements: By now you will have a list of your core requirements, but there are likely additional technical features on your required or optional list.

And last but not least, spend some time planning for the future. Check with other business units to see if they have key management needs or are already using a key manager. Talk to your developers to see if they might need encryption and key management for an upcoming project. Review any existing key management, especially in application stacks, that might benefit from a more robust solution. You don’t want to list out every potential scenario to the point where no product can possibly meet all your needs, but it is useful to take a step back before you put together an RFP, to see if you can take a more strategic approach.

Write your draft RFP

Try to align your RFP to the requirements collected above. There is often a tendency to ask for every feature you have ever seen in product demos, but this frequently results in bad RFP responses that make it even harder to match vendor responses against your priorities. (That’s a polite way of saying that an expansive cookie-cutter RFP request is likely to result in a cookie-cutter response rather than one tailored to your needs). Enough of the soapbox – let’s get into testing.

Testing

Integrating a key manager into your operations will either be incredibly easy or incredibly tricky, depending on everything from network topography to applications involved to overall architecture. For any project of serious scale it is absolutely essential to test compatibility and performance before you buy.

Integration Testing

The single most crucial piece to test is whether the key manager will actually work with whatever application or systems you need to connect it with. Ideally you will test this in your own environment before you buy, but this isn’t always possible. Not all vendors can provide test systems or licenses to all potential customers, and in many situations you will need services support or even custom programming to integrate the key manager. Here are some suggestions and expectations for how to test and minimize your risk: If you are deploying the
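Circling back to the performance requirements step above, here is a rough back-of-the-envelope sketch of how you might ballpark key operations per second for an RFP; every number in it is an illustrative assumption to be replaced with your own figures.

    # Rough sizing sketch for the 'calculate performance requirements' step (all numbers assumed).
    app_servers = 40             # instances that fetch data keys from the key manager
    requests_per_sec_each = 50   # peak application requests per server
    key_ops_per_request = 0.02   # most requests hit locally cached keys; 1 in 50 reaches the key manager
    rekey_burst_ops = 200        # worst-case burst during a scheduled key rotation

    steady_state_ops = app_servers * requests_per_sec_each * key_ops_per_request   # 40 key ops/sec
    peak_ops = steady_state_ops + rekey_burst_ops                                  # 240 key ops/sec

    # Concurrent connections matter as much as raw operations per second:
    concurrent_connections = app_servers   # assuming one persistent connection per application server
    print(steady_state_ops, peak_ops, concurrent_connections)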


Incite 12/12/2012: Love the Grind

As I boarded the bus, which would take me to the train, which would take me into NYC to work my engineering co-op job at Mobil Oil, I had plenty of time to think. I mostly thought about how I never wanted to be one of those folks who do a 75-90 minute commute for 25 years. Day in, day out. Take the bus to the train to the job. Leave the job, get on the train and get home at 7 or 8 pm. I was 19 at the time. I would do cool and exciting things. I’d jet around the world as a Captain of Industry. Commuting in my suit and tie was not interesting. No thanks.

Well, it’s 25 years later. Now I can appreciate those folks for who they were. They were grinders. They went to work every day. They did their jobs. Presumably they had lives and hobbies outside work. After 20-something years in the workforce, I have come to realize it is a grind even if I don’t have a commute and I do jet around the world, working on interesting problems and meeting interesting people. But it’s still a grind. And it’s not just work where you have to grind. After almost a decade wrangling 3 kids, that’s a grind too. Get them to activities, help with homework and projects, teach them right from wrong. Every day. Grind it out.

But here’s the thing. I viewed those salarymen taking the bus to the train every day as faceless automatons, just putting in their time and waiting to die. But for some activities, being a grind doesn’t make them bad. And grinding doesn’t have to make you unhappy. In order to have some semblance of contentment, and dare I say, happiness, you need to learn to love the grind. It’s a rare person who has exciting days every day. The folks who can do what they want and be spontaneous all the time are few and far between. Or lucky. Or born into the right family… so still lucky. The rest of us have responsibilities to our loved ones, to our employers, to ourselves.

Again, that doesn’t mean some days the grind doesn’t get the better of me. That’s part of the deal. Some days you beat the grind, other days the grind beats you. So you get up the next day and grind some more. At some point, you appreciate the routine. At least I do. I have been fortunate enough to travel the world – mostly for work. I have seen lots of places. Met lots of people. I enjoy those experiences, but there is something about getting up in my own bed and getting back to the grind that I love. The grind I chose.

And the grind changes over time. At some point I hope to spend less time grinding for a job. But that doesn’t mean I’ll stop grinding. There is always something to do. Though I do have an ulterior motive for grinding day in and day out. I can’t make the case to my kids about the importance of the work ethic unless I do it. They need to see me grinding. Then they’ll learn to expect the grind. And eventually to love it. Because that’s life.

–Mike

PS: Happy 12/12/12. It will be the last time we see this date for 100 years. And then it will be in the year 2112, and Rush will finally have their revenge…

Photo credits: Angle Grinder originally uploaded by HowdeeDoodat

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
  • Building an Early Warning System
      • Deploying the EWS
      • Determining Urgency
  • Understanding and Selecting an Enterprise Key Manager
      • Management Features

Newly Published Papers

  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

Responsible agonizing: I don’t expect us to ever reach consensus on the disclosure debate. There are far too many philosophical and religious underpinnings, mired in endless competing interests, for us to ever agree. What’s responsible to one party always looks irresponsible to another, and even the definition of responsible changes with the circumstances. That’s why I am so impressed with Cody Brocious (Daeken)’s heartfelt discussion of his thought process and the implications of his disclosure this summer of a serious vulnerability in hotel locks. For those not following the story, Cody devised a way to easily unlock a particular lock model widely used in hotels, with under $50 in hardware. He discovered it years ago but only made it public this summer. A few weeks ago criminals were discovered using his technique for real world theft, and the manufacturer subsequently had to open up a massive, very expensive response. Cody weighs his thoughts on his decision to disclose and the consequences. Whatever your disclosure beliefs, this is the kind of thought and focus on customers/users that we should not only hope for, but expect. – RM

How much information is enough? Early in my career as a network analyst, product differentiation was generally based on speeds and feeds. My thing is bigger than your thing, so you should buy it. We still see that a bit in network security, but as we move towards understanding the value of security and threat intelligence (check out the Early Warning series to learn more) I wonder how big is big enough. Over on the Risk I/O blog they talk about crowdsourcing vulnerability intelligence, but it’s really about aggregating information to determine activity patterns. Once you reach a certain point, does it really matter whether a vendor or service provider fields 5 billion or


Building an Early Warning System: Deploying the EWS

Now that we have covered the concepts behind the Early Warning System, it’s time to put them into practice. We start by integrating a number of disparate technology and information sources as the basis of the system – building the technology platform. We need the EWS to aggregate third-party intelligence feeds and scan for those indicators within your environment to highlight attack conditions. When we consider important capabilities of the EWS, a few major capabilities become apparent:

  • Open: The job of the EWS is to aggregate information, which means it needs to be easy to get information in. Intelligence feeds are typically just data (often XML), which makes integration relatively simple. But also consider how to extract information from other security sources such as SIEM, vulnerability management, identity, endpoint protection, and network security, and get it all into the system. Remember that the point is not to build yet another aggregation point – it is to take whatever is important from each of those other sources and leverage it to determine Early Warning Urgency.
  • Scalable: You will use a lot of data for broad Early Warning analysis, so scalability is an important consideration. But computational scalability is likely to be more important – you will be searching and mining the aggregated data intensively, so you need robust indexing.
  • Search: Early warning doesn’t lend itself to absolute answers. Using threat intelligence you evaluate the urgency of an issue and look for the indicators in your environment. So the technology needs to make it easy for you to search all your data sources, and then identify at-risk assets based on the indicators you found.
  • Urgency Scoring: Early Warning is all about making bets on which attackers and attacks and assets are the most important to worry about, so you need a flexible scoring mechanism. As we mentioned earlier, we are fans of quantification and statistical analysis; but for an EWS you need a way to weight assets, intelligence sources, and attacks – so you can calculate an urgency score. Which might be as simple as red/yellow/green urgency.

Some other capabilities can be useful in the Early Warning process – including traditional security capabilities such as alerting and thresholding. Again, you don’t know quite what you are looking for initially, but once you determine that a specific attack requires active monitoring you will want to set up appropriate alerts within the system. Alternatively, you could take an attack pattern and load it into an existing SIEM or other security analytics solution. Similarly, reporting is important, as you look to evaluate your intelligence feeds and your accuracy in pinpointing urgent attacks. As with more traditional tools, customization of alerts, dashboards, and reports enables you to configure the tool to your own requirements.

That brings us to the question of whether you should repurpose existing technology as an Early Warning System. Let’s first take a look at the most obvious candidate: the existing SIEM/Log Management platform. Go back to the key requirements above and you see that integration is perhaps the most important criterion. The good news is that most SIEMs are built to accept data from a variety of different sources. The most significant impediment right now is the relative immaturity of threat intelligence integration. Go into the process with your eyes open, and understand that you will need to handle much of the integration yourself.
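To give a sense of what that integration work looks like in practice, here is a minimal sketch of pulling indicators out of a hypothetical XML threat feed and matching them against records exported from internal sources such as a SIEM; the element and field names are illustrative assumptions, not any vendor's actual schema.

    import xml.etree.ElementTree as ET

    def parse_indicator_feed(xml_text):
        """Extract indicators (IPs, domains, file hashes) from a hypothetical XML threat feed."""
        indicators = []
        for entry in ET.fromstring(xml_text).findall("indicator"):   # assumed element name
            indicators.append({
                "type": entry.get("type"),                           # e.g. "ip", "domain", "md5"
                "value": entry.findtext("value"),
                "source": entry.findtext("source"),
                "confidence": float(entry.findtext("confidence", default="0.5")),
            })
        return indicators

    def match_indicators(indicators, internal_observations):
        """Flag feed indicators that also appear in data pulled from SIEM, flow, or DNS logs."""
        observed = {(obs["type"], obs["value"]) for obs in internal_observations}
        return [ind for ind in indicators if (ind["type"], ind["value"]) in observed]

    # Matches like these are the raw material for the urgency scoring described in this series.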
The other logical candidate is the vulnerability management platform – especially in light of its evolution toward serving as a more functional asset repository, with granular detail on attack paths and configurations. But VM platforms aren’t there yet – alerting and searching tend to be weaker due to the heritage of the technology. But over time we will see both SIEM and VM systems mature as legitimate security management platforms. In the meantime your VM system will feed the EWS, so make sure you are comfortable getting data out of the VM.

Big Data vs. “A Lot of Data”

While we are talking about the EWS platform, we need to address the elephant in the discussion: Big Data. We see the term “Big Data” used to market everything relating to security management and analytics. Any broad security analysis requires digesting, indexing, and analyzing a lot of security data. In our vernacular, Big Data means analysis via technologies like Hadoop, MapReduce, NoSQL, etc. These technologies are great, and they show tremendous promise for helping to more effectively identify security attacks. But they may not be the best choices for an Early Warning System. Remember back to the SIEM evolution, when vendors moved to purpose-built datastores and analysis engines because relational databases ran out of steam. But the key to any large security system is what you need to do, and whether the technology can handle it, scalably. The underlying technology isn’t nearly as important as what it enables you to do. We know there will be a mountain of data, from all sorts of places in all sorts of formats. So focus on openness, scalability, and customization.

Turning Urgency into Action

Once you get an Early Warning alert you need to figure out whether it requires action, and if so what kind to take. Validation and remediation are beyond our scope here – we have already covered them in Malware Analysis Quant, Evolving Endpoint Malware Detection, Implementing and Managing Patch and Configuration Management, and other papers which examined the different aspects of active defense and remediation. So we will just touch on the high-level concepts.

Validate Urgency: The first order of business is to validate the intelligence and determine the actual risk. The early warning alert was triggered by a particular situation, such as a weaponized exploit in the wild or vulnerable devices. Perhaps a partner network was compromised by a specific attack. In this step you validate the risk and take it from concept to reality by finding exposed devices, or perhaps evidence of attack or successful compromise. In a perfect world you would select an attack scenario and


Building an Early Warning System: Determining Urgency

The Early Warning series has leveraged your existing internal data and integrated external threat feeds, in an effort to get out ahead of the inevitable attacks on your critical systems. This is all well and good, but you still have lots of data without enough usable information. So we now focus on the analysis aspect of the Early Warning System (EWS).

You may think this is just rehashing a lot of the work done through our SIEM, Incident Response, and Network Forensics research – all those functions also leverage data in an effort to identify attacks. The biggest difference is that in an early warning context you don’t know what you’re looking for. Years ago, US Defense Secretary Donald Rumsfeld described this as looking for “unknown unknowns”. Early warning turns traditional security analysis on its head. Using traditional tools and tactics, including those mentioned above, you look for patterns in the data. The traditional approaches require you to know what you are looking for – accomplished by modeling threats, baselining your environment, and then looking for things out of the ordinary. But when looking for unknown unknowns you don’t have a baseline or a threat model because you don’t yet know what you’re looking for.

As a security professional your BS detector is probably howling right now. Most of us gave up on proactively fighting threats long ago. Will you ever truly become proactive? Is any early warning capability bulletproof? Of course not. But EWS analysis gives us a way to narrow our focus, and enables us to more effectively mine our internal security data. It offers some context to the reams of data you have collected. By combining threat intelligence you can make informed guesses at what may come next. This helps you figure out the relevance and likelihood of the emerging attacks. So you aren’t really looking for “unknown unknowns”. You’re looking for signs of emerging attacks, using indicators found by others. Which at least beats waiting until your data is exfiltrated to figure out that a new Trojan is circulating. Much better to learn from the misfortunes of others and head off attackers before they finish.

It comes back to looking at both external and internal data, and deciding how urgently you need to take action. We call this Early Warning Urgency, and a very simple formula describes it:

Relevance * Likelihood * Proximity = Early Warning Urgency

Relevance

The first order of business is to determine the relevance to your organization of any threat intelligence. This should be based on the threat and whether it can be used in your environment. Like the attack path analysis described in Vulnerability Management Evolution, vulnerabilities which do not exist in your environment do not pose a risk. A more concrete example is worrying about Stuxnet even if you don’t have any control systems. That doesn’t mean you won’t pay any attention to Stuxnet – it uses a number of interesting Windows exploits, and may evolve in the future – but if you don’t have any control systems its relevance is low. There are two aspects of determining relevance:

  • Attack surface: Are you vulnerable to the specific attack vector? Weaponized Windows 2000 exploits aren’t relevant if you don’t have any Windows 2000 systems in your environment. Once you have patched all instances of a specific vulnerability on your devices, you get a respite from worrying about that exploit. This is how the asset base and vulnerability information within your internal data collection provide the context to determine early warning urgency.
  • Intelligence Reliability: You need to evaluate each threat intelligence feed on an ongoing basis to determine its usefulness. If a certain feed triggers many false positives it becomes less relevant. On the other hand, if a feed usually nails a certain type of attack, you should take its warnings of another attack of that type particularly seriously.

Note that attack surface isn’t necessarily restricted to your own assets and environment. Service providers, business partners, and even customers represent indirect risks to your environment – if one of them is compromised, the attack might have a direct path to your assets. We will discuss that threat under Proximity, below.

Likelihood

When trying to assess the likelihood of an early warning situation requiring action, you need to consider the attacker. This is where adversary analysis comes into play. We discussed this a bit in Defending Against Denial of Service. Threat intelligence includes speculation regarding the adversary; this helps you determine the likelihood of a successful attack, based on the competence and motive of the attacker. State-sponsored attackers, for instance, generally demand greater diligence than pranksters. You can also weigh the type of information targeted by the attack to determine your risk. You probably don’t need to pay much attention to credit card stealing trojans if you don’t process credit cards.

Likelihood is a squishy concept, and most risk analysis folks consider all sorts of statistical models and analysis techniques to solidify their assessments. We certainly like the idea of quantifying attack likelihood with fine granularity, but we try to be realistic about the amount of data you will have to analyze. So the likelihood variable tends to be more art than science; but over time, as threat intelligence services aggregate more data over a longer period, they will be able to provide better founded and more quantified analysis.

Proximity

How early do you want the warning to be? An Early Warning System can track not only direct attacks on your environment, but also indirect attacks on organizations and individuals you connect with. We call this proximity. Direct attacks have a higher proximity factor and greater urgency. If someone attacks you it is more serious than if they go after your neighbor. The attack isn’t material (or real) until it is launched directly against you, but you will want to encompass some other parties in your Early Warning System. Let’s start with business partners. If a business partner is compromised, the attacker
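To make the Relevance * Likelihood * Proximity formula above concrete, here is a minimal sketch of how an EWS might turn the three factors into a red/yellow/green urgency score; the weights and thresholds are illustrative assumptions, not a prescribed scale.

    def early_warning_urgency(relevance, likelihood, proximity):
        """
        Each factor is a 0.0-1.0 judgment:
          relevance  - do we have the attack surface, and how reliable is the feed?
          likelihood - attacker competence and motive against the data we hold
          proximity  - 1.0 for direct attacks, lower for partners, providers, and customers
        """
        score = relevance * likelihood * proximity
        if score >= 0.5:
            return "red"     # act now: validate exposure and remediate
        if score >= 0.2:
            return "yellow"  # monitor actively and set alerts
        return "green"       # note it and move on

    # Example: a weaponized exploit we are exposed to (0.9), from a capable adversary (0.7),
    # currently seen only at a business partner rather than against us directly (0.5):
    # early_warning_urgency(0.9, 0.7, 0.5) -> "yellow"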


Can we effectively monitor big data?

During the big data research project I found myself thinking about how I would secure a NoSQL database if I was responsible for a cluster. One area I can’t help thinking about is Database Activity Monitoring: how would I implement a solution for big data databases? The only currently available solution I am aware of is very limited in what it provides. And I think the situation will stay that way for a long time. The ways to collect data with big data clusters, and to deploy monitoring, are straightforward. But analyzing queries will remain a significant engineering challenge. NoSQL tasks are processed very differently than on relational platforms, and the information at your disposal is significantly less.

First some background: With Database Activity Monitoring, you judge a user’s behavior by looking at the queries they send to the database. There are two basic analysis techniques for relational databases: either to examine the metadata associated with relational database queries, or to examine the structure and content of the queries themselves.

The original and most common method is metadata examination – we look at data including user identity, time of day, origin location of the query, and origin application of the query. Just as importantly we examine which objects are requested – such as column definitions – to see if a user may be requesting sensitive data. We might even look at frequency of queries or quantity of data returned. All these data points can indicate system misuse.

The second method is to examine the query structure and variables provided by the user. There are specific indicators in the where clause of a relational query that can indicate SQL injection or logic attacks on the database. There are specific patterns, such as “1=1”, designed to confuse the query parser into automatically taking action. There are content ‘fingerprints’, such as social security number formats, which indicate sensitive data. And there are adjustments to the from clause, or even usage of optional query elements, designed to mask attacks from the Database Activity Monitor. But the point is that relational query grammars are known, finite, and fully cataloged. It’s easy for databases and monitors to validate structure, and then by proxy user intent.

With big data tasks – most often MapReduce – it’s not quite so easy. MapReduce is a means of distributing a query across many nodes, and reassembling the results from each node. These tasks look a lot more like code than structured relational queries. But it gets worse: the query model could be text search, or an XPath XML parser, or SPARQL. A monitor would need to parse very different query types. Unfortunately we don’t necessarily know the data storage model of the database, which complicates things. Is it graph data, tuple-store, quasi-relational, or document storage? We get no hints from the selection’s structure or data type, because in a non-relational database that data is not easily accessible. There is no system table to quickly consult for table and column types. Additionally, the rate at which data moves in and out of the cluster makes dynamic content inspection infeasible. We don’t know the database storage structure and cannot even count on knowing the query model without some inspection and analysis. And – I really hate to say this because the term is so overused and abused – understanding the intention of a MapReduce task is a halting problem: it’s at least difficult, and perhaps impossible, to dynamically determine whether it is malicious.
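To ground the relational side of that comparison, here is a minimal sketch of the two analysis techniques described above, metadata checks and query inspection, applied to a single statement; the field names, approved applications, and rules are illustrative assumptions, and real DAM products use far richer policy engines.

    import re

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")     # content 'fingerprint' for sensitive data
    TAUTOLOGY_PATTERN = re.compile(r"\b(\d+)\s*=\s*\1\b")  # catches 1=1 style WHERE clauses

    def check_metadata(event):
        """Flag suspicious context: who ran the query, when, from which application, touching what."""
        alerts = []
        if event["hour"] < 6 or event["hour"] > 22:
            alerts.append("query outside normal business hours")
        if event["source_app"] not in ("crm", "reporting"):  # assumed list of approved applications
            alerts.append("unexpected origin application")
        if "credit_card" in event["objects"]:
            alerts.append("sensitive column requested")
        return alerts

    def check_query(sql):
        """Flag suspicious structure or content in the statement itself."""
        alerts = []
        if TAUTOLOGY_PATTERN.search(sql):
            alerts.append("possible SQL injection (tautology in WHERE clause)")
        if SSN_PATTERN.search(sql):
            alerts.append("social security number literal in query")
        return alerts

    # Usage sketch:
    # check_metadata({"hour": 23, "source_app": "unknown", "objects": ["credit_card"]})
    # check_query("SELECT * FROM users WHERE 1=1 OR name = 'bob'")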
So where does that leave us? I suspect that Database Activity Monitoring for NoSQL databases cannot be as effective as relational database monitoring for a very long time. I expect solutions to work purely by analyzing available metadata for the foreseeable future, and they will restrict themselves to cookie-cutter MapReduce/YARN deployments in Hadoop environments. I imagine that query analysis engines will need to learn their target database (deployment, data storage scheme, and query type) and adapt to the platforms, which will take several cycles for the vendors to get right. I expect it to be a very long time before we see truly useful systems – both because of the engineering difficulty and because of the diversity of available platforms.

I wish I could say that I have seen innovative new approaches to this problem, and that they are just over the horizon, but I have not. With so many customers using these systems and pumping tons of information into them – much of it sensitive – demand for security will come. And based on what’s available today I expect the tools to lean heavily toward logging tools and WAF. That’s my opinion.


Incite 12/5/2012: Travel Tribulations

Travel is an occupational hazard for industry analysts. There are benefits to meeting face to face with clients, and part of the gig is speaking at events and attending conferences. That means planes, trains, and automobiles. I know there are plenty of folks who fly more than I do, but that was never a contest I wanted to win. As long as I make Platinum on Delta, I’m good. I get my upgrades and priority boarding, and it works. With the advent of TSA Pre-check, I’m also exposed to a lot less security theater. Sure there are airports and terminals where I still need to suffer the indignity of a Freedom Fondle, but they are few and far between now. More often I’m through security and on my way to the gate within 5 minutes. So the travel is tolerable for me.

Last weekend, I took The Boy on a trip to visit a family member celebrating a milestone birthday. It was a surprise and our efforts were appreciated. To save a little coin, we opted for the ultra low-cost Spirit Airlines. So we had to pack everything into a pair of backpacks, as I’ll be damned if I’ll pay $35 (each way) to bring a roller bag. But we’re men, so we can make do with two outfits per day and only one pair of shoes. Let’s just acknowledge that if the girls were on the trip I would have paid out the wazoo for carry-on bags.

The Boy doesn’t like to fly, so I spent most of the trip trying to explain how the plane flies and what turbulence is. He’s 9, so safety statistics didn’t get me anywhere either. So I resorted to modern day parenting, pleading with him to play a game on his iPod touch. We made it to our destination in one piece and had a great time over the weekend. Though he didn’t sleep nearly enough, so by Sunday morning he was cranky and had a headache. Things went downhill from there.

By the time we got to the airport for our flight home he was complaining about a headache and tummy ache. Not what you want to hear when you’re about to get on a plane. Especially not after he tossed his cookies in the terminal. Clean up on Aisle 4. He said he felt better, so I was optimistic he’d be OK. My optimism was misplaced. About 15 minutes after takeoff he got sick again. On me. The good news (if there is good news in that situation) is that he only had Baked Lays and Sprite in his stomach. Thankfully not the hot dog I had gotten him earlier. The only thing worse than being covered in partially digested Lays is wearing hot dog chunks as a hat. Not sure why I thought a hot dog would settle his stomach; evidently I wasn’t thinking clearly either.

I even had the airsick bag ready at hand. My mistake? I didn’t check whether I could actually open the bag, as it was sealed shut with 3-4 pieces of gum. Awesome. The flight attendants didn’t charge me for the extra bags we needed when he continued tossing his cookies, or the napkins I needed to clean up. It was good that plastic garbage bags were included in my ultra-low-cost fare. And it was a short flight, so the discomfort was limited to 90 minutes. The Boy was a trooper and about midway through the flight started to feel better.

We made it home, showered up, and got a good story out of the experience. But it reminded me how much easier some things are now that the kids are getting older. Sure we have to deal with pre-teen angst and other such drama, but we only get covered in their bodily fluids once or twice a year nowadays. So that is progress, I guess.
–Mike

Photo credits: Puking Pumpkin originally uploaded by Nick DeNardis

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Building an Early Warning System
      • External Threat Feeds
      • Internal Data Collection and Baselining
  • Understanding and Selecting an Enterprise Key Manager
      • Management Features
      • Technical Features, Part 2
      • Technical Features, Part 1

Newly Published Papers

  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

Privacy is still dead. Next: It’s amazing to me there is still pushback about decrypting SSL on outbound traffic in a corporate environment. It’s like the inmates are running the asylum. Folks complain about privacy issues because you can look at what pr0n sites they are perusing during work. Even when you tell them you are monitoring their stuff, ostensibly to look for proof of exfiltration. Don’t these folks realize that iPads on LTE are for pr0n anyway? Not that I’d know anything about that. Maybe set up an auto-responder on email and point folks directly to your Internet usage policy when they bitch about web monitoring. Unless you are in a country that doesn’t allow you to monitor. Then just reimage the machine and move on. – MR

Out with a whisper: In the past many database exploits required active usage of credentials to exploit a vulnerability. And those were almost guaranteed to be available, as most databases came pre-configured with test and ‘public’ accounts which could be leveraged into administrative access with the right credentials. For the most part these easy-to-access credentials have been removed from out-of-the-box configurations and are much less likely to be accessible by default. Any DBA who runs configuration assessments will immediately see this type of access flagged in their reports, and


Enterprise Key Manager: Management Features

It’s one thing to collect, secure, and track a wide range of keys; but doing so in a useful, manageable manner demonstrates the differences between key management products. Managing disparate keys from distributed applications and systems, for multiple business units, technical silos, and IT management teams, is more than a little complicated. It involves careful segregation and management of keys; multiple administrative roles; abilities to organize and group keys, users, systems, and administrators; appropriate reporting; and an effective user interface to tie it all together.

Role management and separation of duties

If you are managing more than a single set of keys for a single application or system you need a robust role-based access control system (RBAC) – not only for client access, but for the administrators managing the system. It needs to support ironclad separation of duties, and multiple levels of access and administration. An enterprise key manager should support multiple roles, especially multiple administrative roles. Regular users never directly access the key manager, but system and application admins, auditors, and security personnel may all need some level of access at different points of the key management lifecycle. For instance:

  • A super-admin role for administration of the key manager itself, with no access to the actual keys.
  • Limited administrator roles that allow access to subsets of administrative functions such as backup and restore, creating new key groups, and so on.
  • An audit and reporting role for viewing reports and audit logs. This may be further subsetted to allow access only to certain audit logs (e.g., a specific application).
  • System/application manager roles for individual systems and application administrators who need to generate and manage keys for their respective responsibilities.
  • Sub-application manager roles which only have access to a subset of the rights of a system or application manager (e.g., create new keys only but not view keys).
  • System/application roles for the actual technical components that need access to keys.

Any of these roles may need access to a subset of functionality, and be restricted to groups or individual key sets. For example, a database security administrator for a particular system gains full access to create and manage keys only for the databases associated with those systems, but not to manage audit logs, and no ability to create or access keys for any other applications or systems. Ideally you can build an entitlement matrix where you take a particular role, then assign it to a specific user and group of keys. Such as assigning the “application manager” role to “user bob” for group “CRM keys”.

Split administrative rights

There almost always comes a time when administrators need deeper access to perform highly sensitive functions or even directly access keys. Restoring from backup, replication, rotating keys, revoking keys, or accessing keys directly are some functions with major security implications which you may not want to trust to a single administrator. Most key managers allow you to require multiple administrators to approve these functions, to limit the ability of any one administrator to compromise security. This is especially important when working with the master keys for the key manager, which are needed for tasks including replication and restoration from backup. Such functions which involve the master keys are often handled through a split key.
Key splitting provides each administrator with a portion of a key, all or some of which are required. This is often called “m of n” since you need m sub-keys out of a total of n in existence to perform an operation (e.g., 3 of 5 admin keys). These keys or certificates can be stored on a smart card or similar security device for better security.

Key grouping and segregation

Role management covers users and their access to the system, while key groups and segregation manage the objects (keys) themselves. No one assigns roles to individual keys – you assign keys to groups, and then parcel out rights from there (as we described in some examples above). Assigning keys and collections of keys to groups allows you to group keys not only by system or application (such as a single database server), but for entire collections or even business units (such as all the databases in accounting). These groups are then segregated from each other, and rights are assigned per group. Ideally groups are hierarchical so you can group all application keys, then subset application keys by application group, and then by individual application.

Auditing and reporting

In our compliance-driven security society, it isn’t enough to merely audit activity. You need fine-grained auditing that is then accessible with customized reports for different compliance and security needs. Types of activity to audit include:

  • All access to keys
  • All administrative functions on the key manager
  • All key operations – including generating or rotating keys

A key manager is about as security-sensitive as it gets, and so everything that happens to it should be auditable. That doesn’t mean you will want to track every time a key is sent to an authorized application, but you should have the ability for when you need it.

Reporting

Raw audit logs aren’t overly useful on a day to day basis, but a good reporting infrastructure helps keep the auditors off your back while highlighting potential security issues. Key managers may include a variety of pre-set reports and support creation of custom reports. For example, you could generate a report of all administrator access (as opposed to application access) to a particular key group, or one covering all administrative activity in the system. Reports might be run on a preset schedule, emailing summaries of activity out on a regular basis to the appropriate stakeholders.

User interface

In the early days of key management everything was handled using command line interfaces. Most current systems implement graphical user interfaces (often browser based) to improve usability. There are massive differences in look and feel across products, and a GUI that fits the workflow of your staff can save a great
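To make the entitlement matrix idea from earlier in this post concrete (a role, assigned to a user, scoped to a key group), here is a minimal sketch; the role names and permission sets are purely illustrative assumptions rather than any product's actual model.

    from dataclasses import dataclass

    # Role -> permitted operations. Names and permissions are illustrative only.
    ROLE_PERMISSIONS = {
        "super_admin": {"manage_system", "backup", "restore"},                     # no access to actual keys
        "application_manager": {"create_key", "rotate_key", "read_key_metadata"},
        "sub_application_manager": {"create_key"},                                 # can create but not view keys
        "auditor": {"view_audit_log", "run_reports"},
    }

    @dataclass
    class Assignment:
        user: str
        role: str
        key_group: str  # e.g. "CRM keys" or "accounting databases"

    def is_allowed(assignments, user, operation, key_group):
        """Return True if any of the user's assignments grants the operation on that key group."""
        return any(
            a.user == user
            and a.key_group == key_group
            and operation in ROLE_PERMISSIONS.get(a.role, set())
            for a in assignments
        )

    # Example from the text: assign the "application manager" role to user bob for the "CRM keys" group.
    assignments = [Assignment("bob", "application_manager", "CRM keys")]
    print(is_allowed(assignments, "bob", "rotate_key", "CRM keys"))   # True
    print(is_allowed(assignments, "bob", "rotate_key", "HR keys"))    # False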


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.