Continuous Security Monitoring: Defining CSM

In our introduction to Continuous Security Monitoring we discussed the rapid advancement of attacks, and why that means you can never “get ahead of the threat”. Instead you need to react faster to what’s happening, which requires shortening the window of exposure by embracing extensive security monitoring. We tipped our hats to both the PCI Council and the US government for requiring monitoring as a key aspect of their mandates. The US government pushed it a step further by including continuous in its definition of monitoring. We love the term ‘continuous’, but this one word has caused a lot of confusion among folks responsible for monitoring their environments. As we are prone to do, it is time to wade through the hyperbole to define what we mean by Continuous Security Monitoring, and then identify some of the challenges you will face in moving toward this ideal.

Defining CSM

We will not spend any time defining security monitoring – we have been writing about it for years. But now we need to delve into how continuous any monitoring really needs to be, given recent advances in attack tactics. Many solutions claim to offer “continuous monitoring”, but all too many simply scan or otherwise assess devices every couple of days – if that often. Sorry, but no. We have heard many excuses for why it is not practical to monitor everything continuously, including concerns about consumption of device resources, excessive bandwidth usage, and inability to deal with an avalanche of alerts. All those excuses ring hollow, because intermittent assessment leaves a window of exposure for attackers, and for critical devices you don’t have that luxury. Our definition of continuous is more in line with the dictionary definition:

con·tin·u·ous: adjective \kən-ˈtin-yü-əs\ – marked by uninterrupted extension in space, time, or sequence

The key word there is uninterrupted: always active. The strict constructionist definition of continuous security monitoring would be that the devices in question are monitored at all times – there is no window during which attackers can make a change without it being immediately detected. But we are neither constructionist nor religious – we take a realistic and pragmatic approach, which means accepting that not every organization can or should monitor all devices at all times. So we include asset criticality in our usage of CSM.

Some devices have access to very important stuff. You know, the stuff that if leaked will result in blood (likely yours and your team’s) flowing through the halls. The stuff that just cannot be compromised. Those devices need to be monitored continuously. And then there is everything else. In the “everything else” bucket land all those devices you still need to monitor and assess, but not as urgently or frequently. You will monitor these devices periodically, so long as you have other methods to detect and identify compromised devices, such as network analytics/anomaly detection and/or aggressive egress filtering.

The secret to success with CSM is choosing your high-criticality assets well, so we will get into that later in this series. Another critical success factor is discovering when new devices appear, classifying them quickly, and getting them into the monitoring system quickly. This requires strong process and technology to ensure you have visibility into all of your networks, can aggregate the data you need, and have sufficient computational horsepower for analysis.
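To make the criticality-driven cadence concrete, here is a minimal Python sketch of a policy that monitors critical assets continuously and assesses everything else periodically. The asset names and intervals are hypothetical illustrations of our own invention, not recommendations.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    CRITICAL = "critical"   # access to the stuff that cannot be compromised
    STANDARD = "standard"   # everything else

@dataclass
class Asset:
    hostname: str
    criticality: Criticality

# Hypothetical policy: 0 means continuous (uninterrupted) monitoring;
# anything else is a periodic assessment interval in hours.
ASSESS_INTERVAL_HOURS = {
    Criticality.CRITICAL: 0,
    Criticality.STANDARD: 24,
}

def monitoring_plan(assets):
    """Yield (asset, plan) pairs describing how each asset is watched."""
    for asset in assets:
        interval = ASSESS_INTERVAL_HOURS[asset.criticality]
        if interval == 0:
            plan = "continuous: always-on event and change monitoring"
        else:
            plan = f"periodic: assess every {interval} hours"
        yield asset, plan

if __name__ == "__main__":
    inventory = [
        Asset("db-payments-01", Criticality.CRITICAL),
        Asset("kiosk-lobby-03", Criticality.STANDARD),
    ]
    for asset, plan in monitoring_plan(inventory):
        print(f"{asset.hostname}: {plan}")
```

The point is not the code but the policy decision it encodes: criticality, not convenience, determines the monitoring window.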
Adapting the Network Security Operations process map we published a few years back, here is our Continuous Security Monitoring Process. The process is broken down into three phases. In the Plan phase you define policies, classify assets, and continuously discover new assets within your environment. In the Monitor phase you pull data from devices and other sources to aggregate and eventually analyze, in order to fire an alert if a potential attack or other situation of concern becomes apparent. You will monitor not only to detect attacks, but also to confirm authorized changes, identify unauthorized changes, and substantiate compliance with organizational and regulatory standards (mandates). In the final phase you take action (really, determine what action, if any, to take) by validating the alert and escalating as needed. As with all our process models, not all these activities will work or fit in your environment. We publish these maps to give you ideas about what you’ll need to do – they always require customization to your own needs.

The Challenge of Full Visibility

As we mentioned above, the key challenge in CSM is classifying assets, but your ability to do so is directly related to the visibility of your environment. You cannot monitor or protect devices you don’t know about. So the key enabler for this entire CSM concept is an understanding of your network topology and the devices that connect to your networks. The goal is to avoid an “oh crap” moment, when a bunch of unknown devices and/or applications show up – and you have no idea what they are, what they have access to, or whether they are steaming piles of malware. So we need to be sure you are clear on how to do discovery in this context.

There are a number of discovery techniques, including actively scanning your entire address space for devices and profiling what you find. That works well enough, and is how most vulnerability management offerings handle discovery, so active discovery is one requirement. But a full address space scan can have a substantial network impact, so it isn’t appropriate during peak traffic times. And be sure to search both your IPv4 and IPv6 address spaces. You don’t have IPv6, you say? You will want to confirm that – many devices have IPv6 turned on by default, broadcasting those addresses to potential attackers.

You should supplement active discovery with a passive capability that monitors network traffic and identifies new devices from their network communications. Sophisticated passive analysis can profile devices and identify vulnerabilities, but passive monitoring’s primary goal is to find new unmanaged devices faster, then trigger a full active scan upon identification. Passive discovery is also helpful for identifying devices hidden behind firewalls and on protected segments, which block active discovery and vulnerability scanning. It is also important to visualize your network topology – a drill-down map
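As a rough illustration of pairing the two techniques, here is a minimal Python sketch in which passive observation of a never-before-seen source address triggers a full active scan. It assumes the scapy library and packet capture privileges, and the nmap invocation is illustrative rather than product guidance.

```python
# Minimal sketch: passive discovery (watch traffic for unknown devices)
# triggering active discovery (a profiling scan of each newcomer).
# Assumes scapy is installed and the script runs with capture privileges.
import subprocess
from scapy.all import IP, sniff

known_devices = set()  # in practice, seeded from your asset inventory

def handle_packet(pkt):
    if IP not in pkt:
        return
    src = pkt[IP].src
    if src not in known_devices:
        known_devices.add(src)
        print(f"new device observed passively: {src} -- launching active scan")
        # Profile the newcomer: service versions (-sV) and OS detection (-O).
        subprocess.Popen(["nmap", "-sV", "-O", src])

if __name__ == "__main__":
    sniff(filter="ip", prn=handle_packet, store=False)
```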


Why. Continuous. Security. Monitoring? [New Series]

Remember the old marketing tagline, “Get Ahead of the Threat”? It seems pretty funny now, doesn’t it? Given the kinds of attacks we are facing and attackers’ increasing sophistication, we never see the threats coming, and being even marginally reactive seems like a pipe dream. The bad news is that it will not get easier any time soon. Don’t shoot the messenger, but understand that is the reality of today’s information security landscape.

The behavior of most organizations over the past decade hasn’t helped, either. Most companies spend the bulk of their security budget on protective controls that have been proven over and over again to be ineffective. Part of this is due to compliance mandates for ancient technologies, but only very forward-thinking organizations have invested sufficiently in the detection and response aspects of their security programs. Unfortunately organizations tend to become enlightened only after cleaning up major data breaches. For the unenlightened, detection and response remain horribly under-resourced and underfunded.

At the same time the US government has been pushing a “continuous monitoring” (CM) agenda on both military and civilian agencies to provide “situational awareness,” which is really just a fancy term for understanding what the hell is happening in your environment at any given time. The problem is that CM applies to a variety of operations disciplines in the public sector, and it doesn’t really mean ‘continuous’. CM is a good first step, but as with most first steps, too many organizations mistake it for the destination rather than the beginning of a long journey.

We have always strongly advocated security monitoring, and have published a ton of research on these topics, from our philosophical foundation (Monitor Everything) to our SIEM research (Understanding and Selecting, SIEM Replacement). And don’t forget our process modeling of Network Security Operations, which is all about security monitoring. So we don’t need to be sold on the importance of security monitoring, but evidently the industry still needs to be convinced, given the continued failure of even large organizations to realize they must combine a strong set of controls with (at least) equally strong capabilities for detection, monitoring, and incident response.

To complicate matters, technology continues to evolve, which means the tools and processes for comprehensive security monitoring look different than they did even 18 months ago, and they will look different again 18 months from now. So we are spinning up a series called Continuous Security Monitoring (CSM) to evaluate these advancements, fleshing out our definition of CSM and breaking down the decision points and technology platforms that provide this cornerstone of your security program.

React Faster and Better

We have gotten a lot of mileage from our React Faster and Better concept, which really just means you need to accept and plan for the reality that you cannot stop all attacks. Even more to the point (and potentially impacting your wallet), success is heavily determined by how quickly you detect attacks and how effectively you respond to them. We suggest you read that paper for a detailed perspective on what is involved in incident response – along with ideas on the organization, processes, and tools required to do it well. This series is not a rehash of that territory – instead it will help you assemble a toolkit (including both technology and process) to monitor your information assets to detect attacks more quickly and effectively.
If you don’t understand the importance of this aspect of security, just consider that a majority of breaches (at least according to the latest Verizon Business Data Breach Report) continue to be identified by third parties, such as payment processors and law enforcement. That means organizations have no idea when they are compromised. And that is a big problem.

Why CSM?

We can groan all day and night about how behind the times the PCI-DSS remains, or how the US government has defined Continuous Monitoring. But attackers innovate and move much more quickly than regulation, and that is not going to change. So you need to understand these mandates for what they are: a low bar to get you moving toward a broader goal of continuous security monitoring. But before we take the cynical security approach and gripe about what’s wrong, let’s recognize the yeoman’s work already done to highlight the importance of monitoring for protecting information (data). Without PCI and the US government mandating security data aggregation and analysis, we would still be spending most of our time evangelizing the need for even simplistic monitoring. The fact that we don’t is a testament to the industry’s ability to parlay a mandate into something productive.

That said, if you are looking to solve security problems and identify advanced attackers, you need to go well beyond the mandates. This series will introduce what we call “Continuous Security Monitoring” and dig into the different sources of data you need to figure out how big your problem is. See what we did there? You have a problem, and we won’t argue that – your success hinges on determining what has been compromised and for how long. As with all our research we will focus on tangible solutions that can be implemented now, while positioning yourself for future advances. We will make sure to discuss the technologies that enable Continuous Security Monitoring, and identify pitfalls to avoid as you progress.

As a reminder, we develop our research using our Totally Transparent Research methodology to make sure you all have an opportunity to let us know when we are right – and more importantly when we are wrong. Finally, we would like to thank Qualys, Tenable, and Tripwire for agreeing to potentially license the paper at the end of this process. After the July 4th holiday we will get going fast and furious. But no race cars will be harmed in the production of this series…


New Paper: Quick Wins with Website Protection Services

Simple website compromises can feel like crimes with no clear victims. Who cares if the Joey’s Bag of Donuts website gets popped? But that is not a defensible position any more. Attackers don’t just steal data from these websites – they also use them to host malware, command and control nodes, and proxies to defeat IP reputation systems. Even today, strange as it sounds, far too many websites have no protection at all. They are built on vulnerable technologies without a thought for securing critical data, and then let loose in a very hostile world. These sites are sitting ducks for script kiddies and organized crime.

In this paper we took a step back to write about protecting websites using Security as a Service (SECaaS) offerings. We used our Quick Wins framework to focus on how Website Protection Services can protect web properties quickly and without fuss. Of course it’s completely valid to deploy and manage your own devices to protect your websites; but Mr. Market tells us every day that the advantages of an always-on, simple-to-deploy, and secure-enough service consistently win out over yet another complex device in the network perimeter.

The landing page is in our Research Library. You can also download Quick Wins with Website Protection Services (PDF) directly. We would like to thank Akamai Technologies for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without companies licensing our content.


Incite 7/3/2013: Independence

During the week of July 4th in the US we cannot help but think about independence. First of all, it’s a great excuse for a party and BBQ, right? To celebrate our escape from the tyranny of rulers from a far-off land, we eat and drink beer until we want to puke, and blow up fireworks made in other far-off lands.

Being serious for a moment (but only a moment, we promise), independence means a lot of things to a lot of people, and now is a good time to revisit what it means to you, and make sure your choices reflect your beliefs. With the recent media frenzy around Snowden and NSA surveillance, many folks are questioning how the government justifies its actions under the heading of defending independence. Lots of folks aren’t sure which presents a greater threat to our independence – the bad guys or the government. Regardless of which side of that fence you take, folks in the US at least have an opportunity to discuss and exercise their rights to maintain that independence. Many folks, in many countries, take to the streets in protest every day, fighting like hell to get half the rights Americans have. So as you slug down your tenth beer on Thursday, keep that in mind.

The truth is that I don’t really think much about those macro issues. I’m one of the silly few who still appreciate that living in the US affords me opportunities I couldn’t readily get elsewhere. I choose to be thankful that the founding fathers had the stones to fight for this country, and the foresight to put in place a system that has held up pretty well for a couple hundred years. Is it perfect? No, nothing is. But compared to the other options it is definitely okay. I struggle to be optimistic about most things, but I’m pretty optimistic about the opportunities ahead of me, and I’ll be drinking to that on Thursday. And I may even drink some American beer for good measure.

But independence has a different context in my day-to-day life. I spend a lot of my time ensuring my kids grow up as independent, productive members of society. Whether that means leading by example with a strong work ethic, providing for their needs (and my kids want for nothing), or helping them navigate today’s tech-enabled, social-media-obsessed reality, the more we can prepare them for the real world, the less unsettling their path to adulthood will be. That’s why we send them away to camp each year. Sure it’s fun (as I described last week), but it also allows them to learn independence before they are really independent. A side benefit is that the Boss and I get a few weeks of independence from the day-to-day challenges of being actively engaged parents. I’m not sure what the Founding Fathers would have thought about sending their kids away to camp (although I’m sure the political pundits on cable news have an idea – they know what the founders would have thought about everything), and I don’t much care. It works for us.

And with that it is time to head down to see the Braves scalp the Marlins tonight. Summer camp isn’t only for kids.

–Mike

Photo credit: “Independence Mine State Park” originally uploaded by Kwong Yee Cheng

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
  • Database Denial of Service: Introduction
  • API Gateways: Key Management; Developer Tools; Access Provisioning; Security Enabling Innovation
  • Security Analytics with Big Data: Deployment Issues; Integration, New Events and New Approaches; Use Cases; Introduction

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

(Certified) content is king: Mozilla’s new mixed content blocker feature rolled out with the beta release of Firefox version 23. The feature provides three basic advantages: content privacy, man-in-the-middle (MitM) protection, and multi-site validation. It provides these capabilities by forcing HTTP to HTTPS and validating site certificates. All content is encrypted, and provided the site certs are valid, you get reasonable assurance that you are connected to your intended site. Content from non-HTTPS sources is ignored. I am one of the last at Securosis who continues to use Firefox as my primary browser. It has a bunch of weird usability anomalies, but I find my combination of basic features and security extensions (NoScript, Ghostery, & 1Password) indispensable. – AL

You must tuna SIEM: Yes, that is a lame play on the REO Speedwagon album (You Can Tune a Piano, but You Can’t Tuna Fish), but Xavier’s point in “Out of the Box” SIEM? Never is right on the money. These tools need tuning, period. Xavier “… demonstrate[s] that a good SIEM must be one deployed for your devices and applications by you and for your business!” It cannot be generic – the out-of-the-box stuff provides a starting point but requires substantial work to be operational and useful in your environment. Xavier even includes screen shots and pokes fun at built-in compliance reports. One-click PCI reports? Not so much. This post is full of win! But don’t lose sight of the point: monitoring out of the box is not very useful. Just dealing with the noise would be a full-time gig. So make sure any planned deployment has adequate time and resources allocated to tuna SIEM. Unless you want some more very expensive shelfware. – MR

Spurn the scumbag, update the law: Over the past twenty or thirty years technology has moved so rapidly, and changed society so fundamentally, that our laws haven’t come close to keeping up. The problem is exacerbated by lobbying efforts and elected officials and aides who lack the fundamental knowledge


The doctor is in the house (and knocking your site down)

Andy Ellis (yes, @csoandy) had a good educational post on DNS Reflection attacks. DrDoS (no, Digital Research DOS isn’t making a comeback – dating myself FTW) has proven an effective way for attackers to scale Denial of Service (DoS) attacks to over 100 Gbps. Andy explains how DNS Reflection works, why it’s hard to deal with, and what targets can do to defend themselves.

The first line of defense is always capacity. Without enough bandwidth at the front of your defenses, nothing else matters. This needs to be measurable both in raw bandwidth, as well as in packets per second, because hardware often has much lower bandwidth capacity as packet sizes shrink.

He also mentions filtering out DNS requests and protecting your DNS servers, among other tactics. If you haven’t had the pleasure of being pummeled by a DoS, and having it magnified by reflection attacks, you probably will. So learning as much as you can, and making sure you have proper defenses, can help you keep sites up and running.
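To see why reflection scales attacks so effectively, here is a back-of-the-envelope Python sketch of the amplification math. The packet sizes are illustrative assumptions, not measurements from Andy’s post.

```python
# Back-of-the-envelope DNS reflection (DrDoS) math.
# Packet sizes below are illustrative assumptions, not measurements.
QUERY_BYTES = 64       # small spoofed query sent to an open resolver
RESPONSE_BYTES = 3000  # much larger response reflected at the victim

amplification = RESPONSE_BYTES / QUERY_BYTES
attacker_mbps = 100
victim_gbps = attacker_mbps * amplification / 1000

print(f"amplification factor: ~{amplification:.0f}x")
print(f"{attacker_mbps} Mbps of spoofed queries -> ~{victim_gbps:.1f} Gbps at the victim")

# Capacity must also be measured in packets per second: the same link
# rate carries far more packets when packets are small, which is where
# hardware tends to fall over first.
def packets_per_second(link_gbps: float, packet_bytes: int) -> float:
    return link_gbps * 1e9 / 8 / packet_bytes

print(f"10 Gbps of 64-byte packets:   {packets_per_second(10, 64):,.0f} pps")
print(f"10 Gbps of 1500-byte packets: {packets_per_second(10, 1500):,.0f} pps")
```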


Standards don’t move fast enough

Branden Williams is exactly right: 2013 is a pivotal year for PCI DSS. A new version of the guidance will hit later this year.

So why is 2013 so important for PCI DSS? In this next revision (which will be released this year, enforced in 2015, and retired at the end of 2017) the standard has to play catch up. It’s notoriously been behind the times when it comes to the types of attacks that merchants face (albeit, most merchants don’t even follow PCI DSS well enough to see if compliance could prevent a breach), but now it’s way behind the times on the technologies that drive business.

Enforced in 2015. Yeah, 2015. You know, things change pretty quickly in technology – especially for attackers. But the rub is that the size and disruption of infrastructure changes for the large retailers who control the PCI DSS mean they cannot update their stuff fast enough. So they only update the DSS on a 3-year cycle, to allow time to implement the changes (and keep the ROC).

Let’s be clear: attackers are not waiting for the new version of PCI to figure out ways to bust new technologies. Did you think they were waiting to figure out how to game mobile payments? Of course not – but no specific guidance will be in play for at least 2 years. Regardless of whether it’s too little, it’s definitely too late. So what to do? Protect your stuff, and put PCI (and the other regulatory mandates) into the box where it belongs: a low bar you need to go beyond if you want to protect your data.

Photo credit: “Don’t let this happen to you! too little, too late order coal now!” originally uploaded by Keijo Knutas


Incite 6/26/2013: Camp Rules

June is a special time for us. School is over, and we take a couple weeks to chill before the kids head off to camp. Then we head up to the Delaware beach where the Boss and I met many moons ago, and put the kids on the bus to sleepaway camp. This year they are all going for 6 1/2 weeks. Yes, it’s good to be our kids. We spend the rest of the summer living vicariously through the pictures we see on the camp’s website.

The title of today’s Incite has a double meaning. First, camp does rule. Just seeing the kids renew friendships with their camp buddies at the bus stop, and how happy they are to be going back to their summer home, makes that clear. If it wasn’t for all these damn responsibilities I would be the first one on the bus. And what’s not to love about camp? They offer pretty much every activity you can imagine, and the kids get to be pseudo-independent. They learn critical life lessons that are invaluable when they leave the nest. All without their parents scrutinizing their every move. Camp rules!

But there are also rules that need to be followed. Like being kind to their bunkmates. Being respectful to their counselors and the camp administrators. Their camp actually has a list of behavioral expectations we read with the kids, which they must sign. Finally, they need to practice decent hygiene, because we aren’t there to make sure it happens.

For the girls it’s not a problem. Three years ago, when XX1 came back from camp, she was hyper-aware of whether she had food on her face after a meal and whether her hair looked good. Evidently there was an expectation in her bunk about hygiene that worked out great. XX2 has always been a little fashionista and takes time (too much, if you ask me) on her appearance, so we know she’ll brush her hair and keep clean. We look forward to seeing what new look XX2 goes with in the pictures we see every couple of days.

The Boy is a different story. At home he needs to be constantly reminded to put deodorant on, and last summer he didn’t even know we packed a brush for his hair. Seriously. He offered a new definition for ‘mophead’ after a month away. Being proactive, I figured it would be best to lay out the camp rules very specifically for the Boy. So in the first letter I sent him, I reminded him of what’s important:

Here is my only advice: Just have fun. And more fun. And then have some additional fun after that. That’s your only responsibility for the next 6 1/2 weeks. And you should probably change your underwear every couple of days. Also try not to wear your Maryland LAX shorts every day. Every other day is OK…

The Boss thought it was pretty funny until she realized I was being serious. Boys will be boys – even 44-year-old boys…

–Mike

Photo credit: “Outhouse Rules” originally uploaded by Live Life Happy

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
  • API Gateways: Access Provisioning; Security Enabling Innovation
  • Security Analytics with Big Data: Deployment Issues; Integration, New Events and New Approaches; Use Cases; Introduction
  • Network-based Malware Detection 2.0: Deployment Considerations; The Network’s Place in the Malware Lifecycle; Scaling NBMD; Evolving NBMD; Advanced Attackers Take No Prisoners

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

You, yes you. You are a security jerk: @ternus had a great post about being an Infosec Jerk, which really hits on a core issue hindering organizations’ willingness to take security seriously. It comes down to an incentive problem, as most behaviors do. @ternus sums it up perfectly: “Never attribute to incompetence that which can be explained by differing incentive structures.” Developers and ops folks typically have little incentive to address security issues. But they do have incentive to ship code or deploy servers and apps. We security folks don’t add much to the top line, so we need to meet them more than halfway, and the post offers some great tips on how to do that. Also read The Phoenix Project to get a feel for how to make a process work with security built in. Or you can continue to be a jerk. How’s that working out so far? – MR

False confidence: No, it’s not surprising that most companies don’t use big data for security analytics, per the findings of a recent McAfee study. Most security teams don’t know what big data is yet, much less use it for advanced threat and event analysis. But the best part of the study was the confidence of the respondents – over 70% were confident they could identify insider threats and external attacks. Which is ironic, as that is roughly the percentage of breaches detected by people outside the victim organization. Maybe it’s not their security products that give them confidence, but the quality of their customers or the law enforcement agencies who notify them of breaches. But seriously, if we agree that big data can advance security, the reason most customers can’t harness that value is that they are waiting for their vendors to deliver, and the vendors are not quite there yet. – AL

You break it, you own it: Although it is very far from perfect, one of the more effective security controls in the Apple universe is the application vetting process. Instead of running an open marketplace, Apple reviews all iOS and Mac apps that come into their stores. They definitely don’t catch everything, but it is impossible to argue that this process hasn’t reduced the spread of malware – the number


Network-based Malware Detection 2.0: Deployment Considerations

As we wrap up Network-based Malware Detection 2.0, we have seen that the areas of most rapid change are scalability and accuracy. That said, getting the greatest impact on your security posture from NBMD requires a number of critical decisions. You need to determine how the cloud fits into your plans. Early NBMD devices evaluated malware within the device (an on-box sandbox), but recent advances and new offerings have moved some or all of the analysis to cloud compute farms. You also need to figure out whether to deploy the device inline, in order to block malware before it gets in. Blocking whatever you can may sound like an easy decision, but there are trade-offs to consider – as there always are.

To Cloud or Not to Cloud?

On-box versus in-cloud malware analysis has become one of those religious battlegrounds vendors use to differentiate their offerings from one another. Each company in this space has a 70-slide deck to blow holes in the competition’s approach. But we have no use for technology religion, so let’s take an objective look at the options.

Since the on-box analysis of early devices, many recent offerings have shifted to cloud-based malware analysis. The biggest advantage of local analysis is reduced latency – you don’t need to send the file anywhere, so you get a quick verdict. But there are legitimate issues with on-device analysis, starting with scalability. You need to evaluate every file that comes in through every ingress point, unless you can immediately tell it’s bad from a file hash match. That requires an analysis capability on every Internet connection to avoid missing something. Depending on your network architecture this may be a serious problem, unless you have centralized both ingress and egress at a small number of locations. For distributed networks with many ingress points, the on-device approach is likely to be quite expensive.

In the previous post we presented the 2nd Derivative Effect (2DE), whereby customers benefit from the network effect of working with a vendor who analyzes a large quantity of malware across many customers. The 2DE affects the cloud analysis choice in two ways. First, with local analysis, malware determinations need to be sent up to a central distribution point, normalized, de-duped, and then distributed to the rest of the network. That added step extends the window of exposure to the malware. Second, the actual indicators and tests need to be distributed to all on-premise devices so they can take advantage of the latest tests and data. Cloud analysis effectively provides a central repository for all file hashes, indicators, and testing – significantly simplifying data management.

We expect cloud-based malware analysis to prevail over time. But your internal analysis may well determine that latency is more important than cost, scalability, and management overhead – and we’re fine with that. Just make sure you understand the trade-offs before making a decision.
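The file hash shortcut mentioned above is worth spelling out. Here is a minimal Python sketch of the triage flow; the known-bad hash set and the sandbox submission function are hypothetical placeholders, not any vendor’s API.

```python
# Minimal NBMD triage sketch: cheap hash lookup first, full sandbox
# analysis (on-box or cloud) only for files we cannot classify instantly.
# KNOWN_BAD_HASHES and submit_to_sandbox() are hypothetical placeholders.
import hashlib

KNOWN_BAD_HASHES = {
    # SHA-256 of the empty file, included purely for the demo below
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def submit_to_sandbox(data: bytes) -> str:
    """Placeholder for dynamic analysis: a real sandbox detonates the
    file and returns a verdict after some latency."""
    return "pending-analysis"

def triage(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "block"  # immediate verdict, no analysis latency
    return submit_to_sandbox(data)  # unknown file: pay the latency cost

if __name__ == "__main__":
    print(triage(b""))            # hash matches the demo list -> block
    print(triage(b"new binary"))  # unknown -> pending-analysis
```

The design point: the hash lookup is effectively free, so it runs at every ingress point, while the expensive sandbox (wherever it lives) only sees files nobody has classified yet.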
Inline versus out-of-band

The next deployment crossroads is deciding where the NBMD device sits in the network flow. Is the device deployed inline, so it can block traffic? Or will it be used more as a monitor, inspecting traffic and sending alerts when malware goes past? We see the vast majority of NBMD devices currently deployed out-of-band – delaying the delivery of files during analysis (whether on-box or in the cloud) tends to go over like a lead balloon with employees. They want their files (or apps) now, and they show remarkably little interest in how controlling malware risk may impact their ability to work.

All things being equal, why wouldn’t you go inline, for the ability to get rid of malware before it can infect anything? Isn’t that the whole point of NBMD? It is, but inline deployment is a high-wire act. Block the wrong file or break a web app and there is hell to pay. If the NBMD device you championed goes down and fails closed – blocking everything – you may as well start working on your resume. That’s why most folks deploy NBMD out-of-band for quite some time, until they are comfortable it won’t break anything important. But of course out-of-band deployment has its own downsides, well beyond a limited ability to block attacks before it’s too late. The real liability with out-of-band deployment is working through the alerts. Remember – each alert requires someone to do something. The alert must be investigated, and the malware identified quickly enough to contain any damage. Depending on staffing, you may be cleaning up a mess even when the NBMD device flags a file as malware. That has serious ramifications for the NBMD value proposition.

In the long run we don’t see much question. NBMD will reside within the perimeter security gateway. That’s our term for the single box that encompasses NGFW, NGIPS, web filter, and other capabilities. We see this consolidation already, and it will not stop. So NBMD will inherently be inline. Then you get a choice of whether or not to block certain file types or malware attacks. Architecture goes away as a factor, and you get a pure choice: blocking or alerting. Deploying the device inline gives you the best of both worlds – and the choice.

The Egress Factor

This series focuses on the detection part of the malware lifecycle, but we need to at least touch on preventative techniques available to ensure bad stuff doesn’t leave your network, even if the malware gets in. Remember the Securosis Data Breach Triangle: if you break the egress leg and stop exfiltration, you have stopped the breach. It is simple to say, but not to do. Everything is now encapsulated on port 80 or 443, and we face new means of exfiltration. We have seen tampering with consumer storage protocols (Google Drive/Dropbox) to slip files out of a network, as well as exfiltration 140 characters at a time through Twitter. Attackers can be pretty slick. So what to do? Get back to aggressive egress filtering on your perimeter, and block the unknown. If you cannot identify an application in the outbound stream, block it. This requires NGFW-type application inspection and classification capabilities and a broad application library, but ultimately
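To make “block the unknown” concrete, here is a toy Python sketch of the default-deny decision at the egress point. The port-based classifier is a crude stand-in for real NGFW application inspection, and the approved application set is hypothetical.

```python
# Toy sketch of default-deny egress filtering: outbound flows that cannot
# be classified as a known, approved application get blocked.
# classify_flow() is a crude stand-in for NGFW application inspection,
# which looks at payloads and behavior, not just destination ports.
ALLOWED_APPS = {"https", "dns", "smtp"}  # hypothetical approved set

def classify_flow(flow: dict) -> str:
    port_map = {443: "https", 53: "dns", 25: "smtp"}
    return port_map.get(flow["dst_port"], "unknown")

def egress_decision(flow: dict) -> str:
    app = classify_flow(flow)
    # Default deny: if we can't identify the application, it doesn't leave.
    return "allow" if app in ALLOWED_APPS else "block"

if __name__ == "__main__":
    flows = [
        {"dst_ip": "203.0.113.10", "dst_port": 443},   # classified as HTTPS
        {"dst_ip": "198.51.100.7", "dst_port": 6667},  # unknown -> block
    ]
    for flow in flows:
        print(flow, "->", egress_decision(flow))
```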


Talking Head Alert: Mike on Phishing Webcast

If you have nothing better to do tomorrow at 2 pm EDT, and want to learn a bit about what’s new in phishing (there is a lot of it, but that’s not new) and how to use email-based threat intelligence to deal with it, join me and the folks from Malcovery Security for a webcast. I will be covering the content from the Email-based Threat Intelligence paper, and the folks from Malcovery will share a bunch of their research into phishing trends. It should be an interesting event, so don’t miss it… You can register now.


Incite 6/12/2013: The Wall of Worry

Anxiety is something we all deal with on a daily basis. It is a feature of the human operating system. Maybe it’s that mounting pile of bills, or an upcoming doctor’s appointment, or a visit from your in-laws, or a big deadline at work. It could be anything, but the anxiety triggers our fight-or-flight mechanisms, causes stress, and over time takes a severe toll on our health and well-being.

Culturally I come from a long line of worriers. Neuroses are just something we get used to, because everyone I know has them (including me) – some are just more vocal about it than others. I think every generation thinks they have it tougher than the previous one. But this isn’t a new problem. It’s the same old story, although things do happen faster now and bad news travels instantaneously.

I stumbled across a review of a 1934 book called You Can Master Life, which put everything into context. If you recall, 1934 was a pretty stressful time in the US. There was this little thing called the Great Depression, and it screwed some folks up. I recently learned my great-grandfather lost the bank he owned at the time, so I can only imagine the strain he was under. The book presents a worry table, which distinguishes between justified and unjustified worries and then systematically reasons why you don’t need to worry about most things. For instance, it seems this fellow worried 40% of the time about disasters that never happened, and another 30% about past actions he couldn’t change. Right there, 70% of his worry had no basis in reality. When he was done he had figured out how to eliminate 92% of his unjustified fears. So what’s the secret to defeating anxiety?

What, of this man, is the first step in the conquest of anxiety? It is to limit his worrying to the few perils in his fifth group. This simple act will eliminate 92% of his fears. Or, to figure the matter differently, it will leave him free from worry 92% of the time.

Of course that assumes you have rational control over what you worry about. And who can really do that? I guess what works best for me is to look at it in terms of control. If I control it, then I can and should worry about it. If I don’t, I shouldn’t. Is NSA surveillance (which Adrian and I discuss below) concerning? Yes. Can I really do anything about it – beyond stamping my feet and blasting the echo chamber with all sorts of negativity? Nope. I only control my own efforts and integrity. Worrying about what other folks do, or don’t do, doesn’t help my situation. It just makes me cranky.

They say Wall Street climbs a wall of worry, and that’s fine. If you spend your time climbing a similar wall of worry you may achieve things, but at great cost – not just to you, but to those around you. Take it from me – I know all about it.

To be clear, this is fine-tuning stuff. I would never minimize the severity of a medical anxiety disorder. Unfortunately I have some experience with that as well, and folks who cannot control their anxiety need professional help. My point is that for those of us who just seem to find things to worry about, a slightly different attitude, and a focus on things you can control, can do wonders to relieve some of that anxiety and make your day a bit better.

–Mike

Photo credit: “Stop worrying about pleasing others so much, and do more of what makes you happy.” originally uploaded by Live Life Happy

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway.
Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • API Gateways: Security Enabling Innovation
  • Security Analytics with Big Data: Integration, New Events and New Approaches; Use Cases; Introduction
  • Network-based Malware Detection 2.0: The Network’s Place in the Malware Lifecycle; Scaling NBMD; Evolving NBMD; Advanced Attackers Take No Prisoners
  • Quick Wins with Website Protection Services: Deployment and Ongoing Management; Protecting the Website; Are Websites Still the Path of Least Resistance?

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

Snowing the NSA: Once again security (and/or monitoring) is front and center in the media this week. This time it’s the leak that the NSA has been monitoring social media and webmail traffic for years. Perhaps under the auspices of a secret court, and perhaps not. I believe Rob Graham’s assessment that the vast majority of intelligence personnel bend over backward to protect citizens’ rights. But it is still shocking to grasp the depth of our surveillance state. Still, as I mentioned above, I try not to worry about things I can’t control. So how did Edward Snowden pull off the leak? The NY Times has a great article about the gyrations reporters went through over a 6-month period to get the story. A Rubik’s Cube? Really? Snowden came clean, but they would have found him eventually – we always leave a trail. Another interesting link regarding the situation is how someone social engineered the hotel where Snowden was staying to get his room number, and learned that he had already checked out. If you want to be anonymous, it’s probably better not to use your real name, eh? – MR

Present Tense: As someone who has been blogging about privacy for almost a decade, I am surprised by how vigorous the public reaction has been to spying on US citizens via telecom carriers. When Congress granted immunity to telecoms for spying on users back in 2008, was it not obvious that corporate entities are now the third-party data harvesters, and government


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.