
Making an Impact with Security Awareness Training: Quick Wins and Sustained Impact

Our last post explained Continuous Contextual Content as a means to optimize the effectiveness of a security awareness program. CCC acknowledges that users won’t get it, at least not initially. That means you need to reiterate your lessons over and over (and probably over) again. But when should you do that? Optimally when their receptivity is high – when they just made a mistake. So you determine the relative risk of users, and watch for specific actions or alerts. When you see such behavior, you deliver the training within the context of what the employee just did. But that’s not enough. You also want to track the effectiveness of your training (and your security program) to get a sense of what works and what doesn’t. If you can’t close the loop on effectiveness, you have no idea whether your efforts are working, or how to continue improving your program. To solidify the concepts, let’s go through a scenario which works through the process step by step.

Let’s say you work for a large enterprise in the financial industry. Senior management increasingly worries about ransomware and data leakage. A recent penetration test showed that your general security controls are effective, but in their phishing simulation over half your employees clicked a fairly obvious phish. And it’s a good thing your CIO has a good sense of humor, because the pen tester gained full access to his machine via a well-crafted drive-by attack which would have worked against the entire senior team. So your mission, should you choose to accept it, is to implement security awareness training for the company. Let’s go!

Start with Urgency

As mentioned, your company has a well-established security program, so you can hit the ground running using your existing baseline security data. Next identify the most significant risks and triage immediate action to start addressing them. Acting with urgency serves two purposes. It can give you a quick win, and we all know how important it is to show value immediately. As a secondary benefit, you can start working on training employees on a critical issue right away. Your pen test showed that phishing poses the worst problems for your organization, so that’s where you should focus initial efforts.

Given the high-level support for the program, you cajole your CEO into recording a video discussing the results of the phishing test and the importance of fixing the issue. A message like this helps everyone understand the urgency of addressing the problem, and that the CEO will be watching. Following that, every employee completes a series of five 3-5 minute training videos walking them through the basics of email security, with a required test at the end. Of course it’s hard to get 100% participation in anything, so you’ve already established consequences for those who choose not to complete the requirement. And the security team is available to help people who have a hard time passing. It’s a balance between being overly heavy-handed and the importance of training users to defend themselves. You need to ensure employees know about the ongoing testing program, and that they’ll be tested periodically. That’s the continuous part of the approach – it’s not a one-time thing.

Introduce Contextual Training

As you execute on your initial phishing training effort, you also start to integrate your security awareness training platform with existing email, web, and DNS security services. This integration involves receiving an alert when an employee clicks a phishing message, automatically signing them up for training, and delivering a short (2-3 minute) refresher on email security. Of course contextual training requires flexibility, because an employee might be in the middle of a critical task. But you can establish an expectation that a vulnerable employee needs to complete training that day. Similarly, if an employee navigates to a known malicious site, the web security service sends a trigger, and the web security refresher runs for that employee. The key is to make sure the interruption is both contextual and quick. The employee did this, so they need training immediately. Even a short delay will reduce the training’s effectiveness.
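To make the trigger mechanics a bit more concrete, here is a minimal sketch of what such an alert handler could look like. Everything in it – the alert fields, the module names, and the enroll() call – is an illustrative assumption, not any particular email security or training platform’s API.

```python
# Hypothetical sketch of the trigger-driven refresher described above. The alert
# fields, module names, and the enroll() call are illustrative assumptions --
# real email/web security services and training platforms each expose their own APIs.
from datetime import datetime, timedelta

REFRESHERS = {
    "phishing_click": "email-security-refresher",       # short 2-3 minute module
    "malicious_site_visit": "web-security-refresher",   # short 2-3 minute module
}

def handle_security_alert(alert, training_platform):
    """Enroll the affected employee in a short, contextual refresher."""
    module = REFRESHERS.get(alert["type"])
    if module is None:
        return None  # not a training-relevant event

    return training_platform.enroll(
        user=alert["user_email"],
        module=module,
        due=datetime.utcnow() + timedelta(hours=8),  # "complete it that day"
        note=f"Triggered by {alert['type']} at {alert['timestamp']}",
    )
```

The same handler is also a natural place to count repeat offenders, which feeds the analysis and escalation discussed below.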
Additionally, you’ll be running ongoing training and simulations with employees. You’ll perform some analysis to pinpoint the employees who can’t seem to stop clicking things. These employees can get more intensive training, and escalation if they continue to violate corporate policies and put data at risk.

Overhaul Onboarding

After initial triage and integration with your security controls, you’ll work with HR to overhaul the training delivered during the onboarding process. You are now training employees continuously, so you don’t need to spend 3 hours teaching them about phishing and the hazards of clicking links. Onboarding can then shift to focus on establishing a culture of security from Day 1. This entails educating new employees on online and technology policies, and acceptable use expectations. You also have an opportunity to set expectations for security awareness training. Make clear that employees will be tested on an ongoing basis, and inform them who sees the results (their managers, etc.), along with the consequences of violating acceptable use policies. Again, a fine line exists between being draconian and setting clear expectations. If the consequences have teeth (as they should), employees must know, and sign off on their understanding. We also recommend you test each new employee within a month of their start date to ensure they comprehend security expectations and have retained their initial lessons.

Start a Competition

Once your program settles in over six months or so, it’s time to shake things up again. You can set up a competition, inviting the company to compete for the Chairperson’s Security Prize. Yes, you need to get the Chairperson on board for this, but that’s usually pretty easy because it helps the company. The prize needs to be impactful, and more than bragging rights. Maybe you can offer the winning department an extra day of holiday for the year. And a huge trophy. Teams love to compete for trophies they can display prominently in their area. You’ll set the ground rules, including


Making an Impact with Security Awareness Training: Continuous Contextual Content

As we discussed in the first post of our Making an Impact with Security Awareness Training series, organizations need to architect training programs around a clear definition of success, both to determine the most appropriate content to deliver, and to manage management expectations. The definition of success for any security initiative is measurable risk reduction, and that applies just as much to security awareness training. We also covered the limitations of existing training approaches – including weak generic content, and a lack of instrumentation and integration to determine the extent of risk reduction. To overcome these limitations we introduced the concept of Continuous, Contextual Content (3C) as the cornerstone of the kind of training program which can achieve sustainable risk reduction. We described 3C as:

“It’s giving employees the necessary training, understanding they won’t retain everything. Not the first time anyway. Learning requires repetition, but why repeat training to someone who already gets it? That’s a waste of time. Thus to follow up and focus on retention, you want to deliver appropriate content to the employee when they need it. That means refreshing the employee about phishing, not at a random time, but after they’ve clicked on a phishing message.”

Now we can dig in to understand how to move your training program toward 3C.

Start with Users

Any focus on risk reduction requires first identifying the employees who present the most risk to the organization. Don’t overcomplicate your categorization process, or you won’t be able to keep it current. We suggest 4-6 groups categorized by their access to critical information.

  • Senior Management: These individuals have the proverbial keys to the kingdom, so they tend to be targeted by whaling and other adversary campaigns. They also tend to resist extensive training given their other responsibilities. That said, if you cannot get senior management to lead by example and receive extensive training, you have a low likelihood of success with the program overall.
  • Finance: This team has almost the same risk profile as senior management. They access financial reporting systems and the flow of money. Stealing money is the objective of many campaigns, so these folks need a bit more love to prepare for the inevitable attacks.
  • HR and Customer Service: Attackers target Human Resources and Customer Service frequently as well, mostly because they provide the easiest path into the organization; attackers then continue toward their ultimate goal. Interacting with the outside world makes up a significant part of these groups’ job functions, so they need to be well-versed in email attacks and safe web browsing.
  • Everyone else: We could define another dozen categories, but that would quickly pass the point of diminishing returns. The key for this group is to ensure that everyone has a baseline understanding of security, which they can apply when they see attacks.

Once you have defined your categories you design a curriculum for each group. There will be a base level of knowledge for the everyone else group. Then you extend the more advanced curricula to address the most significant risks to each specific group, by building a quick threat model and focusing training to address it. For example, senior management needs a deep understanding of the whaling tactics they are likely to face. Keep in mind that the frequency of formal training varies by group. If the program calls for intensive training during onboarding and semi-annual refreshers, you’ll want more frequent training for HR and Customer Service. Given how quickly attack tactics change, updating training for those groups every quarter seems reasonable to keep them current.
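As a rough illustration of how the groups, curricula, and frequencies might be captured, consider the sketch below. The group names, module lists, and intervals are examples only, not a prescribed taxonomy.

```python
# Illustrative only: one way to encode user groups, curricula, and training
# frequency. Group names, modules, and intervals are example values.
CURRICULA = {
    "senior_management":    {"frequency_days": 180, "modules": ["baseline", "whaling", "wire-fraud"]},
    "finance":              {"frequency_days": 90,  "modules": ["baseline", "bec", "payment-fraud"]},
    "hr_customer_service":  {"frequency_days": 90,  "modules": ["baseline", "phishing-deep-dive", "safe-browsing"]},
    "everyone_else":        {"frequency_days": 180, "modules": ["baseline"]},
}

def next_training(user_group, days_since_last_completed):
    """Return the modules due for a user, based on group and elapsed time."""
    plan = CURRICULA.get(user_group, CURRICULA["everyone_else"])
    if days_since_last_completed >= plan["frequency_days"]:
        return plan["modules"]
    return []

# Example: an HR rep who last completed training 100 days ago is due for a refresher.
print(next_training("hr_customer_service", 100))
```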
Continuous

Just as we finish saying you need to define the frequency for your different user groups, the first “C” is continuous. What gives? A security training program encompasses both formal training and ad hoc lessons as needed. Attackers don’t seem to take days off, and the threat landscape changes almost daily. Your program needs to reflect the dynamic nature of security and implement triggers to initiate additional training. You stay current by analyzing threat intelligence, looking for significant new attacks which warrant additional training. Ransomware provides a timely example of this need. A few years ago, when the first ransomware attacks hit, most employees were not prepared to defend against the attack, and they certainly didn’t know what to do once the ransomware locked their devices. For these new attack vectors you may need to put together a quick video explaining the attack and what to do in the event the employee sees it. To be clear, speed matters here, so don’t worry about your training video being perfect – just get something out there to prepare your employees for an imminent attack. Soon enough your security training vendor will update existing training and introduce new material based on emerging attacks, so make sure you pay attention to available updates within the training platform.

Continuous training also involves evaluating not just potential attacks identified via threat intel, but also changes in the risk profile of an employee. To keep on top of each employee’s risk profile, integrate with other security tools, including email security gateways, web security proxies and services, web/DNS security tools, DLP and other content inspection technologies, security analytics including user behavior analytics (UBA), etc. These integrations set the stage for contextual training.

Contextual

If any of the integrated security monitors or controls detects an attack on a specific user, or determines a user did something which violates policy, it provides an opportunity to deliver ad hoc training on that particular attack. The best time to train an employee, and have the knowledge stick, remains when they are conscious of its relevance. People have different learning styles, and their receptivity varies, but they should be much more receptive right after making a mistake. Their fresh experience puts the training in context. Similar to teaching a child not to touch a hot stove after they’ve burnt their hand, showing an employee how to detect a phishing message is more impactful right after they’ve clicked on one. We’ll dig in with a detailed example in our next post. To wrap up our earlier frequency discussion, you have


Firestarter: Advanced Persistent Tenacity

Mike and Rich discuss the latest Wired piece on NotPetya, and how advanced attacks, despite the hype, are very much still alive and well. These days you might be a victim not because you are targeted, but because you are a pivot to a target or share some underlying technology. As a new Apache Struts vulnerability rolls out, we thought it a good time to re-address some fundamentals and evaluate the real risks of both widespread and targeted attacks.

Watch or listen:


Making an Impact with Security Awareness Training: Structuring the Program

We have long been fans of security awareness training. As explained in our 2013 paper Security Awareness Training Evolution, employees remain the last line of defense, and in all too many cases those defenses fail. We pointed out many challenges facing security awareness programs, and have since seen modest improvement in some of those areas. But few organizations rave about their security awareness training, which means we still have work to do. In our new series, Making an Impact with Security Awareness Training, we will put the changes of the last few years into proper context, and lay out our thoughts on how security awareness training needs to evolve to provide sustainable risk reduction.

First we need to thank our friends at Mimecast, who have agreed to potentially license the content at the end of the project. After 10 years, Securosis remains focused on producing objective research through a transparent methodology. So we need security companies which understand the importance of our iterative process of posting content to the blog and letting you, our readers, poke holes in it. Sometimes our research takes unanticipated turns, and we appreciate our licensee’s willingness to allow us to write impactful research – not just stuff which covers their products.

Revisiting Security Awareness Training Evolution

Before we get going on making an impact, we need to revisit where we’re coming from. Back in 2013 we identified the challenges of security awareness training as:

  • Engaging students: Researchers have spent a lot of time discovering the most effective ways to structure content to teach information with the best retention. But most security awareness training materials seem to be stuck in the educational dark ages, and don’t take advantage of these insights. So the first and most important issue is that training materials aren’t very good. For all training, content is king.
  • Unclear objectives: When training materials attempt to cover every possible attack vector they get diluted, and students retain very little of the material. Don’t try to boil the security ocean with an overly broad curriculum. Focus on specific real threats which are likely in your environment.
  • Incentives: Employees typically don’t have any reason to retain information past the completion of training, or to use it on a daily basis. If they click the wrong thing, IT will come to clean up the mess, right? Without either positive or negative incentives, employees forget courses as soon as they finish.
  • Organizational headwinds: Political or organizational headwinds can sabotage your training efforts. There are countless reasons other groups within your organization might resist awareness training, but many of them come back to a lack of incentive – mostly because they don’t understand how important it is. And failure to make your case is your problem.

The industry has made minor progress in these areas, mostly around engaging content. The short and entertaining content emerging from many awareness training companies does a better job of engaging employees. Compelling characters and a liberal sprinkling of humor help make their videos more impactful and less reminiscent of a root canal. But we can’t say a lot of the softer aspects, such as incentives and the politics of who controls training, have improved much. We believe improving attitudes toward security awareness training requires first defining success and getting buy-in for the program early and often. Most organizations haven’t done a great job selling their programs – instead defaulting to the typical reasons for security awareness training, such as a compliance mandate or a nebulous desire to have fewer employees click malicious links. Being clear about what success means as you design the program (or update an existing program) will pay significant dividends down the road.

Success by Design

If you want your organization to take security awareness training seriously, you need to plan for that. If you don’t know what success looks like, you are unlikely to get there. To define success you need a firm understanding of why the organization needs the program. Not just because it’s the right thing to do, or because your buddy found a cool vendor with hilarious content. We are talking about communicating the business justification for security awareness training, and more importantly what results you expect from your organization’s investment of time and resources. As mentioned above, many training programs are created to address a compliance requirement or a desire to control risk more effectively. Those reasons make sense, even to business people. But quantifying the desired outcomes presents challenges. We advise organizations to gather a baseline of the issues to be addressed by training. How many employees click on phishing messages each week when you start? How many DLP alerts do you get indicating potential data leakage? These numbers enable you to define targets and work towards them.
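As a trivial sketch of that baseline idea, something like the following is enough to track progress. The numbers and the 50% reduction target are made up for illustration.

```python
# Minimal sketch of baselining a phishing-simulation metric and tracking a target.
# The click counts and the 50% reduction goal are illustrative assumptions.
def click_rate(clicks, messages_delivered):
    """Fraction of simulated phishing messages that were clicked."""
    return clicks / messages_delivered if messages_delivered else 0.0

baseline = click_rate(clicks=412, messages_delivered=800)  # before training, e.g. ~51%
current  = click_rate(clicks=130, messages_delivered=800)  # a later simulation round

target = baseline * 0.5   # example goal: halve the click rate over the first year
print(f"baseline {baseline:.1%}, current {current:.1%}, target {target:.1%}, "
      f"target met: {current <= target}")
```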
We recommend caution – you need to manage expectations, avoiding assumptions of perfection. That means understanding which risks training can alleviate and which it cannot. If the attack involves clicking a link, training can help. If it’s preventing a drive-by download delivered by a compromised ad network, there’s not much employees can do. Once you have managed expectations, it’s time to figure out how to measure employee engagement. You might send out a survey to gain feedback on the content. Maybe you will set up a game where different business units can compete. Games and competition can provide effective incentives for participation. You don’t need to offer expensive prizes. Some groups put in herculean effort to win a trophy and bragging rights. To be clear, employees might need to participate in the training to keep their jobs. Continued employment offers a powerful incentive to participate, but not necessarily to retain the material or have it impact day-to-day actions. So we need a better way to connect training to corporate results.

The True Measure: Risk Reduction

The most valuable outcome is risk reduction, which is what gives security awareness training its impact on corporate results. It’s reasonable to expect awareness training to result in fewer successful attacks and less loss: risk reduction. Every other security control and investment needs to reduce risk, so why hasn’t security awareness


Firestarter: Black Hat and AI… What Could Go Wrong?

In this episode we review the lessons of this year’s Black Hat and DEF CON. In particular, we talk about how things have changed with the students we have in class, now that we’ve racked up over 5 years of running trainings on cloud security. Then we delve into one of the biggest, and most confusing, trends… the mysteries of Artificial Intelligence and Machine Learning. Considering our opinions of natural intelligence, you might guess where this heads…

Watch or listen:


Firestarter: It’s a GDPR Thing

Mike and Rich discuss the ugly reality that GDPR really is a thing. Not that privacy or even GDPR are bad (we’re all in favor), but they do require extra work on our part to ensure that policies are in place, audits are performed, and pesky data isn’t left lying around in log files unexpectedly.

Watch or listen:


Scaling Network Security: The Scaled Network Security Architecture

After considering the challenges of existing network security architectures (RIP, the Moat) we laid out a number of requirements for the new network security. These include the need for scale, intelligence, and flexibility. That’s all well and good, but how do you get there? We’ll wrap up this series by discussing a couple key architectural constructs which will influence how you build your future network security architecture.

But before we go into specifics, let’s wrap a few caveats around the architecture. Not everything works for every organization. There may be cultural impediments to some of the ideas we recommend. We point this out because any new way of doing things can face resistance from folks who will be impacted. You will need to decide which ideas are suitable for your current problems, and which battles are not worth fighting. There may also be technical challenges, especially with very large networks. Not so much conceptually – faster networks and increased flexibility are already common, regardless of the size of your network. The challenge is more in terms of phasing the migration. But nothing we will recommend requires a flash cutover, nor are any of these ideas incompatible with existing network security constructs. We have always advocated customer-controlled migration, which entails deciding when you will embrace new capabilities – not some arbitrary requirement from a vendor or any other influencer.

Access Control Everywhere

Our first construct is access control everywhere. This is pretty fundamental, because network security is about controlling access to key resources. Duh. We have been pointing out that segmentation is your friend for years. But in traditional networks it became very hard to do true access control scalably, because data flows weren’t predictable, workloads and data move around, and users need to connect from wherever they are. The advent of software defined everything (including networks) has given us an opportunity to more effectively manage who gets access to what, and when. The key is setting the policy. Yes, you start with critical data and who can and should access it from where, to set your baseline. But the larger the network, and the more dispersed employees and resources (including mobility and the cloud), the tougher it gets. So you do the best you can with the initial set of policies, and then hit it from the other side. Your new network security should be able to monitor traffic flows and suggest a workable access control policy. Obviously you’ll need to scrutinize and tune the policy while comparing it against your initial cut, but this will accelerate your effort. Returning to the need for flexibility, you should be able to adapt policies as needed – sometimes even on the fly, within parameters defined by policy. That doesn’t mean you need to embrace machines making policy changes without human oversight or intervention, at least at first. In a customer-controlled migration you determine the pace of automation, enabling you to get comfortable with policies and ensure maximum uptime and security.

Applying Security Controls

With segmentation reducing attack surface by preventing unauthorized access to critical resources, you still need to ensure authorized connections and sessions are not doing anything malicious. Devices get compromised, so we can’t forget the prevention and detection tactics we’ve been using on our networks for decades. Those are still very much needed, but as described under requirements, we need to be more intelligent about when security controls are used. You have probably spent a couple million ($CURRENCY) on network security controls, so you might as well make the best use of that investment. Once again we return to the importance of policy-based network security. Depending on the source, destination, application, time of day, geography, and about a zillion other attributes (okay, we may be exaggerating a bit), we want to leverage a set of controls to protect data. Not every control applies to every session, so the network security platform needs to selectively apply controls.
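Here is a rough sketch of what that policy-driven control selection could look like. The segment names, zones, control names, and matching logic are assumptions for illustration – a real platform expresses this as policy, not code.

```python
# Hypothetical illustration of selecting controls based on session context.
# All names and the matching rules below are made up for the sketch.
def controls_for_session(session):
    """Return an ordered list of controls to apply to a network session."""
    controls = []

    if session.get("encrypted"):
        controls.append("decrypt")          # only where policy and the TLS version allow it

    if session["dst_segment"] in {"cardholder-data", "finance-db"}:
        controls += ["ips", "dlp"]          # sensitive destinations get deeper inspection
    elif session["src_zone"] == "internet":
        controls += ["ips", "sandbox"]      # inbound from untrusted networks
    elif session["src_segment"] == session["dst_segment"]:
        pass                                # trusted east-west tier: access control only

    return controls

# Example: an inbound HTTPS session from the Internet to the DMZ web tier
print(controls_for_session({
    "encrypted": True, "src_zone": "internet",
    "src_segment": "external", "dst_segment": "dmz-web",
}))  # -> ['decrypt', 'ips', 'sandbox']
```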
Decryption

Before you start worrying about which controls to apply to which traffic, you need to make sure you can actually inspect the sessions. With more and more network traffic encrypted nowadays, before you can apply security controls you will likely need to decrypt. We wrote about this at length in Security and Privacy on the Encrypted Network, but things have changed a bit over the past few years. The standard approach to network decryption involves intercepting the connection to the destination (called person-in-the-middle) and then decrypting the session using a master key. The decryption device then routes the decrypted stream to the appropriate security control per policy, and then sets up a separate encrypted connection to the destination server. And yes, our political correctness may be getting the best of us, but we’re pretty sure that network security equipment is not gender-binary, so we like ‘person’ in the middle.

Any network security platform will need to provide decryption capabilities as needed. But that’s getting more complicated, as described in the TLS 1.3 Controversy. Clearly a person in the middle weakens the overall security of a connection, because any organization (some good – like your internal security team; and some bad – like adversaries) could theoretically get in the middle to sniff the session. The TLS 1.3 specification addresses that weakness by implementing Perfect Forward Secrecy, which uses a different key for each session, so there is no single master key which could be used to monitor everything. Obviously not being able to get into the middle of network sessions eliminates your ability to inspect traffic and enforce security policies on the network. To be clear, it will take a long time for TLS 1.3 to become pervasive; in the meantime your connections can negotiate down to TLS 1.2, which still allows person-in-the-middle. But we need to start thinking about different, likely endpoint-centric, approaches to inspecting traffic before it hits the encrypted network.

Contextual Protection

Assuming we can inspect traffic on the network, we want to implement a policy-centric security approach. That means identifying the traffic and determining which security control(s) are appropriate based on the specifics of the connection. Context helps


Scaling Network Security: The New Network Security Requirements

In our last post we bid adieu to The Moat, given the encapsulation of almost everything into standard web protocols and the movement of critical data to an expanding set of cloud services. Additionally, the insatiable demand for bandwidth further complicates how network security scales. So it’s time to reframe the requirements of the new network security. Basically, as we rethink network security, what do we need it to do?

Scale

Networks have grown exponentially over the past decade. With 100Gbps networks commonplace and the need to inspect traffic at wire speed, let’s just say scale is towards the top of the list of network security requirements. Of course, as more and more corporate systems move from data centers to cloud services, traffic dynamics change fundamentally. But pretty much every enterprise we run into still has high-speed networks which need to be protected. So you can’t punt on scaling up your network security capabilities. How has network security scaled so far? Basically using two techniques.

  • Bigger Boxes: The old standby is to throw more iron at the problem. Yet at some point the security controls just aren’t going to get there – whether in performance or cost feasibility, or both. There is certainly a time and a place for bigger and faster equipment; we aren’t disputing that. But your network security strategy cannot depend on the unending availability of bigger boxes to scale.
  • Limit Inspection: The other option is to selectively decide where and what kind of security inspection takes place. In this model some (hopefully) lower-risk traffic is not inspected. Of course that ultimately forces you to hope you’ve selected what to inspect correctly. We’re not big fans of hope as a security strategy.

The need for speed isn’t just pegged to increasing network speeds – it also depends on the types of attacks you’ll see and the amount of traffic preprocessing required. For example, with today’s complicated attacks you may need to perform multiple kinds of analyses to detect an attack, which requires more compute power. Additionally, with the increasing amount of encrypted traffic on networks, you need to decrypt packets prior to inspection, which is also tremendously resource intensive. Even if you are looking at a network security appliance rated for 80Gbps of threat detection throughput, you need to really understand the kind of inspection being performed, and whether it would detect the attacks you are worried about. We don’t like either compromise: spending a crapton of money to buy the biggest security box you can find (which still might not be big enough), or deciding to just not inspect some portion of traffic. The scaling requirements for the new network security are:

  • No Security Compromises: You need the ability to inspect traffic which may be part of an attack. Period. To be clear, that doesn’t mean all traffic on the network, but you need to be able to enforce security controls where necessary.
  • Capacity Awareness: I think I saw a bumper sticker once which said “TRAFFIC HAPPENS.” And it does. So you need to support a peak usage scenario without having to pre-provision for 100% usage. That’s what’s so attractive about the cloud: you can scale up and contract your environment as needed. It’s not as easy on your networks, but that’s the mentality we want to use. Understand that security controls are capacity constrained, and make sure those devices are not overwhelmed with traffic and don’t start dropping packets.

So what happens when network speeds are upgraded, which does happen from time to time? You want to upgrade your security controls on your own timetable, which coincidentally brings both scaling requirements into alignment. You can’t compromise on security just because network speeds increased. And a network upgrade actually represents a legitimate burst. So if you can satisfy those two requirements, you’ll be able to gracefully handle network upgrades without impacting your security posture.
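A back-of-the-envelope sketch of that capacity awareness idea follows. The control names and throughput numbers are made up for illustration.

```python
# Rough sketch: check whether the traffic selected for inspection stays within
# what each control can handle. Capacities and loads are illustrative values.
CONTROL_CAPACITY_GBPS = {"ips": 40, "sandbox": 10, "dlp": 20}

def oversubscribed(expected_inspected_gbps):
    """Return the controls whose expected load exceeds their rated throughput."""
    return {
        control: load
        for control, load in expected_inspected_gbps.items()
        if load > CONTROL_CAPACITY_GBPS.get(control, float("inf"))
    }

# Peak-hour estimate after a link upgrade: the sandbox would start dropping packets.
print(oversubscribed({"ips": 35, "sandbox": 14, "dlp": 12}))  # -> {'sandbox': 14}
```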
Intelligent and Flexible

The key to not compromising on security is to intelligently apply the controls required. For example, not all traffic needs to be run through the network-based sandbox or the DLP system. Some network sessions between two trusted tiers in your application architecture just require access control. In fact, you might not need security inspection at all on some sessions. In all cases you should be making the decisions about where security makes sense – not being forced by the capabilities of your equipment. This requires the ability to enforce a security policy and implement security controls where they are needed.

  • Classification: Figuring out which controls should be applied to a network session depends first on understanding the nature of the session. Is it associated with a certain application? Is the destination a segment or server you know holds sensitive data?
  • Policy-based: Once you know the nature of the traffic, you need the ability to apply an appropriate security policy. That means some controls are in play and others aren’t. For example, if it’s an encrypted traffic stream you’ll need to decrypt it first, so off to the SSL decryption gear. Or, as we described above, if it’s traffic between trusted segments, you can likely skip running it through a network sandbox.
  • Multiple Use Cases: Security controls are used both in the DMZ/perimeter and within the data center, so your new network security environment should reflect those differences. There is likely more inspection required for inbound traffic from the Internet than for traffic from a direct connection to your public cloud. Both are external networks, but they generally require different security policies.
  • Cloud Awareness: You can’t forget about the cloud, even though network security there can differ significantly from your corporate networks. So whatever kinds of policies you implement on-premise, you’ll want an analog in the cloud. Again, the controls may be different and deployment will be different, but the level of protection must be consistent regardless of where your data resides.

The new network security architecture is about intelligently applying security controls at scale, with a clear understanding that your applications, attackers, and technology infrastructure constantly evolve. Your networks will


Scaling Network Security: RIP, the Moat

The young people today laugh at folks with a couple decades of experience when they reminisce about the good old days, when your network was snaked along the floors of your office (shout out for Thicknet!), trusted users were on the corporate network, and untrusted users were not. Suffice it to say the past 25 years have seen some rapid changes to technology infrastructure. First of all, in a lot of cases there aren’t even any wires. That’s kind of a shocking concept to a former network admin who fixed a majority of problems by swapping out patch cords. On the plus side, with the advent of wireless and widespread network access, you can troubleshoot a network from the other side of the world.

We’ve also seen continuing insatiable demand for network bandwidth. Networks grow to address that demand each and every year, which stresses your ability to protect them. Network security solutions still need to inspect and enforce policies, regardless of how fast the network gets. Looking for attack patterns in modern network traffic requires a totally different amount of computing power than it did in the old days. So a key requirement is to ensure that your network security controls can keep pace with network bandwidth, which may be Mission: Impossible. Something has to give at some point, if the expectation remains that the network will be secure.

In this “Scaling Network Security” series, we will look at where secure networking started and why it needs to change. We’ll present requirements for today’s networks which will take you into the future. Finally we’ll wrap up with some architectural constructs we believe will help scale up your network security controls. Before we get started we’d like to thank Gigamon, who has agreed to be the first licensee of this content at the conclusion of the project. If you all aren’t familiar with our Totally Transparent Research methodology, it takes a forward-looking company to let us do our thing without controlling the process. We are grateful that we have many clients who are more focused on impactful and educational research than marketing sound bites or puff pieces about their products.

The Moat

Let’s take a quick tour through the past 20 years of network security. We appreciate the digression – we old network security folks get a bit nostalgic thinking about how far we’ve come. Back in the day the modern network security industry really started with the firewall, which implemented access control on the network. Then a seemingly never-ending set of additional capabilities was introduced in the decades since. Next was network Intrusion Detection Systems (IDS), which looked for attacks on the network. Rather than die, IDS morphed into IPS (Intrusion Prevention Systems) by adding the ability to block attacks based on policy. We also saw a wave of application-oriented capabilities in the form of Application Delivery Controllers (ADC) and Web Application Firewalls (WAF), which applied policies to scale applications and block application attacks.

What did all of these capabilities have in common? They were all based on the expectation that attackers were out there. Facing an external adversary, you could dig a moat between them and your critical data to protect it. That was best illustrated by the concept of Default Deny, a central secure networking concept for many years. It held that if something wasn’t expressly authorized, it should be denied. So if you didn’t set up access to an application or system, it was blocked.
That enabled us to dramatically reduce attack surface by restricting access to only those devices which should be accessed.

Is Dead…

The moat worked great for a long time. Until it didn’t. A number of underlying technology shifts chipped away at the underlying architecture, starting with the Web. Yeah, that was a big one. The first was encapsulation of application traffic into web protocols (predominantly ports 80 and 443) as the browser became the interface of choice for pretty much everything. Firewalls were built to enforce access controls by port and protocol, so this was problematic. Everything looked like web traffic, which you couldn’t really block, so the usefulness of traditional firewalls was dramatically impaired, putting much more weight on deeper inspection using IPS devices. But the secure network would not go quietly into the long night, so a new technology emerged a decade ago, which was unfortunately called the Next Generation Firewall (NGFW). It actually provides far more capabilities than an old access control device, with the ability to peek into application sessions, profile them, and both detect threats and enforce policies at the application level. These devices were really more Network Security Gateways than firewalls, but we don’t usually get to come up with category names, so it’s NGFW. The advent of the NGFW was a boon to customers who were very comfortable with moat-based architectures. So they spent the last decade upgrading to the NGM architecture: Next Generation Moat.

Scaling Is a Challenge

Yet as described above, networks have continued to scale, which has increased the compute power required to implement an NGM. Yes, security processors have gotten faster, but not as fast as packet processors. Then you have the issue of the weakest link. If you have network security controls which cannot keep pace, you run the risk of dropping packets, missing attacks, or more likely both. To address this you’d need to upgrade all your network-based security controls at the same time as your network, to ensure protection at peak usage. That seriously complicates upgrades. So your choice is between:

  • $$$ and Complexity: Spend more money (multi-Gbps network security gateways aren’t cheap) and complicate the upgrade project to keep network and network security controls in lockstep.
  • Oversubscribe security controls: You can always take the bet that even though the network is upgraded, bandwidth consumption will take some time to scale up beyond what your network security controls can handle.

Of course you don’t want all your eggs in one basket – or more accurately, all your controls focused on one area of the environment. That’s why you implemented compensating controls within application stacks and on endpoint devices. But


SecMon State of the Union: The Buying Process

Now that you’ve revisited your important use cases and derived a set of security monitoring requirements, it’s time to find the right fit among the dozens of alternatives. To wrap up this series we will bring you through a reasonably structured process to narrow down your short list, and then test the surviving products. Once you’ve chosen the technical winner, you need to make the business side of things work – and it turns out the technical winner is not always the solution you end up buying.

The first rule of buying anything is that you are in charge of the process. You’ll have vendors who will want you to use their process, their RFI/RFP language, their PoC Guide, and their contract language. All that is good and fine… if you want to buy their product. But more likely you want the best product to solve your problems, which means you need to be driving the process. Our procurement philosophy hinges on this.

What we have with security monitoring is a very crowded and noisy market. We have a set of incumbents from the SIEM space, and a set of new entrants wielding fancy math and analytics. Both groups have a set of base capabilities to address the key use cases: threat detection, forensics and response, and compliance automation. But differentiation occurs at the margins of these use cases, so that’s where you will be making your decision. But no vendor is going to say, “We suck at X, but you should buy us because Y is what’s most important to you.” Even though they should. It’s up to you to figure out each vendor’s true strengths and weaknesses, and cross-reference them against your requirements. That’s why it’s critical to have a firm handle on your use cases and requirements before you start talking to vendors.

We divide vendor evaluation into two phases. First we will help you define a short list of potential replacements. Once you have the short list you will test one or two new platforms during a Proof of Concept (PoC) phase. It is time to do your homework. All of it. Even if you don’t feel like it.

The Short List

The goal at this point is to whittle the list down to 3-5 vendors who appear to meet your needs, based on the results of a market analysis. That usually includes sending out RFIs, talking to analysts (egads!), or using a reseller or managed service provider to assist. The next step is to get a better sense of those 3-5 companies and their products. Your main tool at this stage is the vendor briefing. The vendor brings in their sales folks and sales engineers (SEs) to tell you how their product is awesome and will solve every problem you have. And probably a bunch of problems you didn’t know you had too. But don’t sit through their standard pitch – you know what is important to you. You need detailed answers to objectively evaluate any new platform. You don’t want a 30-slide PowerPoint walkthrough and a generic demo. Make sure each challenger understands your expectations ahead of the meeting, so they can bring the right folks. If they bring the wrong people, cross them off. It’s as simple as that – it’s not like you have time to waste.

Based on the use cases you defined earlier in this process, have the vendor show you how their tool addresses each issue. This forces them to think about your problems rather than their scripted demo, and shows off capabilities which will be relevant to you. You don’t want to buy from the best presenter – you want to identify the product which best meets your needs. This type of meeting could be considered cruel and unusual punishment.
But you need this level of detail before you commit to actually testing a product or service. Shame on you if you don’t ask every question needed to ensure you know everything you need to know. Don’t worry about making the SE uncomfortable – this is their job. And don’t expect to get through a meeting like this in 30 minutes. You will likely need a half-day minimum to work through your key use cases. That’s why you will probably only bring 3-5 vendors in for these meetings. You will be spending days with each product during proof of concept, so try to disqualify products which won’t work before wasting even more effort on them. This initial meeting can be a painful investment of time – especially if you realize early that a vendor won’t make the cut – but it is worth doing anyway. You can thank us later.

The PoC

After you finish the ritual humiliation of every vendor sales team, and have figured out which products can meet your requirements, it’s time to get hands-on with the systems and run each through its paces for a couple days. The next step in the process, the Proof of Concept, is the most important – and vendors know that. This is where sales teams have a chance to win, so they tend to bring their best and brightest. They raise doubts about competitors and highlight their own successes. They have phone numbers for customer references handy. But for now forget all that. You are running this show, and the PoC needs to follow your script – not theirs. Given the different approaches represented by SIEM and security analytics vendors, you are best served by testing at least one of each. As you read through our recommended process, it will be hard to find time for more than a couple, but given your specific environment and adversaries, seeing which type best meets your requirements will help you pick the best platform for your needs.

Preparation

Many security monitoring vendors have a standard testing process they run through, basically telling you what data to provide and what attacks to look for – sometimes even with their own resources running their product. It’s like ordering off a prix fixe menu. You pick a few key use cases, and then the SE delivers what you ordered. If the


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.