Securosis Research

Firestarter: So you want to multicloud?

This is the first in a series of Firestarters covering multicloud. Using more than one IaaS cloud service provider is, well, a bit of a nightmare. Although this is widely recognized by anyone with hands-on cloud experience, that doesn’t mean reality always matches our desires. From executives worried about lock-in to M&A activity, we are finding that most organizations are being pulled into multicloud deployments. In this first episode we lay out the top-level problems and recommend some strategies for approaching them.


What We Know about the Capital One Data Breach

I’m not a fan of dissecting complex data breaches when we don’t have any information. In this case we do know more than usual, thanks to the details in the complaint filed by the FBI. I want to be very clear that this post isn’t to blame anyone, and we have only the most basic information on what happened. The only person we know is worthy of blame here is the attacker.

As many people know, Capital One makes heavy use of Amazon Web Services. We know AWS was involved in the attack because the federal complaint specifically mentions S3. But this wasn’t a public S3 bucket. Again, all from the filed complaint:

  • The attacker discovered a server (likely an instance – it had an IAM role) with a misconfigured firewall. It presumably had a software vulnerability or was vulnerable due to a credential exposure.
  • The attacker compromised the server and extracted its IAM role credentials. These ephemeral credentials allow AWS API calls. Role credentials are rotated automatically by AWS and are much more secure than static credentials, but with persistent access you can obviously refresh them as needed.
  • Those credentials (an IAM role with ‘WAF’ in the name) allowed listing S3 buckets and read access to at least some of them. This is how the attacker exfiltrated the files.
  • Some buckets (maybe even all) were apparently encrypted, and a lot of the data within those files (which included credit card applications) was encrypted or tokenized. But the impact was still severe.
  • The attacker exfiltrated the data and then discussed it in Slack and on social media.
  • Someone in contact with the attacker saw that information, including attack details posted on GitHub. This person reported it to Capital One through their reporting program.
  • Capital One immediately involved the FBI and very quickly closed the misconfigurations. They also began their own investigation.
  • They were able to determine exactly what happened very quickly, likely through CloudTrail logs. Those contained the commands issued by that IAM role from that server (which are very easy to find). They could then trace back the associated IP addresses. There are many other details on how they found the attacker in the complaint, and it looks like Capital One did quite a bit of the investigation themselves.

So: misconfigured firewall (Security Group?) > compromised instance > IAM role credential extraction > bucket enumeration > data exfiltration. Followed by a rapid response and public notification. As a side note, it looks like the attacker may have been a former AWS employee, but nothing indicates that was a factor in the breach.

People will say the cloud failed here, but we saw breaches like this long before the cloud was a thing. Containment and investigation seem to have actually run far faster than would have been possible on traditional infrastructure. For example, Capital One didn’t need to worry about the attacker turning off local logging – CloudTrail captures everything that touches AWS APIs. Normally we hear about these incidents months or years later, but in this case we went from breach to arrest and disclosure in around two weeks.

I hope that someday Capital One will be able to talk about the details publicly so the rest of us can learn. No matter how good you are, mistakes happen. The hardest problem in security is solving simple problems at scale. Because simple doesn’t scale, and what we do is damn hard to get right every single time.
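To make the CloudTrail point concrete, here is a minimal sketch (not Capital One’s actual process) of how an investigator might pull the API activity attributed to a suspect role session using boto3. The role session name and the 14-day window are placeholder assumptions; CloudTrail’s lookup API only covers management events, so object-level S3 reads would require data-event logging or querying the full trail.

```python
# Minimal sketch: list recent CloudTrail events attributed to a suspect IAM role
# session, highlighting S3 API calls. Assumes boto3 credentials are configured.
# "example-waf-role-session" is a hypothetical value; for assumed roles the
# Username attribute generally matches the role session name.
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)  # placeholder investigation window

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[
        {"AttributeKey": "Username", "AttributeValue": "example-waf-role-session"}
    ],
    StartTime=start,
    EndTime=end,
)

for page in pages:
    for event in page["Events"]:
        if event.get("EventSource") == "s3.amazonaws.com":
            print(event["EventTime"], event["EventName"], event.get("Username"))
```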


DisruptOps: Build Your Own Multi-Cloud Security Monitoring in 30 Minutes or Less with StreamAlert

One of the most difficult problems in cloud security is building comprehensive multi-account/multi-cloud security monitoring and alerting. I’d say maybe 1 out of 10 organizations I assess or work with has something effective in place when I first show up. That’s why I added a major monitoring lab based on Airbnb’s StreamAlert project to the Securosis Advanced Cloud Security and Applied DevSecOps training class (we still have some spots available for our Black Hat 2019 class). Read the full post at DisruptOps.


Apple Flexes Its Privacy Muscles

Apple events follow a very consistent pattern, which rarely changes beyond the details of the content. This consistency has gradually become its own language. Attend enough events and you start to pick up the deliberate undertones Apple wants to communicate, but not express directly. They are the facial and body expressions beneath the words of the slides, demos, and videos.

Five years ago I walked out of the WWDC keynote with a feeling that those undertones were screaming a momentous shift in Apple’s direction: that privacy was emerging as a foundational principle for the company. I wrote up my thoughts at Macworld, laying out my interpretation of Apple’s privacy principles. Privacy had been growing in importance at Apple for years before that, but that WWDC keynote was the first time they so clearly articulated that privacy not only mattered, but was being built into foundational technologies.

This year I sat in the WWDC keynote, reading the undertones, and realized that Apple is upping their privacy game to levels never before seen from a major technology company. Beyond improving privacy in its own products, the company is starting to use its market strength to push privacy throughout the tendrils that touch the Apple ecosystem. Regardless of motivation – whether altruism, the personal principles of Apple executives, or simply shrewd business strategy – Apple’s stance on privacy is historic and unique in the annals of consumer technology. The real question now isn’t whether they can succeed at a technical level, but whether Apple’s privacy push can withstand the upcoming onslaught from governments, regulators, the courts, and competitors. Apple has clearly explained that they consider privacy a fundamental human right. Yet history is strewn with the remains of well-intentioned champions of such rights.

How privacy at Apple changed at WWDC19

When discussing these shifts in strategy, at Apple or any other technology firm, it’s important to keep in mind that the changes typically start years before outsiders can see them, and are more gradual than we perceive. Apple’s privacy extension efforts started at least a couple of years before WWDC14, when Apple first started requiring privacy protections to participate in HomeKit and HealthKit.

The most important privacy push from WWDC19 is Sign In with Apple, which offers benefits to both consumers and developers. In WWDC sessions it became clear that Apple is using a carrot-and-stick approach with developers: the stick is that App Review will require apps which offer competing sign-in options from Google or Facebook to also support Apple’s new service; the carrot is that developers gain Apple’s high security and fraud prevention. Apple IDs are vetted by Apple and secured with two-factor authentication, and Apple provides developers with the digital equivalent of a thumbs-up or thumbs-down on whether the request is coming from a real human being. Apple uses the same mechanisms to secure iCloud, iTunes, and App Store purchases, so that signal should be a strong indicator of whether a request is legitimate.

Apple also emphasized that this privacy extends to developers themselves: it isn’t Apple’s business to know how developers engage with users inside their apps. Apple serves as an authentication provider and collects no telemetry on user activity. This isn’t to say that Google and Facebook abuse their authentication services; Google denies any such accusation and offers features to detect suspicious activity.
Facebook, on the other hand, famously abused phone numbers supplied for two-factor authentication, as well as a wide variety of other user data.

The difference between Sign In with Apple and previous privacy requirements within the iOS and Mac ecosystems is that the feature extends Apple’s privacy reach beyond its own walled garden. Previous requirements, from HomeKit to data usage limitations on apps in the App Store, really only applied to apps on Apple devices. This is technically true for Sign In with Apple, but practically speaking the implications extend much further. When developers add Apple as an authentication provider on iOS they also need to add it on other platforms if they expect customers to ever use anything other than Apple devices – either that or support a horrible user experience (which, I hate to say, we will likely see plenty of). Once you create your account with an Apple ID, there are considerable technical complexities to supporting non-Apple login credentials for that account. So providers will likely support Sign In with Apple across their platforms, extending Apple’s privacy reach beyond its own platforms.

Beyond sign-in

Privacy permeated WWDC19 in both presentations and new features, but two more features stand out as examples of Apple extending its privacy reach: a major update to Intelligent Tracking Prevention for web advertising, and HomeKit Secure Video. Privacy Preserving Ad Click Attribution is a surprisingly ambitious effort to drive privacy into the ugly user and advertising tracking market, and HomeKit Secure Video offers a new privacy-respecting foundation for video security firms which want to be feature competitive without the mess of building (and securing) their own back-end cloud services.

Intelligent Tracking Prevention is a Safari feature to reduce the ability of services to track users across websites. The idea is that you can and should be able to enable cookies for one trusted site, without having additional trackers monitor you as you browse to other sites. Cross-site tracking is endemic to the web, with typical sites embedding dozens of trackers. This is largely to support advertising and answer a key marketing question: did an ad lead you to visit a target site and buy something? Effective tracking prevention is an existential risk to online advertisements and the sites which rely on them for income, but this is almost completely the fault of overly intrusive companies. Intelligent Tracking Prevention (combined with other browser privacy and security features) is a stick, and Privacy Preserving Ad Click Attribution is the corresponding carrot: it promises to enable advertisers to track conversion rates without violating user privacy. As an upcoming Safari feature and a proposed web standard, it has browsers remember ad clicks for seven days. If


DisruptOps: The Security Pro’s Quick Comparison: AWS vs. Azure vs. GCP

I’ve seen a huge increase in the number of questions about cloud providers beyond AWS over the past year, especially in recent months, so I decided to write up an overview comparison over at DisruptOps. This will be part of a slow-roll series going into the differences across the major security program domains – including monitoring, perimeter security, and security management. Here’s an excerpt:

The problem for security professionals is that security models and controls vary widely across providers, are often poorly documented, and are completely incompatible. Anyone who tells you they can pick up on these nuances in a few weeks or months with a couple training classes is either lying or ignorant. It takes years of hands-on experience to really understand the security ins and outs of a cloud provider.

… AWS is the oldest and most mature major cloud provider. This is both good and bad, because some of their enterprise-level options were basically kludged together from underlying services that weren’t architected for the scope of modern cloud deployments. But don’t worry – their competitors are often kludged together at lower levels, creating entirely different sets of issues.

… Azure is the provider I run into the most when running projects and assessments. Azure can be maddening at times due to lack of consistency and poor documentation. Many services also default to less secure configurations. For example, if you create a new virtual network and a new virtual machine on it, all ports and protocols are open. AWS and GCP always start with default deny, but Azure starts with default allow.

… Like Azure, GCP is better centralized than AWS, because many capabilities were planned out from the start – compared to AWS features which were only added a few years ago. Within your account, Projects are isolated from each other except where you connect services. Overall GCP isn’t as mature as AWS, but some services – notably container management and AI – are class leaders.
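Those network defaults are worth verifying rather than trusting, whichever provider you use. As a rough illustration of the kind of check involved (AWS side only, and not taken from the DisruptOps post), here is a minimal boto3 sketch that flags security group rules open to the entire internet; it assumes credentials and a region are already configured.

```python
# Minimal sketch (AWS only): flag security group rules that allow inbound
# traffic from anywhere. Assumes boto3 credentials and region are configured.
import boto3

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            open_v4 = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
            open_v6 = any(r.get("CidrIpv6") == "::/0" for r in perm.get("Ipv6Ranges", []))
            if open_v4 or open_v6:
                print(
                    sg["GroupId"],
                    sg.get("GroupName", ""),
                    perm.get("IpProtocol"),
                    perm.get("FromPort", "all"),
                    perm.get("ToPort", "all"),
                )
```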


Selecting Enterprise Email Security: The Buying Process

To wrap up this series we will bring you through the process of narrowing down the short list and then testing the products and/or services in play. With email it’s less subjective because malicious email is… well, malicious. But given the challenges of policy management at scale (discussed in our last post), you’ll want to ensure a capable UX and sufficient reporting capabilities as well.

Let’s start with the first rule of buying anything: you drive the process. You’ll have vendors who want you to use their process, their RFP/RFI language, their PoC guide, and their contract language. That’s all well and good if you want to buy their product. But what you want is the best product to solve your problems, which means you need to drive your own selection process.

We explained in our introduction that a majority of attacks start with a malicious email, so selecting the best platform remains critical for enterprises. You want to ensure your chosen vendor addresses the email-borne threats of not just today, but tomorrow as well. A simple fact of the buying process is that no vendor ever says, “We’re terrible at X, but you should buy us because Y is what’s most important to you.” Even though they should. It’s up to you to figure out each vendor’s real strengths and weaknesses and line them up against your requirements. That’s why it’s critical to have a firm handle on your requirements before you start talking to vendors.

The first step is to define your short list of 2-3 vendors who appear to meet your needs. You accomplish this by talking to folks on all sides of the decision. Start with vendors, but also talk to friends, third parties (like us), and possibly resellers or managed service providers. When meeting vendors stay focused on how their tool addresses your current threats and their expectations for the next wave of email attacks. Make any compliance or data protection issues (or both) very clear, because they drive the architecture and capabilities you need to test. Don’t be afraid to go deep with vendors. You will spend a bunch of time testing platforms, so you should ask every question you can to make an educated decision. The point of the short list is to disqualify products that won’t work early in the process, so you don’t waste time later.

Proof of Concept

Once you have assembled the short list it’s time to get hands-on with the email security platforms and run each through its paces in a Proof of Concept (PoC) test. The PoC is where sales teams know they have a chance to win or lose, so they bring their best and brightest. They raise doubts about competitors and highlight their own capabilities and successes. They have phone numbers for customer references handy. But forget all that now. You are running this show, and the PoC needs to follow your script – not theirs.

Preparation

Vendors design PoC processes to highlight their product strengths and hide weaknesses. Before you start any PoC, be clear about the evaluation criteria. Your criteria don’t need to be complicated. Your requirements should spell out the key capabilities you need, with a plan to further evaluate each challenger on squishier aspects such as set-up/configuration, change management, customization, user experience/ease of use, etc. With email it all starts with accuracy, so you’ll want to see how well the email security platforms detect and block malicious email. Of course you could stop there and determine the winner based on who blocks 99.4%, which is better than 99.1%, right? Yes, we’re kidding.
You also need to pay attention to manageability at scale. Preparation involves figuring out the policies you’ll want to deploy on the product. These policies need to be consistent across all of the products and services you test. Here are some ideas on policies to think about:

  • Email routing
  • Blocked attacks (vs. quarantined)
  • Spam/phishing reporting
  • Email plug-in
  • Threat intelligence feeds to integrate
  • Disposition of email which violates policy
  • Attributes requiring email encryption
  • Integration with enterprise security systems: SIEM, SOAR, help desk

And we’re sure there are a bunch of other policy drivers we missed. Work with each vendor’s sales team to make sure you can exercise the product or service to its fullest capabilities. Also track any additional policies, above and beyond the policies you defined for all the competitors – you want an apples-to-apples comparison, but you also want to factor in additional capabilities offered by any competitor. One more thing: we recommend investing in screen capture technology. It is hard to remember what each tool did and how – especially after you have worked a few unfamiliar tools through the same paces. Capture as much video as you can of the user experience – it will come in handy when you reach the decision point.

Without further ado, let’s jump into the PoC.

Testing

Almost every email system (Exchange, Office 365, G Suite, etc.) provides some means of blocking malicious email, so that is the base level for comparison. The next question is whether you want to take an active or passive approach during the PoC. In an active test you introduce malicious messages (known malware and phishing messages) into the environment to track whether the product or service catches messages which should be detected. A passive test runs the product against your actual mail stream, knowing it will see a bunch of spam, phishes, and attacks if you look at enough messages. To undertake an active test you need access to these malicious messages, which isn’t a huge impediment – there are sites which provide known phishing messages, and plenty of places to get malware for testing. Of course you’ll want to take plenty of precautions to ensure you don’t self-inflict a real outbreak. There is risk in an active test, but it enables you to evaluate false negatives (missing malicious messages), which create far more damage than false positives (flagging
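For the active-test side, a common low-risk starting point is a standardized test message rather than live malware. Here is a minimal sketch that injects the GTUBE anti-spam test string through the gateway under evaluation; the host, port, and addresses are placeholders for your own test environment, and a real PoC would extend this with sanctioned phishing and malware samples.

```python
# Minimal sketch: inject a harmless, industry-standard test message (GTUBE) into
# a test mailbox during an active PoC. GTUBE is designed to be flagged as spam
# by any functioning filter, so no real malware is involved. Host, port, and
# addresses below are placeholders.
import smtplib
from email.message import EmailMessage

GTUBE = "XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X"

msg = EmailMessage()
msg["From"] = "poc-sender@example.com"       # placeholder sender
msg["To"] = "poc-target@example.com"         # placeholder test mailbox
msg["Subject"] = "PoC active test: GTUBE sample"
msg.set_content(f"Active-test message for the email security PoC.\n{GTUBE}\n")

# Send through the gateway or service under test (placeholder host/port).
with smtplib.SMTP("mail-under-test.example.com", 25) as smtp:
    smtp.send_message(msg)
```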


Selecting Enterprise Email Security: Scaling to the Enterprise

As we continue down the road of Selecting Enterprise Email Security, let’s home in on the ‘E’ word: Enterprise. Email is a universal application, and scaling up protection to the enterprise level is all about managing email security in a consistent way. So this post will dig into selecting the security platform, integrating with other enterprise security controls, and finally some adjacent services which can improve the security of your email, and so should be considered as part of broad protection.

Platform

The first choice is which platform you will build your email security on. Before you can compare one vendor against another you need to determine where the platform will run: in the cloud or on-premise. It’s not really much of a decision anymore, though. Certain industries and use cases favor one over the other, but overall email security is clearly moving to the cloud.

The cloud is compelling for email security because it removes some problematic aspects of managing the platform from your responsibility. When you get hit with a spam flood, if your platform is in the cloud, upgrading devices to handle the load is not your problem. When the underlying product needs to be updated, patching it is not your problem. You don’t need to make sure detections are updated. The cloud provider takes care of all that, which means you can focus on other stuff. Leveraging cloud security shifts a whole bunch of problems onto your provider. Bravo!

Another essential aspect of enterprise email security is the ability to recover and keep the business running in case of a mail system outage. Your email security platform can provide resilience/continuity for your email system by sending and receiving messages even if your primary email system is down or shaky. If you’ve ever had a widespread email outage and lived to tell the tale, it’s a no-brainer – ensuring the uninterrupted flow of messages tends to be Job #1, #2, and #3 for the IT group.

So in what use cases or industries does an on-premise email security gateway make sense? In highly sensitive environments where email absolutely, positively, cannot run through a service provider’s network. Email encryption enables you to protect mail even as it passes through the cloud, but that adds a lot of overhead and complexity. Some industries and verticals – think national security – find the cloud simply unacceptable. Or perhaps we should say not acceptable yet, because at some point we expect you to look back nostalgically at your data center – a bit like how you think fondly about wired telephones today. To avoid any ambiguity: aside from those kinds of high-security environments, we believe email security platforms should reside in the cloud.

Content Protection

Blocking malicious email is the top requirement of an email security platform, but a close second is advanced content protection. This could involve DLP-like scanning of messages, and encrypting messages and/or attachments depending on message content and enterprise policies. Most email security offerings include content analysis, and typically built-in encryption as well. In terms of content analysis, you’ll want sophisticated analysis to be a core feature. That means “DLP-light,” which we described years ago (Intro, Technologies, Process). It’s not full DLP, but it provides sufficient content analysis to detect sensitive data, and enough customization to handle your particular data and requirements.
The platform should be able to fingerprint sensitive data types and use built-in, industry-specific, and customizable dictionaries to pinpoint sensitive content. Once a potential violation is identified, you’ll want sufficient policy granularity to enable different actions depending on message content, destination, attachment, etc. The more involved the employee can be in handling those issues (with reporting and oversight, of course), the less your central security team will get bogged down dealing with DLP alerts – a huge issue for full DLP solutions.

Speaking of actions: depending on content analysis and policy, the message in question could be blocked or automatically encrypted. The most prevalent means of email encryption is the secure delivery server, which provides control over encrypted files (messages) by encrypting them and sending them to a secure messaging service/server. The recipient gets a link to the secure message, and with proper authentication can access it via the service. Having sensitive data in a place you control enables you to set policies regarding expiration, printing, replying, forwarding, etc., based on the sensitivity of the content.

Integration

The base email security platform scans your inbound email, drops spam, analyzes and explodes attachments, rewrites URLs, identifies imposter attacks, looks for sensitive content, and may encrypt a subset of messages which cannot leave your environment in the clear. But to scale email security to your enterprise, you’ll want to integrate it with other enterprise controls.

Email Platform

The integration point that rises above all others is your email platform, especially if it is in the cloud (most often Office 365 or G Suite). It’s trivial to route your inbound email to a security platform, which then passes clean email to your server. Integration with the platform enables you to protect outbound email, and also to scan internal email as discussed in our last post. You have options to integrate your security platform with your email server whether email runs in the cloud or not, and whether security runs in the cloud or not. Just be wary of the complexity of managing dozens of email routing rules, and of ensuring that outbound email from a specific group is sent through the proper gateway or service on the way out. Again, this isn’t overly complicated, but it requires diligence (particularly at scale), because if you miss a route mail can go unprotected. Keep in mind that integration for internal email scanning is constrained by the capabilities of the email provider’s API. The big email service providers have robust APIs which provide sufficient access, but verify exactly what’s available from your provider.

Management

An enterprise email security gateway is a key part of your security infrastructure, so it should be tightly integrated into
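To make the “DLP-light” idea concrete, here is a minimal sketch of regex-based content analysis mapped to placeholder policy actions. The patterns, data types, and actions are illustrative assumptions only; real platforms layer on fingerprinting, dictionaries, and validation such as Luhn checks.

```python
# Minimal sketch of "DLP-light" content analysis: regex-based detection of a few
# sensitive data patterns, mapped to a placeholder policy action. Real platforms
# add fingerprinting, dictionaries, and validation (e.g., Luhn checks).
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Placeholder policy: which action to take per detected data type.
POLICY = {
    "credit_card": "encrypt",
    "us_ssn": "block",
}

def evaluate_message(body: str) -> str:
    """Return the most restrictive action triggered by the message body."""
    severity = {"allow": 0, "encrypt": 1, "block": 2}
    decision = "allow"
    for data_type, pattern in PATTERNS.items():
        if pattern.search(body):
            action = POLICY[data_type]
            if severity[action] > severity[decision]:
                decision = action
    return decision

if __name__ == "__main__":
    print(evaluate_message("Card number 4111 1111 1111 1111 attached"))  # encrypt
    print(evaluate_message("SSN 123-45-6789 for onboarding"))            # block
```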


Selecting Enterprise Email Security: Detection Matters

As we covered in the introduction to our Selecting Enterprise Email Security series, even after more than a decade of trying to address the issue, email-borne attacks are still a scourge on pretty much every enterprise. That doesn’t mean the industry hasn’t made progress – it’s just that between new attacker tactics and the eternal fallibility of humans clicking on things, we’re arguably in about the same place we’ve been all along. As you consider upgrading technologies to address these email threats, let’s focus on detection – the cornerstone of any email security strategy.

To improve detection we need to address issues on multiple fronts. First we’ll look at threat research, which is critical to identifying attacker tactics and maintaining information sources of known malicious activity. Then you need to ensure detection will scale to your needs, and implement attack-specific detection for phishing and Business Email Compromise (BEC). Finally we’ll evaluate the use of internal email analysis as another mechanism to identify malicious activity within the environment.

Threat Research: the Foundation of Detection

The general tactics used to detect email attacks, such as behavioral analysis and file-based antivirus, are commoditized. There is little value in these tactics by themselves, but many detection techniques working together can be highly effective. It’s a bit like mixing a cocktail: you can have five different liquors, but knowing the proportions to use is what lets you concoct a tasty drink. Modern detection is largely about knowing which tactics and techniques to use, and even more about being able to adapt their composition and mixture, because attacks always change.

So threat research is contingent on a mature and robust analytics capability. It’s about blending sources like multiple AV engines, malicious URL databases, and sender reputation databases to determine the optimal mix and weighting of each input. It’s necessary to have a sufficiently large corpus of both good and bad email to identify common components and patterns of malicious email, which then filters back into the detection cocktail. Threat research requires analytics infrastructure and data scientists to run it effectively. During the courting process with potential vendors it’s helpful to understand their threat research capability in terms of resourcing/investment, skills, and output. Sure, having a research team find a new and innovative attack and getting tons of press is laudable, but it doesn’t help you detect malicious email. We recommend you focus on meat-and-potatoes activity, like how often detections are updated, and how long it takes a new finding to be rolled out to protect all customers.

Applied Threat Research

Once you are comfortable with a potential provider’s threat research foundation, the next area to evaluate is how that information is put to use within a gateway or service. For instance, how do behavioral detections work within the gateway or service? You’ll want to know how the offering protects URLs. You learned about their URL database above, but what happens when a URL is not in the database? Do they render it in a sandbox? Do they use techniques like URL rewriting and stripping malicious domains from email to protect users from attacks? Then focus on finding malicious attachments. How are inbound files analyzed? Does the provider have a sandbox service to perform analysis?
What latency does analyzing a file add, and in the meantime is the message held, or sent to the user while the sandbox runs in the background? Will the service convert files to a safe format and deliver that, while maintaining availability of the original?

What about impersonation attacks (often called Business Email Compromise, or BEC), where attackers try to convince employees that a message is legitimate, so they take some unauthorized action (like wiring a ton of money to the attacker’s bank account)? This is another form of social engineering, but these attacks can be detected by looking for header anomalies and watching for sender spoofing approaches, such as changing the display name and using lookalike domains. Even something simple like marking messages that come from outside your domain can prompt employees to scrutinize messages a bit more carefully before clicking a link or taking action.

And let’s not forget about phishing. Does the provider have a means of tracking phishing campaigns across their customer base? Can they identify phishing sites and help get them taken down? Phishing is old news, but like many email attacks it seems to have a half-life measured in decades.

Finally, how easy is it to categorize users and build appropriate policies for each group? For example, some groups have legitimate business requirements to receive files from external sources (HR for resumes, Finance for invoices, etc.), but other employee groups shouldn’t get many email attachments at all, or are likely to click links to compromised sites. Managing these policies at enterprise scale makes a big difference in the effectiveness of detection. We’ll discuss this more in our next post.

Internal Analysis to Detect Proliferation

Historically, email security happened upon receipt of email. Once a message was deemed legitimate it went on its way to the user, and if the gateway missed an attack you hoped to detect it with another control. Over the past few years more enterprises have started evaluating internal email traffic to detect missed attacks (those dreaded false negatives). For example, you can identify lateral movement of an attack campaign by tracking the same email to multiple employees. The ability to monitor and even remove malicious emails from a user’s mailbox offers a measure of retrospective protection, addressing the fact that you will miss some attacks. Once you identify a message as bad, you can find out which users received it, how many opened it, and whether they clicked the link – and remove it from their inboxes before more damage occurs. Another advantage of integrating security with internal email servers is outbound protection. You can check email for sensitive data and malicious attachments before it is sent, providing an earlier chance to stop an attack than
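As a rough illustration of the impersonation checks mentioned above, here is a minimal sketch that flags display-name spoofing of protected users and lookalike sender domains. The executive names, domains, and similarity threshold are hypothetical placeholders; production detection blends many more signals (header anomalies, authentication results, sender reputation).

```python
# Minimal sketch of two BEC-style impersonation checks: a sender display name
# matching a protected executive but coming from an external domain, and a
# sender domain that is a near-miss (lookalike) of the real one.
# Names and domains below are placeholders.
from difflib import SequenceMatcher
from email.utils import parseaddr

PROTECTED_NAMES = {"Jane Smith", "Pat Jones"}   # hypothetical executives
INTERNAL_DOMAINS = {"example.com"}              # hypothetical real domain

def lookalike(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are close to, but not exactly, an internal domain."""
    return any(
        domain != real and SequenceMatcher(None, domain, real).ratio() >= threshold
        for real in INTERNAL_DOMAINS
    )

def check_sender(from_header: str) -> list:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    findings = []
    if display_name in PROTECTED_NAMES and domain not in INTERNAL_DOMAINS:
        findings.append("display-name spoof of protected user")
    if lookalike(domain):
        findings.append(f"lookalike domain: {domain}")
    return findings

# Example: lookalike domain using a digit "1" instead of the letter "l".
print(check_sender('"Jane Smith" <ceo@examp1e.com>'))
```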


Selecting Enterprise Email Security: Introduction

It’s 2019, and we’re revisiting email security. Wait, what? Did we step out of a time machine and end up in 2006? Don’t worry – you didn’t lose the past 13 years in a cloud of malware (see what we did there?). But before we discuss the current state of email security, we thought we should revisit what we wrote about email security in our 2012 RSA Guide:

We thought we were long past the anti-spam discussion – isn’t that problem solved already? Apparently not. Spam still exists, that’s for sure, but any given vendor’s efficiency varies from 98% to 99.9% effective in any given week. Just ask them. Being firm believers in Mr. Market, clearly there is enough of an opportunity to displace incumbents, as we’ve seen new vendors emerge to provide new solutions, and established vendors blend their detection techniques to improve effectiveness. There is a lot of money spent specifically on spam protection, and it’s a visceral issue that remains high profile when it breaks, so it’s easy to get budget. Couple that with some public breaches from targeted phishing attacks or malware infections through email, and anti-spam takes on a new focus. Again.

To be clear, that was seven years ago. The more things change, the more they stay the same. We, as an industry, still struggle with protecting email – which remains the number one attack vector. That’s some staying power! We can be a little tongue-in-cheek here, but it underscores a continuing problem that seems to defy a solution: employees. Email users remain the weakest link, clicking all sorts of stuff they shouldn’t. Over and over again. You’ve probably increased your investment in security awareness training, as most enterprises are moving in that direction. We recently wrote a paper on Making an Impact with Security Awareness Training to cover that very topic, so check that out.

In this series, Selecting Enterprise Email Security, we’re going to hit on the technologies and how to evaluate them to protect your email. Before we get into that, let’s first thank our initial licensee, Mimecast, who has graciously agreed to potentially license this report at the end of the project. Remember, you benefit by gaining access to our research, gratis, because folks like Mimecast understand the importance of educating the industry.

Steady Progress

We can joke a bit about the Groundhog Day nature of email security, but let’s acknowledge that the industry has made progress. Email providers (including Microsoft and Google) take security far more seriously, bundling detection capabilities into their base email SaaS offerings. These may not be the best (we’ll dig into that later in this series), but we prefer even mediocre built-in security to none at all. The arms race of detecting email-borne threats continues, with security vendors making significant investments in complementary technologies (such as malware analysis and security awareness training), purpose-built phishing solutions emerging, and a focus on threat intelligence to help the industry learn from common attacks. As in many other aspects of security, the emergence of better and more accurate analytics has improved detection. Security vendors have access to billions and billions of both good and bad emails to train machine learning engines, and they have. All the major companies hire as many data scientists as they can find to continually refine detection. We’ll dig into which detection capabilities make an impact (and which don’t) in our next post.
New Attacks

Unfortunately, it turns out adversaries aren’t standing still either. They continue to advance their phishing techniques, especially for campaigns which last hours rather than days – they hit fast and hard, and then their phishing sites are taken down. Financial fraudsters have automated many of their processes and packaged them into easily accessible phishing kits to keep overwhelming defenders. We also see new attacks like BEC (Business Email Compromise), where attackers spoof an internal email address to impersonate a senior executive (perhaps the CFO) requesting that a lower-level employee transfer money to a random bank account. Unfortunately, far too many employees fall for the ruse, assuming what looks like an internal email is legit. And that’s not all. We see continued innovation both in defeating endpoint defenses (even fancy new next-generation AV products) and in preying on the gullibility of employees with social engineering attacks.

So your email system is still a major delivery vehicle for attacks, whether you run it in your data center or someone else’s. That means we need to make sure your email security platform can protect your environment. We’ll go through the latest technological advancements, and define selection criteria to drive your evaluation of enterprise email security solutions. We’ll start by digging into the latest and greatest detection techniques, then walk through the enterprise features needed to scale up email security. Finally we’ll wrap up with perspective on procurement, including how to most effectively test email security services. Again, thanks to Mimecast for licensing this content so you can be brought up to date on the latest and greatest in email security.


DisruptOps: Cloud Security CoE Organizational Models

In the first post of our Cloud Security Center of Excellence series we covered the two critical aspects of being successful at cloud security: accountability and empowerment. Without accepting accountability for securing all the organization’s cloud assets, and being empowered to make changes to the environment in the name of improved security, it’s hard to enforce a consistent baseline of security practices that can dramatically reduce an organization’s attack surface. Read the full post at DisruptOps.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.