Old Dog, New Tricks [Final Incite: June 24, 2024]

TL;DR: Back in December, I took a job as head of strategy and technology for a candy-importing company called Dorval Trading. To explain the move I dusted off the confessor structure, and also performed a POPE evaluation of the opportunity below. I’ll be teaching at Black Hat this summer, so I hope to see many of you there. Otherwise you can always reach me at my Securosis email, at least until Rich cancels my account.

It’s another sunny day in the spring. Mike walks into the building. It’s so familiar, yet different. It’s been over 4 years since he’s been here, and it seems lighter. Airier. But the old bones are there. He takes a look around and feels nostalgic. Mike knows this is probably the last time he’ll be here. It’s a very strange feeling. He steps into the booth, as he has done so many times before. He came here to talk through pretty much every major transition since 2006, as a way to document what was going on, and to consider the decisions that needed to be made and why.

Confessor: Hi Mike. 4 years is a long time. What have you been up to?

Mike: It’s nice to be back. I’ve kept myself occupied, that’s for sure. As we recovered from COVID, Rich and I were faced with some big decisions. DisruptOps was acquired, and Rich decided to join Firemon and lead the Cloud Defense product. I was initially going to keep on the Securosis path, but I got an opportunity to join Techstrong and jumped at it.

Confessor: So you and Rich went your separate ways. How did that work out?

Mike: Yes and no. Although we don’t work together full-time anymore, we still collaborate quite a bit. We’re in the process of updating our cloud security training curriculum, and will launch CCSKv5 this summer. So I still see plenty of Rich…

(Mike gets quiet and looks off into space.)

Confessor: What’s on your mind? It seems heavy, but not in a bad way. Kind of like you are seeing ghosts.

Mike: I guess I am. This is probably the last time I’ll be here. You see, I’ve taken a real turn in my career. It’s so exciting but bittersweet. Security is what I’ve done for over 30 years. It’s been my professional persona. It’s how I’ve defined my career and who I am to a degree. But security is no longer my primary occupation.

Confessor: Do tell. It must be a pretty special opportunity to get you to step out of security.

Mike: Would you believe I’ve joined a candy-importing company? I’m running strategy and technology for a business I’ve known for over 40 years. It was very unexpected, but makes perfect sense.

Confessor: How did you stumble into this?

Mike: Stumble is exactly right. You see, Dorval Trading is a family business started by my stepmother’s parents in 1965. She’s been running it since 1992, and as she was looking to her future, she realized she could use some help. So I did some consulting last year after I left Techstrong, and it was a pretty good fit. The company has been around for almost 60 years, and a lot of the systems and processes need to be modernized. We don’t do any direct e-commerce, and since COVID haven’t really introduced a lot of new products. So there is a lot of work to do. Even better, my brother has joined the company as well. After over 20 years in financial services doing procurement operations, he’ll be focused on optimizing our data and compliance initiatives. So I get to see my family every day, and thankfully that’s a great thing for me.

Confessor: Candy?!?! No kidding. What kind of candy? I’m asking for a friend.

Mike (chuckling): Our primary product is Sour Power, the original sour candy, which we’ve imported from the Netherlands since 1985. We also have a line of taffy products, and import specialty candies from Europe. If you grew up in the Northeast US, you may be familiar with Sour Power. And now we sell throughout the country.

Confessor: So, no more security? Really?

Mike: Not exactly. I have been in the business 30 years, and still have lots of friends and contacts. I’m happy to help them out if and when I can. I’ll still teach a few cloud security classes a year, and may show up on IANS calls or an event from time to time. I joined the advisory board of Query.ai, which is a cool federated security search company, and I’m certainly open to additional advisory posts if I can be of help. Learning a new business takes time, but I’m not starting from scratch. In the short time I’ve been with Dorval, I’ve confirmed that business is business. You have to sell more than you spend. You need to have great products and work to build customer loyalty. But there are nuances to working with a perishable, imported product. I also leverage my experience in the security business. I learned a lot about launching products, dealing with distribution channels, and even incident response. In the candy business you need to be prepared for a product recall. So we did a tabletop exercise working through a simulated recall scenario. The key to the exercise was having a strong playbook and making sure everyone knew their job. The recall simulation seemed so familiar, but different at the same time. Which is a good way to sum up everything about my new gig. It turns out the biggest candy conference of the year was the week after RSA, so I couldn’t make it to SF for the conference this year. I did miss seeing everyone, especially at the Disaster Recovery Breakfast. I will be at Black Hat this year, where I’m teaching the maiden voyage of CCSKv5. I look forward to seeing many old friends there.

Confessor: So this is it, I guess?

Mike: It is. But that’s


The THIRTEENTH Annual Disaster Recovery Breakfast: Changing of the Guard

What a long, strange trip it’s been over the last 3 years. In fact, the last time I saw many of you was at the last Disaster Recovery Breakfast in 2020. Within two weeks of that event, the world shut down due to COVID. Well, a lot has changed since then. DisruptOps was acquired by Firemon in September 2021. In early 2022, Rich decided he wanted to see our cloud security vision through and dedicate his full-time efforts to the Cloud Defense product. In July of 2022, I decided to partner with Alan Shimel and Mitch Ashley and join Techstrong as head of the research business. We still do cloud security training and house our cloud security content in Securosis, but we’ve both moved on.

Our long-time venue for the DRB, Jillian’s (then TableTop Tap House) in San Francisco, didn’t survive the pandemic. They went out of business in early 2022 and took our deposit for the 2022 DRB with them. Ouch. Given the lack of venues and the rescheduling of the RSA Conference to June 2022, we couldn’t pull off the breakfast last year.

But this year, we are back. But it’s different. We have a different venue: The Pink Elephant (142 Minna St). We have a different organizer: Techstrong and our Security Boulevard site. We have mostly the same sponsors, so we need to thank our pals at IANS, LaunchTech, and AimPoint Group. Their support is critical. So yes, we’ve had a changing of the guard. But what isn’t different is breakfast. It’s still a place where you can grab some breakfast and see some friends without the pomp and circumstance of a major conference. We hope to see you there.


Heading to Techstrong

The phone rang. On the other end, I heard a booming voice many of you are familiar with. “Hey Mikey! What’s shaking? What’s your plan now that Rich is with Firemon?” It was Alan Shimel, my good friend and head of Techstrong Group. It was maybe 10 minutes after Rich’s announcement had hit Twitter. I told Alan I would stay the course, but he had other ideas. “We should do something together. Think about it.” So I did.

We had a call a few days later and started sketching out what it would look like if I joined Alan and the team. I’d want to build a research team since that’s what I love to do. I’d also like to have a hand in developing the corporate strategy. Alan said that sounded great; when can I start? I wasn’t there yet. I needed to know more about the business. I needed to spend some more time with the team. So I made the pilgrimage down to Boca to do a working session with Alan and see what we could work out.

I learned that Techstrong is at the center of some pretty disruptive technology shifts, like DevOps (yes, DevOps.com is ours), cloud-native computing, containers (containerjournal.com), microservices, and of course, security (securityboulevard.com). There is an excellent events business with tons of virtual events. I’ve been a guest on TechstrongTV more times than I could count, so I know about their video capabilities. And the company has a top-notch customer list. So there is an exciting platform to build on. But could I have an impact?

Next, I dug into the research business that another old friend, Mitchell Ashley, created. There are some short reports and they did some speaking gigs, but Techstrong Research didn’t have a point of view about where the markets are heading. So it was “research,” but not the kind of research I do. So yeah, I can have an impact on Techstrong Research.

The timing also felt right. My youngest kids are off to college in August, so it’s a good time to make some changes. It’s not like my partners at Securosis haven’t done a similar thing. Adrian headed off into corporate cloud land a couple of years ago. Rich made a move to Firemon earlier this year. As much as I loved the 12 years with Securosis, I’m ready to tilt at another windmill. Though it had to be the right situation, and I found that with Techstrong. I’m happy to say I’m taking my talents to ~~South Beach~~ Boca. I’ve taken the role of Chief Strategy Officer of Techstrong Group and General Manager of Techstrong Research.

The intangibles made this an easy decision for me. It’s about working with my friends. It always has been. I have been fortunate to work with Rich and Adrian for the past 12 years. When we spun out DisruptOps, I was able to work with Jody Brazil, Brandy Peterson, and Matt Eberhart. And now I get to work with my good friends Alan, Mitch, and Parker. I have no illusions about how much work lies ahead. I’m back to building a research business, and it’s very exciting. Ultimately I’m a builder, and I’m lucky to have the opportunity to build with another set of good friends.

Securosis is still a thing. Rich and I will continue to run our cloud security curriculum and training activities here. But Securosis will no longer function as an analyst firm. I’ll continue to support existing clients, but that work will transition to Techstrong Research when it makes sense.

I’m not sure if this is good or bad, but you’ll see a lot more of me. I’ll be visible across the Techstrong network, writing, speaking, and interviewing exciting companies. I’ll be publishing trends and forward-looking research and ensuring that Techstrong has a strong point of view about where technology is going. I’ll be at Black Hat, so if you are there, let me know. It’ll be great to meet up, and I can fill you in on all the cool stuff we do at Techstrong.


SOC 2025: Operationalizing the SOC

So far in this series, we’ve discussed the challenges of security operations, making sense of security data, and refining detection/analytics, which are all critical components of building a modern, scalable SOC. Yet there is an inconvenient fact that warrants discussion. Unless someone does something with the information, the best data and analytics don’t result in a positive security outcome. Security success depends on consistent and effective operational motions. Sadly, this remains a commonly overlooked aspect of building the SOC. As we wrap up the series, we’re going to go from alert to action and do it effectively and efficiently, every time (consistently), which we’ll call the 3 E’s. The goal is to automate everything that can be automated, enabling the carbon (you know, humans) to focus on the things that suit them best. Will we get there by 2025? That depends on you; the technology is available, so it’s a matter of whether you use it.

The 3 E’s

First, let’s be clear on the objective of security operations, which is to facilitate positive security outcomes. Ensuring these outcomes requires focusing on the 3 E’s:

  • Effectiveness: With what’s at stake for security, you need to be right because security is asymmetric. The attackers only need to be right once, while defenders need to defeat them every time. In reality, it’s not that simple, as attackers do need to string together multiple successful attacks to achieve their mission, but that’s beside the point. A SOC that only finds stuff sometimes is not successful. You want to minimize false positives and eliminate false negatives. If an alert fires, it should identify an area of interest with sufficient context to facilitate verification and investigation.
  • Efficiency: You also need to do things as quickly as possible, consuming a minimum of resources, given how limited those resources are and the significant damage (especially from an attack like ransomware) that can happen in minutes. You need tooling that makes the analyst’s job easier, not harder. You also need to facilitate communication and collaboration between teams to ensure escalation happens cleanly and quickly. Breaking down the barriers between traditional operational silos becomes a critical path to streamlining operations.
  • Every Time (Consistency): Finally, you need the operational motions to be designed and executed the same way, every time. But aren’t there many ways to solve a problem? Maybe. But as you scale up your security team, having specific playbooks to address issues makes it easy to onboard new personnel and ensure they achieve the first two goals: Effectiveness and Efficiency. Strive to streamline the operational motions (and associated playbooks) over time, as things change and as you learn what works in your environment.

Do you get to the 3 E’s overnight? Of course not. It takes years and a lot of effort to get there. But we can tell you that you never get there unless you start the journey.

Defining Playbooks

The first step to a highly functioning SOC is being intentional. You want to determine the proper operational motions for categories of attacks before you have to address them. The more granular the playbook, the less variance you’ll get in the response and the more consistent your operations. Building the playbooks iteratively allows you to learn what works and what doesn’t, tuning and refining the playbook every time you use it. These are living documents and should be treated as such. So how many playbooks should you define? As a matter of practice, the more playbooks, the better; but you can’t boil the ocean, especially as you get started. Begin by enumerating the situations you see most frequently. These typically include phishing, malware attacks/compromised devices, ransomware, DDoS, unauthorized account creation, and network security rule changes. To be clear, pretty much any alert could trigger a playbook, so ultimately you may get to dozens, if not hundreds. But start with maybe the top 5 alerts detected in your environment.

What goes into a playbook? Let’s look at the components (a minimal sketch follows below):

  • Trigger: Start with the trigger, which will be an alert and have some specific contextual information to guide the next steps.
  • Enrichment: Based on the type of alert, there will be additional context and information helpful for understanding the situation and streamlining the work of the analyst handling the issue. Maybe it’s DNS reputation on a suspicious IP address or an adversary profile based on the command and control traffic. You want to ensure the analyst has sufficient information to dig into the alert immediately.
  • Verification: At this point, a determination needs to be made as to whether the issue warrants further investigation. What’s required to make that call? For a malware attack, maybe it’s checking the email gateway for a phishing email that arrived in the user’s inbox. Or a notification from the egress filter that a device contacted a suspicious IP address. For each trigger, you want to list the facts that will lead you to conclude this is a real issue and assess the severity.
  • Action: Upon verification, what actions need to be taken? Should the device be quarantined and a forensic image of the device be captured? Should an escalation of privileges or firewall rule change get rolled back? You’ll want to determine what needs to be done and document that motion in granular detail, so there are no questions about what should be done. You’ll also look for automation opportunities, which we’ll discuss later in the post.
  • Confirmation: Were the action steps successful? Next, confirm whether the actions dictated in the playbook happened successfully. This may involve scanning the device (or service) to ensure the change was rolled back or making sure the device is no longer accessible to an attacker.
  • Escalation: What’s next? Does it get routed to a 2nd tier for further verification and research? Is it sent directly to an operations team to be fixed if it can’t be automated? Can the issue be closed out because you’ve gotten the confirmation that
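To make the playbook structure above concrete, here is a minimal, hypothetical sketch (in Python) of a phishing playbook expressed as explicit trigger, enrichment, verification, action, confirmation, and escalation stages. The alert fields, thresholds, and integration functions are illustrative assumptions, not a reference to any particular SOAR product or API.

```python
# Hypothetical playbook sketch. The integrations are stubbed out; in practice these
# would call your email gateway, identity provider, threat intel, and ticketing systems.
from dataclasses import dataclass, field


@dataclass
class Alert:
    alert_type: str                      # e.g. "phishing" (the trigger)
    user: str                            # affected user
    sender_domain: str                   # domain observed in the suspicious email
    context: dict = field(default_factory=dict)


# --- Stubbed integrations (placeholders, assumed for illustration) ---
def lookup_domain_reputation(domain: str) -> str:
    return "malicious" if domain.endswith(".badexample.com") else "unknown"

def count_similar_reports(domain: str) -> int:
    return 4                             # pretend other users reported the same sender

def quarantine_message(user: str, domain: str) -> bool:
    return True                          # pretend the email gateway pulled the message

def force_password_reset(user: str) -> bool:
    return True                          # pretend the identity provider reset credentials


def enrich(alert: Alert) -> Alert:
    """Enrichment: add reputation and history so the analyst starts with context."""
    alert.context["domain_reputation"] = lookup_domain_reputation(alert.sender_domain)
    alert.context["similar_reports_24h"] = count_similar_reports(alert.sender_domain)
    return alert

def verify(alert: Alert) -> bool:
    """Verification: documented criteria for deciding this is a real issue."""
    return (alert.context["domain_reputation"] == "malicious"
            or alert.context["similar_reports_24h"] >= 3)

def act(alert: Alert) -> dict:
    """Action: the granular, documented motion for this category of attack."""
    return {
        "message_quarantined": quarantine_message(alert.user, alert.sender_domain),
        "credentials_reset": force_password_reset(alert.user),
    }

def confirm(results: dict) -> bool:
    """Confirmation: check that every action actually succeeded."""
    return all(results.values())

def run_playbook(alert: Alert) -> str:
    """Trigger -> Enrichment -> Verification -> Action -> Confirmation -> Escalation."""
    alert = enrich(alert)
    if not verify(alert):
        return "closed: could not verify, likely benign"
    results = act(alert)
    if confirm(results):
        return "closed: contained automatically"
    return "escalated: tier 2 investigation"    # escalation when automation falls short


print(run_playbook(Alert("phishing", "pat@example.com", "login.badexample.com")))
```

Treat the stages as living documents: the criteria in verify() and the motions in act() are exactly the things you tune every time the playbook runs.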


SOC 2025: Detection/Analytics

We spent the last post figuring out how to aggregate security data. Alas, a lake of security data doesn’t find attackers, so now we have to use it. Security analytics has been all the rage for the past ten years. In fact, many security analytics companies have emerged promising to make sense of all of this security data. It turns out analytics aren’t a separate thing; they are part of every security thing. That’s right, analytics drive endpoint security offerings. Cloud security products? Yup. Network security detection? Those too. It’s hard to envision a security company of scale without analytics playing a central role in providing value to their customers. As a security leader, what do you have to know about analytics and detection as you figure out how the SOC should evolve? First, it’s not about [analytics technique A] vs. [analytics technique B]. It’s about security outcomes, and to get there you’ll need to start thinking in terms of the SOC platform.

Defining the SOC “Platform”

The initial stab at the SOC platform already exists with some overlapping capabilities. You already have a security monitoring capability, maybe an on-prem SIEM. As discussed in the last post, the SOC platform should include threat intelligence. Currently, some organizations use a separate threat intel platform (TIP) to curate and prioritize the incoming external data. The third leg of the SOC platform is operations, where validating, verifying, and ultimately addressing any alerts happens. We’ll have a lot to say about security operations in the next post.

Though it may seem the evolved security operations platform is just bolting together a bunch of stuff you already have, we are advocating for an evolutionary approach in the SOC. You certainly could ditch the existing toolset and start from scratch, and as liberating as that may be, it’s not practical for most organizations. For instance, you’ve spent years tuning your on-prem SIEM to handle existing infrastructure, and you have to keep the SOC operating, given the attackers aren’t going to give you a break to accommodate your platform migration. Thus, it may not make sense to scrap it. Yet. Although you do have to decide where the SOC platform will run, here are some considerations:

  • Data Location: It’s better to aggregate data as close to the originating platform as possible, so you keep cloud-based security data in the cloud, and on-prem systems go into an on-prem repository. That minimizes latency and cost. In addition, you can centralize alerts and context if your operational motions dictate.
  • Operations Approach: Once the alert fires, what then? If you have an operations team that handles both cloud and on-prem issues, then you’ll need to centralize. The next question becomes: do you consolidate the raw security data, or just the alerts and context?
  • Care and Feeding: How much time and resources do you want to spend keeping the monitoring system up and running? There are advantages to using a cloud-based, managed platform that gets you out of the business of scaling and operating the infrastructure.

The long-term trend is towards a managed offering in the cloud, but how quickly you get there depends on your migration strategy. If you’ve decided that your existing SIEM is not salvageable, then you are picking a new platform for everything and migrating as quickly as possible. But we see many organizations taking a more measured approach, focusing on building the foundation of a new platform that can handle the distributed and hybrid nature of computing in the cloud age while continuing to use the legacy platform during the migration.

Analysis

Once you have internal and external data collected and aggregated, you analyze the data to identify the attacks. Easy, right? Unfortunately, there is a lot of noise and vendor puffery about how the analytics actually work, making it confusing to figure out the best approach. Let’s work through the different types of techniques used by SOC tools:

  • Rules and Reputation: Let’s start with signature-based controls, the old standard. You know, the type of correlation your RDBMS-based SIEM performed for decades. Adding patterns enumerated in the ATT&CK framework (which we’ll discuss later in this post) helps narrow the scope of what you need to look for, but you still need to recognize the attack. You’ll need to know what you are looking for.
  • Machine Learning: The significant evolution from simple correlation is the ability to detect an attack you haven’t seen. Advanced analytics can be used to define an activity baseline, and with that baseline defining normal behavior within your environment, your detection engine can look for anomalies (a simple baseline sketch appears below). Getting into the grungy math of different machine learning models and cluster analyses probably won’t help you find attackers faster and more effectively. Continue to focus on the security outcomes during your evaluation. Does it find attacks you are likely to see? How much time and effort will it take to isolate the most impactful alerts? What’s involved in keeping the platform current? And ultimately, how will the platform’s analytics make the team more efficient? Stay focused on ensuring any new platform makes the team better, not on whose math is better.

Use Cases

You may be bored (and maybe frustrated) with our constant harping on the importance of use cases in detecting attacks. There is a method to our madness in that use cases make a pretty nebulous concept more tangible. So let’s dig into a handful of use cases to get a sense of how a SOC platform will favorably impact your detection efforts.

Ransomware

Ransomware doesn’t seem to get as many headlines nowadays, but don’t be fooled by the media’s short attention span. Ransomware continues to be a scourge, and every company remains vulnerable. Let’s examine how an evolved SOC detects ransomware. First, ransomware isn’t new, particularly not the attacks — it typically uses commodity malware for the initial compromise. Attackers are more organized and proficient — once they have a foothold within a victim’s network, they perform extensive reconnaissance to find and destroy
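The machine learning bullet above boils down to baseline-plus-anomaly detection. Here is a minimal, hypothetical sketch of that idea using a simple statistical baseline rather than any vendor’s model; the event counts, threshold, and scenario are illustrative assumptions.

```python
# Minimal sketch of baseline-driven anomaly detection (illustrative, not any vendor's model).
# Idea: learn what "normal" activity volume looks like, then flag large deviations.
from statistics import mean, stdev

# Hypothetical hourly counts of outbound connections from one host over two weeks.
baseline_counts = [42, 38, 51, 47, 44, 39, 50, 46, 41, 45, 48, 43, 40, 49]

def is_anomalous(observed: int, history: list[int], z_threshold: float = 3.0) -> bool:
    """Flag the observation if it sits more than z_threshold standard deviations
    above the historical mean."""
    mu = mean(history)
    sigma = stdev(history) or 1.0
    z_score = (observed - mu) / sigma
    return z_score > z_threshold

# A sudden burst of outbound connections (e.g., staging data for exfiltration)
# stands out against the learned baseline.
print(is_anomalous(44, baseline_counts))   # False: within normal range
print(is_anomalous(400, baseline_counts))  # True: investigate this host
```

Real platforms use far richer features and models, but the evaluation question stays the same: do the anomalies it surfaces help you find attacks you are likely to see?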


SOC 2025: Making Sense of Security Data

Intelligence comes from data. And there is no lack of security data, that’s for sure. Everything generates data: servers, endpoints, networks, applications, databases, SaaS services, clouds, containers, and anything else that does anything in your technology environment. Just as there is no award for finding every vulnerability, there is no award for collecting all the security data. You want to collect the right data to make sure you can detect an attack before it becomes a breach. As we consider what the SOC will look like in 2025, given the changing attack surface and available skills base, we’ve got to face reality. The sad truth is that TBs of security data sit underutilized in various data stores throughout the enterprise. It’s not because security analysts don’t want to use the data; they don’t have a consistent process to evaluate ingested data and then analyze it constantly. But let’s not put the cart before the proverbial horse. First, let’s figure out what data will drive the SOC of the Future.

Security Data Foundation

The foundational sources of your security data haven’t changed much over the past decade. You start with the data from your security controls because 1) the controls are presumably detecting or blocking attacks, and 2) you still have to substantiate the controls in place for your friendly (or not so friendly) auditors. These sources include logs and alerts from your firewalls, IPSs, web proxies, email gateways, DLP systems, identity stores, etc. You may also collect network traffic, including flows and even packets. What about endpoint telemetry from your EDR or next-gen EPP product? There is a renewed interest in endpoint data because remote employees don’t always traverse the corporate network, resulting in a blind spot regarding their activity and security posture. On the downside, endpoint data is plentiful and can create issues in scale and cost. The same considerations must be weighed regarding network packets as well. But let’s table that discussion for a couple of sections, since there is more context to discuss before truly determining whether you need to push all of the data into the security data store.

Use Cases

Once you get the obvious stuff in there, you need to go broader and deeper to provide the data required to evolve the SOC with advanced use cases. That means (selectively) pulling in application and database logs. You probably had an unpleasant flashback to when you tried that in the past. Your RDBMS-based SIEM fell over, and it took you three days to generate a report with all that data in there. But hear us out; you don’t need to get all the application logs, just the relevant ones. Which brings us to the importance of threat models when planning use cases. That’s right, old-school threat models. You figure out what is most likely to be attacked in your environment (think high-value information assets) and then work backward. How would the attacker compromise the data or the device? What data would you need to detect that attack? Do you have that data? If not, how do you get it? Aggregate and then tune. Wash, rinse, repeat for additional use cases (a simple mapping sketch appears below). We know this doesn’t seem like an evolution; it’s the same stuff we’ve been doing for over a decade, right? Not exactly, as the analytics you have at your disposal are much improved, which we’ll get into later in the series. Those analytics are constrained by the availability of security data. Yet you can’t capture all the data, so focus on the threat models and use cases that can answer the questions you need answered.

Cloud Sources

Given the cloudification of seemingly everything, we need to mention two (relatively) new sources of security data: your IaaS (infrastructure as a service) providers and SaaS applications. Given the sensitivity of the data going into the cloud, over the seemingly dead bodies of the security folks who would never let that happen, you’re going to need some telemetry from these environments to figure out what’s happening, whether those environments are at risk, and ultimately to be able to respond to potential issues. Additionally, you want to pay attention to the data moving to/from the cloud, as detecting when an adversary can pivot between your environments is critical. Is this radically different from the application and database telemetry discussed above? Not so much in content, but absolutely in location. The question then becomes what and how much, if any, of the cloud security data do you centralize?

What About External Data?

Nowadays, you don’t just use your data to find attackers. You use other people’s data, or in other words, threat intelligence, which gives you the ability to look for attacks that you haven’t seen before. Threat intel isn’t new either, and threat intel platforms (TIP) are being subsumed into broader SOC platforms or evolving to focus more on security operations or analysts. There are still many sources of threat intel, some commercial and some open source. The magic is understanding which sources will be useful to you. That involves curation and evaluating the relevance of the third-party data. As we contemplate the security data that will drive the SOC, effectively leveraging threat intel is a cornerstone of the strategy.

Chilling by the (Security Data) Lake

In the early days of SIEM, there wasn’t a choice of where or how you would store your security data. You selected a SIEM, put the data in there, started with the rules and policies provided by the vendor, tuned the rules and added some more, generated the reports from the system, and hopefully found some attacks. As security tooling has evolved, now you’ve got options for how you build your security monitoring environment. Let’s start with aggregation. Or what’s now called a security data lake. This new terminology indicates that it’s not your grandad’s SIEM. Rather it’s a place to store significantly more telemetry and make better use of it. It turns out this newfangled data lake doesn’t
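To ground the threat-model exercise above, here is a small, hypothetical sketch of working backward from use cases to the data sources they require, then checking that mapping against what you already collect. The use cases, source names, and current inventory are assumptions for illustration, not a standard taxonomy.

```python
# Illustrative sketch: map threat-model-driven use cases to required data sources,
# then report the collection gaps. Substitute your own threat models and inventory.

required_sources = {
    "ransomware on endpoints": {"edr_telemetry", "email_gateway_logs", "dns_logs"},
    "credential stuffing against SaaS": {"saas_auth_logs", "identity_store_logs"},
    "data theft from cloud storage": {"iaas_api_audit_logs", "storage_access_logs", "egress_flow_logs"},
}

currently_collected = {
    "edr_telemetry", "email_gateway_logs", "identity_store_logs", "egress_flow_logs",
}

# For each use case, report which data you still need to onboard before the
# analytics (covered later in the series) have anything to work with.
for use_case, needed in required_sources.items():
    gaps = needed - currently_collected
    status = "ready" if not gaps else f"missing: {', '.join(sorted(gaps))}"
    print(f"{use_case:40s} -> {status}")
```

The output is simply a gap list per use case, which is the decision this post argues for: collect the right data for the attacks you care about, rather than all the data.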


SOC 2025: The Coming SOC Evolution

It’s brutal running a security operations center (SOC) today. The attack surface continues to expand, in a lot of cases exponentially, as data moves to SaaS, applications move to containers, and the infrastructure moves to the cloud. The tools used by the SOC analysts are improving, but not fast enough. It seems adversaries remain one (or more) steps ahead. There aren’t enough people to get the job done. Those that you can hire typically need a lot of training, and retaining them continues to be problematic. As soon as they are decent, they head off to their next gig for a huge bump in pay. At the same time, security is under the spotlight like never before. Remember the old days when no one knew about security? Those days are long gone, and they aren’t coming back. Thus, many organizations embrace managed services for detection and response, mostly because they have to.

Something has to change. Actually, a lot has to change. That’s what this series, entitled SOC 2025, is about. How can we evolve the SOC over the next few years to address the challenges of dealing with today’s security issues, across the expanded attack surface, with far fewer skilled people, while positioning for tomorrow? We want to thank Splunk (you may have heard of them) for agreeing to be the preliminary licensee for the research. That means when we finish up the research and assemble it as a paper, they will have an opportunity to license it. Or not. There are no commitments until the paper is done, in accordance with our Totally Transparent Research methodology.

SOC, what’s it for?

There tend to be two main use cases for the SOC: detecting, investigating, and remediating attacks; and substantiating the controls for audit/compliance purposes. We are not going to cover the compliance use case in this series. Not because it isn’t important; audits are still a thing, and audit preparation should still be done in as efficient and effective a manner as possible. But in this series, we’re tackling the evolution of the Security OPERATIONS Center, so we’re going to focus on the detection, investigation, and remediation aspects of the SOC’s job.

You can’t say (for most organizations anyway) there hasn’t been significant investment in security tooling over the past five years. Or ten years. Whatever your timeframe, security budgets have increased dramatically. Of course, there was no choice given the expansion of the attack surface and the complexity of the technology environment. But if the finance people objectively look at the spending on security, they can (and should) ask some tough questions about the value the organization receives from those significant investments. And there is the rub. We, as security professionals, know that there is no 100% security. That no matter how much you spend, you can (and will) be breached. We can throw out platitudes about reducing the dwell time or make the case that the attack would have been much worse without the investment. And you’re probably right. But as my driver’s education teacher told me over 35 years ago, “you may be right, but you’ll still be dead.” What we haven’t done very well is manage to Security Outcomes and communicate the achievements. What do we need the outcome to be for our security efforts? Our mindset needs to shift from activity to outcomes. So what is the outcome we need from the SOC? We need to find and fix security issues before data loss. That means we have to sharpen our detection capabilities and dramatically improve and streamline our operational motions. There is no prize for finding all the vulnerabilities, just like there are no penalties for missing them. The SOC needs to master detecting, investigating, and turning that information into effective remediation before data is lost.

Improved Tooling

Once we’ve gotten our arms around the mindset shift in focusing on security outcomes, we can focus on the how. How is the SOC going to get better at detecting, investigating, and remediating attacks? That’s where better tooling comes into play. The good news is that SOC tools are much better than even five years ago. Innovations like improved analytics and security automation give SOCs far better capabilities. But only if the SOC uses them. What SOC leader in their right mind wouldn’t take advantage of these new capabilities? In concept, they all would and should. In reality, far too many haven’t and can’t. The problem is one of culture and evolution. The security team can handle detection and even investigation. But remediation is a cross-functional effort. And what do security outcomes depend on? You guessed it – remediation. So at its root, security is a team sport, and the SOC is one part of the team. This means addressing security issues needs to fit into the operational motions of the rest of the organization. The SOC can and should automate where possible, especially the things within their control. But most automation requires buy-in from the other operational teams. Ultimately, if the information doesn’t consistently and effectively turn into action, the SOC fails in its mission.

Focused Evolution

In this series, we will deal with both internal and external evolution. We’ll start by turning inward and spending time understanding the evolution of how the SOC collects security telemetry from both internal and external sources. Given the sheer number of new data sources that must be considered (IaaS, PaaS, SaaS, containers, DevOps, etc.), making sure the right data is aggregated is the first step in the battle. Next, we’ll tackle detection and analytics since that is the lifeblood of the SOC. Again, you get no points for detecting things, but you’ve got no chance of achieving desired security outcomes if you miss attacks. The analytics area is where the most innovation has happened over the past few years, so we’ll dig into some use cases and help you understand how frameworks like ATT&CK and buzzy marketing terms like eXtended Detection and Response (XDR) should influence


New Age Network Detection: Use Cases

As we wrap up the New Age Network Detection (NAND) series, we’ve made the point that network analysis remains critical to finding malicious activity, even as you move to the cloud. But clearly, collection and analysis need to change as the underlying technology platforms evolve. But that does put the cart a bit ahead of the horse. We haven’t spent much time homing in on the specific use cases where NAND makes a difference. So that’s how we’ll bring the series to a close. To be clear, this is not an exhaustive list of use cases, but it hits the high points and helps you understand the value of NAND relative to other means of detection.

Ransomware

Another day, another high-profile ransomware attack shutting down another major business. Every organization is a target and is vulnerable. So how do you get ahead of ransomware from a detection standpoint? First, let’s discuss what ransomware is and what it’s not. Ransomware involves the adversary compromising devices and then encrypting both the machine and shared file repositories to stop an organization from accessing their data unless they pay the ransom. But ransomware isn’t new, certainly not from an attack standpoint, since it uses relatively common and commodity malware families for the initial compromise. To be clear, the attackers are more organized and have gotten more proficient once they’ve gained a foothold within a victim’s network, doing extensive recon to find and then destroy backups, putting further pressure on victims to pay the ransom. So what’s different now, making ransomware so urgent to address? It’s gotten mainstream press because of the high-profile attacks on pipeline companies and health care systems. When citizens can’t get gas and drive to places, and they can’t get critical care services because the medical systems at a hospital are down, that will get people’s attention, and it has.

NAND helps in the initial stages of the ransomware attack. The adversary uses common malware families to compromise devices. As discussed in the last post, network telemetry can detect command and control traffic patterns and the recon activity within the environment. Additionally, as mentioned above, attackers now take the time to search for and destroy backups, which also involves network recon patterns that NAND can detect. Having your business unable to operate because you missed a ransomware attack is a career-limiting challenge for every CISO. Just out of self-preservation, stopping ransomware has become the single top priority for every CISO. The first step in addressing the ransomware scourge is a broad detection capability to maximize the likelihood of detecting the attack. The network is the first place you’ll see the emerging attack, as well as the ongoing recon and proliferation of the attack to compromise additional devices. Thus NAND is critical to ransomware defense.

Threat Hunting

Threat hunting is proactively looking for attackers in your environment before you get an alert from one of your other detection methods. Unfortunately, most organizations have active attackers in their environments, but they don’t know where or what they are doing until the attackers screw up and trigger an alert. Hunting can identify these attackers and smoke them out before a traditional alert fires, but only if you have sufficient telemetry and know where to look. Hunting does involve more art than science, since the hunter needs to start with an idea of what types of attacks to look for. Then they must effectively and efficiently mine through the security data to find and follow the attacker’s trail. But we shouldn’t minimize the importance of the science part of it: having the data you need and a set of tools to navigate security data. That’s where NAND comes into play, by providing a broad and deep collection capability (including full packets, where necessary) and the ability to effectively pivot through the data, both via search and by clicking through live links in the interface to follow the path the attacker may have taken. To be clear, NAND will not make a noob who has no idea what they are doing into a world-class hunter. Still, it can accelerate and improve any hunt in the hands of a reasonably capable security professional. Further helping the hunter are common hunting queries, typically pre-loaded into the detection tool to kick-start any hunting effort. Again, these rules don’t make the hunt, but they can codify common searches that uncover malicious activities, including drive-by attacks, spearphishing, privilege escalation, credential stuffing, and lateral movement (a simple example appears below). If there is a use case that provides significant value to security executives, it’s hunting. Although not all organizations have the resources to devote staff to hunting, those that do can find attackers before significant damage happens. And this makes the security team look good.

Insider Threat

Insider threat attacks have gotten a lot of visibility within the executive suite as well. The old “inside job” typically involves an employee acting maliciously to steal data or sabotage systems. But we use a broader definition of insiders to include any entity with a presence inside the network. Thus, during most attacks, an insider has access to the internal networks and resources. So how do you detect insider threats? We laid out how NAND facilitates the collection and analysis of your network telemetry, so we’ll leverage those capabilities. Insiders can be anywhere, so you’ll need a broad collection effort, including telemetry from remote employees and cloud resources. From an analysis standpoint, looking for anomalies from the network traffic baseline will be the strongest indication of malicious activity. Focusing on the impact of the insider threat on the business (and the longevity of the CISO), an insider attack is particularly damaging. An employee insider may have access to all sorts of systems and proprietary data, and have the wherewithal (especially for IT insiders) to take down the systems, leave back doors, delete data, and otherwise damage the organization. This use case forces you to question trust because insiders are trusted to do the right thing and have
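As a hypothetical illustration of the kind of search a pre-loaded hunting query codifies, here is a minimal sketch that flags internal hosts fanning out to many peers on admin ports, a common lateral-movement and recon pattern. The flow records, port list, internal address convention, and threshold are all assumptions for illustration.

```python
# Illustrative hunting sketch: flag internal hosts fanning out to many peers on
# admin ports (SMB/RDP). All values below are assumed for illustration only.
from collections import defaultdict

ADMIN_PORTS = {445, 3389}          # SMB, RDP
FAN_OUT_THRESHOLD = 20             # distinct internal peers before we take a look

# Hypothetical flow metadata: (source_ip, destination_ip, destination_port)
flows = [
    ("10.1.4.20", "10.1.5.11", 445),
    ("10.1.4.20", "10.1.5.12", 445),
    ("10.1.4.20", "10.1.5.13", 3389),
    # ... thousands more records from your collection layer
]

peers_per_host = defaultdict(set)
for src, dst, port in flows:
    if port in ADMIN_PORTS and dst.startswith("10."):   # assumed internal range
        peers_per_host[src].add(dst)

suspects = {src: peers for src, peers in peers_per_host.items()
            if len(peers) >= FAN_OUT_THRESHOLD}

for src, peers in suspects.items():
    print(f"hunt lead: {src} touched {len(peers)} internal hosts on admin ports")
```

A real hunt would pivot from these leads into the broader telemetry (and packets, where captured) to follow the trail, which is exactly the workflow described above.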


Papers Posted

It turns out that we are still writing papers and posting them in our research library, even though far less frequently than back in the day. Working with enterprises on their cloud security strategies consumes most of our cycles nowadays. When we’re not assessing clouds or training on clouds or getting into trouble, we’ve published 3 papers over the past year. I’ve finally posted them to the research library for you to check out.

  • Data Security in the SaaS Age: In this paper, licensed by AppOmni, we dust off the Data Security Triangle and then proceed to provide a structure to rethink what data security looks like when you don’t control the data in SaaS land. Direct link
  • Security Hygiene: The First Line of Security: Yup, we’re back to beating the drum for sucking less on the fundamentals like security hygiene. But the fact still remains that we don’t help ourselves by taking too long to update systems and don’t do a good enough job on configuration management. We also go through the impact and benefits of cloud and PaaS to help with these operational challenges. This one has been licensed by Oracle. Direct link
  • Security APIs: The New Application Attack Surface: This paper covers how application architecture and attack surfaces are changing, how application security needs to evolve to deal with these disruptions, and how to empower security in environments where DevOps rules the roost. It’s licensed by Salt Security. Direct link

Always happy to get feedback if there is something you like (or don’t like). Add a comment or send us an email.


New Age Network Detection: Collection and Analysis

As we return to our series on New Age Network Detection, let’s revisit our first post. We argued that we’re living through technology disruption on a scale, and at a velocity, we haven’t seen before. Unfortunately security has failed to keep pace with attackers. The industry’s response has been to move the goalposts, focusing on new shiny tech widgets every couple of years. We summed it up in that first post:

We have to raise the bar. What we’ve been doing isn’t good enough and hasn’t been for years. We don’t need to throw out our security data. We need to make better use of it. We’ve got to provide visibility into all of the networks (even cloud-based and encrypted ones), minimize false positives, and work through the attackers’ attempts to obfuscate their activity. We need to proactively find the attackers, not wait for them to mess up and trigger an alert.

So that’s the goal – make better use of security data and proactively look for attackers. We even tipped our hat to the ATT&CK framework, which has given us a detailed map of common attacks. But now you have to do something, right? So let’s dig into what that work looks like, and we start first with the raw materials that drive security analytics – data.

Collection

In the olden days – you know, 2012 – life was simpler. If we wanted to capture network telemetry we’d aggregate NetFlow data from routers and switches, supplementing with full packet capture where necessary. All activity was on networks we controlled, so it wasn’t a problem to access that data. But alas, over the past decade several significant changes have shifted how that data can be collected:

  • Faster Networks: As much as it seems enterprise data centers and networks are relics of yesteryear, many organizations still run big fast networks on-prem. So collection capabilities need to keep up. It’s not enough to capture traffic at 1gbit/sec when your data center network is running at 100gbit/sec. So you’ll need to make sure those hardware sensors have enough capacity and throughput to capture data, and in many modern architectures they’ll need to analyze it in realtime as well.
  • Sensor Placement: You don’t only need to worry about north/south traffic – adversaries aren’t necessarily out there. At some point they’ll compromise a local device, at which point you’ll have an insider to deal with, which means you also need to pay attention to east/west (lateral) movement. You’ll need sensors, not just at key choke points for external application traffic, but also on network segments which serve internal constituencies.
  • Public Cloud: Clearly traffic to and from internal applications is no longer entirely on networks you control. These applications now run in the public cloud, so collection needs to encompass cloud networks. You’ll need to rely on IaaS sensors, which may look like virtual devices running in your cloud networks, or you may be able to take advantage of leading cloud providers’ traffic mirroring facilities.
  • Web/SaaS Traffic & Remote Users: Adoption of SaaS applications has exploded, along with the population of remote employees, and people are now busily arguing over what an office will look like coming out of the pandemic. That means you might never see the traffic from a remote user to your SaaS application unless you backhaul all that traffic to a collection point you control, which is not the most efficient way to network. Collection in this context involves capturing telemetry from web security and SASE (Secure Access Service Edge) providers, who bring network security (including network detection) out to remote users. You’ll also want to rely on partnerships between your network detection vendor and application-specific telemetry sources, such as CASB and PaaS services.

We should make some finer points on whether you need full packet capture or only metadata for sufficient granularity and context for detection. We don’t think it’s an either/or proposition. Metadata provides enough depth and detail in most cases, but not all. For instance, if you are looking to understand the payload of an egress session, you need the full packet stream. So make sure you have the option to capture full packets, knowing you will do that sparingly. Embracing more intelligence and automation in network detection enables working off captured metadata routinely, triggering full packet collection on detection of potentially malicious activity or exfiltration. Be sure to factor in storage costs when determining the most effective collection approach (a back-of-the-envelope sketch appears below). Metadata is pretty reasonable to store for long periods, but full packets are not. So you’ll want to keep a couple of days or weeks of full captures around when investigating an attack, but you can save years of metadata.

Another area that warrants a bit more discussion is cloud network architecture. Using a transit network to centralize inter-account and external (both ingress and egress) traffic facilitates network telemetry collection. All traffic moving between environments in your cloud (and back to the data center) runs through the transit network. But for sensitive applications you’ll want to perform targeted collection within the cloud network to pinpoint any potential compromise or application misuse. Again, though, a secure architecture which leverages isolation makes it harder for attackers to access sensitive data in the public cloud.

Dealing with Encryption

Another complication for broad and effective network telemetry collection is that a significant fraction of network traffic is encrypted. So you can’t access the payloads unless you crack the packets, which was much easier with early versions of SSL and TLS. You used to become a Man-in-the-Middle to users: terminating their encrypted sessions, inspecting their payloads, and then re-encrypting and sending the traffic on its way. Decryption and inspection were resource intensive but effective, especially using service chaining to leverage additional security controls (IPS, email security, DLP, etc.) depending on the result of packet inspection. But that goose has been cooked since the latest version of TLS (1.3) enlisted perfect forward secrecy to break retrospective inspection. This approach issues new keys for each encrypted
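To illustrate the storage trade-off behind keeping days of full packets but years of metadata, here is a back-of-the-envelope sketch. The link rate, utilization, flow rate, and record size are assumed numbers for illustration; substitute your own.

```python
# Back-of-the-envelope sketch of the storage math behind "days of full packets,
# years of metadata". All numbers below are assumptions for illustration.

GB = 10**9
TB = 10**12

link_rate_bits_per_sec = 10 * 10**9    # assume a 10 Gbit/s monitored link
avg_utilization = 0.30                 # assume 30% average utilization
seconds_per_day = 86_400

# Full packet capture stores every byte crossing the link.
full_capture_per_day = link_rate_bits_per_sec / 8 * avg_utilization * seconds_per_day

# Metadata stores one compact record per flow instead of the payload.
flows_per_second = 10_000              # assumed flow rate on this link
bytes_per_flow_record = 300            # assumed size of an enriched flow record
metadata_per_day = flows_per_second * bytes_per_flow_record * seconds_per_day

print(f"Full capture:  {full_capture_per_day / TB:6.1f} TB/day")
print(f"Flow metadata: {metadata_per_day / GB:6.1f} GB/day")
print(f"A year of metadata ~= {metadata_per_day * 365 / TB:.1f} TB, "
      f"about {metadata_per_day * 365 / full_capture_per_day:.1f} days of full capture")
```

Under these assumptions a year of flow metadata costs roughly the same storage as a few days of full capture, which is why metadata-first collection with triggered full packet capture tends to be the economical default.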


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.