
Two Apple Security Tidbits

Two interesting items. First up: whatever the actual vulnerability was, the Apple Developer Center was exploited via a code execution flaw:

On the site, Apple credits 7dscan.com and SCANV of www.knownsec.com for reporting the bug on July 18, which is the same day the Developer Center was taken offline. During the downtime, Apple reported that the Developer Center website had been hacked, with an intruder attempting "to secure personal information" from registered developers. The company noted that while sensitive information was encrypted, some developer names, mailing addresses, and/or email addresses may have been acquired.

Expect to see constant developer targeting, on all platforms, as operating systems themselves become hardened. No details on the flaw other than that, but I didn't even expect them to release that much. They also credit the researcher who pulled user account info using, it turns out, a different flaw. That's the guy some expected them to go after legally, which makes it even more interesting.

Item number two: researchers from Georgia Tech slipped some test malware into Apple's App Store:

A group of researchers from Georgia Tech developed an app that masqueraded as a news reader but would phone home to reprogram itself into malware – something that was apparently not picked up in Apple's security screening procedures, reports the MIT Technology Review.

Charlie Miller did this once before, and I'm sure it will happen again. It's a big cat and mouse game, and Apple is in for constant battles (as are Google, Microsoft, and Amazon with their stores). It will keep getting harder, but likely never impossible. The real question is mitigation. Apple yanks apps when needed, but generally won't claw them back off a device. For Macs that requires a software update, and I am investigating whether the process is more automated for iOS.


Lockheed-Martin Trademarks “Cyber Kill Chain”. “Cyberdouche” Still Available

It appears that Lockheed Martin has trademarked the term "Cyber Kill Chain". This should be no surprise, and you can read my House of Cybercards post if you want to know why this isn't merely humorous. In an interview, James Arlen, creator of the term 'Cyberdouche', confirmed his term "is still free to use, as also demonstrated by Lockheed."


IBM/Trusteer: Shooting Across the Bow of the EPP Suites

Last week, IBM announced a deal to acquire Trusteer, an Israeli company focused on advanced endpoint malware detection. The price tag was reported to be $800MM – $1B, a pretty healthy 7-8x multiple of rumored 2013 bookings. Trusteer's technology fills a huge gap in IBM's advanced malware story. They do some stuff on their network (IPS) box, but without a real presence on the endpoint their solution is limited. And a company pushing a total security solution story, like IBM, can't really have holes. Not obvious ones, anyway. IBM has been selling Trend Micro's endpoint security suite for years, but it hasn't been a focus of their story, and since the new security regime came in with the acquisition of Q1 Labs, any mention of endpoint security has largely been muted. Obviously that will change now that they have Trusteer (and its emerging enterprise capabilities) in their bag.

To be clear, Trusteer didn't get a huge valuation based on a story of disrupting the anti-virus market. They had built a significant market licensing their anti-malware toolbar for distribution through financial institutions. Basically, a bank provides Trusteer's toolbar to its customers for free, a percentage of customers use it, and those protected customers see dramatically lower fraud rates. Of course a bank can't mandate that its customers use any technology, but the reduction in fraud for even a minority of protected devices was significant enough that it became a no-brainer for banks to write a very large check to Trusteer to cover their entire customer base. If anything, after the deal closes, IBM's global channels and presence selling technology to other financial institutions should provide a boost to Trusteer's existing FI business as well. That's how you justify writing that kind of check.

This was a new path to market for security technology, and it provides the bulk of Trusteer's existing revenue. But they had bigger designs on the broader enterprise anti-malware market, with a still raw but interesting set of technologies for advanced malware protection. It's early, but there is a clear opportunity for someone to totally disrupt the endpoint protection racket – similar to what Palo Alto did to the perimeter firewall. IBM is betting on being able to spur that disruption. By combining Trusteer's advanced endpoint protection capabilities with the BigFix endpoint management suite, they have pretty much everything the existing EPP vendors provide, with better advanced malware protection. That makes displacing the incumbent much more achievable than asking a Fortune-class enterprise to trust a start-up.

But IBM still has work to do to complete their endpoint security offerings. As described in our recent Endpoint Security Buyer's Guide series, IBM now has better heuristics and some lockdown technologies. But we expect endpoint activity monitoring to become a significant requirement over the next few years, and that remains a gap in their offering. IBM also has to make sure they keep a good portion of the Trusteer expertise and DNA after the acquisition. They managed that with the Q1 Labs acquisition, and as with most big M&A it is a critical success factor for getting value out of the deal. The fact that IBM has already made it clear that Trusteer's Israel-based research team will become a key part of a new IBM cybersecurity lab should help keep those folks around for a little while.

So is this the beginning of the end for EPP?
If you take a step back, EPP has been on a path to irrelevance for years. More than a few large enterprises have commented on how they use the absolute cheapest means possible of checking the compliance box requiring AV, and deploy these advanced products only on critical endpoints. Providing years of suspect value will get customers thinking like that. The good news for the existing EPP vendors is that their suites already integrate some (but not all) of the advanced technologies needed to really address advanced malware. They have just done a very poor job of describing how their products have evolved, which has resulted in a clearly negative market perception of the technology. We will be doing a more in-depth analysis of advanced endpoint protection starting next month, but suffice it to say the EPP players don't all have to die during the transition.

Yet the fact remains: they need to kill their golden goose if they are going to get there. If Big AV continues to position these new technologies as a minor upgrade – just a few features added to the existing offering, so as not to antagonize their installed base – they won't create enough urgency to upgrade to the current version of the EPP suite. As we saw with the NYT breach (missing 44 out of 45 attacks) earlier this year, deprecated EPP is not much of a defense against modern, advanced attacks. These vendors basically need to make it very clear that the old stuff doesn't work, and make it very attractive to upgrade to the new stuff. That probably requires pulling support from the old suites, despite the clear risk of customers picking a different solution when facing the upgrade. But we believe it's a bigger risk to let 80% of their installed base use obsolete technology.

We should also mention the other huge winners from this deal: the folks who do advanced endpoint protection, like Bit9, Bromium, and Invincea. Cisco also gets some of these capabilities via the Sourcefire acquisition (Sourcefire bought Immunet a few years back). These emerging vendors take different approaches to the advanced malware problem, but with the valuation Trusteer got, they should feel pretty good about having a high-value comp when they inevitably shop their companies.

Photo credit: "Scrooge McDuck's money bin for DuckTales Remastered at iam8bit gallery" originally uploaded by insidethemagic


New Paper: The CISO’s Guide to Advanced Attackers

Much of the security industry spends significant time and effort focused on how hard it is to deal with today's attacks. Adversaries continue to improve their tactics. Senior management doesn't get it – until there is a breach… then your successor can educate them. And the compliance mandates hanging over your organization like an albatross remain 3-4 years behind the attacks you see daily. The vendor community compounds the problem by positioning every product and/or service as a solution to the APT problem, which means they don't really understand advanced attackers at all. But complaining doesn't solve problems, so we put together The CISO's Guide to Advanced Attackers to help you structure a programmatic effort to deal with these adversaries.

It makes no difference what a security product or service does – they are all positioned as the only viable answer to stop the APT. Of course this isn't useful to security professionals who actually need to protect important things. And it's definitely not helpful to Chief Information Security Officers (CISOs) who have to explain their organization's security programs, set realistic objectives, and manage expectations with senior management and the Board of Directors. So, as usual, your friends at Securosis are here to help you focus on what's important, and to help you wade through the hyperbole to understand what's hype and what's real. This paper provides a high-level view of these "advanced attackers", designed to help a CISO-level audience understand what they need to know, and maps out a clear 4-step process for dealing with advanced attackers and their innovative techniques.

The landing page is in our research library. You can also download The CISO's Guide to Advanced Attackers (PDF) directly.

We would like to thank Dell SecureWorks for licensing the content in this paper. Obviously we wouldn't be able to do the research we do, or offer it to you without cost, without companies supporting our efforts.


Friday Summary: Career Highlight

I got my first computer back in the mid-80s, a few years after I started playing with and programming them in the back half of elementary school. It was a shiny new Commodore 64 that a friend of my Mom's gave me – we weren't financially lucky enough to afford one ourselves. In retrospect, I probably owe that man more than anyone else outside my family. I quickly fancied myself a 'hacker' because, after getting my first modem, I was mentally capable of logging into bulletin board systems with the word 'hack' in the title. As with most things in life, I had no idea what I was doing.

In college I played with tech, but emergency medicine, martial arts, NROTC, and other demands ate up my time. Even when I started working in tech professionally, in the mid-to-late 90s, I never connected with the 303 crew or any of the real hackers surrounding me. I was living and working in a bubble. I knew I wasn't a real hacker at that point, but you could call me "hacking curious".

Fast forward to two weeks ago at Black Hat. Thursday morning at 8:22 I woke up, looked at my phone, and realized I had missed two calls and a text message from the Black Hat organizers. I had spent the weekend and the first part of the week teaching our cloud security class, and had at some point agreed to be a backup speaker after my session pitch didn't make it through the process. I figured it was a sympathy invite to make me feel good about myself, and would never come to fruition. Nope. They offered me a slot at 10:15 if my demo and presentation were ready (based on this software defined security research). Another speaker had to pull out. I said yes, forgetting that it wasn't ready because I had broken part of it during the class. Then I pulled up my slides and realized they were demo slides only, not an actual session and concept narrative. Then I went to the bathroom. Three times. Number 2.

I managed to pull it together over the next 90 minutes, and made my very first Black Hat technical presentation on time. The slides worked, the demo worked, and after the session I got some major validation that this was good research on the leading edge of defensive security. To be honest, I was worried it was so basic I would be laughed out of there.

It was a career highlight. A wannabe script kiddie from Jersey managed to hold his own on the stage at Black Hat, with 90 minutes' warning. I can't stop talking about it – not because of my prodigious ego, but because I'm still insanely excited. It's like being the smallest kid on the football team and, years later, finding yourself in the NFL. Except a lot more people have played in NFL games than have spoken at Black Hat. I am a very lucky and thankful person.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mike in SC Magazine on the Trusteer deal.
  • Adrian on mainframe hacking at Dark Reading.
  • Dave Lewis: Hitting The Panic Button.
  • Mike and I are both quoted by Alan Shimel in this article about whether we would want our kids to work in infosec.
  • Another one from Mike in Dark Reading on Innovation at Black Hat.
  • Mike's column in Dark Reading: Barnaby Jack & the Hacker Ethos.

Favorite Securosis Posts

  • Mike Rothman: Is Privacy Now Illegal? It depends on who you ask, I guess. A thought-provoking post from Rich.
  • David Mortman: Rich's Incomplete Thought: Is the Cloud the Secproasaurus Extinction Event? And Are DevOps the Mammals? Betteridge's law does not apply.
  • Rich: Credibility and the CISO. Yup.
Other Securosis Posts

  • Research Scratchpad: Outside Looking In.
  • Incite 8/14/2013: Tracking the Trends.
  • HP goes past the TippingPoint of blogging nonsense.
  • Incite 8/7/2013: Summer's End.
  • Continuous Security Monitoring: Migrating to CSM.
  • Continuous Security Monitoring: Compliance.
  • Continuous Security Monitoring: The Change Control Use Case.

Favorite Outside Posts

  • Mike Rothman: Godin: More Gold on Human Behavior. "Your first mistake is assuming that people are rational." LOL. He must be a part-time security person…
  • David Mortman: "Big Filter": Intelligence, Analytics and why all the hype about Big Data is focused on the wrong thing.
  • Dave Lewis: NSA "touches" more of Internet than Google.
  • Rich: Unsealed court-settlement documents reveal banks stole $trillions' worth of houses. Crime takes all forms, and justice doesn't apply equally.
  • Mike Rothman: HowTo: Detecting Persistence Mechanisms. Figuring out how your Windows machines are pwned is critical. I learn a lot from this cool windowsir blog. This post deals with detecting new persistence mechanisms.

Research Reports and Presentations

  • Defending Cloud Data with Infrastructure Encryption.
  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.
  • Quick Wins with Website Protection Services.
  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.

Top News and Posts

  • Every Important Person In Bitcoin Just Got Subpoenaed By New York's Financial Regulator.
  • 2003 Blackout: An Early Lesson in Planetary Scale?
  • Cisco readies axe for 4,000 employees.
  • Assessment of the BREACH vulnerability.
  • 'Next Big' Banking Trojan Spotted In Cybercrime Underground.
  • How the US (probably) spied on European allies' encrypted faxes.
  • Researcher finds way to commandeer any Facebook account from his mobile phone.
  • Crimelords: Stolen credit cards… keep 'em. It's all about banking logins now.

Blog Comment of the Week

This week's best comment goes to Marco, in response to Incomplete Thought: Is the Cloud the Secproasaurus Extinction Event? And Are DevOps the Mammals?

I think this is a valid point. My take on it is that whether we like it or not, external compliance requirements drive a majority of security initiatives. And seeing that e.g. PCI DSS is still trying to react to internal virtualization gives you an idea on how up to date that is. Simply no big


Research Scratchpad: Outside Looking In

I have a bunch of random research thoughts I am working on. I think they are building into a cohesive whole, but I can't make any promises. I'm branding these forming ideas as my "research scratchpad", and would appreciate any feedback.

Yesterday, while working with a client, I was asked to define Software Defined Security. This won't be that post, but as part of discussing the definition and characteristics we got into another concept that has really been standing out to me for a while, and which I suspect is on the verge of changing in a big way.

Early security was pretty much just another aspect of infrastructure. Access controls, networking, and our minimal other controls were built into the infrastructure. This started changing in the 90s, into what I call our "outside looking in" posture. The vast majority of security controls started moving to external tools that are often desynchronized from the underlying infrastructure. This isn't an absolute rule, but the balance has shifted materially to a security control layer – not merely a security management layer – added to infrastructure rather than embedded within it. A heck of a lot of our security involves cutting wires between boxes and inserting new boxes, or adding software agents where no one really wants them. This was a natural, proper evolution – not a mistake or stupidity. It was all we had.

But the cloud and virtualization blow this apart in two ways:

  • We are regaining hooks, thanks to APIs, into the infrastructure itself. The security management plane doesn't necessarily need to be as decoupled as in 'traditional' infrastructure architectures.
  • We are losing the ability to insert external security controls into the infrastructure. Adding these integration/choke points adds performance and functional costs beyond those we have learned to generally work around over the past couple decades.

The ability to manage large swaths of infrastructure security using the same tools, techniques, and interfaces as those building and maintaining the infrastructure is a major opportunity to remediate many perceived shortcomings of existing security methods.
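To make the "hooks via APIs" point concrete, here is a minimal sketch – our illustration, not part of the research above – assuming AWS and the Python boto3 SDK. Instead of cutting a wire and inserting a box, it asks the infrastructure itself which firewall rules are open to the world:

```python
# A minimal sketch of "security through the infrastructure API", assuming
# AWS and the boto3 SDK (any cloud API with security hooks would do).
# No inline appliance -- we query the infrastructure for risky state.
import boto3

ec2 = boto3.client("ec2")  # assumes credentials and region are configured

def world_open_rules():
    """Find security group rules open to the entire internet."""
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((sg["GroupId"], perm.get("FromPort")))
    return findings

for group_id, port in world_open_rules():
    print(f"{group_id} exposes port {port} to the world")
```

The same pattern extends to remediation: the API that reports a rule can also revoke it, which is exactly the management-plane coupling traditional bolt-on tools never had.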


Ecosystem Threat Intelligence: The Risk of the Extended Enterprise [New Series]

A key aspect of business today is the extended enterprise. That's a fancy way of saying no organization does it alone anymore. They have upstream suppliers who help produce whatever it is they produce. They have downstream distribution channels that help them sell whatever needs to be sold. They outsource business processes to third parties who can handle them better and more cheaply. With the advent of advanced communication and collaboration tools, teams work on projects together even if they don't work for the same company or reside on the same continent. Jack Welch coined the term "boundaryless organization" back in 1990 to describe an organization that is not defined by, or limited to, the horizontal, vertical, or external boundaries imposed by a predefined structure. They are common today.

To make the extended enterprise work, your trading partners need access to your critical information. And that's where security folks tend to break out in hives. It's hard enough to protect your networks, servers, and applications, while making sure your own employees don't do anything stupid to leave you exposed. Imagine your risk based not just on how you protect your information, but also on how well all your business partners protect their information and yours. Well, actually, you don't have to imagine it – it's reality.

Let's do a simple thought exercise to get a feel for the risk involved in one of these interconnected business processes. Say that for cost reasons the decision was made to outsource software maintenance on legacy applications to an offshore provider. These applications run in your datacenter, and maintenance only involves pretty simple bug fixes. You can't shut down the application, but it's clearly not strategic. What's the risk here? Start getting a feel for your exposure by asking some questions:

  • Which of our networks do these developers have access to? How do they connect in?
  • Who are the developers? Has the outsourcer done background checks on them? Are those checks trustworthy?
  • What is the security posture of the outsourcer's network? What kinds of devices do they use? Even if the developers are trustworthy, can you trust that their machines are not compromised?

Yes, you can segment your network to ensure the developers only have access to the servers and code they are responsible for. You can scan devices on connection to your network to ensure they aren't pwned. You can check the backgrounds of the developers yourself. You can even audit the outsourcer's network. And you can still get compromised via business partners, because things move too fast to really stay on top of everything. It takes seconds for a machine to be compromised. With one compromised machine your adversary gains a presence on your network, and can then move laterally to other devices with more access than the developers have. This happens every day. The point is that you have very little visibility into trading partner networks, which means additional attack surface you don't control. No one said this job was easy, did they?

These interconnected business processes will happen whether you like it or not – even if you think they pose unacceptable risk. You can stamp your feet and throw all the tantrums you want, but unless you can show a business decision maker that the risk of maintaining the connection is greater than the benefit of providing that access, you are just Chicken Little. Again.
So you need to do your due diligence to understand how each organization accessing your network increases your attack surface, and you need a clear understanding of how much risk each trading partner presents. That means assessing each partner and receiving notification of any issues which appear to put your networks at risk. We call this an Early Warning System, and external threat intelligence can give you a head start on knowing which attacks are heading your way. Here is an excerpt from our EWS paper to illuminate the concept:

You can shrink the window of exploitation by leveraging cutting-edge research to help focus your efforts more effectively, by looking in the places attackers are most likely to strike. You need an Early Warning System (EWS) for perspective on what is coming at you.

None of this is new. Law enforcement has been doing this, well, forever. The goal is to penetrate the adversary, learn their methods, and take action before an attack. Even in security there is a lot of precedent for this kind of approach. Back at TruSecure (now part of Verizon Business) over a decade ago, the security program was based on performing external threat research and using it to prioritize the controls to be implemented to address imminent attacks. Amazingly enough, it worked.

Following up our initial EWS research, we delved into a few different aspects of threat intelligence, which provides the external content for the EWS. There is Network-based Threat Intelligence and Email-based Threat Intelligence, but both of those sources are about what's happening on your own networks and with your own brands. Neither really helps you understand what's happening on your partners' networks, which clearly pose a risk to your environment. So we are spinning up a new series to continue our threat intelligence arc. This series, called Ecosystem Threat Intelligence, will delve into how to systematically assess your extended network of trading partners to understand the risk they present. Armed with that information, you will finally be able to block a trading partner, or tune your defenses, based on the risk they pose (see the toy sketch at the end of this post). As with all our research, we will focus on tangible solutions that can be implemented now, while positioning yourself for future advances.

As a reminder, we develop our research using our Totally Transparent Research methodology to make sure you all have an opportunity to let us know when we are right – and more importantly when we are wrong. Finally, we would like to thank BitSight Technologies for potentially licensing the paper at the end of this process. Our next post will delve into the types of information you need to assess your trading partners, and how it
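Purely to make the "block or tune based on partner risk" idea concrete, here is the toy sketch referenced above – entirely our illustration, with made-up weights, observations, and thresholds, not a preview of the methodology this series will lay out:

```python
# Hypothetical sketch: turn externally observable partner telemetry into
# a go/no-go signal. Weights and thresholds are invented for illustration.
WEIGHTS = {"botnet_sightings": 5, "open_admin_ports": 3, "expired_certs": 1}

partners = {
    "outsourcer-a": {"botnet_sightings": 2, "open_admin_ports": 4, "expired_certs": 3},
    "distributor-b": {"botnet_sightings": 0, "open_admin_ports": 1, "expired_certs": 0},
}

for name, observations in partners.items():
    score = sum(WEIGHTS[k] * v for k, v in observations.items())
    action = "restrict access / investigate" if score >= 10 else "monitor"
    print(f"{name}: risk score {score} -> {action}")
```

The point isn't the arithmetic – it's that once partner risk is quantified, the decision to block or tune stops being a tantrum and becomes something a business decision maker can weigh.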


Incite 8/14/2013: Tracking the Trends

I remember back in my 20s, when I thought my success and wealth were assured. I was a high-flying analyst during the Internet bubble and made a bunch of coin. Then I lost a bunch of coin as the bubble deflated. Then I started a software company, which was sold off for the cash on our balance sheet. Then I chased a few hot startups that got less hot once I got there. None had a happy ending. Maybe my timing just sucks. Maybe I wasn't very good at those specific jobs. Probably some combination of the two. But at the end of the day it doesn't matter. I have reached the conclusion, 15 years later, that success rarely happens quickly. Some outliers get lucky, know a guy at Instagram, and walk away with big bucks in 18 months, but that is rare. You have a slightly better chance of quick startup riches than of winning the lottery, and slightly worse odds than of getting run over by a semi while walking to the corner store.

For every two steps forward, you are likely to take a step and a half back. Sometimes you have a bad day and take 3 steps back. Overnight success is usually 20 years in the making. Conversely, the express train to the mountaintop usually ends with a fall from grace and a mess at the bottom of the hill. Just check out the horror stories of all those folks who won the lottery… and were broke or dead 5 years later. It comes back to sustainability. If the change happens too fast it may not be sustainable, and sooner or later you will be right back where you started. Probably sooner. Small changes that add up over a long period of time become very substantial. Yes, you learned that in elementary school, or perhaps back when you discovered the magic of compound interest. It seems silly, but it works.

Let's take my weight as an example. I have been in good shape. I have been in bad shape as well. It has been a challenge since I was a kid. When I finally make up my mind to drop some pounds, it's never a straight line. I lose some. I backslide a bit. I try to have more good days than bad. But if I can stay consistent with small changes, the trend continues in the right direction. It's all about tracking those trends. At some point I will get my weight to a point where it's both comfortable and sustainable.

The same goes for the size of the business. I'm looking for higher highs and higher lows, which we have been able to achieve over the past four years. If you're trying to grow at 15% quarter over quarter, that's probably not sustainable – not for an extended period of time. But having bigger quarters year over year? Achievable. Definitely.

In other words, remember to take the scenic route. If it happens too fast, don't believe it – it's probably not sustainable. If the trend starts to go against you, think differently – what you're doing may not be working. But don't be surprised when an instant change vaporizes into thin air. It was never real to begin with…

–Mike

Photo credit: "November 7 2007 day 27 – Graphs, trends, averages, numbers…" originally uploaded by sriram bala

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Continuous Security Monitoring

  • The Compliance Use Case
  • The Change Control Use Case
  • The Attack Use Case
  • Classification
  • Defining CSM
  • Why. Continuous. Security. Monitoring?
Database Denial of Service

  • Countermeasures
  • Attacks
  • Introduction

API Gateways

  • Implementation
  • Key Management
  • Developer Tools

Newly Published Papers

  • Defending Cloud Data with Infrastructure Encryption
  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment
  • Quick Wins with Website Protection Services
  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun

Incite 4 U

It's you: The NSA's slippery use of terminology – claiming they only use metadata and don't search people's email content – is ludicrous. In the field of behavioral monitoring we use attributes and metadata (e.g., time of day, location, IP addresses of senders and recipients, type of request, etc.) to detect anomalous behavior – not content! All that's needed is metadata grouped by, or linked to, a specific attribute; then you scan for behavioral patterns you deem suspicious (see the sketch at the end of this post). Terrorist detected, right? Keep in mind that an attribute – something like your cellphone number, email address, SIM chip ID, or a random ID token – is used as a reference for you. The NSA neither needs nor wants to read your content, or even know your individual identity, until after they have picked a target, because metadata is all you need for behavioral tracking. This word game is complete BS: saying they are not "reading your email" is a red herring. The fact is that you, and your actions, are being monitored. McNealy was right all those years ago. You have no privacy. – AL

Pointing the finger at the mirror: Man, Paul Proctor tells the hard truth in No One Cares About Your Security Metrics and You are to Blame. His main point is that senior management asks you for metrics because they have to – not because they want to, or even care. To change this you need to give them information that helps them make better decisions. He then goes through a bunch of metrics that are completely worthless to senior management, including number of attacks and number of unpatched vulnerabilities. Of course Paul doesn't put any meat into the post, because he wants you to become a client, which is fine. There is no free lunch. But I will reiterate a point he makes as well: it's not that those typical metrics are totally useless. They are quite useful – but only in an operations context.
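As promised in the first item, a hypothetical sketch of why metadata alone is enough for behavioral tracking – the records, accounts, thresholds, and anomaly rule here are all made up for illustration, and nothing in it looks at content:

```python
# Each record is pure metadata: an account attribute, an hour-of-day,
# and a recipient. No message content anywhere.
from collections import defaultdict
from statistics import mean, pstdev

records = [
    ("user-17", 9, "alice@example.com"),
    ("user-17", 10, "bob@example.com"),
    ("user-17", 9, "alice@example.com"),
    ("user-17", 10, "alice@example.com"),
    ("user-17", 9, "bob@example.com"),
    ("user-17", 3, "dropbox@example.net"),  # unusual hour, new contact
]

# Group metadata by the linking attribute, then scan for outliers.
by_account = defaultdict(list)
for account, hour, recipient in records:
    by_account[account].append((hour, recipient))

for account, events in by_account.items():
    hours = [h for h, _ in events]
    mu, sigma = mean(hours), pstdev(hours) or 1.0
    for hour, recipient in events:
        if abs(hour - mu) > 2 * sigma:  # crude anomaly rule
            print(f"{account}: unusual activity at {hour}:00 to {recipient}")
```

Swap "user-17" for a phone number or SIM ID and the point stands: the pattern, not the content, is the surveillance.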


Continuous Security Monitoring: Migrating to CSM

We spent the bulk of this series defining the major use cases for Continuous Security Monitoring, taking a journey through Attacks, Change Control, and Compliance. We know many of you are people of action who want to just get going, but without a proper plan and a definition of what you are trying to achieve with your security monitoring initiative, you will just end up with a lot of shiny, expensive shelfware.

Now you need to decide on the technology platform you will use to aggregate your data sources and perform the CSM analysis. You have a bunch of candidates, and probably a few already operational in your environment – though likely underutilized. We will cover the general requirements, and then consider whether an existing platform can satisfy them. Not to spoil the ending, but shockingly enough, it will depend on your use case. Then we will discuss deployment models and the process involved in broadening your use cases.

Selecting the CSM Platform

Many folks' eyes glaze over when someone uses the word 'platform'. Security folks have a long and tattered history with all sorts of 'platforms', none of which have really done what they were supposed (promised) to do. Now we have the opportunity to reset expectations, which is why looking at the CSM platform in terms of use cases is critical. Let's start with the general platform requirements:

  • Secure and scalable: Depending on your primary use case and the data sources you choose to aggregate, you may have significant scalability requirements, while for lighter use cases such as compliance, data storage demands are less intense. Either way, we like planning for the future, which means picking a solution that can provide increased scale – even if you don't need it yet. That comes back to architecture and deployment models, as described in our Security Management 2.0 paper. Keep in mind that the CSM environment includes sensitive data, so you will want to make sure your platform provides adequate security (strong authentication, data protection at rest, data integrity, etc.) to protect your information.
  • Analytics: Monitoring is all about finding patterns in disparate data sources, which requires the ability to analyze lots of data. Does that mean you need "big data" analytics? Again, it depends on the use case, but make sure you can look both for patterns you already know about (standard attack scenarios) and for unknown situations that are clearly not normal.
  • Agentry: For the attack and change control use cases you will need to get information directly from monitored endpoints, which requires some kind of agent running on the devices. Does it need to be a persistent agent? Not necessarily. You can get much of the data you need via credentialed scans or dissolving agents. But for truly continuous monitoring you will need something on the device looking for indicators of malicious activity.
  • Flexible alerting: Collecting data is good, but alerts make that data useful. You will want to ensure each alert provides enough information for you to actually do something about it. Whether that's a poor man's capability to manage an incident, or integration with a broad investigative platform, you will need some way to operationally use the information from the platform. With the increasing availability of third-party threat intelligence, you should also look for the ability to pull in external research feeds to search for specific indicators in the monitored environment (see the sketch at the end of this post).
  • Visualization: A good dashboard environment offers user-selectable elements, and defaults for both technical and non-technical users. The dashboard should focus on the highest-level information (which devices are at risk, aggregate reports, system health, etc.), and provide the ability to drill down as appropriate. Given the current state of technology, a web-based interface with significant customization is now table stakes.
  • Reporting: If compliance is your primary use case, your requirements are all about reporting. You need to produce artifacts documenting how the security monitoring environment substantiates the effectiveness of controls on devices in scope. Even if another use case is your driver, you will need some measure of ongoing reporting to satisfy compliance requirements.

Now that we know what the CSM platform is, let's take a minute to mention what it doesn't need to be – at least today:

  • Real time: One of the biggest confusions in security monitoring is 'real time'. You are aggregating data from an event that already happened, so it cannot literally be real time. That said, the sooner you get the data, analyze it, and determine whether you have an issue, the better. Compliance doesn't require any kind of real-time response. Change control requires more timeliness for critical devices, and the attack use case can demand urgent reaction, so the shorter the window between event and alert, the better. But keep in mind that 'real-time' alerts aren't useful if you cannot respond immediately. If you have a limited triage/investigations staff (and who doesn't?), that minimizes the relevance of 'real-time' response.
  • Big data centric: Big data is all the rage in all sorts of security discussions, but for compliance and change control big data is generally overkill. And depending on the capabilities of your adversaries, advanced analytics may not add value to your efforts. Eventually you may need a true security analytics platform with pseudo-real-time data collection to drive your CSM process, and if you are facing truly advanced attackers you might need much more robust search and forensics capabilities (perhaps including big data analytics). But if you are starting with compliance or change control, advanced analytics are likely to be overkill.

Doesn't the SIEM Do This?

You could certainly make a case that the SIEM/Log Management product you probably already have in place is well positioned to become the platform for CSM. SIEM does a good job with most of the requirements above, and already consumes most of the data sources needed for our use cases, with the exception of endpoint forensics and network packet capture… and a number of SIEMs are gaining the ability
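As referenced under flexible alerting above, here is a minimal sketch of matching an external threat intelligence feed against collected event data. The data structures are our assumptions for illustration – not any particular CSM or SIEM product's API:

```python
# Hypothetical indicator feed entries and collected events, inlined so
# the sketch is self-contained.
indicators = {
    "203.0.113.7": {"source": "feed-x", "note": "known C&C server"},
    "198.51.100.9": {"source": "feed-y", "note": "malware distribution"},
}

events = [
    {"host": "web01", "dest_ip": "192.0.2.44", "ts": "2013-08-20T10:02:11"},
    {"host": "db02", "dest_ip": "203.0.113.7", "ts": "2013-08-20T10:02:15"},
]

for event in events:
    hit = indicators.get(event["dest_ip"])
    if hit:
        # A useful alert carries enough context to actually act on.
        print(f"ALERT {event['ts']} host={event['host']} "
              f"dest={event['dest_ip']} source={hit['source']} ({hit['note']})")
```

The design point is the alert payload: an IP match alone is trivia, while an alert carrying the host, timestamp, and the feed's context is something a triage analyst can operationally use.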


Incomplete Thought: Is the Cloud the Secproasaurus Extinction Event? And Are DevOps the Mammals?

Okay, I'm just throwing this one out there because the research is far from complete, but I really want to hear what other people think.

As I spend more time flying around meeting with security professionals and talking about the cloud, I find that security teams are generally far less engaged with cloud and virtualization projects than I thought. It seems that large swaths of essential enterprise security are almost fully managed by the cloud and virtualization teams, with security often in more of a blind role – if not outright excluded. I'm not saying security professionals are willfully ignorant or anything, but that, for a variety of reasons, they aren't engaged and often lack important experience with the technology that's required to even develop appropriate policies – never mind help with implementation.

To be honest, it isn't like most security professionals don't already have full plates, but I do worry that our workforce may lose relevance if it fails to stay up to date on the ongoing technology shifts enabled by virtualization and the cloud. The less involved we are with the growing reliance on these technologies, the less relevant we are to the organization. I already see a ton of security being implemented by DevOps types who, while experts in their fields, often miss some security essentials because security isn't their primary role. Not that security has to do everything – that model is long dead. But I fear lack of experience with virtualization and the cloud, and of understanding how fundamentally different those operating models are, could very negatively affect our profession's ability to accomplish our mission.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.