
React Faster and Better: Respond, Investigate, and Recover

After you have validated and filtered the initial alert, then escalated to contain and respond to the incident, you may need to escalate for further specialized response, investigation, and (hopefully) recovery. This progression to the next layer of escalation varies more among the organizations we have talked with than the earlier tiers do – due to large differences in available resources, skill sets, and organizational priorities – but as with the rest of this series the essential roles are fairly consistent.

Tier 3: Respond, Investigate, and Recover

Tier 3 is where incident response management, specialized resources, and the heavy hitters reside. In some cases escalation may be little more than a notification that something is going on. In others it might be a request for a specialist, such as a malware analyst for endpoint forensics analysis. This is also the level where most in-depth investigation is likely to occur – including root cause analysis and management of recovery operations. Finally, this level might include all-hands-on-deck response for a massive incident with material loss potential. Despite the variation in when Tier 3 begins, the following structure aligns at a high level with the common processes we see:

  • Escalate response: Some incidents, while not requiring the involvement of higher management, may need specialized resources that aren’t normally involved in a Tier 2 response. For example, if an employee is suspected of leaking data you may need a forensic examiner to look at their laptop. Other incidents require the direct involvement of incident response management and top-tier response professionals. We have listed this as a single step, but it is really a self-contained response cycle of constantly evaluating needs and pulling in the right people – all the way up to executive management if necessary.
  • Investigate: You always investigate to some degree during an incident, but depending on its nature there may be far more investigation after initial containment and remediation. As with most steps in Tier 3, the lines aren’t necessarily black and white. For certain kinds of incidents – particularly advanced attacks – the investigation and response (and even containment) are carried out in lockstep. For example, if you detect customized malware, you will need to perform concurrent malware analysis, system forensic analysis, and network forensic analysis.
  • Determine root cause: Before you can close an incident you need to know why it happened and how to prevent it from happening again. Was it a business process failure? Human error? Technical flaw? You don’t always need this level of detail to remediate and get operations back up and running on a temporary basis, but you do need it to fully recover – and more importantly to ensure it doesn’t happen again, at least not via the same attack vector.
  • Recover: Remediation gets you back up and running in the short term, but in recovery you finish closing the holes and restore normal operations. The bulk of recovery operations are typically handled by non-security IT operations teams, but at least partially under the direction of the security team. Permanent fixes are applied, permanent holes closed, and any restored data examined to ensure you aren’t re-introducing the very problems that allowed the incident in the first place.
  • (Optional) Prosecute or discipline: Depending on the nature of the incident you may need to involve law enforcement and carry a case through to prosecution, or at least discipline/fire an employee. Since nothing involving lawyers except billing ever moves quickly, this can extend many years beyond the official end of an incident.

Tier 3 is where the buck stops. There are no other internal resources to help if an incident exceeds capabilities. In that case outside contractors/specialists need to be brought in, who are then (effectively) added to your Tier 3 resources.

The Team

We described Tier 1 as dispatchers, and Tier 2 as firefighters. Sticking with that analogy, Tier 3 is composed of chiefs, arson investigators, and rescue specialists. These are the folks with the strongest skills and most training in your response organization.

  • Primary responsibilities: Ultimate incident management. Tier 3 handles incidents that require senior incident management and/or specialized skills. These senior individuals manage incidents, use their extensive skills for complex analysis and investigation, and coordinate multiple business units and teams. They also coordinate, train, and manage lower-level resources.
  • Incidents they manage: Anything that Tier 2 can’t handle. These are typically large or complex incidents, or more constrained incidents that might involve material losses or extensive investigation. A good rule of thumb is that if you need to inform senior or executive management, or involve law enforcement and/or human resources, it’s likely a Tier 3 incident. This tier also includes specialists such as forensics investigators, malware analysts, and those who focus on a specific domain as opposed to general incident response.
  • When they escalate: If the incident exceeds the combined response capabilities of the organization. In other words, if you need outside help, or if something is so bad (e.g., a major public breach) that executive management becomes directly involved.

The Tools

These responders and managers have a combination of broad and deep skills. They manage large incidents with multiple factors and perform the deep investigations to support full recovery and root cause analysis. They tend to use a wide variety of specialized tools, including those they write themselves. It’s impossible to list all the options, but here are the main categories:

  • Network (full packet capture) forensics: You’ve probably noticed this category appearing at all the levels. While the focus in the other response tiers is more on alerting and visualization, at this level you are more likely to dig deep into the packets to fully understand what’s going on, for both immediate response and later investigation. If you don’t capture it you can’t analyze it, and full packet capture is essential for the advanced incident response which is the focus here. Once data is gone you can’t get it back – thus our incessant focus on capturing as much as you can, when you can.
  • Endpoint
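That rule of thumb (senior or executive management, law enforcement, or HR involvement means Tier 3) is easy to write down as an explicit escalation check, which helps keep handoffs consistent across shifts. The following Python sketch is purely illustrative: the incident attributes, and the choice to fold multi-system incidents into Tier 2, are assumptions for the example rather than part of any formal model or product.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    material_loss_possible: bool   # could this produce losses execs must hear about?
    needs_law_enforcement: bool    # prosecution or external investigation likely
    needs_hr_or_legal: bool        # employee discipline / personnel investigation
    needs_specialist: bool         # e.g., malware analyst, forensic examiner
    multi_system: bool             # spans multiple systems/networks

def assign_tier(incident: Incident) -> int:
    """Illustrative tier assignment following the rule of thumb in the post."""
    if (incident.material_loss_possible
            or incident.needs_law_enforcement
            or incident.needs_hr_or_legal
            or incident.needs_specialist):
        return 3  # senior incident management and/or specialized skills
    if incident.multi_system:
        return 2  # primary incident handling and containment
    return 1      # validate, filter, and close or escalate

# Example: a suspected insider leak needs forensics and HR, so it lands in Tier 3.
print(assign_tier(Incident(False, False, True, True, True)))  # -> 3
```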


What You *Really* Need to Know about Oracle Database Firewall

Nothing amuses me more than some nice vendor-on-vendor smackdown action. Well, plenty of things amuse me more, especially Big Bang Theory and cats on YouTube, but the vendor thing is still moderately high on my list. So I quite enjoyed this Dark Reading article on the release of the Oracle Database Firewall. But perhaps a little outside perspective will help. Here are the important bits:

  • As mentioned in the article, this is the first Secerno product release since their acquisition.
  • Despite what Oracle calls it, this is a Database Activity Monitoring product at its core – just one with more of a security focus than audit/compliance, and based on network monitoring (it lacks local activity monitoring, which is why it’s weaker for compliance). Many other DAM products can block, and Secerno can monitor. I always thought it was an interesting product. Most DAM products include network monitoring as an option. The real difference with Secerno is that they focused far more on the security side of the market, even though historically that segment is much smaller than the audit/monitoring/compliance side. So Oracle has more focus on blocking, and less on capturing and storing all activity.
  • It is not a substitute for Database Activity Monitoring products, nor is it “better” as Oracle claims. It is a form of DAM, but – as mentioned by competitors in the article – you still need multiple local monitoring techniques to handle direct access. Network monitoring alone isn’t enough. I’m sure Oracle Services will be more than happy to connect Secerno and Oracle Audit Vault to do this for you.
  • Secerno basically whitelists queries (automatically) and can block unexpected activity. This appears to be pretty effective for database attacks, although I haven’t talked to any pen testers who have gone up against it. (They do also blacklist, but the whitelist is the main secret sauce.)
  • Secerno had the F5 partnership before the Oracle acquisition. It allowed you to set WAF rules based on something detected in the database (e.g., block a signature or host IP). I’m not sure if they have expanded this post-acquisition. Imperva is the only other vendor I know of to integrate DAM/WAF.
  • Oracle generally believes that if you don’t use their products you are either a certified idiot or criminally negligent. Neither is true, and while this is a good product I still recommend you look at all the major competitors to see what fits you best. Ignore the marketing claims.
  • Odds are your DBA will buy this when you aren’t looking, as part of some bundle deal. If you think you need DAM for security, compliance, or both… start an assessment process or talk to them before you get a call one day to start handling incidents.

In other words: a good product with advantages and disadvantages, just like anything else. More security than compliance, but like many DAM tools it offers some of both. Ignore the hype, figure out your needs, and evaluate to figure out which tool fits best. You aren’t a bad person if you don’t buy Oracle, no matter what your sales rep tells your CIO. And seriously – watch out for the deal bundling. If you haven’t learned anything from us about database security by now, hopefully you at least realize that DBAs and security don’t always talk as much as they should (the same goes for Guardium/IBM). If you need to be involved in any database security, start talking to the DBAs now, before it’s too late. BTW, not to toot our own horns, but we sorta nailed it in our original take on the acquisition. Next we will see their WAF messaging. And we have some details of how Secerno works.
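Query whitelisting of the sort described above is conceptually simple: learn the ‘shape’ of the SQL the application normally issues, then alert on or block statements that don’t match any learned shape. The snippet below is only a rough illustration of that idea, not how Secerno/Oracle actually implement it; the normalization (strip literals, collapse whitespace) is deliberately naive.

```python
import re

def fingerprint(sql: str) -> str:
    """Reduce a SQL statement to a crude structural fingerprint.
    Naive on purpose: real products parse the SQL properly."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals -> placeholder
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals -> placeholder
    s = re.sub(r"\s+", " ", s)       # collapse whitespace
    return s

# "Learning" phase: fingerprints observed during normal application behavior.
whitelist = {
    fingerprint("SELECT name FROM customers WHERE id = 42"),
    fingerprint("UPDATE orders SET status = 'shipped' WHERE id = 7"),
}

def allowed(sql: str) -> bool:
    return fingerprint(sql) in whitelist

print(allowed("SELECT name FROM customers WHERE id = 99"))             # True
print(allowed("SELECT name FROM customers WHERE id = 99 OR '1'='1'"))  # False -> block/alert
```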


How to Encrypt Block Storage in the Cloud with SecureCloud

This is a bit of a different post for me. One exercise in the CCSK Enhanced Class which we are developing for the Cloud Security Alliance is to encrypt a block storage (EBS) volume attached to an AWS instance. There are a few different ways to do this, but we decided on Trend Micro’s SecureCloud service for a couple of reasons. First of all, setting it up is something we can handle within the time constraints of the class. The equivalent process with TrueCrypt or other native encryption services within our AWS instance would take more time than we have, considering the CCSK Enhanced class is only one day and covers a ton of material. The other reason is that it supports my preferred architecture for encryption: the key server is separate from the encryption engine, which is separate from the data volume. This is actually pretty complex to set up using free/open source tools. Finally, they offer a free 60-day trial.

The downside is that I don’t like using a vendor-specific solution in a class, since it could be construed as endorsement. So please keep in mind that a) there are other options, and b) the fact that we use the tool for the class doesn’t mean this is the best solution for you. Ideally we will rotate tools as the class develops. For example, Porticor is a new company focusing on cloud encryption, and Vormetric is coming out with cloud-focused encryption. I think one of the other “V” companies is also bringing a cloud encryption product out this week. That said, SecureCloud does exactly what we need for this exercise – especially since it’s SaaS-based, which makes setting it up in the classroom much easier.

Here’s how it works:

  • The SaaS service manages keys and users.
  • There is a local proxy AMI you instantiate in the same availability zone as your main instances and EBS volumes.
  • Agents for Windows Server 2008 or CentOS implement the encryption operations.
  • When you attach a volume, the agent requests a key from the proxy, which communicates with the SaaS server. Once you approve the operation the key is sent back to the proxy, and then to the agent, for local decryption. The keys are never stored locally in your availability zone – they are only used at the time of the transaction.
  • You can choose to manually or automatically allow key delivery based on a variety of policies. This gives you control, for example, over multiple instances of the same image connecting to the encrypted volume, on a per-instance basis. Someone can’t pull your image out of S3, run it, and gain access to the EBS volume, because the key is never stored with the AMI.

This is my preferred encryption model to teach – especially for enterprise apps – because it separates the key management from the encryption operations. The same basic model is the one most well-designed applications use for encrypting data – albeit normally at the data/database level, rather than by volume. I’ve only tested the most basic features of the service and it works well. But there are a bunch of UI nits and the documentation is atrocious. It was much harder to get this up and running the first time than I expected.

Now for the meat. I’m posting this guide mostly for our students so they can cut and paste command lines, instead of having to do everything manually. So this is very specific to our class; but for the rest of you, once you run through the process you should be able to easily adjust it for your own requirements.
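Before the step-by-step guide, here is the key-delivery flow described above reduced to a minimal Python sketch. To be clear, the class and method names are stand-ins invented for illustration; this is not Trend Micro’s API, just the shape of the separation between key server, proxy, and agent.

```python
# Hypothetical sketch of the key-delivery flow: the agent never stores the key,
# it only receives it (via the proxy) for the attach operation.
# None of these names come from the actual SecureCloud product.

class SaaSKeyServer:
    def __init__(self, keys, policy):
        self._keys = keys        # volume_id -> key material (held off-cloud)
        self._policy = policy    # callable deciding approval per instance

    def request_key(self, volume_id, instance_id):
        if not self._policy(volume_id, instance_id):
            raise PermissionError("key delivery denied or awaiting manual approval")
        return self._keys[volume_id]

class LocalProxy:
    """Runs as an AMI in the same availability zone; relays requests only."""
    def __init__(self, key_server):
        self._key_server = key_server

    def fetch_key(self, volume_id, instance_id):
        return self._key_server.request_key(volume_id, instance_id)

class Agent:
    """Runs in the instance; uses the key transiently to unlock the volume."""
    def __init__(self, instance_id, proxy):
        self.instance_id = instance_id
        self._proxy = proxy

    def attach_encrypted_volume(self, volume_id):
        key = self._proxy.fetch_key(volume_id, self.instance_id)
        # ... hand `key` to the local encryption engine; never persist it ...
        return f"volume {volume_id} unlocked for {self.instance_id}"

# Policy: only the instance we explicitly approved may receive the key.
policy = lambda vol, inst: inst == "i-approved1"
proxy = LocalProxy(SaaSKeyServer({"vol-123": b"...key..."}, policy))
print(Agent("i-approved1", proxy).attach_encrypted_volume("vol-123"))
```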
Hopefully this will help fill the documentation gaps a bit… but you should still read Trend’s documentation, because I don’t explain why I have you do all these steps. This also covers 2 of the class exercises, because I placed some of the requirements we need later for encryption into the first, more basic, exercise:

CCSK Enhanced Hands-on Exercises

Preparation (Windows only)

If you are a Windows user you must download an ssh client and update your key file to work with it.

1. Download and run http://www.chiark.greenend.org.uk/~sgtatham/putty/latest/x86/putty-0.60-installer.exe.
2. Go to Start > Program Files > PuTTY > PuTTYgen.
3. Click File, select *.*, and point it to your _name_.PEM key file.
4. Click okay, and then Save Key, somewhere you will remember it.
5. Download and install Firefox from http://mozilla.org.

Create your first cloud server

In this exercise we will launch our first AMI (Amazon Machine Image) instance and apply basic security controls.

Steps

1. Download and install ElasticFox: http://aws.amazon.com/developertools/609?_encoding=UTF8&jiveRedirect=1.
2. Log into the AWS EC2 Console: https://console.aws.amazon.com/ec2/home.
3. Go to Account, then Security Credentials. Note your Access Keys. The direct link is https://aws-portal.amazon.com/gp/aws/developer/account/index.html.
4. Click X.509 Certificates. Click Create a new Certificate. Download both the private key and certificate files, and save them where you will remember them.
5. In Firefox, go to Tools > ElasticFox.
6. Click Credentials, and then enter your Access Key ID and Secret Access Key. Then click Add. You are now logged into your account.
7. If you do not have your key pair (not the certificate key we just created, but the AWS key you created when you set up your account initially) on your current system, you will need to create a new key pair and save a copy locally. To do this, click KeyPairs and then click the green button to create a new pair. Save the file where you will remember it. If you lose this key file, you will no longer be able to access the associated AMIs.
8. Click Images. Set your Region to us-east-1. Paste “ami-8ef607e7” into the Search box. You want the CentOS image.
9. Click the green power button to launch the image. In the New Instance(s) Tag field enter CCSK_Test1. Choose the Default security group, and availability zone us-east-1. Click Launch.
10. ElasticFox will switch to the Instances tab, and your instance will show as Pending.
11. Right-click and select Connect to Instance. You will be asked to open the Private Key File you saved when you set


RSA Guide 2011: Virtualization and Cloud

2010 was a fascinating year for cloud computing and virtualization. VMware locked down the VMSafe program, spurring acquisitions of smaller vendors in the program with access to the special APIs. Cloud computing security moved from hype to hyper-hype at the same time some seriously interesting security tools hit the market. Despite all the confusion, there was a heck of a lot of progress and growing clarity. And not all of it was from the keyboard of Chris Hoff.

What We Expect to See

For virtualization and cloud security, there are four areas to focus on:

  • Innovation cloudination: For the second time in this guide I find myself actually excited by new security tech (don’t tell my mom). While you’ll see a ton of garbage on the show floor, there are a few companies (big and small) with some innovative products designed to help secure cloud computing – everything from managing your machine keys to encrypting IaaS or SaaS data. These aren’t merely virtual appliance versions of existing hardware/software, but ground-up, cloud-specific security tools. The ones I’m most interested in are around data security, auditing, and identity management.
  • Looking SaaSy: Technically speaking, not all Software as a Service counts as cloud computing, but don’t tell the marketing departments. This is another area that’s more than mere hype – nearly every vendor I’ve talked with (and worked with) is looking at leveraging cloud computing in some way. Not merely because it’s sexy, but because SaaS can help reduce management overhead for security in a bunch of ways. And since all of you already pay subscription and maintenance licenses anyway, pure greed isn’t the motivator. These offerings work best for small and medium businesses, and reduce the amount of equipment you need to maintain on site. They may also help with distributed organizations. SaaS isn’t always the answer, and you really need to dig into the architecture, but I’ve been pleasantly surprised at how well some of these services can work.
  • VMSafe cracking: VMware locked down its VMSafe program, which allowed security vendors direct access to certain hypervisor functions via API. The program is dead, except that the APIs are maintained for existing members of the program. This was probably driven by VMware wanting to control most of the security action, and they forced everyone to move to the less effective vShield Zones system. What does this mean? Anyone with VMSafe access has a leg up on the competition, which spurred some acquisitions. Everyone else is a bit handcuffed in comparison, so when looking at your private cloud security (on VMware), focus on the fundamental architecture (especially around networking).
  • Virtual appliances everywhere: You know all those security vendors that promoted their amazing performance due to purpose-built hardware? Yeah, now they all offer the same performance in virtual (software) appliances. Don’t ask the booth reps too much about that though, or they might pull a Russell Crowe on you. On the upside, many security tools do make sense as virtual appliances – especially the ones with lower performance requirements (like management servers) or for the mid-market.

We guarantee your data center, application, and storage teams are looking hard at, or are already using, cloud and virtualization, so this is one area you’ll want to pay attention to despite the hype. And that’s it for today. Tomorrow we will wrap up with Security Management and Compliance, as well as a list of all the places you can come heckle me and the rest of the Securosis team. And yes, Mike will be up all night assembling this drivel into a single document to be posted on Friday. Later…


React Faster and Better: Contain and Respond

In our last post, we covered the first level of incident response: validating and filtering the initial alert. When that alert triggers and your frontline personnel analyze the incident, they’ll either handle it on the spot or gather essential data and send it up the chain. These roles and responsibilities represent a generalization of best practices we have seen across various organizations, and your process and activities may vary. But probably not too much.

Tier 2: Respond and contain

The bulk of your incident response will happen within this second tier. While Tier 1 deals with a higher number of alerts (because they see everything), anything that requires any significant response moves quickly to Tier 2, where an incident manager/commander is assigned and the hard work begins. In terms of process, Tier 2 focuses on the short-term, immediate response steps:

  • Size-up: Rapidly scope the incident to determine the appropriate response. If the incident might result in material losses (something execs need to know about), require law enforcement and/or external help, or require specialized resources such as malware analysis, it will be escalated to Tier 3. The goal here is to characterize the incident and gather the information to support containment.
  • Contain: Based on your size-up, try to prevent the situation from getting worse. In some cases this might mean not containing everything, so you can continue to observe the bad guys until you know exactly what’s happening and who is doing it, but you’ll still do your best to minimize further damage.
  • Investigate: After you set the initial incident perimeter, dig in to the next level of information to better understand the full scope and nature of the incident and set up your remediation plan.
  • Remediate: Finish closing the holes and start the recovery process. The goal at this level is to get operations back up and running (and/or stop the attack), which may involve workarounds or temporary measures. This is different from a full recovery.

If an incident doesn’t need to escalate any higher, at this level you’ll generally also handle the root cause analysis/investigation and manage the full recovery. This depends on resources, team structure, and expertise.

The Team

If Tier 1 represents your dispatchers, Tier 2 are the firefighters who lead the investigation. They are responsible for more complex incidents that involve unusual activity beyond simple signatures, multi-system/network issues, and issues with personnel that might result in HR/legal action. Basically, any kind of non-trivial incident ends up in the lap of Tier 2. While these team members may still specialize to some degree, it’s important for them to keep a broad perspective, because any incident that reaches this level involves the complexity of multiple systems and factors. They focus more on incident handling and less on longer, deeper investigations.

  • Primary responsibilities: Primary incident handling, and more advanced investigations that may involve multiple factors. For example, a Tier 1 analyst notes egress activity; the Tier 2 analyst then takes over and coordinates a more complete network analysis, as well as checking endpoint data where the egress originated, to identify/characterize/prioritize any exfiltration. This person has overall responsibility for managing the incident and pulling in specialist resources as needed. They are completely dedicated to incident response. As the primary incident handlers, they are responsible for quickly characterizing and scoping the incident (beyond what they got from Tier 1), managing containment, and escalating when required. They are the ones who play the biggest role in closing the attacker’s window of malicious opportunity.
  • Incidents they manage: Multi-system/factor incidents and investigations of personnel. Incidents are more complex and involve more coordination, but don’t require direct executive team involvement.
  • When they escalate: Any activities involving material losses, potential law enforcement involvement, or specialized resources, and those requiring an all-hands response. They may still play the principal management and coordination role for these incidents, but at that point senior management and specialized expertise need to be in the loop and potentially involved.

The Tools

These responders have a broader skill set, but generally rely on a variety of monitoring tools to classify and investigate incidents as quickly as possible. Most people we talk with focus more on network analysis at this level because it provides the broadest scope to identify the breadth of the incident via “touch points” (devices involved in the incident). They may then delve into log analysis for deeper insight into events involving endpoints, applications, and servers, although they often work with a platform specialist – who may not be formally part of the incident response team – when they need deeper non-security expertise.

  • Full packet capture (forensics): As in a Tier 1 response, the network is the first place to look to scope intrusions. The key difference is that in Tier 2 the responder digs deeper, and may use more specialized tools and scripts. Rather than looking at IDS for alerts, they mine it for indications of a broader attack. They are more likely to dig into network forensics tools to map out the intrusion/incident, as that provides the most data – especially if it includes effective analysis and visualization (crawling through packets by hand is a much slower process, and something to avoid at this level if possible). As discussed in our last post, simple network monitoring tools are helpful, but not sufficient for real analysis of incident data. So full packet capture is one of the critical pieces in the response toolkit.
  • Location-specific log management: We’re using this as a catch-all for digging into logs, although it may not necessarily involve a centralized log management tool. For application attacks, it means looking at the app logs. For system-level attacks, it means looking at the system logs. This also likely involves cross-referencing with authentication history, or anything else that helps characterize the attack and provide clues as to what is happening. In the size-up, the focus is on finding major indicators rather than digging out every bit of data.
  • Specialized tools: DLP, WAF, DAM, email/web security gateways, endpoint
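The egress example above (Tier 1 flags the activity, Tier 2 cross-references endpoint data from the source host) is the kind of correlation this tier does constantly. Here is a small, hypothetical sketch of that hand-off in Python; the record formats and thresholds are made up for the example and don’t correspond to any specific product.

```python
# Illustrative Tier 2 correlation: combine a network egress alert with endpoint
# telemetry from the same host to decide how urgently to treat possible exfiltration.

egress_alert = {"src_host": "ws-0417", "dst_ip": "203.0.113.50", "bytes_out": 2_400_000_000}

# Endpoint telemetry pulled for the same host and time window (hypothetical).
endpoint_events = [
    {"host": "ws-0417", "process": "outlook.exe", "files_read": 12},
    {"host": "ws-0417", "process": "archiver.exe", "files_read": 5_300},  # bulk file access
]

LARGE_TRANSFER_BYTES = 1_000_000_000
BULK_FILE_READS = 1_000

def characterize(alert, events):
    """Combine network and endpoint indicators into a rough priority."""
    bulk_readers = [e["process"] for e in events
                    if e["host"] == alert["src_host"] and e["files_read"] >= BULK_FILE_READS]
    if alert["bytes_out"] >= LARGE_TRANSFER_BYTES and bulk_readers:
        return ("likely exfiltration", bulk_readers)
    if alert["bytes_out"] >= LARGE_TRANSFER_BYTES:
        return ("large transfer, no endpoint corroboration yet", [])
    return ("monitor", [])

print(characterize(egress_alert, endpoint_events))
# -> ('likely exfiltration', ['archiver.exe'])
```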


RSA Guide 2011: Data Security

As someone who has covered data security for nearly a decade, some days I wonder if I should send Bradley Manning, Julian Assange, whoever wrote the HITECH act, and the Chinese hacker community a personal note of gratitude. If the first wave of data security was driven by breach disclosure laws and a mixture of lost laptops and criminal exploits, this second wave is all about stopping leaks and keeping your pants on in public. This year I’ve seen more serious interest among large enterprises in protecting more than merely credit card numbers than ever before. We also see PCI and the HITECH act (in healthcare) pushing greater investment in data security down to the mid-market. And while the technology is still far from perfect, it’s definitely maturing along nicely.

What We Expect to See

There are five areas of interest at the show for data security:

DLP – Great taste, less filling

There are two major trends in the Data Loss Prevention market: DLP Light comes of age, and full-suite DLP integration into major platforms. A large percentage of endpoint and network tools now offer basic DLP features. This is usually a regular expression engine or some other technique tuned to protect credit card numbers, and maybe a little personally identifiable information or healthcare data. Often this is included for free, or at least darn cheap. While DLP Light (as we call this) lacks mature workflow, content analysis capabilities, and so on, not every organization is ready for, or needs, a full DLP solution. If you just want to add some basic credit card protection, this is a good option. It’s also a great way to figure out if you need a dedicated DLP tool without spending too much up front. As for full-suite DLP solutions, most of them are now available from big vendors. Although the “full” DLP is usually a separate product, there’s a lot of integration at various points of overlap like email security or web gateways. There’s also a lot of feature parity between the vendors – unless you have some kind of particular need that only one fulfills, if you stick with the main ones you can probably flip a coin to choose.

The key things to ask when looking at DLP Light are what’s in the content analysis engine, and how incidents are managed. Make sure the content analysis technique will work for what you want to protect, and that the workflow fits how you want to manage incidents. You might not want your AV guy finding out the CFO is emailing customer data to a competitor. Also make sure you get to test it before paying for it. As for full-suite DLP, focus on how well it can integrate with your existing infrastructure (especially network gateways, directories, and endpoints). I also suggest playing with the UI, since that’s often a major deciding factor due to how much time security and non-security risk folks spend in it. Last of all, we’re starting to see more DLP vendors focus on the mid-market and on easing deployment complexity.

Datum in a haystack

Thanks to PCI 2.0 we can expect to see a heck of a lot of discussion around “content discovery”. While I think we all know it’s a good idea to figure out where all our sekret stuff is in order to protect it, in practice this is a serious pain in the rear. We’ve all screamed in frustration when we find that Access database or spreadsheet on some marketing server all chock full of Social Security numbers. PCI 2.0 now requires you to demonstrate how you scoped your assessment, and how you keep that scope accurate. That means having some sort of tool or manual process to discover where all this stuff sits in storage. Trust me, no marketing professional will possibly let this one pass. Especially since they’ve been trying to convince you it was required for the past 5 years. All full-suite DLP tools include content discovery to find this data, as do some DLP Light options. Focus on checking out the management side, since odds are there will be a heck of a lot of storage to scan, and results to filter through.

There’s a new FAM in town

I hate to admit this, but there’s a new category of security tool popping up this year that I actually like. File Activity Monitoring watches all file access on protected systems and generates alerts on policy violations and unusual activity. In other words, you can build policies that alert you when a sales guy about to depart is downloading all the customer files, without blocking access to them. Or when a random system account starts downloading engineering plans for that new stealth fighter. I like the idea of being able to track what files users access and generate real-time alerts. I started talking about this years ago, but there weren’t any products on the market. Now I know of 3, and I suspect more are coming down the pipe.

Battle of the tokens

Last year we predicted a lot of interest and push in encryption and tokenization, and for once we got it right. One thing we didn’t expect was the huge battle that erupted over ownership of the term. Encryption vendors started pushing encrypted data as tokens (which I find hard to call a token), while tokenization advocates try to convince you encryption is no more secure than guarding Hades with a chihuahua. The amusing part is all these guys offer both options in their products.

Play the WIKILEAKS! WIKILEAKS! APT! WIKILEAKS! PCI! HITECH! WIKILEAKS!!! drinking game

Since not enough of you are buying data security tools, the vendors will still do their best to scare your pants off and claim they can prevent the unpreventable. Amuse yourself by cruising the show floor with beer in hand and drinking anytime you see those words on marketing materials. It’s one drink per mention in a brochure, 2 drinks for a postcard handout, and 3
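For readers who haven’t seen what a “regular expression engine tuned to protect credit card numbers” actually looks like, here is a deliberately minimal sketch: a loose regex finds candidate numbers, and a Luhn checksum cuts the obvious false positives. Real DLP engines layer on context, proximity analysis, and validated issuer ranges; this is just to make the DLP Light concept concrete, and the pattern is an illustrative assumption rather than any vendor’s rule.

```python
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # loose: 13-16 digits with spaces/dashes

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    return [m.group(0) for m in CANDIDATE.finditer(text) if luhn_ok(m.group(0))]

sample = "Order ref 1234, card 4111 1111 1111 1111, thanks!"
print(find_card_numbers(sample))   # -> ['4111 1111 1111 1111']
```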


Why You Should Delete Your Twitter DMs, and How to Do It

I’ve been on Twitter for a few years now, and over that time I’ve watched not only its mass adoption, but also how people changed their communication habits. One of the most unexpected changes (for me) is how many people now use Twitter Direct Messages as instant messaging. It’s actually a great feature – with IM someone needs to be online and using a synchronous client, but you can drop a DM anytime you want and, depending on their Twitter settings and apps, it can follow them across any device and multiple communications methods. DM is oddly a much more reliable way to track someone down, especially if they link Twitter with their mobile phone.

The problem is that all these messages are persistent, forever, in the Twitter database. And Twitter is now one of the big targets when someone tries to hack you (as we’ve seen in a bunch of recent grudge attacks). I don’t really say anything over DM that could get me in trouble, but I also know that there’s probably plenty in there that, taken out of context, could look bad (as happened when a friend got hacked and some DMs were plastered all over the net). Thus I suggest you delete all your DMs occasionally. This won’t necessarily clear them from all the Twitter apps you use, but it does wipe them from the database (and the inboxes of whoever you sent them to).

This is tough to do manually, but, for now, there’s a tool to help. Damon Cortesi coded up DM Whacker, a bookmarklet you can use while logged into Twitter to wipe your DMs. Before I tell you how to use it, one big warning: this tool works by effectively performing a Cross-Site Request Forgery attack on yourself. I’ve scanned the code and it looks clean, but that could change at any point without warning, and I haven’t seriously programmed JavaScript for 10 years, so you really shouldn’t take my word on this one.

The process is easy enough, but you need to be in the “old” Twitter UI:

1. Go to the DM Whacker page and drag the bookmarklet to your bookmarks bar.
2. Log into Twitter and navigate to your DM page. If you use the “new” Twitter UI, switch back to the “old” one in your settings.
3. Click the bookmarklet. A box will appear in the upper right of the Twitter page.
4. Select what you want to delete (received and sent), or even filter by user.
5. Click the button, and leave the page running for a while. The process can take a bit, as it’s effectively poking the same buttons you would manually.
6. If you are really paranoid (like me), change your Twitter password. It’s good to rotate anyway.

And that’s it. I do wish I could keep my conversation history for nostalgia’s sake, but I’d prefer to worry less about my account being compromised. Also, not everyone I communicate with over Twitter is as circumspect, and it’s only fair to protect their privacy as well.
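If you would rather not run a third-party bookmarklet against your account, the same loop can be scripted against the Twitter API of the era (which exposed the DM inbox, sent messages, and a destroy call). The sketch below is hypothetical: TwitterClient is a stand-in stub so the example runs on its own, and a real version would need OAuth credentials and the actual endpoints, which have since changed.

```python
# Hypothetical sketch of API-based DM cleanup. TwitterClient is a stand-in stub,
# not a real library; it exists only so the deletion loop can be demonstrated.

class TwitterClient:
    """Stand-in for a real API client; returns canned data."""
    def __init__(self):
        self._received = [{"id": 1, "text": "hey"}, {"id": 2, "text": "call me"}]
        self._sent = [{"id": 3, "text": "on my way"}]

    def list_received_dms(self):
        return list(self._received)

    def list_sent_dms(self):
        return list(self._sent)

    def destroy_dm(self, dm_id):
        self._received = [d for d in self._received if d["id"] != dm_id]
        self._sent = [d for d in self._sent if d["id"] != dm_id]

def whack_all_dms(client, include_sent=True):
    """Delete every received (and optionally sent) DM, mirroring the bookmarklet."""
    targets = client.list_received_dms()
    if include_sent:
        targets += client.list_sent_dms()
    for dm in targets:
        client.destroy_dm(dm["id"])
    return len(targets)

client = TwitterClient()
print(whack_all_dms(client), "DMs deleted")   # -> 3 DMs deleted
```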


The Analyst’s Dilemma: Not Everything Sucks

There’s something I have always struggled with as an analyst. Because of the, shall we say, ‘aggressiveness’ of today’s markets and marketers, most of us in the analyst world are extremely cautious about ever saying anything positive about any vendor. This frequently extends to entire classes of technology, because we worry it will be misused or taken out of context to promote a particular product or company. Or, since every technology is complex and no blanket statement can possibly account for everyone’s individual circumstances, that someone will misinterpret what we say and get pissed when it doesn’t work for them.

What complicates this situation is that we do take money from vendors, both as advisory clients and as sponsors for papers/speaking/etc. They don’t get to influence the content – not even the stuff they pay to put their logos on – but we’re not stupid. If we endorse a technology and a vendor who offers it has their logo on that paper, plenty of people will think we pulled a pay-for-play. That’s why one of our hard rules is that we will never specifically mention a vendor in any research that’s sponsored by any vendor. If we are going to mention a vendor, we won’t sell any sponsorship on it.

But Mike and I had a conversation today where we realized that we were holding ourselves back on a certain project because we were worried it might come too close to endorsing the potential sponsor, even though it doesn’t mention them. We were writing bad content in order to protect objectivity. Which is stupid. Objectivity means having the freedom to say when you like something. Just crapping on everything all the time is merely being contrarian, and doesn’t necessarily lead to good advice.

So we have decided to take off our self-imposed handcuffs. Sometimes we can’t fully dance around endorsing a technology/approach without it ending up tied to a vendor, but that’s fine. They still never get to pay us to say nice things about them, and if some people misinterpret that, there really isn’t anything we can do about it. We have more objectivity controls in place here than any other analyst firm we’ve seen, including our Totally Transparent Research policy. We think that gives us the freedom to say what we like. And, to be honest, we can’t publish good research without that freedom.


React Faster and Better: Kicking off a Response

Everyone’s process is a bit different, but through our research we have found that the best teams tend to gear themselves around three general levels of response, each staffed with increasing expertise. Once an alert triggers, your goal is to filter out the day-to-day crud junior staffers are fully capable of handling, while escalating the most serious incidents through the response levels as quickly as possible. Having a killer investigation team doesn’t do any good if an incident never reaches them, or if their time is wasted on the daily detritus that can be easily handled by junior folks.

As mentioned in our last post, Organizing for Response, these tiers should be organized by skills and responsibilities, with clear guidelines and processes for moving incidents up (and sometimes down) the ladder. Using a tiered structure allows you to more quickly and seamlessly funnel incidents to the right handlers – keeping those with the most experience and skills from being distracted by lower-level events. An incident might be handled completely at any given level, so we won’t repeat the usual incident response fundamentals, but will instead focus on what to do at each level, who staffs it, and when to escalate.

Tier 1: Validate and filter

After an incident triggers, the first step is to validate and filter. This means performing a rapid analysis of the alert and either handling it on the spot or passing it up the chain of command. While incidents might trigger off the help desk or from another non-security source, the initial analysis is always performed by a dedicated security analyst or incident responder. The analyst receives the alert, and it’s his or her job to figure out whether the incident is real, and if it is, how severe it might be. These folks are typically in your Security Operations Center and focus on “desk analysis”. In other words, they handle everything right then and there, and aren’t running into data centers or around hallways. The alert comes in, they perform a quick analysis, and either close it out or pass it on. For simple or common alerts they might handle the incident themselves, depending on your team’s guidelines.

The team

These are initial incident handlers, who may be dedicated to incident response or, more frequently, carry other security responsibilities (e.g., network security analyst) as well. They tend to be focused on one or a collection of tools in their coverage areas (network vs. endpoint) and are the team monitoring the SIEM and network monitors. Higher tiers focus more on investigation, while this tier focuses more on initial identification.

  • Primary responsibilities: Initial incident identification, information gathering, and classification. They are the first human filter, and handle smaller incidents and identify problems that need greater attention. It is far more important that they pass information up the chain quickly than try to play Top Gun and handle things over their heads on their own. Good junior analysts are extremely important for quickly identifying more serious incidents for rapid response.
  • Incidents they handle themselves: Basic network/SIEM alerts, password lockouts/failures on critical systems, standard virus/malware. Typically limited to a single area – e.g., network analyst.
  • When they escalate: Activity requiring HR/legal involvement, incidents which require further investigation, alerts that could indicate a larger problem, etc.
The tools

The goal at this level is triage, so these tools focus on collecting and presenting alerts, and providing the basic investigative information we discussed in the fundamentals series.

  • SIEM: SIEMs aren’t always very useful for full investigations, but they do a good job of collecting and presenting top-level alerts and factoring in data from a variety of sources. Many teams use the SIEM as their main tool for initial reduction and scoping of alerts from other tools, and for filtering out the low-level crud, including obvious false positives. Central management of alerts from other tools helps identify what’s really happening, even though the rest of the investigation and response will be handled at the original source. This reduces the number of eyeballs needed to monitor everything and makes the team more efficient.
  • Network monitoring: A variety of network monitoring tools are in common use. They tend to be pretty cheap (and there are a few good open source options) and provide good bang for the buck, so you can get a feel for what’s really happening on your network. Network monitoring typically includes NetFlow, collected device logs, and perhaps even your IDS. Many organizations use these monitoring tools either as an extension of their SIEM environment or as a first step toward deeper network monitoring.
  • Full packet network capture (forensics): If network monitoring represents baby steps, full packet capture is your first bike. A large percentage of incidents involve the network, so capturing what happens on the wire is the linchpin of any analysis and response. Any type of external attack, and most internal attacks, eventually involve the network. The more heavily you monitor, the greater your ability to characterize incidents quickly, because you have the data to reconstruct exactly what happened. Unlike endpoints, databases, or applications, you can monitor a network deeply, passively, and securely, using tools that (hopefully) aren’t involved in the successful compromise (less chance of the bad guys erasing your network logs). You’ll use the information from your network forensics infrastructure to scope the incident and identify “touch points” for deeper investigation. At this level you need a full packet capture tool with good analysis capabilities – especially given the massive amount of data involved – even if you feed alerts to a SIEM. Just having the packets to look at, without some sort of analysis of them, isn’t as useful. Getting back to our locomotion example, deep analysis of full packet capture data is akin to jumping in the car.
  • Endpoint Protection Platform (EPP) management console: This is often your first source for incidents involving endpoints. It should provide up-to-date information on the endpoint as well as activity logs.
  • Data Loss Prevention
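To make the Tier 1 “close it out or pass it on” decision concrete, here is a small, hypothetical sketch of the triage filter: known-benign patterns get closed, a short list of simple categories gets handled at the desk, and everything else is escalated with whatever has been gathered so far. The categories and rules are illustrative assumptions, not a recommended rule set.

```python
# Hypothetical Tier 1 triage sketch: validate, filter, and route raw alerts.
# Alert fields and rules are illustrative only.

KNOWN_FALSE_POSITIVES = {"vuln-scanner-noise", "backup-job-traffic"}
HANDLE_AT_DESK = {"password-lockout", "standard-malware", "basic-ids-signature"}

def triage(alert: dict) -> str:
    """Return 'close', 'handle', or 'escalate' for a single alert."""
    if alert["signature"] in KNOWN_FALSE_POSITIVES:
        return "close"       # filtered: day-to-day crud
    if alert["category"] in HANDLE_AT_DESK and not alert.get("critical_asset"):
        return "handle"      # desk analysis, document, and close
    return "escalate"        # gather what you have and pass it up the chain

alerts = [
    {"signature": "backup-job-traffic", "category": "netflow-spike"},
    {"signature": "ids-4721", "category": "password-lockout"},
    {"signature": "ids-9134", "category": "egress-anomaly", "critical_asset": True},
]
for a in alerts:
    print(a["signature"], "->", triage(a))
# backup-job-traffic -> close
# ids-4721 -> handle
# ids-9134 -> escalate
```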


Register for Our Cloud Security Training Class at RSA

As we previously mentioned, we will teach the very first CSA Cloud Computing Security Knowledge (Enhanced) class the Sunday before RSA. We finally have some more details and the registration link. The class costs $400 and includes a voucher worth $295 to take the Cloud Computing Security Knowledge (CCSK) exam. We are working with the CSA, and this is our test class to check out the material before it is sent to other training organizations. Basically, you get a full day of training with most of the Securosis team for $105. Not bad, unless you don’t like us. The class will be in Moscone and includes a mix of lecture and practical exercises. You can register online, and we hope to see you there! (Yes, that means it’s a long week. I’ll buy anyone in the class their first beer to make up for it.)


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.