Securosis Research

Now China is stealing our porn

Okay, it is entirely possible he paid for it, but HOW DO WE KNOW? From "U.S. Finds Porn Not Secrets on Suspected China Spy’s PC":

"A Chinese research scientist suspected of spying on the National Aeronautics and Space Administration – and pulled from a plane in March as he was about to depart for China – is set to plead to a misdemeanor charge of violating agency computer rules. Bo Jiang, who was indicted March 20 for allegedly making false statements to the U.S., was charged yesterday in a separate criminal information in federal court in Newport News, Virginia. Jiang unlawfully downloaded copyrighted movies and sexually explicit films onto his NASA laptop, according to the court filing. A plea hearing is set for tomorrow."

This is why it’s important to read breaking news with skepticism. Not that China is above this sort of theft, per documented history, but that doesn’t mean everyone is working for APT1138.


The CISO’s Guide to Advanced Attackers: Breaking the Kill Chain

In our last post in the CISO’s Guide to Advanced Attackers, you verified the alert, so it’s time to spring into action. This is what you get paid for – and to be candid, your longevity in the CISO role directly correlates with your ability to contain the damage and recover from attacks as quickly and efficiently as possible. But no pressure, right? So let’s work through the steps involved in breaking the kill chain: disrupting the attackers, taking countermeasures, and/or getting law enforcement involved.

Incident response needs to be a structured and conditioned response. Work to avoid setting policies during firefights, even though it’s not possible to model every potential threat or gain consensus on every possible countermeasure. Try to define the most likely scenarios and get everyone on board with appropriate tactics for containment and remediation. Those scenarios then provide a basis for making decisions in situations that don’t quite match your models. At least you can spin why you made certain decisions in the heat of battle.

Contain the Damage

As we described in Incident Response Fundamentals, containment can be challenging because you don’t know exactly what’s going on, but you need to intervene as quickly as practical. The first requirement is very clear: do not make things worse. Make sure you provide the best opportunity for your investigators (both internal and external) to isolate and study the incident. Be careful not to destroy data by turning off and/or unplugging machines without first taking appropriate forensic images. Keeping the discussion high-level, containment typically involves two main parts:

Quarantine the device: Isolate the device quickly so it doesn’t continue to perform reconnaissance, move laterally within your network, infect other devices, or progress toward completing its mission and stealing your data. You may monitor the device as you figure out exactly what you are doing, but make sure it doesn’t cause any more harm.
Protect critical data: One reason to quarantine the device is to ensure that it cannot continue to mine your network and possibly exfiltrate data. But you also can’t assume the compromised device you identified is the only one. So go back to the potential targets you outlined when you sized up the adversary, and take extra care to protect the critical data most interesting to your adversary. One thing we know about advanced attackers is that they generally have multiple paths to accomplish their mission. You may have discovered one (the compromised device), but there are likely more. So apply a little extra diligence to monitoring data access and egress points, to help disrupt the kill chain in case of multiple compromises.

Investigate and Mitigate

Your next step is to identify the attack vectors and determine appropriate remediation paths. As mentioned above, you want to gather just as much information as you need to mitigate the problem (stop the bad guys), and collect it in a way that doesn’t preclude subsequent legal (or other) action at some point. For more details on malware investigation techniques, we point you again to Malware Analysis Quant for a very granular attack investigation process. When it comes to mitigation you will set a series of discrete, achievable goals and assign resources to handle them. Just like any other project, right? But when dealing with advanced attackers you have a few remediation paths to consider:

Clean: People also call this the Big Bang approach because you need to do it quickly and completely – if you leave the attacker any foothold in your environment, you will start all over again sooner rather than later. Most organizations opt for this approach – the sooner you clean your environment the better.

Observe: In certain instances, such as when you are dealing with an inside job or law enforcement is involved, you may be asked not to clean all the compromised machines.
But as described above, you need to take extra care to ensure you don’t suffer further losses while observing the attackers. That involves deep monitoring (likely network full packet capture and memory forensics) of traffic in and out of critical data stores – as well as tightening controls on egress filters and/or DLP gateways.

Disinformation: Another less common alternative is to actively provide disinformation to adversaries. That might involve dummy bids, incorrect schematics, or files with tracking data which might help identify the attacker. This is a very advanced tactic, generally performed with the guidance of law enforcement or a very select third-party incident response firm.

Executing the Big Bang

To get rid of an advanced attacker you need to find all the compromised devices. We have been talking about how to do that by searching for indicators of compromise, but you cannot assume you have seen and profiled all the malware in use. Those pesky advanced attackers may be throwing 0-day attacks at you. This, again, is where threat intelligence comes in, to look for patterns others have seen (though not likely your specific files). Once you have identified all the affected devices (and we mean all of them), they need to go dark at the same time. You cannot leave the adversary an opportunity to compromise other devices or execute a contingency plan to retain a foothold while you work through your machines during cleanup. This probably entails wiping the machines down to bare metal – even if that means losing data. Given the capabilities of advanced attackers, you cannot be sure of totally eliminating the compromise any other way. When the affected devices are wiped and rebuilt you need to monitor them and capture egress traffic during a burn-in period to make sure you didn’t miss anything.
That means scrutinizing all configuration changes for indications that the attacker is breaking back in or finding new victims, as well as looking for command and control indicators. The moment the adversary is blown out they will start working double-time to get back in. You are never done. So you need to ensure your
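The IOC sweep described above can be sketched in a few lines. This is a toy illustration, not real threat intelligence tooling – the log lines and indicator values are invented, and production matching works over parsed fields (DNS queries, file hashes, mutexes) rather than raw substrings:

```python
def match_iocs(log_lines, ioc_domains, ioc_hashes):
    """Return (line_number, indicator) pairs for lines containing a known IOC."""
    hits = []
    for n, line in enumerate(log_lines, 1):
        lowered = line.lower()
        for indicator in list(ioc_domains) + list(ioc_hashes):
            if indicator.lower() in lowered:
                hits.append((n, indicator))
    return hits

# Hypothetical log excerpt and indicator, for illustration only
logs = [
    '2013-05-03 10:01 DNS query evil-c2.example.cn from 10.0.4.7',
    '2013-05-03 10:02 GET /index.html 200',
]
print(match_iocs(logs, ['evil-c2.example.cn'], []))  # → [(1, 'evil-c2.example.cn')]
```

The point is only the shape of the check: every device's logs get swept against every indicator, and anything that hits goes on the "goes dark with the rest" list.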


Off topic: Cycling is the new golf

From the Economist:

"TRADITIONALLY, business associates would get to know each other over a round of golf. But road cycling is fast catching up as the preferred way of networking for the modern professional. A growing number of corporate-sponsored charity bike rides and city cycle clubs are providing an ideal opportunity to talk shop with like-minded colleagues and clients while discussing different bike frames and tricky headwinds. Many believe cycling is better than golf for building lasting working relationships, or landing a new job, because it is less competitive."

Oh, biking is definitely competitive, but not as directly competitive. Anyway, call this one wishful thinking on my part – I would rather ride than golf any day. Then again I have only ridden once in a business context, and it blew away every other team building/networking/whatever exercise in my entire professional history.


Malware string in iOS app interesting, but probably not a risk

From Macworld: iOS app contains potential malware:

"The app Simply Find It, a $2 game from Simply Game, seems harmless enough. But if you run Bitdefender Virus Scanner – a free app in the Mac App Store – it will warn you about the presence of a Trojan horse within the app. A reader tipped Macworld off to the presence of the malware, and we confirmed it."

I looked into this for the article, and aside from blowing up my schedule today it was pretty interesting. Bitdefender found a string which calls an iframe pointing to a malicious site in our favorite top-level domain (.cn). The string was embedded in an MP3 file packaged within the app. The short version is that despite my best attempts I could not get anything to happen, and even when the MP3 file plays in the (really bad) app it never tries to connect to the malicious URL in question. Maybe it is doing something really sneaky, but probably not. At this point people better at this than me are probably digging into the file, but my best guess is that a cheap developer snagged a free music file from someplace, and the file contained a limited exploit attempt to trick MP3 players into accessing the payload’s URL when they read the ID3 tag. Maybe it targets an in-browser music player. The app developer included this MP3 file, but the app’s player code isn’t vulnerable to the MP3’s exploit, so nothing bad happens. It’s interesting, and could easily slip by Apple’s vetting if there is no way the URL could trigger. Maybe we will hear more when people perform deeper analysis and report back, but I doubt it. I suspect the only thing exploited today was my to-do list.
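If you are curious what this kind of check looks like, scanning a file's raw bytes for an embedded iframe-style URL takes only a few lines. This is a simplified sketch – the blob and the "evil" URL are made up, and real scanners parse the ID3 structure properly rather than grepping bytes:

```python
import re

# Matches an <iframe ... src="..."> target anywhere in a blob of bytes
IFRAME_RE = re.compile(rb'<iframe[^>]*src=["\']?(https?://[^"\'\s>]+)', re.IGNORECASE)

def find_embedded_iframe_urls(data: bytes):
    """Return any iframe target URLs found in a blob, e.g. an MP3's ID3 tag."""
    return [m.decode('ascii', 'replace') for m in IFRAME_RE.findall(data)]

# Synthetic ID3-ish blob with an injected iframe, for illustration only
blob = b'ID3\x03\x00COMM<iframe src="http://bad.example.cn/x.php"></iframe>'
print(find_embedded_iframe_urls(blob))  # → ['http://bad.example.cn/x.php']
```

A hit like this proves only that the string is present – as in the Simply Find It case, whether anything ever dereferences that URL is a separate question.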


Getting Logstalgic

Good tip here in a post from the Chief Monkey about a new open source log visualization tool called Logstalgia. It basically shows web access logs visualized as a pong game, so all of you folks in my age bracket will really appreciate it. Here is the description from the project page:

"Logstalgia is a website traffic visualization that replays or streams web-server access logs as a pong-like battle between the web server and a never-ending torrent of requests. Requests appear as colored balls (the same color as the host) which travel across the screen to arrive at the requested location. Successful requests are hit by the paddle while unsuccessful ones (e.g. 404 – File Not Found) are missed and pass through. The paths of requests are summarized within the available space by identifying common path prefixes. Related paths are grouped together under headings. For instance, by default paths ending in png, gif or jpg are grouped under the heading Images. Paths that don’t match any of the specified groups are lumped together under a Miscellaneous section."

So how do you use it? Basically, figuring out whether you have an issue is about spotting weird patterns, and this pong-looking visualization is definitely interesting for that. For example, if you are getting hammered by a small set of IP addresses, that will be pretty easy to see using the tool. If your site is dropping a lot of traffic, you’ll see that too. Check out the video. Not only does it have cool music, but your mind should be racing in terms of how you’d use the tool in your day-to-day troubleshooting. Does it provide a smoking gun? Nope. But it gives you a way to visualize sessions in an interesting way, and the price is right. Good tip, Chief. Thanks.
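Logstalgia consumes standard NCSA-format web access logs, so anything emitting that format can feed it. As a sketch of what each record it animates contains (the sample line is invented), here is a minimal parser for the common log format:

```python
import re

# NCSA common log format -- the record format Logstalgia replays
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d{3}) (?P<size>\S+)'
)

def parse_clf(line):
    """Return the fields of one access-log line, or None if it doesn't match."""
    m = CLF.match(line)
    return m.groupdict() if m else None

line = '10.0.0.1 - - [03/May/2013:13:55:36 -0700] "GET /index.html HTTP/1.0" 404 209'
print(parse_clf(line)['status'])  # → 404
```

The host field becomes the colored ball and the status field decides whether the paddle "catches" it; the project page also documents streaming a live log into the tool over stdin (along the lines of `tail -f access.log | logstalgia -`).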


Friday Summary: May 3, 2013

I was weirdly interested in Paul Miller’s year off the Internet. Paul is a writer for The Verge, and they actually paid him to keep writing (offline) through the year instead of kicking him to the curb like most publications would have. Spoiler: in retrospect the entire thing was a mix of isolating and asinine.

"And now I’m supposed to tell you how it solved all my problems. I’m supposed to be enlightened. I’m supposed to be more “real,” now. More perfect. But instead it’s 8PM and I just woke up. I slept all day, woke with eight voicemails on my phone from friends and coworkers. I went to my coffee shop to consume dinner, the Knicks game, my two newspapers, and a copy of The New Yorker. And now I’m watching Toy Story while I glance occasionally at the blinking cursor in this text document, willing it to write itself, willing it to generate the epiphanies my life has failed to produce. I didn’t want to meet this Paul at the tail end of my yearlong journey."

Paul is still just as happy or miserable as he was a year ago, except now he doesn’t know who Honey Boo Boo is. Or maybe he does because, without the Internet, he probably watched entirely too much bad cable television. Or local news. Technology doesn’t move backwards. At least not until we blow the planet up, create a life-eliminating disease, the robots convert us to fuel, or the nanobots ingest every organic molecule and turn the planet into grey goo (pick one – maybe two). The Internet is here to stay, and disconnecting is more likely to make you less happy because you would lose one of the few communications channels that works in our distributed society. As Paul learned, the Internet is merely an enabler. If you’re lazy and procrastinate, it isn’t like you need the Internet for that. If you get too wrapped up in Facebook or Twitter, odds are you were the same way with memos and water coolers – and could be again.
The Internet does allow some people to bypass certain psychological and social limitations around face-to-face interaction, but the Internet isn’t what actually made them assholes in the first place. But yes, the Internet can most definitely exacerbate certain behaviors: it weakens social herd immunity, and it enables nut jobs to congregate more freely. I have personally found great value in moderating my Internet consumption, but I’m not so foolish as to think its total elimination would buy me anything. Especially because I now have kids, I try to make sure they know I’m focused on them and not a screen in my hand. Mostly it’s a matter of not letting myself get caught up in a bunch of garbage that doesn’t matter (especially on Twitter), obsessing over the news, or spending countless hours reading things that really don’t affect my life or improve my education. It’s all a balance. I’m far from perfect, but I suppose my extreme lack of leisure time makes it easier for me to focus. So I am proud to announce, much to your relief (yeah, right), that I am not leaving Twitter, Facebook, email, or the Internet in general. On the other hand, I reserve the right to check them when I want, not respond to every email, and not apologize for missing that blog post. The Internet is a big part of my life, but my life is much more than the Internet.

–Rich

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted in Macworld on an iOS app which includes a malware string.

Favorite Securosis Posts

Mike Rothman: Twitter security for media companies. These are good tips for every company, but more urgent for media companies given the recent Twitter hacks. This is a big deal for companies that provide shared access to corporate Twitter accounts. At some point we would like to see Twitter support federation (perhaps as a subscription service) so companies can define who can do what with their account, and enforce those entitlements. Details, details.
(Editor’s note – Twitter supports OAuth, so it does allow this. –Rich)

Adrian Lane: Trailblazing Equality.

Rich: Socially engineering (trading) bots.

Other Securosis Posts

Off topic: Cycling is the new golf.
Malware string in iOS app interesting, but probably not a risk.
Getting Logstalgic.
Security Analytics with Big Data: Use Cases.
Gaming the pirates – literally.
Google Glass Has Already Been Hacked By Jailbreakers.
Security Funding via Tin Cup.
IaaS Encryption: External Key Manager Deployment and Feature Options.
IaaS Encryption: Encrypting Entire Volumes.

Favorite Outside Posts

Mike Rothman: 102 hours in pursuit of Marathon suspects. Unbelievable story detailing the hunt for the Boston Marathon suspects. Really great reporting to produce a full account.

Adrian Lane: It’s time for a Chief API Officer. While I don’t think a development trend warrants its own C-level executive, the importance of APIs to development is hard to overstate.

David Mortman: Cryptography is a systems problem (or) ‘Should we deploy TLS’.

Rich: One security equation to rule them all. I would like to see the formal proof but this looks accurate.

Research Reports and Presentations

Email-based Threat Intelligence: To Catch a Phish.
Network-based Threat Intelligence: Searching for the Smoking Gun.
Understanding and Selecting a Key Management Solution.
Building an Early Warning System.
Implementing and Managing Patch and Configuration Management.
Defending Against Denial of Service (DoS) Attacks.
Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
Tokenization vs. Encryption: Options for Compliance.
Pragmatic Key Management for Data Encryption.
The Endpoint Security Management Buyer’s Guide.

Top News and Posts

McAfee Patents Technology to Detect and Block Pirated Content. Sound like a bad idea to anyone else? Pirates hate piracy (when it happens to them). How long before we see Greenheart’s data on Pastebin?

Syrian Electronic Army Hijacks Guardian Twitter Accounts.
Army? It’s probably three guys living in the Bronx.

Samsung Delays Android Security Software.

Blue For The Pineapple. Step-by-step tutorial on turning the Fon AccessPoint into a stealthy WiFi hijacker.

Defense contractor pwned by


IaaS Encryption: External Key Manager Deployment and Feature Options

Deployment and topology options

The first thing to consider is how you want to deploy external key management. There are four options:

An HSM or other hardware key management appliance. This provides the highest level of physical security, but the appliance will need to be deployed outside the cloud. When using a public cloud this means running the key manager internally, relying on a virtual private cloud, and connecting the two with a VPN. In private clouds you run it somewhere on the network near your cloud, which is much easier.

A key management virtual appliance. Your vendor provides a pre-configured virtual appliance (instance) for you to run in your private cloud. We do not recommend you run this in a public cloud because – even if the instance is encrypted – there is significantly more exposure to live memory exploitation and loss of keys. If you decide to go this route anyway, use a vendor that takes exceptional memory protection precautions. A virtual appliance doesn’t offer the same physical security as a physical server, but it comes hardened and supports more flexible deployment options – you can run it within your cloud.

Key management software, which can run either on a dedicated server or within the cloud on an instance. The difference between software and a virtual appliance is that you install the software yourself rather than receiving a configured and hardened image. Otherwise it offers the same risks and benefits as a virtual appliance, assuming you harden the server (instance) as well as the virtual appliance.

Key management Software as a Service (SaaS). Multiple vendors now offer key management as a service specifically to support public cloud encryption. This also works for other kinds of encryption, including private clouds, but most usage is for public clouds.

There are a few different deployment topologies, which we will discuss in a moment. When deploying a key manager in a cloud there are a few wrinkles to consider.
The first is that if you have hardware security requirements, your only option is to deploy an HSM or encryption/key management appliance compatible with the demands of cloud computing – where you may have many more dynamic network connections than in a traditional network (note that raw key operations per second is rarely the limiting factor). This can be on-premises with your private cloud, or remote with a VPN connection to the virtual private cloud. It could also be provided by your cloud provider in their data center, offered as a service, with native cloud API support for management. Another option is to store the root key on your own hardware, but deploy a bastion provisioning and management server as a cloud instance. This server handles communications with encryption clients/agents and orchestrates key exchanges, but the root key database is maintained outside the cloud on secure hardware.

If you don’t have hardware security requirements, a number of additional options open up. Hardware is often required for compliance reasons, but isn’t always necessary. Virtual appliances and software servers are fairly self-explanatory. The key issue (no pun intended) is that you are likely to need additional synchronization and orchestration to handle multiple virtual appliances in different zones and clouds. We will talk about this more in a moment, when we get to features. As with deploying a hardware appliance, some key management service providers also deploy a local instance to assist with key provisioning (this is provider dependent and not always needed). In other cases the agents will communicate directly with the cloud provider over the Internet. A final option is for the security provider to partner with the cloud provider and install some components within the cloud to improve performance, enhance resilience, and/or reduce Internet traffic – which cloud providers charge for.
To choose an appropriate topology, answer the following questions:

Do you need hardware-level key security?
How many instances and key operations will you need to support?
What is the topology of your cloud deployment? Public or private? Zones?
What degree of separation of duties and keys do you need?
Are you willing to work with a key management service provider?

Cloud features

For a full overview of key management servers, see our paper Understanding and Selecting a Key Management Solution. Rather than copying and pasting an 18-page paper, we will focus on a few cloud-specific requirements we haven’t otherwise covered yet. If you use any kind of key management service, pay particular attention to how keys are segregated and isolated between cloud consumers and from service administrators. Different providers have different architectures and technologies to manage this, and you should map your security requirements against how they manage keys. In some cases you might be okay with a provider having the technical ability to get your keys, but this is often completely unacceptable. Ask for technical details of how they manage key isolation and the root of trust. Even if you deploy your own encryption system you will need granular isolation and segregation of keys to support cloud automation. For example, if a business unit or development team is spinning up and shutting down instances dynamically, you will likely want to provide the capability to manage some of their own keys without exposing the rest of the organization. Cloud infrastructure is more dynamic than traditional infrastructure, and relies more on Application Programming Interfaces (APIs) and network connectivity – you are likely to have more network connections from a greater number of instances (virtual machines). Any cloud encryption tool should support APIs and a high number of concurrent network connections for key provisioning.
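As a sketch of what API-driven key provisioning looks like from the instance side, the fragment below builds (but does not send) an authenticated key request. Everything here is invented for illustration – the endpoint, URL paths, header scheme, and identifiers are hypothetical, and every real key management product defines its own API:

```python
import json
import urllib.request

def build_key_request(base_url, key_id, instance_id, api_token):
    """Build (but don't send) an authenticated key-retrieval request.

    Hypothetical API: POST {base}/v1/keys/{id}/unwrap, passing an instance
    identifier so the key manager can decide whether this particular
    instance is allowed to receive the key.
    """
    body = json.dumps({'instance_id': instance_id}).encode('utf-8')
    return urllib.request.Request(
        url=f'{base_url}/v1/keys/{key_id}/unwrap',
        data=body,
        method='POST',
        headers={
            'Authorization': f'Bearer {api_token}',   # invented auth scheme
            'Content-Type': 'application/json',
        },
    )

req = build_key_request('https://keymanager.example', 'vol-key-7', 'i-0abc123', 'TOKEN')
print(req.method, req.full_url)
```

The design point is the extra context in the request body: because cloud instances appear and disappear dynamically, the key manager authorizes each request per instance rather than per static network address.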
For volume encryption look for native clients/agents designed to work with your specific cloud platform. These are often able to provide information above and beyond standard encryption agents to ensure only acceptable instances access keys. For example they might provide instance identifiers, location information, and other indicators which do not exist on a non-cloud encryption agent. When they are available you might use them to only allow an instance to


Security Analytics with Big Data: Use Cases

Why do we use big data for security analytics? Aside from big data hype in the press, what motivates customers to look for new solutions? On the other side of the coin, why are vendors altering their products to use – or at least integrate with – big data? In our discussions with customers they cite performance and scalability, particularly for security event analysis. In fact this research project was originally outlined as a broad examination of the potential of big data for security analytics. But the customers we speak with don’t care about generalities – they need to solve existing problems, specifically around installed SIEM and log management systems. So we refocused this research on their need to scale beyond what they have today and get more from existing investments; big data is a means to that end. Today’s post focuses on the customer use cases and delves into why SIEM, log management, and other event-centric monitoring systems struggle under evolving requirements. Data velocity and clustered data management are new terms in IT, but they define two core characteristics of big data. This is no coincidence – as IT practitioners learn more about the promise of big data they apply its capabilities to the problems of existing SIEM solutions. The inherent strengths of big data overlap beautifully with SIEM deficiencies in the areas of scalability, analysis speed, and rapid data insertion. And given the potential for greater analysis capabilities, big data is viewed as a way to both keep pace with exploding volumes of event data and do more with it. Specific use cases drive interest in big data. Big data analytics are expanding, and complement SIEM. But the reason it is such a major trend is that big data addresses important issues in existing platforms. To serve prospective buyers we need to understand the issues that drive them to investigate new products and solutions.
The basic issues above are the ones that always seem to plague SIEM – scaling, efficiency, and detection of threats – but those are generic placeholders for more specific demands.

Use Cases

More (Types of) Data – The problem we heard most often was “We need to analyze more types of data to get better analysis”. The need to include more data types, beyond traditional netflow and syslog event streams, is driven by the desire to derive actionable information from the sea of data. Threat intelligence is not a simple signature, and detection is more complex than reviewing a single event. Communications data such as Twitter streams, blog comments, voice, and other rich data sources are unstructured and require different parsing algorithms to interpret. Netflow and syslog data are highly structured, with each element defined by its location within a record. Blog comments, phishing emails, botnet C&C, or malicious files? Not so much. The problems with accommodating more types of data are scalability and usability. First, adding data types means handling more data, and existing systems often can’t handle any more. Adding capacity to already taxed systems often requires costly add-ons. Rolling out additional data collectors and servers to process their output takes months, and the cost in IT time can be prohibitive as well. That all assumes the SIEM architecture can scale up to greater volumes of data coming in faster. Second, many of these systems cannot handle alternative data types – either they normalize the data in a way that strips much of its value, or the system lacks suitable tools for analyzing alternate (raw) data types. Most systems have evolved to include configuration management and identity information, but they don’t handle Twitter feeds or diverse threat intelligence. Given evolving attack profiles, the flexibility to capture and dig into any data type is now a key requirement.
Anti-Drill-Down – We have seen steady advances in aggregation, correlation, dashboards, and data enrichment to help security folks identify security threats faster. But these iterative advancements have not kept pace with the volume of security data that needs to be parsed, nor the diversity of attack signatures. Overall situational awareness has not improved, and the signal-to-noise ratio has gotten worse instead of better. The entire process – the entire mindset – has been called into question. Today the typical process is as follows: a) An event or combination of events that looks interesting is captured. b) SIEM correlates and enriches data to provide better context, analyzes data in terms of rules, and generates an alert if it detects an anomaly. c) To verify that a suspicious event is indeed a threat, generally a human must “drill down” into a combination of machine-readable and human-readable data to make sense of it. The security practitioner must cross-reference multiple data sources. Enrichment is handy, but too much manual analysis is still required to weed through false positives. In many cases the analyst extracts data to run other scripts or tools to produce the final analysis – we have even seen exports to MS Excel to find outliers and detect fraud. We need better analytics tools with more options than simple SQL queries and pattern matching. The types of analysis SIEMs can perform are limited, and most SIEM solutions lack programmatic extensions to enable more complex analysis. “The net result is we always get a blob of stuff we have to sift through, then verify, investigate, validate and, often, adjust the policy to filter out more detritus.” The anti-drill-down use case demands more automated checking, using more powerful analytics and data mining tools than simple scripts and SQL queries.

Architectural Limitations – Some customers attribute their performance issues – especially lagging timely threat analysis – to SIEM architecture and process.
It takes time to gather data, move it to a central location, normalize it, correlate, and then enrich. This generally makes near-real-time analysis a fantasy. Queries run on centralized event servers, and often take minutes to complete, while compliance reports generally take hours. Some users report that the volume of data stresses their systems, and queries on relational servers take too long to complete. Centralized computation limits the speed and timeliness of analysis and reporting. The current
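The anti-drill-down idea – automate the first-pass sifting an analyst would otherwise do by hand or in Excel – can be illustrated with a trivial outlier check over event counts per source. The event data and threshold are invented for illustration; real analytics platforms run far richer models over far more data:

```python
from collections import Counter
from statistics import mean, stdev

def flag_noisy_sources(events, threshold=1.5):
    """Flag sources whose event counts exceed mean + threshold * stdev."""
    counts = Counter(e['src'] for e in events)
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    return [src for src, c in counts.items() if c > mu + threshold * sigma]

# Synthetic events: one source is far chattier than the rest
events = ([{'src': '10.0.0.99'}] * 50
          + [{'src': '10.0.0.1'}] * 2
          + [{'src': '10.0.0.2'}] * 3
          + [{'src': '10.0.0.3'}] * 2
          + [{'src': '10.0.0.4'}] * 3)
print(flag_noisy_sources(events))  # → ['10.0.0.99']
```

Even this crude statistical filter shows the shape of the use case: the machine surfaces the handful of candidates worth a human's drill-down, instead of handing the analyst the whole blob.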


Incite 5/1/2013: Trailblazing Equality

I recently took the Boy to see “42,” which I highly recommend for everyone. It’s truly a great (though presumably dramatized) story about Jackie Robinson and Branch Rickey as they tore down the color line in major league baseball. My stepfather knew Jackie Robinson pretty well and always says great things about him. It seems the movie downplayed the abuse he took, alone, as he worked to overcome stereotypes, bigotry, and intolerance to move toward the ideal of the US founding fathers that “all men are created equal”. But importantly the movie successfully conveyed the significance of his actions and the courage of the main players. As unlikely as it seemed in 1945 that we would have a black man playing in the major leagues, it must have felt similarly unlikely that we would have an openly gay man playing in the NBA (or any major league sport). Except that it’s not. Jason Collins emerged from his self-imposed dungeon after 12 years in the NBA and became the first NBA player to acknowledge that he’s gay. It turns out men of all creeds, colors, nationalities, and sexual orientations play professional sports. Who knew? This was a watershed moment in the drive toward equal rights. NFL writer Mike Freeman Tweeted that it was a great day in his life: “(I) get to see a true civil rights moment unfold instead of reading about it in a book.” Those interested in equality are ecstatic. Those wanting to maintain the status quo, not so much. I tend to not discuss my personal views on politics, religion, or any other hot topic publicly. The reality is that I believe what I believe, and you believe what you believe. We can have a good, civil discussion about those views, but I’m unlikely to change my mind and you are unlikely to change yours. Most such discussions are a complete waste of time. I accept your right to believe what you want and I hope you accept mine. Unfortunately the world isn’t like that. 
There was a tremendous amount of support for Jason Collins from basketball players, other athletes, and even the President of the United States. There was also a lot of bigotry, ignorance, and hatred spewed in his direction. But when he stepped out of the closet he knew that would be the case. He was ready. And he is laying the groundwork for other gay athletes to emerge from their darkness. As Jackie Robinson blazed the trail for athletes like Roy Campanella, Larry Doby, and Satchel Paige to play in the majors, Jason Collins will be the first of many professional athletes to embrace who they are and stop hiding. I think it’s great. Hats off to Jason Collins and all of the other courageous gay athletes who will become known in the months and years to come. Although you may disagree, which is cool. You are entitled to your own opinions. But to be clear, you can’t stop it. This genie is out of the bottle, and it’s not going back in.

–Mike

Photo credits: Sports Illustrated cover – May 6, 2013

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too. 
Defending Cloud Data/IaaS Encryption

  • Encrypting Entire Volumes
  • Protecting Volume Storage
  • Understanding Encryption Systems
  • How IaaS Storage Works
  • IaaS Encryption

Security Analytics with Big Data

  • Introduction

The CISO’s Guide to Advanced Attackers

  • Verify the Alert
  • Mining for Indicators
  • Intelligence, the Crystal Ball of Security
  • Sizing up the Adversary

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

The worst press release of the year: It kills me to do this, but this week I need to slam an “article” on Dark Reading that claims users don’t care about security. This is clearly a press release posted as if it were a news article, which deliberately confuses readers. As an occasional writer for DR (and huge supporter of the team there), it hurts to see such drivel intermingled with good content. Unfortunately many online publications now post press releases as articles in the ongoing battle to collect page views, which is a horrific practice that should be destroyed. Back to the press release, which has more hyperbole than the Encyclopedia of Hyperbole. It claims that users don’t care about security because they reuse passwords and don’t track the latest threats. That’s stupid. They reuse passwords because the alternatives don’t work for most average users. They don’t track threats or obsess about security because it isn’t their job. At least most FUD press releases make minor nods to reality – this one doesn’t even pretend. It reeks of desperation. Pathetic. – RM

Stepping into the AV time machine: I know this OPSWAT post, Varied Antivirus Engine Detection of Two Malware Outbreaks, is dated April 13, 2013, but it feels like 2003. 
It talks about the need to use multiple detection engines because anti-virus vendors add signatures for new attacks at different times. Wait. What? Evidently no one told these guys that blacklists are dead. But this seems to be a recurring theme – I recently got into it with another MSS, who told me how great it is that they can scan traffic with two different AV engines to catch advanced malware. I tried to tell him, delicately, that they wouldn’t catch much advanced malware with 15 AV engines, but they could certainly crush their throughput. I guess I shouldn’t be surprised – AV remains the primary control to fight attacks, even though it’s not good enough. Sigh. – MR Always the last to know: Wendy Nather had exactly the same thought I did on the latest Verizon Data Breach Report, and hit the nail on the


IaaS Encryption: Encrypting Entire Volumes

As we mentioned in our last post, there are three options for encrypting entire storage volumes:

  • Instance-managed
  • Externally-managed
  • Proxy

We will start with the first two today, then cover proxy encryption and some deeper details on cloud key managers (including SaaS options) next.

Instance-managed encryption

This is the least secure and least manageable option; it is generally only suitable for development environments, test instances, or other situations where long-term manageability isn’t a concern. Here is how it works:

  • The encryption engine runs inside the instance. Examples include TrueCrypt and the Linux dm-crypt tool.
  • You connect a new second storage volume.
  • You log into your instance and use the encryption engine to encrypt the new storage volume. Everything is inside the instance except the raw storage, so you use a passphrase, file-based key, or digital certificate for the key.

You can also use this technique with a tool like TrueCrypt to create and mount a storage volume that’s really just a large encrypted file stored on your boot volume. Any data stored on the encrypted volume is protected from being read directly from the cloud storage (for instance if a physical drive is lost, or a cloud administrator tries to access your files using their native API), but it is accessible from the logged-in instance while the encrypted volume is mounted. This protects you from many cloud administrators, because only someone with actual access to log into your instance can see the data, which is something even a cloud administrator can’t do without the right credentials. This option also protects data in snapshots. Better yet, you can snapshot a volume and then connect it to a different instance, so long as you have the key or passphrase. Instance-managed encryption also works well for both public and private clouds. The downside is that this approach is completely unmanageable. 
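The passphrase approach above can be sketched in a few lines of Python. This is a minimal illustration of the key handling, not any particular product’s implementation: the passphrase is stretched into a volume key in memory, and only the random salt (which is safe to expose) would ever be written to the unencrypted boot volume.

```python
import hashlib
import secrets

def derive_volume_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a passphrase into a 256-bit volume key with scrypt.

    Only the salt needs to be stored (e.g., alongside the volume
    header); the passphrase is typed in manually at mount time, so
    no key material rests on the unencrypted boot volume.
    """
    return hashlib.scrypt(
        passphrase.encode("utf-8"),
        salt=salt,
        n=2**14, r=8, p=1,          # CPU/memory cost parameters
        maxmem=64 * 1024 * 1024,    # allow scrypt's ~16 MB working set
        dklen=32,                   # 32 bytes = 256-bit key
    )

salt = secrets.token_bytes(16)  # random; safe to store with the volume
key = derive_volume_key("correct horse battery staple", salt)
```

The manual-intervention problem is visible right in the sketch: someone has to supply that passphrase every time the volume is mounted, which is exactly why this model breaks down for reboots and autoscaling.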
The only moderately secure option is to use a passphrase when you mount the encrypted volume, which requires manual intervention every time you reboot the instance or connect it (or a snapshot) to a different instance. For security reasons you can’t store the key (or passphrase) in a file in the instance, or use a stored digital certificate, because anything stored on the unencrypted boot volume of the instance is exposed. This is especially limiting because, as of this writing, we know of no way to use this approach to encrypt a bootable instance – it only works for ‘external’ storage volumes. In other words, this is fine for test and development, or for exchanging data with someone else by whole volumes, but it should otherwise be avoided.

Externally-managed encryption

Externally-managed encryption is similar to instance-managed, but the keys are handled outside the instance by a key management server or Hardware Security Module (HSM). This is an excellent option for most cloud deployments. With this option the encryption engine (typically a client/agent for whatever key management tool you are using) connects to an external key manager or HSM. The key is provided subject to the key manager’s security checks, and then used by the engine or client to access the storage volume. The key is never stored on disk in the instance, so the only exposure is in RAM (or snapshots of RAM). Many products further reduce this exposure by overwriting the keys’ memory when the keys aren’t in active use. As with instance-managed encryption, storage volumes and snapshots are protected from cloud administrators. But using an external key manager offers a wealth of new benefits:

  • It supports reboots, autoscaling, and other cloud operations that instance-managed encryption simply cannot.
  • The key manager can perform additional security checks, which can be quite in-depth, to ensure only approved instances access keys. It can then provide keys automatically, or alert a security administrator for quick approval.
  • Auditing and reporting are centralized, which is essential for security and compliance.
  • Keys are centrally managed and stored, which dramatically improves manageability and resiliency at enterprise scale.
  • Externally-managed encryption supports a wide range of deployment options, such as hybrid clouds, and can even manage keys for multiple clouds. This approach works well for both public and private clouds.
  • A new feature just becoming available even allows you to encrypt a boot volume, similar to laptop full disk encryption (FDE). This isn’t currently possible with any other volume encryption option, and it is only available in some products.

There are a few downsides, including:

  • The capital investment is greater – you need a key management server or HSM, and a compatible encryption engine.
  • You must install and maintain a key management server or HSM that is accessible to your cloud infrastructure.
  • You need to ensure your key manager/HSM will scale with your cloud usage. This is less an issue of how many keys it stores than of how well it performs in a cloud, or when connecting to a cloud (perhaps due to network latency).

This is often the best option for encrypting volume storage, but our next post will dig into the details a bit more – there are many deployment and feature options.
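Conceptually, the externally-managed flow looks like the following Python sketch. All class and method names here are our own hypothetical illustration, not any vendor’s API: the key manager authenticates the requesting instance before releasing a key, and the instance holds the key only in memory, overwriting it when idle.

```python
import hashlib
import hmac
import secrets

class KeyManager:
    """Hypothetical stand-in for an external key manager / HSM."""

    def __init__(self):
        self._keys = {}              # key_id -> volume key, held outside the instance
        self._instance_secrets = {}  # instance_id -> shared authentication secret

    def register_instance(self, instance_id: str) -> bytes:
        # Provisioned out of band when the instance is approved.
        secret = secrets.token_bytes(32)
        self._instance_secrets[instance_id] = secret
        return secret

    def create_key(self, key_id: str) -> None:
        self._keys[key_id] = secrets.token_bytes(32)

    def fetch_key(self, instance_id: str, key_id: str,
                  challenge: bytes, proof: bytes) -> bytes:
        # Security check: release the key only to an instance that can
        # prove knowledge of its registered secret. A real product would
        # also log the request for centralized auditing.
        expected = hmac.new(self._instance_secrets[instance_id],
                            challenge, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, proof):
            raise PermissionError("instance failed key manager authentication")
        return self._keys[key_id]

# Instance side: authenticate, use the key, then overwrite it in memory.
km = KeyManager()
secret = km.register_instance("i-0123")
km.create_key("volume-key-1")

challenge = secrets.token_bytes(16)  # in real protocols the key manager issues this
proof = hmac.new(secret, challenge, hashlib.sha256).digest()
key = bytearray(km.fetch_key("i-0123", "volume-key-1", challenge, proof))

# ... the encryption engine would mount/decrypt the volume with `key` here ...

for i in range(len(key)):  # reduce RAM exposure while the key is not in use
    key[i] = 0
```

Because the key arrives over the wire on each request, reboots and autoscaled instances can re-authenticate and remount automatically, which is exactly what the instance-managed model cannot do.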


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.