IaaS Encryption: How to Choose

There is no single right way to pick the best encryption option. Which is ‘best’ depends on many factors, including the specifics of the cloud deployment, what you already have for key management or encryption, the nature of the data, and so on. That said, here are some guidelines that should work in most cases.

Volume storage

  • Always use external key management. Instance-managed encryption is only acceptable for test/development systems you know will never go into production.
  • For sensitive data in public cloud computing, choose a system with protection for keys in volatile memory (RAM). Don’t use a cloud’s native encryption capabilities if you have any concern that a cloud administrator is a risk.
  • In private clouds you may also need a product that protects keys in memory, if sensitive data is encrypted in instances sharing physical hosts with untrusted instances that could perform a memory attack.
  • Pick a product designed to handle the more dynamic cloud computing environment – specifically one with workflow for rapidly provisioning keys to cloud instances, and API support for the cloud platform you use.
  • If you need to encrypt boot volumes and not just attached storage volumes, select a product with a client that includes that capability, and make sure it works with the operating systems you use for your instances. On the other hand, don’t assume you need boot volume support – it all depends on how you architect your cloud applications.
  • The two key features to look for, after platform/topology support, are granular key management (role-based, with good isolation/segregation) and good reporting.
  • Know your compliance requirements, and use hardware (such as an HSM) if needed for root key storage.
  • Key management services may reduce the overhead of building your own key infrastructure, if you are comfortable with how they handle key security. As cloud natives they may also offer other performance and management advantages, but this varies widely between products and cloud platforms/services.

It is hard to be more specific without knowing more about the cloud deployment, but these questions should get you moving in the right direction. The main things to understand before you start looking for a product are:

  • What cloud platform(s) are we on?
  • Are we using public or private cloud, or both? Does our encryption need to be standardized between the two?
  • What operating systems will our instances run?
  • What are our compliance and reporting requirements?
  • Do we need boot volume encryption for instances? (Don’t assume this – it isn’t always a requirement.)
  • Do root keys need to be stored in hardware? (This is generally a compliance requirement, because virtual appliances and software servers are actually quite secure.)
  • What is our cloud and application topology? How often (and where) will we be provisioning keys? (See the sketch of this workflow below.)
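On that last question, here is a minimal sketch of what externally managed key provisioning for a volume can look like: the instance authenticates to an external key manager at boot, pulls the volume key into memory only, and hands it to the encryption engine without ever writing it to disk. The key server URL, API, and token scheme are illustrative assumptions rather than any specific product – treat this as a sketch of the workflow, not an implementation.

```python
import subprocess

import requests  # any HTTPS client would do

# Hypothetical external key manager -- the URL, API, and token scheme are
# illustrative assumptions, not a specific product.
KEY_SERVER = "https://keys.example.internal/api/v1/volume-key"

def fetch_volume_key(volume_id: str, instance_token: str) -> bytes:
    """Authenticate to the external key manager and fetch the volume key.

    The key lives only in this process's memory -- it is never written to
    the instance's disk, so a snapshot or stolen image exposes nothing.
    """
    resp = requests.post(
        KEY_SERVER,
        json={"volume_id": volume_id},
        headers={"Authorization": f"Bearer {instance_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return bytes.fromhex(resp.json()["key_hex"])

def unlock_volume(device: str, name: str, key: bytes) -> None:
    """Open the encrypted volume, feeding the key via stdin (never argv)."""
    subprocess.run(
        ["cryptsetup", "open", "--type", "luks", "--key-file", "-", device, name],
        input=key,
        check=True,
    )

if __name__ == "__main__":
    key = fetch_volume_key("vol-0123", instance_token="<instance-identity-token>")
    unlock_volume("/dev/xvdf", "encrypted-data", key)
```

The point of the design is that a stolen snapshot or image of the instance exposes no key material – the key exists only in RAM, which is also why protection for keys in memory matters in hostile multi-tenant environments.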
Object storage

  • For server-based object storage, such as you use to back an application, a cloud encryption gateway is likely your best option. Use a system where you manage the keys – not your cloud provider – and don’t store those keys in the cloud.
  • To support users on services like Dropbox, use a software client/agent with centralized key management. If you want to support mobile devices, make sure the product you select has apps for the mobile platforms you support.

As you can see, figuring out object storage encryption is usually much easier than volume storage.

Conclusion

Encryption is our best tool for protecting cloud data. It allows us to separate security from the cloud infrastructure without losing the advantages of cloud computing. By splitting key management from the data storage and encryption engines, it supports a wide array of deployment options and use cases. We can now store data in multi-tenant systems and services without compromising security.

In this series we focused on protecting data in IaaS (Infrastructure as a Service) environments, but keep in mind that alternate encryption options, including encrypting data when you collect it in an application, might be a better choice – or an additional option for greater granularity.

Encrypting cloud data can be more complex than on traditional infrastructure, but once you understand the basics, adapting your approach shouldn’t be too difficult. The key is to remember that you shouldn’t merely replicate how you encrypt and manage keys (assuming you even do) in your traditional infrastructure. Understand how you use the cloud, and adapt your approach so encryption becomes an enabler – not an obstacle to moving forward with cloud computing.


Security earnings season in full swing

Most folks think you need to be a day-trading financial junkie to have any interest in quarterly earnings releases and/or conference call transcripts. But you can learn a lot from following the results of your strategic security vendors – and of companies you don’t do business with, but who would like to do business with you. You can glean stuff about overall market health, significant problem spaces, technology innovation, and business execution.

For instance, if you are thinking about upgrading your perimeter network security gear you have a bunch of options, most of them public companies. You cannot glean much about Cisco, Juniper, IBM, Dell, or HP through their conference calls. Security is barely a rounding error for those technology behemoths, although a company like Intel does talk a little bit about its McAfee division because it is key to its growth prospects. But if you pay attention to the smaller public companies, such as Symantec, Check Point, Fortinet, Palo Alto, Sourcefire, Imperva, Websense, Qualys, etc., you can learn how those bigger companies are competing. You need to keep in mind that you get a very (very very) skewed perspective, but it provides some ammo when challenging sales reps from those big companies.

You can also learn a lot about business: how certain channel strategies work, or don’t work, which can help optimize how you procure technology. You can get a feel for R&D spend by your key vendors, which is important to the health of their new product pipelines. You should also read the Q&A transcripts, where investment analysts ask about different geographies, margins, product growth, and a host of other things. This information won’t help you configure your devices more effectively, but it does help you understand the folks you do business with, and feel better about writing big checks to your strategic vendors. Especially when you know the big deal they mention in the conference call is you.

Here is a list of transcripts for the major publicly traded security companies. And if your favorite company (or the one in your 401k) isn’t here, it’s likely because they haven’t announced their Q1 results yet (like Splunk), or they may still be private.

  • Symantec FQ4 2013 Earnings Call Transcript
  • Check Point Q1 2013 Earnings Call Transcript
  • Fortinet Q1 2013 Earnings Call Transcript
  • Sourcefire Q1 2013 Earnings Call Transcript
  • Qualys Q1 2013 Earnings Call Transcript
  • Imperva Q1 2013 Earnings Call Transcript
  • Websense Q1 2013 Earnings Call Transcript
  • Proofpoint Q1 2013 Earnings Call Transcript
  • SolarWinds Q1 2013 Earnings Call Transcript
  • VASCO Data Security Q1 2013 Earnings Call Transcript
  • Zix Q1 2013 Earnings Call Transcript

That should keep you busy for a little while…

Photo credit: “scrooge-mcduck” originally uploaded by KentonNgo


Database Breach Results in $45M Theft

Today’s big news is the hack against banking systems to pre-authenticate thousands of ATM and pre-paid debit cards. The attackers essentially modified debit card databases in several Middle Eastern banks, then leveraged their virtual cards into cash. From the AP newswire:

Hackers got into bank databases, eliminated withdrawal limits on pre-paid debit cards and created access codes. Others loaded that data onto any plastic card with a magnetic stripe – an old hotel key card or an expired credit card worked fine as long as they carried the account data and correct access codes. A network of operatives then fanned out to rapidly withdraw money in multiple cities, authorities said. The cells would take a cut of the money, then launder it through expensive purchases or ship it wholesale to the global ringleaders. Lynch didn’t say where they were located. The targets were reserves held by the banks to fund pre-paid credit cards, not individual account holders, Lynch said … calling it a “virtual criminal flash mob.” The plundered ATMs were in Japan, Russia, Romania, Egypt, Colombia, Britain, Sri Lanka, Canada and several other countries, and law enforcement agencies from more than a dozen nations were involved in the investigation, U.S. prosecutors said.

It’s not clear how many of the thieves have been caught, or what percentage of the cash has been retrieved. Apparently this was the second attack; the first successfully pulled $5 million from ATMs. Police only caught up with some of the attackers on the second attack, after they had managed to steal another $40M. How the thefts were detected is not clear, but it appears to have been part of a murder investigation of one of the suspects, rather than fraud detection software within the banking system. The banks are eager to point to the use of mag stripe cards as the key issue here, but if your database is owned, an attacker can direct funds to any account.


McAfee Gets Some NGFW Stones

In hindsight we should have seen this coming. I mean, it’s not like McAfee even showed up for the most recent NSS Labs next-generation firewall (NGFW) test. They made noise about evolving their IPS – I mean Network Security Platform – to offer integrated firewall capabilities. But evidently it was either too hard or would have taken too long (or both) to build a competitive product. So McAfee solved the problem by writing a $389MM check for Stonesoft.

You haven’t heard of Stonesoft? They aren’t a household name, but they have had a competitive firewall product for years, with decent distribution in Europe and a very small presence in the US. They did about $50MM in revenue last year and are publicly traded in Finland.

I guess what’s surprising is that the buyer wasn’t Cisco, Juniper, IBM, or HP. What about Cisco’s blank check to regain competitiveness in the security business? If it’s not connected to an SDN, apparently Juniper isn’t interested. I guess IBM and HP hope that if they continue to ignore the NGFW market it will just go away. Hope is not a strategy. And as perimeter consolidation continues (and it is happening – regardless of what IPS vendors tell you), if you don’t have a competitive integrated product you won’t be in the game for long. So McAfee needed to make this move – certainly before someone else did.

But it’s not all peaches and cream. McAfee has their work cut out for them. It’s not like they have really excelled at integrating any of their larger acquisitions, and they have to reconcile their existing IPS platform with Stonesoft’s integrated capabilities. Don’t forget about the legacy SideWinder proxy firewall, which continues to show up a lot in highly secure government environments. Why have one integrated platform when you can have 3? How they communicate the roadmap and assure customers (who are already looking at other alternatives) will determine the success of this deal.

To further complicate matters, integration plans are basically on hold due to some wacky Finnish laws that prevent real integration until the deal is essentially closed. It is unlikely they will be able to do any real planning until the fall (when they have acquired 50% of the stock), and cross-selling cannot start until they have 90% of the stock tendered – probably early 2014. Details, details.

The NGFW game of musical chairs is about to stop, and the move toward the Perimeter Security Gateway is going to begin. The M&A in the space is pretty much done, because there just aren’t any decent alternatives left to buy without writing a multi-billion-dollar check. Vendors without a competitive NGFW offering are likely to see their network security revenues plummet within 2 years. Select your network security vendors accordingly.

Photo credit: “Stone Pile” originally uploaded by Mark McQuitty


Incite 5/8/2013: One step at a time

Do you ever look at your To Do list and feel like you want to just run away and hide? Me too. I talk a lot about consistent effort and not trying to hit home runs, but working for a bunch of singles and doubles. That works great for run rate activities like writing the Incite and my blog series. But I am struggling to move forward on a couple very important projects that are bigger than a breadbox and critical to the business. It is annoying the crap out of me, and I figure publicly airing my issues might help me push through them.

I have tried to chunk up these projects into small tasks. That’s how you defeat overwhelm, right? But here it just means I need to push a bunch of tasks back and back and back in my Todo app, rather than just one. I think my problem is that I feel like I need a block of time sufficient to complete a smaller task. But I rarely have a solid block of a couple hours to focus and write, so I get stuck and don’t even start. But that’s nonsense. I don’t have to finish the entire task now – I just need to do a bit every day, and sure enough it will get done. Is that as efficient as clearing the calendar, shutting off Twitter and email, and getting into the zone? Nope. It will definitely take longer to finish, but I can make progress without finishing the entire task. Really, I can.

As much as I try to teach my kids what they need to know, every so often I learn from them too. XX1 just finished her big year-end project. It was a multi-disciplinary project involving science, language arts, and social studies. She invented a robot (J-Dog 6.2) that would travel to Jupiter for research. We went to the art store and got supplies so she could mock up the look of the robot; she had to write an advertisement for the product, a user manual, and a journal in the robot’s voice to describe what was happening – among other things. She did a great job. I’m not sure where she got her artistic chops or creativity, but the Boss and I didn’t help her much at all.

How does that relate to my issue getting big things done? She worked on the project a little every day. She cut the pieces of the model one day. Painted it the next. Outlined the journal on the third. And so on. It’s about making progress, one step at a time. She finished two days early, so she didn’t have to do an all-nighter the day before – like her old man has been known to do.

So I need to take a lesson and get a little done. Every day. Chip away at it. I have an hour left in my working day, so I need to get to work…

–Mike

Photo credits: XX1 Geobot project – May 2013

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
  • Defending Cloud Data/IaaS Encryption
    • Object Storage
    • Encrypting Entire Volumes
    • Protecting Volume Storage
    • Understanding Encryption Systems
  • Security Analytics with Big Data
    • Use Cases
    • Introduction
  • The CISO’s Guide to Advanced Attackers
    • Evolving the Security Program
    • Breaking the Kill Chain
    • Verify the Alert
    • Mining for Indicators

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

I (for the record) am not the world’s greatest lover: I don’t know Troy Hunt, but he probably isn’t either. But this awesome post basically supports his claim as the world’s greatest lover by stating “I could quite rightly say that nobody has ever demonstrated that this is not the case and there are no proven incidents that disprove it.” Then he goes on to lampoon the website security seals from your favorite big security vendors – not just noting that they can’t really justify their assurances that something is secure, but showing screenshots of these ‘protected’ sites busted by simple attacks. As funny (in a sad way) as this is, ultimately it won’t make much of a difference, because the great unwashed think those seals actually mean something. – MR

Nuclear powered 0-day: This is a bit of a weird one. Internet Explorer 8, and only IE version 8, is being actively exploited in the wild with a 0-day attack. It is always interesting when a vulnerability only works on one version of IE and doesn’t affect earlier or later versions. Additionally, the malware was propagated through a US Department of Labor website, and only to people researching illnesses associated with work on nuclear weapons. Clearly the attackers were targeting a certain demographic, but I haven’t seen any reports of actual exploitation, which is the part we should be most interested in (except the DoL website – they totally pwned that one). It seems like a bit of an outlier attack, because I don’t expect too many of their targets to look on the DoL site for that information, but what do I know? As we have learned, these espionage attacks are basically targeted spray and pray: attacking every possible path to their desired targets, understanding that the law of averages is in their favor. – RM

Learn it. Know it. Live it.: Security professionals talk about how developers don’t understand security, but the Coverity team throws it right back at them with 10 Things Developers Wished Security People Knew. This is sound advice for security people working with software development. The underlying belief is that all these things require security to get to know the people, process, and code


Some (re)assembly required

Japanese Coast Guard ship (indirectly) sold to North Korea:

“The vessel was sold in a state in which information regarding operational patterns of the patrol vessel could have been obtained by some party,” an official told the paper. “We were on low security alert at that time.”

That is certainly not the case these days, with heightened tensions on the Korean peninsula and the Japanese coast guard regularly involved in patrols around the disputed Diaoyu (Senkaku) islands.

Like hardware, data has a lifecycle. Eventually you will need to dispose of the data and/or the device that stores/processes/transmits it (and these days, all the cloudy services connecting to it…). Embedded systems, from ship navigation systems to ‘quantified self’ devices such as Fitbit, should be included in data lifecycle analyses when relevant, and treated as appropriate for the sensitivity of the data that could be extracted. As this story shows, the sensitivity of data or business processes is not static – it changes with political tensions, among other factors. So it is important to periodically reassess policies on information disposal, and to consider how sensitive information may be hiding in the nooks and crannies of devices you thought were harmless at the time.

…the Coast Guard admitted that there were no policies in place to remove data recording equipment or wipe data before selling decommissioned vessels, meaning the same thing could have happened on other occasions.

Oh goodie.


Finger-pointing is step 1 of the plan

Dennis Fisher writes in Finger-Pointing on Cyberespionage does little good without a plan:

The acknowledgement from the Pentagon, in truth, feels fairly anticlimactic. It’s the equivalent of Mark McGwire admitting to using steroids – 10 years after every fan in the country had already accepted that fact. At some point it becomes sort of silly to even mention it. Water is wet, ice cream is delicious and China is attacking our networks. It just is.

It’s a good piece, but it misses a couple key elements: in geopolitics, finger-pointing is an essential part of every plan, and execution on cybersecurity started a few years ago (Aurora/Google and Lockheed). This is a propaganda campaign to generate political and popular support, and nothing – nothing – progresses without this foundation. The problem is the invasion by other special interests, including copyright holders, which complicates the narrative. Make no mistake – this story was written years ago, and we are just watching the latest episodes air.


IaaS Encryption: Object Storage

Sorry, but the title is a bit of a bait and switch. Before we get into object storage encryption we need to cover using proxies for volume encryption.

Proxy encryption

The last encryption option uses an inline software encryption proxy to encrypt and decrypt data. This option doesn’t work for boot volumes, but it may allow you to encrypt a wider range of storage types, and it offers an alternate technical architecture for connecting to external volumes.

The proxy is a virtual appliance running in the same zone as the instance accessing the data and the storage volume. We are talking about IaaS volumes in this section, so that will be our focus. The storage volume attaches to the proxy, which performs all cryptographic operations. Keys can be managed in the proxy, or externally using the options we already discussed. The proxy uses memory protection techniques to resist memory parsing attacks, and never stores unencrypted keys in its own persistent storage.

The instance accessing the data then connects to the proxy using a network file system/sharing protocol such as iSCSI. Depending on the pieces used, this could, for example, allow multiple instances to connect to a single encrypted storage volume.

Protecting object storage

Object storage such as Amazon S3, OpenStack Swift, and Rackspace Cloud Files is fairly straightforward to encrypt, with three options:

  • Server-side encryption
  • Client/agent encryption
  • Proxy encryption

As with our earlier examples, overall security depends on where you place the encryption agent, the key management, and the data. Before we describe these options we need to address the two types of object storage. Object storage itself, like our examples above, is accessed and managed only via APIs, and forms the foundation of cloud data storage (although it might use traditional SAN/NAS underneath). There are also a number of popular cloud storage services, including Dropbox, Box.com, and Copy.com – as well as applications for building private internal systems – which include basic object storage but layer PaaS and SaaS features on top. Some of these even rely on Amazon, Rackspace, or another “root” service to handle the actual storage. The main difference is that these services tend to add their own APIs and web interfaces, and offer clients for different operating systems – including mobile platforms.

Server-side encryption

With this option all data is encrypted in storage by the cloud platform itself. The encryption engine, keys, and data all run within the cloud platform and are managed by the cloud administrators. This option is extremely common at many public cloud object storage providers, sometimes at no additional cost.

Server-side encryption really only protects against a single threat: lost media. It is more of a compliance tool than an actual security tool, because the cloud administrators have the keys. It may offer minimal additional security in private cloud storage, but it still fails to disrupt most of the dangerous attack paths to the data. So server-side encryption is good for compliance and may be useful in private clouds, but it offers no protection against cloud administrators – and depending on configuration it may provide little protection for your data in case of management plane compromise.
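To make the trust model concrete, here is a minimal sketch of requesting server-side encryption from Amazon S3 using boto3, the AWS SDK for Python. The bucket and object names are placeholders; the point is visible in the comments – the provider generates and holds the keys, so this only addresses lost media and compliance.

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to encrypt the object at rest with provider-managed keys (SSE-S3).
# The provider generates and holds the master key, so this addresses lost
# media and compliance -- not a malicious or compelled cloud administrator.
with open("q1.csv", "rb") as fh:
    s3.put_object(
        Bucket="example-bucket",        # placeholder bucket name
        Key="reports/q1.csv",
        Body=fh,
        ServerSideEncryption="AES256",
    )

# Reads are transparently decrypted for any credentialed caller -- the
# client never touches a key, which is exactly the limitation.
obj = s3.get_object(Bucket="example-bucket", Key="reports/q1.csv")
print(obj["Body"].read()[:80])
```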
Client/agent encryption

If you don’t trust the storage environment, your best option is to encrypt the data before sending it up. We call this Virtual Private Storage because, as with a Virtual Private Network, we turn a shared public resource into a private one by encrypting the information on it while retaining the keys. The first way to do this is with an encryption agent on the host connecting to the cloud service.

This is architecturally equivalent to externally managed encryption for storage volumes. You install a local agent to encrypt/decrypt the data before it moves to the cloud, but manage the keys in an external appliance, service, or server. Technically you could manage keys locally, as with instance-managed encryption, but that is even less useful here than for volume encryption, because object storage is normally accessed by multiple systems, so we always need to manage keys in multiple locations.

The minimum architecture comprises encryption agents and a key management server. Agents implement the cloud’s native object storage API, and provide logical volumes or directories with decrypted access to the encrypted volume, so applications do not need to handle cloud storage or encryption APIs. This option is most often used with cloud storage and backup services, rather than for direct access to root object storage. Some agents are advances on file/folder encryption, especially for tools like Dropbox or Box.com which are accessed as a normal directory on client systems. But stock agents need to be tuned to work with the specific platform in question – which is outside our object storage focus.

Proxy encryption

One of the best options for business-scale use of object storage, especially public object storage, is an inline or cloud-hosted proxy. There are two main topologies:

  • The proxy resides on your network, and all data access runs through it for encryption and decryption. The proxy uses the cloud’s native object storage APIs.
  • The proxy runs as a virtual appliance in either a public or private cloud.

You also have two key management options – internal to the proxy or external – and the usual deployment options: hardware/appliance, virtual appliance, or software. Proxies are especially useful for object storage because they are a very easy way to implement Virtual Private Storage: you route all approved connections through the proxy, which encrypts the data and then passes it on to the object storage service.

Object storage encryption proxies are evolving very quickly to meet user needs. For example, some tie into the Amazon Web Services Storage Gateway to keep some data local and some in the cloud for faster performance. Others not only proxy to the cloud storage service, but also function as a normal network file share for local users.
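To see why client-side encryption earns the Virtual Private Storage name, here is a minimal sketch, assuming the Python `cryptography` library and placeholder bucket/object names: data is encrypted locally with a key you hold, so the provider only ever stores ciphertext. A real agent would add external key management and per-object keys; this just shows where the trust boundary sits.

```python
import boto3
from cryptography.fernet import Fernet  # authenticated symmetric encryption

# You generate and keep the key -- it is never sent to the provider. In a
# real deployment it would come from your external key management server.
key = Fernet.generate_key()
f = Fernet(key)

s3 = boto3.client("s3")

# Encrypt locally, then upload: the provider stores only ciphertext.
with open("q1.csv", "rb") as fh:
    plaintext = fh.read()
s3.put_object(Bucket="example-bucket",          # placeholder bucket name
              Key="reports/q1.csv.enc",
              Body=f.encrypt(plaintext))

# Download and decrypt. Without the key -- cloud admins included -- the
# stored object is just noise.
blob = s3.get_object(Bucket="example-bucket",
                     Key="reports/q1.csv.enc")["Body"].read()
assert f.decrypt(blob) == plaintext
```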


The CISO’s Guide to Advanced Attackers: Evolving the Security Program

The tactics we have described so far are very useful for detecting and disrupting advanced attackers, even if used only in one-off situations. But you can and should establish a more structured and repeatable process, especially if you expect to be an ongoing target of advanced attackers. So you need to evolve your existing security program, including incident response capabilities. What exactly does that mean? It means you need to factor in the tactics you will see from advanced attackers, and increase the sophistication of your intelligence gathering, active controls, and incident response.

Change is hard – we get that. Unless you have just had a breach – then it’s easy. At that point, instead of budget pressure you get a mandate to fix it no matter the cost, and you will face little resistance to changing process to ensure success with the next response. Even without a breach as catalyst you can make these kinds of changes, but you will need some budgetary kung fu, with strategic use of recent high-profile attacks to make your point.

But even leveraging a breach doesn’t necessarily produce sustainable change, regardless of how much money you throw at the problem. Evolving these processes involves more than figuring out what to do now, or even in the future – those are short-term band-aids. Success requires empowering your folks to rise to the challenge of advanced attackers. Don’t just pile more work on – make sure they can accept their additional responsibilities, and recognize them for stepping up. This provides an opportunity for some managers to take on more important responsibilities, and ensures everyone is on the hook to get something done. Just updating processes and printing out new workflows won’t change much unless adequate resources and clear accountability are in place to ensure change actually happens.

Identify Gaps

Start evolving your program by identifying gaps in the status quo. That’s easiest when you are cleaning up a breach, because it is usually pretty obvious what worked, what didn’t, and what needs to change. Without a breach you can use periodic risk assessments or penetration tests to pinpoint issues. But regardless of the details of your gaps, or how you find them, it is essential that you (as the senior security professional) drive the process changes to address them. Accountability starts and ends with the senior security professional, with or without the CISO title. Be candid about what went wrong and right with senior management and your team, and couch the discussion in terms of improving your overall capability to defend against advanced attackers.

Intelligence Gathering

The next aspect of detecting advanced attackers is building an intelligence gathering program to provide perspective on what is happening out there. Benefit from the misfortune of others, remember? Larger organizations tend to formalize an intelligence group, while smaller entities need to add intelligence gathering and analysis to the task lists of existing staff. Of all the things that could land on a security professional, intelligence research isn’t a bad extra responsibility. It provides exposure to cutting-edge attacks and makes a difference in your defenses. That’s how you should sell it.

Once you determine organizational structure and accountability for intelligence, you’ll need to focus on integration points with the rest of your active (defensive) and passive (monitoring) controls. Is the intelligence you receive formatted to integrate directly into your firewall, IPS, and WAF? What about integration with your SIEM or forensics tools? Don’t forget about analyzing malware – isolating and searching for malware indicators is key to detecting advanced attackers. A minimal sketch of that integration question follows below.
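As an illustration of what “formatted to integrate directly into your firewall” can mean, here is a minimal sketch assuming a hypothetical indicator feed in CSV form (type,value,source); it turns network indicators into block rules for review. Real feeds, formats, and enforcement points vary widely – the file name, feed schema, and choice of iptables here are illustrative assumptions.

```python
import csv
import ipaddress

def load_ipv4_indicators(path):
    """Read IPv4 indicators from a hypothetical CSV feed: type,value,source."""
    addresses = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["type"] != "ipv4":
                continue  # hashes, domains, etc. belong in other controls
            ipaddress.ip_address(row["value"])  # raises ValueError on garbage
            addresses.append(row["value"])
    return addresses

def to_iptables(addresses):
    """Emit one DROP rule per indicator, for human review before deployment."""
    return [f"iptables -A INPUT -s {addr} -j DROP" for addr in addresses]

if __name__ == "__main__":
    for rule in to_iptables(load_ipv4_indicators("indicators.csv")):
        print(rule)
```

The validation step is the point: intelligence feeds are only useful when their format is clean enough to feed controls without manual rework.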
Understand that more sophisticated and mature environments should push beyond searching for technical indicators of compromise. Mature intelligence processes include proactive intelligence gathering about potential and active adversaries, as we described earlier. If you don’t have those capabilities internally, which of your service providers can offer them, and how can you use them?

Finally, you will need to determine your stance on information sharing. We are big fans of sharing what you see with folks like you (same industry, similar company size, geographical neighbors, etc.) so you can learn from each other. The key to information sharing networks (aside from trust) is the signal-to-noise ratio – it is easy for active networks to generate lots of chatter that isn’t relevant to you. As with figuring out integration points, you need accountability and structure for collecting and using information from sharing networks.

Tracking Innovation

Another aspect of dealing with advanced attackers is tracking industry innovation in managing them. We have done considerable research into evolving endpoint controls, network-based advanced malware detection, and the application of intelligence (Early Warning, Network-based Threat Intelligence, Email-based Threat Intelligence) to understand how these technologies can help. But all those technologies together cannot provide the sustainable change you need. So who in your organization will be responsible for evaluating new technologies? How often? You might not have budget to buy all the latest and greatest shiny objects to hit the market, but you still need to know what’s out there, and you might need to find the money to buy something that solves a sufficiently serious problem.

We have seen organizations assemble a new technology task force, comprised of promising individual contributors from each of the key security disciplines. These folks monitor their areas of expertise, meet with innovative start-ups and other companies, go to security conferences, and leverage research services to evaluate new technologies. At periodic meetings they present what they find – not just what the shiny object does, but also how it would change what the organization does, and why that would be better. This shows not just whether they can parrot back what a vendor tells them, but how well they can apply the new capability to existing control sets.

Evolving DFIR

As we have discussed throughout this series, a key aspect of detecting advanced attackers is digital forensics and incident response (DFIR). First you need to ensure responders have adequate tools to determine what happened and analyze attacks. So you need to revisit your data collection infrastructure, and


2FA isn’t a big enough gun

The arms race goes on and on. The folks at Trusteer recently found an evolved type of malware designed to game financial institutions’ two-factor authentication (2FA) mechanisms on compromised devices. This is Darwin at work, folks – why should attackers try to rob banks when they can mug everyone who comes out with money? Whatever gun you have, they come back with a bigger one. This is fun, right?

Trusteer’s security team recently analyzed a Ramnit variant that is targeting a UK bank with a clever one-time password (OTP) scam. The malware stays idle until the user successfully logs into their account…

The most interesting part is the reconnaissance, and the detailed understanding of the process and transaction types & formats required to successfully perform this attack. This is no smash and grab – it’s a very sophisticated set of technologies used to game a bank’s security controls.

2FA is still a good thing. But don’t think it’s the only thing, and definitely don’t think it makes you secure. Many of us learned that from the RSA hack, but for those who didn’t get the message the first time: your strong authentication isn’t strong enough. At least not all the time…

Photo credit: “Big Guns” originally uploaded by DM


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.