Safari enables per-site Java blocking

I missed this during all my travels, but the team at Intego posted a great overview:

    Meanwhile, Apple also released Safari 6.0.4 for Mountain Lion and Lion, as well as Safari 5.1.9 for Snow Leopard. The new versions of Safari give users more granular control over which sites may run Java applets. If Java is enabled, the next time a site containing a Java applet is visited, the user will be asked whether or not to allow the applet to load, with buttons labeled Block and Allow.

Your options are always allow, always block, or prompt. I still highly recommend disabling Java entirely in all browsers, but some of you will need it, and this is a good option without having to muck with plugins.


No news is just plain good: Friday Summary, April 18, 2013

I know the exact moment I stopped watching local news. It was somewhere around 10-15 years ago. A toddler had died after being left locked in a car on a hot day. I wasn't actually watching the news, but one of the screamers for the upcoming broadcast came on during a commercial break for whatever I was watching. A serious-looking female reporter, in news voice, mentioned the death and how hot cars could get in the Colorado sun. Then she threw a big outdoor thermometer in a car, slammed the door, and reminded me to watch the news at 10 to see the results. I threw up a little bit, I think.

I don't remember the exact moment I gave up on cable news, but it was sometime within the past year or two. I have a TV in my office I use for background noise; one of those little things you do when you have been working at home for a decade or so. I used to keep it on MSNBC but the bias finally went too over the top for me. Fox is out of the question, and I was trying out CNN. That lasted for less than an hour before I realized that Fox is for the right, MSNBC for the left, and CNN for the stupid. It was nothing other than sensational exploitative drivel.

As an emergency responder I know what we see at night rarely correlates to actual events. I have been on everything from national incidents to smaller events that still attracted the local press. Even responders and commanders don't always have the full picture – never mind a reporter hovering at the fringe.

Once I was on the body recovery of a 14-year-old who died after falling off a cliff while taking a picture. I showed up on the third day of the search, right around when one of our senior members finally located him due to the green gloss of a disposable camera. He used a secondary radio channel to report his location and finding because we know the press scans all the emergency frequencies. I was quietly sent up and we didn't stop the rest of the search, to provide a little decorum. Around the time the very small group of us arrived at the scene, the press finally figured it out. The next thing I knew there was a helicopter headed our way to get video. Of a dead kid. Who had been in the Colorado sun, outdoors, for 3 days. I used my metallic emergency blanket to cover him and protect his family.

Years later I was on another call to recover the body of a suicide in one of the most popular mountain parks in Boulder. Gunshot to the head. When we got to the scene, one of the police investigators mentioned that we needed to watch what we said because the local station had a new boom mike designed to pick up our conversations at a distance. I never saw it, so maybe it wasn't true.

I don't watch local news. I don't watch cable news. Even this week I avoid it. They both survive only on exploitation and emotional manipulation. I do occasionally watch the old-school national news shows, where they still behave like journalists. I read. A lot. Sources with as little bias as I can find. According to the Guardian, research shows the news is bad for you. Right now I find it hard to disagree.

On to the Summary:

Favorite Securosis Posts

  • Adrian Lane: Run faster or you'll catch privacy. Managing privacy in large firms is its own private hell. Hello, EU privacy laws!
  • Mike Rothman: Sorry for Security Rocking. LMFAO applied to security FTW. And evidently I slighted our contributor Gal, who believes he's up to providing the definitive Security LMFAO version. Name that tune, brother!
  • Rich: The CISO's Guide to Advanced Attacks. I am jealous I'm not writing this one.
  • David Mortman: Run faster or you'll catch privacy.

Other Securosis Posts

  • Intel Buys Mashery, or Why You Need to Pay Attention to API Security.
  • On password hashing and how to reply to security flaws.
  • Safari enables per-site Java blocking.
  • Incite 4/17/2013: Tipping the balance between good and evil.
  • Why you still need security groups with host firewalls.
  • Is it murder if the victim is already dead?
  • Unused security intelligence is, well… dumb.

Favorite Outside Posts

  • Adrian Lane: Agilebits 1Password support and Design Flaw? Good discussion of the flaw and a good response from AgileBits. Now… patch, please!
  • Mike Rothman: Patton Oswalt on the Boston Marathon Attack. I linked to this in the Incite but it's worth mentioning again. Great context about taking a long-term view, even when the wounds are fresh.
  • David Mortman: NIST: It's Time To Abandon Control Frameworks As We Know Them.
  • Rich: EmergentChaos on the 1Password design flaw issue. Don't just read the post – read the first comment. The guys at AgileBits show yet again why I trust them.

Research Papers

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.

Top News and Posts

  • ColdFusion hack used to steal hosting provider's customer data. Wait, people still use ColdFusion? (Rich – I used to totally rock CF, back in the day!)
  • Oracle Patches 42 Java Flaws.
  • House approves cybersecurity overhaul in bipartisan vote.
  • Cloudscaling licenses Juniper virty networking for new OpenStack distro.
  • Microsoft deploys 2-factor to all services.
  • Obama threatens to veto CISPA. Get your popcorn.
  • Update: DARPA Cyber Chief Peiter "Mudge" Zatko Heads To Google. Google does so many great security things, but their views on privacy kill their usefulness to me.

Blog Comment of the Week

This week's best comment goes to fatbloke, in response


Why you still need security groups with host firewalls

Security groups are the basic firewall rules associated with instances in various compute clouds. Different platforms may use different names, but "security group" is the most common, so that's the term we will use. Basically, it is a way of defining hypervisor firewall rules. Of course this is a gross simplification – different cloud platforms enforce groups at other layers of the virtual or physical network – but you get the point.

You assign instances to a security group and they inherit that rule set, which applies at a per-instance level. This is key, because you need to do some deeper thinking about what access rules should apply to an individual instance, which is distinctly not like a network segment with a firewall in front of it. For example, you can set security group rules that restrict traffic between all instances assigned to the same security group. Thus it has traits of both a host firewall and a network firewall, which is kinda cool.

I was teaching our cloud security class last week and one student asked why we don't just use iptables or another host firewall. The answer is pretty basic. Security groups allow you to decouple network security from the operating system on the instance. This provides a few advantages:

  • Security for specific instances can be managed without needing to instantiate or access them.
  • Network security rules can be managed via the cloud API and management plane, supporting better automation.
  • Security groups apply no matter the boot or security state of an instance, so if your instance is compromised you can isolate it easily with a quick security group rule change.

This does not mean you don't still need host firewalls. They still play a valuable role when you need extra granularity, such as protecting instances when they move between different security groups. Another use for a host firewall is to provide the administrator with control over the specific instance's security without requiring cloud management layer changes.

Security group capabilities vary widely between platforms, but the basic principles are pretty consistent. They also don't necessarily substitute (yet) for more advanced firewall/IPS setups, which is where virtual appliances or some of the fancy integrated technologies (such as what VMware is doing with vShield) come into play to inspect inter-VM traffic.

The more I use them, the more I am becoming a big fan of security groups, even with their limitations. They are pretty dumb, without even basic stateful packet inspection capabilities. Long term, any network security tools that want to play well with the cloud will need to adopt the same degree of integration with security groups implemented via the cloud platform, as well as access to those controls via robust cloud APIs.
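To make the decoupling concrete, here is a minimal sketch, assuming AWS EC2 and the boto3 Python SDK as one example platform (the post itself is platform-agnostic). It shows rules being created, attached, and changed entirely through the cloud API, without ever touching the instance's operating system. The group names, ports, CIDR ranges, and IDs are all hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a group for the application tier; instances launched into it inherit
# these rules regardless of their boot or security state.
resp = ec2.create_security_group(
    GroupName="app-tier",
    Description="Rules for application tier instances",
)
sg_id = resp["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        # Allow SSH only from an internal management network.
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
        },
        # Allow instances in this same group to talk to each other on the
        # application port, by referencing the group itself instead of a CIDR.
        {
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "UserIdGroupPairs": [{"GroupId": sg_id}],
        },
    ],
)

# If an instance is compromised, isolating it is a management plane change --
# no need to log into the instance itself.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",      # hypothetical instance ID
    Groups=["sg-0quarantine0example0"],     # hypothetical quarantine group
)
```

The last call is the point of the "quick security group rule change" advantage above: isolation happens from outside the instance, even if the instance itself is fully compromised.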


IaaS Encryption: Protecting Volume Storage

Now that we have covered all the pesky background information, we can start delving into the best ways to actually protect data.

Securing the Storage Infrastructure and Management Plane

Your first step is to lock down the management plane and the infrastructure of your cloud storage. Encryption can compensate for many configuration errors and defend against many management plane attacks, but that doesn't mean you can afford to skip the basics. Also, depending on which encryption architecture you select, a poorly secured cloud deployment could obviate all those nice crypto benefits by giving away too much access to portions of your encryption implementation.

We are focused on data protection, so we don't have space to cover all the ins and outs of management plane security, but here are some data-specific pieces to be aware of:

  • Limit administrative access: Even if you trust all your developers and administrators completely, all it takes is one vulnerability on one workstation to compromise everything you have in the cloud. Use access controls and tiered accounts to limit administrative access, as you do for most other systems. For example, restrict snapshot privileges to a few designated accounts, and then restrict those accounts from otherwise managing instances (see the sketch below). Integrate all this into your privileged user management.
  • Compartmentalize: You know where flat networks get you, and the same goes for flat clouds. Except that here we aren't talking about having everything on one network, but about segregation at the management plane level. Group systems and servers, and limit cloud-level access to those resources. So an admin account for development systems shouldn't also be able to spin up or terminate instances in the production accounting systems.
  • Lock down the storage architecture: Remember, all clouds still run on physical systems. If you are running a private cloud, make sure you keep everything up to date and configured securely.
  • Audit: Keep audit logs, if your platform or provider supports them, of management-plane activities including starting instances, creating snapshots, and altering security groups.
  • Secure snapshot repositories: Snapshots normally end up in object storage, so follow all the object storage rules we will offer later to keep them safe. In private clouds, snapshot storage should be separate from the object storage used to support users and applications.
  • Alerts: For highly sensitive applications, and depending on your cloud platform, you may be able to generate alerts when snapshots are created, new instances are launched from particular instances, etc. This isn't typically available out of the box, but shouldn't be hard to script, and may be provided by an intermediary cloud broker service or platform if you use one.

There is a whole lot more to locking down a management plane, but focusing on limiting admin access, segregating your environment at the cloud level with groups and good account privileges, and locking down the back-end storage architecture together makes a great start.
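As a hedged illustration of the "limit administrative access" point, here is a small sketch using AWS IAM via boto3 as one possible platform (the paper itself is platform-neutral): a designated snapshot operator gets snapshot privileges and nothing else, so a compromised snapshot account cannot also launch or terminate instances. The user and policy names are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

# Inline policy for a dedicated snapshot-operator account: it may create and
# inspect snapshots, but gets no other EC2 permissions (IAM denies by default).
snapshot_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:DescribeSnapshots",
                "ec2:DescribeVolumes",
            ],
            "Resource": "*",
        }
    ],
}

iam.put_user_policy(
    UserName="snapshot-operator",   # hypothetical designated account
    PolicyName="snapshot-only",
    PolicyDocument=json.dumps(snapshot_only_policy),
)
```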
Encrypting Volumes

As a reminder, volume encryption protects against the following risks:

  • Protects volumes from snapshot cloning/exposure.
  • Protects volumes from being explored by the cloud provider, including cloud administrators.
  • Protects volumes from being exposed by physical drive loss (more for compliance than a real-world security issue).

IaaS volumes can be encrypted three ways:

  • Instance-managed encryption: The encryption engine runs within the instance, and the key is stored in the volume but protected by a passphrase or keypair.
  • Externally managed encryption: The encryption engine runs in the instance, but keys are managed externally and issued to instances on request.
  • Proxy encryption: In this model you connect the volume to a special instance or appliance/software, and then connect the application instance to the encryption instance. The proxy handles all crypto operations and may keep keys either onboard or external.

We will dig into these scenarios next week.
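To make the "externally managed encryption" option more concrete, here is a minimal sketch (not from the paper itself): at boot, an instance fetches its volume key from an external key manager and feeds it to dm-crypt/LUKS via cryptsetup, so the key is never stored on the volume. The key manager URL, client certificate paths, and device names are hypothetical placeholders; a real deployment would use a hardened key management service with strong, mutually authenticated access.

```python
import subprocess
import requests  # assumes the 'requests' package is installed

# Hypothetical external key manager endpoint; access should be tied to the
# instance's identity (here, a client certificate) rather than a shared secret.
KEY_SERVER = "https://keymanager.example.internal/v1/volume-keys/app-data"
DEVICE = "/dev/xvdf"        # the attached, still-encrypted volume
MAPPER_NAME = "app_data"    # name for the decrypted device mapping


def fetch_volume_key() -> bytes:
    """Request the volume key from the external key manager.

    The key never touches the volume itself, which is what distinguishes
    externally managed encryption from instance-managed encryption.
    """
    resp = requests.get(
        KEY_SERVER,
        timeout=10,
        cert=("/etc/pki/instance.crt", "/etc/pki/instance.key"),
    )
    resp.raise_for_status()
    return resp.content


def unlock_volume(key: bytes) -> None:
    """Pass the key to cryptsetup on stdin to open the LUKS volume."""
    subprocess.run(
        ["cryptsetup", "luksOpen", DEVICE, MAPPER_NAME, "--key-file", "-"],
        input=key,
        check=True,
    )


if __name__ == "__main__":
    unlock_volume(fetch_volume_key())
    # The decrypted block device is now available at /dev/mapper/app_data
    # and can be mounted like any other volume.
```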


Cybersh** just got real

Huawei not expecting growth in US this year due to national security concerns (The Verge). U.S. to scrutinize IT system purchases with ties to China (PC World):

    U.S. authorities will vet all IT system purchases made from the Commerce and Justice Departments, NASA, and the National Science Foundation, for possible security risks, according to section 516 of the new law. "Cyber-espionage or sabotage" risks will be taken into account, along with the IT system being "produced, manufactured, or assembled" by companies that are owned, directed or funded by the Chinese government.

This is how you fight asymmetric espionage. Expect the consequences to continue until the attacks taper off to an acceptable level (yes, there is an acceptable level).


Proposed California Data Law *Will* Affect Security

Threatpost reports that California is considering a law requiring companies to show consumers what data is collected on them:

    Known as the "Right to Know Act of 2013," AB 1291 was amended this week to boost its chances of success after being introduced in February by state Assembly member Bonnie Lowenthal. If passed, it would require any business that retains customer data to give a copy of that information, including who it has been shared with, for the past year upon request. It applies to companies both on- and offline.

The claim is that it doesn't add data protection requirements, but it does. Here is how:

  • You will need mechanisms to securely share the data with customers. This will likely be the same as what healthcare and financial institutions do today (generally email encryption).
  • You will need better auditing of who data is shared with.
  • Depending on interpretation of the law, you might need better auditing of how data is used internally. Right now this doesn't seem to be a requirement – I am just paranoid from experience.

What to do? For now? Nothing. Remember the Compliance Lifecycle: laws are proposed, then passed, then responsibility is assigned to an enforcement body, then they interpret the law, then they start enforcement, then we play the compensating controls game, then the courts weigh in, and life goes on. Vendors will likely throw AB 1291 into every presentation deck they can find, but there is plenty of time to see how this will play out. But if this goes through, there will definitely be implications for security practitioners.


Brian Krebs outs possible Flashback malware author

Brian Krebs thinks he may have identified the author of the Flashback Mac malware that caused so much trouble last year. Brian is careful with accusations but displays his full investigative reporting chops as he lays out the case:

    Mavook asks the other member to get him an invitation to Darkode, and Mavook is instructed to come up with a brief bio stating his accomplishments, and to select a nickname to use on the forum if he's invited. Mavook replies that the Darkode nick should not be easily tied back to his BlackSEO persona, and suggests the nickname "Macbook." He also states that he is the "Creator of Flashback botnet for Macs," and that he specializes in "finding exploits and creating bots."

Brian has started to expose more detailed information from his access to parts of the cybercrime underground, and it's damn compelling to read.


Appetite for Destruction

We (Rich and Gal) were chatting last week about the destructive malware attacks in South Korea. One popular theory is that patch management systems were compromised and used to spread malware to affected targets, which deleted Master Boot Records and started wiping drives (including network-connected drives), even on Linux. There was a lot of justified hubbub over the source of the attacks, but what really interested us is their nature, and the implications for our defenses.

Think about it for a moment. For at least the past 10 years our security has skewed towards preventing data breaches. Before that, going back to Code Red and Melissa, our security was oriented toward preventing mass destructive attacks. (Before that it was all Orange Book, all the time, and we won't go there.) Clearly these attacks have different implications. Preventing mass destruction focuses on firewalls (and other networking gear, for segmentation, not that everyone does a great job with it), anti-malware, and patching (yes, we recognize the irony of patch management being the vector). Preventing breaches is about detection, response, encryption, and egress filtering.

The South Korean attack? Targeted destruction. And it wasn't the first. We believe Stratfor had a ton of data destroyed. Stuxnet (yes, Stuxnet) was a fire and forget munition. But, for the most part, even Anonymous limits their destructive activities to DDoS and the occasional opportunistic target. Targeted destruction isn't a new game, but it's one we haven't played much.

Take Rich's Data Breach Triangle concept, or Lockheed's Cyber Kill Chain. You have three components to a successful attack – a way in, a way out, and something to steal. But for targeted destruction all you need is a way in and something to wreck. Technically, if you use some fire and forget malware (single-use or worm), you don't even need to interact with anything behind the target's walls. No one was sitting at a Metasploit console on the other side of the Witty Worm.

So what can we do? We definitely don't have all the answers on this one – targeted destructive attacks, especially of the fire and forget variety, are hard as hell to stop. But a few things come to mind:

  • We cannot rely on response after the malware is triggered, so we need better segregation and containment. Note that we are skipping traditional defense advice because at this point we assume something will get past your perimeter blocking. Rich has started using the term "hypersegregation" to reflect the increasingly granular isolation we can perform, even down to the application level in some cases, without materially increasing management overhead (read more).
  • As you move more into cloud and disk-based backups, you might want to ensure you still keep some offline backups of the really important stuff. We don't care whether it's disk or tape, but at some point the really critical stuff needs to be offline somewhere.
  • Once again, incident response is huge. But in this case you need to emphasize the containment side of response more than investigation. On the upside, these attacks are rarely quiet once they trigger. On the downside, they can be quite stealthy, even if they ping the outside world for commands.

But there is one point in your favor. Targeted destruction as an endgame is relatively self-limiting. There's a reason it isn't the dominant attack type, and while we expect to see more of it moving forward, it isn't about to be something most of us face on a daily basis.

Also, because malware is the main mechanism, all our anti-exploitation work will continue to pay off, making these attacks more and more expensive for attackers. Well, assuming you get the hell off Windows XP.


Get Ready for Phone Security and Regulations

Emergency services providers and others are being hit with telephone-based denial of service attacks. Nasty stuff, powered by IP-based phone systems. This relates to SWATting (what hit Brian Krebs). It has become trivial to use computers to make and spoof phone calls. This is the sort of thing that could lead to new regulations. It is already against the law, but these incidents may lead to rules tightening how companies connect to the phone system. Which probably isn't great for innovation, and might not work anyway.


IaaS Encryption: Understanding Encryption Systems

Now that we have covered the basics of how IaaS platforms store data, we need to spend a moment reviewing the parts of an encryption system that are relevant for protecting cloud data. Encryption isn't our only security tool, as we mentioned in our last post, but it is one of the only practical data-specific tools at our disposal in cloud computing.

The three components of a data encryption system

Cryptographic algorithms and implementation specifics are important at the micro level, but when designing encryption for cloud computing or anything else, the overall structure of the cryptographic system is just as important. There are many resources on which algorithm to select and how to use it, but far less on how to piece together an overall system. When encrypting data in the cloud, knowing how and where to place these pieces is incredibly important, and one of the most common causes of failure. In a multi-tenant environment – even in a private cloud – with almost zero barriers to portability, we need to pay particular attention to where we manage keys.

Three major components define the overall structure of an encryption system:

  • The data: The object or objects to encrypt. It might seem silly to break this out, but the security and complexity of the system are influenced by the nature of the payload, as well as where it is located or collected.
  • The encryption engine: The component that handles the actual encryption (and decryption) operations.
  • The key manager: The component that handles keys and passes them to the encryption engine.

In a basic encryption system all three components are likely to be located on the same system. As an example take personal full disk encryption (the built-in tools you might use on your home Windows PC or Mac): the encryption key, data, and engine are all stored and used on the same hardware. Lose that hardware and you lose the key and data – and the engine, but that isn't normally relevant. (Neither is the key, usually, because it is protected with another key that is not stored on the system – but if the system is lost while running, with the key in memory, that becomes a problem.)

In a traditional application we would more likely break out the components – with the encryption engine in an application server, the data in a database, and key management in an external service or appliance. In cloud computing some interesting limitations force certain architectural models:

  • As of this writing, we cannot typically encrypt boot instances the way we can encrypt the full disk of a server or workstation. So we have fewer options for where to put and how to secure our data.
  • One risk to protect against is a rogue cloud administrator, or anyone with administrative access to the infrastructure, seeing your data. So we have fewer options for where to securely manage keys.
  • Data is much more portable than in traditional infrastructure, thanks to native storage redundancy and data management tools such as snapshots.
  • Encryption engines may run on shared resources with other tenants. So your engine may need special techniques to protect keys in live memory, or you may need to alter your architecture to reduce risk.
  • Automation dramatically impacts your architecture, because you might have 20 instances of a server spin up at the same time, then go away. Provisioning of storage and keys must be as dynamic and elastic as the underlying cloud application itself. Automation also means you may manage many more keys than in a more static, traditional application environment.
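As a hedged illustration of how the three components can be separated (not an excerpt from the paper), the sketch below uses Python's `cryptography` library as the encryption engine, keeps the data wherever it already lives, and pulls keys from a stand-in key manager. In a real cloud deployment the `KeyManager` class here would be an external service or appliance reached over an authenticated channel, not an in-process object.

```python
from cryptography.fernet import Fernet


class KeyManager:
    """Stand-in for an external key management service.

    In production this component lives outside the instance, so a snapshot or
    a rogue administrator inspecting the volume never finds the key stored
    alongside the data.
    """

    def __init__(self):
        self._keys = {}

    def create_key(self, key_id: str) -> None:
        self._keys[key_id] = Fernet.generate_key()

    def get_key(self, key_id: str) -> bytes:
        return self._keys[key_id]


class EncryptionEngine:
    """Performs the crypto operations; it holds keys only transiently."""

    def __init__(self, key_manager: KeyManager):
        self.key_manager = key_manager

    def encrypt(self, key_id: str, plaintext: bytes) -> bytes:
        return Fernet(self.key_manager.get_key(key_id)).encrypt(plaintext)

    def decrypt(self, key_id: str, ciphertext: bytes) -> bytes:
        return Fernet(self.key_manager.get_key(key_id)).decrypt(ciphertext)


# The data (the third component) stays wherever it lives -- a volume, object
# store, or database -- and only ciphertext is ever written there.
if __name__ == "__main__":
    km = KeyManager()
    km.create_key("volume-42")
    engine = EncryptionEngine(km)
    secret = engine.encrypt("volume-42", b"customer records")
    assert engine.decrypt("volume-42", secret) == b"customer records"
```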
As you will see in the next sections when we get into details, we will leverage the separation of these components in a few different ways to compensate for many of the different security risks in the cloud. Honestly, the end result is likely to be more secure than what you use in your traditional infrastructure and application architectures.
