Integration vs. Segregation

But, he said, segregation of EHR data simply is not feasible or practical for integrated health systems such as Wellstar, … “But I also have to be able to make the information available immediately in an emergency,” he said. “A 90-second delay if you’re waiting at an ATM for your money is an inconvenience. But if it takes 90 seconds to figure out if you’re allergic to penicillin, it could be a matter of life and death.”

Nice to see our friend Martin Fisher get some good quotes in the CSO Online article, “Segregated healthcare networks rarely work, expert says,” and he’s right. As more integrated business systems become pervasive, they screw up your ability to segment networks. To be clear, segmentation is your friend, but that only works when you can segment. Otherwise you need to provide more access than you’d prefer, and that means the focus turns toward authentication (making sure the right people get on) and security monitoring. If you can’t keep them out, you had better be able to React Faster and Better.


Friday Summary: January 11, 2013

Tina Slankas presented at the Phoenix ISSA chapter this week on the use of patterns for building security programs – slides can be downloaded here (PDF). The thrust of her idea was to use patterns – think design patterns if you like – for putting together control frameworks to define security efforts. Tina stated she was using the definition of ‘pattern’ in a very broad way, but the essence was reusable constructs for managing different aspects of enterprise security. For example: how identity management will function at a high level, and how it will fit with other systems. As a software developer or architect, patterns are invaluable for object-oriented programming, helping model complex ideas as a collection of simple patterns.

To be honest, I abandoned the idea of secure design patterns for software architecture pretty much when I first got involved with security. I could not articulate security into the patterns, be they behavioral or structural. Maybe that was just my lack of skill at the time, but it felt like the complexities of how to secure code were beyond pattern descriptions. What was compromised was not as interesting as how it was compromised, and it usually turned out to be a process or protocol that got abused. It was the bits flowing between different patterns, or the ones left undefined, that I worried about. Trust relationships. Assumptions. Identity. Avoiding things like replay attacks. Repudiation. The problem space felt process-oriented, not object-oriented.

But in terms of a control or management framework for IT systems, reusable patterns are an interesting idea. They help with consistency across multiple sites/deployments. They offer a layer of abstraction – you don’t care if the problem is solved by a firewall, a WAF, or DLP, so long as the required controls are in place and meet the requirements. You could represent the entire PCI specification as a set of patterns. Unless you have a huge infrastructure to manage, I’m not clear how practical this is – but I am interested in the idea of security patterns. I remain skeptical of their value for secure code development, but I see their value for security program management. (A rough sketch of what such a pattern might look like in code appears at the end of this Summary.)

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich’s TidBITS post: Do You Need Mac Antivirus Software in 2013?
  • Gunnar’s Dark Reading post: What Is It You Would Say That You Do Here?
  • Adrian’s Dark Reading post on DB Threats and Countermeasures.

Securosis Posts
  • $50K buys how much FDE?
  • Java Sucks. Again.
  • Most Consumers Don’t Need Mac AV.
  • Integration vs. Segregation.
  • DDoS: Distributed, but not evenly.
  • Incite 1/9/2013: Never Lost.
  • Detection vs. Protection and the Game of Words.
  • ENISA BYOD FTW.
  • Pwn Ur Cisco Phone.
  • Understanding Identity Management for Cloud Services: The Solution Space.
  • Prove It to Use It.
  • Bored? Set up your own CA.
  • Internet Explorer 8 0-Day Bypasses Patch.

Favorite Outside Posts
  • Adrian Lane: Hardening Sprints. What are they? Do you need them? I’m a big fan of the occasional hardening sprint to let each developer fix one thing that bugs them, to pull stuff out of the security bucket list, or to otherwise do quality control.
  • James Arlen: Nather’s Law of Policy Management.
  • Mike Rothman: State sponsored attack: a howto guide. For a change, Rob Graham is lampooning the prevailing wisdom. He’s very good at that.

Project Quant Posts
  • Malware Analysis Quant: Index of Posts.
  • Malware Analysis Quant: Metrics – Monitor for Reinfection.
  • Malware Analysis Quant: Metrics – Remediate.
  • Malware Analysis Quant: Metrics – Find Infected Devices.
  • Malware Analysis Quant: Metrics – Define Rules and Search Queries.
  • Malware Analysis Quant: Metrics – The Malware Profile.
  • Malware Analysis Quant: Metrics – Dynamic Analysis.

Research Reports and Presentations
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer’s Guide.
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • Understanding and Selecting Data Masking Solutions.

Top News and Posts
  • Adobe fixes Flash Player and Microsoft patches IE 10 to update its built-in version.
  • Under the hood of the cyber attack on the U.S. banks.
  • Facebook, Yahoo Fix Valuable $ecurity Hole$.
  • Zero-Day Java Exploit Debuts in Crimeware.
  • Does Your Alarm Have a Default Duress Code?
  • How PCI Standards Will Really Die.
  • Enhancing Certificate Security.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Bert Knabe, in response to Prove It to Use It.

You mean you don’t believe it?! It’s from a government official! They never lie!
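Closing the loop on the patterns discussion above: here is a minimal sketch, in Python with entirely hypothetical names (nothing from Tina’s slides), of a control pattern as a reusable, implementation-agnostic construct. The pattern declares required capabilities; any control that covers them – firewall, WAF, DLP, whatever – satisfies it.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ControlPattern:
    """A reusable, implementation-agnostic security control pattern."""
    name: str
    required_capabilities: frozenset  # what must be true, not how it gets done

@dataclass
class DeployedControl:
    """A concrete product or process claimed to satisfy a pattern."""
    product: str
    capabilities: set = field(default_factory=set)

def satisfies(pattern: ControlPattern, control: DeployedControl) -> bool:
    # The pattern does not care whether this is a firewall, WAF, or DLP,
    # only that all required capabilities are covered.
    return pattern.required_capabilities <= control.capabilities

# Hypothetical example: one PCI-flavored requirement expressed as a pattern.
protect_cardholder_data = ControlPattern(
    name="Protect stored cardholder data",
    required_capabilities=frozenset({"encryption-at-rest", "key-management"}),
)

candidate = DeployedControl(
    product="VendorX FDE",
    capabilities={"encryption-at-rest", "key-management"},
)
print(satisfies(protect_cardholder_data, candidate))  # True
```

The same pattern can then be checked against each site or deployment, which is where the consistency benefit comes from.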


DDoS: Distributed, but not evenly

It shouldn’t come as any surprise, but big financials are still suffering a wave of DDoS attacks. DDoS is like an accidental amputation – when it happens, there is no question you have a problem. The trick is to know ahead of time if you are on the list, and the best thing to do is keep an eye on your peers. Not everyone needs to invest proactively in DDoS protection, but you sure as heck need a plan and a vendor contact just in case. Especially if you are big, handle money, work with (or piss off) governments located “East” (Europe, Asia, Middle, whatever), or like to poke Anonymous.

Update 1/9: According to the New York Times, a “former” gov official with connections says Iran is definitely behind the attacks. Backing up the rumors we’ve all been hearing from the start.


Incite 1/9/2013: Never Lost

I was in the car the other day with one of the kids, and they asked me if I ever get lost. I have a pretty good sense of direction and have been able to read maps for as long as I can remember. I was probably compensating for my Mom’s poor sense of direction and my general anxiety at a young age about feeling lost. But it’s different today. With the advent of ever-present GPS and decent navigation, it has been a long while since I have really been lost. I get misdirected sometimes, but that lasts maybe a minute and then I figure out my way.

But these gadgets are no silver bullet. A couple years ago I was doing a seminar tour and ended up in Detroit. I did my thing, got some sleep, and was ready to head out to the airport the next morning. The car was equipped with a GPS from the rental car company, so I hit the button to take me to the airport and started driving. About 40 minutes later, I started thinking something was screwy. Then I got that feeling in the pit of my stomach when I realized I had selected the wrong airport in the GPS. I had been driving in the wrong direction for over a half-hour and was very unlikely to make my flight. And this was not the day to miss the flight. The Boss was leaving town and I had to get the kids from their various schools and activities. Of course, when I finally got to the right airport, all the flights back to Atlanta were booked up. I was totally screwed. So I paid a whole bunch of idiot tax and bought a first class seat on another airline. And I still had to call in a bunch of favors from friends and family to take the kids until I could get home. Feels like I’m still paying for that period of idiocy. Let’s just say I double check every time I enter an address into a GPS nowadays.

But now let’s consider navigation metaphorically. We have technology that can help us get anywhere we want to go. It’s built into your car and you carry it in your pocket. But that doesn’t make it any easier to know where you should be going. And even when you get there, you are usually disappointed with the destination… Maybe it wasn’t everything you cracked it up to be. Sometimes the grass isn’t greener when you get there.

When I think about it and play out the metaphor a bit further, there’s another reason it has been a while since I was last lost. I guess at this point in my life, I don’t get lost because I’m not trying to get anywhere. I’m very fortunate to be in a situation where I can actually say that. And mean it. Given my cultural programming, it took me a long time to accept where I am and to not strive to get to where I’m not. There are some days I forget – I am human after all. But there is no GPS for life. That’s worth remembering.

–Mike

Photo credits: Hertz NeverLost III originally uploaded by Josh Bancroft

Heavy Research
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Understanding Identity Management for Cloud Services
  • The Solution Space
  • Introduction

Newly Published Papers
  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

  • BYOD basics: I mentioned this briefly on the blog earlier this week but wanted to add a bit. ENISA released a great guide to getting started with BYOD. It is far more practical than most approaches I have seen, and includes links to a lot of public examples. One key aspect is how the guide consistently addresses the issue of getting employee cooperation. You can’t hit BYOD with a hammer or you will just end up smashing your thumb. If the employee owns it, you need to entice them with benefits – not act as if you are doing them a favor by allowing them access to corporate email on their off hours. For more detail on the technology, I wrote a paper with a spectrum of options for protecting data on iOS. As a security guy I hate giving up control as much as anyone, but employees aren’t cattle and they need a fair deal or they’ll figure out a way around whatever you come up with. – RM

  • Carnival of dysfunction: Leaking thousands of patient records is not news – we have had a steady diet of leaks and breaches over the past decade. But the recent LA Times article on a couple who improperly stored some 300k patient records was interesting for the myriad levels of dysfunction it describes. And it’s clear from comments by both the third party provider and Kaiser that they don’t understand data security. Couple that with the Times slant that small firms should never store sensitive data at home because it can’t be secure, and you have a carnival of dysfunction. This issue is not unique to Kaiser – most large enterprises engage small third party service providers because they offer a specific skill at low cost and are agile enough to adapt to market changes. But don’t expect them to know security, and don’t expect them to comply with requests for military-grade security or formal compliance processes. Companies should provide simple security controls that are both understandable and implementable by small firms. For example some full disk encryption, key management, and a dedicated computer for sensitive


Detection vs. Protection and the Game of Words

Any time you go after an entrenched technology, there will be pushback. So it’s not surprising that some folks believe that imperva’s anti-virus study is garbage.

this makes it pretty clear that the product a customer installs is very much a different thing from the program that virustotal uses – they will in most cases behave very differently and so the results that virustotal spits out cannot be considered representative of what actual users of anti-malware products will experience.

Normally I won’t link to anything on the anti-virus-rants blog because I object to kurt’s lack of capitalization. But in this case, he underscores a reality that every security professional needs to deal with. There is detection and then there is protection. You can be protected from a malware attack without having technology to detect the malware. That means you have a synergistic (or complementary) control in place to protect the device. For example, you may not have a signature to block a 0-day, but you’ve implemented application whitelisting on that public kiosk, so the malware can’t install. Protection without detection. Imagine that.

So kurt’s general issue is railing against the industry marketing machine for vilifying the AV vendors because they can’t actually detect enough malware. His point is that endpoint protection involves more than just anti-virus detection. As such, many of those malware samples (tested by Technion through VirusTotal) would not necessarily compromise a device, because other controls in the suite would provide the protection. he’s right. And yes, my lack of capitalization is an homage to kurt. 😉

But then he swings at Rob Graham about Rob’s defense of the testing methodology. Rob’s point is that the methodology is fine. The AV agents don’t detect a lot of the malware, and that is how many folks deploy the anti-virus engines – as part of a security gateway or UTM. In that scenario, Rob is right as well.

Though we continue to avoid the elephant in the room, and that’s the marketing spin. You know, the game of words. Imperva spun this story as an indictment of endpoint protection, when it was really validating what should already be common knowledge: standalone anti-virus is not going to catch much malware. You can decompose words and try to infer Imperva’s intent, but that’s pointless. The marketing folks at Imperva are good at what they do and I don’t begrudge them for spinning the results of the study. It’s their job. They try to create urgency for their employer’s technology by favorably positioning data points to tell their story. As I’ve said many times before: don’t hate the playas, hate the game.
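To make the “protection without detection” point concrete, here is a minimal sketch of hash-based application whitelisting in Python. The hash value and function names are hypothetical – this is not any vendor’s implementation. Note that nothing here “detects” malware; unknown binaries simply never run.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 hashes of the only binaries the kiosk may run.
ALLOWED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example hash
}

def is_execution_allowed(binary_path: str) -> bool:
    """Default-deny: run a binary only if its hash is on the allowlist.

    No malware signature is consulted. Unknown code is refused outright,
    which is protection without detection.
    """
    digest = hashlib.sha256(Path(binary_path).read_bytes()).hexdigest()
    return digest in ALLOWED_HASHES

# A dropped 0-day payload hashes to something not in ALLOWED_HASHES,
# so it never executes, even though nothing flagged it as malicious.
```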


ENISA BYOD FTW

ENISA released a solid BYOD/Consumerization of IT guide. At first I was turned off by phrases in the executive summary like:

Ensure that governance aspects are derived from business processes and protection requirements, and are defined before dealing with technology.

But once you get into it, this is a great starter guide that includes both policy and technical pieces. Best part: a lot of examples and links to real world projects. Worst parts: the DLP bits don’t reflect what’s available (over-estimates), and some vendor-specific language.


Pwn Ur Cisco Phone

what’s the deal with the cisco phone eavesdropping hack? These phones are basically little computers. If an attacker can take control of it, they can do the same things from it that they could by using a rogue or compromised system on a network. The “eavesdropping mic” is just one of many ways the compromised phone could be used.

Yup, there is a demo out there of someone taking over a Cisco IP phone because basically it’s a computer. Even better, it’s a computer that allows privilege escalation via a kernel exploit if someone has access to the phone. Of course Lonervamp brings up one of the key issues, which is exfiltration. So if someone can eavesdrop on my very interesting heavy breathing during my deep research endeavors, they still have to get the data off the phone and out of the network. Remember back to Rich’s awesome data breach triangle. No exfiltration, no breach for you (in my Soup Nazi voice). But all the same, folks just plug stuff into their networks without a lot of thought for how these devices can become weapons against them. At some point they will, or not.
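Rich’s data breach triangle says a breach needs data worth stealing, an exploit, and an exfiltration path – break any one leg and there is no breach. A toy sketch of that logic (hypothetical field names, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class IncidentLegs:
    """The three legs of the data breach triangle."""
    data_of_value: bool      # was anything worth stealing reachable?
    exploit: bool            # did the attacker gain a foothold (e.g., the phone)?
    exfiltration_path: bool  # could the data actually leave the network?

def is_breach(legs: IncidentLegs) -> bool:
    # Remove any single leg and the incident is not a breach.
    return legs.data_of_value and legs.exploit and legs.exfiltration_path

# Compromised IP phone, but egress filtering keeps the data from leaving:
print(is_breach(IncidentLegs(data_of_value=True, exploit=True, exfiltration_path=False)))  # False
```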


Understanding Identity Management for Cloud Services: The Solution Space

Adrian and Gunnar here: After spending a few weeks getting updates from Identity and Access Management (IAM) service vendors – as well as a couple weeks for winter break – we have gathered the research we need to delve into the meat of our series on Understanding and Selecting Identity Management for Cloud Services. Our introductory post outlined the topics we will cover. This series is intended as a market overview, taking a broad look at issues you need to consider when evaluating cloud-based identity support systems. The intro hinted at the reasons cloud computing models force change in our approaches to access control, but today’s post will flesh out the problems of cloud IAM.

The cloud excels at providing enterprises with apps and data. But what about identity information? Companies face issues trying to retain control of identity management while taking advantage of the cloud. The goal is to unify identity management for internal and external users across both traditional IT and third party cloud services. It is possible to manage user access to cloud computing resources in-house, but the architecture must take integration complexity and management costs into account. Most organizations – particularly enterprises – find these inconveniences outweigh the benefits. For many of the same reasons companies adopt cloud computing services instead of in-house services (including on-demand service, elasticity, broad network access, reduction in capital expenditures, and total cost), they also leverage third-party cloud services to manage identity and access management.

Managing identity was a lot simpler when the client-server computing model was the norm, and users were mostly limited to a desktop PC with another set of credentials to access a handful of servers: set up the ACLs, sprinkle on some roles, and voila! But as servers and applications multiplied, the “endpoint” shifted from fixed desktops to remote devices, and servers were integrated with other server domains – never mind ACLs and roles, what realm are we in? – so we used directory services to provide a single identity management repository and help propagate identity across the enterprise. Now we have an explosion of external service providers: financial applications, cloud storage, social media, workflow, CRM, email, collaboration, and web conferencing, to name a few. These ‘extra-enterprise’ services are business critical, but they don’t directly link into traditional directory services.

Cloud computing services turn identity management on its ear. The big shift comes in three main parts: IT no longer owns the servers and applications the organization relies upon, provider capabilities are not fully compatible with existing internal systems, and the ways users consume cloud services have changed radically. In fact an employee may consume corporate cloud services without ever touching in-house IT systems. Just about every enterprise uses Software as a Service (SaaS), and many use Platform and Infrastructure as a Service (PaaS and IaaS, respectively) as well – each with its own approach to Identity and Access Management. Extending traditional corporate identity services outside the corporate environment is not a trivial effort – it requires integration of existing IAM systems with the cloud service provider(s). Most companies rely on dozens of cloud service providers, each with a different set of identity and authorization capabilities, as well as different programmatic and web interfaces. The time, effort, and cost to develop and maintain links with each service provider can be overwhelming.

Cloud Identity Solutions

Ideally we want to extend the existing in-house identity management capabilities to third-party systems, minimizing the work for IT management while delivering services to end users with minimal disruption. And we would like to maintain control over user access – adding and removing users as needed, and propagating new authorization policies without significant latency. We also want to collect information on access and policy status to help meet security and compliance requirements. And rather than build a custom bridge to each and every third-party service, we would like a simple management interface that extends our controls and policies to the various third-party services. Features and benefits common to most cloud identity and access management systems include:

  • Authentication, Single Sign-On (SSO): One of the core services is the ability to authenticate users based on provided credentials, and then allow each user to access multiple (internal and external) services without having to repeatedly supply credentials to each service. Offering SSO to users is, of course, just about the only time anyone is happy to see the security team show up – make the most of it!
  • Identity Federation: Federated identity is where identity and authorization settings are collected from multiple identity management systems, enabling different systems to define user capabilities and access. Identity and authorization are a shared responsibility across multiple authoritative sources. Federated identity is a superset of authentication and single sign-on. Federation made headway as a conveyance engine for SSO and Web Services. Its uptake in the cloud has been substantial because its core architecture helps companies navigate one of the thornier cloud issues: retaining in-house control of user accounts while leveraging cloud apps and data.
  • Granular authorization controls: Access is typically not an ‘all-or-nothing’ proposition – each user is allowed access to a subset of functions and data stored in the cloud. Authorization maps instruct applications which resources to provide to each user. How much control you have over each user’s access depends both on the capabilities of the cloud service provider and on the capabilities of the IAM system. The larger industry trends – in authorization in general and the cloud specifically – are a focus on finer-grained access control, and removing access policy from code as much as possible. In a nutshell, roles are necessary but not sufficient for authorization – you need attributes too. You also do not want to spelunk through millions of lines of code to define/review/change/audit them, so they should be configurable and data driven.
  • Administration: User administrators generally prefer a single management pane for administering users and managing identity across multiple services. The goal of most cloud IAM systems is to do just that, but they
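To illustrate the granular-authorization point above – roles plus attributes, with policy kept in data rather than buried in code – here is a minimal Python sketch. The policy entries, attribute names, and resources are hypothetical; real systems express this through standards such as XACML or a provider’s own policy language.

```python
# Minimal sketch of data-driven, attribute-based authorization (all names hypothetical).
# Policy lives in data, not in application code, so it can be reviewed, changed,
# and audited without spelunking through the code base.

POLICY = [
    {"role": "analyst", "when": {"department": "finance", "mfa": True},
     "action": "read", "resource": "quarterly-reports"},
    {"role": "admin",   "when": {"mfa": True},
     "action": "manage", "resource": "user-accounts"},
]

def is_authorized(user: dict, action: str, resource: str) -> bool:
    """Roles gate the rule; attributes refine it. Default deny."""
    for rule in POLICY:
        if rule["action"] != action or rule["resource"] != resource:
            continue
        if rule["role"] not in user.get("roles", []):
            continue
        if all(user.get("attrs", {}).get(k) == v for k, v in rule["when"].items()):
            return True
    return False

user = {"roles": ["analyst"], "attrs": {"department": "finance", "mfa": True}}
print(is_authorized(user, "read", "quarterly-reports"))  # True
print(is_authorized(user, "manage", "user-accounts"))    # False
```

The point is that changing who can do what means editing the POLICY data, not the application code.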


Prove It to Use It

“Last year, one billion dollars was stolen in the U.S. by Romanian hackers,” says the American ambassador in Bucharest, Mark Gitenstein.

I expect to see this used in plenty of presentations and press releases in the coming months. If you use the number, you have to prove it is real. Good luck with that.


Bored? Set up your own CA

How much does it cost to start your own CA?

The main thing you’re looking to do is to pass the WebTrust audit and associated practices that the platforms will require you to do. Microsoft has the most mature process. They have a set of rules and guidelines. If you follow them, you’re in. One of those, by the way, is that you have to be a retail CA, as opposed to an internal one or a government one. It’s best to work with Microsoft first, and once you’re in their root program move to the others. They are fair, disciplined, and helpful. Most of all, once you’ve gone through all that, it’s easier to get into the other important root stores.

This is an interesting description of the process Jon Callas drove at PGP to get them into the CA business. It’s instructive to understand the process, especially since compromising a CA seems to be the path of least resistance for a bunch of attackers to execute on multi-faceted attacks. I think it bears mentioning that starting the CA is really only the first step. Having certs in any of the major browsers makes you a major attack target. So even if it costs $250K to get things up and running, it will cost a lot more over time to protect the integrity of your CA.
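For context on why that $250K is mostly process rather than technology: the cryptographic part of “starting a CA” is a few lines of code. Here is a minimal sketch using Python’s cryptography package (illustrative only – a real root key is generated in an HSM during an audited key ceremony, never on a laptop, and the names below are made up):

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the root key. In a real CA this lives in an HSM, never in the clear on disk.
root_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Illustrative Root CA")])

root_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365 * 20))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(root_key, hashes.SHA256())
)

print(root_cert.subject)
```

Everything after this – key ceremonies, WebTrust audits, revocation infrastructure, and protecting that key for decades – is where the real cost lives, which is the post’s point.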


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.