
New Paper: Leveraging Threat Intelligence in Security Monitoring

As we continue our research into the practical uses of threat intelligence (TI), we have documented how TI should change existing security monitoring (SM) processes. In our Leveraging Threat Intelligence in Security Monitoring paper, we go into depth on how to update your security monitoring process to integrate malware analysis and threat intelligence. Updating our process maps demonstrates that we don’t consider TI a flash in the pan – it is a key aspect of detecting advanced adversaries as we move forward. Here is our updated process map for TI+SM. As much as you probably dislike thinking about other organizations being compromised, this provides a learning opportunity. An excerpt from the paper explains in more detail: There are many different types of threat intelligence feeds and many ways to apply the technology – both to increase the effectiveness of alerting and to implement preemptive workarounds based on likely attacks observed on other networks. That’s why we say threat intelligence enables you to benefit from the misfortune of others. By understanding attack patterns and other nuggets of information gleaned from attacks on other organizations, you can be better prepared when they come for you. And they will be coming for you – let’s be clear about that. So check out the paper and figure out how your processes need to evolve, both to keep pace with your adversaries, and to take advantage of all the resources now available to keep your defenses current. We would like to thank Norse Corporation for licensing this paper. Without support from our clients, you wouldn’t be able to use our research without paying for it. You can check out the permanent landing page for the paper, or download it directly (PDF).


Advanced Endpoint and Server Protection: Detection/Investigation

Our last AESP post covered a number of approaches to preventing attacks on endpoints and servers. Of course prevention remains the shiny object most practitioners hope to achieve. If they can stop the attack before the device is compromised there need be no clean-up. We continue to remind everyone that hope is not a strategy, and counting on blocking every attack before it reaches your devices always ends badly. As we detailed in the introduction, you need to plan for compromise because it will happen. Adversaries have gotten much better, attack surface has increased dramatically, and you aren’t going to prevent every attack. So pwnage will happen, and what you do next is critical, both to protecting the critical information in your environment and to your success as a security professional. So let’s reiterate one of our core security philosophies: Once the device is compromised, you need to shorten the window between compromise and when you know the device has been owned. Simple to say but very hard to do. The way to get there is to change your focus from prevention to a more inclusive process, including detection and investigation… Detection Our introduction described detection: You cannot prevent every attack, so you need a way to detect attacks after they get through your defenses. There are a number of different options for detection – most based on watching for patterns that indicate a compromised device. The key is to shorten the time between when the device is compromised and when you discover it has been compromised. To be fair, there is a gray area between detection and prevention, at least from an endpoint and server standpoint. With the exception of application control, the prevention techniques described in the last post depend on actually detecting the bad activity first. If you are looking at controls using advanced heuristics, you detect the malicious behavior first – then you block it. 
In an isolation context you run executables in the walled garden, but you don’t really do anything until you detect bad activity – then you kill the virtual machine or process under attack. But there is more to detection than just figuring out what to block. Detection in the broader sense needs to include finding attacks you missed during execution because: You didn’t know it was malware at the time, which happens frequently – especially given how quickly attackers innovate. Advanced attackers have stockpiles of unknown exploits (0-days) they use as needed. So your prevention technology could be working as designed, but still not recognize the attack. There is no shame in that. Alternatively, the prevention technology may have missed the attack. This is common as well because advanced adversaries specialize in evading known preventative controls. So how can you detect after compromise? Monitor other data sources for indicators that a device has been compromised. This series is focused on protecting endpoints and servers, but looking at devices is insufficient. You also need to monitor the network for a full perspective on what’s really happening, using a couple techniques: Network-based malware detection: One of the most reliable ways to identify compromised devices is to watch for communications with known botnets. You can look for specific traffic patterns, or for communications to known botnet IP addresses. We covered these concepts in both the NBMD 2.0 and TI+SM series. Egress/Content Filtering: You can also look for content that should not be leaving the confines of your network. This may involve a broad DLP deployment – or possibly looking for sensitive content on your web filters, email security gateways, and next generation firewalls. Keep in mind that every endpoint and server device has a network stack of some sort. Thus a subset of this monitoring can be performed within the device, by looking at traffic that enters and leaves the stack. 
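The network-based detection idea above can be sketched in a few lines. This is a minimal illustration, not a product implementation: the feed entries and flow-record format are hypothetical placeholders for whatever your TI provider and network monitoring actually deliver.

```python
# Hypothetical TI feed entries: IP addresses of known botnet command-and-control
# hosts. A real deployment would pull these from a commercial or open feed.
KNOWN_BOTNET_IPS = {"203.0.113.7", "198.51.100.23"}

def flag_suspect_devices(conn_log):
    """Flag internal devices that communicated with known-bad addresses.

    conn_log: iterable of (src_ip, dst_ip) pairs taken from flow records
    or firewall logs. Returns the set of internal source IPs that talked
    to an address on the botnet list.
    """
    suspects = set()
    for src_ip, dst_ip in conn_log:
        if dst_ip in KNOWN_BOTNET_IPS:
            suspects.add(src_ip)
    return suspects
```

A match here is not proof of compromise – it is a trigger to go verify the device, as described below.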
As mentioned above, threat intelligence (TI) is making detection much more effective, facilitated by information sharing between vendors and organizations. With TI you can become aware of new attacks, emerging botnets, websites serving malware, and a variety of other things you haven’t seen yet and therefore don’t know are bad. Basically you leverage TI to look for attacks even after they enter your network and possibly compromise your devices. We call this retrospective searching. This works by either a) using file trajectory – tracking all file activity on all devices, looking for malware files/droppers as they appear and move through your network; or b) looking for attack indicators on devices with detailed activity searching on endpoints – assuming you collect sufficient endpoint data. Even though it may seem like it, you aren’t really getting ahead of the threat. Instead you are looking for likely attacks – the reuse of tactics and malware against different organizations gives you a good chance to see malware which has hit others before long. Once you identify a suspicious device you need to verify whether the device is really compromised. This verification involves scrutinizing what the endpoint has done recently for indicators of compromise or other such activity that would confirm a successful attack. We’ll describe how to capture that information later in this post. Investigation Once you validate the endpoint has been compromised, you go into incident response/containment mode. We described the investigation process in the introduction as: Once you detect an attack you need to verify the compromise and understand what it actually did. This typically involves a formal investigation, including a structured process to gather forensic data from devices, triage to determine the root cause of the attack, and searching to determine how broadly the attack has spread within your environment. 
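Retrospective searching, as described above, amounts to re-running new intelligence against history you already collected. The sketch below assumes you retain per-device file observation records (host, file hash, timestamp) and receive new indicators as a set of hashes; both the record layout and field names are illustrative, not any particular product's schema.

```python
def retrospective_search(file_events, new_iocs):
    """Match historical file observations against newly received indicators.

    file_events: list of dicts like {"host": ..., "sha256": ..., "ts": ...}
                 recorded when files appeared on endpoints/servers.
    new_iocs:    set of SHA-256 hashes just reported as malicious by a TI feed.

    Returns a dict mapping each affected host to the matching events, so
    those devices can be queued for verification.
    """
    hits = {}
    for event in file_events:
        if event["sha256"] in new_iocs:
            hits.setdefault(event["host"], []).append(event)
    return hits
```

The same pattern applies to other indicator types (domains, registry keys, mutexes) if your endpoint telemetry captures them.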
As we described in React Faster and Better, there are a number of steps in a formal investigation. We won’t rehash them here, but to investigate a compromised endpoint and/or server you need to capture a bunch of forensic information from the device, including: Memory contents Process lists Disk images (to capture the state of the file system) Registry values Executables (to support malware analysis and reverse engineering) Network activity logs As part of the investigation you also need to understand the attack timeline. This enables you
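One small, concrete piece of the forensic capture above – collecting executables to support malware analysis – usually starts with fingerprinting them so they can be matched against known-malware hash sets or submitted to an analysis service. A minimal sketch, using only the standard library:

```python
import hashlib

def hash_executables(paths, chunk_size=65536):
    """Compute the SHA-256 digest of each captured executable.

    Reading in chunks keeps memory use flat for large binaries. The
    resulting hashes can be compared against known-malware hash sets
    or used to deduplicate samples before reverse engineering.
    """
    digests = {}
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        digests[path] = h.hexdigest()
    return digests
```

Memory images, registry hives, and network logs each need their own capture tooling; hashing is just the first, simplest step in the chain of custody.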


Friday Summary: March 7, 2014

I don’t code much. In fact over the last 10 years or so I have been actively discouraged from coding, with at least one employer threatening to fire me if I was discovered. I have helped firms architect new products, I have done code reviews, I have done some threat modeling, and even written a few small Java utilities to weave together a couple other apps. But there has been very, very little development in the last decade. Now I have a small project I want to do so I jumped in with both feet, and it feels like I was dumped into the deep end of the pool. I forgot how much bigger a problem space application development is, compared to simple coding. In the last couple of days I have learned the basics of Ruby, Node.js, Chef, and even Cucumber. I have figured out how to bounce between environments with RVM. I brushed up on some Python and Java. And honestly, it’s not very difficult. Learning languages and tools is a trivial matter. A few hours with a good book or web site, some dev tools, and you’re running. But when you are going to create something more than a utility, everything changes. The real difficulty is all the different layers of questions about the big picture: architecture, deployment, UI, and development methodologies. How do you want to orchestrate activities and functions? How do you want to architect the system? How do you allow for customization? Do I want to do a quick prototype with the intention of rewriting once I have the basic proof of concept, or do I want to stub out the system and then use a test-driven approach? State management? Security? Portability? The list goes on. I had forgotten a lot of these tasks, and those brain cells have not been exercised in a long time. I forgot how much prep work you need to do before you write a line of code. I forgot how easy it is to get sucked into the programming vortex, and totally lose track of time. 
I forgot the stacks of coffee-stained notes and hundreds of browser tabs with all the references I am reviewing. I forgot the need to keep libraries of error handling, input validation, and various other methods so I don’t need to recode them over and over. I forgot how much I eat when developing – when my brain is working at capacity I consume twice as much food. And twice as much caffeine. I forgot the awkwardness of an “Aha!” moment when you figure out how to do something, a millisecond before your wife realizes you haven’t heard a word she said for the last ten minutes. It’s all that. And it’s good. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Mort quoted in Network World. Rich quoted in Building the security bridge to the Millennials. Adrian quoted on Database Denial of Service. David Mortman and Adrian Lane will be presenting at Secure360. Mike and JJ podcast about the Neuro-Hacking talk at RSA. Favorite Securosis Posts Adrian Lane: Research Revisited: The Data Breach Triangle. This magical concept from Rich has aged very very well. I also use this frequently, basically because it’s awesome. Mike Rothman: Research Revisited: Off Topic: A Little Perspective. Rich brought me back to the beginning of this strange journey since I largely left the corporate world. 2006 was so long ago, yet it seems like yesterday. Other Securosis Posts Incite 3/5/2014: Reentry. Research Revisited: FireStarter: Agile Development and Security. Research Revisited: POPE analysis on the new Securosis. Research Revisited: Apple, Security, and Trust. Research Revisited: Hammers vs. Homomorphic Encryption. Research Revisited: Security Snakeoil. New Paper: The Future of Security The Trends and Technologies Transforming Security. Research Revisited: RSA/NetWitness Deal Analysis. Research Revisited: 2006 Incites. Research Revisited: The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About. Favorite Outside Posts Adrian Lane: Charlie Munger on Governance. 
Charlie Munger is a favorite of mine, and about as pragmatic as it gets. Good read from Gunnar’s blog. Gal Shpantzer: Bloodletting the Arms Race: Using Attacker’s Techniques for Defense. Ryan Barnett, web app security and WAF expert, writes about banking trojans’ functionality and how to use it against attackers. David Mortman: Use of the term “Intelligence” in the RSA 2014 Expo. Mike Rothman: How Khan Academy is using design to pave the way for the future of education. I’m fascinated by design, or more often by very bad design. Which we see a lot of in security. This is a good story of how Khan Academy focuses on simplification to teach more effectively. Research Reports and Presentations The Future of Security: The Trends and Technologies Transforming Security. Security Analytics with Big Data. Security Management 2.5: Replacing Your SIEM Yet? Defending Data on iOS 7. Eliminate Surprises with Security Assurance and Testing. What CISOs Need to Know about Cloud Computing. Defending Against Application Denial of Service Attacks. Executive Guide to Pragmatic Network Security Management. Security Awareness Training Evolution. Firewall Management Essentials. Top News and Posts Behind iPhone’s Critical Security Bug, a Single Bad ‘Goto’. We Are All Intelligence Officers Now. A week old – we’re catching up on our reading. Marcus Ranum at RSA (audio). Hacking Team’s Foreign Espionage Infrastructure Located in U.S. The Face Behind Bitcoin. Uroburos Rootkit. Fix it tool available to block Internet Explorer attacks leveraging CVE-2014-0322. Blog Comment of the Week This week’s best comment goes to Marco Tietz, in response to Research Revisited: FireStarter: Agile Development and Security, and you’ll have to watch the video to get it. @Adrian: good video on Agile vs Security. But why did you have the Flying Spaghetti Monster in there and didn’t even give it credit! 🙂 rAmen


Incite 3/5/2014: Reentry

After I got off the plane Friday night, picked my bag up off the carousel, took the train up to the northern Atlanta suburbs, got picked up by the Boss, said hello to the kids, and then finally took a breath – my first thought was that RSA isn’t real. But it is quite real, just not sustainable. That makes reentry into my day to day existence a challenge for a few days.   It’s not that I was upset to be home. It’s not that I didn’t want to see my family and learn about what they have been up to. My 5 minute calls two or three times a day, while running between meetings, didn’t give me much information. So I wanted to hear all about things. But first I needed some quiet. I needed to decompress – if I rose to the surface too quickly I would have gotten the bends. For me the RSA Conference is a nonstop whirlwind of activity. From breakfast to the wee hours closing down the bar at the W or the Thirsty Bear, I am going at all times. I’m socializing. I’m doing business. I’m connecting with old friends and making new ones. What I’m not doing is thinking. Or recharging. Or anything besides looking at my calendar to figure out the next place I need to be. For an introvert, it’s hard. The RSA Conference is not the place to be introverted – not if you work for yourself and need to keep it that way. I mean where else is it normal that dinner is a protein bar and shot of 5-hour energy, topped off with countless pints of Guinness? Last week that was not the exception, it was the norm. I was thankful we were able to afford a much better spread at the Security Blogger’s Meetup (due to the generosity of our sponsors), so I had a decent meal at least one night. As I mentioned last week, I am not about to complain about the craziness, and I’m thankful the Boss understands my need to wind down on reentry. I make it a point to not travel the week after RSA, both to recharge, get my quiet time, and reconnect with the family. The conference was great. 
Security is booming and I am not about to take that for granted. There are many new companies, a ton of investment coming into the sector, really cool innovative stuff hitting the market, and a general awareness that the status quo is no good. Folks are confused and that’s good for our business. The leading edge of practitioners are rethinking security and have been very receptive to research we have been doing to flesh out what that means in a clear, pragmatic fashion. This is a great time to be in security. I don’t know how long it will last, but the macro trends seem to be moving in our direction. So I’ll file another RSA Conference into the memory banks and be grateful for the close friends I got to see, the fantastic clients who want to keep working with us, and the new companies I look forward to working with over the next year (even if you don’t know you’ll be working with us yet). Even better, next year’s RSA Conference has been moved back to April 2015. So that gives me another two months for my liver to recover and my brain cells to regenerate. –Mike PS: This year we once again owe huge thanks to MSLGROUP and Kulesa Faul, who made our annual Disaster Recovery Breakfast possible. We had over 300 people there and it was really great. Until we got the bill, that is… Photo credit: “Reentry” originally uploaded by Evan Leeson Securosis Firestarter Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and, well, hang out. We talk a bit about security as well. We try to keep these less than 15 minutes, and usually fail. Feb 21 – Happy Hour – RSA 2014 Feb 17 – Payment Madness Feb 10 – Mass Media Abuse Feb 03 – Inevitable Doom Jan 27 – Government Influence Jan 20 – Target and Antivirus Jan 13 – Crisis Communications 2014 RSA Conference Guide In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. 
You can get the full guide with all the memes you can eat. Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. Leveraging Threat Intelligence In Security Monitoring Quick Wins with TISM The Threat Intelligence + Security Monitoring Process Revisiting Security Monitoring Benefiting from the Misfortune of Others Advanced Endpoint and Server Protection Prevention Assessment Introduction Newly Published Papers The Future of Security Security Management 2.5: Replacing Your SIEM Yet? Defending Data on iOS 7 Eliminating Surprises with Security Assurance and Testing What CISOs Need to Know about Cloud Computing Defending Against Application Denial of Service Security Awareness Training Evolution Firewall Management Essentials Incite 4 U TI is whatever you want it to mean: Interesting experiment from FireEye/Mandiant’s David Bianco, who went around the RSA show floor and asked vendors who used the term prominently in their booths what threat intelligence (TI) meant to them. Most folks just use the buzzword, and mean some of the less sophisticated data sources. I definitely understand David’s perspective, but he is applying the wrong filter. It’s kind of like having a Ph.D. candidate go into a third grade classroom and wonder why the students don’t understand differential equations. Security is a big problem, and the kinds of things David is comfortable with at the top of his Pyramid of Pain would be lost on 98% of the world. If even 40% of the broad market would use


Research Revisited: FireStarter: Agile Development and Security

I have had many conversations over the last few months with firms about to take their first plunge into Agile development methodologies. Each time they ask how to map secure software development processes into an Agile framework. So I picked this Firestarter for today’s retrospective on Agile Development and Security (see the original post with comments). I am a big fan of the Agile project development methodology, especially Agile with Scrum. I love the granularity and focus it requires. I love that at any given point in time you are working on the most important feature or function. I love the derivative value of communication and subtle peer pressure that Scrum meetings produce. I love that if mistakes are made, you do not go far in the wrong direction – resulting in higher productivity and fewer total disasters. I think Agile is the biggest advancement in code development in the last decade because it addresses issues of complexity, scalability, focus, and bureaucratic overhead. But it comes with one huge caveat: Agile hurts secure code development. There, I said it. Someone had to. The Agile process, and even the scrum leadership model, hamstrings development in terms of building secure products. Security is not a freakin’ task card. Logic flaws are not well-documented and discrete tasks to be assigned. Project managers (and unfortunately most ScrumMasters) learned security by skimming a “For Dummies” book at Barnes & Noble while waiting for lattes, but they are the folks making the choices as to what security should make it into iterations. Just like general IT security, we end up wrapping the Agile process in a security blanket or bolting on security after the code is complete, because the process itself is not suited to secure development. I know several of you will be saying “Prove it! Show us a study or research evidence that supports your theory.” I can’t. I don’t have meaningful statistical data to back up my claim. 
But that does not mean it isn’t true, and there is ample anecdotal evidence to support what I am saying. For example: The average Sprint duration of two weeks is simply too short for meaningful security testing. Fuzzing & black box testing are infeasible in the context of nightly builds or pre-release sanity checks. Trust assumptions, between code modules or system functions where multiple modules process requests, cannot be fully exercised and tested within the Agile timeline. White box testing can be effective, but security assessments simply don’t fit into neat 4-8 hour windows. In the same way Agile products deviate from design and architecture specifications, they deviate from systemic analysis of trust and code dependencies. It is a classic forest for the trees problem: efficiency and focus gained by skipping over big picture details necessarily come at the expense of understanding how the system and data are used as a whole. Agile is great for dividing and conquering what you know, but not so great for dealing with the abstract. Secure code development is not like fixing bugs where you have a stack trace to follow. Secure code development is more about coding principles that lead to better security. In the same way Agile cannot help enforce code ‘style’, it doesn’t help with secure coding guidelines. (Secure) style verification is an advantage of peer programming and inherent to code review, but not intrinsic to Agile. The person on the Scrum team with the least knowledge of security, the Product Manager, prioritizes what gets done. Project managers generally do not track security testing, and they are not incented to get security right. They are incented to get the software over the finish line. If they track bugs on the product backlog, they probably have a task card buried somewhere but do not understand the threats. 
Security personnel are chickens in the project, and do not gate code acceptance the way they traditionally were able to in waterfall testing; they may also have limited exposure to developers. The fact that major software development organizations are modifying or wrapping Agile with other frameworks to provide security is evidence of the difficulty of applying security practices directly. The forms of testing that fit Agile development are more likely to get done. If they don’t fit they are typically skipped (especially at crunch time), or they need to be scheduled outside the development cycle. It’s not just that the granular focus on tasks makes it harder to verify security at the code and system levels. It’s not just that features are the focus, or that the wrong person is making security decisions. It’s not just that the quick turnaround in code production precludes effective forms of testing for security. It’s not just that it’s hard to bucket security into discrete tasks. It is all that and more. We are not about to see a study comparing Waterfall with Agile for security benefits. Putting together similar development teams to create similar products under two development methodologies to prove this point is infeasible. I have run Agile and Waterfall projects of similar natures in parallel, and while Agile had overwhelming advantages in a number of areas, security was not one of them. If you are moving to Agile, great – but you will need to evolve your Agile process to accommodate security. What do you think? How have you successfully integrated secure coding practices with Agile?


Research Revisited: POPE analysis on the new Securosis

Since we’re getting all nostalgic and stuff, I figured I’d dust off the rationale I posted the day we announced that I was joining Securosis. That was over 4 years ago and it has been a great ride. Rich and Adrian haven’t seen fit to fire me for cause yet, and I think we’ve done some great work. Of course the plans you make typically aren’t worth the paper they’re written on. We have struggled to launch that mid-market research offering. And we certainly couldn’t have expected the growth we have seen with our published research (using our unique Totally Transparent Research process) or our retainer business. So on balance it’s still all good. By the way, I still don’t care about an exit, as I mentioned in the piece below. I am having way too much fun doing what I love. How many folks really get to say that? So we will continue to take it one day at a time, and one research project at a time. We will try new stuff because we’re tinkerers. We will get some things wrong, and then we’ll adapt. And we’ll drink some beer. Mostly because we can… The Pope Visits Security Incite + Securosis (Originally published on the Security Incite blog – Jan 4, 2010.) When I joined eIQ, I did a “POPE” analysis on the opportunity, to provide a detailed perspective on why I made the move. The structure of that analysis was pretty well received, so as I make another huge move, I may as well dust off the POPE and use that metaphor to explain why I’m merging Security Incite with Securosis. People Analyzing every “job” starts with the people. I liked the freedom of working solo, but ultimately I knew that model was inherently limiting. So thinking about the kind of folks I’d want to work with, a couple of attributes bubbled to the top. First, they need to be smart. Smart enough to know when I’m full of crap. They also need to be credible. Meaning I respect their positions and their ability to defend them, so when they tell me I’m full of crap – I’m likely to believe them. 
Any productive research environment must be built on mutual respect. Most importantly, they need to stay on an even keel. Being a pretty excitable type (really!), when around other excitable types the worst part of my personality surfaces. Yet, when I’m around guys that go with the flow, I’m able to regulate my emotions more effectively. As I’ve been working long and hard on personal development, I didn’t want to set myself back by working with the wrong folks. For those of you that know Rich and Adrian, you know they are smart and credible. They build things and they break them. They’ve both forgotten more about security than most folks have ever known. Both have been around the block, screwed up a bunch of stuff and lived to tell the story. And best of all, they are great guys. Guys you can sit around and drink beer with. Guys you look forward to rolling up your sleeves with and getting some stuff done. Exactly the kind of guys I wanted to work with. Opportunity Securosis will be rolling out a set of information products targeted at accelerating the success of mid-market security and IT professionals. Let’s just say the security guy/gal in a mid-market company may be the worst job in IT. They have many of the same problems as larger enterprises, but no resources or budget. Yeah, this presents a huge opportunity. We also plan to give a lot back to the community. Securosis publishes all its primary research for free on the blog. We’ll continue to do that. So we have an opportunity to make a difference in the industry as well. To be clear, the objective isn’t to displace Gartner or Forrester. We aren’t going to build a huge sales force. We will focus on adding value and helping to make our clients better at their jobs. If we can do that, everything else works itself out. Product To date, no one has really successfully introduced a syndicated research product targeted to the mid-market, certainly not in security. 
That fact would scare some folks, but for me it’s a huge challenge. I know hundreds of thousands of companies struggle on a daily basis and need our help. So I’m excited to start figuring out how to get the products to them. In terms of research capabilities, all you have to do is check out the Securosis Research Library to see the unbelievable productivity of Rich and Adrian. The library holds a tremendous amount of content and it’s top notch. As with every business trying something new, we’ll run into our share of difficulties – but generating useful content won’t be one of them. Exit Honestly, I don’t care about an exit. I’ve proven I can provide a very nice lifestyle for my family as an independent. That’s liberating, especially in this kind of economic environment. That doesn’t mean I question the size of the opportunity. Clearly we have a great opportunity to knock the cover off the ball and build a substantial company. But I’m not worried about that. I want to have fun, work with great guys and help our clients do their jobs better. If we do this correctly, there is no lack of research and media companies that will come knocking. Final thoughts On the first working day of a new decade, I’m putting the experiences (and road rash) gained over the last 10 years to use. Whether starting a business, screwing up all sorts of things, embracing my skills as an analyst or understanding the importance of balance in my life, this is the next logical step for me. Looking back, the past 10 years have been very humbling. It started with me losing a fortune during the Internet bubble, selling the company I founded for the cash on our balance sheet because


Research Revisited: Off Topic: A Little Perspective

As I was crawling through the old archives for some posts, I found my very first reference to Mike here at Securosis. I timed this Revisited post to fire off when Mike’s post on joining Securosis goes live, and the title now seems to have more meaning. Off Topic: A Little Perspective This has nothing to do with security other than the fact Mike Rothman is a security analyst. Sometimes it’s worth sitting back and evaluating why you’re in the race in the first place. It’s all too easy to get caught up in the insanity of day-to-day demands or the incredibly deceptive priorities of the corporate and government rat races. A few months ago I took a step back and decided to reduce travel, stay healthy, and start this blog. I wanted a more personal outlet for writing on topics and in a style that’s inappropriate at my day job (in other words, more fun). My challenge is running this site in a way that doesn’t create a conflict of interest with my employer, and thus I don’t publish anything here that I should be publishing there. Mike just went off and started his own company to support his real priorities. You should really read this.

Share:
Read Post

Research Revisited: Apple, Security, and Trust

Update: After publishing this, I realized I should have taken more time editing, especially after Apple released their iOS Security paper this week. My intention was to refer to situations where, often due to attacks, vulnerabilities, or other events, Apple is pushed into responding. They can still struggle to walk the line between what they want to say and what outsiders want to hear. They have very much improved communications with researchers and the media, and increased the amount of security information they publish in the open. It is the crisis situations that knock things off kilter at times. I am sometimes called an Apple apologist for frequently defending their security choices, but it wasn’t always that way. I first started writing about Apple security because those were the products I used, and I was worried Apple didn’t take security seriously. I was very personally invested in their choices, and when I first posted this back in 2006 there were a lot of reasons to think we were headed for disaster. In retrospect, my post was both on and off target. I thought at the time that Apple needed to focus more on communications. But Apple, as always, chose their own path. They have improved communications significantly, though not nearly as much as someone like Microsoft. But at the same time they tripled down on security. iOS is now one of the most secure platforms out there (yes, even despite the patch last week). OS X is also far more secure than it was, and Apple continues to invest in new security options for users. I was right and I was wrong. Apple recognized, due to the massive popularity of iOS, that building customer trust was essential to maintaining a market lead. They acted on that with dramatic improvements in security. iOS has yet to suffer any major wide-scale exploitation. OS X added features like FileVault 2 (encryption for the masses) and Gatekeeper (wrecking malware markets). Apple most definitely sees security as essential to trust.
But they still struggle with communications. Not that I expect them to ever stop acting like Apple, but they are still feeling their way around the lines to find a level they are comfortable with culturally, while avoiding negative spin cycles like the one I talk about below. This post originally appeared on October 18, 2006.

Apple, Security, and Trust

Before I delve into this topic I’d like to remind readers that I’m a Mac user and Apple fan. We are a 2-person, 2-Mac, 3-iPod, 2-AirPort Express household, with another Mac in the plans this spring. By the same token, I don’t think Microsoft is evil, and I consider some of their products to be quite good. That said, I prefer OS X and have no plans to switch to Vista, although I’ll probably run it in a virtual machine on my Mac. What I’m about to say is in the nature of protecting, not attacking, one of my favorite vendors. Apple faces a choice. Down one path is the erosion of trust, lost opportunities, and customers facing increased risk. Down the other is increased trust, greater opportunities, and happy, safe customers. I have a lot vested in Apple, and I’d like to keep it that way. As most of you probably know by now, Apple shipped a limited number of video iPods loaded with a Windows virus that could infect an attached PC. The virus is well known and all antivirus software should stop it, but the reality is that this was an extremely serious security failure on the part of Apple. The numbers are small and the damage limited, but there was obviously a serious breakdown in their security controls and QA process. As with many recent Apple security stories, this one was about to quietly fade into the night were it not for Apple PR. In Apple’s statement they said, “As you might imagine, we are upset at Windows for not being more hardy against such viruses, and even more upset with ourselves for not catching it.” As covered by George Ou and Amrit Williams, this statement is embarrassing, childish, and irresponsible.
It’s the technical equivalent of blaming a crime victim for their own victimization. I’m not defending the security problems of XP, which are a serious epidemic unto themselves, but this particular mistake was Apple’s fault, and easily preventable. While Mike Rothman agrees with Ou and Williams, he correctly notes that this is just Apple staying on message. That message, incorporated into all major advertising and marketing, is that Macs are more secure, and if you’d just switch to a Mac you wouldn’t have to worry about spyware and viruses. It’s a good message, today, because it’s true. I bought my mom a Mac and talked my sister into switching her small business to Macs primarily because of security. I’m overprotective and no longer feel my friends and family can survive on the Internet on XP. Vista is a whole different animal, fundamentally more secure than its predecessors, but it’s not available yet so I couldn’t consider that option. Thus it was iMac and Mac mini city. But when Apple sticks to this message in the face of a contradictory reality they expose themselves, and their customers, to greater risks. Reality is starting to change and Apple isn’t, and therein lies my concern. All relationships are founded on trust and need. (Amrit has another good post on this topic in business relationships.) One of the keystones of trust is security. I like to break trust into three components:

  • Intent: How do you intend to treat participants in a relationship?
  • Capability: Can you behave in compliance with your intent?
  • Communication: Can you effectively communicate both your intent and capability?

Since there’s no perfect security, we always need to make security tradeoffs. Intent decides how far you need to go with security, capability defines whether you’re really that secure, and communication is how you get customers to believe both your intent and capability. Recent actions by Apple are breaking their foundations of trust. As a business this is a

Share:
Read Post

Research Revisited: Hammers vs. Homomorphic Encryption

We are running a retrospective during RSA because we cannot blog at the show. We each picked a couple of posts we like and still think relevant enough to share. I picked a 2011 post on Hammers and Homomorphic Encryption, because a couple of times a year I hear about a new startup which is going to revolutionize security with a new take on homomorphic encryption. Over and over. And perhaps some day we will get there, but for now we have proven technologies that work to the same end. (Original post with comments) Researchers at Microsoft are presenting a prototype system that can compute on encrypted data without decrypting it. Called homomorphic encryption, the idea is to keep data in a protected state (encrypted) yet still useful. It may sound like Star Trek technobabble, but this is a real working prototype. The set of operations you can perform on encrypted data is limited to a few things like addition and multiplication, but most analytics systems are limited to similar operations as well. If this works, it would offer a new way to approach data security for publicly available systems. The research team is looking for a way to reduce encryption operations, which are computationally expensive – encryption and decryption demand a lot of processing cycles. Performing calculations and updates on large data sets becomes very expensive, as you must decrypt the data set, find the data you are interested in, make your changes, and then re-encrypt the altered items. The ultimate performance impact varies with the storage system and method of encryption, but overhead and latency might typically range from 2x to 10x compared to unencrypted operations. It would be a major advancement if they could dispense with the encryption and decryption operations while still enabling reporting on secured data sets. The promise of homomorphic encryption is predictable alteration without decryption. The possibility of being able to modify data without sacrificing security is compelling.
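To make the idea concrete, here is a toy sketch of the Paillier cryptosystem, one of the best-known partially homomorphic schemes: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The tiny primes and helper names are illustrative assumptions only – nothing here is production-grade cryptography.

```python
import math
import random

def paillier_keygen(p=104723, q=104729):
    # Deliberately tiny primes for illustration; real Paillier uses 1024+ bit primes.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # valid shortcut because we fix the generator g = n + 1
    return (n,), (lam, mu, n)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)          # random blinding factor, coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With g = n + 1, the ciphertext is (n+1)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    l = (pow(c, lam, n * n) - 1) // n   # the "L function": L(x) = (x - 1) / n
    return (l * mu) % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 1500), encrypt(pub, 2750)
c_sum = (c1 * c2) % (pub[0] ** 2)       # multiply ciphertexts...
assert decrypt(priv, c_sum) == 4250     # ...to add the underlying plaintexts
```

Note that the random blinding factor makes repeated encryptions of the same value look different, which matters for the confidentiality concerns the post raises.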
Running basic operations on encrypted data might remove the threat of exposing data in the event of a system breach or user carelessness. And given that every company even thinking about cloud adoption is looking at data encryption and key management deployment options, there is plenty of interest in this type of encryption. But like a lot of theoretical lab work, practicality has an ugly way of pouring water on our security dreams. There are three very real problems for homomorphic encryption and computation systems:

  • Data integrity: Homomorphic encryption does not protect data from alteration. If I can add, multiply, or change a data entry without access to the owner’s key, that becomes an avenue for an attacker to corrupt the database. Alteration of pricing tables, user attributes, stock prices, or other information stored in a database is just as damaging as leaking information. An attacker might not know what the original data values were, but hiding them is not enough to provide security.
  • Data confidentiality: Homomorphic encryption can leak information. If I can add two values together and always come up with a consistent result, it’s possible to reverse engineer the values. The beauty of good encryption is that a very minor change to the plaintext – the data you are encrypting – produces radically different output. With CBC variants of encryption, the same plaintext even encrypts to a different value each time. The question with homomorphic encryption is whether it can be used while still maintaining confidentiality – it might well leak data to determined attackers.
  • Performance: Performance is poor, and will likely remain worse than classical encryption. As homomorphic performance improves, so do the more common forms of encryption. This is important when considering the cloud as a motivator for this technology, as acknowledged by the researchers.
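The confidentiality problem is easy to demonstrate with a toy model: any scheme that maps the same plaintext to the same ciphertext leaks equality patterns, while a randomized scheme (like CBC with a fresh IV) does not. The keyed hashes below are stand-ins, not real ciphers – they model only the determinism property, not actual decryptable encryption.

```python
import hashlib
import secrets

KEY = b"demo-key"  # hypothetical shared key for the toy model

def det_encrypt(m: bytes) -> bytes:
    # Deterministic: identical plaintexts always produce identical output.
    return hashlib.sha256(KEY + m).digest()

def rnd_encrypt(m: bytes) -> bytes:
    # Randomized: a fresh IV makes repeated plaintexts look unrelated.
    iv = secrets.token_bytes(16)
    return iv + hashlib.sha256(KEY + iv + m).digest()

salaries = [b"90000", b"90000", b"120000"]
det = [det_encrypt(s) for s in salaries]
rnd = [rnd_encrypt(s) for s in salaries]
assert det[0] == det[1]  # an observer can tell rows 1 and 2 hold the same value
assert rnd[0] != rnd[1]  # randomization hides the repeat
```

A homomorphic scheme must preserve enough structure to compute on ciphertexts, which pushes it toward the deterministic end of this spectrum – that is exactly the leak the post worries about.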
Many firms are looking to “The Cloud” not just for elastic pay-as-you-go services, but also as a cost-effective tool for handling very large databases. As databases grow, the performance impact grows super-linearly – layering on a security tool with poor performance is a non-starter. Not to be a total buzzkill, but I want to point out that there are practical alternatives that work today. For example, data masking obfuscates data but allows computational analytics. Masking can be done in such a way as to retain aggregate values while masking individual data elements. Masking – like encryption – can be poorly implemented, enabling the original data to be reverse engineered. But good masking implementations keep data secure, perform well, and facilitate reporting and analytics. Also consider the value of private clouds on public infrastructure. In one of the many possible deployment models, data is locked into the cloud as a black box, and only approved programmatic elements ever touch the data – never users. You import data and run reports, but do not allow direct access to the data. As long as you protect the management and programmatic interfaces, the data remains secure. There is no reason to look for isolinear plasma converters or quantum flux capacitors when a hammer and some duct tape will do.
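One way masking can retain aggregate values while obscuring individual elements, as described above, is zero-sum noise: perturb each record, then rebalance so the total is unchanged. The function name and rebalancing approach are illustrative assumptions, not any particular product’s algorithm.

```python
import random

def mask_preserving_sum(values, spread=0.2):
    # Perturb each value by up to +/- spread (20% by default)...
    noisy = [v * (1 + random.uniform(-spread, spread)) for v in values]
    # ...then distribute the residual evenly so the aggregate is preserved.
    correction = (sum(values) - sum(noisy)) / len(noisy)
    return [round(v + correction, 2) for v in noisy]

salaries = [90000, 120000, 75000, 160000]
masked = mask_preserving_sum(salaries)
# Individual records are obscured, but the total survives (up to rounding).
assert abs(sum(masked) - sum(salaries)) < 0.1
```

This is the tradeoff the post points at: reports over the masked column still produce the right totals, but recovering any single original value requires reversing the noise.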

Share:
Read Post

Research Revisited: Security Snakeoil

Wow! Sometimes we find things in the archives that still really resonate. This is a short one, but I’ll be damned if I don’t expect to see this exact phrase used on the show floor at RSA this week. This was posted September 25, 2006. I guess some things never change…

How to Smell Security Snake Oil in One Sentence or Less

If someone ever tells you something like the following: “We defend against all zero day attacks using a holistic solution that integrates the end-to-end synergies in security infrastructure with no false positives.” Run away.

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.