In our 2013 RSA Guide we wrote that 2012 was a tremendous year for cloud security. We probably should have kept our mouths shut and remembered all those hype cycles, adoption curves, and other wavy lines, because 2013 blew it away. That said, cloud security is still quite nascent, and in many ways losing the race with the cloud market itself, widening the gap between what’s happening in the cloud and what’s actually being secured in the cloud. The next few years are critical for security professionals and vendors, who risk being excluded from cloud transformation projects – and thus sidelined in enterprise markets – as cloud vendors and DevOps take over security functions.
Lead, Follow, or Get the Hell out of the Way
2013 saw cloud computing begin to enter the fringes of the early mainstream. Already in 2014 we see a bloom of cloud projects, even among large enterprises. Multiple large financials are taking tentative steps into public cloud computing. When these traditionally risk-averse technological early adopters put their toes in the water, the canary sings (okay, we know the metaphor should be that the canary dies, but we don’t want to bring you down).
Simultaneously, we see cloud providers positioning themselves as security providers. Amazon makes it abundantly clear that they consider security one of their top two priorities, that their data centers are more secure than yours, and that they can wipe out entire classes of infrastructure vulnerabilities so you can focus on applications and workloads. Cloud storage providers are starting to provide data security well beyond what most enterprises can even dream of implementing (such as tracking all file access, by user and device). In our experience Security has a tiny role in many cloud projects, and rarely participates in the design of security controls. The same is true for traditional security vendors, who have generally failed to adapt their products to new cloud deployment patterns.
We can already see how this will play out at the show, and in the market. A growing but still relatively small set of vendors is taking advantage of this gap by providing security far better attuned to cloud deployments. These are the folks to look at first if you are involved in a cloud project. One key indicator is the billing model: do they use elastic, metered pricing? Can they help secure SaaS or PaaS, like a cloud database? Or is their answer, “Pay the same as always, run our virtual appliance, and route all your network traffic through it”? Sometimes that is the answer, but not nearly as often as it used to be.
And assess honestly when and where you need security tools, anyway. Cloud applications don’t have the same attack surface as traditional infrastructure. Risks and controls shift; so should your investments. Understand what you get from your provider before you start thinking about spending anywhere else.
SECaaS Your SaaS
We are getting a ton of requests for help with cloud vendor risk assessment (and we are even launching a 1-day workshop), mostly driven by Software as a Service. Most organizations only use one to three Infrastructure as a Service providers, but SaaS usage is exploding. More often than not, individual business units sign up for these services – often without going through the procurement process.
A new set of vendors is emerging to detect SaaS usage, help integrate it into your environment (predominantly through federated identity management), and add a layer of security. Some of these providers even offer risk ratings, although that is no excuse for not doing your own homework. And while you might think you have a handle on SaaS usage because you block Dropbox and a dozen other services, there are thousands of these things in active use. In the words of one risk officer who went around performing assessments: at least one of them was a shared house on the beach with a pile of surfboards out front, an open door, and a few servers in a closet.
There are a dozen or more SaaS security tools now on the market, and most of them will be on the show floor. They offer a nice value proposition but implementation details vary greatly, so make sure whatever you pick meets your needs. Some of you care more about auditing, others about identity, and others about security, and none of them really offer everything yet.
Workload Security Is Coming
“Cloud native” application architectures combine IaaS and SaaS in new highly dynamic models that take advantage of autoscaling, queue services, cloud databases, and automation. They might pass a workload (such as data analysis) to a queue service, which spins up a new compute instance in the current cheapest zone, which completes the work, and then passes back results for storage in a cloud database.
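The queue-driven workflow above can be sketched in a few lines. This is a toy stand-in – the queue service, job format, and field names are all invented for illustration; a real deployment would use something like SQS, autoscaled instances, and a cloud database for results:

```python
import json
import queue

# Toy stand-in for a cloud queue service; in production this would be
# a managed service such as SQS, not an in-process queue.
work_queue = queue.Queue()

def submit_job(payload):
    """Producer: drop an analysis job onto the queue."""
    work_queue.put(json.dumps(payload))

def worker():
    """A short-lived 'compute instance': pull one job, do the work,
    and hand back results for storage before terminating."""
    job = json.loads(work_queue.get())
    result = {"job_id": job["job_id"], "total": sum(job["values"])}
    # In a real deployment the result would land in a cloud database,
    # and the instance would then shut itself down.
    return result

submit_job({"job_id": 1, "values": [2, 3, 5]})
print(worker())  # → {'job_id': 1, 'total': 10}
```

The point of the sketch is the lifecycle: compute exists only while there is work on the queue, which is exactly what breaks server-centric security controls.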
Under these new models – which are in production today – many traditional security controls break. Vulnerability assessment on a server that only lives for an hour? Patching? Network IDS, when there is no actual network to sniff?
Talk to your developers and cloud architects before becoming too enamored with any cloud security tools you see on the show floor. What you buy today may not match your needs in six months. You need to be project driven rather than product driven, because you can no longer purchase one computing platform and use it for everything. That is why we keep coming back to elastic pricing: a model that fits your cloud deployments as they evolve and change is often the best indicator that a vendor ‘gets’ the cloud.
Barely Legal SECaaS
We are already running long, so suffice it to say there are many more security offerings as cloud services, and a large percentage of them are mature enough to satisfy your needs. The combination of lower operational management costs, subscription pricing, pooled threat intelligence, and other analytics, is often better than what you can deploy and manage completely internally. You still need to ask hard questions and be very careful with technobabble pillow talk, because not all cloud services are created equal. Look for direct answers – especially on how providers protect your data, segregate users, and allow you to get your data back if necessary. Finally, walk away if they want you to sign an NDA first.
Here’s to the Server Huggers
Many of you are considering private clouds, or have one already, to reduce the perceived risks of multitenancy. As we wrote in What CISOs Need to Know about Cloud Computing, we think private clouds are largely a transition technology to make server huggers feel they are still in control. Well, that and to hold us over until there is more competition in the real public cloud market – as opposed to outfits merely offering a different form of hosting.
Most of the private cloud security focus is, rightfully, on network security. The key questions to ask are how it affects your network topology, and how well Software Defined Networking is supported, because this is the first place we see SDN establishing a beachhead. Also understand the costs and hardware requirements of supporting a private cloud. You definitely need something that supports distributed deployments, tightly integrated with the cloud platform.
The Cloudwashing Dead
Finally, we see no shortage of cloudwashing, and expect to see a lot more at the show. Nearly every product will feature a ‘cloud’ version. But by this point you should know what to look for, to determine which are built for cloud, and which are merely the same software wrapped in a virtual appliance or an endpoint/server agent that has barely been modified. Ask for reference clients who have deployed on Azure, Amazon, or Google – not just on one of the many semi-private hosted cloud providers.
Posted at Monday 17th February 2014 3:57 pm
(2) Comments •
This is our last regular Firestarter before we record our pre-RSA Quarterly Happy Hour. This week, after a few non sequiturs, we talk about the madness of payment systems. It seems the US is headed toward chip and signature, not chip and PIN like the rest of the world, because banks think Americans are too stupid to remember a second PIN.
Posted at Monday 17th February 2014 11:20 am
(0) Comments •
By Mike Rothman
We are in the home stretch, with only a few more deep dives to post.
EPP: Living on Borrowed Time?
Every year we take a step back and wonder if this is the year customers will finally revolt against endpoint protection suites and shift en masse to something free, or one of the new technologies focused on preventing advanced attacks. It is so easy to forget how important inertia is to security buying cycles. Combined with the continued (ridiculous) PCI mandate for ‘anti-malware’ (whatever that means), the AV vendors continue to print money.
Our friends at 451 Group illustrate this with a recent survey. A whopping 5% of respondents are reducing their antivirus budget, while 13% are actually increasing the budget. Uh, what?!?! Most are maintaining the status quo, so you will see the usual AV suspects with their big RSA Conference booths, paid for by inertia and the PCI Security Standards Council. Sometimes it would be great to have a neutron cluebat to show the mass market the futility of old-school AV…
Don’t Call It a Sandbox
The big AV vendors cannot afford to kill their golden goose, so innovation is unlikely to come from them. The good news is that there are plenty of companies taking different approaches to detection at the endpoint and server. Some look at file analysis, others have innovative heuristics, and you will also see isolation technologies on the floor. Don’t forget old-school application control, which is making a comeback on the back of Windows XP’s end of life, and the fact that servers and fixed function devices should be totally locked down.
We expect isolation vendors to make the most noise at the RSA Conference. Their approach is to isolate vulnerable programs (including Java, browsers, and/or Office suites) from the rest of the device so malware can’t access the file system or other resources to further compromise the device. Whether isolation is via virtualization, VDI, old-school terminal services, or newfangled endpoint isolation (either at the app or kernel level), it is all about accepting that you cannot stop infection, so you need to make sure malware can’t get to anything interesting on the device.
These technologies are promising but not yet mature. We have heard of very few large-scale implementations but we need to do something different, so we are watching these technologies closely, and you should too.
The Rise of the Endpoint Monitors
As we described in the introduction to our Advanced Endpoint and Server Protection series, we are seeing a shift in budget from predominately prevention to detection and investigation functions. This is a great thing in light of the fact that you cannot stop all attacks.
At the show we will see a lot of activity around endpoint forensics, driven by hype over the recent FireEye/Mandiant and Bit9/Carbon Black deals, bringing this technology into the spotlight. But there is a bigger theme – what we call “Endpoint Activity Monitoring”. It involves storing very detailed historical endpoint (and server) telemetry, and then searching for indicators of compromise in hopes of identifying new attacks that evade the preventative controls. This allows you to find compromised devices even if they are dormant.
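The core idea of endpoint activity monitoring can be sketched simply: keep detailed telemetry, then search it retroactively for indicators of compromise. The record fields and IoC values below are made up for illustration – real products store far richer telemetry at far larger scale:

```python
# Historical endpoint/server telemetry (illustrative records).
telemetry = [
    {"host": "laptop-17", "process": "winword.exe", "dest_ip": "10.0.0.5"},
    {"host": "laptop-17", "process": "evil.exe",    "dest_ip": "203.0.113.9"},
    {"host": "server-03", "process": "sshd",        "dest_ip": "10.0.0.8"},
]

def find_compromised(records, iocs):
    """Return hosts whose history matches any known indicator of
    compromise -- even if the malware is now dormant."""
    hits = set()
    for r in records:
        if r["process"] in iocs.get("processes", set()) or \
           r["dest_ip"] in iocs.get("ips", set()):
            hits.add(r["host"])
    return hits

iocs = {"processes": {"evil.exe"}, "ips": {"198.51.100.7"}}
print(find_compromised(telemetry, iocs))  # → {'laptop-17'}
```

The key property is that detection works after the fact: an indicator learned today can flag a device compromised last month, which is exactly what preventative controls cannot do.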
Of course if isolation is immature technology, endpoint activity monitoring is embryonic. There are a bunch of different approaches to storing that data, so you will hear vendors poking each other about whether they store on-site or in the cloud. They also have different approaches to analyzing that massive amount of data. But all these technical things obscure the real issue: whether these technologies can scale. This is another technology to keep an eye on at the show.
Endpoints and Network: BFF
The other side of the coin discussed in our Network Security deep dive is that endpoint solutions to prevent and detect advanced malware need to work with network stuff. The sooner an attack can be either blocked or detected, the better, so being able to do some prevention/detection on the network is key.
This interoperability is also important because running a full-on malware analysis environment on every endpoint is inefficient. Being able to have an endpoint or server agent send a file either to an on-premise network-based sandbox or a cloud-based analysis engine provides a better means of determining how malicious the file really is.
Of course this malware analysis doesn’t happen in real time, and you usually cannot wait for a verdict from off-device analysis before allowing the file to execute on the device. So devices will still get popped but technology like endpoint activity monitoring, described above, gives you the ability to search for devices that have been pwned using a profile of the malware from analysis engines.
Most MDM vendors have been bought, so managing these devices is pretty much commodity technology now. Every endpoint protection vendor has a mobile offering they are bundling into their suite. But nobody seems to care. It’s not that these products aren’t selling. They are flying off the virtual shelves, but they are simply not exciting. And if it’s not exciting you won’t hear much about it at the conference.
Some new startups will be introducing technologies like mobile IPS, but it just seems like yesterday’s approach to a problem that requires thinking differently. Maybe these folks should check out Rich’s work on protecting iOS, which gets down to the real issue: the data. It seems like the year of mobile malware is coming – right behind the year of PKI. Not that mobile malware doesn’t exist, but it’s not having enough impact to fire the industry up. Which means it will be a no-show at the big show.
Posted at Monday 17th February 2014 6:00 am
(0) Comments •
By Adrian Lane and Gunnar
One of the biggest trends in security gets no respect at RSA. Maybe because identity folks still look at security folks cross-eyed. But this year things will be a bit different. Here’s why:
The Snowden Effect
Companies are (finally) dealing with the hazards of privilege – a.k.a. Privileged User Access. Yes, we hate the term “insider threat” – we have good evidence that external risks are the real issue. That said, logic does not always win out – many companies are asking themselves right now, “How can I stop a ‘Snowden Incident’ from happening at my company?” This Snowden Effect is getting traction as a marketing angle, and you will see it on the RSA Conference floor because people are worried about their dirty laundry going public.
Aside from the marketing hype, we have been surprised by the zeal with which companies are now pursuing technology to enforce Privileged User Access policies. The privileged user problem is not new, but companies’ willingness to incur cost, complexity, and risk to address it is. Part of this is driven by auditors assigning higher risk to these privileged accounts (On a cynical note, we have to wonder, “What’s the matter, big-name audit firm? All out of easy findings?”). But sometimes the headline news does really scare the bejesus out of companies in that vertical (that’s right, we’re looking at you, retailers). Whatever the reason, companies and external auditors are waking up to privileged users as perhaps the largest catalyst in downside risk scenarios. Attackers go after databases because that’s where the data is (duh). The same goes for privileged accounts – that’s where the access is!
But while the risk is almost universally recognized, what to do about it isn’t – aside from “continuous improvement”, because hey, everyone needs to pass their audit. One reason the privileged user problem has persisted so long is that the controls often reduce the productivity of some of the most valuable users, drive up cost, and generally increase availability risk. Career risk, anyone? But that’s why security folks make the big bucks. High-probability events get the lion’s share of attention, but lower-probability gut-punch events like privileged user misuse have come to the fore. Buckle up!
Nobody cares what your name is!
Third-party identity services and cloud-based identity are gaining momentum. The need for federation (to manage customer, employee, and partner identities) and two-factor authentication (2FA) to reduce fraud are both powerful motivators. We expected last year’s hack of Mat Honan to start a movement away from passwords in favor of certificates and other better user authentication tools, but what we got instead was risk-based handling of requests on the back end. It is not yet the year of PKI, apparently.
Companies are less concerned with logins and more concerned with request context and metadata. Does the user normally log in at this time? From that location? With that app? Is this a request they normally make? Is it for a typical dollar amount? A lot more is being spent on analytics to determine ‘normal’ behavior than on replacing identity infrastructure, and fraud analytics on the back end are leading the way. In fact precious little attention is being paid to identity systems on the front end – even payment processors are discussing third-party identity from Facebook and Twitter for authentication. What could possibly go wrong? As usual cheap, easy, and universally available trump security – for authentication tools, this time. To compensate, effort will need to be focused on risk-based authorization on the back end.
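The back-end, context-based checks described above amount to a risk score over request metadata. The sketch below is purely illustrative – the weights, thresholds, and profile fields are invented, and real fraud analytics build per-user behavioral baselines rather than hard-coded rules:

```python
# Hypothetical 'normal' profile for a user; in practice this is
# learned from historical behavior, not configured by hand.
USUAL = {"hour_range": (8, 18), "location": "US", "typical_amount": 500.0}

def risk_score(request, profile=USUAL):
    """Score a request by how far its context deviates from normal."""
    score = 0
    lo, hi = profile["hour_range"]
    if not (lo <= request["hour"] <= hi):
        score += 2                                  # unusual time of day
    if request["location"] != profile["location"]:
        score += 3                                  # unusual location
    if request["amount"] > 3 * profile["typical_amount"]:
        score += 3                                  # atypical dollar amount
    return score

def allow(request, threshold=4):
    """Below threshold: proceed. Above: step up authentication or deny."""
    return risk_score(request) < threshold

print(allow({"hour": 10, "location": "US", "amount": 120.0}))   # → True
print(allow({"hour": 3,  "location": "RU", "amount": 9000.0}))  # → False
```

Note that nothing here touches the front-end login at all – which is exactly the shift described above: the password stays weak, and the compensation happens in back-end authorization.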
Posted at Sunday 16th February 2014 12:00 pm
(0) Comments •
By Mike Rothman and Adrian Lane
As we continue deep dives into our coverage areas, we now hit security management and compliance.
If you don’t like it, SECaaS!
We have taken a bunch of calls this year from folks looking to have someone else manage their SIEM. Why? Because after two or three failed attempts, they figure if they are going to fail again, they might as well have a service provider to blame. That has put some wind in the sails of the service providers who offer monitoring services, and provided an opening for those who can co-source or outsource the SIEM. Just make sure to poke and prod the providers about how you are supposed to respond to an incident when they have your data. And to be clear… they have your data.
As we mentioned in the network security deep dive, threat intelligence (TI) is hot. But in terms of security management, many early TI services were just about integrating IP black lists and malware file signatures – not all that intelligent! Now you will see all sorts of intelligence services on malware, botnets, compromised devices, and fraud analytics – and the ability to match their indicators against your own security events. This is not just machine-generated data, but often includes user behaviors, social media analysis, and DoS tactics. Much of this comes from third-party services, whose sole business model is to go out looking for malware and figure out how best to detect and deal with it. These third parties have been very focused on making it easier to integrate data into your SIEM, so keep an eye out for partnerships between SIEM players and TI folks trying to make SIEM useful.
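Mechanically, the integration boils down to matching your events against the indicators each feed publishes. A minimal sketch, with feed names and indicators invented for illustration:

```python
# Hypothetical third-party threat intelligence feeds: each maps a feed
# name to a set of known-bad source IPs (real feeds carry much richer
# indicators -- file hashes, domains, behaviors).
feeds = {
    "botnet-feed":  {"203.0.113.9", "198.51.100.7"},
    "malware-feed": {"192.0.2.44"},
}

def tag_events(events, feeds):
    """Annotate each security event with the TI feeds its source IP
    appears in, so the SIEM can prioritize matching events."""
    tagged = []
    for ev in events:
        sources = [name for name, ips in feeds.items() if ev["src_ip"] in ips]
        tagged.append({**ev, "ti_hits": sources})
    return tagged

events = [{"src_ip": "203.0.113.9"}, {"src_ip": "10.1.1.1"}]
for ev in tag_events(events, feeds):
    print(ev)
# → {'src_ip': '203.0.113.9', 'ti_hits': ['botnet-feed']}
# → {'src_ip': '10.1.1.1', 'ti_hits': []}
```

The value of the SIEM/TI partnerships mentioned above is largely about doing this matching at scale, continuously, and across indicator types – not just IPs.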
Shadow of Malware
SIEMs have gotten a bit of a black eye over the last couple of years – just as vendors were finally coming to terms with compliance requirements, they got backhanded by customer complaints about failures to adequately detect malware. As malware detection has become a principal use case for SIEM investment, vendors have struggled to keep pace – first with more types of analytics, then more types of data, and then third-party threat intelligence feeds. For a while it felt like watching an overweight mall cop chase teenage shoplifters – funny so long as the cop isn’t working for you. But now some of the mall cops are getting their P90X on and chasing the mallrats down – yes, that means we see SIEMs becoming faster, stronger, and better at solving current problems. Vendors are quietly embracing “big data” technologies, a variety of built-in and third-party analytics, and honest-to-goodness visualization tools.
So you will hear a lot about big data analytics on the show floor. But as we said in our Security Management 2.5 research, don’t fall into the trap. It doesn’t actually matter what the underlying technology is so long as it meets your needs, at the scale you require.
Third time is… the same
There hasn’t been much activity around compliance lately, as it got steamrolled by the malware juggernaut. Although your assessors show up right on time every quarter, and you haven’t figured out how to get rid of them quicker yet, have you? We didn’t think so. PCI 3.0 is out but nobody really cares. It’s the same old stuff, and you have a couple years to get it done. Which gives you plenty of time for cool malware detection stuff at the show.
The ‘GRC’ meme will be on the show floor, but that market really continues to focus on automating the stuff you need to do, without adding real value to either your security program or your business. A good thing, yes, but not sexy enough to build a marketing program on. Aggregating data, reducing data, and pumping out some reports – good times. If your organization is big enough and you have many moving technology parts (yeah, pretty much everyone), these technologies make sense. Though odds are you already have something for compliance automation. The question is whether it sucks so badly that you need to look for something else.
You know a market has reached the proverbial summit when the leading players talk about the new stuff they are doing. Clearly the vulnerability management market is there, along with its close siblings configuration management and patch management, though the latter two can be subsumed by the Ops group (to which security folks say: “Good riddance!”). The VM folks are talking about passive monitoring, continuous assessment, mobile devices, and pretty much everything except vulnerability management. Which makes sense because VM just isn’t sexy. It is a zero-sum game, which will force all the major players in the space to broaden their offerings – did we mention they will all be talking ‘revolutionary’ new features?
But the first step in a threat management process is “Assessment.” A big part of assessment is discovering and understanding the security posture of devices and applications. That is vulnerability management, no? Of course it is – but the RSA Conference is about the shiny, not useful…
–Mike Rothman and Adrian Lane
Posted at Friday 14th February 2014 11:00 am
(0) Comments •
By Adrian Lane
With PoS malware, banking trojans, and persistent NSA threats the flavors of the month and getting all the headlines, application security seems to get overshadowed every year at the RSA Conference. Then again, who wants to talk about the hard, boring tasks of fixing the applications that run your business? We have to admit it’s fun to read about who the real hackers are, including selfies of the dorks apparently selling credit card numbers on the black market. Dealing with a code vulnerability backlog? Not so much fun. But very real and important trends are going on in application security, most of which involve “calling in the cavalry” – or more precisely outsourcing to people who know more about this stuff, to jumpstart application security programs.
The Application Security Specialists
Companies are increasingly calling in outside help to deal with application security, and it is not just the classic dynamic web site scanning and penetration testing. On the show floor you will see several companies offering cloud services for code scanning. You upload your code and associated libraries, and they report back on known vulnerabilities. Conceptually this sounds an awful lot like white-box scanning in the cloud, but there is more to it – the cloud services can do some dynamic testing as well. Some firms leverage these services before they launch public web applications, while others are responding to customer demands to prove and document code security assurance. In some cases the code scanning vendors can help validate third-party libraries – even when source code is not available – to provide confidence and substantiation for platform providers in the security of their foundations.
Several small professional services firms are popping up to evaluate code development practices, helping to find bad code and, more importantly, get development teams pointed in the right direction. Finally, there is a new trend in application vulnerability management – no, we are not talking about tools that scan for platform defects. The new approaches track vulnerabilities in much the same way we track general software defects, but with a focus on specific security issues. Severity, path to exploit, the line of code responsible, and the calling modules that rely on defective code are all areas where tools can help development teams prioritize security vulnerability fixes.
At the beginning of 2013, several small application security gateway vendors were making names for themselves. Within a matter of months the three biggest were acquired (Mashery by Intel, Vordel by Axway, and Layer 7 by CA). Large firms quickly snapping up little firms often signal the end of a market, but in this case it is just the beginning – to become truly successful these smaller technologies need to be integrated into a broader application infrastructure suite. Time waits for no one, and we will see a couple new vendors on the show floor with similar models.
You will also see a bunch of activity around API gateways because they serve as application development accelerators. The gateway provides base security controls, release management, and identity functions in a building block platform, on top of which companies publish internal systems to the world via RESTful APIs. This means an application developer can focus on delivery of a good user experience, rather than worrying extensively about security. Even better, a gateway does not care whether the developer is an employee or a third party. That plays into the trend of using third-party coders to develop mobile apps. Developers are compensated according to the number of users of their apps, and gateways track which app serves any given customer. This simple technology allows crowdsourcing apps, so we expect the phenomenon to grow over the next few years.
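The two gateway functions described above – baseline access control and per-app attribution for developer payouts – can be sketched in miniature. Everything here (keys, app names, endpoints) is invented for illustration; a real gateway also handles identity federation, rate limiting, and release management:

```python
# Toy API gateway: map API keys to registered apps, reject unknown
# callers, and meter usage per app so third-party developers can be
# compensated by usage. All names and keys are hypothetical.
API_KEYS = {"key-abc": "partner-app-1", "key-xyz": "internal-app"}
usage = {}

def handle_request(api_key, endpoint):
    """Authorize the caller, record which app served the request,
    then (in a real gateway) forward to the internal system."""
    app = API_KEYS.get(api_key)
    if app is None:
        return 401, "unknown key"           # base security control
    usage[app] = usage.get(app, 0) + 1      # per-app metering
    return 200, f"{app} -> {endpoint}"

print(handle_request("key-abc", "/v1/orders"))  # → (200, 'partner-app-1 -> /v1/orders')
print(handle_request("bogus", "/v1/orders"))    # → (401, 'unknown key')
```

Because the gateway neither knows nor cares whether "partner-app-1" was written in-house or by a third party, the same mechanism supports crowdsourced app development.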
Bounty Hunters – Bug Style
Several companies, most notably Google and Microsoft, have started very public “security bug bounty” programs and hackathons to incentivize professional third-party vulnerability researchers and hackers to find and report bugs for cash. These programs have worked far better than the companies originally hoped, with dozens of insidious and difficult-to-detect flaws disclosed quickly, before new code goes live. Google alone has paid out more than $1 million in bounties – their program has been so successful that they have announced they will quintuple rewards for bugs on core platforms. These programs tend to attract skilled people who understand the platforms and uncover things development teams were totally unaware of. Additionally, internal developers and security architects learn from attacker approaches. Clearly, as more software publishers engage the public to shake down their applications, we will see everyone jumping on this bandwagon – which will provide an opportunity for small services firms to help software companies set up these programs.
Posted at Friday 14th February 2014 6:00 am
(0) Comments •
By Adrian Lane
Bacon as a yardstick: This year will see the 6th annual Securosis Disaster Recovery Breakfast, and I am measuring attendance in required bacon reserves. Jillian’s at the Metreon has been a more than gracious host each year. But when we order food we (now) do it in increments of 50 people. At the moment we are ordering bacon for 250, and we might need to bump that up! We have come a long way since 2009, when about 35 close friends showed up, and we are overjoyed that so many friends and associates will turn out. Regardless, we expect a quiet, low-key affair – it has always been our favorite event of the week because of that. Bring your tired, your hungry, your hungover, or just plain conference-weary self over and say ‘Howdy’. There will be bacon, good company, and various OTC pharmaceuticals to cure what ails you.
Note from Rich: Actually we had a solid 100 or so that first year. I know – I had to pay the bill solo.
Big Spin: More and more firms are spinning their visions of big data, which in turn makes most IT folks’ heads spin. These visions look fine within a constrained field of view, but the problem is what is left unsaid: essentially the technologies and services you will need but which are not offered – and which vendors aren’t talking about. Worse, you have to filter through non-standard terminology deployed to support vendor spin – so it’s extremely difficult to compare apples against apples. You cannot take vendor big data solutions at face value – at this early stage you need to dig in a bit. But to ask the right questions, you need to know what you probably don’t yet understand. So the vendor product demystification process begins with translating their materials out of vendor-speak. Then you can determine whether what they offer does what you need, and finally – and most importantly – identify the areas they are not discussing, so you can discover their deficiencies. Is this a pain in the ass? You betcha! It’s tough for us – and we do this all day, for a living. So if you are just learning about big data, I urge you to look at the essential characteristics defined in the introduction to our Securing Big Data Clusters paper – it is a handy tool to differentiate big data from big iron, or just big BS.
Lying in wait: I have stated before that we will soon stop calling it “big data”, and instead just call these platforms “modular databases”. Most new application development projects do not start with a relational repository – instead people now use some form of NoSQL. That should be very troubling to any company that derives a large portion of its revenue from database sales. Is it odd that none of the big three database vendors has developed a big data platform (a real one – not a make-believe version)? Not at all. Why jump in this early, when developers are still trying to decide whether Couch or Riak or Hadoop or Cassandra or something else entirely is best for their projects? So do the big three database vendors endorse big data? Absolutely. To varying degrees they encourage customer adoption, with tools to support integration with big data – usually Hadoop. It is only smart to play it slow, lying in wait like a giant python, and later swallow the providers that win out in the big data space. Until then you will see integration and management tools, but very tepid development of NoSQL platforms from the big relational players. Yes, I expect hate mail from vendors on this, so feel free to chime in.
Hunter or hunted? On the Securosis internal chat board we were talking about open security job positions around the industry. Some are very high-profile meat grinders we wouldn’t touch with asbestos gloves and a 20’ pole. Some we recommend to friends, with substantial warnings about mental health and marital status. Others not at all. Invariably our discussion turned to the best job you never took: jobs that sounded great until you got there – firms often do a great job of hiding dirty laundry until after you come on board. Certain positions provide a learning curve for a company: whoever takes the job, no matter how good, fails miserably. Only after the post-mortem can the company figure out what it needs and how to structure the role to make it work. Our advice: be careful and do your homework. Security roles are much more difficult than, say, programmer or generic IT staffer. Consult your network of friends, seek out former employees, and look at the firm’s overall financial health for obvious indicators. Who held the job before you, and what happened? And if you get a chance to see Mike Rothman present “A Day in the Life of a CISO”, check it out – he captures the common pitfalls in a way that will make you laugh – or cry, depending on where you work.
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
- Dave Lewis: When hacking isn’t.
- David Mortman: Tesla Hires Hacker Kristin Paget to, Well, Secure Some Things.
- Mike Rothman: Your relationship with the future. Philosopher king Seth Godin says you need to make a choice. Focus efforts on folks who hope for a better tomorrow, or those who pine for the “good old days”. I tend to look to the future, but I am working on that right now. It’s hard but worth it…
- Mike Rothman (apparently has two favorites this week): 6 Pieces of Advice from Successful Writers. You are a writer. Whether you get paid to write (like us) or not, you have to document something. There are some good tips for breaking through blocks and writing to make your points.
- Adrian Lane: DRM in the real world. Cory Doctorow’s very good discussion of the “copy protection” side of Digital Rights Management (DRM) issues, and some very astute observations on how they relate to security. Keep in mind that DRM is much more than just copy protection. And Bruce Lehman’s regulatory framework may have been bonkers, but its roots went back to the Xanadu project many years before – people wanted huge compensation to go along with wide distribution.
- Gunnar: BlackBerry laughs at Samsung’s Knox security struggles. The fact that Knox does not run on the majority of Samsung devices – much less all Android devices – is a major problem. And it is sad if your leading feature is supposed to be security, but you don’t have enough to sell your product.
- Rich: American businesses are holding credit card security back. You will hear more from us on this soon. Pathetic.
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
This week’s best comment goes to Dwayne Melancon, in response to Firestarter: Mass Media Abuse.
Note from Rich: That’s part of our anti-spam attempts. Not that it seems to stop much spam.
Posted at Thursday 13th February 2014 11:21 pm
By Mike Rothman
In an advanced endpoint and server protection consolidation play, Bit9 and Carbon Black announced a merger this morning. Simultaneously, the combined company raised another $38 million in investment capital to fund the integration, pay the bankers, and accelerate their combined product evolution. Given all the excitement over anything either advanced or cyber, this deal makes a lot of sense as Bit9 looks to fill in some holes in its product line, and Carbon Black gains a much broader distribution engine.
But let’s back up a bit. As we have been documenting in our Advanced Endpoint and Server Protection series, threat management has evolved to require assessment, prevention, detection, investigation, and remediation. Bit9’s heritage is in prevention, but they have been building out a much broader platform, including detection and early investigation capabilities, over the past 18 months. But pulling detailed telemetry from endpoints and servers is difficult, so they had a few more years of work to build out and mature their offering. Integrating Carbon Black’s technology gives them a large jump ahead, toward a much broader product offering for dealing with advanced malware.
Carbon Black was a small company, and despite impressive technology they were racing against the clock. With FireEye’s acquisition of Mandiant, endpoint forensic and investigation technology is becoming much more visible in enterprise accounts as FireEye’s sales machine pushes the new toy into existing customers. Without a means to really get into that market, Carbon Black risked losing ground and drowning in the wake of the FireEye juggernaut. Combined with Bit9, at least they have a field presence and a bunch of channel relationships to leverage. So we expect them to do exactly that.
Speaking of FireEye, the minute they decided to buy Mandiant, the die was cast on the strategic nature of their Bit9 partnership. As in, it instantly became not so strategic. Not that the technology overlapped extensively, but clearly FireEye was going to go its own way in terms of endpoint and server protection. So Bit9 made a shrewd move, taking out one of the main competitors to the MIR (now FireEye HX) product. With the CB technology Bit9 can tell a bigger, broader story than FireEye about prevention and detection on devices for a while.
We also like the approach of bundling both the Bit9 and Carbon Black technologies for one price per protected endpoint or server. This way they remove any disincentive to protect devices across their entire lifecycle. They may be leaving some money on the table, but all their competitors require multiple products (with multiple license fees) to provide comparably broad protection. Bundling makes it much easier to tell a differentiated story.
We got one question about whether Bit9 is now positioned to go after the big endpoint protection market. Many security companies have dancing fairies in their eyes, thinking of the multiple billions companies spend on endpoint protection that doesn’t work. Few outfits have been able to break the inertia of the big EPP vendors, to build a business on alternative technology. But it will happen at some point. Bit9 now has most of the pieces and could OEM the others pretty cheaply, because it’s not like an AV signature engine or FDE product is novel today. It is too early to tell whether they will go down that path – to be candid they have a lot of runway to sell protection for critical devices, and follow that with detection/investigation capabilities across the enterprise.
In a nutshell we are positive on this deal. Of course there are always pesky details to true technical integration and building a consistent and integrated user experience. But Bit9 + CB has a bunch of the pieces we believe are central to advanced endpoint and server protection. Given FireEye’s momentum, it is just a matter of time before one of the bigger network players takes Bit9 out to broaden their own protection to embrace endpoints and servers.
Posted at Thursday 13th February 2014 3:55 pm
By Mike Rothman
As we begin deeper dives into our respective coverage areas, we will start with network security. We have been tracking the next generation (NG) evolution for 5 years, during which time it has fundamentally changed the meaning of the perimeter – as we will discuss below. Those who moved quickly to embrace NG have established leadership positions, at the expense of those that didn’t. Players who were leaders 5 short years ago have become non-existent, and there is a new generation of folks with innovative network security approaches to handle advanced attacks. After many years of stagnation, network security has come back with a vengeance.
Back to Big Swinging (St)icks
The battle for the perimeter is raging right now in network security land. In one corner you have the incumbent firewall players, who believe that because the future of network security has been anointed ‘NGFW’ by those guys in Stamford, it is their manifest destiny to subsume every other device in the perimeter. Of course the incumbent IPS folks have a bit to say about that, and are happy to talk about how NGFW devices keel over when you turn on IPS rules and SSL decryption.
So we come back to the age-old battle when you descend into the muck of the network. Whose thing is bigger? Differentiation on the network security front has gone from size of the application library in 2012, to migrating from legacy port/protocol policies in 2013, to who has the biggest and fastest gear in 2014. As they work to substantiate their claims, we see a bunch of new entrants in the security testing business. This is a good thing – we still don’t understand how to read NSS Labs’ value map.
Besides the size of the equipment, there is another more impactful differentiation point for NGXX boxes: network-based malware detection (NBMD). All the network security leaders claim to detect malware on the box, and then sling mud about where analysis occurs. Some run analysis on the box (or more often, set of boxes) while others run in the cloud – and yes, they are religious about it. So if you want to troll a network security vendor, tell them their approach is wrong.
You will also hear the NGXX folks who continue to espouse consolidation, but not in a UTM-like way because UTM is so 2003. But in a much cooler and shinier NGXX way. No, there is no difference – but don’t tell the marketeers that. They make their money ensuring things are sufficiently shiny on the RSAC show floor.
More Bumps (in the Wire)
Speaking of network-based malware detection (NBMD), that market continues to be red hot. Almost every organization we speak to either has or is testing one. Or they are pumping some threat intelligence into network packet capture devices to look for callbacks. Either way, enterprises have gotten religion about looking for malware on the way in – before it wreaks havoc.
One area where they continue to dawdle, though, is putting devices inline. Hold up a file for a microsecond, and employees start squealing like stuck pigs. The players in this market who offer this capability as a standalone find most of their devices deployed out-of-band in monitor mode. With the integration of NBMD into broader NG network security platforms, the capability is deployed inline because the box is inherently inline.
This puts standalone devices at a competitive disadvantage, and likely means there won’t be any standalone players for much longer. By offering capabilities that must be inline (like IPS), vendors like FireEye will force the issue and get their boxes deployed inline. Problem solved, right? Of course going inline requires a bunch of pesky features like fail open, hot standby, load balancing, and redundant hardware. And don’t forget the flak jacket when a device keels over and takes down a Fortune 10 company’s call center.
ET Phone Home
Another big theme you will see at this year’s RSA is the attack of Threat Intelligence (TI). You know, kind of like when ET showed up all those years ago, got lost, and figured out how to send a network ping zillions of light years with a Fisher Price toy. We are actually excited about how TI offerings are developing – with more data on things like callbacks, IP reputation, attack patterns, and all sorts of other cool indicators of badness. Even better, there is a specific drive to integrate this data more seamlessly into security monitoring and eventually update blocking rules on network security devices in an automated fashion.
Of course automatic blocking tends to scare the crap out of security practitioners. Mostly because they saw Terminator too many times. But given the disruption of cloud computing and this whole virtualization thing, security folks will get much more comfortable with having a machine tune their rules, because it’s going to happen fast. There is no alternative – carbon-based units just can’t keep up.
Though we all know how that story featuring Skynet turned out, so there will be a clear focus on ensuring false positives are minimized, probably to the point of loosening up the blocking rules just to make sure. And that’s fine – the last thing you want is a T1000 showing up to tell you that sessions you knocked down caused a missed quarter.
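The confidence gating described above – loosening blocking rules so only high-confidence indicators trigger automated blocks, while everything else goes to an analyst – can be sketched roughly as follows. All the names here (Indicator, BLOCK_THRESHOLD, triage) are hypothetical, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str        # e.g. an IP or domain seen in callbacks
    kind: str         # "ip", "domain", ...
    confidence: int   # 0-100, as reported by the TI feed

BLOCK_THRESHOLD = 80  # raise this to loosen auto-blocking and cut false positives

def triage(indicators):
    """Split feed entries into auto-block rules and analyst alerts."""
    block, alert = [], []
    for ind in indicators:
        (block if ind.confidence >= BLOCK_THRESHOLD else alert).append(ind)
    return block, alert

feed = [
    Indicator("198.51.100.7", "ip", 95),
    Indicator("example-c2.invalid", "domain", 60),
]
block, alert = triage(feed)
print([i.value for i in block])  # → ['198.51.100.7']
```

In practice the “block” bucket would be pushed to network security devices as updated rules, while the low-confidence bucket only feeds monitoring – which is exactly the compromise that keeps the T1000 out of your quarterly numbers.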
Network and Endpoints: BFF
When it comes to advanced malware, the network and the endpoints are not mutually exclusive. In fact over the past year we have seen integration between endpoint folks like Bit9 and network-based malware detection players such as FireEye and Palo Alto Networks. This also underlies the malware defense stories coming from Sourcefire (now Cisco) and McAfee, and pushed the FireEye/Mandiant acquisition announced in January. You can bet the Mandiant folks were drinking some high-end champagne as they welcomed 2014.
There is method to the madness, because network folks need visibility on endpoints. These network detection devices are going to miss at some point, both due to new attack tactics (those notorious 0-days) and devices that escape the comfy confines of the corporate network and perimeter defenses. It’s hard to keep track of those pesky laptops and mobile devices. If you can’t catch everything on the way in, you had better be able to figure out what happened on the devices and determine if that thing you missed caused a mess – quickly.
So what does it mean? You will likely see a bunch of kumbaya on the show floor – these enemies are now friends. Best friends, at that.
Clouds on the Horizon
As we wrote in the key themes, cloud everything remains a big driver of security stuff. And yes, it’s boring. But the network security folks have been largely left out of the cloudwashing for the past few years, and this year they will catch up. We will cover that in depth in our cloud security deep dive, but for now suffice it to say all the network security vendors continue to roll their stuff into VMs and AMIs that can run in public and private clouds. So they are ready to solve the cloud computing security problem. As usual, incumbents continue to solve yesterday’s problem tomorrow.
This isn’t all bad – just understand the potential performance impact of having to route all your traffic through a virtual network security device choke point to enforce policies. But all those issues go away as Software Defined Networks (SDNs) provide much more flexibility to route traffic as you need, and offer bigger faster networks. SDNs do promise to change a lot, but be wary of the double-edged sword – now your admins (or anyone who hacks them) can press a button and take your entire security layer out of the traffic flow.
Posted at Thursday 13th February 2014 11:00 am
By Adrian Lane
Security Information and Event Management (SIEM) systems create a lot of controversy among security folks – they are a pain, but the technology is instrumental for security, compliance, and operations management. The problem is – given the rapid evolution of SIEM/Log Management over the past 4-5 years – that product obsolescence is a genuine issue. The problems caused by products that have failed to keep pace with technical evolution and customer requirements cannot be trivialized. This pain becomes more acute when a SIEM fails to collect essential information during an incident – and even worse when it completely fails to detect a threat. Customers spend significant resources (both time and money) caring for and feeding their SIEM. If they don’t feel the value is commensurate with their investment they will move on – searching for better, easier, and faster products. It is only realistic for these customers to start questioning whether their incumbent offerings make sense moving forward.
We are happy to announce the launch of our latest research paper: Security Management 2.5. We discuss changing customer demands, and how vendors are altering their platforms to address them. We then provide a detailed process to help determine whether you need to swap providers, and if so how.
We would like to thank IBM and McAfee for licensing this research. Support from the community enables us to bring you our Totally Transparent Research free of charge, so we are happy IBM and McAfee chose to license this report. You can get the full paper: Security Management 2.5: Replacing Your SIEM Yet?
–Adrian Lane
Posted at Thursday 13th February 2014 10:00 am
We have covered the key themes we expect to see at the RSA Conference, so now we will cover a theme or two you probably won’t see at the show (or at least not enough of), but really should. The first is this DevOps thing guys like Gene Kim are pushing. It may not be obvious yet, but DevOps promises to upend everything you know about building and launching applications, and make a fundamental mark on security. Or something I like to call “SecOps”.
DevOps, Cloud, and the Death of Traditional IT
Recently in one of my cloud security classes I had a developer in attendance from one of those brand-name consumer properties that all of you, and your families, probably use. When he writes a code update he checks it in and marks it for production; then a string of automated tools and handoffs runs it through test suites and security checks, and eventually deploys it onto their infrastructure/platform automatically. The infrastructure itself adjusts to client demand (scaling up and down), and the concept of an admin accessing a production server is an anachronism.
At the latest Amazon Web Services conference, Adobe (I believe the speaker was on the Creative Cloud team) talked about how they deploy their entire application stack using a series of AWS templates. They don’t patch or upgrade servers, but use templates to provision an entirely new stack, slowly migrate traffic over, and then shut down the old one when they know everything works okay. The developers use these templates to define the very infrastructure they run on, then deploy applications on top of it.
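The “provision a new stack, migrate traffic, retire the old one” pattern described above can be sketched as follows. Every function name here (provision_stack, shift_traffic, health_ok, teardown) is a hypothetical stand-in for real orchestration and load-balancer calls – e.g. CloudFormation plus an ELB API – not an actual SDK:

```python
# Hedged sketch of template-driven immutable deployment (blue/green style).
# The orchestration callbacks are passed in, so the flow itself stays
# independent of any particular cloud provider's API.
def deploy(template, current_stack, provision_stack, shift_traffic,
           health_ok, teardown):
    new_stack = provision_stack(template)      # whole stack built from a template
    for pct in (10, 50, 100):                  # migrate traffic over gradually
        shift_traffic(new_stack, pct)
        if not health_ok(new_stack):
            shift_traffic(current_stack, 100)  # roll back: old stack untouched
            teardown(new_stack)
            return current_stack
    teardown(current_stack)                    # retire the old stack
    return new_stack
```

Note that nothing in this flow patches or upgrades a running server – a failed deployment just means the new stack is discarded while the old one keeps serving traffic.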
Microsoft Office? In the cloud. Your CRM tool? In the cloud. HR? Cloud. File servers? Cloud. Collaboration? Cloud. Email? Cloud. Messaging? Get the picture? Organizations can move almost all (and sometimes all) their IT operations onto cloud-based services.
DevOps is fundamentally transforming IT operations. It has its flaws, but if implemented well it offers clear advantages for agility, resiliency, and operations. At the same time, cloud services are replacing many traditional IT functions. This powerful combination has significant security implications. Currently many security pros are completely excluded from these projects, as DevOps and cloud providers take over the most important security functions.
Only a handful of security vendors are operating in this new model, and you will see very few sessions address it. But make no mistake – DevOps and the Death of IT will show up as a key theme within the next couple years, following the same hype cycle as everything else. But like the cloud these trends are real and here to stay, and have an opportunity to become the dominant IT model in the future.
Posted at Thursday 13th February 2014 7:18 am
There is no stopping the train now that it’s rolling. Here is the final key theme that we expect to see at the show, and yes it’s all about the cloud. And yes, I managed to work a Jimmy Buffett lyric into the piece. Rich 1, Internet 0.
Cloud Everything. Again. We’re Bored Now.
The cloud first appeared in this illustrious guide a mere three or four years ago. The first year it was all hype – there were no products, few vendors realized that cloud computing had nothing at all to do with NOAA, and plenty of security pros thought they could just block the cloud at the firewall. The following year was all cloud washing, as booths rebranded themselves with little more than sticky notes saying “We Heart Cloud,” but again, almost nobody did more than wrap a custom-hardware-accelerated platform onto a commodity hypervisor. But over the last year or so we saw glimmers of hope: a few real (okay, virtual) products, cloud-curious security pros starting to gain a little experience, and more honest-to-goodness cloud native products. (Apologies to the half-dozen cloud native vendors who have been around for more than a few years – don’t worry, we know who you are.)
We honestly hoped to drop the cloud from our key themes, but this is one trend with legs. More accurately, cloud computing is progressing nicely through the adoption cycle, deep into the early mainstream. The problem is that many vendors recognize the cloud will affect their business, but don’t yet understand exactly how, and find themselves more in tactical response mode. They have products, but they are mostly adaptations of existing tools rather than the ground-up rebuilds that will be required. There are more cloud native tools on the market now, but the number is still relatively small, and we will still see massive cloud washing on the show floor. While we’re at it, we may as well lump in Software Defined Networking, though ‘SDN-washing’ doesn’t really roll off the tongue.
Two areas you will see hyped on the show floor which provide real benefits are Security as a Service (SECaaS – say it loud and love it) and threat intelligence. Vendors may be slow to rearchitect their products to protect native cloud infrastructure and workloads, but they are doing a good job of pushing their own products into the cloud, and collective intelligence breaks down some of the information sharing walls that have held security back for decades.
But here is all you need to know about what you will see across the show – big financial institutions are all kicking around various cloud projects. The sharks smell the money, unlike in previous years when it was about looking good for the press and early adopters. In the immortal words of the great sage Jimmy Buffett, “Can you feel them circling honey, can you feel them schooling around? You got fins to the left, fins to the right, and you’re the only game in town.”
Posted at Wednesday 12th February 2014 1:11 pm
By Mike Rothman
As we return to our Advanced Endpoint and Server Protection series, we are back working our way through the reimagined threat management process. After discussing assessment you know what you have and what risk those devices present to the organization. Now you can design a control set to prevent compromise from happening in the first place.
Prevention: Next you try to stop an attack from being successful. This is where most of the effort in security has gone for the past decade, with mixed (okay, lousy) results. A number of new tactics and techniques are modestly increasing effectiveness, but the simple fact is that you cannot prevent every attack. It has become a question of reducing your attack surface as much as practical. If you can stop the simplistic attacks you can focus on more advanced ones.
Obviously there are many layers you can and should bring to bear to protect endpoints and servers. Our PCI-centric brethren call these compensating controls. But we aren’t talking about network or application stuff in this series, so we will restrict our discussion to technologies and tactics focused on preventing compromise on endpoints and servers themselves. As we described in our 2014 Endpoint Security Buyer’s Guide, there are a number of alternative approaches to protecting endpoints and servers that need to be discussed, compared, and contrasted.
Traditional File Signatures
You cannot really discuss endpoint prevention without at least mentioning signatures. You remember those, right? They are all about maintaining a huge blacklist of known malicious files to keep them from executing. The free AV products on the market now typically use only this approach, but the broader endpoint protection suites have been supplementing traditional signature engines with additional heuristics and cloud-based file reputation for years.
To expand a bit on file reputation, AV vendors realized a long time ago that it wasn’t efficient to download hashes for every single known malware file to every single protected endpoint. So they took a cloud-based approach which involves keeping a small subset of frequently-seen malware signatures on each device, and if the file cannot be found locally the endpoint agent consults the cloud for a determination on the file. If the file isn’t known by the cloud either it may be uploaded for analysis. This is similar to how cloud-based network-based malware detection works.
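The local-cache-then-cloud lookup flow described above can be sketched in a few lines. The cache contents and the cloud_lookup callback here are hypothetical stand-ins for a vendor’s reputation service, not a real agent’s internals:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Small subset of frequently-seen malware signatures kept on each device
LOCAL_CACHE = {sha256(b"known-bad-sample"): "malicious"}

def file_verdict(data: bytes, cloud_lookup) -> str:
    digest = sha256(data)
    if digest in LOCAL_CACHE:          # local hit: no network round trip needed
        return LOCAL_CACHE[digest]
    verdict = cloud_lookup(digest)     # cache miss: consult the cloud service
    if verdict == "unknown":
        return "upload_for_analysis"   # unknown file may be uploaded for analysis
    return verdict

print(file_verdict(b"known-bad-sample", lambda d: "unknown"))  # → malicious
```

The same three-tier shape (local cache, cloud verdict, upload-and-analyze) is what cloud-assisted network-based malware detection uses as well.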
But detection of advanced attacks is still problematic if detection is restricted to matching files at runtime. You have no chance to detect zero-day or polymorphic malware attacks, which are both very common. So the focus has moved to other approaches.
Advanced Heuristics
You cannot rely on matching what a file looks like, so you need to pay much more attention to what it does. This is the concept behind the advanced heuristics used to detect malware in recent years. The issue with early heuristics was having enough context to know whether an executable was taking a legitimate action. Malicious actions were defined generically for each device based on operating system characteristics, so false positives (blocking a legitimate action) and false negatives (failing to block an attack) were both common: a lose/lose scenario.
Heuristics have evolved to also recognize normal application behavior. This advance has dramatically improved accuracy because rules are built and maintained at a specific application-level. This requires understanding all the legitimate functions within a constrained universe of frequently targeted applications, and developing a detailed profile of each covered application. Any unapproved application action is blocked. Vendors basically build a positive security model for each application – which is a tremendous amount of work.
That means you won’t see every application profiled with true advanced heuristics, but that would be overkill. As long as you can protect the “big 7” applications targeted most often by attackers (browsers, Java, Adobe Reader, Word, Excel, PowerPoint, and Outlook), you have dramatically reduced the attack surface of each endpoint and server.
To use a simple example, there aren’t really any good reasons for a keylogger to capture keystrokes while filling out a form on a banking website. And it is decidedly fishy to take a screen grab of a form with PII on it at the time of submission. These activities would have been missed previously – both screen grabs and reading keyboard input are legitimate operating system functions in specific scenarios – but context enables us to recognize these actions as attacks and stop them.
To dig a little deeper let’s list some of the specific types of behavior the advanced heuristics would be looking for:
- Injected threads
- Process creation
- System file/configuration/registry changes
- File system changes
- OS level functions including print screen, network stack changes, key logging, etc.
- Turning off protections
- Account creation and privilege escalation
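A toy illustration of the application-level positive security model described above: each profiled application gets a set of allowed actions, and any unapproved action by a profiled application is blocked. The profiles and action names are invented for illustration – real vendor profiles cover far more behaviors, including the list above:

```python
# Hypothetical per-application behavior profiles (positive security model).
PROFILES = {
    "browser":    {"net_connect", "file_write_cache", "spawn_plugin"},
    "pdf_reader": {"file_read", "render"},
}

def allowed(app: str, action: str) -> bool:
    profile = PROFILES.get(app)
    if profile is None:
        return True           # unprofiled app: these heuristics don't apply
    return action in profile  # profiled app: anything off-profile is blocked

print(allowed("pdf_reader", "spawn_process"))  # → False
```

This is why the approach is so much work for vendors: every legitimate function of every covered application has to live in that profile, or you are back to false positives.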
Vendors’ ongoing research ensures their profiles of authorized activities for protected applications remain current. For more detail on these kinds of advanced heuristics check out our Evolving Endpoint Malware Detection research.
Of course this doesn’t mean attackers won’t continue to target operating system vulnerabilities, applications (including the big 7), or the weakest link in your environment (employees) with social engineering attacks. But advanced heuristics makes a big difference in the efficacy of anti-malware technology for profiled applications.
Application Control
Application control entails a default deny posture on devices. You define a set of authorized executables that can run on a device, and block everything else. This provides true device lockdown – no executables (either malicious or legitimate) can execute without being explicitly authorized. We took a deep dive into application control in a recent series (The Double-Edged Sword & Use Cases and Selection Criteria), so we will just highlight some key aspects.
Candidly, application control has suffered significant perception issues, mostly because early versions of the technology were thrust into a general-purpose use case, where they significantly impacted user experience. If employees think a security control prevents them from doing their jobs, it will not last. But over the past few years application control has found success in a few use cases where devices can and should be totally locked down. That typically means fixed-function devices such as kiosks and ATMs, as well as servers. Devices where a flexible user experience isn’t an issue.
It is possible to deploy application control in a general-purpose context for knowledge workers, but the deployment must provide sufficient flexibility to allow employees to use the applications they need, when they need them. That may mean providing a grace period when users can run new software without waiting for authorization. Or perhaps specifically defining situations where software can run – perhaps for applications from authorized software publishers, or installed by trusted employees. But understand that the more flexibility you provide for who can run what software, the weaker the security model – and the point of application control is to greatly strengthen the model.
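The default-deny policy with the flexibility carve-outs discussed above might look roughly like this sketch. The whitelist, publisher list, and parameter names are all hypothetical, and each carve-out is exactly the kind of flexibility that weakens the model:

```python
# Explicitly authorized executable hashes (the core default-deny whitelist)
WHITELIST = {"a1b2c3-authorized-app-hash"}
# Carve-out: software signed by these publishers may also run
TRUSTED_PUBLISHERS = {"ExampleSoft"}

def may_execute(file_hash: str, publisher: str = None,
                installed_by_admin: bool = False) -> bool:
    if file_hash in WHITELIST:
        return True
    if publisher in TRUSTED_PUBLISHERS:  # flexibility carve-out #1
        return True
    if installed_by_admin:               # flexibility carve-out #2
        return True
    return False                         # default deny: block everything else
```

On a kiosk or server you would delete both carve-outs; on a knowledge worker’s laptop you probably can’t, which is the whole trade-off.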
Isolation
In addition to better profiling malware and looking for indicators of compromise, another growing prevention technique is isolating executables from the rest of the device by running them in a kind of sandbox. The idea is to spin up a walled garden for a limited set of applications (the big 7, for example) to shield the rest of the device from anything bad happening to those applications. A more complicated approach involves isolating every process running on the device from other processes, which enables much finer granularity over which activities are allowed on the endpoint or server.
In the event an application is compromised (and detected using advanced heuristics, as described above), the sandbox prevents the application (and whoever has subverted it) from accessing core device features such as the file system and memory, and prevents the attacker from loading additional malware. Isolation technology can take a forensic image of the application to facilitate malware analysis before killing the application and resetting the sandbox.
This approach isn’t actually new. Security-aware individuals have been running virtual machines on endpoints for risky applications for years. These new endpoint protection technologies focus on being transparent – users might not even know they are running applications in isolated environments.
Of course sandboxes are not a panacea. The isolation technology needs base operating system services (network stacks, printer drivers, etc.), so the device may still be vulnerable to attacks on those services despite isolation. The technology doesn’t relieve you from the need to manage device hygiene (patching and configuration), as discussed in our Endpoint Security Buyer’s Guide.
Another issue with isolation is increasingly sophisticated evasion tactics, as attackers have means to recognize their malware is running in an isolated environment and “lie low”. Of course making malware inert is a desired outcome, but that can prevent you from detecting and removing it or stopping its spread. And when isolating server devices (either by running them in a private cloud or using isolation technologies), many of the tactics to defeat network-based sandboxes come into play. These include requiring human interaction (such as dialog boxes), malware quiet periods (waiting out the sandbox), process hiding (to evade heuristic detection), and version/environmental checks (to only attack vulnerable applications or operating systems).
Keep in mind that isolation technologies can tax the underlying device, so without a fairly recent high-powered device these prevention products can adversely impact performance.
As with traditional endpoint protection suites, these new offerings require presence on each protected desktop or server. Yes, you need agents everywhere, and yes, they basically act as benign rootkits on each device. That is necessary because much of today’s malware interacts at the kernel level, so prevention needs to run similarly deep to keep up. The good news is that technologies to deploy and manage agents (even hundreds of thousands) are robust and mature.
The bad news is that most of these advanced endpoint and server prevention technologies do not include traditional signature engines. And yes, earlier we did discuss the ineffectiveness of those older techniques, but there is one significant reason signatures are still in play: compliance. A strict assessor might interpret the requirement for anti-malware on all in-scope devices to require signature-based detection. Until there is a precedent for assessors to accept advanced heuristics and isolation technologies as sufficient to satisfy the requirement for anti-malware defenses, you may also need a traditional agent on each device.
A Note on ‘Effectiveness’
As you start evaluating these advanced prevention offerings, don’t be surprised to get a bunch of inconsistent data on the effectiveness of specific approaches. You are also likely to encounter many well-spoken evangelists spouting monumental amounts of hyperbole and religion in favor of their particular approach – whatever it may be – at the expense of all other options. This happens in every security market undergoing rapid innovation, as companies try to establish momentum for their approach and products.
And a lab test upholding one product or approach over another isn’t much consolation when you need to clean up an attack your tools failed to prevent. And those evangelists will be nowhere to be found when a security researcher shows how to evade their shiny technology. We at Securosis try to float above the hyperbole and propaganda, to keep you focused on what’s really important – not 1% alleged effectiveness differences. If products or categories are within a few percent of each other across a variety of tests, we consider that a draw.
But there can be value in comparative tests. If you see an outlier, that warrants investigation and a critical assessment of the test and methodology. Was it skewed toward one category? Was the test commissioned by a vendor or someone else with an agenda? Was real malware, freshly found in the wild, used in the test? All testing methodologies have issues and limitations – don’t base a decision, or even a short list, around a magic chart or a product review/test.
What’s Right for You?
That raises the question of how to choose a preventative technology. It comes down to a few questions:
- What kind of adversaries do you face?
- Which applications are most frequently used?
- How disruptive will employees allow the protection to be?
- What percentage of devices have been replaced in the past year?
With answers to these questions you should be able to implement a set of prevention controls on endpoints and servers that works within your organization’s constraints.
Now your friends at Securosis are going to deliver the hard truth. You cannot block the attacks. Not all of them. That is just harsh reality. You are still locked in an arms race that shows no signs of abating any time soon. It is just a matter of time before the attackers come out with new tactics to defeat even the latest and greatest endpoint and server protection technologies.
The next two aspects of the threat management cycle – detection and investigation – come into play more often than we would like. So our next post will focus on detection and investigation.
Posted at Wednesday 12th February 2014 11:09 am
(0) Comments •
By Adrian Lane and Gal Shpantzer
You didn’t think you would need to wait long for a Snowden reference, did you? Well, you know we Securosis guys like to keep you in suspense. But without further ado, it’s time. Snowden time!
The biggest noisemaker at RSA this year – besides Rothman – will be everyone talking about the NSA revelations. Everyone with a bully pulpit (which is basically everyone) will be yelling about how the NSA is all up in our stuff. Self-aggrandizing security pundits will be preaching about how RSA took a bribe, celebrating their disgust by speaking in the hallways and at opportunistic splinter conferences, instead of at the RSA podia. DLP, eDiscovery, and masking vendors will be touting their solutions to the “insider threat” with Snowden impersonators (as discussed in APT0). Old-school security people will be mumbling quietly in the corners of the Tonga Room, clutching drinks with umbrellas in them, saying “I told you so!”
One group who will be very, very quiet during the show: encryption vendors. They will not be talking about this! Why? Because they really can’t prove their stuff is not compromised, and in the absence of proof, they have already been convicted in the security star chamber. Neither Bruce Schneier nor Ron Rivest will be pulling proofs of non-tampering out of magic math hats. And even if they could, the security industry machine isn’t interested. There is too much FUD to throw. What’s worse is that encryption vendors almost universally look to NIST to validate the efficacy of their solutions – now that NIST is widely regarded as a pawn of the NSA, who can provide assurance? I feel sorry for the encryption guys – it will be a witch hunt!
The real takeaway here is that IT is – for the first time – questioning the foundational technologies data security has been built upon. And it has been a long time coming! Once we get past the Snowden and NSA hype, the industry won’t throw the baby out with the bathwater, but will continue to use encryption – now with contingency plans, just in case. Smart vendors should be telling customers how to adjust or swap algorithms if and when parts of the crypto ecosystem become suspect. These organizations should also be applying disaster recovery techniques to encryption solutions, just in case.
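The “adjust or swap algorithms” advice above is usually called crypto agility, and the core of it is simple: tag every ciphertext with an identifier for the algorithm that produced it, so a suspect algorithm can be retired without leaving old data unreadable. This sketch is entirely ours — the blob format, names, and the toy XOR “cipher” (which is NOT real cryptography) are invented for illustration.

```python
import base64

def _xor(data: bytes, key: int) -> bytes:
    """Toy stand-in for a real cipher; XOR is its own inverse."""
    return bytes(b ^ key for b in data)

# Registry of (encrypt, decrypt) pairs. Swapping the default, or retiring
# a suspect algorithm, means editing this table -- not the calling code.
CIPHERS = {"xor-demo-v1": (_xor, _xor)}
DEFAULT_ALG = "xor-demo-v1"

def encrypt(data: bytes, key: int, alg: str = DEFAULT_ALG) -> str:
    enc, _ = CIPHERS[alg]
    # Prefix the algorithm id so decrypt() knows how the blob was made,
    # even after the default algorithm changes.
    return alg + ":" + base64.b64encode(enc(data, key)).decode()

def decrypt(blob: str, key: int) -> bytes:
    alg, b64 = blob.split(":", 1)
    _, dec = CIPHERS[alg]
    return dec(base64.b64decode(b64), key)
```

The disaster-recovery angle falls out of the same design: if an algorithm becomes suspect, you enumerate stored blobs by their tag, re-encrypt under a new entry in the registry, and drop the old one.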
–Adrian Lane and Gal Shpantzer
Posted at Wednesday 12th February 2014 7:00 am
By Mike Rothman
Sitting at my feet is the brand spanking new Kindle I ordered for XX1. It arrived before the snow and ice storm hit the ATL, so we got pretty lucky. She’s a voracious reader and it has become inefficient (and an ecological crime) to continue buying her paper books. She has probably read the Harry Potter series 5 or 6 times, and is constantly giving me new lists of books to buy. She has books everywhere. She reads on the bus. She gets in trouble because sometimes she reads in class. It’s pretty entertaining that the Boss and I need to try to discipline her, when her biggest transgression is reading in class. I kind of want to tell the teacher that if they didn’t suck at keeping the kid’s attention, it wouldn’t be a problem. But I don’t.
I have used the Kindle app on my iOS devices for a couple years. I liked it but my older iPads are kind of heavy, so it wasn’t a very comfortable experience to prop on my chest and read. I also had an issue checking email and the Tweeter late at night. So I bought a Kindle to just read. And I do. Since I got it my reading has increased significantly. Which I think is a good thing.
So I figured it was time to get XX1 a Kindle too. The Boss was a bit resistant, mostly because she likes the tactile feeling of reading a book and figured XX1 should too. Once we got past that resistance, I loaded up the first Divergent book onto my Kindle and let her take it for a test drive. I showed her two features: first, the ability to select a word and see its dictionary definition. That’s pretty awesome – how many kids do you know who take the time to write down words they don’t know and look them up later? I also showed her how to highlight a passage. She was sold.
A day and a half later, she was ready for book 2 in the Divergent series. Suffice it to say, I loaded up book 3 as well, preemptively. Of all the vices my kids could have, reading is probably okay. Before I go to bed tonight I will set up her new device and load up a bunch of books I have which I think she’ll like. We will be snowed in for at least a day, so they will give her something to do. The over/under in Vegas is that she reads two books over the next couple days. I’m taking the over.
What’s really cool is that in a few years, she will hardly remember carrying a book around. That will seem so 2005. Just like it seems like a lifetime ago that I loaded up 40-45 CDs to go on a road trip in college (or cases of cassette tapes when I was in high school). Now I carry enough music on my phone to drive for about 3 weeks, and never hear the same song twice.
It’s the future, and it’s pretty cool.
Photo credit: “Stack of Books” originally uploaded by Indi Samarajiva
Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and, well, hang out. We talk a bit about security as well. We try to keep these under 15 minutes, and usually fail.
2014 RSA Conference Guide
We’re at it again. For the fifth year we are putting together a comprehensive guide to what you need to know if you will be in San Francisco for the RSA Conference at the end of February. We will also be recording a special Firestarter video next week, because you obviously cannot get enough of our mugs.
And don’t forget to register for the Disaster Recovery Breakfast Thursday, 8-11 at Jillian’s.
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
The Future of Information Security
Leveraging Threat Intelligence in Security Monitoring
Advanced Endpoint and Server Protection
Newly Published Papers
Incite 4 U
Hot or Not: We spend a ton of time working with security startups (and lately cloud startups looking for security help). So we will be the first to admit we don’t know all of them, and it can sometimes be hard to evaluate broad market perception – our instincts and research are good but we don’t do quantitative market surveys. Justin Somaini just published his personal survey results on security startups and issues and it’s pretty interesting. (Full disclosure: Justin is Chief Trust Officer at Box, who is licensing a paper of ours). Justin got 500 responses from people rating the perceived value of every security startup he could find, and also teased out a bit on perceived top security issues. I’m sure there is survey bias, but if you want a sense of which startups have the best recognition this is a great start, and Justin published all the results in the open, just the way we like it. (Note to Mike: I call dibs on the new prospect list.). – RM
Attacks are not evenly distributed: You have to love Rob Graham. Words matter to Rob. And when he sees words misused he usually pens a very detailed diatribe on the Errata blog. This time he takes Glenn Greenwald and NBC News to task for incorrectly calling an attack DDoS. Rob’s point is that nation-states would not likely launch a DDoS attack because it involves lots of compromised devices taking down networks. Nation-states aren’t likely to use compromised devices when they have more efficient means of knocking things down. The whole rant comes back to Rob’s general expectation that professional reporters should get it right, rather than simply parroting hacktivists without even trying to understand what they are repeating. The hacktivists get a pass because they “are largely unskilled teenagers with a very narrow range of expression.” Kind of sounds like a lot of adults I know as well… But that’s just me. – MR
Facing the unfamiliar: When I was a programmer there was always a ‘dread’ project: a task I dreaded facing because it was new, tough, and would require significant effort to solve. I would drag my feet, worry about the project, and keep pushing it to the bottom of the stack. More often than not, once I jumped in, not only did the task turn out easier than I thought, but the process of learning made the whole effort exciting and fun! “How do you face a programming task you’ve never done before?” brought this to mind, and I can say without reservation, “Jump in and try it.” If you fail, that’s actually okay – we call that “rapid prototyping” now, and it’s part of the learning process. But I’m betting that more often than not new tasks are not as hard as you think, and more rewarding than you imagine! – AL
Snap, Clinkle, Popped: Peter Hesse makes a good case for why even startups need to worry about security with a story of a stealth-mode payment startup called Clinkle getting pwned recently. Was the breach a death blow? Probably not, but it doesn’t look good for a company trying to get established in the payment space. It highlights a key reality of today’s world: you need to think about security early. Like Day 2, right after you open your bank account and make your first Staples run. You can use the cloud for a bunch of stuff, but ultimately you need a security strategy both for your product (whatever it is) and your company. – MR
Let’s talk about trust: I will be publishing my “Security’s Future” paper next week, and one of the key things I call out is the need for cloud providers to establish trust. We have two great examples of trust failures this week, with both Snapchat (again) and Instagram suffering security malfunctions. With a difference: Snapchat is struggling to manage their security responses, while Instagram (owned by Facebook, BTW) fixed things quickly and paid the discoverer a bug bounty. This is the new normal, folks, and cloud providers need to not only bake in security as best they can, but learn to respond like Facebook/Instagram too – nail issues early and work well with researchers. – RM
Proof of concept companies: Normally we provide a detailed writeup when technology vendors in key coverage areas (e.g., WAF, DAM and cloud) go on acquisition sprees like Imperva did last week when they acquired Incapsula and Skyfence in one fell swoop. But these acquisitions are so closely aligned with Imperva’s vision that there was not much to report: both offer SaaS-based security gateways, monitoring and blocking suspicious behavior – albeit for slightly different use cases. In both cases the firms were funded by Imperva’s founder Shlomo Kramer, and Incapsula licensed Imperva’s technology in exchange for an equity stake. It was as if these two firms were externally incubated by Imperva – an astute move in case things did not work out, in which case they wouldn’t have impacted Imperva’s reputation, and the financial cost would have been minimal. But the concepts worked, so once the models were proven they were rolled up into the Imperva stable without much fuss or the typical worries about technology or cultural integration. In the interest of full disclosure, we have been using Incapsula for a number of years here, after Cloudflare failed to offer some of the security features and performance we wanted, and we have been happy with it. Incapsula isn’t the last word in filtering, but it filters out most cruft. – AL
Posted at Wednesday 12th February 2014 12:00 am