By Mike Rothman
Holy crap, time flies! Especially when you mark years by making the annual pilgrimage to San Francisco for the RSA Conference. Once again we are hosting our RSA Conference Disaster Recovery Breakfast. It has been six frickin’ years! That’s hard to believe but reinforces that we are not spring chickens anymore.
We are grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the glitzy show floor and club scene that dominates the first couple days of the conference. By Thursday you will probably be a disaster like us and ready to kick back, have some conversations at a normal decibel level, and grab a nice breakfast.
And with the continued support of MSLGROUP and Kulesa Faul, we are happy to provide an oasis in a morass of hyperbole, booth babes, and tchotchke hunters.
As always, the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and go as you please. We will have food, beverages, and assorted recovery items to ease your day (non-prescription only). Yes, the bar will be open, because Mike doesn’t like to drink alone.
Remember what the DR Breakfast is all about. No marketing, no spin, just a quiet place to relax and have mellow conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans we are pretty confident you will enjoy the DRB as much as we do.
See you there.
To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.
Posted at Thursday 23rd January 2014 6:36 am
By Mike Rothman
I was on the phone last week with Jen Minella, preparing for a podcast on our Neuro-Hacking talk at this year’s RSA Conference, when she asked what my story is. We had never really discussed how we each came to start mindfulness practices. So we shared our stories, and then I realized that given everything else I share on the Incite, I should tell it here as well.
Simply put, I was angry and needed to change. Back in 2006 I decided I wanted to live past 50, so I started taking better care of myself physically. But being more physically fit is only half the equation. I needed to find a way to deal with the stress in my life. I had 3 young children, was starting an independent research boutique, and my wife needed me to help around the house.
In hindsight I call that period my Atlas Phase. I took the weight of the world on my shoulders, and many days it was hard to bear. My responsibilities were crushing. So my anger frequently got the best of me. I went for an introductory session with a life coach midway through 2007. After a short discussion she asked a poignant question. She wondered if my kids were scared of me. That one question forced me to look in the mirror and realize who I really was. I had to acknowledge they were scared at times. That was the catalyst I needed. I wasn’t going to be a lunatic father. I needed to change. The coach suggested meditation as a way to start becoming more aware of my feelings, and to even out the peaks and valleys of my emotions.
A few weeks later I went to visit my Dad. He had been fighting a pretty serious illness using unconventional tactics for a few years at that point. I mentioned meditation to him and he jumped out of his chair and disappeared for a few minutes. He came back with 8 Minute Meditation, and then described how meditation was a key part of his plan to get healthy. He told me to try it. It was only 8 minutes. And it was the beginning of a life-long journey.
These practices have had a profound impact on my life. Six years later it’s pretty rare for me to get angry. I am human and do get annoyed and frustrated. But it doesn’t turn into true anger. Or I guess I don’t let it become anger. When I do get angry it’s very unsettling, but I’m very aware of it now and it doesn’t last long, which I know my wife and kids appreciate. I do too.
Everyone has a different story. Everyone has a different approach to dealing with things. There is no right or wrong. I’ll continue to describe my approach and detail the little victories and the small setbacks. Mostly because this is a weekly journal I use to leave myself breadcrumbs on my journey, so I remember where I have been and how far I have come. And maybe some of you appreciate it as well.
Photo credit: “Scared Pandas” originally uploaded by Brian Bennett
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Reducing Attack Surface with Application Control
Security Management 2.5: You Buy a New SIEM Yet?
Advanced Endpoint and Server Protection
Newly Published Papers
Incite 4 U
SGO: Standard Government Obscurity: The Target hack was pretty bad, and may be only the tip of the iceberg. Late last week the government released a report with more details of the attack so companies could protect themselves. Er, sort of. The report by iSIGHT Partners was only released to select retailers. As usual, the government isn’t talking much, so iSIGHT went and released the report on their own. A CNN article states, “The U.S. Department of Homeland Security did not make the government’s report public and provided little on its contents. iSIGHT Partners provided CNNMoney a copy of its findings.” Typical. If I were a retailer I would keep reading Brian Krebs to learn what’s going on. The feds are focused on catching the bad guys – you are on your own to stop them until the cuffs go on. – RM
Unrealistic expectations are on YOU! Good post on the Tripwire blog about dealing with unrealistic security expectations. Especially because it seems very close to the approach I have advocated via the Pragmatic CSO for years. I like going after a quick win and making sure to prioritize activities. But my point with the title is that if senior management has unrealistic expectations, it’s because your communications strategies are not effective. You can blame them all you want for being unreasonable, but if they have been in the loop as you built the program, enlisted support, and started executing on initiatives, nothing should be a surprise to them. – MR
Other people’s stuff: The recent Threatpost article ‘Starbucks App Stores User Information, Passwords in Clear Text’ is a bit misleading, as it doesn’t mention that the leaky bit of code is actually in the bundled Crashlytics utility. The real lesson here is not the potential harm of passwords in log files – a real problem, but one with a low probability of exploitation. It’s that applications built on third party libraries and APIs inherit their level of security (duh!). It is a mistake to abdicate security by assuming the authors of every utility do security right. Make no mistake – we are in the age of APIs and open source leverage. It makes a lot of sense for developers to leverage whatever utilities they can to cut development time or reduce the quantity of code they need to produce. We see a compelling new use case for third party code validation services for apps, and application code security, because development teams are sprinting too fast to vet other people’s stuff. – AL
The PCI protection dance begins: It seems that with every high-profile breach the PCI Security Standards Council goes out of their way to point out how the compromised retailer was clearly not compliant or they couldn’t have been breached. It appears this time with Target will be no different. Dark Reading works through some speculation about what Target did or didn’t have, and how the attackers could have monetized the stolen info. But then you have a PCI forensicator talking about how Target couldn’t have been PA-DSS compliant because those kinds of attacks are specifically protected against in the standard. Uh huh. I’m sure the assessment went through the code line by line, and it seems the malware attacked the underlying POS operating system. But whatever. The machine will rise up to protect the machine. Just the way things go… – MR
SOS (Same old sh##): As the Target breach drags on and it becomes clear that more retailers have been hacked, the Payment Card Industry (PCI) Data Security Standard (DSS) revision 3.0 will undergo major scrutiny. Does the standard go far enough? Is it too prescriptive? Will the PCI Council embrace more detection and forensic requirements? Should merchants focus on additional physical and electronic security controls around PoS? These conversations are fundamentally unimportant, and a red herring for payment card data security. Either the card brands will mandate EMV or point-to-point encryption – both of which somewhat disintermediate merchants from the financial details of transactions – or we will get a few more years of the status quo. And the status quo isn’t working very well right now. Don’t look for changes to PCI-DSS to alter the story one bit – hackers, attackers, and fraudsters have too many ways to game the current US system. Without a fundamental shift in the way payment card security is handled, we will get to continue the breach parade of the last decade. – AL
Pwn me once, shame on me, pwn me twice… Earlier this month some Microsoft blog and Twitter accounts were hacked by the Syrian Electronic Army. Not good for a company that is now known for being darn good with security. Shockingly enough, it appears the attack can be traced back to standard old phishing. Okay, they fixed it and everyone knows these things happen. On the bad side, it happened again this week, with the Office blog. These aren’t Microsoft’s core services, but a string of attacks like this can directly degrade trust. While Microsoft surely (hopefully?) has better back-end controls for the serious stuff, it is hard to maintain a good public perception if you experience multiple, ongoing, public compromises. I feel for them – they have one of the largest attack surfaces in the world – but hopefully they will get all hands on deck before things get worse. – RM
(For vendors) is awareness the problem? When I first saw the title Our biggest problem is awareness on Seth Godin’s blog, I immediately thought of mindfulness. So you see where my head is at. But Seth’s point is that a lot of sales folks vilify marketing because they don’t think there is enough awareness of the company, products, etc. Which really means they want inbound calls from customers ready to write checks. Seth points out that the product and customer experience need to speak for themselves. And when that happens awareness isn’t a problem. He’s right. – MR
Posted at Wednesday 22nd January 2014 12:00 am
By Mike Rothman
Back in November I learned I would be giving a talk on Neuro-Hacking at RSA with Jennifer Minella. We will be discussing how mindfulness practices can favorably impact the way you view things, basically allowing you to hack your brain. But I am pretty sure you can’t sell my synapses on an Eastern European carder forum.
Over the last few months Jen and I have been doing a lot of research to substantiate the personal experience we have both had with mindfulness practices. We know security folks tend to be tough customers and reasonably skeptical about pretty much everything – unless there is data to back up any position. The good news is that there is plenty of data about how mindfulness can impact stress, job performance, and work/life balance. And big companies are jumping on board – Aetna is the latest to provide a series of Evidence-based Mind-Body Stress Management Programs based on mindfulness meditation and yoga.
Meditation and yoga are becoming a big business (yoga pants FTW), so it is logical for big companies to jump on the bandwagon. The difference here, and the reason I don’t believe this is a fad, is the data. That release references a recent study in the Journal of Occupational Health Psychology. That’s got to be legitimate, right?
Participants in the mind-body stress reduction treatment groups (mindfulness and Viniyoga) showed significant improvements in perceived stress with 36 and 33 percent decreases in stress levels respectively, as compared to an 18 percent reduction for the control group as measured with the Perceived Stress Scale. Participants in the mind-body interventions also saw significant improvements in various heart rate measurements, suggesting that their bodies were better able to manage stress.
The focus of our talk is going to be solutions and demystifying some of these practices. It’s not about how security people are grumpy. We all know that. We will focus on how to start and develop a sustainable practice. Mindfulness doesn’t need to be hard or take a long time. In as little as 5-15 minutes a day you can dramatically impact your ability to deal with life. Seriously.
But don’t take our word for it. Show up for the session and draw your own conclusions. We just recorded a podcast for the RSA folks, and I’ll link to it once it’s available, later this week. Jen and I will also be posting more mindfulness stuff on our respective blogs in the lead up to the conference (much to Rich’s chagrin).
Photo credit: “6 Instant Ways To Stress Less And Smile More – Flip Your Perspective” originally uploaded by UrbaneWomenMag
Posted at Monday 20th January 2014 2:12 pm
In this week’s Firestarter, Rich, Mike, and Adrian discuss the latest in the Target revelations and whether over-reliance on antivirus is to blame once again. We aren’t out to blame the victim. We also pick our top prevention strategies for this sort of attack. Ain’t hindsight great?
Posted at Monday 20th January 2014 8:35 am
By Mike Rothman
We have always been fans of making sure applications and infrastructure are ready for prime time before letting them loose on the world. It’s important not to just use basic scanner functions either – your adversaries are unlikely to limit their tactics to things you find in an open source scanner. Security Assurance and Testing enables organizations to limit the unpleasant surprises that happen when launching new stuff or upgrading infrastructure.
Adversaries continue to innovate and improve their tactics at an alarming rate. They have clear missions, typically involving exfiltrating critical information or impacting the availability of your technology resources. They have the patience and resources to achieve their missions by any means necessary. And it’s your job to make sure deployment of new IT resources doesn’t introduce unnecessary risk.
In our Eliminating Surprises with Security Assurance and Testing paper, we talk about the need for a comprehensive process to identify issues – before hackers do it for you. We list a number of critical tactics and programs to test in a consistent and repeatable manner, and finally go through a couple use cases to show how the process would work at both the infrastructure and application levels.
To avoid surprise we suggest a security assurance and testing process to ensure the environment is ready to cope with real traffic and real attacks. This goes well beyond what development organizations typically do to ‘test’ their applications, or ops does to ‘test’ their stacks.
It is also different from a risk assessment or a manual penetration test. Those “point in time” assessments aren’t necessarily comprehensive. The testers may find a bunch of issues, but they will miss some. So remediation decisions are made with incomplete information about the true attack surface of infrastructure and applications.
We would like to thank our friends at Ixia for licensing this content. Without the support of our clients, our open research model wouldn’t be possible.
Direct Download (PDF): Eliminate Surprises with Security Assurance and Testing
Posted at Sunday 19th January 2014 11:12 am
By Adrian Lane
Today I am going to write about tokenization. Four separate people have sent me questions about tokenization in the last week. As a security paranoiac I figured there was some kind of conspiracy or social engineering going on – this whole NSA/Snowden/RSA thingy has me spooked. But after I calmed down and realized these were ‘random’ events, I recognized that the questions are good and relevant to a wider audience, so I will answer a couple of them here on the blog. In no particular order:
- “What is throttling tokenization?” and “How common is the ‘PCI tokenization throttle function’ in tokenization products and services?” I first heard about “throttling tokenization systems” and “rate limiting functions” from the card brands as a secondary security service. As I understand the intention, it is to provide a failsafe – in case a payment gateway is compromised or an attacker gains access to a token service – so someone cannot siphon off the entire token database. My assumption was that this rate monitor/throttle would only be applied to de-tokenization requests or vault inquiries that return cardholder information. Maybe that’s smart, because you get a built-in failsafe to limit information leakage. Part of me thinks this is more misguided guidance, as the rate limiting feature does not appear to respond to any reasonable threat model – de-tokenization requests should be locked down and not available through general APIs in the first place! Perhaps I am not clever enough to come up with a compromise that would warrant such a response, but everything I can think of would (should) be handled in a different manner. Still, as I understand from conversations with people who are building tokenization platforms, the throttling functions are a) DDoS protection, and b) a defense against someone who figures out how to request all the tokens in a database. And is it common? Not so far as I know – I don’t know of any token service or product that builds this in; instead the function is provided by other fraud and threat analytics at the network and application layers. Honestly, I don’t have inside information on this topic, and one of the people who asked should have better information than I do.
- Do you still write about tokenization? Yes.
- Are you aware of any guidance on the use of vault-less solutions? Are there any proof points or third-party validations of their security? For the audience: vault-less tokenization solutions do not store a database of generated tokens – they use a mathematical formula to generate them, so there is no need to store what can be easily derived. And to answer the question: no, I am not aware of any. That does not mean no third-party validation exists, but I don’t follow these sorts of proofs closely. What’s more, because the basic design of these solutions closely resembles a one-time pad or similar construct, conceptually they are very secure. The proof is always in the implementation, so if you need this type of validation, have your vendor provide a third-party validation by people qualified for that type of analysis.
- Why is “token distinguishability” discussed as a best practice? What is it and which vendors provide it? Because PCI auditors need a way to determine whether a database is full of real credit cards or just tokens. This is a hard problem – tokens can and should be very close to the real thing. The goal for tokens is to make them as real as possible so you can use them in payment systems, but they will not be accepted as actual payment instruments. All the vendors potentially do this. I am unaware of any vendor offering a tool to differentiate real vs. tokenized values, but hope some vendors will step forward to help out.
- Have you seen a copy of the tokenization framework Visa/Mastercard/etc. announced a few months back? No. As far as I know that framework was never published, and my requests for copies were met with complete and total silence. I did get snippets of information from half a dozen different people in product management or development roles – off the record – at Visa and Mastercard. It appears their intention was to define a tokenization platform that could be used across all merchants, acquirers, issuers, and related third parties. But this would be a platform offered by the brands to make tokenization an industry standard. On a side note, I really did think, from the way the PR announcement was phrased, that the card brands were shooting for a cloud identity platform to issue transaction tokens after a user self-identified to the brands. It looked like they wanted a one-to-one relationship with the buyer to disintermediate merchants out of the payment card relationship. That could be a very slick cloud services play, but apparently I was on drugs – according to my contacts there is no such effort.
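To make the throttling idea from the first question concrete, here is a minimal sketch of how a de-tokenization rate limiter might work. Everything here is illustrative – the class, the limits, and the `allow` API are invented for this post, and I am not claiming any token product implements exactly this – but it shows the failsafe: cap how much cardholder data any one client can pull back per time window, so a compromised gateway can’t drain the vault.

```python
import time
from collections import defaultdict, deque

class DetokenizationThrottle:
    """Hypothetical rate limiter for de-tokenization requests.

    Caps how many tokens a single client can exchange for cardholder
    data per time window, as a failsafe against bulk siphoning."""

    def __init__(self, max_requests=100, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id):
        now = time.monotonic()
        q = self.history[client_id]
        # Drop timestamps that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit: deny (and ideally alert), don't serve PANs
        q.append(now)
        return True

# A client over its quota gets cut off; other clients are unaffected
throttle = DetokenizationThrottle(max_requests=3, window_seconds=60)
results = [throttle.allow("gateway-1") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

In practice, as noted above, this sort of control usually lives in separate fraud and threat analytics rather than in the token service itself.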
And don’t forget to RSVP for the 6th annual (really, the 6th? How time flies ….) Securosis Disaster Recovery Breakfast during the RSA Conference.
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
This week’s best comment goes to Todd Thiemann, in response to Advanced Endpoint and Server Protection: Assessment.
What role would attestation play in determining your security posture? This might not play in understanding vulnerabilities, but it would help to understand compromises. If you can attest that the hardware/software stack of a given system is in a known, valid/trusted state, you could go a long way towards avoiding Advanced Persistent Threats that have pre-occupied organizations of late.
Posted at Thursday 16th January 2014 11:22 pm
From Brian Krebs’ awesome reporting on the Target breach (emphasis added):
The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.
That source and one other involved in the investigation who also asked not to be named said the POS malware appears to be nearly identical to a piece of code sold on cybercrime forums called BlackPOS, a relatively crude but effective crimeware product. BlackPOS is a specialized piece of malware designed to be installed on POS devices and record all data from credit and debit cards swiped through the infected system.
I swear I’ve been briefed by a large percentage of those vendors on how their products stop 0-day attacks. Let me go find my notes…
Posted at Thursday 16th January 2014 2:07 pm
I am currently polishing off the first draft of my Data Security for iOS 7 paper, and reached one fascinating conclusion during the research which I want to push out early. The approach Apple is implementing is very different from the way we normally view BYOD. Apple’s focus is on providing a consistent, non-degraded user experience while still allowing enterprise control. Apple enforces this by taking an active role in mediating mobile device management between the user and the enterprise, treating both as equals. We haven’t really seen this before – even when companies like BlackBerry handle aspects of security and MDM, they don’t simultaneously treat the device as something the user owns. Enough blather – here you go…
Apple has a very clear vision of the role of iOS devices in the enterprise. There is BYOD, and there are enterprise-owned devices, with nearly completely different models for each. The owner of the device defines the security and management model.
In Apple’s BYOD model users own their devices, enterprises own enterprise data and apps on devices, and the user experience never suffers. No dual personas. No virtual machines. A seamless experience, with data and apps intermingled but sandboxed. The model is far from perfect today, with one major gap, but iOS 7 is the clearest expression of this direction yet, and only the foolish would expect Apple to change any time soon.
Enterprise-owned devices support absolute control by IT, down to the new-device provisioning experience. Organizations can degrade features as much as they want and need, but the devices will, as much as allowed, still provide the complete iOS experience.
In the first case users allow the enterprise space on their device, while the enterprise allows users access to enterprise resources; in the second model the enterprise owns everything. The split is so clear that it is actually difficult for the enterprise to implement supervised mode on an employee-owned device.
We will explain the specifics as we go along, but here are a few examples to highlight the different models.
On employee owned devices:
- The enterprise sends a configuration profile that the user can choose to accept or decline.
- If the user accepts it, certain minimal security can be required, such as passcode settings.
- The user gains access to their corporate email, but cannot move messages to other email accounts without permission.
- The enterprise can install managed apps, which can be set to only allow data to flow between them and managed accounts (email). These may be enterprise apps or enterprise licenses for other commercial apps. If the enterprise pays for it, they own it.
- The user otherwise controls all their personal accounts, apps, and information on the device.
- All this is done without exposing any user data (like the user’s iTunes Store account) to the enterprise.
- If the user opts out of enterprise control (which they can do whenever they want) they lose access to all enterprise features, accounts, and apps. The enterprise can also erase their ‘footprint’ remotely whenever they want.
- The device is still tied to the user’s iCloud account, including Activation Lock to prevent anyone, even the enterprise, from taking the device and using it without permission.
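The configuration profile mentioned in the first bullet above is an XML property list. As a hedged sketch of what a minimal one enforcing passcode settings might look like, here it is built with Python’s `plistlib` – the identifiers and values are invented examples, the payload keys follow Apple’s published configuration profile format as I understand it, and a real deployment would push this from an MDM server rather than hand-build it:

```python
import plistlib
import uuid

# Illustrative passcode-policy payload; key names follow Apple's
# Configuration Profile format, values are example settings only.
passcode_payload = {
    "PayloadType": "com.apple.mobiledevice.passwordpolicy",
    "PayloadIdentifier": "com.example.passcode",   # hypothetical identifier
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadVersion": 1,
    "forcePIN": True,   # require a passcode
    "minLength": 6,     # minimum passcode length
}

# Top-level profile wrapping one or more payloads
profile = {
    "PayloadType": "Configuration",
    "PayloadDisplayName": "Example Corp BYOD Profile",
    "PayloadIdentifier": "com.example.profile",
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadVersion": 1,
    "PayloadContent": [passcode_payload],
}

xml = plistlib.dumps(profile).decode()
print(xml.splitlines()[0])  # standard plist XML declaration
```

The user can still decline the profile, which is exactly the point of the BYOD model described above.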
On enterprise owned devices:
- The enterprise controls the entire provisioning process, from before the box is even opened.
- When the user first opens the box and starts their assigned device, the entire experience is managed by the enterprise, down to which setup screens display.
- The enterprise controls all apps, settings, and features of the device, down to disabling the camera and restricting network settings.
- The device can never be associated with a user’s iCloud account for Activation Lock; the enterprise owns it.
This model is quite different from the way security and management were handled on iOS 6, and runs deeper than most people realize. While there are gaps, especially in the BYOD controls, it is safe to assume these will slowly be cleaned up over time, following Apple’s usual improvement process.
Posted at Thursday 16th January 2014 1:51 pm
By Mike Rothman
In the first post in our Application Control series we discussed why it is hard to protect endpoints, and some of the emerging alternative technologies that promise to help us do better. Mostly because it is probably impossible to do a worse job of protecting endpoints, right? We described Application Control (also known as Application Whitelisting), one of these alternatives, while being candid about the perception and reality of this technology after years of use.
Our conclusion was that Application Control makes a lot of sense in a variety of use cases, and can work in more general situations, if the organization is willing to make some tradeoffs. This post describes the “good fit” use cases and mentions some of the features & functions that can make a huge difference to security and usability.
Given the breadth of ways computing devices are used in a typical enterprise, trying to use a generic set of security controls for every device doesn’t make much sense. So first you spend some time profiling the main use models of these devices and defining some standard ‘profiles’, for which you can then design appropriate defenses. There are quite a few attributes you can use to define these use cases, but here are the main ones we usually see:
- Operating System: You protect Windows devices differently than Macs than Linux servers, because each has a different security model and different available controls. When deciding how to protect a device, operating system is a fundamental factor.
- Usage Model: Next look at how the device is used. Is it a desktop, kiosk, server, laptop, or mobile device? We protect personal desktops differently than kiosks, even if the hardware and operating system are the same.
- Application variability: Consider what kind of applications run on the device, as well as how often they change and are updated.
- Geographic distribution: Where is the device located? Do you have dedicated IT and/or security staff there? What is the culture and do you have the ability to monitor and lock it down? Some countries don’t allow device monitoring and some security controls require permission from government organizations, so this must be a consideration as well.
- Access to sensitive data: Do the users of these devices have access to sensitive and/or protected data? If so you may need to protect them differently. Likewise, a public device in an open area, with no access to corporate networks, may be able to do with much looser security controls.
Using these types of attributes you should be able to define a handful (or two) of use cases, which you can use to determine the most appropriate means of protecting each device, trading off security against usability.
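As an illustration of how those attributes might feed the profiling exercise, here is a sketch of a classifier – the attribute names, profile labels, and decision rules are all invented examples for discussion, not recommendations:

```python
from dataclasses import dataclass

@dataclass
class Device:
    os: str               # e.g. "windows-xp", "windows-7", "macos", "linux"
    usage: str            # "desktop", "kiosk", "server", "laptop", "mobile"
    app_variability: str  # "static" or "dynamic" application set
    sensitive_data: bool  # access to sensitive/protected data?

def protection_profile(d: Device) -> str:
    """Map a device's attributes to an example control strategy."""
    if d.os == "windows-xp":
        # Unsupported OS: no more patches, so lock it down
        return "application-control-default-deny"
    if d.usage in ("kiosk", "server") or d.app_variability == "static":
        # Fixed-function or stable application set: default-deny fits
        return "application-control-default-deny"
    if d.sensitive_data:
        # Knowledge workers with sensitive data: flexible trust model
        return "application-control-flexible"
    return "standard-endpoint-controls"

print(protection_profile(Device("windows-xp", "desktop", "dynamic", False)))
```

Real profiling obviously weighs more factors (geography, local staff, culture), but even a table this crude forces the conversation about which devices can tolerate lockdown.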
Let’s list a few of the key use cases where application control fits well.
Operating System Lockdown

When an operating system is at the end of its life and no longer receiving security updates, it is a sitting duck. Attackers have free rein to continue finding exploitable defects with no fear of patches to ruin their plans. Windows XP security updates officially end in April 2014 – after that, organizations still using XP are out of luck. (Like luck has anything to do with it…)
We know you wonder why on Earth any organization serious about security – or even not so serious – would still use XP. It is a legitimate question, with reasonable answers. For one, some legacy applications still only run on XP. It may not be worth the investment – or even possible, depending on legal/ownership issues – to migrate to a modern operating system, so on XP they stay. A similar situation arises with compliance requirements to have applications qualified by a government agency. We see this a lot in healthcare, where the OS cannot even be patched without going through a lengthy and painful qualification process. That doesn’t happen, so on XP it stays. Despite Microsoft’s best efforts, XP isn’t going away any time soon.
Unfortunately that means XP will still be a common target for attackers, and organizations will have little choice but to protect vulnerable devices somehow. Locking them down may be one of the few viable options. In this situation using application control in default-deny mode, allowing only authorized applications to run, works well.
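Mechanically, default-deny application control boils down to checking every executable against an allow-list before it runs. Here is a deliberately simplified sketch of that check – real products hook the operating system itself and usually trust publishers, updaters, and signed code rather than just file hashes, and the allow-list entry below is only an example:

```python
import hashlib

# Illustrative allow-list of SHA-256 hashes for authorized executables.
# This example entry happens to be the SHA-256 of empty input.
AUTHORIZED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_execute(path: str) -> bool:
    """Default-deny check: only binaries whose hash is on the
    allow-list may run; everything else is blocked."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in AUTHORIZED_HASHES
```

The operational cost is keeping that list current – every patch changes hashes – which is exactly why this model fits stable environments like the XP holdouts above far better than general-purpose desktops.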
Fixed Function Devices
Another use case we see frequently for application control is fixed function devices, such as kiosks running embedded operating systems. Think of an ATM or payment station, where you don’t see the underlying operating system. These devices run only a select few applications, built specifically for the device. In this scenario there is no reason for any software besides authorized applications to run. Customers shouldn’t be browsing the Internet on an ATM. So application control works well to lock down kiosks.
Similarly, some desktop computers in places like call centers and factory floors run only very stable and small sets of applications. Locking them down to that set of applications provides protection both from malware and from employees loading unauthorized software or stealing data.
In both this use case and OS lockdown you will get little to no pushback from employees about their inability to load software. Nothing in their job description indicates they should be loading software or accessing anything but the applications they need to do their jobs. In these scenarios application control is an excellent fit.
Another clear use case for application control is on server devices. Servers tend to be dedicated to a handful of functions, so they can be locked down to those specific applications. Servers don’t call the Help Desk to request access to iTunes, and admins can be expected to understand and navigate the validation process when they have a legitimate need for new software. Locking down servers can work very well – especially appealing because servers, as the repository of most sensitive data, are the ultimate target of most attacks.
General Purpose Devices
There has always been a desire to lock down general-purpose devices, which are among the most frequently compromised. Those employees keep clicking stuff, and are notoriously hard to control. Theoretically, if you could stop unauthorized code from running on these devices, you could protect employees from themselves. As our last post mentioned, end users push back against this because sometimes they legitimately need to install additional software. People get grumpy if they can’t do their jobs.
Application control does have a role on general-purpose desktops – so long as there is sufficient flexibility to allow knowledge workers to load legitimate software. In most cases the application control software allows a grace period of a few hours to a day or so to run a new application before it needs to be explicitly authorized by a manager or IT person. There are other situations where application control’s trust model is more flexible – such as authorized software distributions, authorized publishers, and trusted users.
Of course flexible trust introduces a window of vulnerability for new malware. Reducing application control’s very strong security model can enable employees to load software to get their jobs done, but this is a tricky trade-off which requires careful consideration. A good balance can make application control viable in situations where it would be a non-starter, and we see many organizations deploying application control successfully. But be sure you have other controls in place – such as network security monitoring and malware callback detection – to identify compromised devices where application control isn’t enough.
Application Control Selection Criteria
Now that you have an idea of the key use cases for application control, let’s spend a little time highlighting some of the key features to look for in products implementing this security model.
- Library of Executables: If the application control product doesn’t know about your applications it takes considerable work to get the system set up. So a large and current application library is critical to scaling the technology. You want the library updated in a timely fashion, especially for patches and updates. Keep in mind that application control won’t recognize an updated version of a program until it is explicitly added.
- Flexible trust model: Speaking specifically of patches, you should be able to define certain software publishers whose code can run on your devices. That way, as long as code is properly signed by a trusted software vendor, it will run without explicit authorization. Similarly, you should be able to define trusted software distribution products that automatically install products, trusted directories where applications can be found, and perhaps even specific users who can install software. Each trust is a security trade-off, but they make the technology much more workable at scale.
- Policy setup: The product should be able to monitor which applications are used on managed devices and build a baseline for the environment. This baseline can be used to quickly define which applications are legitimate and which are not, for a good first cut at your policy base.
- Easy to manage policies: The most resource-intensive aspect of deploying application control is keeping policies up to date. So you want an easy-to-use system for defining what is authorized and what is blocked for groups of users.
- Flexible enforcement: You should have flexible enforcement options: allow software to run while alerting the IT group, allow users to run an application with a grace period before it must be authorized, or simply block execution. Policies should be able to take device type, user, and group into account, to implement controls that support your organization’s requirements.
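The flexible trust and enforcement options above can be pulled together into a single policy decision. Here is a hypothetical sketch of how such an engine might evaluate a candidate executable; the publisher name, trusted directory, and 24-hour grace period are illustrative assumptions, not any product’s actual policy:

```python
from datetime import datetime, timedelta

# Illustrative trust anchors -- in a real product these come from policy.
TRUSTED_PUBLISHERS = {"Example Corp"}   # hypothetical code-signing identity
TRUSTED_DIRS = {"/opt/approved"}        # hypothetical software-distribution path
GRACE_PERIOD = timedelta(hours=24)

def enforcement_decision(app: dict) -> str:
    """Return 'allow', 'grace', or 'block' for a candidate executable.

    `app` carries: hash_known (bool), publisher (str or None),
    path (str), first_seen (datetime)."""
    if app["hash_known"]:
        return "allow"      # explicitly authorized via the hash library
    if app.get("publisher") in TRUSTED_PUBLISHERS:
        return "allow"      # signed by a trusted software vendor
    if any(app["path"].startswith(d) for d in TRUSTED_DIRS):
        return "allow"      # installed through a trusted distribution channel
    if datetime.now() - app["first_seen"] < GRACE_PERIOD:
        return "grace"      # runs now, but must be authorized before the window closes
    return "block"          # default deny
```

Each branch before the final `return "block"` is one of the trade-offs discussed above: every trust anchor you add widens the window of vulnerability a little in exchange for workability at scale.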
The good news is that application control technology has been on the market for quite a few years, and a number of offerings can provide all these capabilities. One other aspect to consider is leveraging technologies already in place. For instance, if you add application control as part of an endpoint protection or management suite, you can leverage the agents already on devices to simplify management and policy maintenance.
To wrap up this short series: application control can be useful – particularly for stopping advanced attackers and when operating systems are no longer supported. There are trade-offs as with any security control, but with proper planning and selection of which use cases to address, application control resists device compromise and protects enterprise data.
Posted at Wednesday 15th January 2014 4:47 pm
(1) Comments •
By Adrian Lane
If you made it this far, we know your old platform is akin to an old junker automobile: every day you drive to work in a noisy, uncomfortable, costly vehicle that may or may not get you where you need to be, and every time you turn around you’re spending more money to fix something. With cars, figuring out what you want, shopping, getting financing, and then dealing with car salespeople is no picnic either, but in the end you do it to make your life a bit easier and yourself more comfortable. It is important to remember this because, at this stage of SIEM replacement, it feels like we have gone through a lot of work just so we can do more work to roll out the new platform. Let’s step back for a moment and focus on what’s important: getting stuff done as simply and easily as possible.
Now that you are moving to something else, how do you get there? The migration process is not easy, and it takes effort to move from the incumbent to the new platform. We have outlined a disciplined and objective process to determine whether it is worth moving to a new security management platform. Now we will outline a process for implementing the new platform and transitioning from the incumbent to the new SIEM. You need to implement and migrate your existing environment to the new platform, while maintaining service levels, and without exposing your organization to additional risk. This may involve supporting two systems for a short while, or – in a hybrid architecture – using two systems indefinitely. Either way, when a customer puts his/her head on the block to select a new platform, the migration needs to go smoothly. There is no such thing as a ‘flash’ cutover. We recommend you start deploying the new SIEM long before you get rid of the old. At best, you will deprecate portions of the older system after newer replacement capabilities are online, but you will likely want the older system as a fallback until all new functions have been vetted and tuned. We have learned the importance of this staging process the hard way. Ignore it at your own peril, keeping in mind that your security management platform supports several key business functions.
We offer a migration plan for moving to the new security management platform. It covers data collection as well as migrating/reviewing policies, reports, and deployment architectures. We break the migration process into two phases: planning and implementation. Your plan needs to be very clear and specific about when things get installed, how data gets migrated, when you cut over from old systems to new, and who performs the work. The Planning step leverages much of the work done up to this point in evaluating replacement options – you just need to adapt it for migration.
- Review: Go back through the documents you created earlier. First consider your platform evaluation documents, which will help you understand what the current system provides and key deficiencies to address. These documents become the priority list for the migration effort, the basis for your migration task list. Next leverage what you learned during the PoC. To evaluate your new security management platform provider you conducted a mini deployment. Use what you learned from that exercise – particularly what worked and didn’t – as input for subsequent planning, and address the issues you identified.
- Focus on incremental success: What do you install first? Do you work top down or bottom up? Will you keep both systems operational throughout the entire migration, or shut down portions of the old as each node migrates? We recommend using your deployment model as a guide. You can learn more about these models by checking out Understanding and Selecting a SIEM. When using a mesh deployment model, it is often easiest to make sure a single node/location is fully functional before moving on to the next. With ring architectures it is generally best to get the central SIEM platform operational, and then gradually add nodes around it until you reach the scalability limit of the central node. Hierarchical models are best deployed top-down, with the central server first, followed by regional aggregation nodes in order of criticality, down to the collector level. Break the project up to establish incremental successes and avoid dead ends.
- Allocate resources: Who does the work? When will they do it? How long will it take to deploy the platform, data collectors, and/or log management support system(s)? This is also the time to engage professional services and enlist the new vendor’s assistance. The vendor presumably does these implementations all day long so they should have expertise at estimating these timelines. You may also want to engage them to perform some (or all) of the work in tandem with your staff, at least for the first few locations until you get the process down.
- Define the timeline: Estimate the time it will take to deploy the servers, install the collectors, and implement your policies. Include time for testing and verification. There is likely to be some ‘guesstimation’, but you have some reasonable metrics to plan from, based on the PoC and prior experience with SIEM. You did document the PoC, right? Plan the project commencement date and publish it to the team. Solicit feedback and adjust before commencing because you need shared accountability with the operations team(s) to make sure everyone has a vested interest in success.
- Preparation: We recommend you do as much work as possible before you begin migration, including construction of the rules and policies you will rely on to generate alerts and reports. Specify in advance any policies, reports, user accounts, data filters, backup schedules, data encryption, and related services you can. You already have a rule base so leverage it to get going. Of course you’ll tune things as you go, but why reinvent the wheel or rush unnecessarily? Keep in mind that you will always find something you failed to plan for – often an unexpected problem – that sets your schedule back. Preparation helps spot missing tasks and makes deployment go faster.
It is helpful for team morale, not to mention the confidence of upper management, to demonstrate the value of the new platform early on. So you should plan some “quick wins” into the migration process where possible. Delivering what you already have in the incumbent platform may be critical to long-term success, but completely uninspiring to the people deciding your bonus. If there are key facets of the new platform that can be delivered early in the implementation process, it is worth your time to do so.
The migration need not (and in fact generally should not) be an all-at-once exercise – you have the luxury of doing one piece at a time in the order that best suits your requirements.
- Deploy platform(s): This varies based on the deployment model as discussed above, but typically you install the main security management platforms first. Perform basic system configuration, identity management and access control integration, and basic network configuration. Once complete, connect to a couple data sources and other aggregation points to make sure the system is operating correctly.
- Deploy supporting services: Deploy the data collectors and make sure event collection is working correctly. If you use a flat deployment model, configure the platform to collect events for the first set of deployment tasks. If you use a Log Management/SIEM hybrid or regional data aggregators, install the additional aggregation points and get them feeding data into the primary SIEM system to confirm proper information flow – at a small scale before ramping up event traffic. If you are moving to a new platform for real-time analysis make sure event collection happens properly. Your only concern right now should be getting data into the system in a timely fashion – tune it later.
- Install policies and reports: Next deploy the rules that comb through events and find anomalies. Hopefully you created as many as possible during the PoC and planning stages, and perhaps you can leverage your initial implementation. For real-time analysis you need to tune those rules to optimize performance. Remember that each additional rule incurs significant processing cost. It’s math – correlating multiple data sources against many rules causes the system to do exponentially more work, reducing effective performance and throughput. Look for ways to create rules with fewer comparisons, and balance fine-tuning rules for specific problems against more generic rules that catch many problems – sometimes you can throw hardware at the problem (with a bigger server) to handle more events, but it is always useful to strive for more efficient policies.
- Test and verify: Are your reports being generated properly? Are the correct alerts being generated in a timely fashion? Generate copies of the reports and send them to the team for review and compare against the existing platform (which is still operational, right?). For alerts and forensic analysis it makes sense to rerun your “Red Team” drill from the PoC to make sure you catch anomalies and confirm the accuracy of your results. Verify you get what you need – now is the time to find any problems with the system – while you still have a chance to find and fix problems, before you start depending on the new platform.
- Stakeholder sign-off: Get it in writing – trust us, this will save aggravation in the future when someone from Ops says: “Hey, where is XYZ? I still need it!” Have the compliance, security, and IT ops teams sign off on completion of the project – they own it now too (remember shared accountability?). Make sure the group is satisfied and/or all issues are documented – if not fully solved – by this point.
- Decommission: Now you can retire the older system. You may choose to run the incumbent SIEM for a few months after the new system is fully operational, just in case. But there are not many reasons to keep the older system around long-term, and plenty of reasons to send it packing. Older agents and sensors should be removed, user accounts dedicated to the older platform locked down, and hardware and virtual server real estate reclaimed. Once again, someone will need to be assigned the work with an agreed-on time frame for completion. Trouble ticketing systems are a handy way to schedule these tasks and get automated completion reports.
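The rule-cost point in the policies step above is easy to see in miniature. A correlation rule that joins events across data sources must consider combinations of events, so the work grows with the product of the stream sizes times the number of rules. This toy engine is purely illustrative (real SIEMs use time windows and indexing rather than brute force), but it makes the cost visible:

```python
from itertools import product

def naive_correlate(rules, streams):
    """Brute-force correlation for illustration only.

    rules:   list of predicates, each taking one event per stream.
    streams: list of event lists, one per data source.
    Returns (matches, comparisons) so the cost of evaluation is explicit."""
    matches, comparisons = [], 0
    for rule in rules:
        # Every rule is checked against every combination of events,
        # one drawn from each data source.
        for combo in product(*streams):
            comparisons += 1
            if rule(*combo):
                matches.append(combo)
    return matches, comparisons
```

With three sources of 1,000 events each and 10 rules, that is 10 × 1,000³ comparisons — which is why writing rules with fewer cross-source comparisons pays off far more than throwing hardware at the problem.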
Posted at Wednesday 15th January 2014 9:00 am
(0) Comments •
By Mike Rothman
As I discussed last week, the beginning of the year is a time for ReNewal and taking a look at what you will do over the next 12 months. Part of that renewal process should be clearing out the old so the new has room to grow. It’s kind of like forest fires. The old dead stuff needs to burn down so the new can emerge. I am happy to say the Boss is on board with this concept of renewal – she has been on a rampage, reducing the clutter around the house.
The fact is that we accumulate a lot of crap over the years, and at some point we kind of get overrun by stuff. Having been in our house almost 10 years, since the twins were infants, we have stuff everywhere. It’s just the way it happens. Your stuff expands to take up all available space. So we still have stuff from when the kids were small. Like FeltKids and lots of other games and toys that haven’t been touched in years. It’s time for that stuff to go.
Because we have a niece a few years younger than our twins, and a set of nephews (yes, twins run rampant in our shop) who just turned 3, we have been able to get rid of some of the stuff. There is nothing more gratifying than showing up with a huge box of action figures that were gathering dust in our basement, and seeing the little guys’ eyes light up. When we delivered our care package over Thanksgiving, they played with the toys for hours.
The benefit of decluttering is twofold. First it gets the stuff out of our house. It clears room for the next wave of stuff tweens need. I don’t quite know what that is because iOS games don’t seem to take up that much room. But I’m sure they will accumulate something now that we have more room. And it’s an ongoing process. If we can get through this stuff over the next couple months that will be awesome. As I said, you accumulate a bunch of crap over 10 years.
The other benefit is the joy these things bring to others. We don’t use this stuff any more. It’s just sitting around. But another family without our good fortune could use this stuff. If these things bring half the joy and satisfaction they brought our kids, that’s a huge win.
And it’s not just stuff that you have. XX1 collected over 1,000 books for her Mitzvah project to donate to Sheltering Books, a local charity that provides books to homeless people living in shelters. She and I loaded up the van with boxes and boxes of books on Sunday, and when we delivered them there was great satisfaction from knowing that these books, which folks kindly donated to declutter their homes, would go to good use with people in need.
And the books were out of my garage. So it was truly a win-win-win. Karma points and a decluttered garage. I’ll take it.
Photo credit: “home-office-reorganization-before-after” originally uploaded by Melanie Edwards
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Reducing Attack Surface with Application Control
Security Management 2.5: You Buy a New SIEM Yet?
Advanced Endpoint and Server Protection
Newly Published Papers
Incite 4 U
Don’t take it personally: Stephen Covey has been gone for years, but his 7 habits live on and on. Our friend George Hulme did a piece for CSO Online detailing the 7 habits of effective security pros. The first is communication and the second is business acumen. I’m not sure you need to even get to #3. Without the ability to persuade folks that security is important, within the context of a critical business imperative – nothing else matters. Of course then you have squishy stuff like creativity and some repetitious stuff like “actively engaging with business stakeholders”. But that’s different than business acumen. I guess it wouldn’t have resonated as well if it was 5 habits, right? Another interesting one is problem solving. Again, not unique to security, but if you don’t like to investigate stuff and solve problems, security isn’t for you. One habit that isn’t on there is don’t take it personally. Security success depends on a bunch of other things going right, so even if you are blamed for a breach or outage, it is not necessarily your fault. Another might be “wear a mouthguard” because many security folks get kicked in the teeth pretty much every day. – MR
Out-of-control ad frenzy: Safari on my iPad died three times Saturday am, and the culprit was advertisement plug-ins. My music stream halted when a McDonalds ad screeched at me from another site. I was not “lovin’ it!” The 20 megabit pipe into my home and a new iPad were unable to manage fast page loads because of the turd parade of third-party ads hogging my bandwidth. It seems that in marketers’ frenzy to know everything you do and push their crap on you, they forgot to serve you what you asked for. The yoast blog offers a nice analogy, comparing on-line ads to brick-and-mortar merchants tagging customers with stickers, but it’s more like carrying around a billboard. And that analogy does not even scratch the surface of the crap going on under the covers. So I have to ask, as the media has been barking for months about Snowden-related revelations about NSA spying, why is nobody talking about marketing firms pwning your browser and scraping every piece of data they can? Those of you who don’t examine what web pages do behind the scenes when you visit them – the folks with better things to do – might be surprised to learn that many web sites use over 20 trackers, and send info to a dozen third parties completely unrelated to the content you actually requested. Referrer tags, ghost scripts, framing, re-routing through marketing sites, cookies, intentional data leakage, plug-ins, and browser scraping. We use Google+ here for the Securosis Firestarter, but Google contacts 5 different Google servers every hour to update Google on, among other things, my patch level. Yes, hourly! Do you honestly think we could not buy stuff, or find information, if all this crap was blocked? Let’s find out! I’m going to try out the kickstarter project Ad Trap to see if it increases or reduces web browsing satisfaction. – AL
There is timing in everything: The Target attack occurred at the worst possible time for Target, and the best for attackers – what a coincidence! In our research meeting today someone mentioned that by attacking close to the holidays, the attackers likely reduced the effectiveness of credit card fraud detection mechanisms – people buy more weird stuff from new and unusual places at Christmas. It also meant banks were very unlikely to cancel and reissue cards, given the impact that would have on consumers’ ability to spend money they don’t have during the holidays. Sorry Suzie, no Doc McStuffins play set for you – Santa’s magic card doesn’t work any more. This is logical, but it turns out the guy known for Prisoner’s Dilemma research put together a mathematical model for cyberattack timing. On the upside this is something defenders can use to model and prepare for attacks. On the downside I suspect many bad guys have this model instinctively hardwired into their brains. Well, the successful attackers, at least. – RM
Gracefully impaling yourself: Dave Lewis uses some of his CSO blog real estate to laud our own Rich for disclosing in gory detail a mistake he made with his AWS account. Dave’s point (and one we reiterated in this week’s Firestarter) is that there is a right way and a wrong way to communicate during a breach. Full disclosure is better. If you don’t know something, say you don’t know. And share information so perhaps someone else can avoid the trap that you fell into. It is hard when you need to juggle the demands of lawyers to limit liability, the desire of customers to figure out what they lost, the heavy hand of law enforcement who needs unspoiled evidence, and the need for someone internally to point the finger elsewhere. The best way to make sure you are ready? A tabletop exercise, which will at least make sure everyone understands their roles and responsibilities. – MR
Get some! Investment professionals consistently advise people to “invest in themselves”, as time and money spent on education pays the greatest returns. I am a huge fan of people who are students of their profession and study their craft to get better. Again, it pays dividends in career advancement, which leads to more job satisfaction. I made sure I had training budget to send my team to conferences and training sessions. They always came back stimulated from the new knowledge, and from being away from the daily grind for a couple days. Tom’s Guide has an article on planning your 2014 certifications. If you read the Securosis blog you know we are not huge fans of certifications; many of these rubber stamps don’t prove competency or make people better at their jobs. Lots of people use certificates as a badge of belonging to some club. Or perhaps to get by HR screeners on their next job interview. Whatever. I’m not about to endorse certifications for the sake of accumulating certificates, but it is time to get a plan together for the coming year. Figure out what would be most beneficial for you to learn, get management approval before the budget runs out, and get out there! Whether it’s chasing a certification or just learning a set of new skills, training is highly beneficial – not only to your employer but also to your psyche. And it doesn’t happen unless you make it happen. – AL
Horse. Dead. Redux: Rumor is Ira Winkler is still pissed at me for letting The Macalope pick on him in the early days of this blog. No, I’m not the Macalope, and Ira deserved the criticism. That said, I do like his take on the so-called RSA boycott. I realize we have been beating on this issue, but like a good late-night talk show host, you work with the material you have. It is a pretty definitive piece – Ira lays out the false assumptions, grandstanding, and hypocrisy grounding most of the echo chamber nonsense on the RSA/NSA issue and accompanying boycott. I can only assume he has gotten over the ribbing he received on our site, because that particular 2007 article was fairly misinformed itself. Who says folks don’t learn from their mistakes? – RM
Posted at Wednesday 15th January 2014 3:00 am
(0) Comments •
By Mike Rothman
As we described in the introduction to the Advanced Endpoint and Server Protection series, given the inability of most traditional security controls to defend against advanced attacks, it is time to reimagine how we do threat management. This new process has 5 phases; we call the first phase Assessment. We described it as:
Assessment: The first step is gaining visibility into all devices, data sources, and applications that present risk to your environment. And you need to understand the current security posture of anything to know how to protect it.
You need to know what you have, and how vulnerable and exposed it is. With this information you can prioritize and design a set of security controls to protect it.
What’s at Risk?
As we described in the CISO’s Guide to Advanced Attackers, you need to understand what attackers would be trying to access in your environment and why. Before you go into a long monologue about how you don’t have anything to steal, forget it. Every organization has something that is interesting to some adversary. It could be as simple as compromising devices to launch attacks on other sites, or as focused as gaining access to your environment to steal the schematics to your latest project. You cannot afford to assume adversaries will not use advanced attacks – you need to be prepared either way.
We call this Mission Assessment, and it involves figuring out what’s important in your environment. This points you to the interesting assets attackers are most likely to go after. When trying to understand what an advanced attacker will probably be looking for, there is a pretty short list:
- Intellectual property
- Protected customer data
- Business operational data (proposals, logistics, etc.)
- Everything else
To learn where this data is within the organization, you need to get out from behind your desk and talk to senior management and your peers.
Once you understand the potential targets, you can begin to profile adversaries likely to be interested in them. Again, we can put together a short list of likely attackers:
- Unsophisticated: These folks favor smash and grab attacks, where they use publicly available exploits (perhaps leveraging attack tools such as Metasploit and the Social Engineer’s Toolkit) or packaged attack kits they buy on the Internet. They are opportunists who take what they can get.
- Organized Crime: The next step up the food chain is organized criminals. They invest in security research, test their exploits, and always have a plan to exfiltrate and monetize what they find. They are also opportunistic but can be quite sophisticated in attacking payment processors and large-scale retailers. They tend to be most interested in financial data but have been known to steal intellectual property if they can sell it and/or use brute force approaches like DDoS threats for extortion.
- Competitor: Competitors sometimes use underhanded means to gain advantage in product development and competitive bids. They tend to be most interested in intellectual property and business operations.
- State-sponsored: Of course we all hear the familiar fretting about alleged Chinese military attackers, but you can bet every large nation-state has a team practicing offensive tactics. They are all interested in stealing all sorts of data – from both commercial and government entities. And some of them don’t care much about concealing their presence.
Understanding likely attackers provides insight into their tactics, which enables you to design and implement security controls to address the risk. But before you can design the security control set you need to understand where the devices are, as well as the vulnerabilities of devices within your environment. Those are the next two steps in the Assessment phase.
Discovery
This process finds the endpoints and servers on your network, and makes sure everything is accounted for. When performed early in the endpoint and server protection process, this helps avoid “oh crap” moments. It is no good when you stumble over a bunch of unknown devices – with no idea what they are, what they have access to, or whether they are steaming piles of malware. Additionally, an ongoing discovery process can shorten the window between something popping up on your network, you discovering it, and figuring out whether it has been compromised.
There are a number of techniques for discovery, including actively scanning your entire address space for devices and profiling what you find. This works well enough and is traditionally the main way to do initial discovery. You can supplement active discovery with a passive discovery capability, which monitors network traffic and identifies new devices based on network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities can be identified (as we will discuss below), but the primary goal of passive monitoring is to find new unmanaged devices faster. Passive discovery is also helpful for identifying devices hidden behind firewalls and on protected segments which active discovery cannot reach.
Finally, another complicating factor for discovery – especially for servers – is cloud computing. With the ability to spin up and take down virtual instances – perhaps outside your data center – your platform needs to both track and assess cloud resources, which requires some means of accessing cloud console(s) and figuring out what instances are in use.
Also make sure to pull data from existing asset repositories such as your CMDB, which Operations presumably uses to track all the stuff they think is out there. It is difficult to keep these data stores current so this is no substitute for an active scan, but it provides a cross-check on what’s in your environment.
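That cross-check boils down to simple set arithmetic. A minimal sketch, assuming the scan results and the CMDB export have both been reduced to sets of IP addresses (an illustrative simplification — real reconciliation also matches on hostname, MAC, and asset tags):

```python
def reconcile(scanned: set, cmdb: set) -> dict:
    """Highlight gaps between what is on the wire and what Ops tracks."""
    return {
        # Found by the scan but absent from the CMDB: the "oh crap" devices.
        "unknown_on_network": scanned - cmdb,
        # Tracked by Ops but not seen on the network: stale records or
        # devices hidden from the scanner (e.g. behind firewalls).
        "missing_from_scan": cmdb - scanned,
        # Present in both: the known, accounted-for population.
        "confirmed": scanned & cmdb,
    }
```

The `unknown_on_network` bucket is where the discovery process earns its keep; `missing_from_scan` is a prompt to either clean up the CMDB or extend passive monitoring into segments active scanning cannot reach.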
Determine Security Posture
Once you know what’s out there you need to figure out whether it’s secure. Or more realistically, how vulnerable it is. That typically requires some kind of vulnerability scan on the devices you discovered. There are many aspects to vulnerability scanning – at the endpoint, server, and application layers – so we won’t rehash all the research from Vulnerability Management Evolution. Check it out to understand how a vulnerability management platform can help prioritize your operational security activity. Key features to expect from your scanner include:
- Device/Protocol Support: Once you find an endpoint and/or server, you need to determine its security posture. Compliance demands that we scan all devices with access to private/sensitive/protected data, so any scanner should assess the full variety of devices running in your environment, as well as servers running all relevant operating systems.
- External and Internal Scanning: Don’t assume adversaries are purely external (or internal) – you need to assess devices both from inside and outside your network. You need some kind of scanner appliance (which could be virtualized) to scan the innards of your environment. You will also want to monitor your IP space from the outside to identify new Internet-facing devices, find open ports, etc.
- Accuracy: Unless you enjoy chasing wild geese, you will appreciate a scanner that prioritizes accuracy to minimize false positives.
- Vulnerability Research: Every vulnerability requires a determination of severity, so it is very helpful to have information – from either the vendor’s research team or third parties – on the vulnerability directly within the scanning console, to help figure out which problems are real.
- Scale: The scanner must be able to scan your environment quickly and effectively – whether that is 200 or 200,000 devices. Make sure it is extensible enough to cover what you will need as you add devices, databases, apps, virtual instances, etc.
- New and Updated Tests: Organizations face new attacks constantly, and attackers never stop evolving. Your scanner needs to stay current to test for the latest attacks. Exploit code based on patches and public vulnerability disclosures typically appears within a day, so scanners need to be updated almost daily, and you need the ability to update them with new tests transparently – whether on-premises or in the cloud.
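To make the severity point concrete, here is a toy triage function that buckets findings by CVSS score, roughly the way a scanner console prioritizes results – the score and the integer cutoffs are illustrative only:

```shell
#!/bin/sh
# Toy severity triage: map a CVSS score to a priority bucket.
# POSIX shell has no floats, so we compare on the integer part,
# which lines up with the usual 9.0/7.0/4.0 boundaries.
score_bucket() {
  case "${1%%.*}" in
    9|10)  echo critical ;;
    7|8)   echo high ;;
    4|5|6) echo medium ;;
    *)     echo low ;;
  esac
}
BUCKET=$(score_bucket 9.8)
echo "CVSS 9.8 -> $BUCKET"
```

Real prioritization also weighs exploitability and asset value, which is exactly why the vulnerability research feed matters – a raw score alone doesn't tell you which problems are real.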
A vulnerability scan will provide some perspective on what is vulnerable, but that doesn’t necessarily equate to risk. Given that you presumably have a bunch of defenses in place on the network in front of your endpoints and servers, attackers may not be able to reach a device. Automated attack path analysis and visualization tools can be useful for determining which devices can be reached by an external attacker or a compromised internal device.
It may not be as sexy as a shiny malware sandbox or advanced detection technology, but these assessment tasks are critical before you can even start thinking about building a set of controls to prevent advanced attacks. Assessment needs to happen on an ongoing basis, because your technology environment is dynamic, and the attacks you are subject to change as well – possibly daily. Our next post will dig into emerging technologies to better protect endpoints and servers.
Posted at Tuesday 14th January 2014 3:40 pm
(1) Comments •
By Adrian Lane
You made your decision and kicked it up the food chain – now the fun begins. Well, fun for some people, anyway. For the first half of this discussion we will assume you have decided to move to a new platform and offer tactics for negotiating for a replacement platform. But some people decide not to move, using the possible switch for negotiating leverage. It is no bad thing to stay with your existing platform, so long as you have done the work to know it can meet your requirements. We are writing this paper for the people who keep telling us about their unhappiness, and how their evolving requirements have not been met. So after asking all the right questions, if the best answer is to stay put, that’s a less disruptive path anyway.
For now, though, let’s assume your current platform won’t get you there. Now your job is to get the best price for the new offering. Here are a few tips to leverage for the best deal:
- Time the buy: Yes, this is Negotiation 101. Wait until the end of the quarter and squeeze your sales rep for the best deal to get the PO in by the last day of the month. Sometimes it works, sometimes it doesn’t. But it’s worth trying. The rep may ask for your commitment that the deal will, in fact, get done that quarter. Make sure you can get it done if you pull this card.
- Tell the incumbent they lost the deal: Next get the incumbent involved. Once you put in a call letting them know you are going in a different direction, they will usually respond. Not always, but generally the incumbent will try to save the deal. And then you can go back to the challenger and tell them they need to do a bit better because you got this great offer from their entrenched competition. Just like when buying a car, to use this tactic you must be willing to walk away from the challenger and stay with the incumbent.
- Look at non-cash add-ons: Sometimes the challenger can’t discount any more. But you can ask for additional professional services, modules, boxes, licenses, whatever. With new data analytics, maybe your team lacks some in-house skills for a successful transition – the vendor can help. Remember, the incremental cost of software is zero, zilch, nada – so vendors can often bundle in a little more to get the deal when pushed to the wall.
- Revisit service levels: Another non-cash sweetener could be an enhanced level of service. Maybe it’s a dedicated project manager to get your migration done. Maybe it’s the Platinum level of support, even if you pay for Bronze. Given the amount of care and feeding required to keep any security management platform tuned and optimized, a deeper service relationship could come in handy.
- Dealing with your boss’s boss: One last thing: be prepared for your recommendation to be challenged. This entire process has prepared you for that call, so just work through the logic of your decision once more, making clear that your recommendation is best for the organization. And expect the incumbent to go over your head – especially if they sell a lot of other gear, storage, or servers to your company.
Negotiating with the incumbent
Maybe staying put really is the best option for your organization, so knowing how to play both sides helps you cut a better deal. An incumbent who doesn’t want to lose the business adds a layer of complexity to the decision, but there are ways to leverage their save-the-account behavior as the process comes to a conclusion. It would also be naive not to prepare in case the decision goes the other way – due to pricing, politics, or any other reason beyond your control. So if you have to make the status quo work and keep the incumbent, here are some ideas for making lemonade from the proverbial lemon:
- Tell the incumbent they are losing the deal: We know it is not totally above-board – but all’s fair in love, war, and sales. If the incumbent didn’t already know they were at risk, it can’t hurt to tell them. Some vendors (especially the big ones) don’t care, which is probably one reason you were looking at new stuff anyway. Others will get the wake-up call and try to make you happy. That’s the time to revisit your platform evaluation and figure out what needs fixing.
- Get services: If you have to make do with what you have, at least force the vendor’s hand to make your systems work better. Asking a vendor for feature enhancement commitments will only add to your disappointment, but there are many options at your disposal. If your issue is not getting proper value from the system, push to have the incumbent provide some professional services to improve the implementation. Maybe send your folks to training. Have their team set up a new set of rules and do knowledge transfer. We have seen organizations literally start over, which may make sense if your initial implementation is sufficiently screwed up.
- Scale up (at lower prices): If scalability is the issue, confront that directly with the incumbent and request additional hardware and/or licenses to address the issue. Of course this may not be enough but every little bit helps, and if moving to a new platform isn’t an option, at least you can ease the problem a bit. Especially when the incumbent knows you were looking at new gear because of a scaling problem.
- Add use cases: Another way to get additional value is to request additional modules thrown into a renewal or expansion deal. Maybe add the identity module or look at configuration auditing. Or work with the team to add database and/or application monitoring. Again, the more you use the tool, the more value you will get, so figure out what the incumbent will do to make you happy.
Honestly, if you must stick with the existing system, you don’t have much flexibility. The incumbent doesn’t need to know that, though, so try to use the specter of migration as leverage. But at the end of the day it is what it is. Throughout this process you have figured out what you need the tool to do, so now do your best to get there within your constraints.
Once the deal is done, it’s time to move to the new platform, and you will be knee-deep in the migration. We will wrap up this paper with the migration, and planning to get onto the new kit. It will be hard – it always is – but you can leverage everything you learned through your first go-round with the incumbent, as well as this process, to build a very clear map of where you need to go and how to get there.
Posted at Tuesday 14th January 2014 8:00 am
(0) Comments •
Okay, we have content in this thing. We promise. But we can’t stop staring at our new title video sequence. I mean, just look at it!
This week Rich, Mike, and Adrian discuss Target, Snapchat, RSA, and why no one can get crisis communications correct.
Sorry we hit technical difficulties with the live Q&A Friday, but we think we have the kinks worked out (I’d blame Mike if I were inclined to point fingers). Our plan is to record Friday again – keep an eye on our Google+ page for the details.
Posted at Monday 13th January 2014 5:27 pm
(0) Comments •
Last week I wrote up my near epic fail on Amazon Web Services where I ‘let’ someone launch a bunch of Litecoin mining instances in my account.
Since then I received some questions on my forensics process, so I figure this is a good time to write up the process in more detail. Specifically, how to take a snapshot and use it for forensic analysis.
I won’t cover all the steps at the AWS account layer – this post focuses on what you should do for a specific instance, not your entire management plane.
The first step, which I skipped, is to collect all the metadata associated with the instance. There is an easy way, a hard way (walk through the web UI and take notes manually), and the way I’m building into my nifty tool for all this that I will release at RSA (or sooner, if you know where to look).
The best way is to use the AWS command line tools for your operating system, then run aws ec2 describe-instances --instance-ids i-5203422c (inserting your instance ID). Note that you need to follow the instructions linked above to properly configure the tool and your credentials.
I suggest piping the output to a file for later examination – e.g., aws ec2 describe-instances --instance-ids i-5203422c > forensic-metadata.log.
You should also get the console output, which is stored by AWS for a short period on boot/reboot/termination: aws ec2 get-console-output --instance-id i-5203422c. This might include a bit more information if the attacker mucked with logs inside the instance, but won’t be useful for a hacked instance because it is only a boot log. This is a good reason to use a tool that collects instance logs outside AWS.
That is the basics of the metadata for an instance. Those two pieces collect the most important bits. The best option would be CloudTrail logs, but that is fodder for a future post.
Now on to the instance itself. While you might log into it and poke around, I focused on classical storage forensics. There are four steps:
- Take a snapshot of all storage volumes.
- Launch an instance to examine the volumes.
- Attach the volumes.
- Perform your investigation.
If you want to test any of this, feel free to use the snapshot of the hacked instance that was running in my account (well, one of 10 instances). The snapshot ID you will need is
Snapshot the storage volumes
I will show all this using the web interface, but you can also manage all of it using the command line or API (which is how I now do it, but that code wasn’t ready when I had my incident).
There is a slightly shorter way to do this in the web UI by going straight to volumes, but that way is easier to botch, so I will show the long way and you can figure out the shorter alternative yourself.
- Click Instances in your EC2 management console, then check the instance to examine.
- Look at the details on the bottom, click the Block Devices, then each entry. Pull the Volume ID for every attached volume.
- Switch to Volumes and then snapshot each volume you identified in the steps above.
- Label each snapshot so you remember it. I suggest date and time, “Forensics”, and perhaps the instance ID.
You can also add a name to your instance, then skip direct to Volumes and search for volumes attached to it.
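As noted above, all of this can also be done from the command line – here is a sketch of the snapshot steps, with the live calls commented out (the volume ID is a placeholder; the instance ID is the one from this post):

```shell
#!/bin/sh
# Identify and snapshot the volumes attached to the compromised instance.
INSTANCE_ID="i-5203422c"

# List the volumes attached to the instance (uncomment to run):
# aws ec2 describe-volumes \
#   --filters "Name=attachment.instance-id,Values=$INSTANCE_ID" \
#   --query 'Volumes[].VolumeId' --output text

# Snapshot each one with a label you will remember:
# aws ec2 create-snapshot --volume-id vol-0123abcd \
#   --description "$DESCRIPTION $(date -u +%Y-%m-%dT%H:%MZ)"
DESCRIPTION="Forensics $INSTANCE_ID"
echo "$DESCRIPTION"
```

Scripting it this way also means you can snapshot every attached volume in one pass instead of clicking through each one.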
Remember, once you take a snapshot, it is read-only – you can create as many copies as you like to work on without destroying the original. Creating a volume from a snapshot doesn’t alter the snapshot; you get a fresh copy of the data carved out of storage each time.
Snapshots don’t capture volatile memory, so if you need RAM you need to either play with the instance itself or create a new image from that instance and launch it – perhaps the memory will provide more clues. That is a different process for another day.
Launch a forensics instance
Launch the operating system of your choice, in the same region as your snapshot. Load it with whatever tools you want. I did just a basic analysis by poking around.
Attach the storage volumes
- Go to Snapshots in the management console.
- Click the one you want, right-click, and then “Create Volume from Snapshot”. Make sure you choose the same Availability Zone as your forensics instance.
- Seriously, make sure you choose the same Availability Zone as your instance. People always mess this up. (By ‘people’, I of course mean ‘I’).
- Go back to Volumes.
- Select the new volume when it is ready, and right click/attach.
- Select your forensics instance. (Mine is stopped in the screenshot – ignore that).
- Set a mount point you will remember.
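For reference, here is the CLI version of the create-and-attach steps – every identifier below is a placeholder, and the live calls are commented out so nothing runs against a real account:

```shell
#!/bin/sh
# Create a working volume from the snapshot and attach it to the
# forensics instance. The AZ must match the forensics instance -- this
# is the step people (meaning I) always mess up.
AZ="us-east-1a"
DEVICE="/dev/sdf"

# aws ec2 create-volume --snapshot-id snap-0123abcd --availability-zone "$AZ"
# aws ec2 attach-volume --volume-id vol-0abc1234 \
#   --instance-id i-0ffens1cs --device "$DEVICE"
echo "attach at $DEVICE in $AZ"
```
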
Perform your investigation
- Create a mount point for the new storage volumes, which are effectively external hard drives – for example, sudo mkdir /forensics.
- Mount the new drive, e.g., sudo mount /dev/xvdf1 /forensics. Amazon may change the device mapping when you attach the drive (technically your operating system does that, not AWS, and you get a warning when you attach).
- Remember to use sudo bash (or the appropriate equivalent for your OS) if you want to peek into a user account on the attached volume.
And that’s it. Remember you can mess with the volume all you want, then attach a new one from the snapshot again for another pristine copy. If you need a legal trail my process probably isn’t rigorous enough, but there should be enough here that you can easily adapt.
Again, try it with my snapshot if you want some practice on something with interesting data inside. And after RSA check back for a tool which automates nearly all of this.
Posted at Monday 13th January 2014 2:35 pm
(0) Comments •