Apple’s Very Different BYOD Philosophy

I am currently polishing off the first draft of my Data Security for iOS 7 paper, and reached one fascinating conclusion during the research which I want to push out early. The approach Apple is implementing is very different from the way we normally view BYOD. Apple's focus is on providing a consistent, non-degraded user experience while still allowing enterprise control. Apple enforces this by taking an active role in mediating mobile device management between the user and the enterprise, treating both as equals. We haven't really seen this before – even when companies like BlackBerry handle aspects of security and MDM, they don't simultaneously treat the device as something the user owns. Enough blather – here you go…

Apple has a very clear vision of the role of iOS devices in the enterprise. There is BYOD, and there are enterprise-owned devices, with nearly completely different models for each. The owner of the device defines the security and management model.

In Apple's BYOD model users own their devices, enterprises own enterprise data and apps on devices, and the user experience never suffers. No dual personas. No virtual machines. A seamless experience, with data and apps intermingled but sandboxed. The model is far from perfect today, with one major gap, but iOS 7 is the clearest expression of this direction yet, and only the foolish would expect Apple to change any time soon.

Enterprise-owned devices support absolute control by IT, down to the new-device provisioning experience. Organizations can degrade features as much as they want and need, but the devices will, as much as allowed, still provide the complete iOS experience.

In the first case users allow the enterprise space on their device, while the enterprise allows users access to enterprise resources; in the second model the enterprise owns everything. The split is so clear that it is actually difficult for the enterprise to implement supervised mode on an employee-owned device.

We will explain the specifics as we go along, but here are a few examples to highlight the different models.

On employee-owned devices:

  • The enterprise sends a configuration profile that the user can choose to accept or decline. If the user accepts it, certain minimal security can be required, such as passcode settings.
  • The user gains access to their corporate email, but cannot move messages to other email accounts without permission.
  • The enterprise can install managed apps, which can be set to only allow data to flow between them and managed accounts (email). These may be enterprise apps or enterprise licenses for other commercial apps. If the enterprise pays for it, they own it.
  • The user otherwise controls all their personal accounts, apps, and information on the device. All this is done without exposing any user data (such as the user's iTunes Store account) to the enterprise.
  • If the user opts out of enterprise control (which they can do whenever they want) they lose access to all enterprise features, accounts, and apps. The enterprise can also erase its 'footprint' remotely whenever it wants.
  • The device is still tied to the user's iCloud account, including Activation Lock to prevent anyone, even the enterprise, from taking the device and using it without permission.

On enterprise-owned devices:

  • The enterprise controls the entire provisioning process, from before the box is even opened.
  • When the user first opens the box and starts their assigned device, the entire experience is managed by the enterprise, down to which setup screens display.
  • The enterprise controls all apps, settings, and features of the device, down to disabling the camera and restricting network settings.
  • The device can never be associated with a user's iCloud account for Activation Lock; the enterprise owns it.

This model is quite different from the way security and management were handled in iOS 6, and runs deeper than most people realize. While there are gaps, especially in the BYOD controls, it is safe to assume these will slowly be cleaned up over time through Apple's usual improvement process.


Friday Summary: January 17, 2014

Today I am going to write about tokenization. Four separate people have sent me questions about tokenization in the last week. As a security paranoiac I figured there was some kind of conspiracy or social engineering going on – this whole NSA/Snowden/RSA thingy has me spooked. But after I calmed down and realized that these are 'random' events, I recognized that the questions are good and relevant to a wider audience, so I will answer a few of them here on the blog. In no particular order:

"What is throttling tokenization?" and "How common is the 'PCI tokenization throttle function' in tokenization products and services?" I first heard about "throttling tokenization systems" and "rate limiting functions" from the card brands as a secondary security service. As I understand the intention, it is to provide, in case a payment gateway is compromised or an attacker gains access to a token service, a failsafe so someone couldn't siphon off the entire token database. My assumption was that this rate monitor/throttle would only be provided on de-tokenization requests or vault inquiries that return cardholder information. Maybe that's smart, because you'd have a built-in failsafe to limit information leakage. Part of me thinks this is more misguided guidance, as the rate limiting feature does not appear to be in response to any reasonable threat model – de-tokenization requests should be locked down and not available through general APIs in the first place! Perhaps I am not clever enough to come up with a compromise that would warrant such a response, but everything I can think of would (should) be handled in a different manner. Still, as I understand from conversations with people who are building tokenization platforms, the throttling functions are a) a DDoS protection and b) a defense against someone who figures out how to request all tokens in a database. And is it common? Not so far as I know – I don't know of any token service or product that builds this in; instead the function is provided by other fraud and threat analytics at the network and application layers. Honestly, I don't have inside information on this topic, and one of the people who asked this question should have had better information than I do.

Do you still write about tokenization? Yes. Are you aware of any guidance on the use of vault-less solutions? Are there any proof points or third-party validations of their security? For the audience: vault-less tokenization solutions do not store a database of generated tokens – they use a mathematical formula to generate them, so there is no need to store what can be easily derived. And to answer the question: no, I am not aware of any. That does not mean no third-party validation exists, but I don't follow these sorts of proofs closely. What's more, because the basic design of these solutions closely resembles a one-time pad or similar, conceptually they are very secure. The proof is always in the implementation, so if you need this type of validation have your vendor provide a third-party validation by people qualified for that type of analysis.

Why is "token distinguishability" discussed as a best practice? What is it, and which vendors provide it? Because PCI auditors need a way to determine whether a database is full of real credit cards or just tokens. This is a hard problem – tokens can and should be very close to the real thing. The goal for tokens is to make them as real as possible so you can use them in payment systems, but they will not be accepted as actual payment instruments. All the vendors potentially do this. I am unaware of any vendor offering a tool to differentiate real vs. tokenized values, but hope some vendors will step forward to help out.

Have you seen a copy of the tokenization framework Visa/Mastercard/etc. announced a few months back? No. As far as I know that framework was never published, and my requests for copies were met with complete and total silence. I did get snippets of information from half a dozen different people in product management or development roles – off the record – at Visa and Mastercard. It appears their intention was to define a tokenization platform that could be used across all merchants, acquirers, issuers, and related third parties. But this would be a platform offered by the brands to make tokenization an industry standard. On a side note, I really did think, from the way the PR announcement was phrased, that the card brands were shooting for a cloud identity platform to issue transaction tokens after a user self-identified to the brands. It looked like they wanted a one-to-one relationship with the buyer to disintermediate merchants out of the payment card relationship. That could be a very slick cloud services play, but apparently I was on drugs – according to my contacts there is no such effort.

And don't forget to RSVP for the 6th annual (really, the 6th? How time flies…) Securosis Disaster Recovery Breakfast during the RSA Conference. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted in DBaaS article.
  • Rich talks with Dennis Fisher about the Target breach (podcast).

Favorite Securosis Posts

  • Rich: Security Management 2.5: Negotiation. I hate negotiating. Some people live for it, but I can't be bothered. On that note, I need to go buy a car.
  • David Mortman: Firestarter: Crisis Communications.
  • Mike Rothman: Security Management 2.5: Negotiation. This is a great post. A solid plan for buying any big-ticket item.
  • Adrian Lane: Apple's Very Different BYOD Philosophy. Nobody else has covered this to my knowledge, but Rich describes it very clearly. Enterprise-owned devices are simpler, but iOS almost seamlessly handles the division between enterprise and user domains on BYOD gear. Not very dramatic, but simple and effective.

Other Securosis Posts

  • A Very Telling Antivirus Metric.
  • Reducing Attack Surface with Application Control: Use Cases and Selection Criteria.
  • Incite 1/15/2014: Declutter.
  • Advanced Endpoint and Server Protection:


Reducing Attack Surface with Application Control: Use Cases and Selection Criteria

In the first post in our Application Control series we discussed why it is hard to protect endpoints, and some of the emerging alternative technologies that promise to help us do better. Mostly because it is probably impossible to do a worse job of protecting endpoints, right? We described Application Control (also known as Application Whitelisting), one of these alternatives, while being candid about the perception and reality of this technology after years of use. Our conclusion was that Application Control makes a lot of sense in a variety of use cases, and can work in more general situations if the organization is willing to make some tradeoffs. This post describes the "good fit" use cases and mentions some of the features and functions that can make a huge difference to security and usability.

Use Cases

Given the breadth of ways computing devices are used in a typical enterprise, trying to use a generic set of security controls for every device doesn't make much sense. So first you spend some time profiling the main use models of these devices and defining some standard 'profiles', for which you can then design appropriate defenses. There are quite a few attributes you can use to define these use cases, but here are the main ones we usually see:

  • Operating System: You protect Windows devices differently than Macs than Linux servers, because each has a different security model and different available controls. When deciding how to protect a device, operating system is a fundamental factor.
  • Usage Model: Next look at how the device is used. Is it a desktop, kiosk, server, laptop, or mobile device? We protect personal desktops differently than kiosks, even if the hardware and operating system are the same.
  • Application Variability: Consider what kind of applications run on the device, as well as how often they change and are updated.
  • Geographic Distribution: Where is the device located? Do you have dedicated IT and/or security staff there? What is the culture, and do you have the ability to monitor and lock it down? Some countries don't allow device monitoring, and some security controls require permission from government organizations, so this must be a consideration as well.
  • Access to Sensitive Data: Do the users of these devices have access to sensitive and/or protected data? If so you may need to protect them differently. Likewise, a public device in an open area, with no access to corporate networks, may be able to make do with much looser security controls.

Using these types of attributes you should be able to define a handful (or two) of use cases, which you can use to determine the most appropriate means of protecting each device, trading off security against usability. Let's list a few of the key use cases where application control fits well.

OS Lockdown

When an operating system is at the end of its life and no longer receiving security updates, it is a sitting duck. Attackers have free rein to continue finding exploitable defects with no fear of patches to ruin their plans. Windows XP security updates officially end in April 2014 – after that, organizations still using XP are out of luck. (Like luck has anything to do with it…) We know you wonder why on Earth any organization serious about security – or even not so serious – would still use XP. It is a legitimate question, with reasonable answers. For one, some legacy applications still only run on XP. It may not be worth the investment – or even possible, depending on legal/ownership issues – to migrate to a modern operating system, so on XP they stay. A similar situation arises with compliance requirements to have applications qualified by a government agency. We see this a lot in healthcare, where the OS cannot even be patched without going through a lengthy and painful qualification process. That doesn't happen, so on XP it stays. Despite Microsoft's best efforts, XP isn't going away any time soon. Unfortunately that means XP will remain a common target for attackers, and organizations will have little choice but to protect vulnerable devices somehow. Locking them down may be one of the few viable options. In this situation using application control in default-deny mode, allowing only authorized applications to run, works well.

Fixed Function Devices

Another use case we see frequently for application control is fixed function devices, such as kiosks running embedded operating systems. Think of an ATM or payment station, where you don't see the underlying operating system. These devices only run a select few applications, built specifically for the device. In this scenario there is no reason for any software besides authorized applications to run. Customers shouldn't be browsing the Internet on an ATM. So application control works well to lock down kiosks. Similarly, some desktop computers in places like call centers and factory floors only run very stable and small sets of applications. Locking them down provides protection both from malware and from employees loading unauthorized software or stealing data. In both this use case and OS lockdown you will get little to no pushback from employees about their inability to load software. Nothing in their job description indicates they should be loading software or accessing anything but the applications they need to do their jobs. In these scenarios application control is an excellent fit.

Servers

Another clear use case for application control is on server devices. Servers tend to be dedicated to a handful of functions, so they can be locked down to those specific applications. Servers don't call the Help Desk to request access to iTunes, and admins can be expected to understand and navigate the validation process when they have a legitimate need for new software. Locking down servers can work very well – especially appealing because servers, as the repository of most sensitive data, are the ultimate target of most attacks.

General Purpose Devices

There has always been a desire to lock down general-purpose devices, which are among the most frequently compromised.


Security Management 2.5: Migration

If you made it this far, we know your old platform is akin to an old junker automobile: every day you drive to work in a noisy, uncomfortable, costly vehicle that may or may not get you where you need to be, and every time you turn around you're spending more money to fix something. With cars, figuring out what you want, shopping, getting financing, and then dealing with car salespeople is no picnic either, but in the end you do it to make your life a bit easier and yourself more comfortable. It is important to remember this because, at this stage of SIEM replacement, it feels like we have gone through a lot of work just so we can do more work to roll out the new platform. Let's step back for a moment and focus on what's important: getting stuff done as simply and easily as possible.

Now that you are moving to something else, how do you get there? The migration process is not easy, and it takes effort to move from the incumbent to the new platform. We have outlined a disciplined and objective process to determine whether it is worth moving to a new security management platform. Now we will outline a process for implementing the new platform and transitioning from the incumbent to the new SIEM. You need to implement, and migrate your existing environment to, the new platform, while maintaining service levels and without exposing your organization to additional risk. This may involve supporting two systems for a short while – or, in a hybrid architecture, using two systems indefinitely. Either way, when a customer puts his or her head on the block to select a new platform, the migration needs to go smoothly. There is no such thing as a 'flash' cutover. We recommend you start deploying the new SIEM long before you get rid of the old one. At best, you will deprecate portions of the older system after newer replacement capabilities are online, but you will likely want the older system as a fallback until all new functions have been vetted and tuned. We have learned the importance of this staging process the hard way. Ignore it at your own peril, keeping in mind that your security management platform supports several key business functions.

Plan

We offer a migration plan for moving to the new security management platform. It covers data collection as well as migrating/reviewing policies, reports, and deployment architectures. We break the migration process into two phases: planning and implementation. Your plan needs to be very clear and specific about when things get installed, how data gets migrated, when you cut over from old systems to new, and who performs the work. The Planning step leverages much of the work done up to this point in evaluating replacement options – you just need to adapt it for migration.

  • Review: Go back through the documents you created earlier. First consider your platform evaluation documents, which will help you understand what the current system provides and key deficiencies to address. These documents become the priority list for the migration effort, the basis for your migration task list. Next leverage what you learned during the PoC. To evaluate your new security management platform provider you conducted a mini deployment. Use what you learned from that exercise – particularly what worked and didn't – as input for subsequent planning, and address the issues you identified.
  • Focus on incremental success: What do you install first? Do you work top down or bottom up? Will you keep both systems operational throughout the entire migration, or shut down portions of the old one as each node migrates? We recommend using your deployment model as a guide. You can learn more about these models by checking out Understanding and Selecting a SIEM. When using a mesh deployment model, it is often easiest to make sure a single node/location is fully functional before moving on to the next. With ring architectures it is generally best to get the central SIEM platform operational, and then gradually add nodes around it until you reach the scalability limit of the central node. Hierarchical models are best deployed top-down, with the central server first, followed by regional aggregation nodes in order of criticality, down to the collector level. Break the project up to establish incremental successes and avoid dead ends.
  • Allocate resources: Who does the work? When will they do it? How long will it take to deploy the platform, data collectors, and/or log management support system(s)? This is also the time to engage professional services and enlist the new vendor's assistance. The vendor presumably does these implementations all day long, so they should have expertise at estimating these timelines. You may also want to engage them to perform some (or all) of the work in tandem with your staff, at least for the first few locations until you get the process down.
  • Define the timeline: Estimate the time it will take to deploy the servers, install the collectors, and implement your policies. Include time for testing and verification. There is likely to be some 'guesstimation', but you have some reasonable metrics to plan from, from the PoC and prior experience with SIEM. You did document the PoC, right? Plan the project commencement date and publish it to the team. Solicit feedback and adjust before commencing, because you need shared accountability with the operations team(s) to make sure everyone has a vested interest in success.
  • Preparation: We recommend you do as much work as possible before you begin migration, including construction of the rules and policies you will rely on to generate alerts and reports. Specify in advance any policies, reports, user accounts, data filters, backup schedules, data encryption, and related services you can. You already have a rule base, so leverage it to get going. Of course you'll tune things as you go, but why reinvent the wheel or rush unnecessarily? Keep in mind that you will always find something you failed to


Incite 1/15/2014: Declutter

As I discussed last week, the beginning of the year is a time for renewal and taking a look at what you will do over the next 12 months. Part of that renewal process should be clearing out the old so the new has room to grow. It's kind of like forest fires: the old dead stuff needs to burn down so the new can emerge. I am happy to say the Boss is on board with this concept of renewal – she has been on a rampage, reducing the clutter around the house. The fact is that we accumulate a lot of crap over the years, and at some point we kind of get overrun by stuff. Having been in our house almost 10 years, since the twins were infants, we have stuff everywhere. It's just the way it happens. Your stuff expands to take up all available space. So we still have stuff from when the kids were small, like FeltKids and lots of other games and toys that haven't been touched in years. It's time for that stuff to go. Since we have a niece a few years younger than our twins, and a set of nephews (yes, twins run rampant in our shop) who just turned 3, we have been able to get rid of some of the stuff. There is nothing more gratifying than showing up with a huge box of action figures that were gathering dust in our basement, and seeing the little guys' eyes light up. When we delivered our care package over Thanksgiving, they played with the toys for hours.

The benefit of decluttering is twofold. First, it gets the stuff out of our house and clears room for the next wave of stuff tweens need. I don't quite know what that will be, because iOS games don't seem to take up much room. But I'm sure they will accumulate something now that we have more room. And it's an ongoing process. If we can get through this stuff over the next couple of months, that will be awesome. As I said, you accumulate a bunch of crap over 10 years. The other benefit is the joy these things bring to others. We don't use this stuff any more. It's just sitting around. But another family without our good fortune could use this stuff. If these things bring half the joy and satisfaction they brought our kids, that's a huge win.

And it's not just stuff that you have. XX1 collected over 1,000 books for her Mitzvah project to donate to Sheltering Books, a local charity that provides books to homeless people living in shelters. She and I loaded up the van with boxes and boxes of books on Sunday, and when we delivered them there was great satisfaction in knowing that these books, which folks kindly donated to declutter their homes, would go to good use with people in need. And the books were out of my garage. So it was truly a win-win-win. Karma points and a decluttered garage. I'll take it.

–Mike

Photo credit: "home-office-reorganization-before-after" originally uploaded by Melanie Edwards

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Reducing Attack Surface with Application Control
  • The Double Edged Sword

Security Management 2.5: You Buy a New SIEM Yet?
  • Negotiation
  • Selection Process
  • The Decision Process
  • Evaluating the Incumbent
  • Revisiting Requirements
  • Platform Evolution
  • Changing Needs
  • Introduction

Advanced Endpoint and Server Protection
  • Assessment
  • Introduction

Newly Published Papers
  • What CISOs Need to Know about Cloud Computing
  • Defending Against Application Denial of Service
  • Security Awareness Training Evolution
  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer's Guide
  • The CISO's Guide to Advanced Attackers

Incite 4 U

  • Don't take it personally: Stephen Covey has been gone for years, but his 7 habits live on and on. Our friend George Hulme did a piece for CSO Online detailing the 7 habits of effective security pros. The first is communication and the second is business acumen. I'm not sure you even need to get to #3. Without the ability to persuade folks that security is important, within the context of a critical business imperative, nothing else matters. Of course then you have squishy stuff like creativity, and some repetitious stuff like "actively engaging with business stakeholders" – but that's different than business acumen. I guess it wouldn't have resonated as well if it were 5 habits, right? Another interesting one is problem solving. Again, not unique to security, but if you don't like to investigate stuff and solve problems, security isn't for you. One habit that isn't on there is don't take it personally. Security success depends on a bunch of other things going right, so even if you are blamed for a breach or outage, it is not necessarily your fault. Another might be "wear a mouthguard", because many security folks get kicked in the teeth pretty much every day. – MR
  • Out-of-control ad frenzy: Safari on my iPad died three times Saturday morning, and the culprit was advertisement plug-ins. My music stream halted when a McDonalds ad screeched at me from another site. I was not "lovin' it!" The 20 megabit pipe into my home and a new iPad were unable to manage fast page loads because of the turd parade of third-party ads hogging my bandwidth. It seems that in marketers' frenzy to know everything you do and push their crap on you, they forgot to serve you what you asked for. The Yoast blog offers a nice analogy, comparing online ads to brick-and-mortar merchants tagging customers with stickers, but it's more like carrying around a billboard. And that analogy


Advanced Endpoint and Server Protection: Assessment

As we described in the introduction to the Advanced Endpoint and Server Protection series, given the inability of most traditional security controls to defend against advanced attacks, it is time to reimagine how we do threat management. This new process has 5 phases; we call the first phase Assessment. We described it as:

Assessment: The first step is gaining visibility into all devices, data sources, and applications that present risk to your environment. And you need to understand the current security posture of anything to know how to protect it.

You need to know what you have, how vulnerable it is, and how exposed it is. With this information you can prioritize and design a set of security controls to protect it.

What's at Risk?

As we described in the CISO's Guide to Advanced Attackers, you need to understand what attackers would be trying to access in your environment, and why. Before you go into a long monologue about how you don't have anything to steal, forget it. Every organization has something that is interesting to some adversary. It could be as simple as compromising devices to launch attacks on other sites, or as focused as gaining access to your environment to steal the schematics for your latest project. You cannot afford to assume adversaries will not use advanced attacks – you need to be prepared either way. We call this Mission Assessment, and it involves figuring out what's important in your environment. This leads you to identify the interesting targets attackers are most likely to go after. When trying to understand what an advanced attacker will probably be looking for, there is a pretty short list:

  • Intellectual property
  • Protected customer data
  • Business operational data (proposals, logistics, etc.)
  • Everything else

To learn where this data is within the organization, you need to get out from behind your desk and talk to senior management and your peers. Once you understand the potential targets, you can begin to profile the adversaries likely to be interested in them. Again, we can put together a short list of likely attackers:

  • Unsophisticated: These folks favor smash and grab attacks, where they use publicly available exploits (perhaps leveraging attack tools such as Metasploit and the Social Engineer's Toolkit) or packaged attack kits they buy on the Internet. They are opportunists who take what they can get.
  • Organized Crime: The next step up the food chain is organized criminals. They invest in security research, test their exploits, and always have a plan to exfiltrate and monetize what they find. They are also opportunistic, but can be quite sophisticated in attacking payment processors and large-scale retailers. They tend to be most interested in financial data, but have been known to steal intellectual property if they can sell it, and/or use brute force approaches like DDoS threats for extortion.
  • Competitor: Competitors sometimes use underhanded means to gain advantage in product development and competitive bids. They tend to be most interested in intellectual property and business operations.
  • State-sponsored: Of course we all hear the familiar fretting about alleged Chinese military attackers, but you can bet every large nation-state has a team practicing offensive tactics. They are all interested in stealing all sorts of data – from both commercial and government entities. And some of them don't care much about concealing their presence.
Understanding likely attackers provides insight into their tactics, which enables you to design and implement security controls to address the risk. But before you can design the security control set, you need to understand where the devices are, as well as the vulnerabilities of devices within your environment. Those are the next two steps in the Assessment phase.

Discovery

This process finds the endpoints and servers on your network, and makes sure everything is accounted for. When performed early in the endpoint and server protection process, this helps avoid "oh crap" moments. It is no good when you stumble over a bunch of unknown devices – with no idea what they are, what they have access to, or whether they are steaming piles of malware. Additionally, an ongoing discovery process can shorten the window between something popping up on your network, you discovering it, and figuring out whether it has been compromised.

There are a number of techniques for discovery, including actively scanning your entire address space for devices and profiling what you find. This works well enough, and is traditionally the main way to do initial discovery. You can supplement active discovery with a passive discovery capability, which monitors network traffic and identifies new devices based on their network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities can be identified (as we will discuss below), but the primary goal of passive monitoring is to find new unmanaged devices faster. Passive discovery is also helpful for identifying devices hidden behind firewalls and on protected segments which active discovery cannot reach.

Another complicating factor for discovery – especially for servers – is cloud computing. With the ability to spin up and take down virtual instances – perhaps outside your data center – your platform needs to both track and assess cloud resources, which requires some means of accessing the cloud console(s) and figuring out which instances are in use. Finally, make sure to also pull data from existing asset repositories such as your CMDB, which Operations presumably uses to track all the stuff they think is out there. It is difficult to keep these data stores current, so this is no substitute for an active scan, but it provides a cross-check on what's in your environment.

Determine Security Posture

Once you know what's out there, you need to figure out whether it's secure. Or, more realistically, how vulnerable it is. That typically requires some kind of vulnerability scan of the devices you discovered. There are many aspects to vulnerability scanning – at the endpoint, server, and application layers – so we won't rehash all the research from Vulnerability Management Evolution. Check it out to understand how a


Security Management 2.5: Negotiation

You made your decision and kicked it up the food chain – now the fun begins. Well, fun for some people, anyway. For the first half of this discussion we will assume you have decided to move to a new platform, and offer tactics for negotiating for a replacement platform. But some people decide not to move, using the possible switch for negotiating leverage. There is nothing wrong with staying on your existing platform, so long as you have done the work to know it can meet your requirements. We are writing this paper for the people who keep telling us about their unhappiness, and how their evolving requirements have not been met. But after asking all the right questions, if the best answer is to stay put, that's a less disruptive path anyway.

Replacement tactics

For now, though, let's assume your current platform won't get you there. Now your job is to get the best price for the new offering. Here are a few tips for getting the best deal:

  • Time the buy: Yes, this is Negotiation 101. Wait until the end of the quarter and squeeze your sales rep for the best deal to get the PO in by the last day of the month. Sometimes it works, sometimes it doesn't. But it's worth trying. The rep may ask for your commitment that the deal will, in fact, get done that quarter. Make sure you can get it done if you pull this card.
  • Tell the incumbent they lost the deal: Next get the incumbent involved. Once you put in a call letting them know you are going in a different direction, they will usually respond. Not always, but generally the incumbent will try to save the deal. Then you can go back to the challenger and tell them they need to do a bit better, because you got this great offer from their entrenched competition. Just as when buying a car, to use this tactic you must be willing to walk away from the challenger and stay with the incumbent.
  • Look at non-cash add-ons: Sometimes the challenger can't discount any more. But you can ask for additional professional services, modules, boxes, licenses, whatever. With new data analytics, maybe your team lacks some in-house skills for a successful transition – the vendor can help. Remember, the incremental cost of software is zero, zilch, nada – so vendors can often bundle in a little more to get the deal when pushed to the wall.
  • Revisit service levels: Another non-cash sweetener could be an enhanced level of service. Maybe it's a dedicated project manager to get your migration done. Maybe it's the Platinum level of support, even if you pay for Bronze. Given the amount of care and feeding required to keep any security management platform tuned and optimized, a deeper service relationship could come in handy.
  • Dealing with your boss's boss: One last thing: be prepared for your recommendation to be challenged, especially if the incumbent sells a lot of other gear to your company. This entire process has prepared you for that call, so just work through the logic of your decision once more, making clear that your recommendation is best for the organization. But expect the incumbent to go over your head – especially if they sell a lot of storage or servers to your company.

Negotiating with the incumbent

Customers also need to consider that maybe staying is the best option for their organization, so knowing how to leverage both sides helps you make a better deal.
Dealing with an incumbent who doesn't want to lose business adds a layer of complexity to the decision, so customers need to be prepared for incumbent vendors trying to save the business; fortunately there are ways to leverage that behavior as the decision process comes to a conclusion. It would be naive not to prepare in case the decision goes the other way – due to pricing, politics, or any other reason beyond your control. So if you have to make the status quo work and keep the incumbent, here are some ideas for making lemonade from the proverbial lemon:

  • Tell the incumbent they are losing the deal: We know it is not totally above-board – but all's fair in love, war, and sales. If the incumbent didn't already know they were at risk, it can't hurt to tell them. Some vendors (especially the big ones) don't care, which is probably one reason you were looking at new stuff anyway. Others will get the wake-up call and try to make you happy. That's the time to revisit your platform evaluation and figure out what needs fixing.
  • Get services: If you have to make do with what you have, at least force the vendor's hand to make your systems work better. Asking a vendor for feature enhancement commitments will only add to your disappointment, but there are many options at your disposal. If your issue is not getting proper value from the system, push the incumbent to provide some professional services to improve the implementation. Maybe send your folks to training. Have their team set up a new set of rules and do knowledge transfer. We have seen organizations literally start over, which may make sense if your initial implementation is sufficiently screwed up.
  • Scale up (at lower prices): If scalability is the issue, confront that directly with the incumbent and request additional hardware and/or licenses to address the issue. Of course this may not be enough, but every little bit helps, and if moving to a new platform isn't an option, at least you can ease the problem a bit. Especially when the incumbent knows you were looking at new gear because of a scaling problem.
  • Add use cases: Another way to get additional value is to request additional modules thrown into a renewal or expansion deal. Maybe add the identity module or look at configuration auditing. Or work with the team to add database


Cloud Forensics 101

Last week I wrote up my near epic fail on Amazon Web Services, where I 'let' someone launch a bunch of Litecoin mining instances in my account. Since then I have received some questions about my forensics process, so I figure this is a good time to write up the process in more detail – specifically, how to take a snapshot and use it for forensic analysis. I won't cover all the steps at the AWS account layer – this post focuses on what you should do for a specific instance, not your entire management plane.

Metadata

The first step, which I skipped, is to collect all the metadata associated with the instance. There is an easy way, a hard way (walk through the web UI and take notes manually), and the way I'm building into my nifty tool for all this, which I will release at RSA (or sooner, if you know where to look). The best way is to use the AWS command line tools for your operating system. Then run the command aws ec2 describe-instances --instance-ids i-5203422c (inserting your instance ID). Note that you need to follow the instructions linked above to properly configure the tool and your credentials. I suggest piping the output to a file (e.g., aws ec2 describe-instances --instance-ids i-5203422c > forensic-metadata.log) for later examination. You should also get the console output, which is stored by AWS for a short period on boot/reboot/termination: aws ec2 get-console-output --instance-id i-5203422c. This might include a bit more information if the attacker mucked with logs inside the instance, but it won't be useful for a hacked instance because it is only a boot log. This is a good reason to use a tool that collects instance logs outside AWS. That covers the basics of the metadata for an instance – those two pieces collect the most important bits. The best option would be CloudTrail logs, but that is fodder for a future post.

Instance Forensics

Now on to the instance itself. While you might log into it and poke around, I focused on classical storage forensics. There are four steps:

  1. Take a snapshot of all storage volumes.
  2. Launch an instance to examine the volumes.
  3. Attach the volumes.
  4. Perform your investigation.

If you want to test any of this, feel free to use the snapshot of the hacked instance that was running in my account (well, one of 10 instances). The snapshot ID you will need is snap-ccd3e9c6.

Snapshot the storage volumes

I will show all this using the web interface, but you can also manage all of it using the command line or API (which is how I now do it, but that code wasn't ready when I had my incident). There is a slightly shorter way to do this in the web UI by going straight to Volumes, but that way is easier to botch, so I will show the long way and you can figure out the shorter alternative yourself.

  • Click Instances in your EC2 management console, then check the instance to examine.
  • Look at the details at the bottom, click Block Devices, then each entry. Pull the Volume ID for every attached volume.
  • Switch to Volumes, and then snapshot each volume you identified in the steps above.
  • Label each snapshot so you remember it. I suggest date and time, "Forensics", and perhaps the instance ID.

You can also add a name to your instance, then skip straight to Volumes and search for volumes attached to it. Remember, once you take a snapshot it is read-only – you can create as many copies as you like to work on without destroying the original. Creating a volume from a snapshot doesn't overwrite the snapshot; you just get another copy of the data to work with.
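If you would rather script the metadata collection and snapshots than click through the console, a minimal sketch along these lines (using the standard AWS CLI, with a placeholder instance ID and my own choice of output file names) captures the idea:

  #!/bin/bash
  # Sketch: grab instance metadata and snapshot every attached EBS volume.
  # Assumes the AWS CLI is installed and configured with credentials that can
  # read EC2 metadata and create snapshots. The instance ID is a placeholder.
  INSTANCE=i-5203422c

  # Preserve the instance metadata and recent console output for the case file.
  aws ec2 describe-instances --instance-ids "$INSTANCE" > forensic-metadata.log
  aws ec2 get-console-output --instance-id "$INSTANCE" > forensic-console.log

  # Snapshot each EBS volume currently attached to the instance.
  for VOL in $(aws ec2 describe-volumes \
      --filters Name=attachment.instance-id,Values="$INSTANCE" \
      --query 'Volumes[].VolumeId' --output text); do
    aws ec2 create-snapshot --volume-id "$VOL" \
      --description "Forensics $(date +%Y-%m-%d-%H%M) $INSTANCE $VOL"
  done

Each snapshot is read-only once created, so the loop leaves you with a set of pristine copies you can clone into working volumes as many times as you need.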
Snapshots don’t capture volatile memory, so if you need RAM you need to either play with the instance itself or create a new image from that instance and launch it – perhaps the memory will provide more clues. That is a different process for another day. Launch a forensics instance Launch the operating system of your choice, in the same region as your snapshot. Load it with whatever tools you want. I did just a basic analysis by poking around. Attach the storage volumes Go to Snapshots in the management console. Click the one you want, right-click, and then “Create Volume from Snapshot”. Make sure you choose the same Availability Zone as your forensics instance. Seriously, make sure you choose the same Availability Zone as your instance. People always mess this up. (By ‘people’, I of course mean ‘I’). Go back to Volumes. Select the new volume when it is ready, and right click/attach. Select your forensics instance. (Mine is stopped in the screenshot – ignore that). Set a mount point you will remember. Perform your investigation Create a mount point for the new storage volumes, which are effectively external hard drives. For example, sudo mkdir /forensics. Mount the new drive, e.g., sudo mount /dev/xvdf1 /forensics. Amazon may change the device mapping when you attach the drive (technically your operating system does that, not AWS, and you get a warning when you attach). Remember, use sudo bash (or the appropriate equivalent for your OS) if you want to peek into a user account in the attached volume. And that’s it. Remember you can mess with the volume all you want, then attach a new one from the snapshot again for another pristine copy. If you need a legal trail my process probably isn’t rigorous enough, but there should be enough here that you can easily adapt. Again, try it with my snapshot if you want some practice on something with interesting data inside. And after RSA check back for a tool which automates nearly all of this. Share:


Security Management 2.5: Selection Process

With vendor evaluations in hand, you are ready to make your decision, right? The answer is both yes and no. We know the importance of this decision – you are here because your first attempt at this project wasn't as successful as it needed to be. After the vendor evaluation process you are in a position to distinguish innovative technologies from pigs with fresh lipstick. But now you need to see which of the vendors is actually the best fit for you. Successful decision-making on SIEM replacement goes beyond vendor evaluation – it entails evaluating yourself too. It is important to differentiate between the two, because you cannot make a decision without taking a long hard look at yourself, your team, and your company. This is an area where many projects fail, so let's break the decision down to ensure you can make a good recommendation and feel comfortable with it – from both internal and external perspectives. But remember that the selection of the 'right' vendor may come down to more than matching needs against capabilities. The output of our Security Management 2.5 process is not really a decision – it's more of a recommendation. The final decision will likely be made in the executive suite. That's why we focused so much on gathering data (quantitative where possible) – you will need to defend your recommendation until the purchase order is signed. And probably afterwards.

Defensible Position

We won't mince words. This decision generally isn't about objective or technical facts – especially since most of you reading this have an incumbent in play, typically part of a big company with important relationships with heavies inside your shop. This could get political, or the decision might be entirely financial, so you need your ducks in a row and a compelling argument for any change. And even then you might not be able to push through a full replacement. In that case the answer might be to supplement. In this scenario you still aggregate information with the existing platform, but then you feed it to the new platform for analysis, reporting, forensics, etc. across the enterprise. Given the economic cost of running both, this is unacceptable for some organizations, but if your hands are tied on replacement, this kind of creative approach is worth considering.

But that is still only the external part of the decision process. In many cases the (perceived) failure of your existing SIEM may be self-inflicted. So we also need to evaluate and explain the causes of the failure, with assurance that you can avoid those issues this time. If not, your successor will be in the same boat in another 2-3 years. So before you put your neck on the chopping block and advocate for change (if that is what you decide), do some deep internal analysis as well.

Looking in the mirror

First, let's make sure you really re-examined the existing platform in terms of the original goals. Did your original goals adequately map to your needs at the time, or was there stuff you did not anticipate? How have your goals changed over time? Be honest! Do not let ego get in the way of doing what's right, and take a hard, fresh look at the decision to ensure you don't repeat previous mistakes. Did you kick off this process because you were pissed at the original vendor? Or because they got bought and seemed to forget about the platform? Do you know what it will take to get the incumbent where it needs to be – and whether that is even possible? Is it about throwing professional services at the issues?
Is there a fundamental technology problem? Did you assess the issues critically the first time around? If it was a skills issue, have you successfully addressed it? Can your folks build and maintain the platform moving forward? Are you looking at a managed service to take that concern off the table? If it was a resource problem, do you now have enough staff for proper care and feeding? Yes, the new generation of platforms requires less expertise to keep operational, but don't be naive – no matter what any sales rep says, you cannot simply set and forget them. Whatever you pick will require expertise to deploy, manage, tune, and analyze reports. These platforms are not self-aware – not by a long shot.

Remember, there are no right or wrong answers here, but the truth (and your commitment) will become clear when you need to sell something to management. Some of you may worry that management will see the need for replacement as "your fault" for choosing the incumbent, so make sure you have answers to these questions and that you aren't falling into a self-delusional trap. You need your story straight and your motivations clear. Have a straightforward and honest assessment of what is going right and wrong, so you are not caught off guard when asked to justify changes and new expenses.

Setting Expectations

Revisiting requirements provides insight into what you need the security management platform to do. Remember, not everything is Priority #1, so pick your top three must-have items and prioritize the requirements. You can prioritize specific use cases (compliance, security, forensics, operations), and get a pretty good feeling about whether the new platform or the incumbent will meet your expectations. If you love some new features of the challenger(s), will your organization actually leverage them? Firing off alerts faster won't help if your team takes a week to investigate each issue, or cannot keep up with the increased demand. The new platform's ability to look at application and database traffic doesn't matter if your developers won't help you understand normal behavior to build the rule set. Fancy network flow analysis can be a productivity sink if your DNS and directory infrastructure is a mess and you can't reliably map an IP address to a user ID. Does your existing product have too many features? Yes, some organizations simply cannot take advantage of (or


Reducing Attack Surface with Application Control: The Double-Edged Sword [New Series]

The problems of protecting endpoints are pretty well understood. As we described in The 2014 Guide to Endpoint Security, you have stuff (private data and/or intellectual property) that others want. On the other hand, you have employees who need to do their jobs and require access to said private data and/or intellectual property. Those employees have sensitive data on their devices, so you need to protect their endpoints. It's not like this is anything new. Protecting endpoints has been a focus of security professionals since, well, always – with decidedly unimpressive results.

Why is protecting endpoints so hard? It can't be a matter of effort, right? Billions have been spent on research to identify better ways to protect these devices. Organizations have spent tens of billions on endpoint security products and services. Yet every minute more devices are compromised, more data is stolen, and security folks keep having to answer to senior management, regulators, and ultimately customers as to why this keeps happening.

The lack of demonstrable progress comes down to two intertwined causes. First, devices are built using software that has defects attackers can exploit. Nothing is perfect, especially not software, so every line of code presents attack surface. Second, employees can be fooled into taking action (such as installing software or clicking a link) that results in a successful attack. These two causes can't really be separated. If the device isn't vulnerable, then nothing an employee does should result in a successful attack. And likewise, if the employee doesn't allow delivery of the attack/exploit code by clicking things, having vulnerable software is less of an issue. So if you can disrupt either cause, your endpoints will be far better protected. Of course this is much easier said than done.

In this new series, "Reducing Attack Surface with Application Control," we will dig into the good and bad of application control (also known as application whitelisting) technology, talking about how AppControl can stop malware in its tracks and mitigate the risks of both vulnerable software and gullible users. We won't shy away from addressing head-on the perception issues of endpoint lockdown, which cause many organizations to disregard the technology as infeasible in their environments. Finally, we will discuss use cases where AppControl makes a lot of sense, and how it can favorably impact security posture, both reducing the attack surface of vulnerable devices and protecting users from themselves.

Accelerating Attacker Innovation

We mentioned the billions of dollars being spent on research to protect endpoint devices more effectively. It is legitimate to ask why these efforts haven't really worked. It comes back to attackers innovating faster than defenders. And even if technology emerges to protect devices more effectively, it takes years for new technologies to become pervasive enough to blunt the impact of attackers across a broad market. The reactive nature of traditional malware defenses – finding an attack, profiling it, and developing a signature to block it on the device – makes existing mitigations too little, too late. Attackers now randomly change what attacks look like using polymorphic malware, so looking for malware files cannot solve the problem. Additionally, attackers have new and increasingly sophisticated means to contact their command and control (C&C) systems and obscure data during exfiltration, making detection all the harder.
Attackers also do a lot more testing now to make sure their attacks work before they use them. Endpoint security technologies can be bought for a very small investment, so attackers refine their malware to ensure it works against the majority of defenses in use. This causes security professionals to look at different ways of breaking the kill chain, as we described in The CISO's Guide to Advanced Attackers. You can do this a couple of different ways:

  • Impede Delivery: If the attacker cannot deliver the attack to a vulnerable device, the chain is broken. This involves effectively stopping tactics like phishing, either by blocking the email before it gets to an employee or by training employees not to click things that would result in malware delivery.
  • Stop Compromise: Even if the attack does reach a device, if it cannot execute and exploit the device, the chain is broken. This involves a different approach to protecting endpoints, and will be the main focus of this series.
  • Block C&C: If the device is compromised, but cannot contact the command and control infrastructure to receive instructions and additional attack code, the impact of the attack is reduced. This requires the ability to analyze all outbound network traffic for C&C patterns, as well as watching for contact with networks with bad reputations. We discussed many of these tactics in our Network-based Threat Intelligence research.
  • Block Exfiltration: The last defense is to stop the exfiltration of data from your environment. Whether via data leak prevention technology or some other means of content or egress filtering to detect protected content, if you can stop data from leaving your environment there is no loss.

The earlier you break the kill chain, the better. But in the real world you are best served by a multi-faceted approach encompassing all the options listed above. Now let's dig into the Stop Compromise strategy for breaking the kill chain, which is really where application control fits into the security control hierarchy.

Stop Code Execution. Stop Malware.

The main focus of anti-virus and anti-malware technology since the beginning has been to stop malicious code from executing on a device, thus stopping compromise. What has been evolving is how the malware is detected, and what parts of the device the software can access. There are currently a handful of approaches:

  • Block the Bad: This is the traditional AV approach of matching malware signatures against code executing on the device. The problem is scale, because there is so much bad that you cannot possibly expect an endpoint to check for every attack since the beginning of time.
  • Improve Heuristics: It is impossible to block all malware because it is constantly changing, so you need to focus on what

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote that appears in vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided with an outline of our research positions and the planned research product, so they can determine whether it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.