Exploit U

It seems universities are the latest victims of targeted attackers looking for a preview of the next set of technologies to come out of the major research universities. But protecting these networks is a herculean task, given the open nature of university operations, which are driven by collaboration and sharing – it is tough to protect things that are fundamentally open. “A university environment is very different from a corporation or a government agency, because of the kind of openness and free flow of information you’re trying to promote,” said David J. Shaw, the chief information security officer at Purdue University. “The researchers want to collaborate with others, inside and outside the university, and to share their discoveries.” So what can these folks do to protect themselves? One suggestion in the article is to not take sensitive research on laptops to certain countries. Uh, it’s not like those folks can’t get into the networks through the front door. So, like in the commercial world, try to make it as hard as possible for attackers to get at the good stuff. Mr. Shaw, of Purdue, said that he and many of his counterparts had accepted that the external shells of their systems must remain somewhat porous. The most sensitive data can be housed in the equivalent of smaller vaults that are harder to access and harder to move within, use data encryption, and sometimes are not even connected to the larger campus network, particularly when the work involves dangerous pathogens or research that could turn into weapons systems. Vaults? I like that idea. Photo credit: “b is for back to school” originally uploaded by lamont_cranston


Endpoint Security Buyer’s Guide: Endpoint Hygiene and Reducing Attack Surface

As we mentioned in the last post, anti-malware tends to be the anchor in endpoint security control sets. Given typical attacks, that is justified, but too many organizations forget the importance of keeping devices up to date and configured securely. Even “advanced attackers” don’t like to burn 0-day attacks when they don’t need to, so leaving long-patched vulnerabilities exposed, or keeping unnecessary services active on endpoints, makes it easy for them to own your devices. The progression in almost every attack – regardless of the attacker’s sophistication – is to compromise a device, gain a foothold, and then systematically move towards the target. By ensuring proper hygiene on devices you can reduce attack surface; if attackers want to get in, make them work for it. When we say ‘hygiene’ we are referring to three main functions: patch management, configuration management, and device control. We will offer an overview of each function, and then discuss some technical considerations involved in the buying decision. For more detail on patch and configuration management, see Implementing and Managing Patch and Configuration Management.

Patch Management

Patch managers install fixes from software vendors to address vulnerabilities. The best-known patching process is Microsoft’s monthly cycle: on Patch Tuesday Microsoft issues a variety of software fixes to address defects, many of which could result in exploitation of their systems. Many other vendors have adopted similar approaches, with a periodic patch cycle and out-of-cycle patches for important issues – generally when an exploit shows up in the wild. Once a patch is issued your organization needs to assess it, figure out which devices need to be patched, and install it within the window specified by policy – typically a few days. A patch management product scans devices, installs patches, and reports on the success or failure of the process. Our Patch Management Quant research provides a very detailed view of the patching process, so check it out for more information.

Patch Management Technology Considerations

  • Coverage (OS and applications): Your patch management offering needs to support the operating systems and applications you need to keep current.
  • Discovery: You cannot patch what you don’t know about, so you need a way to identify new devices and get rid of deprecated devices – otherwise the process will fail. You can achieve this with a built-in discovery capability, bidirectional integration with vulnerability management (for active and passive monitoring for new devices), asset management and inventory software, or more likely all of the above.
  • Library of patches: Another facet of coverage is accuracy and support of the operating systems and applications you use. We talk about the big 7 vulnerable applications (browsers, Java, Adobe Reader, Word, Excel, PowerPoint, and Outlook) – ensure those targeted applications are covered. Keep in mind that the word ‘supported’ on a vendor’s data sheet doesn’t mean they support it well. Be sure to test the vendor’s patch library and check the timeliness of their updates. How long do they take to package and deploy patches to customers after a patch is released?
  • Reliable deployment of patches: If patches don’t install consistently – including updating, adding, and/or removing software – that means more work for you. This can easily make a tool more trouble than it’s worth. Do they get it right the first time?
  • Agent vs. agentless: Does the patch vendor assess devices with an agent, or do they perform ‘agentless’ scanning (typically using a non-persistent or ‘dissolvable’ agent), and if so how do they deploy patches? This is almost a religious dispute, but fortunately both models work. If the patch manager requires an agent it should be integrated with any other endpoint agents (anti-malware, device control, etc.) to minimize the number of agents per endpoint.
  • Remote devices: How does the patching process work for remote and disconnected devices? This includes field employees’ laptops as well as devices in remote locations with limited bandwidth. What features are built in to ensure the right patches are deployed, regardless of location? Can you be alerted when a device hasn’t updated within a configurable window – perhaps because it hasn’t connected?
  • Deployment architecture: Some patches are gigabytes in size, so flexibility in distribution is important – especially for remote devices and locations. Architectures may include intermediate patch distribution points to minimize network bandwidth, as well as intelligent packaging to install only the appropriate patches on each device.
  • Scheduling flexibility: Of course disruptive patching must not impair productivity, so you should be able to schedule patches during off hours and when machines are idle.
  • Value-add: As you consider a patch management tool make sure you fully understand its value-add – what distinguishes it from low-end and low-cost (free) operating-system-based tools such as Microsoft’s WSUS. Make sure the tool supports your process and provides the capabilities you need.

Configuration Management

Configuration management enables an organization to define an authorized set of configurations for devices. These configurations govern the applications installed, device settings, running services, and on-device security controls. This is important because unauthorized configuration changes might indicate malware manipulation or an operational error that leaves the device exploitable. Additionally, configuration management can help ease the provisioning burden of setting up and reimaging devices in case of malware infection.

Configuration Management Technology Considerations

  • Coverage (OS and applications): Your configuration management offering needs to support your operating systems. Enough said.
  • Discovery: You cannot manage devices you don’t know about, so you need a way to identify new devices and get rid of deprecated devices – otherwise the process will fail. You can achieve this with a built-in discovery capability, bidirectional integration with vulnerability management (for active and passive monitoring for new devices), asset management and inventory software, or more likely all of the above.
  • Supported standards and benchmarks: The more built-in standards and/or configuration benchmarks the tool offers, the better your chance of finding something you can easily adapt to your own requirements. This is especially important for highly regulated environments which need to support and report on multiple regulatory hierarchies.
  • Policy editing: Policies generally require customization to satisfy requirements. Your configuration management tool should offer a flexible policy editor to define policies and add new baseline configurations.
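To make the hygiene functions concrete, here is a minimal sketch – in Python, with entirely hypothetical device, patch, and policy data – of the two checks described above: flagging devices with required patches still missing beyond the policy window, and flagging configuration drift in the form of unauthorized services. A real patch or configuration manager gathers this data via agents or scans and does far more, but the core compliance logic looks roughly like this.

```python
# Minimal sketch of the hygiene checks described above: flag devices with
# required patches still missing past the policy window, and configuration
# drift (unauthorized services). Data shapes and names are hypothetical --
# real patch/configuration managers collect this via agents or scans.
from datetime import date

POLICY = {
    "patch_window_days": 5,                       # install within 5 days of release
    "authorized_services": {"ssh", "backup-agent"},  # example baseline for this device class
}

REQUIRED_PATCHES = {            # patch id -> vendor release date
    "MS13-055": date(2013, 7, 9),
    "JAVA-7u25": date(2013, 6, 18),
}

DEVICES = [                     # hypothetical inventory snapshot
    {"name": "sales-lt-042", "installed": {"MS13-055"}, "services": {"ssh", "telnet"}},
    {"name": "eng-ws-007", "installed": {"MS13-055", "JAVA-7u25"}, "services": {"ssh"}},
]

def hygiene_report(devices, today):
    findings = []
    for dev in devices:
        # Patch compliance: required patches missing beyond the policy window
        for patch, released in REQUIRED_PATCHES.items():
            overdue = (today - released).days - POLICY["patch_window_days"]
            if patch not in dev["installed"] and overdue > 0:
                findings.append((dev["name"], f"{patch} overdue by {overdue} days"))
        # Configuration drift: services running outside the authorized baseline
        for svc in dev["services"] - POLICY["authorized_services"]:
            findings.append((dev["name"], f"unauthorized service: {svc}"))
    return findings

if __name__ == "__main__":
    for device, issue in hygiene_report(DEVICES, today=date(2013, 7, 20)):
        print(f"{device}: {issue}")
```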


Endpoint Security Buyer’s Guide: Anti-Malware, Protecting Endpoints from Attacks

After going over the challenges of protecting those pesky endpoints in the introductory post of the Endpoint Security Buyer’s Guide, it is now time to turn our attention to the anchor feature of any endpoint security offering: anti-malware. Anti-malware technologies have been much maligned. In light of the ongoing (and frequently successful) attacks on devices ‘protected’ by anti-malware tools, we need some perspective – not only on where anti-malware has been, but where the technology is going, and how that impacts endpoint security buying decisions.

History Lesson: Reacting No Bueno

Historically, anti-malware technologies have utilized virus signatures to detect known bad files – a blacklist. It’s ancient history at this point, but as new malware samples accelerated to tens of thousands per day, this model broke. Vendors could neither keep pace with the number of files to analyze nor update their hundreds of millions of deployed AV agents with gigabytes of signatures every couple of minutes. So anti-malware vendors started looking at new technologies to address the limitations of the blacklist, including heuristics to identify attack behavior within endpoints, and various reputation services to identify malicious IP addresses and malware characteristics. But the technology is still inherently reactive. Anti-malware vendors cannot protect against any attack until they see and analyze it – either the specific file, or recognizable and identifiable tactics or indicators to watch for. They need to profile the attack and push updated rules down to each protected endpoint. “Big data” signature repositories in the cloud, cataloging known files both safe and malicious, have helped address the issues around distributing billions of file hashes to each AV agent. If an agent sees a file it doesn’t recognize, it asks the cloud for a verdict. But that’s still a short-term workaround for a fundamental issue with blacklists. In light of modern randomly mutating polymorphic malware, expecting to reliably match identifiable patterns would be unrealistic – no matter how big a signature repository you build in the cloud. Blacklists can block simple attacks using common techniques, but are completely ineffective against advanced malware attacks from sophisticated adversaries. Anti-malware technology needs to evolve, and it cannot rely purely on file hashes. We described the early stages of this evolution in Evolving Endpoint Malware Detection, so we will summarize here.

Better Heuristics

You cannot depend on reliably matching what a file looks like – you need to pay much more attention to what it does. This is the concept behind the heuristics that anti-malware offerings have been built on in recent years. The issue with those early heuristic offerings was having enough context to know whether an executable was taking a legitimate action. Malicious actions were defined generically for a device, generally based on operating system characteristics, so false positives (blocking a legitimate action) and false negatives (failing to block an attack) were both common: lose/lose. The heuristics have since evolved to factor in authorized application behavior. This advancement has dramatically improved heuristic accuracy, because rules are built and maintained for each application. Okay, not every application, but at least the 7 applications targeted most often by attackers (browsers, Java, Adobe Reader, Word, Excel, PowerPoint, and Outlook). These applications have been profiled to identify authorized behavior, and anything unauthorized is blocked. Sound familiar? Yes, this is a type of whitelisting: only authorized activities are allowed. By understanding all the legitimate functions within a constrained universe of frequently targeted applications, a significant chunk of attack surface can be eliminated. To use a simple example, there really aren’t any good reasons for a keylogger to capture keystrokes while you fill out a form on a banking website. And it is decidedly fishy to take a screen grab of a form with PII on it. These activities would have been missed previously – both screen grabs and reading keyboard input are legitimate functions – but context enables us to recognize and stop them. That doesn’t mean attackers won’t continue targeting operating system vulnerabilities, other applications (outside the big 7), or employees with social engineering. But this approach has made a big difference in the efficacy of anti-malware technology.

Better Isolation

The next area of innovation on endpoints is the sandbox. We have talked about sandboxing malware within a Network-based Malware Detection Device, which also enables you to focus on what the file does before it is executed on a vulnerable system. But isolation zones for testing potentially malicious code are appearing on endpoints as well. The idea is to spin up a walled garden for a limited set of applications (the big 7, for example) that shields the rest of the device from those applications. Many of us security-aware individuals have been using virtual machines on our endpoints to run these risky applications for years. But this approach only suited the technically savvy, and never saw broad usage within enterprises. To find any market success, isolation products must maintain a consistent user experience. It is still pretty early for isolation technologies, but the approach – even down to virtualizing different processes within the OS – shows promise. It is definitely one to keep an eye on. Of course it is important to keep in mind that sandboxes are not a panacea. If the isolation technology utilizes any base operating system services (network stacks, printer drivers, etc.), the device is still vulnerable to attacks on those services – even when running in an isolated environment. So an isolation technology doesn’t mean you don’t have to manage the hygiene (patching & configuration) on the device, as we will discuss in the next post.

Total Lockdown

Finally, there is the total lockdown option: defining an authorized set of applications/executables that can run on the device and blocking everything else. This Application Whitelisting (AWL) approach has been around for 10+ years, but remains a niche use case for endpoint protection. It is not mainstream and unlikely ever to be, because it breaks the end-user experience, and end users don’t like it much. If an application an employee wants to run isn’t authorized, they are out of business – unless either IT manages a very quick
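To illustrate the application-profile idea described above, here is a minimal sketch in Python. The profiles, application names, and event fields are made up for illustration – real products build these profiles through extensive analysis of each targeted application – but the decision logic is essentially a per-application whitelist of authorized behavior.

```python
# Minimal sketch of per-application behavioral whitelisting: each frequently
# targeted application gets a profile of authorized actions, and anything
# outside the profile is blocked. Profiles and event fields are hypothetical.

APP_PROFILES = {
    # application -> set of actions it legitimately performs
    "browser.exe": {"open_url", "write_cache", "render_form"},
    "reader.exe": {"open_pdf", "render_page", "print"},
}

def verdict(event):
    """Return 'allow' or 'block' for an observed (application, action) pair."""
    profile = APP_PROFILES.get(event["app"])
    if profile is None:
        return "allow"          # unprofiled apps fall back to other controls
    return "allow" if event["action"] in profile else "block"

if __name__ == "__main__":
    events = [
        {"app": "browser.exe", "action": "render_form"},         # legitimate
        {"app": "browser.exe", "action": "capture_keystrokes"},  # keylogger behavior
        {"app": "reader.exe", "action": "capture_screen"},       # screen grab of PII
    ]
    for e in events:
        print(e["app"], e["action"], "->", verdict(e))
```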


New Paper: Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment

Detecting malware feels like a losing battle. Between advanced attacks, innovative attackers, and well-funded state-sponsored and organized crime adversaries, organizations need every advantage they can get to stop the onslaught. We first identified and documented Network-Based Malware Detection (NBMD) devices as a promising technology back in early 2012, and they have made a difference in detecting malware at the perimeter. Of course nothing is perfect, but every little bit helps. But nothing stays static in the security world, so NBMD technology has evolved with the attacks it needs to detect. So we updated our research to account for changes in scalability, accuracy, and deployment: The market for detecting advanced malware on the network has seen rapid change over the 18 months since we wrote the first paper. Compounding the changes in attack tactics and control effectiveness, the competition for network-based malware protection has dramatically intensified, and every network security vendor either has introduced a network-based malware detection capability or will soon. This creates a confusing situation for security practitioners who mostly need to keep malware out of their networks, and are uninterested in vendor sniping and trash talking. Accelerating change and increasing confusion usually indicate it is time to wade in again and revisit findings to ensure you understand the current decision criteria – in this case, for detecting malware on your network. So this paper updates our original research to make sure you have the latest insight for your buying decisions. The landing page is in our Research Library. You can also download Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment (PDF) directly. We would like to thank Palo Alto Networks for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you without cost, without companies supporting our research.


Incite 7/17/2013: 80 años

If you want a feel for how long 80 years is, here are a few facts. In 1933, the President was Herbert Hoover until March, when FDR became President. The Great Depression was well underway in the US and spreading around the world. Hitler first rose to power in Germany. And Prohibition was repealed in the US. I’ll certainly drink to that. Some famous folks were born in 1933 as well. Luminaries such as Joan Collins, Larry King, and Yoko Ono. Have you seen Larry or Yoko lately? Yeah, 80 seems pretty old. Unless it’s not. My father-in-law turned 80 this year. In fact his birthday was yesterday and he looks a hell of a lot better than most 80-year-olds. He made a joke at his birthday party over the weekend that 80 is the new 60. For him it probably is. He has been both lucky and very healthy. We all think his longevity can be attributed to his outlook on life. He has what we jokingly call “the Happy Gene.” In the 20 years I have been with the Boss I have seen him mad twice. Twice. It’s actually kind of annoying – I probably got mad twice already today. But the man is unflappable. He’s a stockbroker, and has been for 35 years, after 20 years in retail. Stocks go up, he’s cool. Stocks go down, he’s cool. Clients yell at him, he’s cool. He just doesn’t get bent out of shape about anything. He does get fired up about politics, especially when I intentionally bait him, because we see things from opposite sides. He gets excited about baseball and has been known to scream at the TV during Redskins games. But after the game is done or the discussion is over, he’s done. He doesn’t hold onto anger or perceived slights or much of anything. He just smiles and moves on. It is actually something I aspire to. The Boss said a few words at his party and summed it up perfectly. She had this entire speech mapped out, but when I heard her first sentence I told her to stop. It’s very hard to sum up a lifestyle and philosophy in a sentence, but she did it. And anything else would have obscured the beauty of her observation. Worry less, enjoy life more. That’s it. That’s exactly what he does, and it has worked great for 80 years. It seems so simple yet it’s so hard to do. So. Hard. To. Do. But for those, like my father-in-law, who can master worrying less… a wonderful life awaits. Even when it’s not so wonderful. Happy Birthday, Sandy. I can only hope to celebrate many more. –Mike Photo credit: “Dad’s 80th Birthday Surprise” originally uploaded by Ron and Sandy with Kids Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. The Endpoint Security Buyer’s Guide Introduction Continuous Security Monitoring Classification Defining CSM Why. Continuous. Security. Monitoring? Database Denial of Service Attacks Introduction API Gateways Implementation Key Management Developer Tools Security Analytics with Big Data Deployment Issues Integration New Events and New Approaches Use Cases Introduction Newly Published Papers Quick Wins with Website Protection Services Email-based Threat Intelligence: To Catch a Phish Network-based Threat Intelligence: Searching for the Smoking Gun Understanding and Selecting a Key Management Solution Building an Early Warning System Incite 4 U Social responsibility: Before I get too far I need to acknowledge that this is definitely a bit of he said/she said.
Now that has been put out there, what we know is that Microsoft released a patch for a bug discovered and released on the full disclosure list by security researcher Tavis Ormandy (who works for Google, but I think that’s incidental here). Microsoft stated last week that the bug is being actively used in targeted attacks after it was disclosed. Tavis was clear that he didn’t notify Microsoft before posting the details publicly. Here’s what I think: we all have a social responsibility. While MS may have treated Tavis poorly in the past – justified or not – his actions put the rest of us at risk. It’s about users, not Microsoft or the researcher. If Tavis knew the bug was being used in the wild, I support full disclosure. If the vendor doesn’t respond or tries to cover it up and users are at risk, disclose publicly and quickly. But at least give them a chance, which requires thinking about the impact on everyone else first. To be balanced, vendors have a responsibility to respond in a timely fashion, even if it isn’t convenient. But to release a bug with no evidence that anyone else is using it? That doesn’t seem responsible. – RM Identity theft fullz up: Interesting research hit this week from Dell SecureWorks, per a Dark Reading article, about seeing complete packets of stolen information (fullz), including healthcare information, appearing in marketplaces for $1,200 or so. With a full kit an attacker would have everything they need for identity theft, including counterfeit documents such as driver’s licenses and health insurance cards. Also interesting is that credit cards with CVV can be had for $1-2 each, although a prestige card (such as AmEx Black) can cost hundreds. This is Mr. Market at work. Prices for commodities go down, but valuable information still demands a premium. It appears online game accounts are a fraud growth market, because turning virtual items into real money can be easier due to less stringent fraud detection. – MR Easier than making coffee: Gunnar Peterson’s keynote at CIS2013 was full of valuable witticisms (the entire presentation is on his blog), but he made a particularly profound point regarding code security: it needs to be so easy that a seventeen-year-old can do it time and time again without fail. Gunnar drew a parallel with Five Guys’ recipe for burger success against behemoth competitors:


The Temptation of the Developer

Threat modeling involves figuring out ways the system can be gamed and your [fill in the blank] can be compromised. Great modelers can take anything and come up with new ways to question the integrity of the system. When it comes to 0-day attacks, many tend to focus on increasingly sophisticated fuzzers and other techniques to find holes in code, like the tactics described in the Confessions of a Cyber Warrior interview. But Jeremiah takes a step back to find yet another logic flaw in our assumptions about our adversaries, in this succinct and eye-opening tweet. As 0-days go for 6 to 7 figures, imagine the temptation for rogue developers to surreptitiously implant bugs in the software supply chain. I can see it now. The application security marketing engine will get ramped up around the rogue developer threat, which is just another form of the insider attack bogeyman, starting in 3, 2, 1… But the threat is real. The real question is whether awareness of this kind of adversary would change how you do application security. I’m sure my partners (and many of you) have opinions about what should be done differently.


Continuous Security Monitoring: Classification

As we discussed in Defining CSM, identifying your critical assets and monitoring them continuously is a key success factor for your security program – at least if you are interested in figuring out what’s been compromised. But reality says you can’t watch everything all the time, even with these new security big data analytical thingies. So the success of your security program hinges on your ability to prioritize what to do. That was the main focus of our Vulnerability Management Evolution research last year. Prioritizing requires you to determine how different asset classes will be monitored, which means you need a consistent process to classify assets. To define this process let’s borrow liberally from Mike’s Pragmatic CSO methodology – identifying what’s important to your organization is the critical first step.

So a critical step is to make sure you’ve got a clear idea about priorities and to get the senior management buy-in on what those priorities are. You don’t want to spend a lot of money protecting a system that has low perceived value to the business. That would be silly. – The Pragmatic CSO, p25

One of the hallmarks of a mature security program is having this elusive buy-in from all levels and areas of the organization. And that doesn’t happen by itself.

Business System Focus

When you talk to folks about their data leak prevention efforts, a big impediment to sustainable success is the ongoing complexity of classification. It’s just overwhelming to try putting all your organization’s data into buckets and then to maintain those buckets over time. The same issues apply to classifying computing assets. Does this server fit into that bucket? What about that network security device? And that smartphone? Multiply that by a couple hundred thousand servers, endpoints, and users, and you start to understand the challenges of classification. An approach that can be very helpful for overcoming this is to think about your computing devices in terms of the business systems they support. To understand what that means, let’s return to The Pragmatic CSO:

The key to any security program is to make sure that the most critical business systems are protected. You are not concerned about specific desktops, servers or switches. You need only be focused on fully functioning business systems. Obviously every fully functioning system consists of many servers, switches, databases, storage, and applications. All of these components need to be protected to ensure the safety of the system. – The Pragmatic CSO, p23

This requires aligning specific devices to the business systems they serve. Then those devices inherit the criticality of the business system. Simple, right? Components such as SANs and perimeter security gateways are used by multiple business systems, so they need to be classified according to the most critical business system they serve. By the way, you are doing this already if you have any regulatory oversight. You know those in-scope assets for your PCI assessment? You associated those devices with PCI-relevant systems with access to protected data, and they require protection in accordance with the PCI-DSS guidance. Those efforts have been based on what you need to do to understand your PCI (or other mandate) scope, and we are talking about extending that mentality across your entire environment.

Limited Buckets

To understand the difficulty of managing all these combinations, consider the inability of many organizations to implement role-based access control on their key enterprise applications. That was largely because something like a general ledger application has hundreds of roles, with each role involving multiple access rules. Each employee may have multiple roles, so RBAC required managing A * R * E entitlements (access rules × roles × employees). Good luck with that. We suggest limiting the number of buckets used to classify business systems. Maybe it’s 2 – you know, the stuff where you will get fired if breached, and the stuff where you won’t. Or maybe it’s 3 or 5. It’s no more than that. We are talking about monitoring devices in this series, but you need to implement and manage different security controls for each level. It’s the concept we called Vaulting a couple years ago, also commonly known as “security enclaves”. After identifying and classifying your business systems into a manageable number of buckets you can start to think about how to monitor each class of devices according to its criticality. Be sure to build in triggers and catalysts to revisit your classifications – for example, when a business system is opened up to trading partners, or a new device is authorized to access critical data. As long as you understand that these classifications represent a point in time and need to be updated periodically, this process works well. Later in this series we will talk about different levels of security monitoring, based on the data sources and access you have to devices, and the specific use case(s) you are trying to achieve.

Employees Count Too

We have been talking about business systems and the associated computing devices used to support them, but we cannot forget the weakest link in pretty much every organization: employees. You need to classify employees just like business systems. Do they have access to stuff that would be bad if it’s breached, and how are they accessing it – mobile vs. desktop, remote vs. on-network, etc.? The reality is that you can place very limited trust in endpoint devices. We see a new story of this 0-day or that breach daily, compounded by idiotic actions taken by some employee. It is no wonder no one trusts endpoints. And we have no issue with that stance. If that forces you to apply more discipline and tighter controls to devices, it’s all good. There is definitely a different risk profile for a low-level employee operating a device sitting on the corporate network, compared to your CFO accessing unannounced financials on an Android tablet from a cafe in China. Part of your CSM process must be classifying, protecting, and monitoring employee devices. Where legally appropriate, of course.

Gaining Consensus

Now that you have bought into this classification discipline, you need to make it reality. This is where the fun begins – it requires buy-in within the organization, which is
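To show how simple the classification model described above can be in practice, here is a minimal sketch in Python. The bucket names, business systems, and device mappings are invented for illustration; the point is the small number of buckets, and the rule that shared components inherit the criticality of the most critical system they serve.

```python
# Minimal sketch of the classification approach: a small, fixed set of
# criticality buckets, business systems assigned to a bucket, and devices
# inheriting the highest criticality of any system they serve. All names
# and mappings below are made up for illustration.

BUCKETS = ["standard", "critical"]          # keep the number of buckets small

BUSINESS_SYSTEMS = {
    "order-processing": "critical",         # the get-fired-if-breached stuff
    "marketing-site": "standard",
}

DEVICE_TO_SYSTEMS = {
    # shared components (SANs, perimeter gateways) serve multiple systems
    "san-01": ["order-processing", "marketing-site"],
    "web-frontend": ["marketing-site"],
    "db-cluster-02": ["order-processing"],
}

def device_criticality(device):
    """A device inherits the most critical bucket of the systems it serves."""
    levels = [BUSINESS_SYSTEMS[s] for s in DEVICE_TO_SYSTEMS[device]]
    return max(levels, key=BUCKETS.index)

if __name__ == "__main__":
    for dev in DEVICE_TO_SYSTEMS:
        print(dev, "->", device_criticality(dev))
    # san-01 comes out 'critical' because it serves order-processing,
    # even though it also serves the lower-value marketing site.
```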


Living to fight another day…

Our man Dave Lewis has a great post on CSO Online, When Disaster Comes Calling, about the importance of making sure your disaster recovery plan can actually help you when you have, uh, a disaster. Folks don’t always remember that sometimes success is living to fight another day.

At one organization that I worked for the role of disaster recovery planning fell to an individual that had neither the interest nor the wherewithal to accomplish the task. This is a real problem for many companies and organizations. The fate of their operations can, at times, reside in the hands of someone who is disinclined to properly perform the task.

Sounds like a recipe for failure to me. I would say the same goes for incident response. Far too many organizations just don’t put in the time, effort, or urgency to make sure they are prepared. Until they get religion – when their business is down or their darkest secrets show up on a forum in Eastern Europe. Or you can get a bit more proactive by asking some questions and making sure someone in your organization knows the answers.

So what is the actionable takeaway to be had from this post? Take some time to review your organization’s disaster recovery plans. Are backups taken? Are they tested? Are they stored offsite? Does the disaster recovery plan even exist anywhere on paper? Has that plan been tested with the staff? No plan survives first contact with the “enemy”, but it is far better to be well trained and prepared than to be caught unawares.

What Dave said.
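Dave’s checklist also lends itself to a bit of automation. As one small, hypothetical example (the backup location, naming, and freshness threshold are all assumptions – adjust them to whatever your backup tooling actually produces), a scheduled check like the sketch below answers “are backups taken, and are they recent?” before a disaster forces the question.

```python
# Minimal sketch turning two of the checklist questions above -- are backups
# taken, and are they recent -- into an automated check. The backup directory
# and freshness threshold are hypothetical.
import os
import time

BACKUP_DIR = "/var/backups/nightly"   # hypothetical location
MAX_AGE_HOURS = 30                    # a nightly job should never be older than this

def check_backups(path=BACKUP_DIR, max_age_hours=MAX_AGE_HOURS):
    if not os.path.isdir(path):
        return "FAIL: backup directory does not exist"
    backups = [os.path.join(path, f) for f in os.listdir(path)]
    if not backups:
        return "FAIL: no backup files found"
    newest = max(backups, key=os.path.getmtime)
    age_hours = (time.time() - os.path.getmtime(newest)) / 3600
    if age_hours > max_age_hours:
        return f"FAIL: newest backup ({newest}) is {age_hours:.0f} hours old"
    return f"OK: newest backup is {age_hours:.0f} hours old"

if __name__ == "__main__":
    print(check_backups())
    # Note: this only proves a backup exists and is fresh. Testing restores and
    # verifying offsite copies -- the other questions -- need their own checks.
```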


The Endpoint Security Buyer’s Guide [New Series]

Last year we documented our thoughts on buying Endpoint Security Management offerings, which basically include patch, configuration, device control, and file integrity monitoring – increasingly bundled in suites to simplify management. We planned to dig into the evolution of endpoint security suites earlier this year, but the fates intervened and we got pulled into other research initiatives. Which is just as well, because these endpoint security and management offerings have consolidated more quickly than we anticipated, so it makes sense to treat all these functions within a consistent model. We are pleased to kick off a new series called the “Endpoint Security Buyer’s Guide,” where we will discuss all these functions, update some of our research from last year, and provide clear buying criteria for those of you looking at these solutions in the near future. As always we will tackle the topic from the perspective of an organization looking to buy and implement these solutions, and build the series using our Totally Transparent Research methodology. Before we get going we would like to thank our friends at Lumension for potentially licensing the content when the project is done. They have long supported our research, which we certainly appreciate.

The Ongoing Challenge of Securing Endpoints

We have seen this movie before – in both the online and offline worlds. You have something and someone else wants to steal it. Or maybe your competitors want to beat you in the marketplace through less-than-savory tactics. Or you have devices that would be useful as part of a bot network. You are a target, regardless of how large or small your organization is, whether you like it or not. Many companies make the serious mistake of thinking it won’t happen to them. With search engines and other automated tools looking for common vulnerabilities, everyone is a target. Humans, alas, remain gullible and flawed. Regardless of the training you provide, employees continue to click stuff, share information, and fall for simple social engineering attacks. So your endpoints remain some of the weakest links in your security defenses. Even worse for you, unsophisticated attacks on endpoints remain viable, so your adversaries do not need serious security kung fu to beat your defenses. The industry has responded, but not quickly enough. There is an emerging movement to take endpoints out of play. Whether using isolation technologies at the operating system or application layer, draconian whitelisting approaches, or even virtualizing desktops, organizations no longer trust endpoints and have started building complementary defenses in acknowledgement of that reality. But those technologies remain immature, so the problem of securing endpoints isn’t going away any time soon.

Emerging Attack Vectors

You cannot pick up a technology trade publication without seeing terms like “Advanced Malware” and “Targeted Attacks.” We generally just laugh at all the attacker hyperbole thrown around by the media. You need to know one simple thing: these so-called “advanced attackers” are only as advanced as they need to be. If you leave the front door open they don’t need to sneak in through the ventilation ducts. Many successful attacks today are caused by simple operational failures. Whether it’s an inability to patch in a timely fashion, or to maintain secure configurations, far too many people leave the proverbial doors open on their devices. Or attackers target users via sleight-of-hand and social engineering. Employees unknowingly open the doors for attackers – and enable data compromise. There is no use sugar-coating anything. Attacker capabilities improve much faster than defensive technologies, processes, and personnel. We were recently ruminating in the Securosis chat room that offensive security (attacking things) continues to be far sexier than defense. As long as that’s the case, defenders will be on the wrong side of the battle.

Device Sprawl

Remember the good old days, when devices consisted of DOS PCs and a few dumb terminals? The attack surface consisted of the floppy drive. Yeah, those days are gone. Now we have a variety of PC variants running numerous operating systems. Those PCs may be virtualized, and they may connect in from anywhere in the world – including networks you do not control. Even better, many employees carry smartphones in their pockets, but ‘smartphones’ are really computers. Don’t forget tablet computers either – each with as much computing power as a 20-year-old mainframe. So any set of controls and processes you implement must be consistently enforced across the sprawl of all your devices. You need to make sure your malware defenses can support this diversity. Every attack starts with one compromised device. More devices means more complexity, a far greater attack surface, and a higher likelihood of something going wrong. Again, you need to execute on your endpoint security strategy flawlessly. But you already knew that.

BYOD

As uplifting as dealing with emerging attack vectors and device sprawl is, we are not done complicating things. It is not just endpoints you have to defend any more. Many organizations support employee-owned devices. Don’t forget about contractors and other business partners who may have authorized access to your networks and critical data stores, connecting with devices you don’t control. Most folks assume that BYOD (bring your own device) just means dealing with those pesky Android phones and iPads, but we know many finance folks are itching to get all those PCs off the corporate books. That means you will eventually need to support any variety of PC or Mac any employee wants to use. Of course the security controls you put in place need to be consistent, whether your organization or an employee owns a device. The big difference is granularity of management. If a corporate device is compromised you just nuke it from orbit, as they say. Well, not literally, but you need to wipe the machine down to bare metal, ensuring no vestiges of the malware remain. But what about those pictures of Grandma on an employee’s device? What about their personal email and address book? Blow those away and the uproar is likely to be much worse than just idling someone for a few hours while


Incite 7/10/2013: Selfies

Before she left for camp XX1 asked me to download her iPhone photos to our computer, so she could free up some space. Evidently 16GB isn’t enough for these kids today. What would Ken Olson say about that? (Dog yummy for those catching the reference.) I happened to notice that a large portion of her pictures were these so-called selfies. Not in a creeper, micro-managing Dad way, but in a curious, so that’s what the kids are up to today way. A selfie is where you take a picture of yourself (and your friends) with your camera phone. Some were good, some were bad. But what struck me was the quantity. No wonder she needed to free up space – she had all these selfies on her phone. Then I checked XX2 and the Boy’s iTouch devices, and sure enough they had a bunch of selfies as well. I get it, kind of. I have been known to take a selfie or two, usually at a Falcons game to capture a quick memory. Or when the Boss and I were at a resort last weekend and we wanted to capture the beauty of the scene. My Twitter avatar remains a self-defense selfie, and has been for years. I haven’t felt the need to take a new selfie to replace it. Then I made a critical mistake. I searched Flickr for selfies. A few are interesting, and a lot are horrifying. I get that some folks want to take pictures of themselves, but do you need to share them with the world? Come on, man (or woman)! There are some things we don’t need to see. Naked selfies (however pseudo-artistic) are just wrong. But that’s more a statement about how social media has permeated our environment. Everyone loves to take pictures, and many people like to share them, so they do. On the 5th anniversary of the iTunes App Store, it seems like the path to success for an app is to do photos or videos. It worked for Instagram and Snapchat, so who knows… Maybe we should turn the Nexus into a security photo sharing app. Pivoting FTW. As for me, I don’t share much of anything. I do a current status every so often, especially when I’m somewhere cool. But for the most part I figure you don’t care where I am, what my new haircut looks like (pretty much the same) or whether the zit on my forehead is pulsating or not (it is). I guess I am still a Luddite. –Mike Photo credit: “Kitsune #selfie” originally uploaded by Kim Tairi Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. Continuous Security Monitoring Defining CSM Why. Continuous. Security. Monitoring? Database Denial of Service Attacks Introduction API Gateways Key Management Developer Tools Access Provisioning Security Enabling Innovation Security Analytics with Big Data Deployment Issues Integration New Events and New Approaches Use Cases Introduction Newly Published Papers Quick Wins with Website Protection Services Email-based Threat Intelligence: To Catch a Phish Network-based Threat Intelligence: Searching for the Smoking Gun Understanding and Selecting a Key Management Solution Building an Early Warning System Incite 4 U If it’s code it can be broken: A fascinating interview in InfoWorld with a guy who is a US government funded attacker. You get a feel for how he got there (he likes to hack things) and that they don’t view what they do as over the line – it’s a necessary function, given that everyone else is doing it to us.
He maintains they have tens of thousands of 0-day attacks for pretty much every type of software. Nice, eh? But the most useful part of the interview for me was: “I wish we spent as much time defensively as we do offensively. We have these thousands and thousands of people in coordinate teams trying to exploit stuff. But we don’t have any large teams that I know of for defending ourselves. In the real world, armies spend as much time defending as they do preparing for attacks. We are pretty one-sided in the battle right now.” Yeah, man! The offensive stuff is definitely sexy, but at some point we will need to focus on defense. – MR Open to the public: A perennial area of concern with database security is user permission management, as Pete Finnigan discussed in a recent examination of default users in Oracle 12cR1. Default user accounts are a security problem because pretty much everything comes with default access credentials. That usually means a default password, or the system may require the first person to access the account to set a password. But regardless, it is helpful to know the 36 issues you need to immediately address after installing Oracle. Pete also notes the dramatic increase in use of PUBLIC permissions, a common enabler of 0-day database exploits. More stuff to add to your security checklist, and if you rely upon third-party assessment solutions it’s time to ask your provider for updated policies. By the way, this isn’t just an issue with Oracle, or databases for that matter. Every computing system has these issues. – AL Want to see the future of networking? Follow the carriers… I started my career as a developer but I pretty quickly migrated down to the network. It was a truism back then (yes, 20+ years ago – yikes) that the carriers were the first to play around with and deploy new technologies, and evidently that is still true today. Even ostriches have heard of software-defined networking at this point. The long-term impact on network security is still not clear, but clearly carriers will be leading the way with SDN deployment, given their need for flexibility and agility. So those of you in the enterprise should be paying attention, because as inspection and policy enforcement (the basis of security) happens in software, it will have


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.