Securosis Research

Sales/Marketing Spend, Cash Generation, and the FireEye-PO

It doesn’t happen very often so it’s highly scrutinized. No, it’s not me being nice to someone. It’s a security company IPO. Last week the folks at FireEye filed their Form S-1, which is the first step toward becoming a public company. The echo chamber blew up, mostly because of FireEye’s P&L. Basically, FireEye spent more on sales and marketing in the first half of 2013 than they took in revenue. Their top-line revenue was $61.6MM; they spent $66.1MM on sales and marketing. Their total loss was $63.4MM. Yes, you read that correctly.

On the surface that’s pretty ugly. Sure, revenue growth has been significant ($11.3MM, $33.6MM, and $83.3MM in 2010, 2011, and 2012, respectively). But to lose that much money seems a bit troubling, no? Well, those folks squawking about the losses don’t really understand financial filings too well. In 2012, as I mentioned, they had $83.3MM in revenues. They showed a loss of $35.7MM. But they had positive net cash of $21.5MM in 2012. WTF? Those numbers don’t add up, do they?

Keep in mind that bookings do not equal revenue. So FireEye figured out a way to get companies to pay them $75MM more than they could recognize in revenue due to accounting nuances. That’s listed on the balance sheet as Deferred Revenue. They had $43.7MM in current deferred revenue (which will be recognized over the coming 12 months), and $32.6MM in revenue deferred for longer than 12 months. And in the first half of 2013 they added another $12MM to their current deferred revenue and over $14MM to non-current deferred. To be clear, they booked a lot more than $83MM in 2012 and a lot more than $61.6MM in the first 6 months of 2013. Those revenue numbers are only the stuff they could recognize. The difference between bookings and recognized revenue is services they sold and got paid for now, which they will recognize over the life of the subscription. These multi-year agreements are great for cash flow, but not so great for the income statement.
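Ratable recognition is what creates the gap between bookings and reported revenue. The mechanics can be sketched in a few lines of toy code (the figures below are illustrative, not FireEye’s actuals):

```python
# Toy model of ratable revenue recognition for a prepaid multi-year subscription.
# The customer pays cash up front; revenue is recognized evenly over the term.

def recognize(booking: float, term_months: int, months_elapsed: int):
    """Split one booking into recognized revenue and deferred revenue."""
    monthly = booking / term_months
    recognized = monthly * min(months_elapsed, term_months)
    deferred = booking - recognized
    # Deferred revenue splits into current (next 12 months) and non-current.
    remaining_months = max(term_months - months_elapsed, 0)
    current = monthly * min(remaining_months, 12)
    non_current = deferred - current
    return recognized, current, non_current

# A hypothetical $3.6MM three-year deal, 12 months in: only $1.2MM hits the
# income statement, while $2.4MM sits on the balance sheet as deferred revenue.
print(recognize(3_600_000, 36, 12))  # (1200000.0, 1200000.0, 1200000.0)
```

The cash arrived on day one, which is why net cash can be positive while the income statement shows a loss.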
The good news is that companies don’t pay their employees with income statements. The sell-side analysts can do the math to figure out roughly what bookings were, but my point is not to get confused by nuances of the income statement, versus what they actually sell.

To be clear, there are a number of issues with that kind of growth in sales and marketing spend. FireEye sees this as a land grab, and the best way to get land is to hire like drunken sailors, put scads of folks in the field, and try to sell product now. It is very expensive to build a global direct sales force, and it is usually done over a long period of time. Clearly FireEye sees their opportunity right now, so they are taking the express train to a huge go-to-market engine.

FireEye is making a bet, which you can see in their spending on equipment including demo units (which presumably become production appliances when customers buy). They are betting that once they get a demo box installed on a customer site they will close the deal. They spent $18MM on “property and equipment and demonstration units” last year. That’s a lot of hardware, folks. So they are putting reps everywhere and giving them demo units to get into customer sites before the competition.

At some point they will need to dial back the spending and show profits. Ultimately that’s what public company investors demand. But they get a pass on that for the time being because of the huge revenue growth, as well as the need to invest internationally to gain market share now. Which they had better do, because the competition is coming. Pretty much every network security vendor has a network-based malware detection device. They are all gunning for FireEye, which is why FireEye is grabbing as much real estate as they can, right now. Clearly the market for malware detection is red hot and very high profile for enterprises of all sizes. But we don’t expect a stand-alone technology to ultimately prevail for this capability.
We expect malware detection to be part of a much bigger security strategy that spans not just the perimeter network, but also endpoints. So FireEye needs a much broader story in the works, or they’ll hit the wall hard. Perhaps they will use public market currency to acquire their way to a broader product line. They have also announced an ecosystem, while remaining focused on the malware detection space. Will that be enough? Time will tell.

Listen, I’m not a stock analyst. Personally I cannot invest in the companies I cover, so I don’t have any skin in the game. But I can read a balance sheet, and FireEye is in land grab mode, spending like crazy to build global momentum. Maybe it will pay off and maybe it won’t. Maybe this S-1 is the catalyst they need for a bigger company to acquire them. Maybe they will IPO, broaden their story, and become a sustainable public company. Maybe they don’t. But it will be fun to watch in any case.

Photo credit: “DKHouse 082” originally uploaded by May Monthong


We’re at Black Hat—Go Read a Book

Pretty much the entire team is out at the Black Hat conference. Yes, we really are working. Heck, by the time you read this, Rich and James will have taught 2 separate cloud security classes. Although we think Mike may be enjoying a Vegas cabana as this post goes live, based on his calendar. We will resume regular posting next week.


Endpoint Security Buyer’s Guide: Buying Considerations

We have covered the reasons endpoint security is getting more challenging, and offered some perspective on what is important when buying anti-malware and endpoint hygiene products – or both in an integrated package. Then we addressed the issues BYOD and mobility present for protecting endpoints. To wrap up we just need to discuss the buying considerations driving you toward one solution over another, and develop a procurement process that can work for your organization.

Platform Features

As in most technology categories (at least in security), the management console (or ‘platform’, as we like to call it) connects the sensors, agents, appliances, and any other security controls. You need several platform capabilities for endpoint security:

- Dashboard: You should have user-selectable elements and defaults for technical and non-technical users. You should be able to show only certain elements, policies, and alerts to different authorized users or groups, with entitlements typically stored in the enterprise directory. Nowadays, given the state of widget-based interface design, you can expect a highly customizable environment, letting each user configure what they need and how they prefer to see it.
- Discovery: You cannot protect an endpoint (or any other device) if you don’t know it exists. So the next key platform feature is discovery. Surprise is the enemy of the security professional, so make sure you know about new devices as quickly as possible – including mobile devices.
- Asset repository integration: Closely related to discovery is the ability to integrate with an enterprise asset management system or CMDB for a heads-up whenever a new device is provisioned. This is essential for monitoring and enforcing policies. You can learn about new devices proactively via integration or reactively via discovery, but either way you need to know what’s out there.
- Policy creation and management: Alerts are driven by the policies you implement, so of course policy creation and management are also critical.
- Agent management: Anti-malware defense requires a presence on the endpoint device, so you need to distribute, update, and manage agents in a scalable and effective fashion. You need alerts when a device hasn’t updated for a certain period of time, along with the ability to report on the security posture of these endpoints.
- Alert management: A security team is only as good as its last incident response, so alert management is key. It enables administrators to monitor for potential malware attacks and policy violations which might represent an attack. Time is of the essence during any response, so the ability to provide deeper detail via drill-down, and to send relevant information into a root cause analysis / incident response process, are critical. The interface should be concise, customizable, and easy to read at a glance – responsiveness is key. When an administrator drills down into an alert the display should cleanly and concisely summarize the reason for the alert, the policy violated, the user(s) involved, and any other information helpful for assessing criticality and severity.
- System administration: You can expect the standard system status and administration capabilities within the platform, including user and group administration. For larger distributed environments you will want some kind of role-based access control (RBAC) and hierarchical management to manage access and entitlements for a variety of administrators with varied responsibilities.
- Reporting: As we mentioned under specific controls, compliance tends to fund and drive these investments, so substantiating their efficacy is necessary. Look for a mixture of customizable pre-built reports and tools to facilitate ad hoc reporting – both at the specific control level and across the entire platform.

Cloud vs. Non-cloud

The advent of cloud-based offerings for endpoint security has forced many organizations to evaluate the value of running a management server on premise. The cloud fashionistas focus on the benefit of not having to provision and manage a server or set of servers to support the endpoint security offering – which is especially painful in distributed, multi-site environments. They talk about continuous and transparent updates to the interface and feature set of the platform without disruptive software upgrades. They may even mention the ability to have the environment monitored 24/7, with contractually specified uptime. And they are right about all these advantages.

But for an endpoint security vendor to manage their offering from the cloud requires more than just loading a bunch of AWS instances with their existing software. The infrastructure now needs to provide data segregation and protection for multi-tenancy, and the user experience needs to be rebuilt for remote management, because there are no longer ‘local’ endpoints on the same network as the management console. Make sure you understand the vendor’s technology architecture, and that they protect your data in their cloud – not just in transit. You also want a feel for service levels, downtime, and support for the cloud offering. It’s great to not have another server on your premises, but if the service goes down and your endpoints are either bricked or unprotected, that on-premise server will look pretty good.

Buying Considerations

After doing your research to figure out which platforms can meet your requirements, you need to define a short list and ultimately choose something. One of the inevitable decision points involves large vs. small vendors. Given the pace of mergers and acquisitions in the security space, even small vendors may not remain independent and small forever. As a rule, every small vendor is working every day to not be small. Working with a larger vendor is all about leverage.
One type is pricing leverage, achieved by buying multiple products and services from the vendor and negotiating a nice discount on all their products. But smaller vendors can get aggressive on pricing as well, and sometimes have even more flexibility to sell cheaper. Another type is platform leverage from using multiple products managed via a single platform. The larger endpoint security vendors offer comprehensive product lines with a bunch of products you might need, and an integrated console can make your life easier. Given the importance of intelligence for tracking malware and keeping current on patches, configurations, and file integrity, it is important to consider the size and breadth of the vendor’s research


Friday Summary: Dead Tree Edition

Phoenix can be a wild place for weather. We don’t get much rain, but when we do it often arrives with fearsome vengeance. When I first moved down here I thought “monsoon season” was just a local colloquialism to make Phoenicians think they were all tough or something. I mean, surely the weather here couldn’t rival what I was used to in Colorado, where occasional 100mph gusts are called ‘invigorating’ rather than ‘tornadoes’ – tornadoes go in circles. The last 7 years have educated me. The winds out here aren’t as consistently powerful as those in Colorado. No katabatic winds screaming down the mountains. The storms are tamer and less frequent. Therein lies the problem.

Storms in the desert, especially during monsoon season, are as arbitrary as my cat. The bitchy one, not the nice one. The weather sits here calmly humming away at a nice 107F with a mild breeze, and then come evening, storms roll in. No, not one big storm that hits the metro area, but these tiny little thunderstorms that slam a few square miles like a dainty little hammer. Except when it’s the big one.

Friday night it looked a little stormy out but I didn’t think much about it. With a 5-month-old messing with our sleep I take full advantage of any opportunity for rest I can snag. I went to bed around 9pm. At 5:40am our four-year-old woke us up. “Daddy, a tree fell on my little house”. Having worked many a night shift in the firehouse, I normally wake up pretty cognizant of my surroundings, but this one threw me. “Garrr…. huh?” That’s when my wife, who went to sleep an hour after me, informed me that a tree might have fallen in our yard. This is what I saw. For perspective, that is the biggest tree in our yard – the one that shades everything. An hour after the landscapers started clearing it out.

Storms in Phoenix are intense for very short periods of time, and are arbitrary and dispersed enough that the landscape doesn’t necessarily adjust.
The ground doesn’t absorb water, many native plants and trees don’t have deep roots, and microbursts destroy as randomly as our four-year-old. I called our landscapers early and they cleared it. We’ll get a replacement in, but will have to spend a couple years wearing pants in the yard so we don’t scare the neighbors. Which sucks. The wind didn’t merely uproot the tree – it literally snapped it clean off two of the three roots that held tight in the hard-packed dirt. I was depressed, but life goes on.

Another storm hit on Sunday, missing our yard but flooding my in-laws’ neighborhood so badly they couldn’t drive down the street. It was less than a localized inch of rain, but a mere half-inch or less, landing on hard-pack and funneled into a few culverts, is a serious volume of water. Flash flooding FTW. Our kid’s playhouse survived surprisingly well. If I ever move to Oklahoma I’m totally building my house out of pink injection-molded plastic. That stuff will survive the heat death of the universe.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

- Mike in Dark Reading on the emerging threat of APIs.
- Mike quoted in SC Magazine on Cisco/Sourcefire.
- CSO Online lifts some of our Cisco/Sourcefire analysis.
- Mike quoted in Dark Reading on Cisco/Sourcefire.
- Mike’s column in Dark Reading on M&A Success.
- Dave Lewis writing for CSO Online: Screaming Machines And Situational Awareness.
- Dave again: On Coffee Rings And Data Exfiltration.
- Securosis highlighted in an article on cybersecurity business in Arizona. Okay, we might know the author.
- Rich mentioned in a post on security APIs at LayeredTrust.

Favorite Securosis Posts

- Mike Rothman: Database Denial of Service: Countermeasures. I like this series from Adrian, especially when it gets down to how to actually do something about DoS targeting databases. Waiting for it to blow over isn’t a very good answer.
- Adrian Lane: Cisco FIREs up a Network Security Strategy.
Mike nails why this acquisition is a great move for Cisco, despite its $2.7B price tag.
- Rich: My post, since I learned a lot piecing together even that minimal code – Black Hat Preview 2: Software Defined Security with AWS, Ruby, and Chef.

Other Securosis Posts

- Gonzales’ Partners Indicted.
- API Gateways: Buyers Guide.
- Incite 7/23/2013: Sometimes You Miss.
- Continuous Security Monitoring: The Attack Use Case.
- Bastion Hosts for Cloud Computing.
- New Paper: Defending Cloud Data with Infrastructure Encryption.
- If You Don’t Have Permission, Don’t ‘Test’.
- Exploit U.
- Apple Developer Site Breached.
- Endpoint Security Buyer’s Guide: The Impact of BYOD and Mobility.
- Endpoint Security Buyer’s Guide: Endpoint Hygiene and Reducing Attack Surface.

Favorite Outside Posts

- Mike Rothman: How To Self-Publish A Bestseller: Publishing 3.0. Some days when the grind gets overly grindy, I dream of just writing novels. It seems like a dream – or is it?
- Adrian Lane: Data Fundamentalism. Good perspective on CVE and vulnerability statistics.

Research Reports and Presentations

- Defending Cloud Data with Infrastructure Encryption.
- Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.
- Quick Wins with Website Protection Services.
- Email-based Threat Intelligence: To Catch a Phish.
- Network-based Threat Intelligence: Searching for the Smoking Gun.
- Understanding and Selecting a Key Management Solution.
- Building an Early Warning System.
- Implementing and Managing Patch and Configuration Management.
- Defending Against Denial of Service (DoS) Attacks.
- Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.

Top News and Posts

- Feds put heat on Web firms for master encryption keys.
- PayPal Cuts Off “Pirate Bay” VPN iPredator, Freezes Assets.
- Cybercrime said to cost US $140 billion, radically less than previous estimates.
- White House opposes amendment to curb NSA spying.
- Hackers foil Google Glass with QR codes.
- Healthcare data breaches: Reviewing the ramifications.
Blog Comment of the Week

This week’s best comment goes to John, in response to Continuous Security Monitoring: The Attack Use Case.

Sometimes I forget about the Securosis blog, and then when I rediscover it, there’s a great series of posts like this one. There are two things that jump out at me


API Gateways: Buyers Guide

We will close out this series by examining key decision criteria to help you select an API gateway. We offer a set of questions to determine which vendor solutions support your API technically, as well as the features your developers and administrators need. These criteria can be used to check solutions against your design goals and help you walk through the evaluation process.

Nota bene: use cases first

It is tempting to leap to a solution. After all, API development is a major trend, and security teams want to help solve API security problems. API gateways have been designed to enable developers to jump in quickly and easily. But there is no generic API security model good enough for all APIs. APIs are a glue layer, so the priorities and drivers are found by analyzing your API use cases: what components you are gluing together, from what environment (enterprise, B2B, legacy, etc.), to what environment (mobile, Internet of Things, third-party developers, etc.). This analysis provides crucial weighting for your priorities.

Product Architecture

- Describe the API gateway’s deployment model (software, hardware only, hardware + software, cloud, or something else).
- Describe the scalability model. Does the API gateway scale horizontally or vertically?
- What connectors and adapters, to other software and cloud services, are included?
- How are new versions and updates handled?
- What key features do you believe distinguish your product from competitors?

Access Provisioning and Developer Power Tools

- What credentials and tokens does the API gateway support for developers and API consumers?
- How is access governed?
- What monitoring, management, and metrics features does the gateway offer?
- Does the product offer client-side helper SDKs (iOS, Android, JavaScript, etc.) to simplify API consumer development?
- Describe a typical “day in the life” of a developer, from registering a new API to production operationalization.
- Describe out-of-the-box self-service features for registering new APIs.
- Describe out-of-the-box self-service features for acquiring API keys and tokens.
- Describe out-of-the-box self-service features for testing APIs.
- Describe out-of-the-box self-service features for versioning APIs.
- Describe how your API catalog helps developers understand the available APIs and how to use them.

Development

- What integration is available for source code and configuration management?
- For extending the product, what languages and tools are required to develop wrappers, adapters, and extensions?
- What continuous integration tools (e.g., Jenkins) does your product work with?

Access Control

- How are API consumers authenticated?
- How are API calls from API consumers authorized?
- What level of authorization granularity is checked? Please describe where role, group, and attribute level authorization can be enforced.
- What out-of-the-box features does the API gateway have for access key issuance, distribution, and verification?
- What out-of-the-box features does the API gateway have for access key lifecycle management?
- What tools are used to define technical security policy?
- Describe support for delegated authorization.
- What identity server functionality is available in the API gateway? E.g., OAuth Authorization Server, OAuth Resource Server, SAML Identity Provider, SAML Relying Party, XACML PEP, XACML PDP, …
- What identity protocol flows are supported, and what role does the API gateway play in them?

Interoperability

- What identity protocols and versions are supported (OAuth, SAML, etc.)?
- What directories are supported (Active Directory, LDAP, etc.)?
- What application servers are supported (WebSphere, IIS, Tomcat, SAP, etc.)?
- What service and security gateways are supported (DataPower, Intel, Vordel, Layer7, etc.)?
- Which cloud applications are supported?
- Which mobile platforms are supported?

Security

- Describe support for TLS/SSL. Is client-side TLS/SSL (“2-way mutual authentication”) supported? How?
- Please describe the API gateway’s support for whitelisting URLs.
- What out-of-the-box functionality is in place to deal with injection attacks such as SQL injection?
- How does the product defend against malicious JavaScript?
- How does the gateway defend against URL redirect attacks?
- How does the gateway defend against replay attacks?
- What is the product’s internal security model? Is Role-Based Access Control supported? Where? How is access audited?

Cost Model

- How is the product licensed? Does cost scale based on number of users, number of servers, or another criterion?
- What is the charge for adapters and extensions?

This checklist offers a starting point for analyzing API gateway options. Review product capabilities to identify the best candidate, keeping in mind that integration is often the most important criterion for successful deployment. It is not as simple as picking the ‘best’ product – you need to find one that fits your architecture, and is amenable to development and operation by your team.
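Of the security questions above, replay defense is the easiest to make concrete: have each request carry a timestamp, a unique nonce, and a signature over both, and have the gateway reject stale timestamps and reused nonces. A minimal sketch, assuming a shared HMAC key and an in-memory nonce cache (the key, header values, and tolerance window are illustrative, not any vendor’s API):

```python
# Sketch of gateway-side replay protection: reject requests whose timestamp
# is outside the allowed clock-skew window, whose nonce was already seen,
# or whose HMAC signature doesn't match.
import hmac
import hashlib
import time

SECRET = b"shared-key"   # illustrative; real keys come from key management
MAX_SKEW = 300           # seconds of allowed clock drift
seen_nonces = set()      # a production gateway would expire old entries

def verify(timestamp: int, nonce: str, signature: str, now=None) -> bool:
    now = now if now is not None else int(time.time())
    if abs(now - timestamp) > MAX_SKEW:
        return False                      # stale or future-dated request
    if nonce in seen_nonces:
        return False                      # replayed request
    expected = hmac.new(SECRET, f"{timestamp}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False                      # tampered request
    seen_nonces.add(nonce)
    return True
```

A correctly signed, fresh request passes once; presenting the identical request again fails because the nonce has been consumed.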


Endpoint Security Buyer’s Guide: The Impact of BYOD and Mobility

When thinking about endpoint security it is important to decide what you consider an endpoint. We define an endpoint as any computing device that can access corporate data. This deliberately broad definition includes not just PCs, but also mobile devices (smartphones and tablets). We don’t think it is too broad – employees today expect to access the data they need, on the device they are using, from wherever they are, at any time. And regardless of the details, the data needs to be protected.

Of course the buzzword du jour is Bring Your Own Device (BYOD), which means you need to support employee-owned devices, just as you support corporate-owned devices today. These folks go to the local big box retailer and come home with the shiny new iDevice or Android thingy, then show up the next working day expecting their email and access to the systems they need to do their job on the shiny new device. For a while you said no, because you couldn’t enforce policies on that device, nor could you assume the employee’s children or friends wouldn’t get into email and check out the draft quarterly financials. Then you were summoned to the CIO’s office and told about the new BYOD policy put in place by the CFO to move some of these expensive devices off the corporate balance sheet. At that point ‘no’ was no longer an option, so welcome to the club of everyone who has to support BYOD – without putting corporate data at risk.

The first step is to define the rules of engagement – which means policies. The reality is you probably have policies in place already, so it is a case of going back and revisiting them to ensure they reflect the differences of supporting mobile devices and the fact that you may not own said devices.
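Revisiting policies for ownership can be thought of as a decision function: the same posture checks run on every device, but the enforcement action depends on who owns it. A purely hypothetical sketch (the field names and action labels are invented for illustration, not from any product):

```python
# Hypothetical ownership-aware policy check: corporate devices can be wiped,
# employee-owned devices only lose network access.
def enforce(device: dict) -> str:
    """device: dict with 'owner' ('corporate' or 'employee') and boolean
    'encrypted', 'patched', and 'compromised' flags (all illustrative)."""
    if device["compromised"]:
        return "remote_wipe" if device["owner"] == "corporate" else "block_access"
    if not (device["encrypted"] and device["patched"]):
        return "quarantine_until_remediated"
    return "allow"

# A compromised employee-owned phone is cut off, not wiped:
print(enforce({"owner": "employee", "compromised": True,
               "encrypted": True, "patched": True}))  # block_access
```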
This is a Buyer’s Guide and not a policy guide, so we won’t focus on specific policies, but we will point out that without an updated set of policies to determine what employees can and cannot do – covering both mobile devices and BYOD – you have no shot at controlling anything.

BYOD

First let’s blow up the misconception that BYOD = mobile devices. Employees may decide they want to run their office applications in a virtual window on their new Mac, not the 4-year-old Windows XP laptop they were assigned. Which means you need to support it, even though you don’t own the device. This changes how you need to provision and protect the device, particularly in terms of enforcement granularity. For devices you don’t own, you need the ability to selectively enforce policies. You cannot dictate what applications employees run on their own machines. You cannot whitelist the websites they visit. You cannot arbitrarily decide to nuke a device from orbit if it shows indicators of possible malware. Actually, if your policy says so, you probably can legally control and wipe the device. But it would make you very unpopular if you decided to blow away a device and lost a bunch of personal pictures and videos in the process.

So the key with BYOD is granularity. It is reasonable to do a periodic vulnerability scan on the device to ensure it’s patched effectively. It is also reasonable to require the device be encrypted so the corporate data on it is protected. It is fair to block access to corporate networks if the device isn’t configured properly or seems to be compromised. BYOD has several implications for security. Let’s examine the impact of BYOD in terms of the aspects we have discussed already:

- Anti-malware: If you require anti-malware on corporate-owned computers, you probably want to require it on employee-owned machines as well. It also may be required by compliance mandates for devices which access protected information.
The question is whether you require each employee to use the corporate standard anti-malware solution. If so, you would use your existing anti-malware solution’s enterprise management console. If not, you need the capability to confirm whether anti-malware protection is running on each device on connection. You also need to decide whether you will mandate anti-malware protection for mobile devices, given the lack of malware attacks on most mobile platforms.

- Hygiene: Under our definition (patch management, configuration management, and device control), the key change for BYOD is reassessment of the security posture of employee-owned devices on each connection to the network. Then it comes down to a policy decision on whether you allow insecurely configured or unpatched devices on the network, or patch and update the device using enterprise management tools. Keep in mind there may be a software licensing cost to use enterprise tools on BYOD devices.

The ability to deal with BYOD really comes down to adding another dimension to policy enforcement. You need to look at each policy and figure out whether it needs to change for employee-owned devices. It is also a good idea to make sure you can both visualize and report on employee-owned devices, because there will be sensitivity around ensuring they comply with BYOD policies.

Mobility

We just explained why mobile devices are endpoints, so we need to provide guidance on protecting them. As with most newish technology, the initial problem is more about management than security. The good news is that mobile devices are inherently better protected from attack, due to better underlying operating system architectures. That makes hygiene – including patching, configuration, and determining which applications can and should run on the devices – the key security requirement. That doesn’t mean there is no mobile malware threat.
Or that rooting devices, having employees jailbreak them, dealing with new technologies which extend the attack surface such as NFC (Near Field Communications), and attackers exploiting advanced device capabilities, aren’t all real issues. But none of these is currently the most pressing issue. That can and probably will change, as attackers get better and management issues are addressed. But for now we will focus on managing mobile devices. The technologies that enable us to manage mobile devices fall into a


Gonzales’ Partners Indicted

This is all over the news, but Wired was the first I saw to put things in the right context:

Four Russians and one Ukrainian have been charged with masterminding a massive hacking spree that was responsible for stealing more than 160 million bank card numbers from companies in the U.S. over a seven-year period. The alleged hackers were behind some of the most notorious breaches for which hacker Albert Gonzalez was convicted in 2010 and is currently serving multiple 20-year sentences simultaneously. The indictments clear up a years-long mystery about two hackers involved in those attacks who were known previously only as Grig and Annex and were listed in indictments against Gonzalez as working with him to breach several large U.S. businesses, but who have not been identified until now.

The hackers continued their activities long after Gonzalez was convicted, however. According to the indictment, filed in New Jersey, their spree ran from 2005 to July 2012, penetrating the networks of several of the largest payment processing companies in the world, as well as national retail outlets and financial institutions in the U.S. and elsewhere, resulting in losses exceeding $300 million to the companies.

And this tidbit:

A second indictment filed in New York charges one of the defendants with also breaching NASDAQ computers and affecting the trading system.

This is a very big win for law enforcement. There aren’t many crews working at that level any more. It also shows the long memory of the law – most of the indictments are for crimes committed around five years ago.


Database Denial of Service: Countermeasures

Before I delve into the meat of today’s post I want to say that the goal of this series is to aid IT security and database admins in protecting relational databases from DoS attacks. During the course of this research I have heard several rumors of database DoS but not found anyone willing to go on record or even provide details anonymously. Which is too bad – this type of information helps the community and helps reduce the number of companies affected. Another interesting note: we have been getting questions from network IT and application management teams rather than DBAs. In hindsight this is not so surprising – network security is the first line of defense, and cloud database service providers (e.g., ISPs) don’t have database security specialists.

Now let’s take a look at database DoS countermeasures. There is no single way to stop database DoS attacks. Every feature is a potential avenue for attack, so no single response can defend against everything, short of taking the databases off the Internet entirely. The good news is that there are multiple countermeasures at your disposal, both detective and preventative, with most preventative security measures essentially free. All you need to do is put in the time to patch and configure your databases. But if your databases are high-profile targets you need to employ preventative and detective controls to provide reasonable assurances they won’t be brought down. It is highly unlikely that you will ever be able to totally stop database DoS, but the following mitigate the vast majority of attacks:

- Configuration: Reduce the attack surface of a database by removing what you don’t need – you cannot exploit a feature that’s not there. This means removing unneeded user accounts, communications protocols, services, and database features. A feature may have no known issues today, but that doesn’t mean none are awaiting discovery.
Relational databases are very mature platforms, packed full of features to accommodate various deployment models and uses for many different types of customers. If your company is normal you will never use half of them. But removal is not easy – it takes some work on your part to identify what you don’t need, and to either alter database installation scripts or remove features after the fact. Several database platforms provide the capability to limit resources on a per-user basis (i.e., number of queries per minute – resource throttling for memory and processors), but in our experience these tools are ineffective. As with the judo example in our earlier attack section, attackers use resource throttling against you to starve out legitimate users. Some firms rely upon these options for graceful degradation, but your implementation needs to be very well thought out to prevent them from impinging on normal operation. Patching: Many DoS attacks exploit bugs in database code. Buffer overflows, mishandling of malformed network protocols or requests, memory leaks, and poorly designed multitasking have all been exploited. These are not the types of issues you or your DBA can address without vendor support. A small portion of these attacks are preventable with database activity monitoring and firewalls, as we will discuss below, but the only way to completely fix these issues is to apply a vendor patch. And the vendor community, after a decade of public shaming by security researchers, has recently been fairly responsive in providing patches for serious security issues. The bad news is that most enterprises patch databases every 14 months on average, choosing functional stability over security, despite quarterly security patch releases. If you want to ensure bugs and defects don’t provide an easy avenue for DoS, patch your databases.
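The per-user throttling mentioned above (queries per minute, with some burst allowance) is commonly implemented as a token bucket. This sketch, with invented rates, also makes the judo caveat concrete: an attacker who burns a legitimate user’s tokens locks that user out.

```python
import time

class TokenBucket:
    """Minimal per-user query throttle: `rate` queries per second,
    bursting up to `capacity`. Illustrative sketch only; the clock is
    injectable so the behavior can be tested deterministically."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens, self.last = capacity, clock()

    def allow(self):
        # Refill tokens for the time elapsed since the last query.
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # query refused: budget exhausted
```

Note that `allow()` returns `False` for everyone sharing the bucket once it is drained, which is exactly how an attacker turns throttling into a denial of service against the legitimate user.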
Database Activity Monitoring: One of the most popular database protection tools on the market, Database Activity Monitoring (DAM) alerts on database misuse. These platforms inspect incoming queries to see whether they violate policy. DAM has several methods for detecting bad queries, with examination of query metadata (user, time of day, table, schema, application) most common. Some DAM platforms offer behavioral monitoring by setting a user behavior baseline to define ‘normal’ and alerting when users deviate. Many vendors offer SQL injection detection by inspecting the contents of the WHERE clause for known attack signatures. Most DAM products are deployed in monitor-only mode, alerting when policy is violated. Some also offer an option to block malicious queries, either through an agent or by signaling a reverse proxy on the network. Database monitoring is a popular choice because it combines a broad set of functions, including configuration analysis and other database security and compliance tools. Database Firewalls: We may think of SELECT as a simple statement, but some variations are not simple at all. Queries can get quite complex, enabling users to do all sorts of operations – including malicious actions which can confuse the database into performing undesired operations. Every SQL query (SELECT, INSERT, UPDATE, CREATE, etc.) has dozens of different options, allowing hundreds of variations. Combined with different variables in the FROM and WHERE clauses, they produce thousands of permutations; malicious queries can hide in this complexity. Database firewalls are used to block malicious queries by sitting between the application server and the database. They work by understanding both legitimate query structures and which query structures the application is allowed to use.
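The metadata checks DAM performs can be sketched in a few lines. The policy contents here (user names, tables, allowed hours) are invented for illustration; commercial DAM products use far richer policy languages and also inspect query contents.

```python
from datetime import time as dtime

# Hypothetical DAM-style metadata policy: which tables a user may touch,
# and during which hours. All names and hours are invented examples.
POLICY = {
    "report_ro": {"tables": {"orders", "customers"},
                  "hours": (dtime(8, 0), dtime(18, 0))},
}

def check(user, table, at):
    """Return a list of policy violations for one query; empty = clean."""
    rules = POLICY.get(user)
    if rules is None:
        return [f"unknown user {user!r}"]
    violations = []
    if table not in rules["tables"]:
        violations.append(f"{user!r} may not touch {table!r}")
    start, end = rules["hours"]
    if not (start <= at <= end):
        violations.append(f"{user!r} active outside allowed hours")
    return violations
```

In monitor-only mode the violation list feeds an alert; in blocking mode a non-empty list would cause the agent or proxy to drop the query.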
Database firewalls whitelist and blacklist queries for fine-grained filtering of incoming database requests, blocking non-compliant queries. This shrinks the vast set of possible queries to a small handful of allowed queries. Contrast this against the more generic approach of database monitoring: alerting on internal user misuse and detecting SQL injection. DAM is excellent for known attack signatures and suspect behavior patterns, but database firewalls reduce the threat surface of possible attacks by only allowing known query structures to reach the database, leaving a greatly reduced set of possible complex queries or defect exploitations. Web Application Firewall: We include web application firewalls (WAF) in this list because they block known SQL injection attacks and offer some capabilities to detect database probing. For the most part they do not address database denial of service attacks, other than blocking specific queries or access to network ports external users should not see. Application and Database Abstraction
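The whitelisting approach described above can be sketched by reducing each query to a structural fingerprint, with literals replaced by placeholders, and passing only queries whose fingerprint is already known. Real database firewalls parse SQL properly; the regex normalization here is only illustrative, and the whitelisted query is an invented example.

```python
import re

def fingerprint(sql):
    """Reduce a query to its structure: lowercase, replace string and
    numeric literals with '?', collapse whitespace. A crude sketch of
    what a real firewall does with a proper SQL parser."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals -> ?
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals -> ?
    return re.sub(r"\s+", " ", s)

# Structures the application is known to issue (invented example).
WHITELIST = {
    fingerprint("SELECT id, total FROM orders WHERE customer_id = 42"),
}

def allowed(sql):
    return fingerprint(sql) in WHITELIST
```

The same query with a different customer id fingerprints identically and passes, while a tacked-on `OR 1=1` changes the structure and is blocked, which is the point: injection hides in literals, not in new whitelisted structures.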


Incite 7/23/2013: Sometimes You Miss

The point of sending the kids to sleepaway camp is that they experience things they normally wouldn’t. They expand their worldviews, meet new people, and do things they might not normally do under the watchful (and at times draconian) eyes of their parents. As long as it’s legal and appropriate I’m cool. We got a letter from XX1 yesterday. The Boss and I really treasure the letters we get because they give us some comfort of knowing that they are 1) still alive, and 2) having fun. All the kids go to Hershey Park at the end of their first month at camp. So I asked in one of my daily messages: what rides did she go on? The letter told me she went on the SooperDooperLooper and also the Great Bear. Two pretty intense roller coasters. Wait, what? When we went to Six Flags over Georgia a few years ago, I spent the entire day coercing her to go on a very tame wooden coaster. I had to bribe her with all sorts of things to get her on the least threatening ride at Universal last year. I just figured she’d be one of those kids who aren’t comfortable on thrill rides. I was wrong. Evidently she loved the rides, and is now excited to go on everything. She overcame her fears and got it done, without any bribes from me. Which is awesome. And I missed it. I was with XX2 when she rode her first big coaster. But I missed when XX1 inevitably had second thoughts in line, the negotiations to keep her in line, the anticipation of the climb, the screaming, and then the sense of satisfaction when the ride ended. I was kind of bummed. But then I remembered it’s not my job to be there for absolutely everything. My kids will live their own lives and do things in their own time. And sometimes I won’t be there when that time comes. As long as they get the experiences and can share them with me later, that needs to be enough. So it is. That doesn’t mean I won’t become a Guilt Ninja when she gets home. But I’ll let her off the hook, at a cost.
We will need to make a blood oath to ride all the coasters when we go to Orlando next summer. Me, my girls, and a bunch of roller coasters. I don’t think it gets much better than that… –Mike

Photo credit: “Great Bear 2” originally uploaded by Steve White

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • The Endpoint Security Buyer’s Guide – Endpoint Hygiene: Reducing Attack Surface; Anti-Malware, Protecting Endpoints from Attacks; Introduction
  • Continuous Security Monitoring – The Attack Use Case; Classification; Defining CSM; Why. Continuous. Security. Monitoring?
  • Database Denial of Service – Attacks; Introduction
  • API Gateways – Implementation; Key Management; Developer Tools
  • Security Analytics with Big Data – Deployment Issues; Integration; New Events and New Approaches; Use Cases; Introduction

Newly Published Papers

  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment
  • Quick Wins with Website Protection Services
  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution

Incite 4 U

Sideshow Bob: One of the advances big data clusters offer SIEM is the capability to collect more data – particularly as vendors begin to capture all network traffic rather than a small (highly filtered) subset. As Mike likes to say, that’s how you react faster and better. But stored data is of little use unless we do something with it – such as extracting actionable intel. This is why I stress that you need to stop thinking about “big data” as a lot of data – big data offers a fully customizable technology platform that can help you derive information from the data you collect. Don’t be awed by the size – it’s what you do with it that counts.
There’s a joke in there somewhere… A big data platform can also handle much larger data sets, but that’s a sideshow to the main event. – AL

Pick a number, any number: I have long argued that we lack the fundamental structural frameworks to even consider measuring economic losses due to cybercrime. We can barely measure losses associated with physical theft – never mind IT. For example, how do you define downtime or response time, so you can measure its cost? I’ll bet your definition doesn’t match that of the person who sat next to you at your last conference, and neither of you really measures it consistently over the course of a year to produce valid statistics. This is why I slam all the Ponemon loss surveys – no matter how well the survey is built, there aren’t enough people in the world actually tracking these things to provide meaningful data. So it comes as no surprise that a report released by McAfee and the Center for Strategic and International Studies pegs cybercrime losses at somewhere between $300B and $1T. I give them props for honesty – they cite the problems I mentioned and more. But not even governments can make decisions based on ranges like that. Maybe we should just say “bigger than a breadbox” and be done with it. – RM

Make that a triple mocha grande exfiltration: One of our favorite Canadians (tied with Mr. Molson), Dave Lewis is now writing a blog for CSO Online, and doing a great job. Not that I’m surprised – Dave is not just an epic beard with security kung fu. The dude can write and come up with cool analogies, such as how data exfiltration is like a coffee ring on the table. Huh? Dave points out that, like that inexplicable coffee ring, sometimes data is just lost. Then he goes through the fundamentals of incident response and data protection, even telling a story or two.


Continuous Security Monitoring: The Attack Use Case

We have discussed why continuous security monitoring is important, how we define CSM, and how you should classify your assets to figure out the most appropriate levels of monitoring. Now let’s dig into the problems you are trying to solve with CSM. At the highest level we generally see three discrete use cases:

  • Attacks: This is how you use security monitoring to identify a potential attack and/or compromise of your systems. This is the general concept we have described in our monitoring-centric research for years.
  • Change: An operations-centric use case is to monitor for changes, both to detect unplanned (possibly malicious) changes, and to verify that planned changes complete successfully.
  • Compliance: Finally, there is the check-the-box use case, where a mandate or guidance requires monitoring and/or scanning technology; less sophisticated organizations have no choice but to do something. But keep in mind the mandated product of this initiative is documentation that you are doing something – not necessarily an improved security posture, identification of security issues, or confirmation of activity.

In this post and the next we will dig into these use cases, describe the data sources applicable to each, and deal with the nuances of making CSM work to solve each problem. Before we dig in we need to make a general comment about these use cases. Notice that they are listed from broadest and most challenging to narrowest and most limited. The attack use case is bigger, broader, and more difficult than change management; compliance is the least sophisticated. Obviously you can define more granular use cases, but these three cover most of what people expect from security monitoring. So if we missed something we are confident you will let us know in the comments. This ordering is a reversal of the order in which most organizations adopt security technologies, and correlates to security program sophistication.
Many start with a demand to achieve compliance, then grow an internal control process to deal with changes — typically internal — and finally are ready to address potential attacks, which entails detecting changes to device posture. Of course the path to security varies widely — many organizations jump right to the attack use case, especially those under immediate or perpetual attack. We made a specific decision to address the broadest use case first — largely because even if you are not yet looking for attacks, you will need to soon enough. So we might as well lay out the entire process, and then show how you can streamline your implementation for the other use cases. The Attack Use Case As we start with how you can use CSM to detect attacks, let’s begin with NIST’s official definition of Continuous Security Monitoring: Information security continuous* monitoring (ISCM) is maintaining ongoing* awareness of information security, vulnerabilities, and threats to support organizational risk management decisions. *The terms “continuous” and “ongoing” in this context mean that security controls and organizational risks are assessed, analyzed and reported at a frequency sufficient to support risk-based security decisions as needed to adequately protect organization information. Data collection, no matter how frequent, is performed at discrete intervals. NIST 800-137 (PDF) Wait, what? So to NIST ‘continuous’ doesn’t actually mean continuous, but instead a “frequency … needed to adequately protect organization information.” Basically, your monitoring strategy should be as continuous as it needs to be. A bit like the way advanced attackers are only as advanced as they need to be. We like this clarification, which reflects the fact that some assets need to be monitored at all times, and others not so much.
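"As continuous as it needs to be" can be made concrete by tying assessment frequency to asset criticality. The tiers and intervals below are invented for illustration; NIST 800-137 deliberately prescribes no specific frequencies.

```python
# Toy scheduler: critical assets get assessed far more often than
# standard ones. Tier names and intervals are invented examples.
INTERVALS = {          # seconds between assessments
    "critical": 60,    # near-real-time
    "important": 3600, # hourly
    "standard": 86400, # daily
}

def schedule(assets):
    """assets: {name: tier} -> (interval, name) pairs, most urgent first."""
    return sorted((INTERVALS[tier], name) for name, tier in assets.items())
```

The point of the sketch is the shape of the decision, not the numbers: monitoring effort follows the classification work done earlier in this series.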
But let’s be a bit more specific about what you are trying to identify in this use case:

  • Determine vulnerable (and exploitable) devices
  • Prioritize remediating those devices based on which are at the greatest risk of compromise
  • Identify malware in your environment
  • Detect intrusion attempts at all levels of your environment
  • Gain awareness of and track adversaries in your midst
  • Detect exfiltration of sensitive data
  • Identify the extent of any active compromise and provide information useful in clean-up
  • Verify clean-up and elimination of the threat

Data Sources

To address this laundry list of goals, you need the following data sources:

  • Assets: As we discussed in classification, you cannot monitor what you don’t know about, and without knowing how critical an asset is you cannot choose the most appropriate way to monitor it. As we described in our Vulnerability Management Evolution research, this requires an ongoing (and dare we say “continuous”) discovery capability to detect new devices appearing on your network, and then a mechanism for profiling and classifying them.
  • Network Topology/Telemetry: Next you need to understand the network layout, specifically where critical assets reside. Assets which are accessible to attackers are of course higher priority than inaccessible assets, so it is quite possible to have a device which is technically vulnerable and contains critical data, but is less important than a less-valuable asset which is clearly in harm’s way.
  • Events/Logs: Any technological device generates log and event data. This includes security gear, network infrastructure, identity sources, data center servers, and applications, among others. Patterns in the logs may indicate attacks if you know how to look; logs also offer substantiation and forensic evidence after an attack.
  • Configurations: Configuration details and unauthorized configuration changes may also indicate attacks. Malware generally needs to change device configuration to produce its desired behavior.
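Of the data sources above, Events/Logs is the easiest to illustrate: many attack indicators are just patterns over event streams. A minimal sketch flagging sources with a burst of failed logins; the event format is hypothetical, and real systems would also window by time.

```python
from collections import Counter

def failed_login_bursts(events, threshold=5):
    """events: iterable of (timestamp, source_ip, outcome) tuples
    (a hypothetical log format). Returns the set of source IPs with
    at least `threshold` failed logins."""
    fails = Counter(ip for _, ip, outcome in events if outcome == "FAIL")
    return {ip for ip, n in fails.items() if n >= threshold}
```

A rule like this is crude on its own, but it shows why the same log stream serves both detection (alert on the burst as it happens) and forensics (reconstruct the burst afterward).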
  • Vulnerabilities: Known vulnerabilities provide another perspective on device weakness, indicating which devices can be attacked by exploits in the wild.
  • Device Forensics: An advanced data source would be very detailed information (including memory, disk images, etc.) about what’s happening on each monitored device, used to identify indicators of compromise and facilitate investigation. This kind of information can be invaluable for confirming a compromise.
  • Network Forensics: Capturing the full packet stream enables replay of traffic into and out of devices. This is very useful for identifying attack patterns, and also for forensics after an attack.

That is a broad list of data, but — depending on the sophistication of your CSM process — you may not need all these sources. More data is better than less data, but everyone needs to strike a balance between capturing


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.