The Mobile App Sec Triathlon

A quick announcement for those of you interested in mobile application security: our very own Gunnar Peterson is putting on a three-day class with Ken van Wyk this coming November. The Mobile App Sec Triathlon will provide a cross-platform look at mobile application security issues and spotlight critical areas of concern. The last two legs of the Triathlon cover specific areas of Android and iOS security that are commonly targeted by attackers. You’ll be learning from some of the best – Ken is well known for his work in secure coding, and Gunnar is one of the world’s best at identity management. Classes will be held at the eBay/PayPal campus in San Jose, California. Much more information is on the website, including a picture of Gunnar with his ‘serious security’ face, so check it out. If you have specific questions, or want to make sure specific topics are covered during the presentation, email info@mobileappsectriathlon.com.


Detecting and Preventing Data Migrations to the Cloud

One of the most common problems facing modern organizations is managing data migrating to the cloud. The very self-service nature that makes cloud computing so appealing also makes unapproved data transfers and leakage possible. Any employee with a credit card can subscribe to a cloud service and launch instances, deliver or consume applications, and store data on the public Internet. Many organizations report that individuals or business units have moved (often sensitive) data to cloud services without approval from, or even notification to, IT or security.

Aside from traditional data security controls such as access controls and encryption, there are two other steps to help manage unapproved data moving to cloud services:

  • Monitor for large internal data migrations with Database Activity Monitoring (DAM) and File Activity Monitoring (FAM).
  • Monitor for data moving to the cloud with URL filters and Data Loss Prevention (DLP).

Internal Data Migrations

Before data can move to the cloud it needs to be pulled from its existing repository. Database Activity Monitoring can detect when an administrator or other user pulls a large data set or replicates a database. File Activity Monitoring provides similar protection for file repositories such as file shares. These tools provide early warning of large data movements; even if the data never leaves your internal environment, this is the kind of activity that shouldn’t occur without approval. They can also be deployed within the cloud (public and/or private, depending on architecture), so they can help with inter-cloud migrations as well.

Movement to the Cloud

While DAM and FAM indicate internal movement of data, a combination of URL filtering (web content security gateways) and DLP can detect data moving from the enterprise into the cloud. URL filtering allows you to monitor (and prevent) users connecting to cloud services. The administrative interfaces for these services typically use different addresses than the consumer side, so you can distinguish between someone accessing an admin console to spin up a new cloud-based application and a user accessing an application already hosted with the provider. Look for a tool that offers a list of cloud services and keeps it up to date, rather than one where you need to create a custom category and manage the destination addresses yourself. Also look for a tool that distinguishes between different users and groups, so you can allow access for different employee populations.

For more granularity, use Data Loss Prevention. DLP tools look at the actual data/content being transmitted, not just the destination, and can generate alerts (or block) based on the classification of the data. For example, you might allow corporate private data to go to an approved cloud service, but block the same content from migrating to an unapproved service. As with URL filtering, look for a tool that is aware of the destination address and comes with pre-built categories; since all DLP tools are aware of users and groups, that should come by default.

This combination isn’t perfect, and there are plenty of scenarios where these tools might miss activity, but that is a whole lot better than completely ignoring the problem. Unless someone is deliberately trying to circumvent security, these steps should capture most unapproved data migrations.
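To make the URL-filtering and large-transfer ideas concrete, here is a minimal Python sketch. Everything in it is an assumption for illustration: the CSV proxy-log format (user, dest_host, bytes_out columns), the admin-console domain list, and the 50 MB threshold. A real gateway or DLP product maintains these categories, inspects actual content, and does far more than count bytes.

    import csv
    from collections import defaultdict

    # Hypothetical admin-console destinations. A real web content security
    # gateway maintains and updates this category for you.
    ADMIN_CONSOLE_DOMAINS = {
        "console.aws.amazon.com",
        "portal.azure.com",
        "console.cloud.google.com",
    }

    # Invented threshold: flag any user sending more than ~50 MB to one host.
    UPLOAD_ALERT_BYTES = 50 * 1024 * 1024

    def scan_proxy_log(path):
        """Scan a proxy log (assumed CSV with a header row containing
        user, dest_host, and bytes_out) for cloud admin-console access
        and unusually large outbound transfers."""
        bytes_by_user_host = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                user, host = row["user"], row["dest_host"]
                if host in ADMIN_CONSOLE_DOMAINS:
                    print(f"ALERT: {user} accessed cloud admin console {host}")
                bytes_by_user_host[(user, host)] += int(row["bytes_out"])
        for (user, host), sent in bytes_by_user_host.items():
            if sent > UPLOAD_ALERT_BYTES:
                print(f"ALERT: {user} sent {sent:,} bytes to {host}")

This only approximates the ‘large movement’ signal described above; real DLP classifies the content itself, which is what lets you allow a transfer to an approved service while blocking the same data going elsewhere.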


Fact-Based Network Security: Operationalizing the Facts

In the last post, we talked about outcomes important to the business, and what types of security metrics can help make decisions to achieve those outcomes. Most organizations do pretty well with the initial gathering of this data. You know, when the reports are new and the pie charts are shiny. Then the reality – of the amount of work and commitment required to implement a consistent measurement and metrics process – sets in. Which is when most organizations lose interest and the metrics program falls by the wayside. Of course, if there is a clear and tangible connection between gathering data and doing your job better, you make the commitment and stick with it. So it’s critical, especially within the early phases of a fact-based network security process, to get a quick win and capitalize on that momentum to cement the organization’s commitment to this model. We’ll discuss that aspect later in the series.

But consistency is only one part of implementing this fact-based network security process. In order to get a quick win and warrant ongoing commitment, you need to make sense of the data. This issue has plagued technologies such as SIEM and Log Management for years – having data does not mean you have useful and valuable information. We want to base decisions on facts, not faith. To do that, you need to make gathering security metrics an ongoing and repeatable process, and ensure you can interpret the data efficiently. The keys to both are automation and visualization.

Automating Data Collection

Now that you know what kind of data you are looking for, can you collect it? In most cases the answer is yes. From that epiphany, the focus turns to systematically collecting the types of data we discussed in the last post. Data sources like device configuration, vulnerability, change information, and network traffic can be collected systematically and reused across use cases. There is usually a question of how deeply to collect data – whether you need to climb the proverbial stack to gather application and database events/logs/transactions, etc. In general, we Securosis folk advocate collecting more rather than less data. Not all of it may be useful now (or ever), but once you miss the opportunity to capture data you don’t get it back. It’s gone.

Of course, which data sources to leverage depends on the problems you are trying to solve. Remember, data does not equal information, and as much as we’d like to push you to capture everything, we know it’s not feasible. So balance data breadth and fidelity against cost and storage realities. Only you can decide how much data is enough to answer your prioritization questions. We tend to see most organizations focus on network, security, and server logs/events – at least initially – mostly because that information is plentiful and largely useful in pinpointing attacks and substantiating controls.

It’s beyond the scope of this paper to discuss the specifics of different platforms for collecting and analyzing this data, but you should already know the answer is not Excel. There is just too much data to collect and parse, so at minimum you need some kind of platform to automate this process.
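As a rough illustration of what such a platform does at its core, here is a minimal Python sketch that normalizes raw log lines into structured events you can query. The firewall log format, field names, and sample lines are all invented for this example; a real collection platform handles dozens of formats, plus scale and storage, which is exactly why Excel doesn’t cut it.

    import re
    from collections import Counter

    # Invented firewall log format. A real collection platform normalizes
    # many formats into a common event schema, at far greater scale.
    LINE = re.compile(
        r"(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<action>ACCEPT|DROP) "
        r"src=(?P<src>\S+) dst=(?P<dst>\S+)"
    )

    def collect(lines):
        """Turn raw log lines into structured event records."""
        for line in lines:
            match = LINE.match(line)
            if match:
                yield match.groupdict()

    sample = [
        "Oct 12 14:03:11 fw01 DROP src=10.0.0.5 dst=203.0.113.9",
        "Oct 12 14:03:12 fw01 ACCEPT src=10.0.0.7 dst=198.51.100.2",
        "Oct 12 14:03:14 fw01 DROP src=10.0.0.5 dst=203.0.113.9",
    ]

    # Once events are structured, simple questions become one-liners:
    # which internal hosts generate the most dropped connections?
    drops = Counter(e["src"] for e in collect(sample) if e["action"] == "DROP")
    print(drops.most_common(5))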
Visualization

Next we come up against the seemingly intractable issue of making sense of the data you’ve collected. Here we see (almost every day) that a picture really is worth thousands of words (or a stream of thousands of log events). In practice, pinpointing anomalies and other suspicious areas that demand attention is much easier visually, so dashboards, charts, and reports become a key part of operationalizing metrics. Right, those cool graphics available in most security management tools are more than eye candy. Who knew?

So which dashboards do you need? How many? What should they look like? Of course it depends on which questions you are trying to answer. At the end of this series we will walk through a scenario to describe (at a high level, of course) the types of visualizations that become critical to detecting an issue, isolating its root cause, and figuring out how to remediate it. But regardless of how you choose to visualize the data you collect, you need a process of constant iteration and improvement. It’s that commitment thing again. In a dynamic world, things constantly change, which means your alerting thresholds, dashboards, and other decision-making tools must evolve accordingly. Don’t say we didn’t warn you.

Making Decisions

As we continue through our fact-based network security process, you now have a visual mechanism for pinpointing potential issues. But if your environment is like others we have seen, you’ll have all sorts of options for what to do next. We come full circle, back to defining what is important to your organization. Some tools can track asset value and build visuals based on those values. Understand that value in this context is basically a subjective guess at what something is worth; someone could arbitrarily decide that a print server is as important as your general ledger system. Maybe it is, but this gets back to the concept of ‘relative value’ from earlier in the series. This relative understanding of an asset or business system’s value is the key to prioritizing your activities. If the visualization shows something of significant value at risk, fix it. Really.

We know that sounds too simple, and may even be so obvious it’s insulting. We mean no offense, but most organizations have no idea what is important to them. They collect very little data, and thus have little understanding of what is really exposed or potentially under attack. So they have no choice but to fly blind and address whatever issue is next on the list, over and over again. As we have discussed, that doesn’t work out very well, so we need a commitment to collecting and then visualizing data, in order to focus scarce resources on the issues that actually matter.
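To illustrate how relative asset value drives prioritization, here is a toy Python sketch. The asset values, findings, and exposure scores are all invented for this example; in practice the values come from your own (admittedly subjective) estimates of business importance.

    # Invented relative asset values (your business-value estimates go here).
    ASSET_VALUE = {"general-ledger": 10, "web-store": 8, "print-server": 1}

    # Invented findings; exposure is a 1-10 severity/exploitability score.
    findings = [
        {"asset": "print-server", "issue": "default admin password", "exposure": 9},
        {"asset": "general-ledger", "issue": "unpatched remote exploit", "exposure": 7},
        {"asset": "web-store", "issue": "weak TLS configuration", "exposure": 3},
    ]

    # Prioritize by relative asset value times exposure, highest first.
    for f in sorted(findings,
                    key=lambda f: ASSET_VALUE[f["asset"]] * f["exposure"],
                    reverse=True):
        score = ASSET_VALUE[f["asset"]] * f["exposure"]
        print(f"{score:3d}  {f['asset']}: {f['issue']}")

Note what the ordering shows: even a severe issue on the low-value print server ranks below a moderate issue on the general ledger system, which is the whole point of weighting by relative value rather than working the list top to bottom.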


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.