Information Security vs. Information Survivability: Retaking Our Vocabulary

Chris Hoff and I (and a few others, like Adrian Lane and Gunnar Peterson) have started waxing philosophic quite a bit lately. From debates over Jericho, to emotional rants on staying motivated in security, to the security vs. survivability debate, we’ve strayed from our more practical advice and wandered into the land of coffee shops, security jazz, and stupid black berets on our heads. While I realize many of you just want advice on how to secure virtualization, or which DLP tool to buy, these discussions are more than simple intellectual masturbation or self-promotional BS. We work in a complex profession that’s constantly challenged to manage the baggage of the past while preparing for a nebulous future. It’s just as important to gut check how we’re doing today and plan for the future as it is to keep the bad guys out today.

I’m working on a longer post for tomorrow on security and innovation, but today Hoff posted his primer on information security vs. information survivability. In it, he uses part of a discussion we had on the phone earlier this week:

It’s very important to recognize that I’m not saying that Information Security is “wrong” or that the operational practitioners that are in the trenches every day fighting what they perceive to be the “good fight” are doing anything wrong. However, and as Rich Mogull so eloquently described, we’ve lost the language to describe what it is we should be doing, and the title, scope, definition and mission of “Information Security” has not kept up with the evolution of business, culture, technology or economics.

I’d like to elaborate for a moment on what I said during that call. I (and I believe Chris) firmly believe that information security is the correct term for what we do. “Survivability” conjures images in my head of scrambling, half-starved proto-mammals clinging to the underbrush as the predators roam the jungle. Survival is little more than the process of not dying. A noble goal, but sometimes a half-rodent wants a little more out of life. “Security” brings images of the predators. No, scrap that, not a mangy predator forever hunting for that next meal, but the farmer (with a well armed security force) who merely needs to wander over to the barn with an axe for a full belly. According to the dictionary, security is the state of being free from danger or threat. The definition of survivable is “not fatal”.

The problem is that we’ve lost control of our own vocabulary. “Information security” as a term has come to define merely a fraction of its intended scope. Thus we have to use terms like security risk management and information survivability to re-define ourselves, despite having a completely suitable term available to us. It’s like the battle between the words “hacker” and “cracker”. We’ve lost that fight with “information security”, and thus need to use new language to advance the discussion of our field.

When Chris, myself, and others talk about “information survivability” or whatever other terms we come up with, it’s not because we’re trying to redefine our practice or industry; it’s because we’re trying to bring security back to its core principles. Since we’ve lost control of the vocabulary we should be using, we need to introduce a new vocabulary just to get people thinking differently. To me this is all security, but I fully recognize that to break us out of bad habits, we need to break in with some new language to retake control of our profession and mission.


What Drives Security Innovation?

According to the time tracking feature of my Wii (which you can’t disable- nice parental feature), I played 3 hours and 46 minutes of Guitar Hero III last night after picking it up at Target. I have to fully admit I was skeptical of the whole Guitar Hero thing when it first came out, but it’s incredibly addictive. And not just when I’m drunk at a Christmas party. Not that I’d drink at a Christmas party and play video games. That wouldn’t be proper behavior for a non-practicing Jew.

I’ve been gaming my entire life but have definitely strayed the past few years. Sure, there was plenty of compelling game content, but nothing really innovative. I don’t have the time for something like World of Warcraft, and some of the coolest games were so difficult that those of us mere mortals who just wanted to pick them up for an hour or two a week were totally excluded. Then comes the Wii, where the simplest of games take no learning but entertain for hours on end. Sure, the graphics aren’t that great, but that’s not the point. I’m loving that I can pick it up for 15 minutes and actually get something out of it; be it a quick game of tennis, a few rounds of golf, or a couple of songs on the guitar. Nintendo rethought gaming and made it fun again. For everyone, not just the hard core.

Oh wait, this is my security blog. Got it, so what the heck does the Wii have to do with security? Other than fuzzing the browser? Innovation my friends, innovation. (This post is inspired by some conversations over the past few months with Chris Hoff, based on his disruptive innovation series.)

Nintendo knew they couldn’t beat Sony and Microsoft head-on, so they tossed out the rules and changed the game. By focusing on casual gaming and a younger audience they didn’t fight for existing market share- they grew the entire market. Innovation in business is nearly always driven by the same need- competitive advantage. Either you innovate to create it, innovate to regain it, or innovate to increase efficiency and thus profitability. Nintendo made two major breaks with the rest of the industry- they designed a console they could sell at a profit out of the gate (MS and Sony lose money on every box and make it up with games). Then they changed the entire game interaction mechanism to appeal to a wider audience.

But security follows different rules. We have very little control of the environment around us in security. As much as we like to get ahead of the game, we are responsive by the nature of our mission. Innovation is driven by three needs:

  • Improving Efficiency: The one driver we share with the business. By increasing efficiency, we reduce costs and improve effectiveness, thus contributing to the bottom line.
  • Responding to Threats: The bad guys are just like a business- they innovate to improve the top line, but at the expense of our bottom line. We can never fully predict their innovation, and as they come up with new attacks we are forced to respond with new defenses.
  • Responding to Business Innovation: Just as the bad guys are looking for competitive advantage, so are the businesses we support. They adopt new technologies before we’re fully able to understand and protect them. Just when we have our program operationalized, someone comes up with a new business initiative (Web 2.0 anyone?) or internal technology.

Most pundits (and startups, and investors, and…) fail to accurately predict the future of security because they fail to account for all three drivers.
Most often, they look at pure threats without accounting for either efficiency or business innovation. Or they look at business innovation solely as a threat, rather than an opportunity for security innovation (or the related problem- by the time they recognize the business innovation it’s already in production, and thus a new risk/threat).

When you look at security innovation, either to predict next year’s budget or to predict the market in three years, study the world around you. Understand your business and general technology trends as deeply as the threats. Pay particular attention to business technology trends, like the consumerization of IT, that change the game. In many cases we can make decisions today that make our lives much easier when that business or threat innovation is in full swing. It’s your opportunity to get ahead of the curve and look like a freaking genius.


Data Security Lifecycle- Technologies, Part 2

In our last post on this topic we covered the technologies that encompass the Create and Store stages of the Data Security Lifecycle. Today we’ll detail the tools for Use and Share. As a reminder, since we’ll be delving into each technology in more detail down the road, these posts will just give a high-level overview. There are also technologies used for data security, such as data-in-motion encryption and enterprise key management, that fall outside the lifecycle and will be covered separately.

Use

Activity Monitoring and Enforcement: Probably one of the most underutilized tools in the security arsenal. Activity Monitoring and Enforcement is more than simply collecting audit logs (although it can include that); it uses more advanced techniques to capture all user activity within the application or database context.

  • Database Activity Monitoring: Monitoring all database activity, including all SQL activity. Can be performed through network sniffing of database traffic, agents installed on the server, or via external monitoring, typically of transaction logs. Many tools combine monitoring techniques, and network-only monitoring is not recommended. DAM tools are managed externally to the database to provide separation of duties from database administrators (DBAs). All DBA activity can be monitored without interfering with their ability to perform job functions. Tools can alert on policy violations, and some tools can block certain activity (a minimal sketch of this kind of policy-based alerting follows this list).
  • Application Activity Monitoring: Similar to Database Activity Monitoring, but at the application level. Third-party tools can integrate with a number of application environments, such as standard web application platforms, SAP, and Oracle, and monitor user activity at the application level. As with DAM, tools can use network monitoring or local agents, and can alert and sometimes block on policy violations. Many Application Activity Monitoring tools are additional products or features from Database Activity Monitoring vendors.
  • Endpoint Activity Monitoring: Watching all user activity on a workstation or server. Includes monitoring of network activity, storage/file system activity, and system interactions like cut and paste, mouse clicks, application launches, etc. Provides deeper monitoring than endpoint DLP/CMF tools that focus only on content that matches policies. Capable of blocking activity- such as launching a P2P application or pasting content from a protected directory into an instant message. Extremely useful for auditing administrator activity on servers. Will eventually integrate with endpoint DLP/CMF.
  • File Activity Monitoring: Monitoring access and use of files in enterprise storage, such as file servers, SAN, and NAS. Gives an enterprise the ability to audit all file access and generate reports (which can sometimes aid compliance reporting). Capable of independently monitoring even administrator access and can alert on policy violations.
  • Portable Device Control: Tools to restrict access to portable storage such as USB drives and DVD burners. Also capable of allowing access but auditing file transfers and sending that information to a central management server. Some tools integrate with encryption to provide dynamic encryption of content passed to portable storage. Will eventually be integrated into endpoint DLP/CMF tools that can make more granular decisions based on the content, rather than blanket policies that apply to all data. Some DLP/CMF tools already include this capability.
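To make the “alert on policy violations” idea concrete, here is a minimal sketch of the kind of rule a DAM tool evaluates against captured SQL. This is not any vendor’s implementation; the audit event format, table names, and privileged accounts are hypothetical, and real products capture activity through sniffing, agents, or transaction logs rather than a script like this.

```python
# Minimal sketch of DAM-style policy alerting (illustrative only).
import re
from dataclasses import dataclass

SENSITIVE_TABLES = {"customers", "card_numbers"}   # hypothetical sensitive tables
DBA_ACCOUNTS = {"sys", "dba_admin"}                # hypothetical privileged accounts

@dataclass
class AuditEvent:
    timestamp: str
    user: str
    statement: str

def violates_policy(event: AuditEvent) -> bool:
    """Flag queries against sensitive tables made by privileged (DBA) accounts."""
    tables = set(re.findall(r"\bfrom\s+(\w+)", event.statement, re.IGNORECASE))
    return event.user.lower() in DBA_ACCOUNTS and bool(tables & SENSITIVE_TABLES)

def monitor(events):
    for event in events:
        if violates_policy(event):
            # A real DAM tool would raise an alert to a console or block the session.
            print(f"ALERT: {event.user} touched sensitive data at {event.timestamp}: {event.statement}")

if __name__ == "__main__":
    monitor([
        AuditEvent("2007-10-29T10:15:00", "dba_admin", "SELECT pan FROM card_numbers"),
        AuditEvent("2007-10-29T10:16:00", "app_user", "SELECT name FROM customers WHERE id = 7"),
    ])
```

The point of the sketch is the separation of duties: the policy lives and is evaluated outside the database, so DBAs can do their jobs while their access to sensitive data still generates an alert.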
Endpoint DLP: Endpoint data loss prevention/content monitoring and filtering tools monitor and restrict usage of data through content analysis and centrally administered policies. While current capabilities vary widely among products, tools should be able to monitor what content is being accessed by an endpoint, any file storage or network transmission of that content, and any transfer of that content between applications (cut/paste). For performance reasons endpoint DLP is currently limited to a subset of enforcement policies (compared to gateway products), and endpoint-only products should be used in conjunction with network protection in most cases.

Rights Management: Rights are assigned and implemented in the Create and Store phases, while policies are enforced in the Use phase. Rights are managed by labels, metadata, and tagging- as opposed to the more complex logic enforced by logical controls.

  • Label Security: Access to database objects (table, column, row) is enforced based on the user/group and the label. For example, in a healthcare environment employees without manager access can be restricted from seeing the records of famous patients that are labeled as sensitive (see the simplified sketch at the end of this post).
  • Enterprise DRM: Discussed more extensively in Part 1, Enterprise DRM enforces complex use rights based on policies assigned during creation. During the Use phase, EDRM limits the actions a user can perform with a given piece of content (typically a file). For example, the user may be able to add, edit, and delete parts of the document but not cut and paste to another document. A user might be allowed to view the document, but not print it, email it, or store it on a portable device.

Logical Controls: Logical controls expand the brute-force restrictions of access controls or EDRM, which are based completely on who you are and what you are accessing. Logical controls are implemented in applications and databases and add business logic and context to data usage and protection. While we expect to see logical controls for unstructured content, there are currently no technology implementations.

  • Object (Row) Level Security: Creating a rule set restricting use of a database object based on multiple criteria. For example, limiting a sales executive to only updating account information for accounts assigned to his territory. While you can always do this through queries, triggers, and stored procedures, some database management systems offer it as an enforcement feature applied to the database object, outside of having to manually add it to every query. Today most DBMSs offer this only for rows, but the feature is expected to expand to other database objects.
  • Structural Controls: Taking advantage of database design features to enforce security logic. For example, using the database schema to limit integrity attacks, or restricting connection pooling to improve auditability.
  • Application Logic: Enforcing security logic in the application through design, programming, or external enforcement. Today this needs to be implemented by the application itself, but over time certain types of logic may be enforced through external services or tools.

Application Security: Effective data
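As a simplified illustration of the Label Security and Object (Row) Level Security concepts above, here is a minimal sketch. The record layouts, labels, and territories are hypothetical; in practice this enforcement is a feature of the DBMS or the application layer, not standalone code.

```python
# Minimal sketch of label-based and row-level access decisions (illustrative only).
from dataclasses import dataclass

@dataclass
class PatientRecord:          # hypothetical record layout
    patient: str
    label: str                # e.g. "standard" or "sensitive"

@dataclass
class Account:                # hypothetical record layout
    name: str
    territory: str

def visible_records(records, user_role):
    """Label security: only managers may see records labeled 'sensitive'."""
    return [r for r in records if r.label != "sensitive" or user_role == "manager"]

def updatable_accounts(accounts, rep_territory):
    """Object (row) level security: a rep may only touch accounts in their territory."""
    return [a for a in accounts if a.territory == rep_territory]

if __name__ == "__main__":
    records = [PatientRecord("J. Public", "standard"), PatientRecord("Famous Patient", "sensitive")]
    print([r.patient for r in visible_records(records, user_role="clerk")])      # ['J. Public']

    accounts = [Account("Acme", "west"), Account("Initech", "east")]
    print([a.name for a in updatable_accounts(accounts, rep_territory="west")])  # ['Acme']
```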


Latest TidBITS Article Posted- Leopard Security

I just posted an explanation of Leopard Security (that’s Mac OS X 10.5 for you non-Apple geeks) up on TidBITS. It’s based on my original blog post here, but expanded and simplified to appeal to a more general audience. I realize I took some liberties with the explanations of buffer overflows, ASLR, vulnerabilities, and exploits, but I had to tailor the content for a less-security-geek audience. Check it out, and feel free to flame me here. I do believe that if everything works as advertised this is a very significant release. There are still some big holes (QuickTime, anyone?), but Apple seems to be taking security more seriously than in the past few versions.


Vormetric Encrypts IBM Databases. Sort Of.

IBM and Vormetric announced a deal yesterday where… well, I’ll let them say it:

LAS VEGAS, NV – (MARKET WIRE) – 10/18/2007 – Vormetric, Inc. today announced that it has partnered with IBM to deliver database encryption capabilities for DB2 on Windows, Linux and Unix. IBM will offer Vormetric’s highly acclaimed data security solution as part of its data server portfolio, addressing customer demand for increased protection of sensitive data. This new capability is delivered in IBM Database Encryption Expert, initially available for the new DB2 9.5 “Viper 2” data server.

First of all, I need to say I’m a big fan of Vormetric. They were the first distributed encryption product on the market, and watching what they’ve done has really helped me evolve my thinking on enterprise-class encryption. That said, I have a huge nit to pick with them over database encryption. Mostly because they don’t do it, at least as most people think about database encryption. Vormetric is a file encryption product. A good one, with some cool additional features like user and application level access and encryption controls, but they don’t do field-level database encryption.

Remember, I think encrypting the database files, especially when used with Database Activity Monitoring, is an extremely effective security control. But it doesn’t replace field-level encryption, not in the long run. The role of file-level encryption for databases is media protection first, with a little separation of duties second. It protects the database on disk and in backups. It also limits who can access the raw database files, but offers no protection against authorized users and administrators inside the database. The role of field (column) level encryption is to provide separation of duties within the database. You can protect sensitive fields from those who have database access, including protection against database administrators. Two kinds of encryption. Two different roles. Two different problems solved.

This is where I get annoyed with Vormetric’s (and now IBM’s) marketing. It confuses customers and tries to position file-level encryption for databases as superior, instead of admitting that it solves a different problem. They seem to refuse to admit that field-level encryption plays a valid role in protecting database data. I realize it’s the job of their marketing to best position their product, but it’s my job to cut through the marketing and give you practical advice. Here it is: Vormetric does file encryption, which is a good option for media protection. Field-level encryption is better for enforcing separation of duties, but since it’s hard to implement on certain systems you may need to start with file-level (preferably used with DAM) to buy you the time to migrate to field level. If you don’t need separation of duties, you don’t need field-level encryption, and file encryption is fine.

I don’t like marketing that could place customers at risk or is designed to confuse the market, even when I like the product being marketed.
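To make the file-level vs. field-level distinction concrete, here is a minimal sketch of field (column) level encryption, assuming the third-party Python cryptography package and its Fernet recipe. The column value and key handling are illustrative only; real deployments stand or fall on enterprise key management and on where decryption is permitted to happen.

```python
# Minimal sketch of field (column) level encryption: the application encrypts
# a sensitive value before it is written, so DBAs and anyone reading the raw
# files or backups see only ciphertext. Illustrative only; assumes the
# third-party 'cryptography' package and ignores real key management.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key comes from a key manager, not the app
cipher = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a single column value before INSERT/UPDATE."""
    return cipher.encrypt(value.encode())

def decrypt_field(token: bytes) -> str:
    """Decrypt only for users/applications authorized to see the cleartext."""
    return cipher.decrypt(token).decode()

stored = encrypt_field("4111-1111-1111-1111")   # what lands in the database column
print(stored)                                    # opaque ciphertext, useless to a DBA
print(decrypt_field(stored))                     # cleartext, only via the application
```

File-level encryption, by contrast, happens below the database: the files and backups are protected on media, but anyone with legitimate database credentials, including the DBA, still sees cleartext. That is the separation-of-duties gap the post describes.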


Network Security Podcast, Episode 81

Martin is on the road starting up his new job as a PCI auditor for Trustwave, so I made my best attempt to record the podcast. More than a few technical difficulties later, we finally completed recording. Sorry about the extra reverb- I’m still figuring out my setup and accidentally left it a little high. For the record, Audio Hijack Pro rocks and I regret trying to record without it.

The show is shorter tonight to account for Martin’s travel. We spend a fair bit of time talking about Apple products due to the upcoming release of OS X 10.5 and happenings in the world of the iPhone. I also chastise Martin for being in Denver and thinking the Rockies are just the big pointy things in the distance. I lived in Boulder for 16 years, and although I’m in Phoenix now I still tend to root for the old home teams. Except the Nuggets. Now if you’ll excuse me, I need to go pre-order Leopard for my Mac…

Show Notes:

  • OS X Leopard release and security features - The Apple Store
  • Rich’s commentary on Leopard
  • iPhone Metasploit package - HD Moore: Cracking the iPhone part 2.1
  • Rich’s commentary on the iPhone exploit
  • Apple opening iPhone, still scared of evil hax0rs
  • Russian Business Network
  • Citrix flaws or bad configuration - Blame bad Citrix admins for poor site security, experts say; Citrix: Owning the legitimate backdoor
  • Sorry, no music tonight

Network Security Podcast, Episode 81, October 17, 2007


When Software Bugs Kill: Robotic Cannon Kills 9

No, this isn’t science fiction. According to Wired’s Danger Room, an automatic defense system went out of control in South Africa during a live fire exercise. Nine soldiers lost their lives, and fourteen were injured. I’m not going to make any jokes about this one, since we’ve crossed from the theoretical to the real, with a tragic loss of life. There’s not much else to say.


Product News And Two Misjudgments I’ve Made On DLP (Reconnex and Vontu)

One of the reasons I spend so much time talking about DLP around here is that it’s one of the first markets I covered as an analyst and I’ve been able to watch it grow from the start. It also means that over 5-6 years of coverage the odds are pretty high I’ve made some mistakes.

The Usual Disclaimer: There are a lot of good DLP products on the market and I work with some of the companies. This post isn’t an explicit endorsement, and I’ll likely be highlighting competitors in future posts as they come out with their own product updates. Just keeping you informed; you still need to run through a full selection process to pick the best tool for your circumstances.

With the strong rumors about the acquisition of Vontu, and since it was my first big mistake in this space, it’s a good time to come clean. Way back when Vontu was first coming to market they stopped off to meet me for lunch at the Walnut Brewery in Boulder, Colorado. I think I had a turkey burger, because it’s only available at lunch and I really like it. They described their key differentiator- using real database data to detect leaks, what they call Exact Data Matching (EDM). I wasn’t impressed, and informed them that Vericept could do it all with regular expressions. I walked away thinking I’d never see them again.

A combination of factors proved me wrong. For the next 2 years Vericept didn’t recognize the value of the DLP market, continued to focus on acceptable use enforcement, and got their clocks cleaned by Vontu. A combination of aggressive execution, some key client references, and tight focus on leak prevention put Vontu in the top spot in the market. For the record, Vericept later brought in some new management that turned the company around, putting them in second place in terms of revenue by last year. Nice thing about an early market- you can afford some mistakes. Most customers still don’t use EDM, but that’s not the point. I thought, at the time, that a general platform would be more successful, but it was the focused solution that clients were more interested in. Even if the Symantec deal doesn’t happen, that laser focus on the business problem has already paid off.

The next example of poor judgement concerns Reconnex. Reconnex is unique in the DLP market in that they can collect all traffic, not just policy violations. I used to call this full forensics, since it was essentially structured network forensics. Back when they released the first versions of the product this feature wasn’t an advantage for DLP. There was no reason to collect all that traffic; sure, it might be helpful in an investigation, but few DLP clients were interested. Management at the time (since changed) focused so much on that feature that they let the user interface and performance slack.

With their new release, I may be changing my mind. They’ve now turned the capture capability from a forensics tool into a data mining and policy validation tool. Aside from still being useful in investigations, you can now generate a DLP policy and run it against old data. Instead of having to tune a policy in production as you go, you can tune it offline and play with changes without affecting production. They’ve also added data mining, so you can use the tool to help identify sensitive data that’s not currently protected by a policy by looking at behavior/history. I haven’t talked to any references about this yet, but it looks promising. They’ve also revamped the user interface and it’s much more usable, with better workflow.
I know some of the other DLP vendors are working up their next releases and it will be interesting to see what pops. I’ve already heard some good things about the endpoint capabilities of one of them, although they haven’t briefed me.
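Since the Vontu story above hinges on the difference between regular-expression detection and Exact Data Matching, here is a minimal, hypothetical sketch of the idea: a regex flags anything shaped like a card number, while EDM-style matching only flags values that actually appear in a (hashed) extract of your own database. This illustrates the concept; it is not how any particular product implements it.

```python
# Minimal sketch contrasting regex detection with EDM-style detection
# (illustrative only; real products use far more robust fingerprinting).
import hashlib
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")   # anything that looks like a 16-digit card

def regex_hits(text: str):
    """Regex detection: flags anything card-shaped, including numbers that aren't yours."""
    return CARD_PATTERN.findall(text)

def build_edm_index(real_values):
    """Hash an extract of real database values so cleartext never leaves the database."""
    return {hashlib.sha256(v.encode()).hexdigest() for v in real_values}

def edm_hits(text: str, index):
    """EDM-style detection: only flags values present in the real-data index."""
    candidates = [re.sub(r"[ -]", "", c) for c in CARD_PATTERN.findall(text)]
    return [c for c in candidates if hashlib.sha256(c.encode()).hexdigest() in index]

if __name__ == "__main__":
    index = build_edm_index({"4111111111111111"})   # hypothetical customer card on file
    msg = "test card 5555444433332222, real card 4111 1111 1111 1111"
    print(regex_hits(msg))       # both numbers match the pattern
    print(edm_hits(msg, index))  # only the value from our own data matches
```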


Apple Opening iPhone!!! Still Scared Of Evil Hax0rs.

Honey? My Blackberry broke. What? I don’t know, it just stopped working. Yeah, I know it looks like it fell off the roof, but I don’t know how that could have happened.

Okay, I’ll still probably wait for a 3G version since I really like my Blackberry Pearl, but this is an awesome move. I will, however, call bubkis on this next part:

Apple “[is] excited about creating a vibrant third party developer community around the iPhone and enabling hundreds of new applications for our users,” but they are taking the time to do it properly “because we’re trying to do two diametrically opposed things at once – provide an advanced and open platform to developers while at the same time protect iPhone users from viruses, malware, privacy attacks, etc.”

Wait, last time I checked the Mac was an open platform, relatively safe from “viruses, malware, privacy attacks, etc.”? And doesn’t the iPhone run on OS X? Last time I asked those questions the response was… a little chilly.

Updated: Glenn over at TidBITS predicted this last week. Great scoop!


Up On Twitter

As rmogull. Adam Engst got me started with this article. Seems more useful than I expected. I’ve added it to the contact links on the home page of the blog.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.