Risk Management: Set Your Domain Experts Free

The blogosphere is kind of funny sometimes as we all run around referencing each other constantly, so you’ll have to excuse the “my sister’s best friend’s 2nd cousin twice removed’s boyfriend’s bookie” path for this post. (Actually, I really dig all our cross-referencing; I think it creates a cool community of experts.) Everything started with Alex Hutton’s What Risk Management Isn’t post, to which Mike Rothman replied, to which Arthur at Emergent Chaos replied. Follow that? Me neither, so here’s most of Arthur’s post (hopefully he doesn’t mind I lifted so much of it). And if it’s confusing at all, make sure you read Alex’s original post:

Rothman: But I can’t imagine how you get all of the “analysts and engineers to regularly/constantly consider likelihood and impact.” Personally, I want my firewall guy managing the firewall. As CSO, my job is to make sure that firewall is protecting the right stuff. To me, and maybe I’m being naive and keeping the proletariat down, risk management is a MANAGEMENT discipline, and should be done by MANAGERS.

Arthur: I have to disagree here. Risk management in the end is the responsibility of management, and as such the final decision belongs to them. But how can I as a manager make the right decision and know that a firewall is protecting the right stuff, if my team isn’t well educated on what the risks are? How am I supposed to make the right decisions if I don’t know what the issues are? I need to have a staff of analysts, architects and engineers that I can trust to be regularly analyzing and evaluating the systems, applications and networks, so I can make the right choices or recommendations. I don’t need someone who blindly follows a help desk ticket. I don’t know a single CSO who wants to be micromanaging those sorts of decisions.

About 5 years ago I got tasked with writing some research on risk management, and it took me over two years to actually get anything published. It’s like, hard, and stuff. Anyway, I came to the usual conclusions: risk management is stuck in too many silos, too many people focus on numbers of no real validity, management can’t understand detailed risks in a specific area, and domain experts can’t understand or manage risks outside of their domain. (I even ended up authoring a framework called “The Gartner Simple Enterprise Risk Management Framework”. Sorry, my employer owns it, so you have to be a client to read it.)

The thing is, as these various posts illustrate, risk management falls into silos for a reason. We want domain experts making risk decisions in their domains. After nearly 20 years of various rescue work I can make a snap risk decision in that domain that’s far more accurate than any BS statistical model someone else comes up with. By the same token there’s no friggen way that expertise allows me to make a good risk decision outside that domain. In physical security I was great at managing the crowd safety issues at a concert, but we probably would have gone out of business if I chose the acts. (All Buffett, all the time, baby.)

So whatever risk approach you take, you want one where executive management makes overall risk tolerance decisions, but individual domain experts measure risk in their areas of expertise. You want a system that gives you the ability to communicate between management and operations. No manager needs a detailed analysis of the latest RPC DCOM flaw; they just need to know if it could cause problems for the overall enterprise, how bad, and where.

So Rothman, Arthur, and Alex are all right. Management is responsible for overall risk, but domain experts must be the ones making and measuring risk decisions in their specific areas. Management needs to communicate risk tolerance to the experts in a language they can understand, and those domain experts need to communicate enterprise risks back to management in a way they can understand. Yes, it’s possible, and probably easier than you think. It can speed up risk management since you don’t get wrapped up in garbage stats and fake ROI arguments. It just takes a good framework, a little bit of effort, and a few people who know what they’re doing to kick it off.
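To make that management/operations communication concrete, here’s a minimal sketch of the idea. The scoring scale, domains, and thresholds here are my own illustration, not any formal framework: domain experts score likelihood and impact in their own areas, management sets a tolerance per domain, and only what crosses the line gets escalated.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    domain: str        # e.g. "network", "application", "physical"
    description: str
    impact: int        # 1 (minor) to 5 (enterprise-threatening), scored by the domain expert
    likelihood: int    # 1 (rare) to 5 (expected), scored by the domain expert

# Management expresses its tolerance as a simple threshold per domain;
# domain experts never need to understand anyone else's numbers.
TOLERANCE = {"network": 9, "application": 12, "physical": 9}

def escalate(risks):
    """Return only the risks management actually needs to hear about."""
    return [r for r in risks if r.impact * r.likelihood > TOLERANCE.get(r.domain, 10)]

findings = [
    Risk("network", "Unpatched RPC DCOM flaw on internal servers", impact=4, likelihood=3),
    Risk("application", "Legacy app lacks input validation", impact=3, likelihood=2),
]

for r in escalate(findings):
    print(f"[{r.domain}] {r.description}: score {r.impact * r.likelihood}")
```

The point isn’t the arithmetic (which is exactly the kind of fake precision I rail against above); it’s the separation of duties: experts own the scores, management owns the thresholds.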


The Three Types of Best Practices

Jim over at DCS Security (a great new blog) just finished his last in a series of good posts on security layers. He brings up a favorite subject of mine, best practices:

Essentially best practices is a bunch of smart (hopefully) guys sitting around in Gartner, Forrester, D&T, PWC, E&Y, SANS, and other groups coming to a consensus on which controls cover the closest to 100% for a given threat they are looking at and which are the best controls to put in place.

I hate to dash his hopes, but it turns out that’s not really how things work. I break best practices into three categories:

  • Analyst best practices: What us white coat dudes who don’t work for a living come up with as best practices. These are the more aggressive, forward-looking best practices that probably don’t reflect your operational realities. Basically, it’s what a bunch of industry experts think everyone should do, not that they (we) actually have to do it. Analyst best practices will make you really fracking secure, but probably cost more than a CEO’s parachute and aren’t always politically correct. Maybe 2% of enterprises (and probably far fewer) adopt comprehensive analyst best practices, but a lot of you pick and choose and implement at least a few.
  • Industry best practices: These are the more formal best practices that more closely align with operational realities: ISO standards, the NERC/FERC CIP standards, PCI, etc. More measurable, more auditable, and while hard, more operationally realistic for most organizations. Let’s guess and call it 20% of enterprises, mostly large, that really hit the full spectrum of industry best practices. Thanks to compliance I expect this to rise significantly over the next 2 years. Some industries, like financial services, are better than others. Industry practices never represent the cutting edge, but are the foundation stones of a good security program.
  • Common practices (what everyone is really doing): When most people ask about best practices, they really just want to know what everyone else is doing. It’s a dumb approach, but they figure as long as they don’t fall too far behind they won’t get in too much trouble when it hits the fan. Being a follower in security isn’t always the best idea; most crimes are crimes of opportunity. It’s the virtual equivalent of walking around a parking lot and seeing who left their car door unlocked, rather than picking that hot Beemer and figuring out how to bypass all the extra security. But the entire Internet is that big parking lot, and the bad guys can scan anonymously, at will, without anyone noticing them lurking around. Just because someone else is doing something doesn’t make it right, especially when everyone faces the same common threats, never mind the industry-specific ones.

Best practices are not best practices. It’s another term we tend to overuse without really delving into the meaning.


How I Know There Are Very Few 0Days

Anton Chuvakin eviscerates me here for claiming there are very few 0days (what Shimel is starting to call Less than Zero Days):

Come on, Rich? How do YOU know? Given that we know (and you yourself state) that there are very few ways to prevent, block or even detect it … What might be more true is that an average security-sloppy enterprise has more to fear and more to lose from “stale” attacks; however, it is NOT the same as to say that there are few 0days out there. I am stunned when folks make those claims. BTW, check out this list that Pete Lindstrom maintains on public exposures of 0day attacks. But how many were used and are not on his (or anybody’s) list? Ominous silence is the answer 🙂

How do I know? Because we’d all be out of business if I were wrong. Most of our IT systems work, most of us aren’t seeing our bank accounts drained every month, most companies stay in business and don’t lose all their intellectual property, and most networks and servers seem to run fine with common security controls and without all sorts of strange back channel traffic we’d probably notice eventually. Ergo the number of true 0day exploits is small enough that we don’t have to freak out about them on a daily basis.

When we start seeing all sorts of mysterious failures and losses, then I’ll believe those 0days are something we all need to start really worrying about. We can hype up as many threats as we want, but as long as everyone seems to be able to do business as normal without the kind of losses we actually notice, we should save the FUD for when we need it. That’s how I can make that claim. At least for now.

If an exploit falls in the forest and no one hears it, are you really 0wn3d?

(Remember, I’m talking real losses. Yes, you can be hacked and not know it, but for this argument I’m assuming there are enough smart security types in enough enterprises that we’d notice something. It sometimes happens (e.g. some of the Office hacks and the .wmf vulnerability), but those attacks are in the vast minority.)


My Last Pitch for Defining 0Day

Alan Shimel is reviving the zero day debate and coins a term, “less than zero day”, for vulnerabilities that are unknown to the public at large. Check out his series starting here, then here, and finally here. Rothman mostly agrees here, but (like me) isn’t enamored of the name.

As I stated in my initial support for Alan’s position, I think he’s mostly nailed it. There is a distinct difference between an unknown vulnerability, an unknown vulnerability for which there’s an active exploit, a new vulnerability that’s not patched (what most people call a 0 day), and regular old vulnerabilities. The difference is that I define the first case (a non-public vulnerability) as the real meaning of a zero day. Why? Because the vulnerability is discovered (day 0), but not propagated. This is Shimel’s “less than zero day”. I don’t want to get caught up in any definition battles, especially when I’m fighting the marketing arms of every security vendor out there who claims they stop a 0 day. I’m willing to fight the noble fight, but let some other idiot go down with the ship.

Since the vulnerability is known, by however small a group, it’s a 0 day. If exploited, it’s a 0 day exploit. When it’s public knowledge, but not patched, it’s just an unpatched vulnerability, not a 0 day. If we use this terminology we can get past everyone claiming 0 day protection when they just block an unpatched vulnerability. Zero day can regain its mythical splendor as the representation of evil, unknown vulnerabilities that will cause planes to crash and erase the history of all financial records. Or screw up your browser, whichever you consider worse.

There’s my last pitch. (In case I lose and we keep calling unpatched vulnerabilities 0 days, I propose “T- ” instead of less than zero day.)
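Since this whole argument is about terminology, here’s the proposed taxonomy expressed as a toy classifier. This is just my own encoding of the definitions above, for clarity; the state names and function are hypothetical, not anyone’s official scheme.

```python
from enum import Enum

class VulnState(Enum):
    ZERO_DAY = "0 day (discovered, but not public)"
    ZERO_DAY_EXPLOIT = "0 day exploit (non-public, actively exploited)"
    UNPATCHED = "unpatched vulnerability (public, no patch yet)"
    REGULAR = "regular old vulnerability (public and patched)"

def classify(public: bool, patched: bool, exploited: bool) -> VulnState:
    # Under this definition everything hinges on public knowledge,
    # not on patch status.
    if not public:
        return VulnState.ZERO_DAY_EXPLOIT if exploited else VulnState.ZERO_DAY
    return VulnState.REGULAR if patched else VulnState.UNPATCHED

# What most vendors call a "0 day" comes out as merely unpatched here:
print(classify(public=True, patched=False, exploited=True).value)
```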


This is not the Mac security you’re looking for.

Arthur over at Emergent Chaos posted an amusing story on an organization’s reason for switching to Macs. It’s security. Just not necessarily what we mean when we say Macs are more secure. Yes- this company installed Windows on Intel Macs since Macs are more secure. We’re not talking virtualization or anything, but removing OS X and installing Windows XP. I really never thought of that. (Updated: direct link to the original story at deadbeat cafe.)


It’s Time to Turn Off WiFi and Bluetooth When Not In Use (Mac or PC)

A little birdie pointed me to the latest post over at the Metasploit blog. For those of you who don’t know, Metasploit is the best thing to hit penetration testing since sliced bread. To oversimplify, it’s a framework for connecting vulnerability exploits to payloads.

Before Metasploit it was a real pain to convert a new vulnerability into an actual exploit. You had to figure out how to trigger the vulnerability, figure out what you could actually do once you took advantage of it, and inject the right code into the remote system to actually do something. It was all custom programming, so script kiddies had to sit idly by until someone who actually knew how to program made a tool for them. The Metasploit framework solves most of that by creating a standard architecture where you can plug the exploit in one end, then choose your attack payload on the other. Assuming you can script (or find) the exploit, Metasploit takes care of the difficult programming needed to convert that exploit into something that can actually do something. New exploits and payloads appear on a regular basis, and the tool is so easy even an analyst like me can use it (web interfaces are just so friendly). Commercial equivalents used by penetration testers are Core Impact and Immunity Canvas. I tend to think the commercial versions are more powerful, but the open source nature of Metasploit means exploits usually appear faster, and it’s plenty powerful. Besides, any script kiddie (or analyst) can download it for free and be up and running in no time (full disclosure- I use Core Impact and Metasploit in live demos, and am on the Daily Dave email list run by Immunity).

So what the heck does this have to do with turning off wireless? Metasploit is working on a module to transition kernel mode exploits into user mode. This is exactly what you’d need to plug in a wireless driver hack on one side, and use it to create a reverse shell under root on the other. Sound familiar? This was one of the tricks Maynor demonstrated in the Black Hat wireless video (and why he didn’t need root).

The kernel runs in ring 0- this is below any concept of a user account. Think of it as the world before root even exists. When you exploit something in the kernel you’ve bypassed nearly every security control and can do whatever you want, but since you’re running at such a low level, without any user accounts, the kinds of commands we’re used to are a lot more limited. You can’t list a directory because “ls” or “dir” don’t exist yet. If you want a reverse shell, to execute user commands, or whatever, you need to convert that kernel mode access into userland access- where concepts like user accounts and shells exist. In Maynor’s case he dropped code in the kernel to create a reverse shell to his second system over a second wireless connection. Tricky stuff (so I hear, it’s not like I can do any of this myself).

The Metasploit team specifically cites wireless driver hacks as one of their reasons for adding this to the framework. With confirmed vulnerabilities on multiple platforms and devices this could foretell a new wave in remote exploits- attacks where you just need to be in wireless (including Bluetooth) range, not even on the same network. I’ve heard underground rumors of even more vulnerabilities on the way in all sorts of wireless devices. The module isn’t complete, but everything in Metasploit tends to move fast.
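Metasploit isn’t written in Python and this is nothing like its real API, but the architectural idea, decoupling the exploit (how you get in) from the payload (what you do once you’re in), is easy to sketch. Everything here (the class names, the fake trigger, the placeholder bytes) is hypothetical and purely illustrative:

```python
class Payload:
    """What to run once the exploit succeeds (e.g. a reverse shell)."""
    def __init__(self, name, shellcode):
        self.name = name
        self.shellcode = shellcode

class Exploit:
    """How to trigger one specific vulnerability and inject code."""
    def __init__(self, name, trigger):
        self.name = name
        self.trigger = trigger  # function that delivers bytes to a target

    def run(self, target, payload):
        # The framework handles delivery; the payload is interchangeable.
        print(f"Launching {self.name} against {target} with {payload.name}")
        self.trigger(target, payload.shellcode)

def fake_trigger(target, code):
    # Stand-in for a real delivery mechanism.
    print(f"(pretend we just delivered {len(code)} bytes to {target})")

# Any exploit can be paired with any compatible payload; this decoupling
# is what lets a new exploit become a working attack without custom code.
shell = Payload("reverse_tcp_shell", b"\x90" * 32)  # placeholder bytes, not real shellcode
Exploit("hypothetical_wifi_driver_overflow", fake_trigger).run("192.0.2.10", shell)
```

The kernel-to-userland module discussed above slots into exactly this kind of architecture: it sits between the trigger and the payload, so a ring 0 foothold can hand off to ordinary userland payloads.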
Based on this advancement I no longer feel confident leaving my wireless devices running when they aren’t in use. I’m not about to shut them off completely, but my recommendation to the world at large is that it’s time to turn them off when you aren’t using them. More device driver hacks are coming in 2007, and wireless will be the big focus.


Apple, Security, and Trust

Before I delve into this topic I’d like to remind readers that I’m a Mac user and Apple fan. We are a 2 person, 2 Mac, 3 iPod, 2 Airport Express household, with another Mac in the plans this spring. By the same token I don’t think Microsoft is evil, and consider some of their products to be quite good. That said, I prefer OS X and have no plans to switch to Vista, although I’ll probably run it in a virtual machine on my Mac. What I’m about to say is in the nature of protecting, not attacking, one of my favorite vendors.

Apple faces a choice. Down one path is the erosion of trust, lost opportunities, and customers facing increased risk. On the other path is increased trust, greater opportunities, and happy, safe customers. I have a lot invested in Apple, and I’d like to keep it that way.

As most of you probably know by now, Apple shipped a limited number of video iPods loaded with a Windows virus that could infect an attached PC. The virus is well known and all antivirus software should stop it, but the reality is this is an extremely serious security failure on the part of Apple. The numbers are small and damages limited, but there was obviously some serious breakdown in their security controls and QA process.

As with many recent Apple security stories this one was about to quietly fade into the night were it not for Apple PR. In Apple’s statement they said, “As you might imagine, we are upset at Windows for not being more hardy against such viruses, and even more upset with ourselves for not catching it.” As covered by George Ou and Amrit Williams, this statement is embarrassing, childish, and irresponsible. It’s the technical equivalent of blaming a crime victim for their own victimization. I’m not defending the security problems of XP, which are a serious epidemic unto themselves, but this particular mistake was Apple’s fault, and easily preventable.

While Mike Rothman agrees with Ou and Williams, he correctly notes that this is just Apple staying on message. That message, incorporated into all major advertising and marketing, is that Macs are more secure, and if you’d just switch to a Mac you wouldn’t have to worry about spyware and viruses. It’s a good message, today, because it’s true. I bought my mom a Mac and talked my sister into switching her small business to Macs primarily because of security. I’m overprotective and no longer feel my friends and family can survive on the Internet on XP. Vista is a whole different animal, fundamentally more secure than its predecessors, but it’s not available yet so I couldn’t consider that option. Thus it was iMac and Mac mini city.

But when Apple sticks to this message in the face of a contradictory reality they expose themselves, and their customers, to greater risks. Reality is starting to change and Apple isn’t, and therein lies my concern.

All relationships are founded on trust and need. (Amrit has another good post on this topic in business relationships.) One of the keystones of trust is security. I like to break trust into three components:

  • Intent: How do you intend to treat participants in a relationship?
  • Capability: Can you behave in compliance with your intent?
  • Communication: Can you effectively communicate both your intent and capability?

Since there’s no perfect security we always need to make security tradeoffs. Intent decides how far you need to go with security, capability defines whether you’re really that secure, and communication is how you get customers to believe both your intent and capability.

Recent actions by Apple are breaking their foundations of trust. As a business this is a critical issue; Apple relies heavily on trust to grow their market. Trust that their products work well, are simple to use, include superior capabilities, and are more secure. Apple’s message is that Macs are secure, simple, elegant, and reliable. Safe and secure is a powerful message, one that I suspect (based on personal experience) drives many switchers. When I told my cab driver today that Macs have no spyware or active viruses he was stunned. Should Apple lose either their intent to provide superior security, their capability to achieve security, or their ability to communicate either of those, they face a reasonable risk of losing customers, or at least growth opportunities. Security, today, is one of Apple’s cornerstones. Anything that erodes it increases their business risks.

At the same time, should communication disconnect from either intent or capability, Apple then places both their trust relationship, and their customers, at risk. Take my favorite snake-oil salesmen at Diebold: by having no intent to secure their products and no security capabilities in their products, while communicating that the products are secure, they create huge potential for security failures. Less educated customers buy products thinking they’re secure, but the products are so flawed it places these customers (the voting public) at extreme risk. Software vendors have done this in the past- claiming products are secure and covering up failures in the hopes that customers and prospects won’t notice.

Recent events indicate that Apple may stay on an impossible message (perfect security) and face failures in capability despite the best intent. The entire Black Hat debacle showed Apple pushing the message so hard that the debate lived far longer than needed, exposing more of the public to a potential security failure than would have otherwise noticed, drawing the attention of researchers who may now want to prove Apple isn’t invincible, and losing the trust of some of us in the industry disappointed by PR’s management of the incident. The iPod virus infections show a lack of capability (security QA in shipping products) and poor communications (failure to take full responsibility). It’s a very small problem, but their arrogant approach to spinning the story led me to question how they might respond to more serious issues. We have, over the course


Are Phishers Getting Lazy?

I’ve noticed a marked decrease in the customer service from my phishers. Lately spam messages have been originating from “On-line Bank” and other generic addresses. Spelling mistakes are returning, and links no longer even pretend to go to a real bank’s site.

Where’s the customer service, guys? What’s wrong- is my business no longer important to you? Can’t you even make the effort to personalize your fraudulent messages and entice me with your ever-so-mangled, yet poetic, use of English?

Phishing must be big business these days because, like other big businesses, they no longer seem to make the effort to acquire and retain customers through personal service. I really think I’m worth the effort. At least make me think you’re trying.


Data Protection- It’s More Than A + B + C

Stiennon covered the McAfee/Onigma deal over at Threat Chaos this weekend. Although I knew about the deal, I try to avoid vendor/industry coverage here at Securosis, and, to be honest, it really isn’t worth covering. (Onigma is tiny and agent-based, not really the direction the market is heading, and by the time McAfee integrates the tech they’ll be WAY behind the ball.) But Richard does make an interesting statement, defining data protection as leak prevention + encryption + device management. It’s a reasonable start, but far too narrow.

For the past 5 years I’ve covered data security pretty exclusively, since long before it was cool and sexy. Until recently data security’s been the red-headed step-child of the security world- always hanging out on the side of the playground, but the last kid you’d pick for your kickball team. These days that little red-head is all grown up, making his way through the early draft picks and getting ready to go pro (take THAT you overused security metaphors).

I like to define defensive security as four main security stacks (listed in a data/application centric order; you network guys tend to look at it differently):

  • Host Security: a secure place to put stuff
  • Data Security: securing the stuff
  • Application Security: securing the things that access the stuff
  • Network Security: securing the environment around the stuff

On the data security side I took about two years to develop a framework to pull together the disparate technologies being thrown at the problem, from database encryption, to DRM, to activity monitoring. While I can’t dig in too deep here (since all that intellectual property is controlled by my employer), I can still outline the framework since, at this point, all the information’s been used in multiple press interviews and public presentations. The Data Security Hierarchy consists of:

  • Content Monitoring and Filtering (sometimes called leak prevention)
  • Activity Monitoring and Enforcement
  • Logical Controls
  • Encryption
  • Enterprise DRM
  • Access Controls

These are just high-level general layers that sometimes encompass multiple technologies. CMF is usually a single technology, but, for example, there are about 10 different encryption technologies/markets. Overall there are about 20-30 different technologies shoved into the different layers, some with a very narrow scope (like portable device control), others with a pretty broad scope (like CMF).

Data security isn’t just a bunch of additive technologies tossed together. Just as we spent the 90’s and early 00’s devising models, frameworks, and approaches to network security, we need to do the same for data security. Protecting data is very different from protecting networks, and one of the bigger challenges in security in the coming years is to manage it strategically…

…and it ain’t just encrypt everything.
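To give a flavor of the top layer, the simplest CMF tools pattern-match outbound content for data that shouldn’t leave the network. Here’s a deliberately crude sketch; the patterns and function names are mine and purely illustrative, and real CMF products go far beyond a couple of regexes into contextual and content-aware analysis:

```python
import re

# Crude patterns for data that usually shouldn't leave the network.
PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(message: str):
    """Return labels for any sensitive-looking data found in a message."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(message)]

hits = scan_outbound("Per your request, the card number is 4111 1111 1111 1111.")
if hits:
    print("Flagged outbound message; matched:", ", ".join(hits))
```

Even this toy shows why CMF alone isn’t data protection: it only sees content in motion, which is exactly why the hierarchy above needs the monitoring, encryption, DRM, and access control layers underneath it.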


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.