Securosis Research

Microsoft Partially Caves to Symantec and McAfee

Microsoft is making key changes to Vista to avoid antitrust problems: adding an API to PatchGuard, and loosening control of the Windows Security Center. From the ZDNet article:

“In another change, Microsoft had planned to lock down its Vista kernel in 64-bit systems, but will now allow other security developers to have access to the kernel via an API extension, Smith said. Additionally, Microsoft will make it possible for security companies to disable certain parts of the Windows Security Center when a third-party security console is installed, the company said. … Microsoft will provide a way to ensure that Windows Security Center will not send an alert to a computer user when a competing security console is installed on the PC and is sending the same alert, the company said.”

Opening the kernel through a secure API is a reasonable idea- not as secure as a complete lockdown, but it enables some valuable security tools beyond antivirus and host intrusion prevention that would otherwise have been locked out (like activity monitoring). MS would have had to do this eventually. I’m not as thrilled with the Security Center change- I want the operating system itself to warn me when core security functions are changing. In both cases I hope code signing will be required to limit hacker exploitation of these functions, but I doubt MS will be allowed to enforce it.
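Since I brought up code signing, here’s a minimal sketch of the idea: gate callback registration on a valid signature from a trusted key. Everything here is hypothetical- the function names and key handling are my own invention for illustration, not Microsoft’s actual PatchGuard API extension (Python, using the pyca/cryptography package):

    # Hypothetical sketch of signature-gated callback registration.
    # Nothing here reflects Microsoft's real kernel interfaces.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # A trusted vendor key the OS would ship with (generated here for the demo).
    _vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    TRUSTED_PUBLIC_KEY = _vendor_key.public_key()

    _registered_callbacks = []

    def register_kernel_callback(module_image: bytes, signature: bytes, callback):
        """Admit a third-party module only if its image carries a valid
        signature from the trusted key; reject everything else outright."""
        try:
            TRUSTED_PUBLIC_KEY.verify(
                signature, module_image, padding.PKCS1v15(), hashes.SHA256()
            )
        except InvalidSignature:
            raise PermissionError("unsigned or tampered module- registration denied")
        _registered_callbacks.append(callback)

    # A properly signed module registers cleanly...
    image = b"av-engine-v1"
    sig = _vendor_key.sign(image, padding.PKCS1v15(), hashes.SHA256())
    register_kernel_callback(image, sig, lambda event: None)

    # ...while an unsigned one is refused.
    try:
        register_kernel_callback(b"rootkit", b"\x00" * 256, lambda event: None)
    except PermissionError as err:
        print(err)

Without that check, any kit that finds the API gets the same kernel hooks the legitimate vendors do- which is exactly the exploitation risk I’m worried about.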


The Real Definition of a Zero Day

Shimel has a good post on the whole 0day vulnerability thing, and he nails it. This has been a pet peeve of mine for a long time. A real 0day isn’t the window from when a vulnerability is announced until a patch is released. A real zero day is a vulnerability no one knows about except those who discovered it, and a zero day exploit is an attack against that non-public, unknown vulnerability.

A real zero day is bad juju. It slices through any signature-based security defenses, since there’s no known signature. If it’s on a common port, and you don’t detect it through some sort of behavioral-based or impact-based technique (like the server dying), it’s hard or impossible to stop. A smart attacker using a true zero day in a targeted attack is extremely hard, if not impossible, to stop. Odds (for us) are a little better if they’re dumb enough to go for the mass exploit, thus setting off all sorts of alarms (maybe).

There are very few true zero day attacks, and even fewer on a large scale. Be thankful they don’t happen more often. Those “0day” protection tools you bought or compiled on your own probably won’t help a whole lot. Layer the defenses, follow best practices, and realize you can’t stop them all.
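To make the distinction concrete, here’s a toy sketch (Python, purely illustrative- not any real IDS, and the byte patterns are made up) of why signature matching is blind to a true zero day while even a crude behavioral check has a fighting chance:

    # Toy example: signature vs. behavioral detection against a zero day.
    KNOWN_SIGNATURES = {b"\x90\x90\xcc\xcc", b"GET /default.ida?"}  # invented patterns

    def signature_match(payload: bytes) -> bool:
        """Flags traffic only if it contains a previously catalogued pattern."""
        return any(sig in payload for sig in KNOWN_SIGNATURES)

    def behavioral_check(payload: bytes, baseline_max_len: int = 512) -> bool:
        """Flags traffic that deviates from an observed baseline, e.g. an
        oddly long request on a port that normally sees short ones."""
        return len(payload) > baseline_max_len

    # Nobody has a signature for an exploit nobody knows exists:
    zero_day_payload = b"A" * 2048 + b"\xde\xad\xbe\xef"

    print(signature_match(zero_day_payload))   # False- sails right through
    print(behavioral_check(zero_day_payload))  # True- the anomaly gives us a chance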


Cybercrime- You Can’t Win Only With Defense

I picked up the ever-ubiquitous USA Today sitting in front of my hotel room door this morning and noticed an interesting article by Jon Swartz and Byron Acohido on cybercrime markets. (Full disclosure: I’ve served as a source for Jon on other security articles in the past.) Stiennon over at Threat Chaos is also writing on it, as are a few others.

About 2-3 years ago I started talking about the transition from experimentation to true cybercrime. It’s just one of those unfortunate natural evolutions- bad guys follow the money, then it takes them a little time to refine their techniques and understand new technologies. I can guarantee that before banks started buying safes and storing cash in them, the only safecrackers were bored 13 year old pimply faced boys trying to impress girls. Or the guys who make the safes and spend all their time breaking the other guy’s stuff. Trust me, I have a history degree.

We all know financial cybercrime is growing and increasingly organized. Unlike most of the FUD out there, the USA Today article discusses specific examples of operating criminal enterprises. Calling themselves “carders” or “credit card resellers”, these organizations run the equivalent of an eBay for bad guys. And this is only one of the different kinds of criminal operations running on the web.

We, as an industry, need to start dealing with these threats more proactively. We can’t win if all we do is play defense. I used to teach martial arts, and we’d sometimes run an exercise where students would pair off for sparring, but one person was only allowed to defend. No attacks, no counterattacks, blocking only. The only way you can win is if the other guy gets so tired they pass out. Not the best strategy.

This is essentially how we treat security today. As businesses, government, and individuals we pile on layers and layers of defenses, but we’re the ones who eventually collapse. We have to get it right every time. The bad guys only have to get it right once.

Now I’m not advocating “active defenses” that take down bad guys when they attack. That’s vigilantism, and it isn’t the kind of thing regular citizens or businesses should be getting into. Something like a tar pit might not be bad, but counterattacking is more than a little risky- we might be downing grandma’s computer by mistake.

One of the best tools we have today is intelligence. We in the private sector can pass on all sorts of information to those in law enforcement and intelligence who can take more direct action. Sure, we provide some intelligence today, but we’re poorly organized, with few established relationships. The New York Electronic Crimes Task Force is a great example of how this can work. One of the problems those of us on the private side often have with official channels is that those channels are a black hole- we never know if they’re doing anything with the info we pass on. If we think they’re ignoring us we might go try to take down a site ourselves, not knowing we’re compromising an investigation in the process. None of this works if we don’t develop good, trusted relationships between governments and the private sector.

When it comes to intelligence gathering, we in the security community can also play a more active role, like those guys on Dateline tracking pedophiles and working with police directly to build cases and get the sickos off the street. Those of you on the vulnerability research side are especially suited for this kind of work- you have the skills and technical knowledge to dig deep into these organizations and sites, identify the channels, and provide information to shut them down.

We just can’t win if all we do is block. While we’re always somewhat handcuffed by playing legal, we can do a heck of a lot more than we do today. It’s time to get active. But I want to know what you think…


McKeay’s Right- There’s Always Someone Smarter

Martin McKeay has a great addition to my post on experts: “I’d like to add one point to this: there’s always going to be someone who knows more about the subject than you do. I don’t care how good you are, somewhere there’s someone who understands what you’re working on better than you do.”

He’s right. Really right. I just want to know who the heck that guy at the end of the chain is. Probably some monk in the mountains with a metaphysical relationship to the OSI model.


Security and Risk Management Are Lovers; Don’t Mistake Them for Twins

I’m on the plane heading back home from Symposium and have to admit I noticed a really weird trend this week. Maybe not a trend per se, but something I haven’t heard before, and I heard it more than once. In two separate one-on-one meetings, clients told me they’d reorganized their security teams and were now calling them “risk management”. No security anymore, just risk management.

I’m a big proponent of risk management. I even wrote a framework before it was cool (the Gartner Simple Enterprise Risk Management framework, if you want to look it up). Now all the kids are into it, but I get worried when any serious topic enters the world of glamorous trend. Usually it means anyone with a tambourine starts jumping on the bandwagon. Problem is, without a lead guitar, drummer, keyboardist, or even, god forbid, a bassist, there’s a lot of noise but they ain’t about to break out in a sudden rendition of Freebird. Probably. Not.

Risk management is a tool used by security practitioners, and security is a powerful tool for risk management. If you catch me in a rare moment of spiritual honesty I’ll even admit that security is all risk management. I often recommend that security report to a Chief Risk Officer (or your title-happy equivalent). Risk management is mitigating loss or the potential for loss. Security is one tool to reduce risk, and a good security team uses risk management as a technique for balancing the costs and benefits of security controls and deciding where to focus limited resources. (At this point I’d like credit for not expanding the innuendo of the title with some… uh… circular arguments. I’m not completely juvenile. Probably. Not.)

But dropping the name “security” is just silly. Both security and risk management are established disciplines with related but different skills. Risk management plays the higher-level role of evaluating risk across the enterprise, helping business units design risk controls, measuring exposures, and taking action when those exposures exceed tolerance. It’s a guiding role, since risk managers will NEVER have the same depth of domain expertise as someone with years of experience in their particular business specialty. Security is one of those specialties (and notice I didn’t just say “information” security). Yes, good security professionals have strong risk management skills, since nearly every security decision involves risk. That doesn’t mean we’re experts in all types of risk. It does mean we’re domain experts in ensuring the confidentiality, integrity, and availability of either IT systems (for us geeks) or the physical world (for us goons).

It’s security. Don’t re-label it risk management. It’s okay to report to risk management, but it’s still security.
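As a concrete example of that cost/benefit balancing, here’s the classic annualized loss expectancy (ALE) arithmetic security teams borrow from risk management. The dollar figures are invented purely for illustration (Python):

    # Annualized loss expectancy: ALE = SLE x ARO.
    def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
        """Expected annual loss from one threat scenario."""
        return single_loss_expectancy * annual_rate_of_occurrence

    # A breach costing $250k per incident, expected about once every two years:
    ale_before = ale(250_000, 0.5)        # $125,000/year of exposure

    # A control costing $40k/year is expected to cut occurrence by 80%:
    ale_after = ale(250_000, 0.5 * 0.2)   # $25,000/year residual exposure
    control_cost = 40_000

    net_benefit = (ale_before - ale_after) - control_cost
    print(f"Net benefit of the control: ${net_benefit:,.0f}/year")  # $60,000

The point isn’t the spreadsheet- it’s that this kind of reasoning tells a security team where limited dollars do the most good. That’s risk management serving security, not replacing it.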


There’s a Reason We Have Security (or any) Experts

I’m on a break here in Orlando and made the mistake of checking my work email. A coworker from another team is pushing a prediction around data security that, depending on how you interpret it, is either:

  • Already in multiple commercial products
  • No harder to break than existing technologies

I won’t name names or even the specific proposal, but now we’re in a big internal debate, since I’m fighting publication of a prediction that I think could embarrass us among security professionals. Unfortunately this person’s team is backing him/her and is really excited about this new security concept, without really understanding security.

We see this all the time in any complex field of study or practice. Someone from the outside, either left field or a related field, gets a really cool idea that they think is paradigm shifting. This person believes their outside view is “clearer” than those stuck in the tradition of their area of expertise. On very rare occasion such genius exists. But it isn’t you. When I was younger I made the same mistake myself; all of us egotistical analytical or academic types are prone to errors of youth or inexperience. Some fields are more prone to what I’ll call “exploding lightbulbs” than others. Physicists, cryptographers, and doctors battle this on a sometimes daily basis.

The truth is we have experts for a reason. I’ve read that true expertise can take 10 years of experience in a field under most circumstances. It takes that long to learn the basic skills and history, and gain the necessary practical experience. You can be really good or smart in a field, but expertise takes a lot longer. We see it all the time in security. Someone out of networking, development, or wherever reads a book or takes a course and considers themselves an expert. Really, they’re just starting down the path. In some cases they might be an expert in some small area, but it doesn’t translate to the entire field. I was a paramedic. I’m not a doctor, even if I might catch some doctors’ mistakes on occasion. But when I think I know more than the doctor, and I’m wrong, I become very dangerous. It’s the same in security and many other fields. I was good at security fairly early on, but it took many years to become an expert. And even then, my expertise is only really deep in a couple of areas and some general principles.

We have experts for a reason, and not every practitioner is an expert. Expertise takes time, study, experience, and hard work. In security, if you think:

  • You’ve invented a new, unbreakable encryption algorithm
  • You just created a new, unbreakable defense against 0day attacks
  • You perfected any single tool, at any layer, that can stop any attack, of any kind
  • You built something to eliminate the insider threat
  • You can take a couple classes and defend a large enterprise
  • You have designed unbreakable DRM

You’re wrong. If it’s really important to you, go immerse yourself and become an expert. And I’m not talking about some 5 day CISSP class. Take the time, be an expert, or work with experts to convert your theoretical idea to reality. Very rarely, that bright bulb won’t explode. But most of the time we’re left with ugly shards of glass that just hurt everyone standing nearby.


Enterprise DRM- Not Dead, Just in Suspended Animation

I just finished up my last of 4 presentations here in Orlando and am enjoying a nice PB&J and merlot here in my room. Too much travel really kills the taste buds for hotel food.

Today’s presentation was on data security, the area I’ve been focusing on during my 5 years as an analyst. And when you talk about data security you have to talk about DRM. Enterprise DRM is quite different from consumer DRM, even if they both follow the same basic principles. One of the biggest differences is that enterprise DRM focuses on reducing the risk of exposure, while consumer DRM tries to eliminate it entirely (you know, the mythical perfect security). There are a few third-party DRM vendors, but Microsoft and Adobe are the big elephants in the room. Even those behemoths struggle for more than a workgroup-scale deployment (oh, they may sell seats, but few people use it day to day). Which, as we struggle with problems like information leaks, seems pretty weird. I mean, here we have a technology that can stop everything from unapproved email forwarding, to printing, to cutting and pasting. Seems pretty ideal, so what’s the problem?

All that capability comes with a price- not sticker price, but deep enterprise integration with every single application that needs to read the content. But that’s not the big problem. The big problem is that DRM relies on the people creating documents actually remembering to turn on the DRM, then understanding which rights to apply, and then figuring out who the heck is supposed to have all those various rights. I can barely remember my family, never mind which of my far-flung coworkers should be allowed to print the doc I just sent them. Thus most DRM deployments don’t make it past the workgroup.

Now imagine if the rights were automatically applied, or at least suggested, based on the content of the document. If there’s a credit card number, one set of rules is applied. If it’s an engineering plan, or a secret marketing doc (based on the verbiage inside), different rules are set. All based on central policies. Sure, it won’t catch everything, but it’s a heck of a lot better than not doing anything. Hmm… I wonder where we could find a policy-based tool capable of taking action based on deep content inspection using advanced linguistic, statistical, or conceptual analysis? Oh yeah- content monitoring and filtering, often called information leak prevention.

CMF will save DRM. It will make DRM viable outside the workgroup by taking everyday decisions out of the hands of overworked employees, while applying central policies based on what’s actually in the files. It won’t work every time, and users will often have to confirm the correct rights are applied, but it’s the only way enterprise DRM is viable.
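Here’s a rough sketch of what content-driven rights suggestion could look like. The patterns and policy names are mine, invented for illustration- real CMF products use far richer linguistic and statistical analysis than a regex and a keyword list (Python):

    # Illustrative only: suggest DRM rights from document content,
    # so the author doesn't have to remember to apply them.
    import re

    POLICIES = {
        "pci":         {"forward": False, "print": False, "copy_paste": False},
        "engineering": {"forward": "internal-only", "print": True, "copy_paste": False},
        "default":     {"forward": True, "print": True, "copy_paste": True},
    }

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude credit card match
    ENGINEERING_TERMS = ("schematic", "tolerance", "prototype", "cad drawing")

    def suggest_rights(document_text: str) -> dict:
        """Map document content to a centrally defined rights template."""
        if CARD_PATTERN.search(document_text):
            return POLICIES["pci"]
        lowered = document_text.lower()
        if any(term in lowered for term in ENGINEERING_TERMS):
            return POLICIES["engineering"]
        return POLICIES["default"]

    print(suggest_rights("Customer card 4111 1111 1111 1111 on file"))  # pci rules
    print(suggest_rights("Updated prototype schematic attached"))       # engineering

In practice the suggestion would surface in the authoring application, with the user confirming or overriding it- exactly the workflow I’m arguing for above.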


SCADA- It’s Probably Cheaper to Keep Those Networks Separate

Thanks to a no-show, I’m blogging live from the “Analyst Hamster Maze” at Symposium in Orlando. That’s how we refer to the One-on-One area in the Swan hotel- there’s really no other way to describe about 100 temporary booths in a big conference room filled with poorly fed and watered analysts. If you’ve never been to a Gartner conference, any paying attendee can sign up for a 30 minute face-to-face analyst meeting for Q&A on pretty much anything. I like to call it “Stump the Analyst”, and it’s a good way for us to interact with a lot of end users. (You vendors need to stop abusing the system with veiled briefings and inane “face time”.) It does, however, get pretty brutal by day 5.

My first meeting today was pretty interesting. The discussion started with SAP security and ended with SCADA. For those who don’t know, SCADA (Supervisory Control and Data Acquisition) is the acronym covering the process control systems that connect the digital and physical worlds in utilities and manufacturing. These are large-scale systems and networks for controlling everything from a manufacturing floor, to a power network, to a sewage system.

SCADA is kind of interesting. These are systems that do things, from making your Cheerios to keeping your electricity running. When SCADA goes down it’s pretty serious. When an outsider gets in (very rare, but there are some cases) they can do some really nasty sh*t. We’re talking critical infrastructure here. SANS seems to be focusing a lot on SCADA these days, either out of good will or (more likely) because it’s hot enough they can make some money on it. I started writing about SCADA around 5 years ago, and my earlier work sort of martyred me in the SCADA security world (or with the few people who read the research). These days I’m feeling a bit vindicated as the industry shifts a tad towards my positions.

What’s the debate? There’s been a trend for a while to move process control networks onto TCP/IP (in other words, Internet-compatible) networks and standard (as in Windows and UNIX) systems. SCADA developed long before our modern computing infrastructure, and until the past 5-10 years most systems ran on proprietary protocols, networks, and applications. It’s only natural to want to leverage existing infrastructure, technology advancements, standardization, and skill sets by moving to fairly universal platforms. The problem is that the very proprietary nature of SCADA was an excellent security control- few outsiders understood it, and it wasn’t accessible from the Internet. You know, that big global network, home to the script kiddies of the world. To exacerbate the problem, many companies started converging their business networks with their process control (SCADA) networks. Now their engineers could control the power grid from the same PCs they read email and browsed porn on. It was early in the trend, and my advice was to plan carefully going forward and keep these things separate, often at additional cost, before we created an insurmountable problem with our critical infrastructure.

I saw three distinct problems emerging as we moved to TCP/IP and standard platforms:

  • Network failure due to denial of service: if the network is saturated with attack traffic, such as we saw with the SQL Slammer and Blaster viruses/worms, then you can’t communicate with the SCADA systems. Even if they aren’t directly affected- and most have failsafes to keep them from running amok- you still can’t monitor and adjust for either efficiency or safety. The failsafe might shut down that boiler before it explodes, but there are probably some serious costs in a situation like that. I’ve heard rumors that Blaster interfered with communications during the big Northeast power outage- it didn’t infect SCADA systems, but it sure messed up all those engineer PCs and email between power stations.
  • Exploitation of the standard platform: if your switching substation, MRI, or chemical mixer runs on Windows or Unix, it’s subject to infection/exploitation through standard attacks/viruses/worms. Even locked down, we’ve seen plenty of remotely exploitable vulnerabilities in the cores of all operating systems, reachable over standard ports via mass infection. Mix your email and web servers on the same network as these things, even with some firewalls in the mix, and you’re asking for trouble.
  • Direct exploitation: plenty of hackers would love to pwn a chemical plant. I know of one case, outside the US, where external attackers controlled a commuter train system on 2 separate occasions. Maybe it was just a game of Railroad Tycoon gone bad a la “War Games”, but I’d rather not have to worry about these kinds of things. Standard networks, platforms, and Internet connectivity sure make this a lot easier.

So where are we today? One way to solve the problem is to completely isolate the networks. More realistically, we can use virtual air gaps, and what I call virtual air locks, to safely exchange information between process control and business networks. Imagine an isolated server between two firewalls, running TCP/IP on one side and maybe something like IPX (a different network protocol) on the other, with only one non-standard port for exchanging information. The odds of traversing to the process control network are pretty darn slim. (For more details check out the Gartner research I wrote on this; I don’t want to violate my employer’s copyright.)

Thanks to what I hear are some close calls, the industry is taking security a heck of a lot more seriously. The power industry consortium (NERC) recently issued security guidelines that redefine the security perimeter to include everything connected to the SCADA side. The federal regulatory body, FERC, requires conformance with the NERC standard. Thus if you converge your process control and business networks, you have to secure and audit the business side as tightly as the process control side (a much tougher standard). The business costs could be extreme. The result? It’s quite possibly now cheaper to isolate the networks and secure the heck out of them.
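To make the air lock idea a little more concrete, here’s a conceptual sketch of the relay piece: a server that listens on a single non-standard port and forwards only whitelisted, well-formed telemetry requests. The port, message format, and tag names are all invented for illustration (Python):

    # Conceptual sketch of a "virtual air lock" relay. In a real deployment
    # this box sits isolated between two firewalls, with a protocol break
    # on the process control side; everything here is invented.
    import json
    import socketserver

    ALLOWED_TAGS = {"boiler.temp", "grid.load", "flow.rate"}  # read-only telemetry

    class AirLockHandler(socketserver.StreamRequestHandler):
        def handle(self):
            try:
                msg = json.loads(self.rfile.readline(4096))
            except ValueError:
                return  # malformed input is dropped, never forwarded
            # Only well-formed telemetry reads pass; everything else dies here.
            if isinstance(msg, dict) and msg.get("op") == "read" \
                    and msg.get("tag") in ALLOWED_TAGS:
                self.wfile.write(b'{"status": "forwarded"}\n')
            else:
                self.wfile.write(b'{"status": "rejected"}\n')

    if __name__ == "__main__":
        # One non-standard port; the firewalls block everything else.
        with socketserver.TCPServer(("127.0.0.1", 47321), AirLockHandler) as server:
            server.serve_forever()

The real protection comes from the surrounding architecture- the protocol break and the two firewalls- but the relay shows the principle: only one narrow, validated path crosses between the networks.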


IE7 Coming This Month (Maybe as a Security Update?)- If You’re Staying on MS, Better Get It

Over at the Washington Post, Krebs is reporting that Microsoft is releasing Internet Explorer 7 this month. At first it sounded like it might be released as a security update (part of Patch Tuesday, when Microsoft releases all its security patches each month). Now it looks like it might just be released as a regular old update. I’ve heard IE7 is pretty good, although some of the best security sauce won’t work until Windows Vista ships this year/next year/next decade/century/whatever.

The usual advantage of IE is that it won’t break all those sites coded specifically for IE, of which there are plenty. These days most of them are internal sites, but there are still plenty of external websites that don’t play nice with other browsers. The problem with IE7 is that it’s a MUCH better browser than your current version of Internet Explorer, much more standards compliant, and much more secure. In other words, it will probably break stuff.

My guess is IE7 will go out this month, but not as part of Patch Tuesday. It will still be branded as a security update, and the faster consumers get it in their hands the better. It’s a significant update and could really help reduce some of the browser security problems floating around. It’ll probably break some corporate apps, but you enterprise IT guys hopefully won’t let this spread internally until you’ve tested it. As for your personal browsing- get it. As soon as it’s released. Download it for grandma. I still prefer Firefox and Safari (sorry, Mac only), but IE7 is more than an incremental improvement.


Speaking at the Gartner Symposium

I’m packing up my bags and heading down to Orlando for the Gartner Symposium and IT Expo. It’s a busy year, with 3 presentations and a panel:

  • Tuesday, 8 am: Oracle, SAP, and Beyond: Securing Major Enterprise Applications
  • Tuesday, 3:15 pm: Enterprise Risk Management, the Benefits of Risk (panel)
  • Wednesday, 8:30 am: Content Monitoring and Filtering: Vendor Choices, User Issues
  • Wednesday, 3:15 pm: Keeping Regulators and Customers Happy with Data Security

The data and application security pitches are getting a bit stuffed and should keep you geeks happy. I think this might be my 6th Orlando Symposium, which is a bit frightening. If any of you are down there and want to meet up for a beer, just drop me a line…


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.