Securosis Research

Movement In The DLP Market?

Rumor has it that a major deal in the DLP market might drop soon. As in an acquisition. Since it’s just a rumor I’ll keep the names to myself for now, but it’s an interesting development, one that will probably stir the market and maybe get things moving, even if the acquisition itself fails.


Woops- Comments Should Really Be Open Now

A while back I opened up the comments so you didn’t have to register, but somewhere along the line that setting was reset. They should be open now, and I’ll keep them open until the spam or trolls force me to change things.


Lessons On Software Updates: Microsoft and Apple Both Muck It Up

I know this is going to sound intensely weird, or somewhat disturbing, but I’m fascinated by how we treat software as a product. It’s kind of a mashup between content like movies and music, which we sort of purchase but are really just licensing to use, and “hard” products like TVs, hammers, and decorative toilet paper dispensers. Most software companies just sell us a license to use their product, with all sorts of onerous (and potentially unenforceable) restrictions in what we politely refer to as “End User License Agreements”, or EULAs. We only call them that because “Non-Consensual Ass Fuck” doesn’t have as legitimate a ring to it.

But there’s a HUGE difference between software and media. Media is passive; we read it, watch it, and listen to it, but it doesn’t affect anything else it touches. A bad book doesn’t screw up your library, and a bad CD doesn’t ruin your CD player. Software, on the other hand, deeply affects our work and personal lives. We install software on systems running other software, and one bad error in one little program can ruin our entire system, corrupt data in other applications, or even damage hardware.

Because software is so different from other products, it exists, in essence, in a state of perpetual recall. A sizable portion of the technology industry is dedicated to pushing updates to our software. In some cases these updates change functionality, adding new features. In other cases they fix security or other product flaws. For a media file it would be like buying the original Star Wars on DVD, then later updating it with all the improvements Lucas made, like emasculating Han and having Greedo shoot first. For physical products it would be like plugging my DeWalt compound miter saw into the wall to add a variable speed feature, or to extend the length of the finger guard. This is an intensely new way of buying, selling, and owning products, one I’m not convinced we fully understand the implications of yet.
Let’s turn back to software, keeping in mind that many products today, from MP3 players to phones, now ship with updateable software. As I mentioned before, we tend to lump updates into two categories: functionality changes (adding or changing features) and fixes (repairing security or functionality flaws). Ideally these updates benefit the customer by improving the product, but in some cases the update goes in entirely the opposite direction. Vendors can even use updates to deliberately remove functionality you paid for. Take a look at the Pioneer Inno: its FM feature for listening to XM radio through your car stereo was completely removed during a software update (Pioneer forgot to get FCC approval).

We thus have two situations we’ve never really encountered before in the world of buying and selling stuff. Updates can change how a product you paid for works. Updates can change how other products you paid for, on the same system, work. This is a powerful change to the concepts of product ownership and customer relations, and it comes with certain responsibilities. Over the past few weeks we’ve seen two of the biggest technology names in the world totally muck it up: Microsoft and Apple.

One of the cardinal rules of software updates is that you never force an update. The change you’re pushing might alter vital functionality, and, to be honest, it isn’t your right to change my system. That’s called cybercrime. It appears Microsoft messed up and pushed out a “stealth” update for the Windows Update feature of Windows XP. This update installed itself even if you told Windows not to install updates. Worse yet, it essentially ruined the Windows Repair function of the system. Press aside, Microsoft probably opened themselves up for some lawsuits.

Another rule (probably more of a best practice) is that you should separate security and functionality changes in updates. This is something Microsoft generally does well these days (except for Service Packs) and Apple does extremely poorly.
Security and other flaw updates should be separate from functionality updates because while a user may not want to be hacked, they might not want to change how their product works just to be safe. This would be like turning in your car for a recall on a defective airbag and having the speedometer changed from miles to kilometers as a “bonus”. Apple updated the iPhone with critical security fixes, but those fixes are bundled with serious functionality changes. Thus if I don’t want a little Starbucks logo to appear on my phone every time I walk past one, I have to leave myself vulnerable to attack. Nice one, Apple.

I really do think we’re redefining the concept of ownership, and the privacy advocate in me is worried things are swinging in the wrong direction. Device manufacturers are practically engaged in an all-out war with their own customers, and most of it is driven by the content protection requirements of the media industry. Here are a few recommendations when dealing with software updates:

  • All updates should be optional.
  • Don’t bundle security updates with functionality updates.
  • Don’t break unrelated applications.
  • If you’re an application, don’t change the underlying platform.
  • Clearly notify customers what features/functions will change with the update.

Or to be a little clearer: don’t force updates, don’t take away functions, tell people what you’re doing, and don’t break anything else.


Yes, Hackers Can Take Down The Power Grid. Maybe.

I didn’t plan on writing about the DHS blowing up a power generator on CNN, but I’m in my hotel room in Vegas waiting for a conference call and it’s all over the darn TV. Martin and Amrit also talked about it, and I hate to be late to a party. That little video has started an uproar. Based on the press coverage you’ve got raving paranoids on one side, and those in absolute denial on the other. We’re already seeing accusations that it was all just staged to get some funding.

I’ve written about SCADA (the systems used to control power grids and other real-world infrastructure, like manufacturing systems) for a while now. I’ve written about it here on the blog, and authored two research notes with my past employer that didn’t make me too popular in certain circles. I’ve talked with a ton of people on these issues, researched the standards and technologies, and my conclusion is that some of our networks are definitely vulnerable. The problem isn’t so bad that we should panic, but we definitely need to increase the resources used to defend the power grid and other critical infrastructure.

SCADA stands for Supervisory Control And Data Acquisition. These are the systems used to supervise physical things, like power switches or those fascinating mechanical doohickies you always see on the Discovery Channel making other doohickies (or beer bottles). They’ve been around for a very long time and run on technologies that have nothing to do with the Internet. At least they used to. Over the last decade or so, especially the past five years, we’ve seen some changes in these process control networks. The first shift was starting to use commodity hardware and software, the same technology you use at work and home, instead of the proprietary SCADA stuff. Some of these things were O-L-D old, inefficient, and took special skill to maintain.
It’s a lot more efficient for a vendor to just build on the technology we all use every day, running special software on regular hardware and operating systems. Sounds great, except, as anyone reading this blog knows, there are plenty of vulnerabilities in all that regular hardware and software. Sure, there were probably vulnerabilities in the SCADA stuff (we know for a fact there were), but it’s not like every pimply faced teenage hacker in the world knew about them. A lot of new SCADA controllers and servers run on Microsoft Windows. Nothing against Microsoft, but Windows isn’t exactly known as a vulnerability-free platform. Worse yet, some of these systems are so specialized that you’re not allowed to patch them; the vendor has to handle any software updates themselves, and they’re not always the most timely of folks. Thus we are now running our power plants and beer bottling facilities on the same software all the little script kiddies can slice through, and we can’t even patch the darn things. I can probably live without power, but definitely not the beer. I brew at home, but that takes weeks to months before you can drink it, and our stash definitely won’t last that long. Especially without any TV.

Back to SCADA. Most of these networks were historically isolated; they were around long before the Internet and didn’t connect to it. At least before trend number two, called “convergence”. As utilities and manufacturing moved onto commodity hardware and software, they also started using more and more IT to run the business side of things. And the engineers running the electric lifeblood of our nation want to check email just as often as the rest of us. And they have a computer sitting in front of them all day. Is anyone surprised they started combining the business side of the network with the process control side?
Aside from keeping engineers happy with chain letters and bad jokes, the power companies could start pulling billing and performance information right from the process control side to the business side. They merged the networks. Not everyone, but far more companies than you probably think. I know what you’re all thinking right now, because this is Securosis, and we’re all somewhat paranoid and cynical. We’re now running everything on standard platforms, on standard networks, with bored engineers surfing porn and reading junk email on the overnight shift. Yeah, that’s what I thought, and it’s why I wrote the research.

This isn’t fantasy; we have a number of real-world cases where this broke real-world things. During the Slammer virus a safety system at a nuclear power plant went down. Trains in Sydney stopped running due to the Sasser virus. Blaster was a contributing factor to the big Northeast power outage a few years ago because it bogged down the systems the engineers used to communicate with each other and monitor systems (rumor has it). I once had a private meeting in a foreign country where officials admitted hackers had gained access to the train control system on multiple occasions and could control the trains. Thus our infrastructure is vulnerable in three ways:

  • A worm, virus, or other flaw saturating network traffic and breaking the communications between the SCADA systems.
  • A worm, virus, or other attack that takes down SCADA systems by crashing or exploiting common, non-SCADA parts of the system.
  • A direct attack on the SCADA systems, using the Internet as a vector.

Some of these networks are now so messed up that you can’t even run a vulnerability scan on them without crashing things. Bad stuff, but all hope isn’t lost. Not everyone connects their systems together like this.
Some organizations use air gaps (totally separate, isolated networks), virtual air gaps (connected, but an isolated one-way connection), or air-locks (a term I created to describe two separate networks with a very controlled, secure system in the middle to exchange information both ways, not network traffic). NERC, the industry body for the power networks, created a pretty good standard (CIP, Critical Infrastructure Protection) for securing these


The Internet Isn’t Still Running Because Bad Guys Don’t Want To Burn Their Houses Down

Richard Bejtlich, commenting on a Marcus Ranum article, said:

“Continuing to function” is an interesting concept. The reason the “Internet” hasn’t been destroyed by terrorists, organized crime, or others is that doing so would cut off a major communication and funding resource. Criminals and other adversaries have a distinct interest in keeping computing infrastructure working just well enough to exploit it.

I have to disagree here. While there are a lot of smart bad guys just out for a little profit, there are plenty of malicious psychos looking to cause damage. When I did physical security and worked as a paramedic there was a distinct difference between profit-driven crime and ego-driven crime, even in the same criminal act. Ego crimes, ranging from vandalism to spousal abuse, originate in flaws of character where logic and self-preservation don’t necessarily play a role. Or sometimes they’re just fueled by testostahol, the powerful substance created when alcohol and testosterone mix in a juvenile male’s bloodstream. There are plenty of people who would bring the Internet down either to show they could, or to damage society out of some twisted internal motivation. The root DNS servers are constantly under attack, and not just because someone thinks they can make a buck doing it.

Marcus said:

Will the future be more secure? It’ll be just as insecure as it possibly can, while still continuing to function. Just like it is today.

Not because the bad guys want it that way, but because once crime crosses the threshold where society can’t function at some arbitrary level of efficiency or safety, the populace and governments wake up and take action to preserve our quality of life. There really isn’t much motivation to invest in security that’s more than “good enough” to keep things running. We all have acceptable losses, and we only act when those are exceeded.


Metasploit Is Ready For Your iPhone Exploits

H D Moore got an iPhone. This is both good news and bad news for Apple. The bad news is that once some remote vulnerabilities appear (including client-side vulns) and get coded into exploits, the Metasploit Framework is ready for them with some iPhone-specific payloads. Let the iPhone pwnage begin.

The good news is that I think this will help keep the iPhone more secure. There will be clear motivation to keep this thing patched, and researchers and Apple’s own developers can more easily demonstrate the exploitability of any particular vulnerabilities. And the really good news is you can update your iPhone. Easily. This is a first for the mobile phone market and a clear security advantage. Even if Apple makes mistakes (which they have, and will), they can fix them far more easily than other mobile phone manufacturers.


Heading to Vegas for SANS

I get in early Wednesday morning and head home Friday. If you want to meet up, drop me a line at rmogull@securosis.com.


Network Security Podcast, Episode 78

I think Martin and I have definitively proven that recording a podcast at 8 am isn’t the smartest idea in the world. Sure, the content is still there, but there are quite a few more “ums” and “ahs” than usual. Martin had to run to San Francisco today, and we had to push recording from last night due to a stray cat problem at my house. Not to worry, we still managed to talk about a little security. I probably went a little overboard and used Core Impact to help me work out some of my home network issues and reconfigure my wireless design. I’m having a little trouble identifying all the devices on my network and am too lazy to just turn them off and figure it out that way. Once we finished our personal geek-rambling, we finally dug into some honest-to-goodness security issues. Finally, congrats to Martin, who is both gainfully employed again and the proud daddy of a new Xbox 360.

Show notes:

  • Rich’s blog entry on TD Ameritrade
  • Hacking Sermo.com, the social network for doctors, parts 1 & 2
  • Brian Krebs: Is cyber crime really the FBI’s #3 priority?
  • PCI extends its reach to application security

Tonight’s music: Dragons by The Switch

Network Security Podcast, Episode 78
Time: 53:01


The Data Security Lifecycle: Beta 1

I never meant to become that “data security” dude. Back when I first transitioned from consultant to analyst I was given a hodgepodge of technologies to cover. Since I’d been a DBA and programmer, I picked up database security. No one was covering encryption, so that fell in my lap. We’d recently lost the person covering forensics and acceptable use, so I ended up with that as well. This was all about five or so years ago, and at the time it seemed like a random collection of technologies.

Then I started noticing some similarities and overlap. Clients would call in to ask about these different technologies, yet they were all often working on solving the same problems. At first it was defending against “the insider threat”, but then it started to transition into protecting data/content. I started digging in and realized that although we in security have spent years talking about insider threats and protecting data, our advice was typically little more than hand waving or “encryption”, without really understanding what encryption can and cannot protect against. I decided to try to pull this all together into a framework, and my first pass was the Data Security Hierarchy. While a good start at figuring out the various layers used to protect data, it really doesn’t help you figure out when to apply controls and which ones work best under which circumstances. It was little more than an interesting conglomeration of generic technology layers that isn’t actually very practical for designing security controls.

Thus I’m proud to announce my next attempt: the Data Security Lifecycle. This time I’ve broken security controls out based on the lifecycle stage of the data. From creation to destruction, the Data Security Lifecycle shows which controls should apply at which phase. This provides more practical guidance and helps prioritize data security technology investments. This diagram is the high-level controls view.
While in some cases these controls map directly to a specific technology, in other cases a single control may map to multiple technologies. Future posts will map specific technologies to specific controls, so don’t beat me up over the genericism quite yet. This view represents both structured and unstructured data; future posts will break them out separately, since you can’t treat a database the same as a Word document. Finally, this view does not prioritize controls based on data classification. Again, that’s fodder for a future post. Yep, I’ve got a heck of a lot to write about here and will be breaking it out into manageable chunks. In developing the Data Security Lifecycle I reviewed many of the information lifecycles out there, and paid particular attention to Information Lifecycle Management (ILM). I didn’t feel that ILM mapped as well as we needed to the security domain, so I decided to borrow elements of it, but in the end designed a more security-specific lifecycle. The stages are:

Create: This is probably better named Create/Update, since it applies to creating or changing a data/content element, not just a document or database. Creation is defined as generation of new digital content, either structured or unstructured. In this phase we classify the information and determine appropriate rights. Sounds hard, but in many cases this will be performed by technology, or default classification and rights applied based on point of origin.

Store: Storing is the act of committing the digital data to structured or unstructured storage (database vs. files). Here we map the classification and rights to security controls, including access controls, encryption, and rights management. I include certain database controls, like labeling, in rights management, not just DRM. Controls at this stage also apply to managing content in our storage repositories, such as using content discovery to ensure that data is in approved/appropriate repositories.
Use: These controls apply to data at the point of use, typically a user’s PC or an application. We include both detective controls like activity monitoring, and preventative controls like rights management. Logical controls are typically applied in databases and applications. I’ve also lumped in application security, although that’s a massive domain on its own and mostly outside the scope of this lifecycle.

Share: These controls apply as we exchange data between users, customers, and partners. This again includes a mix of detective and preventative controls, such as DLP/CMF/CMP, encryption for secure exchange of data, and (again) logical controls and application security.

Archive: In this phase data leaves active use and enters long-term storage. We’ll use a combination of encryption and asset management to protect the data and ensure its availability.

Destroy: Not all data is permanently retired, but when it is, we need to delete it securely and use tools like content discovery to track down any lingering copies.

For you ILM geeks, here’s a mapping of the Data Security Lifecycle phases to ILM:

All of this is a work in progress. Over the next few posts I’ll start mapping these high-level controls to specific technologies (distinguishing between structured and unstructured data) and prioritizing based on classification level. Not all the technologies we’ll be discussing are the most mature in the world, so we’ll also prioritize a little bit based on what’s effective and practical in today’s markets. I don’t consider this anything revolutionary; it’s merely a logical progression, as we see improvements in both the available technologies and our understanding of how data is compromised. I’m trying to present it in an organized big picture. It’s one of those funny things that seems to take endless hours of thought and doodling to build a simple-looking diagram that doesn’t look like much. Oh well.
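If it helps to see the phase-to-control mapping as something more concrete than a diagram, here’s a minimal sketch as a lookup table. The control names come straight from the stage descriptions above; the dictionary structure and function name are just my illustration, not an official taxonomy:

```python
# Sketch of the Data Security Lifecycle: each phase maps to the
# candidate controls discussed in the post.
DATA_SECURITY_LIFECYCLE = {
    "Create":  ["classification", "rights assignment"],
    "Store":   ["access controls", "encryption", "rights management",
                "content discovery"],
    "Use":     ["activity monitoring", "rights management",
                "logical controls", "application security"],
    "Share":   ["DLP/CMF/CMP", "encryption", "logical controls",
                "application security"],
    "Archive": ["encryption", "asset management"],
    "Destroy": ["secure deletion", "content discovery"],
}

def controls_for(phase: str) -> list[str]:
    """Return the candidate controls for a lifecycle phase (empty if unknown)."""
    return DATA_SECURITY_LIFECYCLE.get(phase, [])
```

In practice you’d prioritize within each list based on data classification and whether the data is structured or unstructured, which is exactly the follow-up work described above.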
This is still all under development, and any feedback (preferably in the comments) is appreciated. Eventually I’d like to use this as a basis for a comprehensive book on data security, but that’s still a little ways out unless one of you fine readers is independently wealthy and would like to support my lifestyle while I write full-time.


Go Check Your Gmail Settings… XSS Vulnerability

I always wonder what I’ll wake up to on a Monday morning. Today it was a nice new cross-site scripting (XSS) vulnerability over in Google. The details are over at bedford.org (link broken since it’s a little risky), and the focus is on Google Mail. Bedford has three proofs of concept up. The first exploits Blogspot polls, the second Gmail contacts, and the third forwards all your incoming mail to Bedford. I tested them out, and while the contacts one didn’t work for me in a quick test, the forward definitely worked. This means anyone can send you an email, or embed code in their web page, that will then forward all your Google mail to an address of their choosing.

This isn’t a particularly stealthy exploit; if you go into your Gmail settings you can check whether your account is forwarding. Just click on Settings, then Forwarding and POP, and make sure Disable forwarding is checked (as in this screenshot). The proof of concept was posted on September 24th, so it’s not like this is the first day it’s public. Umm… I should have coffee and check the calendar before I blog; that’s today. And while my little advice will help with the forwarding problem, the base code looks like it can do pretty much anything it wants with your Google Mail account, so there’s all sorts of other possible nastiness. Some people are recommending Firefox with NoScript. Personally, I suggest you just log out of Gmail in your web browser and set up your mail client to access Gmail directly (no browser access). All of these are crappy workarounds until Google plugs the hole.

Update: I shouldn’t blog before my first cup of coffee. If you’re going to enable POP access, you need to first log in from a “clean” browser, change your password, then set up encrypted POP access. Google’s instructions for this are pretty easy, and seriously, don’t skip the password change step. (Thanks to Maynor/Errata for the heads up.)
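The “encrypted” part of encrypted POP matters here; plain POP3 sends your password in the clear. For the curious, here’s a rough sketch of what encrypted POP access to Gmail looks like using Python’s standard poplib module (the function name and credential handling are my own illustration, not from Google’s instructions; substitute your mail client’s equivalent settings):

```python
import poplib

def fetch_message_count(user: str, password: str) -> int:
    """Connect to Gmail over SSL-wrapped POP3 and return the message count."""
    # Port 995 is the standard POP3-over-SSL port; never use plain POP3 (110).
    conn = poplib.POP3_SSL("pop.gmail.com", 995)
    try:
        conn.user(user)
        conn.pass_(password)
        count, _mailbox_size = conn.stat()  # (message count, total size in bytes)
        return count
    finally:
        conn.quit()
```

Whatever client you use, the point is the same: the connection should be SSL from the first byte, so your freshly changed password never crosses the wire unencrypted.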


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.