
The Official Securosis “Invade My Privacy Challenge”

I now know that $40 and a quick web search will let any doofus figure out most of my former addresses, neighbors, home values, roommates, birthday, etc. But what’s really out there on me? Like any egotistical analyst I run the occasional masturbatory Google search on myself, but I suspect there’s far more out there than I realize. I also think there’s value in seeing what a total stranger can find on me. Thus we officially open the Securosis “Invade My Privacy Challenge”. Here are the rules:

  • Use any legal Internet tool at your disposal to dig up whatever dirt you can find. No pretexting or other illegal activities!
  • If it’s sensitive (including anything someone could use for identity theft), email me at rmogull@securosis.com. If it’s interesting or embarrassing, feel free to post it as a comment. For sensitive entries, you can also post a comment telling what you found, but not the details.
  • You must cite all sources- this is full disclosure, you know. Entries without the source will be eliminated.
  • For-pay sources are allowed if you’re willing to cover the cost yourself.
  • Close friends and others with inside knowledge are ineligible. Will and Scott are especially ineligible.
  • The contest ends in two weeks (October 13), or when all my details are compromised.

The top prize, based on the value of the information found, is a hardcover copy of Vernor Vinge’s Rainbows End, the most mind-blowing book I’ve read lately. If you find my SSN there will be a bonus prize that’s worth your effort (I’ll post it once I figure it out).

Yes, I know what I’m inviting here. But better to know what’s out there than live with my head in the sand. I promise you there’s plenty to embarrass me with, and a few interesting tidbits like my employee number. I’ll probably regret this…


The ATM Hacks: Disclosure at Work

Last week the guys over at Matasano did some seriously great work on ATM hacking. So many blogs were running with it at the time, and I was on the road dealing with a family emergency, that I didn’t cover it here, but I think this is such an excellent example of disclosure working that it deserves a mention. It’s also just a cool story.

It all started with a small article in a local newspaper about a strange gas station ATM with a propensity for doling out a bit more cash than perhaps the account holders were expecting. No mere case of spontaneous mechanical altruism- a little investigation of the video surveillance footage showed some strange behavior on the part of a particular customer, who entered a tad more digits than necessary on the keypad to make a withdrawal. From then on every $20 withdrawn was marked on the account as $5. The best part of the story, one that affirms my somewhat cynical views on human behavior, is that it took nine days before someone finally reported the charitable ATM! I realize it’s possible that an ATM in a small town gas station might go nine days without use, but I kind of doubt it.

When the article first made the rounds most of us were pretty skeptical- small town papers aren’t always known for the most accurate reporting, especially where technology is concerned. Personally I wrote it off. But Dave Goldsmith at Matasano decided it deserved a little more digging, and struck the mother lode. A little investigation at the ATM manufacturer’s website showed these things have master passwords. A mere 15 minutes later Dave had acquired a manual for the ATM model in question, including default security codes and instructions for configuring the denominations of the cash trays!!! Yep- all the attacker had to do was tell the ATM the $20 tray held $5 (like any ATM carries fivers anymore), and every withdrawal, as far as the bank was concerned, netted 3x free money. Dave posted a summary on the Matasano blog and this rapidly made the rounds, including coverage over at Wired. It’s an example of some great security research.

Here’s why it’s also an example of good full disclosure (almost- Dave held the location of the manuals secret, but they aren’t hard to find). This problem wasn’t unknown; some ATM manufacturers published advisories to their clients, but I suspect most of them assumed the risk was so low it wasn’t worth the effort to change the password. Thus a small group of criminals could keep up their nefarious activities, whose costs are eventually passed on to us consumers. By disclosing enough details of the hack that any bad guy with a modicum of technical skills and the ability to run a Google search could take advantage of it, Dave’s actions should eventually force both ATM manufacturers and their clients to increase security. No ostriches allowed here; I suspect within a few months those default master passwords will be on quite a few fewer ATMs. In the short term the risk and cost to the financial institutions supporting those ATMs increases, but after the initial shock the overall security of the system will increase. This isn’t a 0day- the vulnerability was known, and patching is no harder than having the tech change the password on his next trip to fill the trays. By exposing this flaw to the public, combined with accurate reports of real exploits, Dave helped make us all a little more secure, but cost a few lucky individuals their free money.

(Wait- doesn’t Diebold make ATMs? What a surprise!)
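If you want to see where the “3x free money” comes from, here’s a minimal, hypothetical sketch- this is not real ATM firmware or its actual configuration interface, just the arithmetic of a $20 tray reprogrammed as a $5 tray:

```python
# Hypothetical sketch of the denomination misconfiguration -- not real ATM code.

ACTUAL_NOTE_VALUE = 20      # the cash tray is physically loaded with $20 bills
CONFIGURED_NOTE_VALUE = 5   # the attacker reprogrammed the tray's denomination to $5

def withdraw(amount_requested: int) -> tuple[int, int]:
    """Return (cash_dispensed, amount_debited) for a single withdrawal."""
    # The ATM counts out notes based on what it *thinks* each note is worth...
    notes_dispensed = amount_requested // CONFIGURED_NOTE_VALUE
    # ...but every note it pushes out the slot is really a twenty.
    cash_dispensed = notes_dispensed * ACTUAL_NOTE_VALUE
    # The account is only debited for the amount the customer requested.
    return cash_dispensed, amount_requested

cash, debited = withdraw(20)
print(f"Dispensed ${cash}, debited ${debited}, free money ${cash - debited}")
# Dispensed $80, debited $20, free money $60 -- i.e. 3x the amount actually charged.
```

Withdraw $20 and the machine counts out four “fives” that are really twenties: $80 out the slot against a $20 debit.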


Do We Have A Right to Privacy in the Constitution?

In a brief analysis/link to my privacy post, Mike Rothman states we have a right to privacy in the Constitution, but the problem is enforcement. Thing is, I’m not sure the Constitution explicitly provides for any right to privacy. I’m not a Constitutional lawyer, so I’m going to toss this one to the comments. Anyone know for sure? And if we don’t have that right, what are the implications for society in a digital age? Without explicit constitutional protection, lawmakers have incredible amounts of wiggle room to legislate away our privacy on any whim, perhaps to pay for their extended golf vacations in Scotland. As much as we seem to assume we have a right to privacy, I don’t think we do- and if we really don’t, it’s our responsibility to aggressively defend and demand those rights.


Why Someone Will Eventually Hack This Site (and Maybe Your Computer in the Process)

I hate to admit it, but someone will probably hack this site at some point. And they may even use it to hack your computer. And there’s not a darn thing I can do about it.

Security, and hacking, are kind of trendy. Both the good guys and the bad guys have a habit of focusing on certain attacks and defenses based on what’s “hot”. We’re kind of the fashion whores of the IT world. I mean, I just can’t believe Johnny calls himself a 1337 hax0r for finding a buffer overflow in RPC. That’s just so 2002. Everyone knows that all the cool hackers are working on XSS and browser attacks.

The trend of the month seems to be cross-site scripting and embedding attacks into trusted websites. Cross-site scripting (XSS) is a form of attack where the attacker takes advantage of poorly-programmed or poorly-configured web pages, and can embed his or her own code in the page to go after your browser (a seriously simple explanation- check out Wikipedia for more). MITRE (they speak CVE!) called cross-site scripting the number 1 vulnerability of all time (in terms of volume). Dark Reading reports a number of major sites hacked recently this way. Possibly hundreds of sites hosted on HostGator were hacked (not with cross-site scripting) and code inserted (using an iframe, for you geeks) to infect anyone with the temerity to visit the sites using Internet Explorer (we DID warn you).

None of this is new. We’ve had attackers embedding attacks into trusted sites for years. It may be trendy, but it isn’t new by any means. Some are pretty devious- like hacking advertising servers that then distribute their ads on sites all over the net. It’s a great form of social engineering- compromising a trusted authority and using that to distribute your attack.

Not that I’m assuming my paranoid readers actually trust this site, but I won’t be surprised if it’s hacked, and hopefully most of you are following security precautions and won’t be compromised yourself. Why? Because this site is hosted. I manage my little part of the server, but I don’t control it myself. I use all sorts of tools like WordPress and cPanel, all of which have their own security flaws. Sure, I’ve managed secure servers and coded secure pages in the past, but I kind of have a day job now. I rely on my hosting provider, and while I tried to choose one with a good reputation, my ego can’t write checks their bodies can’t cash.

We, as users, need to take some responsibility ourselves. Just staying away from “those” sites isn’t enough; we also need to understand trusted sites may be compromised at some point, too. So far I’m safe on a Mac, and you Windows users can stay off IE (maybe until 7 comes out) and use anti-spyware and antivirus tools, with maybe a little host intrusion prevention. Not that I don’t want you to trust me, but heck, I don’t even really trust myself. I’m just some dude with a blog waiting for the fall security colors…
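To make the XSS bit a little more concrete, here’s a minimal, hypothetical sketch of the general pattern- this is not this site’s code or the HostGator attack, just what it looks like when a page echoes untrusted input back into a trusted site, and the one-line fix:

```python
# Minimal, hypothetical sketch of a reflected XSS flaw and its fix.
from html import escape
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class SearchPage(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pull the user-supplied "q" parameter from the query string.
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]

        # VULNERABLE pattern: echoing raw input means ?q=<script>...</script>
        # (or a hidden <iframe src=...>) runs in the visitor's browser under
        # this trusted site's origin.
        unsafe_body = f"<p>You searched for: {query}</p>"  # shown for contrast only

        # SAFER pattern: HTML-escape anything that came from the request.
        safe_body = f"<p>You searched for: {escape(query)}</p>"

        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(safe_body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SearchPage).serve_forever()
```

The payload could be a script tag or a hidden iframe pointing at a malware server; either way the victim’s browser trusts it because it arrived from a site they trust.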


How to Smell Security Snake Oil in One Sentence or Less

If someone ever tells you something like the following:

“We defend against all zero day attacks using a holistic solution that integrates the end-to-end synergies in security infrastructure with no false positives.”

Run away.


It Ain’t Over- Apple Responds to Ou/Toorcon Showdown?

I swear, every time I think this thing is dead, its pale desiccated hand reaches from the grave, grabbing at our innocent ankles. Lynn Fox at Apple responded to some very direct questions from George Ou at ZDNet. At this point I’m surprised Apple is letting this drag on; all it does is bring the black spotlight of security on them, which, as Microsoft and Oracle will attest, isn’t necessarily a good thing. Fox’s response seems risky unless she is absolutely certain Maynor and Ellch have nothing, and are basically, you know, suicidal. That doesn’t jibe with what I know- even what I’m allowed to (and have) revealed. Toorcon is the end of this week. Ou will be there to watch Maynor and Ellch present. I suspect it will be somewhat interesting. I never suspected a chance meeting at Defcon would drop me into what’s become one of the most bizarre disclosure situations I’ve ever seen. This is even making Ciscogate look tame.


Amrit Loves Cowbell

Amrit Williams is a coworker over at Gartner, and he’s obsessed with cowbell and security tools that go to 11. Let’s just say this post isn’t the first time he’s brought it up. Seriously, Amrit is a great analyst and a welcome addition to the security blogging world. Unlike many of us he worked his way through the trenches of the vendor world, including stints at McAfee and nCircle. And, in this case, he’s right. A dirty secret of security is that if you do your job too well, people stop buying new product. Remember when AV was $30 with unlimited free updates (and didn’t bring your system to its knees)? Seriously, it was. Here’s a snippet, and check out his site:

“Bottom line: You should not have to pay more for increased functionality year over year – demand more from your vendors, tell them that you don’t need an anti-virus, anti-spyware, anti-rootkit, anti-phishing, anti-x, with a personal firewall, host-based intrusion detection, and wireless security and networking configuration capabilities each sold to you at a premium – get them all for a single price, the price you paid last year for AV. Let them know that turning it up to 11 is not going to win the gig when what you are really looking for is more cow-bell.”


Sorry, Logging IS a Privacy Risk

In a post titled “Access of Access + Audit”, Dr. Anton Chuvakin discusses the importance of logging, well, pretty much everything. When it comes to working in the enterprise environment I tend to agree- audit logs are some of the most useful security, troubleshooting, and performance management tools we have. Back when I was operational I had two kinds of bad log days: those hair-pulling, neurotic-in-a-here’s-johnny-way days spent combing, manually, through massive logs, and (even worse) those really I’m-so-screwed days where we didn’t have the logs at all. Since, thanks to better search and analysis tools, those former days are much rarer, we can focus on the latter.

But here’s where my fractured personality splits like a tree hit by lightning- while I believe we should respect personal privacy at work, there’s no expectation of privacy, nor should there be. We’re paid to help our employer succeed, using their resources, and it’s their right to watch everything we’re doing. I advise my corporate clients to be respectful, but activity monitoring is an absolutely essential security tool. Personal life, however, is a whole different bowl of Cheerios and, despite a noted absence in the Constitution, I believe we have a right to privacy in our personal lives. Be it the right to be left alone, or the right to control how our information is collected and used, privacy is essential to freedom. {yes, I’m wearing a flag around my shoulders as I type this}

But Dr. Chuvakin seems to think a little differently:

“So, what is the connection between the above definition and my call for ‘no access without logging’? Logging is NOT a privacy risk; inappropriate use of collected data is. Before you object by invoking the infamous ‘guns don’t kill people; gaping holes in vital organs do’ 🙂 I have to say that the above privacy definition is about access to information about people, not about the existence of said information. And, yes, Virginia, there IS a difference! Similarly, nowadays many folks are appalled when they see stuff like this (‘Fresh calls for ISP data retention laws. US attorney general cranks up the volume.’), but it actually – gasp! – seems reasonable to me, in light of the above. Admittedly, if your bandwidth is so huge that you cannot log and retain, you might be able to avoid logging or at least avoid long term log retention, but that is a different story altogether.”

We live in a digital age. One we don’t fully comprehend. One that requires new thinking in ways we haven’t even thought about yet. One of the essential features of this age is a redefinition of scope and scale. Rules of the past break with the reach of networks and the volume of data we collect- data that can exist, effectively, forever. So I propose “Mogull’s Rules of Privacy” (remember, I’m kind of egotistical):

  1. All data, once stored, is never lost
  2. Collected data is never private data
  3. Everyone has a different definition of appropriate use

(Corollary to 1: unless, of course, you need the data for a disaster recovery.)

What do I mean? Once we record a digital track it’s nearly impossible to assure that said track is ever really deleted. There’s everything from backups to forensic analysis. Do we lose data every day? Of course. Back as a sysadmin I was really good at it. But when dealing with private data we have to assume it’s eternal. Now why is this data never private? Because everyone has a different definition of appropriate use. Be it law enforcement, a disgruntled employee, or the head of marketing, someone, somewhere, will eventually come up with an “appropriate” reason to use the data. Privacy is like virginity- you can’t get it back.

Yes, long-term logging can help in criminal investigations, but if we’re going to pretend we live in a free society, widespread logging or monitoring of innocent citizens is not acceptable. Since our digital lives are now our physical lives, digital communications should be as sacrosanct as the mail or phone calls. I’m all for legal and aggressive monitoring, logging, and wiretapping of known criminals and those under reasonable suspicion, but the day we give in and start logging everyone, just in case, we should just dump all the voting machines, electronic or otherwise, in the Potomac and stop pretending we still believe in the Constitution. Then again, maybe it’s too late.


The NYT on the Increase in the Terrorist Threat

An article just posted by the New York Times reveals that the latest National Intelligence Estimate on terrorism concludes that our involvement in Iraq has increased the global terror threat. Most of the time I make fun of security pundits who think that because they stopped a few hackers they’re qualified to discuss issues of national security, but this time I just can’t help myself. I’ve become what I loathe.

Edited- I take that back, and the rest of the post. There are people losing their lives over this; I deleted my initial comments. Just go read the article and make your own decision. Apologies for letting my ego temporarily get the better of me.


The Non-Geek’s Guide to Consumer DRM: Why Your New TV Might Not Work With Tomorrow’s DVD Player

There’s a lot going on in the world of Digital Rights Management (DRM) these days, and I realized not everyone understands exactly what DRM is, how it works, and what the implications are. This has popped up a few times recently among friends and family as (being the alpha geek) I’ve been asked to explain why certain music or movie files don’t work on various players. Before digging into some of the security issues around DRM I thought it would be good to post a (relatively) brief overview.

I’ll be honest- as objective as I try to be, the title of this post alone should indicate that I have some serious concerns with the current direction of consumer DRM. While one of the better parts of having a personal blog is being able to throw objectivity off a very tall bridge to a very messy landing, tossing all objectivity to the wind often seriously undermines core arguments. Thus I’ll try and keep this a relatively (but not perfectly) impartial overview of the technology. In future posts I’ll dig into the security issues of DRM and make specific recommendations on security requirements for any consumer DRM system.

If you’ve ever wondered why you can’t just copy a DVD, why a song you downloaded from iTunes only plays on an iPod, why a song downloaded from Napster won’t play on an iPod, or why you can print some .pdf files but not others… keep reading. If you wonder why it’s so hard to get HDTV on a TiVo or computer… keep reading. If you want to know what that new expensive HDMI cable for your Xbox 360 or flat panel really is… keep reading. (And if you know all this stuff you might want to skip this post and wait for the big DRM security analysis in the coming weeks.)

DRM Defined

Digital Rights Management (DRM) is a collection of technologies used to control the use of digital media like music, movies, television, and text. DRM decides and controls who is allowed to read or play a file, copy it, print it, email it, download it to a portable player, burn it to CD, and so on. We broadly divide the market into two halves- the consumer world, and the enterprise world (businesses). While some of the technologies overlap, this is pretty much a hard split, and the use and implications of enterprise DRM are very different from consumer DRM.

I’m going to simplify a bit here, but DRM essentially works by encrypting a file and tagging it with rules on how that file is allowed to be used (the rules are also protected). Whatever reads that file must be able to both decrypt it and understand (and be able to enforce) those rules. A DRM system has two technical goals:

  • Control/protect content by restricting what software and devices can read it.
  • Control/protect content by restricting what that software/device (and thus the user) can do with it.

On the user side, this leads to two major implications:

  • Users are restricted in how they can use content (copying, saving, etc.).
  • Content (and thus users) are locked into using specific players/readers.

Thus content publishers and technology companies use DRM as a tool to protect their content (mostly from copying, but there are other implications), and to force you to use their devices. There’s also no single standard technology for DRM, creating a bit of confusion among us consumers. Consumer DRM is actually really hard, since we’re talking about an environment where the user can hack away at both the protected content and the players (devices and software) privately, which tends to give them an advantage over time.
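To make that model a little more concrete, here’s a toy sketch- this is not any real DRM scheme (the “encryption” is a trivial XOR keystream, and real systems also protect the rules themselves and hide the key inside licensed players), just the basic shape of encrypted content plus attached, enforced usage rules:

```python
# Toy illustration of the DRM model described above -- NOT any real DRM scheme.
import hashlib
import json

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Derive a repeating keystream from the key and XOR it with the data.
    # Applying it twice with the same key recovers the original bytes.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

def package(content: bytes, key: bytes, rules: dict) -> dict:
    """Publisher side: encrypt the content and attach usage rules."""
    return {"ciphertext": keystream_xor(content, key), "rules": json.dumps(rules)}

def play(protected: dict, key: bytes, action: str) -> bytes:
    """Player side: refuse any action the attached rules don't allow."""
    rules = json.loads(protected["rules"])
    if not rules.get(action, False):
        raise PermissionError(f"Rule check failed: '{action}' is not permitted")
    return keystream_xor(protected["ciphertext"], key)

key = b"shared-device-key"
song = package(b"...audio bytes...", key, {"play": True, "copy": False, "burn_cd": False})

print(play(song, key, "play"))   # allowed: returns the decrypted bytes
try:
    play(song, key, "copy")      # denied: the player enforces the attached rules
except PermissionError as err:
    print(err)
```

The lock-in falls out of the same design: only software that holds the key and agrees to enforce the rules can open the file, which is exactly why content bought for one ecosystem often won’t play anywhere else.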
Rather than boring you with all sorts of technical jargon, I’ll explain a little bit of how this works by comparing two kinds of shiny plastic discs- CDs and DVDs.

Compact Discs- Living a Life of Freedom

CDs were one of the first (maybe the first- we skipped that in my college history classes) formats for digital distribution. Before music CDs all music distributed to consumers was analog, and one of the characteristics of analog is that it tends to degrade over time, and as we make copies, noise sneaks into the signal. CDs changed all that by distributing music in digital form. Not that anyone was playing these things on computers in the 80s, but CDs barged into our lives with the promise of crystal-pure digital music- no scratchy records or stretchy tape. Back then all most of us knew was “it’s digital”, and beyond that we really didn’t think about it. Until CD drives started turning up in computers, that is.

Most anyone who has ripped a CD into iTunes now knows that a CD is really just a collection of bits. CDs are totally unprotected unless the music label adds some sort of DRM (which rarely works, since it’s not part of the Compact Disc Digital Audio standard, and our players don’t understand it). As soon as we started putting CD drives in computers we were able to pull perfect copies off CDs onto our computers. Once CD writers and discs became cheap enough, we could make perfect copies of these commercial CDs. Then we learned about file compression (to squeeze those big music files into something easier to store and trade) and combined that with the Internet and broadband, and all of a sudden anyone, anywhere, could trade nearly-perfect digital music with anyone else in the world without a cent going to the music labels (or artists).

They really didn’t like this. It really pissed them off. Their response? Sue the hell out of everyone and write some laws. You see, there’s a huge disparity in perception between content companies and consumers when we buy those CDs. Historically we think of it as “buying music”. We paid money, we own the CD, so don’t we own the music? Not really- the copyright holder always owns the music; we’re just allowed to use it.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.