Securosis

Research

Who’s Responsible for Cloud Security? (NetworkWorld Roundtable)

I recently participated in a roundtable for NetworkWorld, tackling the question of who is responsible for cloud security. First of all, the picture is hilarious, especially because it shows my head photoshopped onto some dude with a tie. Like I’d wear a tie. But some of the discussion was interesting. As with any roundtable, you get a great deal of puffery and folks trying to make themselves sound smart by talking nonsense. Here are a couple good quotes from yours truly, who has never been known to talk nonsense.

NW: Let’s start with a basic question. When companies are building hybrid clouds, who is responsible for what when it comes to security? What are the pain points as companies strive to address this?

ROTHMAN: A lot of folks think having stuff in the cloud is the same as having it on-premises, except you don’t see the data center. They think, “I’ve got remote data centers and that’s fine. I’m able to manage my stuff and get the data I need.” But at some point these folks are in for a rude awakening about the true impact of not having control over layer four and down, in terms of lack of visibility.

NW: As Sutherland mentioned earlier, a lot of this has to be baked into the contract terms. Are there best practices that address how?

ROTHMAN: A lot has to do with how much leverage you have with the provider. With the top two or three public cloud providers, there’s not going to be a lot of negotiation. Unless you have a whole mess of agencies coming along with you, as in [Kingsberry’s] case, you’re just a number to these guys. When you deal with smaller, hungrier cloud providers (and this applies to SaaS as well), you’ll have the ability to negotiate some of these contract variables.

NW: How about the maturity of the cloud security tools themselves? Are they where they need to be?
ROTHMAN: You’ll walk around the RSA Conference and everybody will say their tools don’t need to change, everything works great, and life is wonderful. And then after you’re done smoking the RSA hookah you get back to reality and see a lot of fundamental differences in how you manage when you don’t have visibility.

Yes, I actually said RSA hookah and they printed it. Win! Check out the entire roundtable – they have some decent stuff in there.

Photo credit: “THE BLAME GAME” originally uploaded by Lou Gold


Developers and Buying Decisions

Matt Asay wrote a very thought-provoking piece on Oracle’s Big Miss: The End Of The Enterprise Era. While the post does not deal with security directly, it highlights a couple of important trends that affect both what customers are buying and who is making the decisions.

Oracle’s miss suggests that the legacy vendors may struggle to adapt to the world of open-source software and Software as a Service (SaaS) and, in particular, the subscription revenue models that drive both.

No. Oracle’s miss is not a failure to embrace open source, and it’s not a failure to embrace SaaS; it’s a failure to embrace and flat-out own PaaS. Oracle limiting itself to just software would be a failure. A Platform as a Service model would give them the capability to own the whole data center while still offering lower cost to customers. And they have the capability to address the compliance and governance issues that slow enterprise adoption of cloud services. That’s the opposite of the ‘cloud in a box’ model being sold. Service fees and burdensome cost structures are driving customers to look for cheaper alternatives. This is not news: Postgres and MySQL, before the dawn of Big Data, were already making significant market gains for test/dev/non-critical applications. It takes years for these manifestations to fully hit home, but I agree with Mr. Asay that this is what is happening. But it’s Big Data – and perhaps because Mr. Asay works for a Big Data firm he felt he could not come out and say it – that shows us commodity computing and virtually free analytics tools provide a very attractive alternative, one which does not require millions in up-front investment. Don’t think the irony of this is lost on Google. I believe this so strongly that I divested myself of all Oracle stock – a position I’d held for almost 20 years – because they are missing too many opportunities.
But while I find all of that interesting, as it mirrors the cloud and big data adoption trends I’ve been seeing, it’s a sideline to what I think is most interesting in the article. RedMonk analyst Stephen O’Grady argues:

With the rise of open source…developers could for the first time assemble an infrastructure from the same pieces that industry titans like Google used to build their businesses – only at no cost, without seeking permission from anyone. For the first time, developers could route around traditional procurement with ease. With usage thus effectively decoupled from commercial licensing, patterns of technology adoption began to shift…. Open source is increasingly the default mode of software development….In new market categories, open source is the rule, proprietary software the exception.

I’m seeing buying decisions coming from development with increasing regularity. In part it’s because developers are selecting agile and open source web technologies for application development. In part it’s that they have stopped relying on relational concepts to support applications – to tie back to the Oracle issue. But more importantly it’s the way products and services fit within the framework of how they want them to work: both in the sense that they have to meld with their application architecture, and because developers don’t put up with sales cycle B.S. for enterprise products. They select what’s easy to get access to: freemium models or cloud services that you can sample for a few weeks just by supplying a credit card. No sales droid hassles, no contracts to send to legal, no waiting for purchasing cycles. This is not an open-source vs. commercial argument; it’s an ease-of-use/integration/availability argument. What developers want right now vs. lots of stuff they don’t want, with lots of cost and hassles: when you’re trying to ship code, which do you choose? As it pertains to security, development teams play an increasing role in product selection.
Development has become the catalyst when deciding between source code analysis tools and DAST. They choose RESTful APIs over SOAP, which completely alters the application security model. And on more than a few occasions I’ve seen WAF relegated to being a ‘compliance box’ simply because it could not be effectively and efficiently integrated into the development-operations (DevOps) process. Traditionally there has been very little overlap between security, identity, and development cultures. But those boundaries thaw when a simple API set can link cloud and on-premise systems, manage clients and employees, and accommodate mobile and desktop. Look at how many key management systems are fully based upon identity, and how identity and security meld on mobile platforms. Open source may increasingly be the default model for adoption, but not because it lacks licensing issues; it’s because of ease of availability (fewer hassles) and architectural synergy, more than straight cost.


Server Side JavaScript Injection on MongoDB

A couple years ago Brian Sullivan of Microsoft demonstrated blind SQLi and server-side JavaScript injection attacks on Mongo, Neo4j, and other big data engines, but this is the first time I have seen someone get a shell and bypass ASLR. The SCRT Information Security Team found a 0-day to do just that, as described on their blog:

Trying some server side javascript injection in mongodb, I wondered if it would be possible to pop a shell. … nativeHelper is a crazy feature in spidermonkey missused by mongodb: the NativeFunction func come from x javascript object and then is called without any check !!! … This feature/vulnerability was reported 3 weeks ago to 10gen developers, no patch was commit but the default javascript engine was changed in last version so there is no more nativeHelper.apply function. A metasploit module is comming soon…

Go read the post! They laid out their work step by step, so it’s easy to see how they performed their analysis and tried different tweaks to get this to work. A side note to NoSQL vendors out there: it may be time for some of you to consider a bug bounty program on commonly used components – or maybe throw some money SCRT’s way? Nice work, guys. A big “thank you” to Zach (@quine) for spotting this post and bringing it to our attention!
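For readers unfamiliar with the vulnerability class, here is a minimal sketch (in Python, with purely illustrative function and field names, not code from the SCRT post) of how server-side JavaScript injection arises in MongoDB: the `$where` operator evaluates a JavaScript string on the server, so splicing user input into that string lets an attacker run their own JS in the database engine.

```python
# Hypothetical sketch of server-side JS injection via MongoDB's $where
# operator. No database connection is made; we only build the query
# documents to show where the injection happens.

def build_where_clause(username: str) -> dict:
    # VULNERABLE: user input is concatenated straight into a JavaScript
    # expression that MongoDB would evaluate server-side.
    return {"$where": f"this.username == '{username}'"}

def build_safe_query(username: str) -> dict:
    # SAFER: a plain equality match; no JavaScript is evaluated at all.
    return {"username": username}

# A classic payload closes the string literal and injects arbitrary JS.
# Here it is just a tautology; a real attacker could call sleep(), leak
# data, or (as in the SCRT research) abuse engine internals.
payload = "' || '1'=='1"

vulnerable = build_where_clause(payload)
print(vulnerable["$where"])   # this.username == '' || '1'=='1'

safe = build_safe_query(payload)
print(safe)                   # the payload stays an inert string value
```

The takeaway is the same as with SQL injection: keep untrusted input out of any string the engine will evaluate as code, and prefer query operators that never invoke the JavaScript engine at all.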


Identifying vs. Understanding Your Adversaries

You read stories about badasses tracking down trolls and showing up at their houses, and you get fired up about attribution. The revenge gene is strong in humans, and there is nothing like taking that Twitter gladiator out to the woodshed for a little good old-fashioned medieval treatment. Now, payback daydreams aside, Keith Gilbert asks a pretty important question about attribution: do you really need to know exactly who the attacker is?

The question: Do you or your organization need to know the PERSON sitting behind the keyboard at the other end of the attack? I still believe that the answer, in most situations, is no. The exceptions I see are localized (physical tampering, skimming, etc) types of crime, or for organizations that are serious about prosecuting (which usually means a financial motivation) the perpetrator.

That’s right. It may make you feel better to know the perpetrator was brought to justice, but in most cases doing the work to pinpoint the person is well past the point of diminishing returns. That said, though it’s not critical to identify the actual attacker, you do need to understand the tactics and profile of your adversaries. The idea is that knowing the tactics an adversary is likely to use can be immensely valuable in prioritizing defenses and focusing employees. While understanding tactics is part of knowing your adversary, it also helps to understand the motivations behind your attackers. Why are you a target? What data are they going after (or trying to prevent others from reaching)? How will they attempt to reach their goal? This is really no different than any other business intelligence function. If you don’t have a clear profile of your adversaries, how can you figure out how to protect yourself? As long as you understand that the profile is dynamic (the attackers are always changing), and that you’re using the intelligence to make educated guesses about the controls that will protect your environment, it’s all good.
Photo credit: “Caught red handed?” originally uploaded by Will Cowan


How Cloud Computing (Sometimes) Changes Disclosure

When writing about the flaw in Apple’s account recovery process last week, something set my spidey sense tingling. Something about it seemed different from other similar situations, even though exploitation was blocked quickly and the flaw fixed within about 8 hours. At first I wanted to blame The Verge for reporting on an unpatched flaw without getting a response from Apple. But I can’t, because the flaw was already public on the Internet, they didn’t link to it directly, and users were at active risk. Then I realized that this is the nature of cloud attacks and disclosures in general. With old-style software vulnerabilities, when a flaw is disclosed, whatever the means, attackers need to find and target victims. Sometimes this is very easy and sometimes it’s hard, but attacks are distributed by nature. With a cloud provider flaw, especially against a SaaS provider, the nature of the target and flaw gives attackers a centralized target. Full disclosure can be riskier for users of the service, depending on the degree of research or effort required to attack the service. All users are immediately at risk, all exploits are 0-days, and users may have no defensive recourse. This places new responsibilities on both cloud providers and security researchers. I suspect we will see this play out in some interesting ways over the next few years. And I am neither equating all cloud-based vulnerabilities, nor making a blanket statement on the morality of disclosure. I am just saying this smells different, and that’s worth thinking about.


Friday Summary: March 22, 2013, Rogue IT Edition

What happened to the guru? The magician? The computer expert at your company who knew everything? I have worked at firms that had several who knew IT systems inside and out. They knew every quirky little trick of how applications worked and what made them fail, and they could tell you which page of the user manual discussed the exact feature you were interested in. If something went wrong you needed a guru, and with a couple keystrokes they could fix just about anything. You knew a guru by their long hair, shabby dress, and the Star Trek paperback in their back pocket. And when you needed something technical done, you went to see them. That now seems like a distant memory. I have lately been hearing a steady stream of complaints from non-IT folks that IT does not respond to requests and does not seem to know how to get out of its own way. Mike Rothman recently made a good point in The BYOD problem is what?: BYOD is not a problem, because it’s already here and is really useful. Big Data is the same. Somewhere along the line business began moving faster than IT could keep up. Users no longer learn about cool new technologies from IT. If you want a new Android or iPad for work, you don’t ask IT. You don’t ask them about “the cloud”. You don’t consult them about apps, websites, or even collecting credit card payments. In fact we do the opposite: we see what our friends have and what our kids are doing, Google what we need to know, and go do it! The end-run around IT is so pervasive that we have a term for it: Rogue IT. Have credit card, will purchase. How did the most agile and technically progressive part of the business become the laggard? Several things caused it. High-quality, seamless rollouts of complex software and hardware take lots of time. Compliance controls and reports are difficult to set up and manage. It takes time to set up identity and access management systems to gate who gets access to what. Oh, and did I mention security?
When I ask enterprise IT staff and CISOs about adoption of IaaS services, the general answer is “NO!” – none of the controls, systems, and security measures they rely on are yet fully vetted, or they simply do not work well enough. The list goes on. Technologies are changing faster than they can be deployed into controlled environments. Their problems are not just a simple download away from being addressed, and no trip to the Apple Store will solve them. It’s fascinating to watch the struggle as several disruptive technologies genuinely disrupt technology management. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s DR paper: Security Implications Of Big Data.
  • Rich quoted on Watering Hole Attacks.
  • Gunnar’s DR Post: Your Password Is The Crappiest Identity Your Kid Will Ever See.

Favorite Securosis Posts

  • Mike Rothman & Adrian Lane: When Bad Tech Journalism Gets Worse. Totally ridiculous. The downside of page view whores in all its glory. Certainly wouldn’t want a fact to get in the way of the story…

Other Securosis Posts

  • Services are a startup’s friend.
  • New Paper: Email-based Threat Intelligence.
  • Who comes up with this stuff?
  • The World’s Most Targeted Critical Infrastructure. DHS raises the deflector shields.
  • Incite 3/20/2013: Falling down. If you don’t know where you’re going…
  • When Bad Tech Journalism Gets Worse.
  • The Right Guy; the Wrong Crime.
  • New Job Diligence.
  • Preparation Yields Results.
  • The Dangerous Dance of Product Reviews.
  • Limit Yourself, Not Your Kids – Friday Summary: March 15, 2013.
  • Ramping up the ‘Cyber’ Rhetoric.

Favorite Outside Posts

  • Adrian Lane: Firefox Cookie-Block Is The First Step Toward A Better Tomorrow.
  • Mike Rothman: Indicators of Impact. Kudos to Russell Thomas for floating an idea balloon trying to assess the impact of a breach. I’ll do a more thorough analysis over the next week or so, but it’s a discussion we as an industry need to have.

Project Quant Posts

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.

Top News and Posts

  • Critical updates for Apple TV and iOS available.
  • Ring of Bitcoins: Why Your Digital Wallet Belongs On Your Finger.
  • Subway Hit By The Ultimate Cyberthief Inside Job: A Double-Insider. Two opportunities to vet – both failed.
  • Cisco switches to weaker hashing scheme, passwords cracked wide open.
  • Why You Shouldn’t Give Retailers Your ZIP Code.
  • Microsoft, Too, Says FBI Secretly Surveilling Its Customers.
  • The World Has No Room For Cowards. Krebs ‘SWATted’ in case you missed it.
  • On Security Awareness Training.
  • Gravatar Email Enumeration in JavaScript. Clever.
  • Spy Agencies to Get Access to U.S. Bank Transactions Database.

Blog Comment of the Week

This week’s best comment goes to Dwayne Melancon, in response to New Job Diligence.

Good advice, Mike. Surprised at how many people don’t look before they leap. If you apply some of your own “social engineering for personal gain” to this, you can avoid a lot of pain. Mining LinkedIn is a great shortcut, assuming the company you’re investigating has a decent presence there. Not only can you talk with specific people (including the ones who’ve left, as you mentioned), you can get a feel for whether there is a mass exodus going on. If there is, it can be a sign of a) opportunity b) Hell, or c) both. But at least you know what you’re getting into.


Apple Disables Account Resets in Response to Flaw

According to The Verge, someone discovered a way to take over Apple IDs using only the owner’s email address and date of birth. This appears to be an error exposed when Apple enabled two-factor authentication, but as soon as it went public Apple disabled the iForgot feature and locked all accounts down. This seems to be one of those annoying cases where someone decided to disclose something in the press instead of just reporting it and getting it fixed. That’s really damn dangerous when cloud services are involved. I expect this to be resolved pretty quickly. Possibly before my bracket is blown to unrecoverable levels. We’ll update as we learn more…


New Paper: Email-based Threat Intelligence

The next chapter in our Threat Intelligence arc, which started with Building an Early Warning System and then delved down to the network in Network-based Threat Intelligence, now moves on to the content layer. Or at least one content layer. Email continues to be the predominant initial attack vector: whether it delivers a link to a malware site or a highly targeted spear phishing message, many attacks begin in the inbox. So we thought it would be useful to look at how a large aggregation of email can be analyzed to identify attackers and prioritize action based on the adversaries’ mission. In Email-based Threat Intelligence we use phishing as the jumping-off point for a discussion of how email security analytics can be harnessed to continue shortening the window between attack and detection. This excerpt captures what we are doing with this paper:

So this paper will dig into the seedy underbelly of the phishing trade, starting with an explanation of how large-scale phishers operate. Then we will jump into threat intelligence on phishing – basically determining what kinds of trails phishers leave – which provides data to pump into the Early Warning system. Finally we will cover how to get Quick Wins with email-based threat intelligence.

If you can stop an attack, go after the attackers, and ultimately disrupt attempts to steal personal data, you will, right? We wrote this paper to show you how. You can see the landing page in the research library or download Email-based Threat Intelligence (PDF) directly. We would like to thank Malcovery Security for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without sponsors licensing our content.
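To make the idea of mining email aggregations concrete, here is a toy Python sketch (not from the paper; the sample messages and names are entirely hypothetical) of the kind of aggregation involved: parse a pile of suspect emails, pull out sending domains and landing-page URLs, and count repeats, since infrastructure that shows up across many messages is a candidate indicator for an early warning watchlist.

```python
# Toy phishing-indicator aggregation over a small corpus of raw emails.
import re
from collections import Counter
from email import message_from_string

# Hypothetical raw messages standing in for a large mail aggregation.
RAW_EMAILS = [
    "From: alerts@secure-bank.example\nSubject: Verify your account\n\n"
    "Click http://login.secure-bank.example/verify now.",
    "From: alerts@secure-bank.example\nSubject: Account locked\n\n"
    "Restore access at http://login.secure-bank.example/restore.",
    "From: it-help@corp-mail.example\nSubject: Password expiry\n\n"
    "Reset at http://reset.corp-mail.example/pw.",
]

URL_RE = re.compile(r"https?://([\w.-]+)\S*")

def extract_indicators(raw: str) -> tuple[str, list[str]]:
    """Return (sender domain, URL domains) for one raw message."""
    msg = message_from_string(raw)
    sender_domain = msg["From"].split("@")[-1]
    url_domains = URL_RE.findall(msg.get_payload())
    return sender_domain, url_domains

# Aggregate across the corpus: repeated domains are the "trails" a
# large-scale phisher leaves behind.
senders, urls = Counter(), Counter()
for raw in RAW_EMAILS:
    sender, url_domains = extract_indicators(raw)
    senders[sender] += 1
    urls.update(url_domains)

print(senders.most_common(1))  # most prolific sending domain
```

Real email threat intelligence operates on millions of messages with far richer features (headers, attachment hashes, URL redirect chains), but the shape of the analysis, extract indicators and rank by recurrence, is the same.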


Services are a startup’s friend

I try to read a variety of different non-security resources each week, to stay in touch with both technology and startup culture. Of course, we at Securosis are kind of a startup. We are small and we’re investing significantly in software (which is late and over budget, like all software projects). But we choose not to deal with outside investors and to have reasonable growth expectations, since ultimately we do this job because we love it, not because we’re trying to retire any time soon. It is instructive to read stuff from former operating folks who find themselves advising other startups. Mark Suster, who is now a VC, has a good post on TechCrunch about One of the Biggest Mistakes Enterprise Startups Make. He’s talking about the hazards of trying to introduce an enterprise product without a professional services capability. Ultimately any startup must be focused on customer success, and drop-shipping a box (or having customers download software) may not be enough.

The line of reasoning goes, “Services businesses are not scalable and the market won’t reward this revenue so make sure that third-parties do your implementation or clients do it themselves. We only want software revenue.” This is a huge mistake. If you’re an early-stage enterprise startup services revenue is exactly what you need.

In the security business this is a pretty acute fear. Let’s call this ArcSight-itis. Customers can be very resistant to technology that requires more investment in services than in software. The old ERP model of paying X for software and 4X for services to make it work is pretty much dead. Thus the drive to make things easier to use, requiring fewer services. And customers don’t want to revisit their experience with early SIEM offerings. But as with everything, there is nuance. Ultimately customers want to be successful, which is why they bought the product in the first place.
So if customers left to their own devices can’t get quick value from a technology investment, who’s the loser? Everyone, that’s who. Mark’s point is that for startups in emerging markets, customers don’t know what to do with the technology. They haven’t done the integration to provide a whole product (yes, break out Crossing the Chasm if you don’t know what I’m talking about). And the channel doesn’t have the expertise to really support the customer. So the startup needs to provide that expertise. Even better, services can goose revenues to partially cover costs while the software business matures. Over time, license revenues (or increasingly services/SaaS revenues) are far more highly valued. But Mark’s point is that if smaller companies selling an enterprise product don’t have the capability to integrate and service the product, they may not be around long enough for the software to mature. Of course there are exceptions, and that is why he prefaced everything with the ‘enterprise’ term. If a mid-market-focused offering requires significant services, it’s an epic fail. But if the Global 2000 is the target market, recruit good services folks early and often.


Who comes up with this stuff?

Galaxy Note II security flaw lets intruders gain full device access.

Confirmed: iOS 6.1.3 Has Another Passcode Security Flaw.

The iOS one in particular is very limited, but I am continuously astounded by the creativity of some of these passcode flaws. Give me SQL injection or heap sprays any day…


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.