Register for Our Cloud Security Training Class at RSA

As we previously mentioned, we will teach the very first CSA Cloud Computing Security Knowledge (Enhanced) class the Sunday before RSA. We finally have more details and the registration link. The class costs $400 and includes a voucher worth $295 to take the Cloud Computing Security Knowledge (CCSK) exam. We are working with the CSA, and this is our test class to check out the material before it is sent to other training organizations. Basically, you get a full day of training with most of the Securosis team for $105. Not bad, unless you don't like us. The class will be in Moscone and includes a mix of lecture and practical exercises. You can register online, and we hope to see you there! (Yes, that means it's a long week. I'll buy anyone in the class their first beer to make up for it.)


Microsoft, Oracle, or Other

I ran across Robin Harris's analysis of the Hyder transaction database research project, and his subsequent analysis of how Microsoft could threaten Oracle in the data center, on his ZDNet blog. Mr. Harris raises the issue of disruption in the database market, a topic I have covered in my Dark Reading posts, but he also argues that this could erode Oracle's position in the data center. I think looking at Hyder and similar databases as disruptive is spot on, but the effects Mr. Harris outlines are off the mark. They miss the current trends I am witnessing and seem to be couched in the traditional enterprise data center mindset. To sketch out what I mean, I first offer a little background.

From my perspective, during the Internet boom of the late '90s, Oracle grew at a phenomenal rate because every new development project or web site selected Oracle. Oracle did a really smart thing: they made training widely available, so every DBA I knew had some Oracle knowledge. You could actually find people to architect and manage Oracle, unlike DB2, Sybase, and Informix (SQL Server was considered a 'toy' at the time). What's more, the ODBC/JDBC connectors actually worked. This combination made development teams comfortable choosing Oracle, and the Oracle RDBMS seemed ubiquitous as small firms grew out of nothing. Mid-sized firms chose databases based on DBA analysis of requirements, and the DBAs tended to skew the results toward the platforms they knew.

But this time it's different. This latest generation of developers, especially web app developers, is not looking for transactional consistency. They don't want to be constrained by the back end, and most don't want to be burdened with learning a platform that does not enhance the user experience or usability of their applications. Further, basic application behavior is changing in the wake of fast, cheap, and elastic cloud services. Developers conceptualize services based on their ability to leverage these resources. Strapping a clunky relational contraption onto the back of their cheap/fast/simple/agile services is incongruous.

It's clear to me that growth in databases is there, but the choice is non-relational databases or NoSQL variants. Hyder could fill the bill, but only if it were a real, live service, and only if transactional consistency were a requirement. Ease of use, cheap storage, throughput, and elasticity are the principal requirements. The question is not whether Oracle will lose market share to Microsoft because of Hyder – nobody is going to rip out an entrenched Oracle RDBMS, as migration costs and instability far outweigh Hyder's perceived benefits. The issue is that developers of new applications are losing interest in relational databases. The choice is not 'Hyder vs. Oracle', it's 'can I do everything with flat files/NoSQL, or do I need a supporting instance of MySQL/Postgres/Derby for transactional consistency?' The architectural discussion for non-enterprise applications has fundamentally shifted.

I am not saying relational databases are dead. Far from it. I am saying that they are not the first – or even second – choice for web application developers, especially those looking to run on cloud services. With the current app development surge, relational technologies are an afterthought. And that's important, as this is where a lot of the growth is happening. I have not gone into what this means for database security, as that is the subject of future posts. But I will say that monitoring, auditing, and assessment all change, as does the application of encryption and masking technologies.
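To make the 'flat files/NoSQL vs. a supporting relational instance' question concrete, here is a minimal sketch in Python. It is purely illustrative: the signup/order scenario, names, and schema are invented, and SQLite stands in for MySQL/Postgres/Derby.

```python
import json
import sqlite3

# Schemaless route: append each record to a flat file (JSON Lines).
# Simple, fast to build, elastic -- but no cross-record guarantees.
def record_signup_flat(path, user):
    with open(path, "a") as f:
        f.write(json.dumps(user) + "\n")

# Relational route: a small supporting instance for when you genuinely
# need transactional consistency, e.g. debiting credits and creating an
# order atomically. SQLite here is just a stand-in for MySQL/Postgres/Derby.
def record_order_transactional(conn, user_id, amount):
    with conn:  # commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET credits = credits - ? WHERE id = ?",
                     (amount, user_id))
        conn.execute("INSERT INTO orders (user_id, amount) VALUES (?, ?)",
                     (user_id, amount))

record_signup_flat("signups.jsonl", {"user": "alice", "plan": "free"})

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, credits INTEGER)")
conn.execute("CREATE TABLE orders (user_id INTEGER, amount INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
record_order_transactional(conn, 1, 25)
```

If everything the application needs looks like the first function, the relational instance never enters the architecture discussion; if anything looks like the second, a small supporting relational instance earns its keep.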


React Faster and Better: Organizing for Response

Now that we have a sense of what data to focus on at the beginning of an incident, it's time to start digging into the response and investigations process itself, and talk specifically about what it entails. In larger enterprises, organizing the response process and teams can be extremely complex, due both to the volume of incidents and to the complexity of the organizational structure (politics). Some teams align with business units, others with tools, and yet others are centralized. Leading organizations we speak with consistently display a range of established best practices for responding to threats. Each is a little different on specifics, but they all have tiered escalation plans, optimized for specific threat types and planned out in advance. Occasionally we see a radical re-architecting of these structures and incident response processes due to significant changes in the nature of security risks, regulatory changes, or incident volume. Support tools and technology also evolve to support changing processes.

We start the process once an alert has triggered and front-line personnel are initiating the response. This involves multiple teams and tiers, depending on the nature of the incident. Before detailing the organizational structure, there are a few points to keep in mind (the sketch after this list shows one way they might be written down):

  • There is no 'right' organization: Team organization is influenced by the overall organizational layout and the nature of the business. We describe a hierarchical and centralized structure, but we have talked with organizations which spread these functions across different teams to align with business units. That said, nearly every organization has a top-tier team or individual responsible for major incidents and those crossing business or agency lines.
  • Organize for longevity: Organize around skills and responsibilities rather than tools. Tools come and go, and it's important that the team utilize platform-specific skills without devolving to a focus on specific tools.
  • Communicate early, even if you don't have answers yet: It's important to communicate the basic nature of incidents up the chain early, but not necessarily the details. Higher-level tiers need to know that an incident is occurring and the basics, even if they won't be directly involved. This helps them prepare resources early and identify incidents with broad scope, even if the early responder doesn't realize the full impact. Not every incident needs to be passed on, especially as many low-level incidents are handled pretty much immediately, but anything with broader potential should result in a 'heads-up' notification.
  • Carefully define containment policies: Advanced attacks, as well as those potentially involving law enforcement, require different handling than a simple external intrusion attempt. Cutting off malware or instantly cleaning systems could trigger an attacker response and result in a deeper and more complex infection. Our instinct is to cut all attacks off when we detect them, but this may result in more, and longer-term, damage; sometimes partial containment, monitoring, or other action (or even inaction) is more appropriate. Plan containment scenarios for major attack types early, communicate them, and make sure junior personnel are trained to react properly.
  • Clearly define roles and responsibilities: Every team member should know when to escalate, as well as who to notify and when. All too often, a crisis occurs because junior folks tried to manage a risk they lacked the scope, authority, or ability to handle.
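To show what 'plan containment scenarios early' and 'clearly define roles' might look like once written down, here is a small hypothetical sketch in Python. The incident types, tiers, containment actions, and contacts are all invented for illustration, not a recommended matrix:

```python
# Hypothetical escalation/containment matrix -- every value is illustrative.
RESPONSE_PLAN = {
    "commodity_malware": {
        "tier": 1,                       # front-line analyst closes it out
        "containment": "clean_and_close",
        "notify": [],                    # no heads-up required
    },
    "targeted_intrusion": {
        "tier": 3,                       # senior/central team owns it
        "containment": "monitor_first",  # don't tip off the attacker
        "notify": ["tier2_lead", "tier3_lead", "legal"],
    },
    "unknown": {
        "tier": 2,                       # escalate anything unclassified
        "containment": "isolate_and_investigate",
        "notify": ["tier2_lead"],
    },
}

def send_headsup(contact, incident_type):
    # Early notification: the basics now, details later.
    print(f"heads-up to {contact}: {incident_type} in progress")

def initial_response(incident_type):
    plan = RESPONSE_PLAN.get(incident_type, RESPONSE_PLAN["unknown"])
    for contact in plan["notify"]:
        send_headsup(contact, incident_type)
    return plan["tier"], plan["containment"]

print(initial_response("targeted_intrusion"))
```

The point is not the specific values; it's that the escalation path and containment choice are decided and communicated before the incident, so junior responders don't improvise.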
The key to managing incidents in large environments is to focus on people and process. The right foundation optimizes incident response and enables nimble, graceful escalation. Making incident response look easy is actually very, very hard, and takes a lot of work and practice. But the benefits are there: the faster and more effectively you can engage the right resources, the less time the attacker has to wreak havoc in your environment. In our next posts we will walk through the response tiers and talk about the types of incidents, tools, and skills involved at each level.


Incite 1/25/2011: The Real-Time Peanut Gallery

For those of you who are not American Football fans, we're in the middle of the playoffs over here. Teams work all year to get into the tournament and secure a high seeding. And of course the best laid plans sometimes end up at the wrong end of a blowout (yes, ATL Falcons, I'm talking about you). This past week's NFC Championship provided a lot more drama than in the past, and not because it was a competitive, exciting game. Instead it was the reaction from all sorts of folks when Chicago's QB, Jay Cutler, was taken out of the game with an alleged knee injury. It did seem kind of strange, with Cutler walking around on the sideline. How hurt could he be?

In years past, the commentators and analysts would weigh in and focus on the game. But the game has clearly changed. Lots of folks chimed in on Twitter and in blogs about how hurt (or not) Cutler was. Some NFL players called him a wimp. Some questioned his heart. All in real time. And even better, without any real information from which to judge. You don't need no stinking proof. Guys in testosterone overload talked smack about needing to be taken off the field on a stretcher before they'd leave a championship game.

The chatter around the news has actually become the news, which is rather weird. The past 48 hours haven't been about how Chicago played the game, or even the Packers' trip to the Super Bowl after sliding into the tournament as the #6 seed. It was about Cutler. Now he's got to defend whether he should have been playing on a Level 2 MCL sprain (which is really a tear). Welcome to the Real Time generation. Who needs proof? There's tweeting to do!

We see this in security as well. You have folks live tweeting conference presentations, and half the time in meetings during their work days. I hear about stupid clients and funny jokes, in real time. This is both good and bad. I used to judge my pitches based on heads nodding and how many folks came up after the session and chatted. At least now I know where I stand. If I suck, someone in the crowd has tweeted it. Why have an off-day with 100 folks, when you can be laid bare to the entire Twitterverse? Likewise, if I'm killing it, I get that feedback right when I step off the stage. Fortunately I haven't gotten so wrapped up in this real-time feedback that once I'm done I defer real-life conversation to re-tweet flattering comments. Though Rich has been known to use Twitter for Q&A when he moderates panels. I'm still trying to calibrate the true effect of this real-time communication, but I have time. Real time isn't going away anytime soon.

-Mike

Photo credits: "Pile of Peanuts" originally uploaded by falcon1961

Last Call. Vote for Me.

Is it too late to grovel? I think you can still vote for the Social Security Blogger Awards. The Incite has been nominated in the Most Entertaining Security Blog category. My fellow nominees are Jack Daniel's Uncommon Sense, the Naked Sophos folks, and some Symantec bunker dwellers from the UK. All very entertaining and worthy competition. Help a brother out with a vote. If I win, Swedish pumps for all! Yeah, baby!

Incite 4 U

Trojan opens the malware umbrella: It seems the Trojan man has upped the ante in the latest round of malware punch/counter-punch. Cloud AV leverages reputation and a much broader library of bad stuff to detect, and dramatically improves effectiveness – to still pretty crappy. So it's not surprising that the bad guys would just block calls to any external service from the AV client. It's no different than when some malware uninstalled other rootkits. Once a machine is owned, why wouldn't they install the software they want and disable the stuff they don't? Even worse, it's not clear how the AV vendors can block this behavior. Any ideas? – MR

A little security theater on the way out: Back in 2005, when the FFIEC told banks they had to start using two-factor authentication, the industry responded with one of the most impressive acts of security theater I've ever seen. Instead of giving us all tokens or linking our accounts to text messages on our phones, they used these idiotic browser/system detection technologies that are effectively worthless. But according to my former colleague Avivah Litan in this NetworkWorld article, the FFIEC might be correcting their mistake. Get ready for the screaming from both banks and consumers, but this one could tighten the window the bad guys have to drain your account once they grab your credentials. – RM

Scratching Bottom: When I used to develop software, prior to release I would do a sanity check of the publicly exposed methods in my code to determine my "threat surface". More to the point, what interfaces would attackers target, and which methods in particular could expose functions or data critical to the system? It's a rather myopic programmer's view of attack surface, but it addressed the parts I was most interested in and the components under my control. When Microsoft announced the Attack Surface Analyzer last week I was somewhat nonplussed, as their tool focuses on "classes of security weaknesses as applications are installed on the Windows operating system". As a developer my responsibility was the top of the stack, not the bottom. Sure, I might be responsible for Apache `httpd` and the database, but not the platform or other supporting applications. But security of the platform matters – even if attack surface analysis of the OS is not part of your SDL/release management process. Tools like Attack Surface Analyzer would be handy to `diff` revisions over time, so you could confirm applications and OS configurations are what you expect. Most IT admins have tools that verify application sets, and others to verify configuration and patch settings, but this is a different use case.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.