Securosis Research

Friday Summary: Halloween 2013 Edition

While you’re thinking about little kids in scary costumes, I’m here thinking about adults who write scary code. As I go through the results of a couple of different companies’ code scans, I am trying to contrast good and bad secure development programs. But I figure I should ask the community at large: Which facet of your secure software development program has been most effective? Can you pinpoint one? For many years I felt that placing requirements within the development lifecycle (i.e., process modifications) yielded the greatest returns. I have spoken with many development teams over the past year who said that security awareness training was the biggest benefit, while others most appreciated threat modeling. Still others claimed that external penetration testing or code scans motivated their teams to do better, learn more about software defects, and improve internally. The funny bit is that every team states one of these events was the root cause that raised awareness and motivated changes. Multiple different causes for the same basic effect.

I have been privy to the results of a few different code scans at different companies this summer; some with horrific results, and one far better than I could have expected, given the age and size of the code base. And it seems the better the results, the harder the development team takes external discoveries of security defects. Developers are proud, and if security is something they pride themselves on, defect reports are akin to calling their children ugly. I am typically less interested in defect reports than in understanding the security program in general. Part of my interest in going through each firm’s secure development program is seeing what changes were made, and which the team found most beneficial. Once again, the key overall benefit reported varies between organizations. Many say security training, but training does not equal development success.
Others say “It’s part of our culture” – a rather meaningless response – but those organizations do a bit of everything, and they scored considerably better on security tests. It is now clear to me, despite my biases for threat modeling and process changes, that for organizations that have been at this a while, no single element or program makes the difference. It is the cumulative effect of consistently making security part of code development. Some event started the journey, and – as with any skill – time and effort produced improvement. But overall, improvement in secure code development looks glacial. It is a bit like compound interest: what appears minuscule in the beginning becomes huge after a few years. When you meet organizations that have been at it for a long time, it is startling to see just how well the basic changes work.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Dave Lewis’s CSO post: “LinkedIn Intro: Data, meet security issues”.
Juniper blog quotes Adrian on DB-DoS.
Adrian’s DR post: Simple is better.
Gunnar on the Internet of Things.

Favorite Securosis Posts

David Mortman: Don’t Mess with Pen Test(ers).
Adrian Lane: Thinking Small and Not Leading. It is unfortunately common to discover that a job is quite different than you thought, and figuring out how best to accomplish your goals often involves several rounds of trial and error.
Mike Rothman: The Pragmatic Guide to Network Security Management: The Process. Rich had me at Pragmatic…

Other Securosis Posts

The Pragmatic Guide to Network Security Management: SecOps.
Incite 10/30/2013: Managing the Details.
New Series: The Executive Guide to Pragmatic Network Security Management.
Summary: Planned Coincidence.

Favorite Outside Posts

Dave Lewis: Buffer Hacked.
David Mortman: Adventures in Dockerland. Not a security article, but something for security folks to keep in mind. Docker is making big inroads in the cloud, especially PaaS, so you need to understand it.
Adrian Lane: Big Data Analytics: Starting Small. A short post with pragmatic advice.
Mike Rothman: Time doesn’t exist until we invent it. As always, an interesting perspective from Seth Godin about time… “Ticking away, the moments that make up a dull day…”
Gal: Fake social media ID duped security-aware IT guys.

Research Reports and Presentations

Firewall Management Essentials.
A Practical Example of Software Defined Security.
Continuous Security Monitoring.
API Gateways: Where Security Enables Innovation.
Identity and Access Management for Cloud Services.
Dealing with Database Denial of Service.
The 2014 Endpoint Security Buyer’s Guide.
The CISO’s Guide to Advanced Attackers.
Defending Cloud Data with Infrastructure Encryption.
Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.

Top News and Posts

Kristin Calhoun Keynote at API Strategy and Practice.
WhiteHat has a new secure browser; what does the Firefox say? via Wendy Nather.
A More Agile Healthcare.gov.
NSA Chief: ‘To My Knowledge’ Agency Didn’t Tap Google, Yahoo Data Centers.
Mozilla Shines A Light With Lightbeam.
Alleged Hacker V& In DOE, HHS Breaches.
MongoHQ Suffers Security Breach.

Blog Comment of the Week

This week’s best comment goes to Zac, in response to Don’t Mess with Pen Test(ers).

As you say, we try not to focus on or fixate on the potential risks. There are, however, ways to mitigate or reduce the risk. Foremost for me is to consider any and all electronic transactions to be accessible, and therefore never put anything I want to keep private into electronic records. Just as in the past you wouldn’t speak of things you wanted to keep private, today you don’t post them (Facebook is training people to do all the wrong things). And when you consider that medical offices, tax agencies, government agencies, and companies all either experience breaches or just plain send your information to the wrong people… let alone those who work at getting your information.
Or how snail mail can end up in the wrong mailbox… One may as well stay home for fear of being hit by a car while walking the dog. tl;dr – if you want to keep something private… keep it to yourself.


Friday Summary: October 18, 2013

I have been taking a lot of end-user calls on compliance lately. PCI, GLBA, Sarbanes-Oxley, state privacy laws, and the like. Today I was struck by how consistently these calls are more challenging than security discussions. With security, users want to address a fairly well-defined problem. For example: “How do we stop our IP from leaving the organization?” or “How can we protect users from phishing?” or “How do we verify administrator activity?” These discussions are far easier because of their much narrower scope, both in terms of technical approach and user perception of how they want to deal with the problem.

With compliance I often feel like someone dropped a dead cow at my feet. I don’t even know where to start the conversation – it is not clear what the customer even wants. What can or should I do with this giant steaming pile of stuff that just landed on me? What matters to you? Which compliance mandates are in play, what are your internal policies, and which of your existing security controls actually work for you – and which do not? I always ask whether the customer just wants to get compliant, or is actually looking to improve security – because it matters, and you cannot assume either way. Even then, there are dozens of avenues of discussion – such as data-at-rest protection, data-in-motion protection, application security, user issues, and network security issues. There are many possible approaches, such as prevention vs. detection, monitoring vs. blocking, and so on. How much staff and budget can you dedicate to the problem? Even if the focus is on something specific like GLBA, often the customer has not even decided what GLBA compliance means, because they are not sure whether the auditor who flagged them for a violation is even asking for the right controls. It is a soupy mess, and very difficult to have constructive conversations until you set ground rules – which usually involves focusing on a few critical tasks and then setting the strategy.
So I guess what I learned this week is that in the future I should approach these conversations more like threat modeling. Break the problem down into specific areas, identify the threats and/or requirements, and then discuss two or three relevant approaches. Walk them through one scenario and then repeat. After a few iterations a clear trend of what is right for the specific firm emerges. Perhaps start with how to secure archives, then move on to how to secure disk files, database files, document server/SharePoint archives, and so on. In many cases the best solution is suddenly apparent, and provides a consistent approach across the enterprise which works in 90% or more of cases. It becomes much easier when you examine the task in smaller pieces, look at the threats, and provide the customer with the proper threat responses. Trying to “eat the elephant” is not just a bad idea during execution – it can be fatal during planning too.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich presents changes in the crypto landscape October 30th.
Mike quoted by George Hulme in CIO on security spending.
Mortman on a podcast about security and privacy, and the Internet of Things.
Mike’s presentation on Vulnerability Management.
Rich quoted on hacking car computers.
Adrian’s recorded Cloud IAM webcast series.
Adrian quoted on Big Data Security Analytics, liking it.
Adrian quoted on Big Data Security Analytics, not liking it.

Favorite Securosis Posts

Mike Rothman: The Week in Webcasts. We have been a bit of the suck on blogging lately. But that’s because a bunch of work is going on which you don’t necessarily see, like webcasts and working with our retainer clients. So I pulled a copout to highlight a fraction of our recent speaking activity. You missed these events, but check out the recordings. We pontificate well.
Rich: Mike’s post on millennials in security.
I hate that term, and this isn’t about that particular generation – it’s about anyone younger than you. Those damn kids.
Adrian Lane: Building Strengths. Fan of this methodology, and no surprise mine are similar to Mike’s: Relator, Activator, Maximizer, Strategic, Analytical.
David Mortman: Reality Check for Millennials Looking at Security.

Other Securosis Posts

Security Awareness Training Evolution: Focus on Great Content.
Why a vBulletin Exploit Matters to Enterprise Security.
Summary: Age is wasted on the… middle aged.
Firewall Management Essentials [New Paper].
Friday Summary: October 4, 2013.

Favorite Outside Posts

Mike Rothman: Spy-shy: Mugger thwarted by ‘NSA intern’ on Capitol Hill. Talk about quick thinking and having a security mindset. A lady in the process of being mugged told the assailant she worked for the NSA and her phone was bugged and tracked. That was enough to get the perpetrator to make haste away from her. Who thinks of that? Totally awesome.
Rich: Wade Baker on the kind of data we need in breach disclosures. Yup.
Adrian Lane: Adrian Cockcroft on High Availability. It is the opposite of normal – each time I read a blog post by or interview with Adrian Cockcroft, I learn something new.
David Mortman: Making Systems Operable.

Research Reports and Presentations

Firewall Management Essentials.
A Practical Example of Software Defined Security.
Continuous Security Monitoring.
API Gateways: Where Security Enables Innovation.
Identity and Access Management for Cloud Services.
Dealing with Database Denial of Service.
The 2014 Endpoint Security Buyer’s Guide.
The CISO’s Guide to Advanced Attackers.
Defending Cloud Data with Infrastructure Encryption.
Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.

Top News and Posts

NSA Director Alexander Admits He Lied about Phone Surveillance Stopping 54 Terror Plots. If secrecy, misdirection, and counter-intelligence are part of your job description, isn’t lying a given?
Attackers in Asia compromise data for nearly 150k in California.
Software Firm Breached, 60k records stolen.
Freedom Of The Press SecureDrop. Could also be an interesting NSA honeypot.
How To Defend Against Backdoor Access. Schneier’s history lesson is interesting.
Oracle Releases Critical Java Patches.
Breach at PR


The Week in Webcasts

On Tuesday – that’s tomorrow for those of you working this Columbus Day – Gunnar Peterson and I will be talking about API gateways with Intel’s Travis Broughton. We will run this webcast as an open discussion, focused on the practical questions and issues of using API gateways. Our goal is to focus on the end-user questions we have been getting, so bring your questions too – we plan to be very interactive. You can sign up here: API Gateways: Where Security Enables Innovation.

On Wednesday at 9am Pacific, I will also be finishing up the series on Identity and Access Management for Cloud Services. This segment covers how to put the implementation strategy together, with some vendor evaluation tips to help you get what you need during selection. You can get the full research paper and the first two recorded webcasts on the Symplified site.

And at 10am Pacific Wednesday, Mike Rothman will present Taking a Hard Look at Your Vulnerability Management Program. In this webcast, Mike will revisit our “Vulnerability Management Evolution” research and discuss how to take a hard look at your VM environment. He will also touch on scenarios where you should consider moving to a new platform. As always, pragmatic and Incite-ful.


Friday Summary: October 4, 2013

I was never a big fan of the Rolling Stones. I heard them on the radio all the time growing up but never bought any of their stuff. It was good, but not good enough to spend my hard-earned money on. Recently a friend, a hardcore Stones addict, convinced me I needed some in my music collection. A couple of clicks on Amazon, and three days later I had a big box of music waiting for me when I got back from the Splunk conference. In need of a little rest after a hectic few weeks, I cracked open the package and gave it a listen. And WTF? This is not what I heard on the radio. This song is hardcore blues. The next song is honky-tonk. Then rock and roll, followed by some delta blues. Singer, guitarist, and drummer all changed styles with each song, as if each was a style they had played all their lives. This is amazing. Different, but (ahem) I liked it! The band as I heard it on the radio growing up is not the band on CDs and records. There is depth here. Versatility. Ingenuity. What I thought they were is not what they are. Their popularity suddenly makes sense. The songs played on radio and streaming services do a disservice to the band, and fail to capture special aspects of what they are (and were) about.

So this morning I realized the answer to a simple question I have been hearing for years without a good answer. The question is most often asked as: “We are evaluating SIEM solutions, but this vendor Splunk came up. Who are they and what do they do?” The security community primarily knows them as an almost-SIEM platform. They do more than log management, but less than SIEM. And that is accurate – most of the security press places Splunk in that grey area between SIEM and LM, but fails to explain what’s going on or why the platform is popular. What you have read in the press and seen in… let’s call them “Supernatural Quadrangles” for the sake of argument… does not capture what is going on or how this platform fits into the enterprise. Yes, I said enterprise.
This came up because Splunk was kind enough to invite me to their conference in Las Vegas this week to catch up on recent platform enhancements and speak with some of their customers. I don’t get paid to go, in case you’re wondering, but it is worth spending a couple of days speaking with customers and hearing what they are really doing. The customer conversations were the optimistic variety I expected, but the keynote was something else entirely. Their CEO talked about mining data feeds from aircraft and trains to help optimize safety and efficiency. About getting telemetry from mobile endpoints to gauge app reliability. I heard user stories about using the platform as a basis for consumer buying trend analysis and fraud analytics. This is not security – this is generalized analytics, applied to all different facets of the business. Even weirder was the enthusiastic fanboi audience – security customers normally range from mildly disgruntled to angry antagonists. These people were happy to be there and happy with the product – and the open bar was not yet open. Wendy Nather and I did a quick survey of the crowd and discovered that we were not among a security audience – it was IT Operations.

Splunk’s core is a big data platform. That means it stores lots of data, with analytics capabilities to mine that data. And like most big data platforms, you can apply those capabilities to all sorts of different business – and security – problems. It is a Swiss Army Knife for all sorts of stuff, with security as the core use case. To understand Splunk you need to know that in addition to security it also does IT analytics, and is applicable to general business analytics problems. Another similarity to “Big Data” platforms is that many commercial and open source projects extend its core functionality. The only security platform I know of with a similar level of contributions is Metasploit. Again, it’s not SIEM.
It is not the ideal choice for most enterprise security buyers, who want everything nicely packaged together with fully automated analytics. Correlation and enrichment are not built into Splunk. Enterprises need reports, and need to ensure their controls are running, so anything different is often unacceptable. They don’t want to rummage around in the data or tweak queries – they need results. Well, that’s not Splunk. Not out of the box. Splunk is more flexible because it is more hands-on. It offers more use cases, at a cost in required customization. Those are the tradeoffs. There is no free lunch.

A few years ago I mocked Splunk’s “Enterprise Security Module”. I said that it did not contain what enterprise security centers want, that they did not understand enterprise security buyers, and that they didn’t offer what those buyers demand in a security platform. Yeah, in case you were wondering, I failed charm school. Splunk has gotten much closer in features and functions over three years, but it is still not a SIEM. In some ways that is a good thing – if you are just looking to plug in a SIEM, you are missing their value proposition. Splunk pivoted to leverage their capabilities across a broader set of analysis problems, rather than wage trench warfare with the rest of the event management market. The majority of the people I spoke with from larger enterprises belonged to operations teams. At those firms, if security uses the product, they piggy-back off the Ops installation, leveraging additional security features. The rest of the customers I spoke with were security team members at mid-sized firms, applying the platform to highly diverse security use cases and requirements. To understand why Splunk has so many vocal advocates you need to broaden your definition of a security platform.
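To make the “hands-on” point concrete: the ad-hoc mining described above boils down to filtering and aggregating event records yourself. The sketch below is plain Python over made-up log records – the field names and events are hypothetical, and this is not Splunk’s actual search language (SPL) – but it shows the rummage-and-tweak workflow that contrasts with packaged SIEM reports.

```python
from collections import Counter

# Hypothetical log events -- the field names and values are invented
# for illustration, not a real Splunk schema.
events = [
    {"src_ip": "10.0.0.5", "action": "login_failed"},
    {"src_ip": "10.0.0.5", "action": "login_failed"},
    {"src_ip": "10.0.0.5", "action": "login_ok"},
    {"src_ip": "10.0.0.9", "action": "login_failed"},
]

# Ad-hoc "query": count failed logins per source address, then pull the
# noisiest source -- the sort of hand-tuned search an analyst iterates on.
failures = Counter(e["src_ip"] for e in events if e["action"] == "login_failed")
top_source, fail_count = failures.most_common(1)[0]
print(top_source, fail_count)  # prints: 10.0.0.5 2
```

The point is not the few lines of code – it is that nothing here is packaged or automated. You decide what to count and what “suspicious” means, which is exactly the flexibility-for-effort tradeoff described above.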


API Gateways [New Research]

If you are thinking about skipping this post because you are not a developer, or think APIs are irrelevant to you, stop! You are missing the point of an important trend in both security and development. Today we launch our research paper on API gateways. It includes a ton of information about what these gateways are, how they work, and how best to take advantage of them. Additionally, we describe this industry trend and how it bakes security into the services themselves. Even non-developers will be seeing these gateways, and working with one, in the near future.

On a more personal note, this was one of the more fun projects I have worked on recently. The best research projects are the ones where you learn a lot. Before Gunnar Peterson and I started this project, a full third of the content in this paper was either previously unknown to me, or I had not connected the dots to fully realize the picture they create. And for you jaded security and IT practitioners who have seen it all, I am willing to bet there is a lot going on here you were not aware of either. Going into the project I did not understand a few key things, such as:

That lumbering health care company exposed back-office services to the public. Via the Internet? They can’t get out of their own way on simple IT projects, so how did they do that?
I understand what OAuth is, but why is it so popular? It doesn’t make sense!
How did that old-school brick-and-mortar shop deliver Android and iOS apps? They don’t develop software!
Someone is making money with apps? Bull$!^&: That’s ‘labor of love’ stuff. Show me how, or I don’t buy it!

The word ‘enablement’ is one of those optimistic, feel-good words product vendors love. I stopped using it when I started working at Securosis, because we hear a poop-storm of bloated, inappropriate, and self-congratulatory terms without any relevance to reality. When I am feeling generous I call it ‘market-leading’ optimism.
So when Gunnar wanted the word ‘enablement’ in the title of the paper I let out a stream of curse words. “Are you crazy? That has got to be the dumbest idea I’ve ever heard. Security tech does not enable. Worse, we’ll lose credibility because it will sound like a vendor paper!” But by the end of the project I had caved. Sure enough, Gunnar was right – not purely from a technical perspective, but also operationally. Security, application development, and infrastructure have evolved with a certain degree of isolation from one another; gateways bridge them, which enables companies to provide external services while satisfying compliance requirements, often despite lacking in-house development skills.

Anyway, this has been one of the more interesting research projects I have worked on. Gunnar and I worked hard to capture the essence of this trend, so I hope you find it as educational as I did. We would like to heartily thank Intel for licensing this content – they have an API Management solution, and you can download the report from Intel’s API Gateway resource center, which has tutorials and other related technical papers. We will have an upcoming webcast with Intel, so I encourage you to register with them if you want more details. You can also download a free copy from our library: API Gateway research.
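For readers who have never seen one, the gatekeeping role these products play in front of back-office services can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation – the key store, quota, and return codes are all hypothetical.

```python
# Minimal sketch of the checks an API gateway performs in front of a
# back-office service: authenticate the caller, then enforce a quota.
# All names and limits here are hypothetical.
VALID_KEYS = {"partner-abc": {"quota": 100}}   # issued API keys
usage: dict[str, int] = {}                     # calls seen per key

def gateway_check(api_key: str) -> tuple[int, str]:
    """Return an (HTTP status, reason) pair for an incoming request."""
    if api_key not in VALID_KEYS:
        return 401, "unknown API key"
    usage[api_key] = usage.get(api_key, 0) + 1
    if usage[api_key] > VALID_KEYS[api_key]["quota"]:
        return 429, "quota exceeded"
    return 200, "forwarded to backend service"

print(gateway_check("partner-abc"))  # (200, 'forwarded to backend service')
print(gateway_check("nobody"))       # (401, 'unknown API key')
```

A real gateway does far more – OAuth token validation, schema checking, throttling, logging – but each of those is a variation on the same pattern: inspect the request, consult policy, then forward or reject. That is the “baking security into the services” idea in miniature.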


Friday Summary: September 20, 2013

I have been so totally overwhelmed with projects that I have had very little time to read, research, or blog. So I was excited this morning to take a few minutes to download the new SDL research paper from Microsoft’s blog. It examines vendors using Microsoft’s SDL in both Microsoft and non-Microsoft environments. And what did I learn? Nothing. Apparently their research team has the same problem as the rest of us: no good metrics, and the best user stories get sanitized into oblivion. I am seriously disappointed – this type of research is sorely needed. If you are new to secure software development programs and want to learn, I still encourage you to download the paper, which raises important topics with snippets of high-level information. As a bonus it includes an overview of Microsoft’s SDL. If you aren’t new to secure development, you would be better off learning about useful strategies from the BSIMM project. If you are a developer and want more detailed information on how to implement Microsoft’s SDL, use the blog and the web site. They offer a ton of useful information – you just have to dig a bit to find what you want.

Back to the subject at hand: there are two basic reasons to examine previous SDL implementations – to learn why you should do it, and to learn how to do it. Actually three, if you count failure analysis, but that is an unpopular pastime. Let’s stick with the two core reasons. Those who have built software with secure coding techniques and processes have seen the positive benefits. And in many cases they have seen that security can be effective without costing an arm and a leg. But objectively proving that is freaking hard. Plenty of people talk about business benefits, but few offer compelling proof. Upper management wants numbers, or it’s not real.
I have made the mistake of telling management peers, “We will be more secure if we do this, and we will save money in the long run as we avoid fixing broken stuff in the future, or buying bolt-on security products.” Invariably they ask “How secure?”, “How much money?”, or “How far into the future?” – all questions I am unable to answer. “Trust me” doesn’t work when asking for budget, or when trying to get a larger salary allocation for a person who has been trained in secure coding. It is very hard to quantify the advantages until you are coding, or trying to fix broken code. One of the advantages at larger financial firms is that they have been building or integrating software for a long time, have been under attack from various types of fraudsters for a long time, and can apply lessons from failed – and poorly executed – projects to subsequent projects. They have bugs, they understand fraud rates, and they can use internal metrics to see which fixes work. Over the long term they can objectively measure whether specific process changes are making a difference. Microsoft has. This report should have. Developers and managers need research like this to justify secure software development.

So where do you start? How do you do it? You ask your friends, usually. The CISOs, developers, and DevOps teams I speak with use tools and techniques their peers tried and had good experiences with. You have the same problem as your buddy at BigCo, he tried SDLC, and it worked. Ideal? No. Scientific? Hell, no. It’s the right course of action, for the wrong reasons. Still, peer encouragement is often how these efforts start. Word of mouth is how Agile development propagated. Will a company see the same successes as a peer? Almost assuredly not. Your people, your process, your code: totally different. But overall, from a decade of experience doing this, I know it works. It’s not plug and play, there are growing pains, and it takes effort, but it works.
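One way to see why a decade of experience says “it works” even when progress looks glacial is simple compound-interest arithmetic: a small, steady reduction in defect rates is invisible release to release, but dramatic over a few years. The figures below are purely hypothetical, chosen only to show the shape of the curve.

```python
# Hypothetical model: defects found per release, assuming secure-development
# practices shave a steady 5% off each release. All numbers are invented
# to illustrate compounding, not measured data.
def defects_after(initial: float, reduction: float, releases: int) -> float:
    """Defect count after `releases` cycles of compounding improvement."""
    return initial * (1 - reduction) ** releases

baseline = 200.0  # defects in the baseline release (hypothetical)
for n in (4, 12, 36):
    print(n, round(defects_after(baseline, 0.05, n)))
# prints: 4 163 / 12 108 / 36 32
```

A 5% per-release gain is barely noticeable in any single cycle, but after 36 releases the hypothetical defect count has dropped by roughly 84% – which is also why short-horizon metrics so rarely convince upper management.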
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Mortman is speaking at BruCon: Back in Black.
Dave is doing a BruCon panel as well, just in case you couldn’t get enough during the keynote.
Mike’s Dark Reading post on fear mongers vs. innovation.
Cloud IAM webcast next week: check it out!

Favorite Securosis Posts

Adrian Lane: Defending Against Application Denial of Service Attacks. Mike is delving into application-layer DoS, which is much more interesting than network DoS – there are tons of creative ways to kick over a server. This will be a fun series!
David Mortman: Firewall Management Essentials: Change Management.
Rich: Mike’s Incite this week. Mike is old. Then again, he’s the only one who wrote anything this week. Me? Baby no sleep, Rich no write.
Mike Rothman: No Sleep Mismash. I have been where Rich is now. No sleep. Trying to be productive. Pile on a job change and relocation to ATL. I don’t miss that period in my life.

Other Securosis Posts

Firewall Management Essentials: Optimizing Rules.
Black Hat West Cloud Security Training.
Threat Intelligence for Ecosystem Risk Management [New Paper].
Firewall Management Essentials: Change Management.

Favorite Outside Posts

Adrian Lane: Crooks Hijack Retirement Funds Via SSA Portal. Great post, and very informative regarding a growing problem with fraud. The onus is not on every person with a Social Security number to fix the SSA’s operational problem – the SSA needs to a) do a better job vetting users, and b) stop payouts through pre-paid cards. That entire arrangement is an uncontrollable clusterfsck. They put the infrastructure on the Internet, so they are responsible for operational security. Not that it’s easy, but intractability is why many IT projects don’t get started in the first place.
USA Today interview with Jony Ive. Some tidbits on design, and the one I really like is the focus on making functions invisible.
His example of Touch ID is perfect – it just works, no “scanning… AUTHENTICATED” animations.
Mike Rothman: Is the Perimeter Really Dead? Of course not. But it’s definitely changing. Decent take on the issue in Dark Reading.
David Mortman: Managing Secrets With Chef Vault.

Research Reports and Presentations

Identity and Access Management for


Friday Summary: September 6, 2013

When my wife and I were a young couple looking for a place in the hills of Berkeley, we came across an ad for an apartment with “Views of the Golden Gate Bridge”. The price was a bit over our budget and the neighborhood was less than thrilling, but we decided to check it out. We had both previously lived in places with bay views, and we felt the extra expense would be worth it. But when we got to the property, the apartment was beyond shabby, and no place we wanted to live. What’s more, we could not find a view! We stayed for a while searching for the advertised view, and when neither of us could find it we asked the agent. She said the view was from the side of the house. As it turns out, if you either stood on the fence in the alley, or on the toilet seat of the second bathroom, and looked out the small window, you could see a sliver of the Golden Gate. The agent had not lied to us – technically there was a bridge view. But in a practical sense it did not matter. I would hardly invite company over for a glass of wine and have them stand on tiptoes atop the toilet lid for an obstructed view of the bridge.

I think about this often when I read security product marketing collateral. There are differing degrees of usefulness among security products; while some offer the full advertised value, others are more fantasy than reality. Or they require you to have a substance abuse problem – either works. This is of course one of the things we do as analysts – figure out not only whether a product addresses a security problem, but how usefully it does so, and which of the – often many – use cases it deals with. And that is where we need to dig into the technology to validate what’s real vs. a whitewash. One such instance occurred recently, as I was given a presentation on how malware is the scourge of the earth, nothing solves the problem, and this vendor’s product stops it from damaging your organization.
If you think my paraphrasing of the presentation sounds awkward, you are not alone. It was. But every vendor is eager to jump on the anti-malware bandwagon, because it is one of the biggest problems in IT security, driving a lot of spending. Yet the applicability of their solution to the general use case was tenuous. When we talk to IT practitioners about malware they express many concerns. They worry about email servers being infected, and corporate email and contacts being exposed. They worry that their servers will be infected and used to send spam across the Internet. They worry that their servers will become bots and participate in DoS attacks. They worry that their databases will be pilfered and their information will stream out to foreign countries. They worry that their users will be phished, resulting in malware being dropped on their machines, and that the malware will assume their user credentials. They even worry about malware outside their networks, infecting customer machines and generating bogus traffic. And within each of these use cases are multiple attack avenues, multiple ways to pwn a machine, and multiple ways to exfiltrate data. So we see many legitimate techniques applied to address malware, with each approach a bit better or worse suited, depending on the specifics of the infection. And this is where understanding technology comes in, as you start to see which specific types of detection and prevention mechanisms work across multiple use cases. Applicability is not black and white, but graduated. The solutions that only apply to one instance of one use case make me cringe. The vendor referenced above addressed the use case customers seem least interested in. And they provide their solution in a way that really only works in one or two instances of that use case. Technically the vendor was correct: their product does indeed address a specific type of malware in a particular scenario.
But in practice it is only relevant in a remote niche of the market. That is when past and present merged: I was transported back in time to that dingy apartment. But instead of the real estate agent it was the security vendor, asking me to teeter on the toilet seat lid with them, engaging in the fantasy of a beautiful Golden Gate Bridge view.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich talking about how hackers find weaknesses in car computer systems.
  • Mogull and Hoff wrote a book? Amazon says so! Only four copies left, so hurry!
  • Mike’s DDoS research quoted in the Economist… Really. Security issues are clearly becoming mass market news.
  • Mike quoted in Dark Reading about Websense’s free CSO advisory offering.

Favorite Securosis Posts

  • Adrian Lane: Friday Summary: Decisions, Decisions. Rich is being modest here – he created a couple of really cool tools while learning Chef and Puppet. Even some of the professional coders in the audience at BH were impressed. Drop him a line and give him some feedback if you can. There will be loads of work in this area over the coming months – this is how we will manage cloud security.
  • David Mortman: Dealing with Database Denial of Service.

Other Securosis Posts

  • [New Paper] Identity and Access Management for Cloud Services.
  • Incite 9/4/2013: Annual Reset.
  • [New Paper] Dealing with Database Denial of Service.
  • Friday Summary: Decisions, Decisions.
  • Firewall Management Essentials: Introduction [New Series].
  • Tracking the Syrian Electronic Army.
  • The future of security is embedded.
  • Third Time is the Charm.
  • Security is Reactive. Learn to Love It.
  • Deming and the Strategic Nature of Security.
  • Incite 8/27/2013: You Can’t Teach Them Everything.
  • Reactionary Idiot Test.
  • PCI 3.0 is coming. Hide the kids.
  • Ecosystem Threat Intelligence: Use Cases and Selection Criteria.
  • Random Thought: Meet Your New Database.
  • VMWare Doubles Down on SDN.
  • Ecosystem Threat Intelligence: Assessing Partner Risk.
  • China Suffers Large DNS DDoS Attack.

Favorite Outside Posts

  • David Mortman: Busting the Biometric Myth.


[New Paper] Identity and Access Management for Cloud Services

We are happy to announce the release of our Identity and Access Management for Cloud Services research paper. Identity, access management, and authorization are each reasonably complicated subjects, but they all sit at the center of most on-premises security projects. Cloud computing and cloud security are both very complex subjects. Mix them all together, in essence federating your on-premises identity systems into the cloud, and you have complexity soup! Gunnar and I agreed that in light of the importance of identity management to cloud computing, and the complexity of the subject matter, users need a guide to help understand what the heck is going on. Far too often people talk about the technologies (e.g., SAML, OAuth, and OpenID) as the solution, while totally missing the bigger picture: the transformation of identity as we knew it into Cloud IAM. We are witnessing a major shift in how we both provide and consume identity, and that shift is not obvious from a tools-centric view.

This paper presents the nuts and bolts of how Cloud IAM works, but more importantly it frames them in the bigger picture of how Cloud IAM services work, and how this industry trend is changing identity systems. Moving the trust model outside the enterprise, with multiple internal and external services cooperating to support IAM, is a radical departure from traditional on-premises directory services. We liken the transition from in-house directory services to Cloud IAM to moving from an Earth-centric view of the universe to a heliocentric view: a complete change in perspective. This is not your father’s LDAP server! If you want to understand what Cloud Identity is all about, we encourage you to download the paper and give it a read. And we greatly appreciate Symplified licensing this content!
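Under the covers, most of these federation protocols come down to passing signed tokens full of identity claims between parties that trust each other. As a rough sketch of that idea (the issuer and claims below are hypothetical, and signature verification is deliberately omitted), here is how the claims segment of a JWT-style token is encoded and decoded:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims (payload) segment of a JWT. Illustration only:
    no signature check is performed, so nothing here should be trusted."""
    payload_b64 = token.split(".")[1]
    # JWT uses URL-safe base64 without padding; restore padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64url(obj: dict) -> str:
    """Encode a dict the way a token issuer would: JSON, then unpadded base64url."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Hypothetical token from a hypothetical identity provider
header = _b64url({"alg": "RS256", "typ": "JWT"})
claims = {"iss": "https://idp.example.com", "sub": "alice", "aud": "crm-app"}
token = header + "." + _b64url(claims) + ".signature-goes-here"

print(decode_jwt_claims(token)["sub"])  # alice
```

A real relying party must verify the token signature against the identity provider’s published keys before honoring any claim; the decode step alone proves nothing about who issued the token.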
While most vendors we speak with only want to talk about their Single Sign-On capability, federation module, SAML connector, mobile app, or management dashboard – whichever piece of the whole they think holds their competitive advantage – Symplified shares our vision that you need to understand the cloud IAM ecosystem first, and how everything fits together, before diving into the supporting technologies. You can get a copy of the paper from Symplified or our Research Library.


[New Paper] Dealing with Database Denial of Service

We are pleased to put the finishing touches on our Database Denial of Service (DB-DoS) research and distribute it to the security community. Unless you have had your head in the sand for the past year, you know DoS attacks are back with a vengeance. Less visible, but no less damaging, is the fact that attackers are “moving up the stack” to the application and database layers. Rather than “flooding the pipes” with millions of bogus packets, we now see cases where a single request topples a database, halting the web services it supported. Database DoS requires less effort from the attacker, and provides a stealthier approach to achieving their goals. Companies that have been victimized by DB-DoS are not eager to share details, but here at Securosis we think it’s time you knew what we are hearing about, so you can arm yourself with knowledge of how to defend against this sort of attack.

Here is an excerpt from the paper:

Attackers exploit defects by targeting a specific weakness or bug in the database. In the last decade we have seen a steady parade of bugs that enable attackers – often remotely and without credentials – to knock over databases. A single ‘query of death’ is often all it takes. Buffer overflows have long been the principal vulnerability exploited for DB-DoS. We have seen a few dozen buffer overflow attacks on Oracle that can take down a database – sometimes even without user credentials, by leveraging the PUBLIC privilege. SQL Server has its own history of similar issues, including the named pipes vulnerability. Attackers have taken down DB2 by sending UDP packets. We hear rumors at present of a MySQL attack that slows databases to a crawl.

We would like to thank DBNetworks for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you for this most excellent price, without clients licensing our content. If you have comments or questions about this research, please feel free to email us!
Download the paper, free of charge: Dealing with Database Denial of Service.
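The asymmetry that makes database DoS attractive is easy to demonstrate in miniature: a request of a few dozen bytes can force the engine to do work proportional to the product of the table sizes it touches. A toy illustration using an in-memory SQLite database (this is a resource-consumption sketch, not an actual exploit):

```python
import sqlite3

# Build a small table of 1,000 rows in an in-memory database
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# One ~40-byte request forces the engine to walk a million-row cross join
(rows,) = cur.execute("SELECT COUNT(*) FROM t a, t b").fetchone()
print(rows)  # 1000000
```

Scale the tables up and a single unconstrained join like this can pin a production server. This resource-consumption flavor of DB-DoS is distinct from the defect exploits described in the excerpt, but the economics are the same: a tiny request, a huge cost to the victim.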


Random Thought: Meet Your New Database

Something has been bugging me. It’s big data. Not the industry but the term itself. Every time I am asked about big data I need to use the term in order to be understood, but the term itself steers the uninitiated in the wrong direction. It leaves a bad taste in my mouth. It’s wrong. It’s time to stop thinking about big data as big data, and start looking at these platforms as the next logical step in data management.

What we call “big data” is really a building-block approach to databases. Rather than the pre-packaged relational systems we have grown accustomed to over the last two decades, we now assemble different pieces (data management, data storage, orchestration, etc.) to fit specific requirements. These platforms, in dozens of different flavors, have more than proven their worth and no longer need to escape the shadow of relational platforms. It’s time to simply think of big data as modular databases.

Big data has had something of a chip on its shoulder, with proponents calling the movement ‘NoSQL’ to differentiate these platforms from relational databases. The term “big data” was used to describe this segment, but as it captures only one – and not even the most important – characteristic, it now under-serves the entire movement. These databases may focus on speed, size, analytic capabilities, failsafe operation, or some other goal, and they allow computation at massive scale for a very small amount of money. But just as importantly, they are fully customizable to meet different needs. And they work! This is not a fad. It is not going away. It is not always easy to describe what these modular databases look like, as they are as variable as the applications that use them, but they share a set of common characteristics. Hopefully this post will not trigger any “relational databases are dead” comments.
Mainframe databases are still alive and thriving, and relational databases hold a dominant market position that is not about to evaporate either. But when you start a new project, you are probably not looking at a relational database management system. Programmers simply need more flexibility in how they manage and use data, and relational platforms do not provide enough flexibility to accommodate all the diverse needs out there. Big data is a database, and I bet that within the next couple of years, when we say ‘database’ we won’t think relational – we will mean modular big data databases.
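The flexibility argument is easiest to see in miniature: a relational table rejects records that don’t match its declared schema, while a document-style store accepts heterogeneous records as-is. A toy sketch (standard library only; the dict list stands in for a real document store):

```python
import sqlite3

# Relational: the schema is fixed up front, and every row must conform
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT NOT NULL, email TEXT NOT NULL)")
try:
    conn.execute("INSERT INTO users (name) VALUES ('alice')")  # missing email
except sqlite3.IntegrityError as err:
    print("relational rejects:", err)

# Document-style: each record carries its own structure
docs = []
docs.append({"name": "alice"})
docs.append({"name": "bob", "email": "bob@example.com", "tags": ["admin"]})
print("document store holds", len(docs), "differently shaped records")
```

Neither behavior is “right”: the rigid schema buys integrity guarantees, and the schemaless store buys agility. Modular platforms let you choose that trade-off per component instead of inheriting it from the whole system.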


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.