Earth to Symantec: AV doesn’t stop the APT

If you saw the press release title Symantec Introduces New Security Solutions to Counter Advanced Persistent Threats, what would you expect? Perhaps a detailed security monitoring solution, or maybe they bought a full packet capture solution, or perhaps really innovated with something interesting? Now what if I told you that it’s actually about the latest version of Symantec’s endpoint protection product, with a management console for AV and DLP? You’d probably crap your pants from laughing so hard. I know that’s what I did, and my laundromat is not going to be happy. It seems someone within Symantec believes that you can stop an APT attack with a little dose of centrally managed AV and threat intelligence. If the NFL were in season right now, Symantec would get a personal foul for ridiculous use of APT. And then maybe another 15 yards for misdirection and hyperbole. To continue my horrible NFL metaphor, Symantec’s owners (shareholders) should lock the folks responsible for this crap announcement out of any marketing meetings, pending appeals that should take at least 4-5 years.

From a disclosure standpoint, we got a briefing last week on Big Yellow’s Symantec Protection Center, its answer to McAfee’s ePolicy Orchestrator (ePO). Basically the product is where ePO was about 5 years ago. It doesn’t even gather information from all of Symantec’s products. But why would that stop them from making outlandish claims about countering APT? Rich tore them into little pieces, politely rubbishing, in a variety of ways, their absurd claims that endpoint protection is an answer to stopping persistent attackers. He did it nicely. He told them they would lose all credibility with anyone who actually understands what an APT really is. The folks from Symantec thanked us for the candid feedback. Then they promptly ignored it. Ultimately their need to jump on a bandwagon outweighed their desire to have a shred of truth or credibility in an announcement. Sigh.

Symantec contends that its “community and cloud-based reputation technology” blocks new and unknown threats missed by other security solutions. You know, like the Excel file that pwned RSA/EMC. AV definitely would have caught that, because another company would have been infected using the exact same malware, so the reputation system would kick into gear. Oh! Uh-oh… It seems Symantec cannot tell mass attacks from targeted 0-day attacks.

So let me be crystal clear: you cannot stop a persistent attacker with AV. Not gonna happen. I wonder if anyone who actually does security for a living looked at these claims. As my boys on ESPN Sunday Countdown say, “Come on, man!” I’m sure this won’t make me many friends within Big Yellow. But I’m not too worried about that. If I were looking for friends I’d get a dog. I can only hope some astute security marketing person will learn that using APT in this context doesn’t help you sell products – it makes you look like an ass. And that’s all I have to say about that.
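To make the reasoning concrete, here’s a deliberately simplified sketch of why prevalence-based reputation scoring is structurally blind to targeted malware. This is purely illustrative – not Symantec’s actual logic – and the database, thresholds, and verdicts are all hypothetical. The point is that a file crafted for one victim has no prior sightings to build a score from.

```python
# Illustrative only -- invented database, thresholds, and verdicts.
# Prevalence-based reputation needs prior community sightings to say anything.

import hashlib

# Hypothetical reputation DB: file hash -> sightings across the community
REPUTATION_DB = {
    "a" * 64: 1_250_000,   # well-known good file, seen everywhere
    "b" * 64: 40_000,      # mass-market malware, seen often enough to convict
}

def reputation_verdict(file_bytes: bytes) -> str:
    digest = hashlib.sha256(file_bytes).hexdigest()
    sightings = REPUTATION_DB.get(digest, 0)
    if sightings > 100_000:
        return "allow"       # high prevalence, presumed good
    if sightings > 1_000:
        return "inspect"     # common enough to have been analyzed
    return "unknown"         # no prevalence -- the system has nothing to say

# A one-off weaponized attachment built for exactly one target:
targeted_payload = b"unique payload compiled for a single victim"
print(reputation_verdict(targeted_payload))  # -> "unknown", and it runs anyway
```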


Incite 5/4/2011: Free Agent Status Enabled

Last weekend was a little oasis in the NFL desert that has been this offseason. It looked like there would be court-ordered peace; now maybe not so much. The draft reminded me of the possibilities of the new season, at least for a little while.

One of the casualties of this non-offseason has been free agency. You know, where guys who have put in their time shop their services to the highest bidder. It’s not a lot different in the workforce. What most folks don’t realize is that everyone is a free agent. At all times. My buddy Amrit has evidently been liberated from his Big Blue shackles. Our contributor Dave Lewis also made the break. Both announced “Free Agent Status Engaged.” But to be clear, no one forced either guy to go to work at their current employer each day. They were not restricted (unless a heavy non-compete was in play) from taking a call from a recruiter and working for someone else. That would be my definition of free agency, anyway. But that mentality doesn’t appear to be common.

When I first met Dave Shackleford, he was working for a reseller here in ATL. Then he moved over to the Center for Internet Security and we worked together on a project for them. I was a consultant, but he made it clear that he viewed himself as a consultant as well. In fact, regardless of whether he’s working on a contract or as a full-time employee, Dave always thinks of himself as a consultant. Which is frickin’ brilliant.

Why? Because viewing yourself as a consultant removes any sense of entitlement. Period. Consultants always have to prove their value. Every project, every deliverable, every day. When things get tight, the consultants are the first to go. Fail to execute flawlessly and add sufficient value, and you won’t be asked back. That kind of mindset seems useful regardless of job classification, right?

Consultants also tend to be good at building relationships and finding champions. They get face time and are always looking for the next project to sink their teeth into. They actively manage their careers because no one else is going to do that for them. Again, that seems like a pretty good approach even inside an organization. Either you are managing your career or it is managing you. Which do you prefer?

As happy as I am for Amrit and Dave as they embark on the next step of their journeys, I wish more folks would consider themselves perpetual free agents and start acting that way. And it’s not necessarily about always looking for a bigger and better deal. It’s about being in a position to choose your path, not have it chosen for you.

-Mike

Incite 4 U

This is effective? I saw a piece on being an “effective security buyer” by Andreas Antonopoulos and I figured it was about managing the buying process. Like my eBook (PDF) on the topic. But no, it’s basically what to buy, and I have some issues with his guidance. Starting from the first, “never buy a single-purpose tool.” Huh? Never? I say you get leverage where you can, but there are some situations where you have to solve a single problem, with a single control. To say otherwise is naive. Andreas also talks about standards, which may or may not be useful depending on the maturity of what you are buying. Early products, to solve emerging problems, don’t know dick about standards. There are no standards at that point. And even if there are, I’d rather get stuff that works than something that plays with some arbitrary standard. But that’s just me. To be fair, there is some decent stuff in here, but as always: don’t believe everything you read.
– MR

Game over, man! Sony is on track to win the award for most fscked-up breach response of 2011. Any time you have to take your entire customer network down for two weeks, it’s bad. Telling 77 million customers their data might be compromised? Even worse. And 10 million of them might have had their credit cards compromised? Oh, joy. But barely revealing any information, and saying things like “back soon”? Heh. Apparently it’s all due to SQL injection? Well, I sure hope for their sake it was more complex than xp_cmdshell. But let’s be honest: there are some cultural issues at play here, and a breach of this magnitude is no fun for anyone. – RM

ePurse chaser: eWallets are the easy part of mobile payment security. The wallet is the encrypted container where we store credit cards, coupons, signatures, and other means of identification. The trouble is in authenticating who is accessing the wallet. Every wallet has some form of an API to authenticate requests, and then return requested wallet contents to requesting applications. What worries me with the coming ‘eWallet revolution’ (which, for the record, started in 1996) is not the wallets themselves, but how financial institutions want to use them: direct access to point-of-sale devices through WiFi, Bluetooth, proximity cards, and other near-field technologies. Effectively, your phone becomes your ATM card. But rather than you putting your card into an ATM, near-field terminals communicate with your phone whenever you are ‘near’. See any problems with that? Ever had to replace your credit card because the number was ‘hacked’? Ever have to change your password because it was ‘snooped’ at Starbucks? Every near-field communication medium becomes a new attack vector. Every device you come into contact with has the ability to probe for weakness. The scope of possible damage escalates when you load arbitrary billing and payment to the phone. And what happens when the phone is cloned and your passwords are discovered through a – possibly unrelated – breach? It’s not that we don’t want financial capabilities on the phone – it’s that users need a one-to-one relationship with the bank to reduce exposure (see the sketch below). – AL

Mac users: BOO! A new version of scareware
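Since Adrian’s ePurse item above hinges on how wallet APIs authenticate requesters, here’s the promised toy sketch. It is entirely hypothetical – no real eWallet exposes this API, and every name in it is invented – but it shows the weak pattern he’s worried about: proximity as the only gate in front of the wallet.

```python
# Entirely hypothetical sketch -- no real eWallet exposes this API.
# It illustrates the risk: if radio range is the only 'authentication',
# every near-field peer you pass becomes a potential requester.

from dataclasses import dataclass

@dataclass
class Wallet:
    contents: dict  # e.g. {"visa": "4111-...", "transit_pass": "..."}

def handle_nfc_request(wallet: Wallet, requester_id: str, item: str,
                       in_range: bool):
    # The weak design: being within radio range is the whole check.
    if not in_range:
        return None
    # requester_id is deliberately ignored here -- no per-requester
    # credential, no user confirmation. A register and a skimmer in a
    # crowded train look identical to this code.
    return wallet.contents.get(item)

wallet = Wallet(contents={"visa": "4111-XXXX-XXXX-1111"})
print(handle_nfc_request(wallet, "pos-terminal-7", "visa", in_range=True))
print(handle_nfc_request(wallet, "unknown-device", "visa", in_range=True))
```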


Software vs. Appliance: Virtual Appliances

For Database Activity Monitoring, virtual appliances are a response to hardware appliances not fitting virtualization models: unified management, hardware consolidation, resource and network abstraction, and even power savings are all lost on a physical box. Infrastructure as a Service (IaaS) disrupts the hardware model. So DAM vendors pack their application stacks into virtual machine images and sell those. It’s a quick win for them, as very few changes are needed, and they escape the limitations of hardware.

A virtual appliance is ‘built’ and configured like a hardware appliance, but delivered without the hardware. That means all the software – both third party and vendor created – contained within the hardware appliances is now wrapped in a virtual machine image. This image is run and managed by a Virtual Machine Manager (VMware, Xen, Hyper-V, etc.), but otherwise functions the same as a physical appliance.

In terms of benefits, virtual appliances are basically the opposite of hardware appliances. Like the inhabitants of mirror universes in Star Trek, the participants look alike but act very differently. Sure, they share some similarities – such as ease of deployment and lack of hardware dependencies – but many aspects are quite different than software or hardware based DAM.

Advantages over physical hardware include:

Scale: Taking advantage of the virtual architecture, it’s trivial to spin up new appliances to meet demand. Adding new instances is a simple VMM operation. Multiple instances still collect and process events, and send alerts and event data to a central appliance for processing. You still have to deploy software agents, and manage connections and credentials, of course.

Cloud & Virtual Compatibility: A major issue with hardware appliances is their poor fit in cloud and virtual environments. Virtual instances, on the other hand, can be configured and deployed in virtual networks to both monitor and block suspicious activity.

Management: Virtual DAM can be managed just like any other virtual machine, within the same operational management framework and tools. Adding resources to the virtual instance is much easier than upgrading hardware. Patching DAM images is easier, quicker, and less disruptive. And it’s easy to move virtual appliances to account for changes in the virtual network topology.

Disadvantages include:

Performance: This is in stark contrast to hardware appliance performance. Latency and performance are both cited by customers as issues. Not running on dedicated hardware has a cost – resources are neither dedicated nor tuned for DAM workloads. Event processing performance is in line with software, which is not a concern. The more serious issues are disk latency and event transfer speeds, both of which are common complaints. Deployment of virtual DAM is no different than most virtual machines – as always, you must consider storage connection latency and throughput. DAM is particularly susceptible to latency – it is designed for real-time monitoring – so it’s important to monitor I/O performance and virtual bottlenecks, and adjust accordingly (see the sketch after this list).

Elasticity: In practice the VMM is far more elastic than the application – virtual DAM appliances are very easy to replicate, but don’t take full advantage of added resources without reconfiguration. In practice added memory & processing power help, but as with software, virtual appliances require configuration to match customer environments.
Cost: Cost is not necessarily either an advantage or a problem, but it is a serious consideration when moving from hardware to a virtual model. Surprisingly, I find that customers using virtual environments have more – albeit smaller – databases, and thus more virtual appliances backing those databases. Ultimately, cost depends entirely on the vendor’s licensing model. If you’re paying on a per-appliance or per-database model, costs go up. To reduce costs, either consolidate database environments or renegotiate pricing.

I did not expect to hear about deconsolidation of database images when speaking with customers. But customer references demonstrate that virtual appliances are added to supplement existing hardware deployments – either to fill in capacity or to address virtual networking issues for enterprise customers. Interestingly, there is no trend of phasing either out in favor of the other – customers stick with the hybrid approach. If you have user or vendor feedback, please comment.

Next I will discuss data collection techniques. These are important for a few reasons – most importantly because every DAM deployment relies on a software agent somewhere to collect events. It’s the principal data collection option – so the agent affects performance, management, and separation of duties.
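As promised in the Performance item above, here’s a minimal sketch of that I/O check. This is not vendor tooling – just an illustration using the third-party psutil library, with an arbitrary example threshold – that samples disk counters twice and estimates average milliseconds per write, the usual symptom when virtual storage can’t keep up with event capture.

```python
# A minimal sketch of watching for slow virtual storage. Not vendor tooling;
# the 20 ms threshold is an arbitrary example value -- tune to your environment.

import time
import psutil  # third-party: pip install psutil

def avg_write_latency_ms(interval: float = 5.0) -> float:
    """Estimate average ms spent per disk write over a sampling interval."""
    before = psutil.disk_io_counters()
    time.sleep(interval)
    after = psutil.disk_io_counters()
    writes = after.write_count - before.write_count
    busy_ms = after.write_time - before.write_time  # ms spent writing
    return busy_ms / writes if writes else 0.0

LATENCY_THRESHOLD_MS = 20.0  # example only

latency = avg_write_latency_ms()
if latency > LATENCY_THRESHOLD_MS:
    print(f"warning: {latency:.1f} ms/write -- event storage may lag real time")
else:
    print(f"ok: {latency:.1f} ms/write")
```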


Standards: Should You Care? (Probably Not)

I just wrote up my portions of tomorrow’s Incite, and talked a bit about the importance of standards in product selection. But it’s hard to treat cogently in 30 words, so let me dig into it a bit more here – mostly because of the prevailing opinion on the importance of standards, and the question of how much standards support should be a key selection criterion.

From the news angle, our pals at the Cloud Security Alliance are driving down the standards path, recently partnering with the ISO to get some standards halo on the CSA Guidance. Selfishly, I’m all for it, mostly because wide acceptance of the CSA Guidance means more demand for the CCSK certification. That means more demand for CCSK training, which Securosis is building. So from that perspective it’s all good. (Note: Our next CCSK training class will be June 8-9 in San Jose, taught by Rich and Adrian.)

But if I can see through my own selfish economically driven haze, let’s take a step back to understand where standards matter and where they don’t. Just thinking out loud, here goes:

Mature markets: Standards matter in mature markets and mature products. In these, you will likely need to support a heterogeneous environment, because buying criteria are more about price/TCO than functionality. So being able to deal with standard interfaces and protocols to facilitate interoperability is a good thing.

Risk-averse cultures: Yes, this goes hand in hand with mature markets. Most risk-averse organizations aren’t buying early market products (before standards have gelled), but when they do, if a product does support a “standard,” it reduces their perceived risk. This is what the CSA initiative is about. Folks want legitimacy, and for many people legitimacy = standards.

I’m hard pressed to find other situations where standards matter. Did I miss one (or many)? Let me know in the comments. As I tried to describe, standards don’t matter when dealing with emerging threats, where people are still figuring out the best way to solve the problem. Standards also don’t matter if a company tends to buy everything from a single vendor – assuming the vendor actually integrates their stuff, which isn’t a safe assumption (ahem, Big Yellow, ahem. Cough. Barf.) And vendors tend to push their proprietary technology through a standards process for legitimacy. Obviously if the vendor can say their technology is in the process of being standardized, it reduces perceived risk. But the unfortunate truth is that by the time any technology works its way through the standards process, the game has already changed. Twice.

So keep that in mind when you are preparing those fancy RFPs asking for all kinds of standards support. Are you asking because you need it, or to reduce your risk? Or maybe just to give the vendor a hard time, which I’m cool with.


SDLC and Entropy

I really enjoy having Gunnar Peterson on the team. Seems like every time we talk in our staff meeting I laugh and learn something – two rare outcomes in this profession. We were having a laugh Friday morning about the tendencies of software development organizations to trip over themselves in order to improve. Several different clients were having the same problem in understanding how to apply security to code development. Part of our discussion:

Gunnar: There are no marketing requirements, so no code, right?

Adrian: I’ll bet the developers are furiously coding as we speak. No MRD, no problem.

Gunnar: The Product Manager said “You start coding, I’ll go find out what the customer wants.”

Adrian: Ironic that what they’re doing is technically Agile. Maybe if it’s a Rapid Prototyping team I’d have some sympathy, but someone’s expecting production code.

Gunnar: I wonder what they think they are building?

Don’t talk to me about improving Waterfall or Agile when you can’t get your organizational $&!% together. What do I mean by that? Here is an example of something I witnessed:

Phase 1: Development VP, during an employee review, says, “What the heck have you been doing the last six months?” In a panic, the developer mentions a half-baked idea he had, and a prototype widget he’s been working on. An informal demo is scheduled.

Phase 2: VP says “I love that! That is the coolest thing I have seen in a long time.” The developer’s chest swells with pride.

Phase 3: VP says “Let’s put that in the next release.” The developer’s brain freezes, thinking about the engineering challenges of turning a half-baked widget into production code, suddenly realizing there is no time to do any other sprint tasks. The VP takes the developer’s stunned silence as a commitment and walks away.

Phase 4: Developer says to product manager “Yeah, we’re including XYZ widget. The VP asked for it so I checked it into the code base.” Product Manager says “Are you effing crazy? We don’t even have tests for it”. And they make it happen because, after all, it’s employee review time.

It’s not news to many of you, but that’s how features get put in, and then you ‘fix’ the feature. Security plays catch-up somewhere down the road because the feature is too awesome not to put in, and too awesome to wait until it’s fully sussed out. I used to think this was a process issue, but now I believe it’s a byproduct of human nature. Managers don’t realize the subtle ways they change others’ behavior, and their own excitement over new technology pushes rules right out the window. It’s less about changing the process than not blowing up the one you have.

Gunnar’s take is a little different: If you’re in security, don’t assume that you can change the process, and don’t assume your job is to make the process more formal. Instead look at concrete ways to reduce vulnerabilities in the context of the existing process. As any teenage girl knows, don’t listen to a word the boy says – watch what he actually does. Likewise, security people working on the SDLC: don’t believe the process documents! Instead observe developers in the wild – sit in their cubes and watch what they actually do. If you strip away the PowerPoints, process documents, and grand unified dreams of software development (be they Agile, Scrum, or Rational) this is how real world software development occurs. It’s a chaotic and messy process.
This assumption leads you in a different direction – not formalism, but winning the hearts and minds of developers who will deliver on building the security mechanisms, and finding quick and dirty ways to improve security.


What’s Old Is New Again

“The entire credit card table was encrypted and we have no evidence that credit card data was taken. The personal data table, which is a separate data set, was not encrypted, but was, of course, behind a very sophisticated security system that was breached in a malicious attack.”

That’s from the news, analyst, and Sony PR reports coming out about the PlayStation Network/Qriocity breach. Does anyone trust Sony’s statement that the credit card data was not ‘taken’? If attackers got the entire customer database, wouldn’t you think they grabbed the encrypted card numbers and will attempt to crack them later? Is the comment about “a very sophisticated security system” supposed to make customers feel better, or to generate sympathy for Sony? Does labeling their breached security system “very sophisticated” reduce your faith in the likelihood their crypto and key management systems will withstand scrutiny? How many of you thought the name “Qriocity” was a defacement the first time you read the story?

My general rule over the last three years is to not write about breaches unless there is something unusual. There are just too many of them, and the questions I asked above could apply to any of the lame-assed breach responses we have been hearing for the last decade. But this one has plenty of angles that make it good spectator sport:

It’s new: It’s the first time I have seen someone’s network hacked through a piece of dedicated hardware – of their own design.

It’s old: It’s the classic (developer) test environment that was the initial point of entry and, just like so many breaches before it, for some mysterious reason the test environment could access the entire freakin’ customer database.

It’s new: I can’t think of another major data breach that will have this degree of international impact. I’m not talking about the fraud angle, but rather how governments and customers are reacting.

It’s old: Very little information dribbling out, with half-baked PR “trust us” catchphrases like “All of the data was protected …”

It’s new: Japanese culture values privacy more than any other country I am familiar with. Does that mean they’ll develop the same dedication to security as they do quality and attention to detail?

It’s old: It’s interesting to me that a culture intensely driven to continuous improvement has an oh-so-common allergic reaction to admitting fault. Sure, I get the ‘blameless’ angle written about in management texts throughout the 80s, but the lack of ownership here has a familiar ring. Obviously I was not the only one thinking this way.

It’s new: We don’t, as a rule, see companies basically shut down their divisions in response to breaches, and the rumored rebuild of every compromised system is refreshing.

It’s old: Their consumer advice is to change your password and watch your credit card statements.

Ultimately I am fascinated to see how this plays internationally, and whether this breach has meaningful long-term impact on IT security processes. Yeah, not holding my breath either.


Software vs. Appliance: Software

“It’s anything you want it to be – it’s software!” – Adrian

Database Activity Monitoring software is deployed differently than DAM appliances. Whereas appliances are usually two-tier event collector/manager combinations which divide responsibilities, software deployments are as diverse as customer environments. It might be stand-alone servers installed in multiple geographic locations, loosely coupled confederations each performing different types of monitoring, hub & spoke systems, everything on a single database server, all the way up to N-tier enterprise deployments. It’s more about how the software is configured and how resources are allocated by the customer to address their specific requirements.

Most customers use a central management server communicating directly with software agents which collect events. That said, the management server configuration varies from customer to customer, and evolves over time. Most customers divide the management server functions across multiple machines when they need to increase capacity, as requirements grow. Distributing event analysis, storage, management, and reporting across multiple machines enables tuning each machine to its particular task, and provides additional failover capabilities. Large enterprise environments dedicate several servers to analyzing events, linking those with other servers dedicated to relational database storage.

This latter point – use of relational database storage – is one of the few major differences between software and hardware (appliance) embodiments, and the focus of the most marketing FUD (Fear, Uncertainty, and Doubt) in this category. Some IT folks consider relational storage a benefit, others a detriment, and some a bit of both; so it’s important to understand the tradeoffs. In a nutshell, relational storage requires more resources to house and manage data, but in exchange provides much better analysis, integration, deployment, and management capabilities. Understanding the differences in deployment architecture and the use of relational storage is key to appreciating software’s advantages.

Advantages of software over appliances include:

Flexible Deployment: Add resources and tune your platforms specifically to your database environment, taking into account the geographic and logical layout of your network. Whether it’s thousands of small databases or one very large database – one location or thousands – it’s simply a matter of configuration. Software-based DAM offers a half-dozen different deployment architectures, with variations on each to support different environments. If you choose wrong, simply reconfigure or add additional resources, rather than needing to buy new appliances.

Scalability & Modular Architecture: Software DAM scales in two ways: additional hardware resources and “divide & conquer”. DAM installations scale with processor and memory upgrades, or you can move the installation to a larger new machine to support processing more events. But customers more often choose to scale by partitioning the DAM software deployment across multiple servers – generally placing the DAM engine on one machine, and the relational database on another. This effectively doubles capacity, and each platform can be tuned for its function. This model scales further with multiple event processing engines on the front end, letting the database handle concurrent insertions, or by linking multiple DAM installations via a back-end database.
Each software vendor offers a modular architecture, enabling you to address resource constraints with very good granularity.

Relational Storage: Most appliances use flat files to store event data, while software DAM uses relational storage. Flat files are extraordinarily fast at writing new events to disk, supporting higher data capture rates than equivalent software installations. But the additional overhead of the relational platform is not wasted – it provides concurrency, normalization, indexing, backup, partitioning, data encryption, and other services. Insertion rates are lower, while complex reports and forensic analyses are faster. In practice, software installations can directly handle more data than DAM appliances without resorting to third-party tools.

Operations: As Securosis just went through a deployment analysis exercise, we found that operations played a surprisingly large part in our decision-making process. Software-based DAM looks and behaves like the applications your operations staff already manages. It also enables you to choose which relational platform to store events on – whether IBM, Oracle, MS SQL Server, MySQL, Derby, or whatever you have. You can deploy on the OS (Linux, HP-UX, Solaris, Windows) and hardware (HP, IBM, Oracle, Dell, etc.) you prefer and already own. There is no need to re-train IT operations staff because management fits within existing processes and systems. You can deploy, tune, and refine the DAM installation as needed, with much greater flexibility to fit your model. Obviously customers who don’t want to manage extra software prefer appliances, but they are dependent on vendors or third-party providers for support and tuning, and need to provide VPN access to production networks to enable regular maintenance.

Cost: In practice, enterprise customers realize lower costs with software. Companies that have the leverage to buy hardware at discounts and/or own software site licenses can scale DAM across the organization at much lower total cost. Software vendors offer tiered pricing and site licenses once customers reach a certain database threshold. Cost per DAM installation goes down, unlike appliance pricing, which is basically linear. And the flexibility of software allows more efficient deployment of resources. Site licenses provide cost containment for large enterprises that roll out DAM across the entire organization. Midmarket customers typically don’t realize this advantage – at least not to the same extent – but ultimately software costs less than appliances for enterprises.

Integration: Theoretically, appliance and software vendors all offer integration with third-party services and tools. All the Database Activity Monitoring deployment choices – software, hardware, and virtual appliances – offer integration with workflow, trouble-ticket, log management, and access control systems. Some also provide integration with third-party policy management and reporting services. In practice the software model offers additional integration points that provide more customer options. Most of these additional capabilities are thanks to the underlying relational databases – leveraging additional tools and procedural interfaces. As a result, software DAM deployments provide more options for supporting business analytics, SIEM, storage, load balancing, and redundancy.

As I mentioned in the previous post, most of these advantages are not visible during the initial deployment phases
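To make the relational-storage tradeoff above concrete, here’s a toy sketch with SQLite standing in for the enterprise platforms named in the Operations item. The schema and field names are invented for illustration; the point is that indexed relational storage turns a forensic question into one query, where a flat-file store would need a full scan.

```python
# Toy sketch: SQLite stands in for the enterprise databases named above.
# Schema and field names are invented. Inserts pay for the index; forensic
# queries collect the payoff.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dam_events (
        event_time  TEXT,   -- ISO 8601 timestamp
        db_user     TEXT,   -- database account that issued the statement
        source_host TEXT,
        statement   TEXT    -- normalized SQL captured by the agent
    )
""")
# The index is the point: insertion slows slightly, lookups get much faster.
conn.execute("CREATE INDEX idx_user_time ON dam_events (db_user, event_time)")

conn.execute(
    "INSERT INTO dam_events VALUES (?, ?, ?, ?)",
    ("2011-05-04T02:13:07", "app_svc", "10.1.2.3", "SELECT * FROM customers"),
)

# 'What did this account run overnight?' -- an indexed range scan here,
# versus grepping every flat file an appliance wrote during the window.
rows = conn.execute(
    """SELECT event_time, statement FROM dam_events
       WHERE db_user = ? AND event_time BETWEEN ? AND ?""",
    ("app_svc", "2011-05-04T00:00:00", "2011-05-04T06:00:00"),
).fetchall()
print(rows)
```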


Friday Summary: April 29, 2011

I’ve taught a lot of different classes over the years, and always found the different structures to be pretty interesting. On one end were highly scripted first aid classes that forced us to show crappy “Help! I’ve fallen!” videos produced in 1878, accompanied by a mandatory script. The name of the game was baseline consistency. Lock everything down as tight as possible because you can’t predict the quality of the instructor. Heck, few CPR instructors have ever actually done CPR. I know how I taught changed after I cracked some ribs on mostly-dead people. (No, they don’t wake up and thank you like on Baywatch. And they are never that hot or in bikinis. Well, sometimes bikinis, but trust me, you really should dress more appropriately before letting your heart stop.)

In a completely different direction is martial arts – which is all about tailoring the experience to best connect with the student over many years. I only ran a solo class for about 6 months while my instructor ran off to start his family, and learned a hell of a lot in the process. Then my IT career hit and that was the end of that.

Why bring this up now? I’ve been hip-deep in pulling together all the final materials for the first fully packaged CCSK class we will be teaching June 8-10. For the first time I’m in the position of developing courseware for a structured class, with hands-on, which others will have to teach. The lecture slides are pretty straightforward, although we have to be careful to include plenty of instructor notes and not assume any experience level. The hands-on exercises? Those are a challenge. Building the scenarios wasn’t too tough. But it takes me 5 times longer to convert one into a package someone else can teach from. Everything has to be scripted, packaged, and able to run on everything from a high-end Mac Pro to a freaking Speak-n-Spell. And run a private cloud for 40 students on a Windows ME netbook. A lot more people have performed CPR than have built private clouds.

I’m not complaining – it’s a blast to work with my hands again. Although I have always sucked at debugging, and my wife is pissed I keep bleeding on the floor from banging my head against all our walls. But it’s very cool to put everything together like a puzzle. Pre-script pieces in module 1 we won’t need until module 8, just so students can focus on the concepts rather than the command lines, while still giving advanced folks freedom to explore and play so they don’t get bored. I just hope it all works.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted in CSO Magazine.
  • Rich on security and the AWS outage.
  • The Network Security Podcast, Episode 239. With special guest Josh Corman.

Favorite Securosis Posts

  • Mike Rothman: Why We Didn’t Pick the Cloud (Mostly) and That’s OK. Who else gives you such a look into the thought processes behind major decisions? Right, no one. You’re welcome.
  • David Mortman: Why We Didn’t Pick the Cloud (Mostly), and That’s Okay.
  • Adrian Lane: Why We Didn’t Pick the Cloud. Operations played a bigger part in the decision process than we expected.
  • Rich: Software vs. Appliance: Software.

Other Securosis Posts

  • Incite 4/27/2011: Just Write.
  • Security Benchmarking, Beyond Metrics: Benchmarking in Action.
  • Security Benchmarking, Beyond Metrics: Index.

Favorite Outside Posts

  • Mike Rothman: DHS chief: What we learned from Stuxnet. How cool would it have been if Secretary Napolitano had just said “We’re screwed.”? We are, but this article hits on responding faster and more effectively.
  • David Mortman: TCP-clouds, UDP-clouds, “design for fail” and AWS. Because DR is a security issue.
  • Adrian Lane: Anatomy of a SQL Injection Attack.
  • Dave Lewis: DHS needs to point finger at self, not private industry.
  • Rich: Richard Bejtlich’s Cooking the Cuckoo’s Egg.

Research Reports and Presentations

  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.

Top News and Posts

  • Sony’s PlayStation Network and Qriocity hacked.
  • How SmugMug survived the Amazonpocalypse.
  • Flash + 307 Redirect = Game Over.
  • Amazon Is Amazing! Smells of back-handed compliments, but much of the content is accurate.


Incite 4/27/2011: Just Write

All I wanted to do on Monday night was go to sleep. I had a flight in the morning and thought it would be a good idea to get some rest. So I sit down with the Boss and we catch up on the day, discuss some tactics to deal with issues the kids face, and I’m ready to hit the rack. Then I notice she’s watching a movie called One Week (Netflix streaming FTW), where basically a guy is given a week to live and sets off on a cross-Canada jaunt on a motorcycle to discover himself, meet some interesting people, and do stuff that happens in movies. Movie trap, awesome.

90 minutes later, we start discussing the movie and she asks me point blank, “What would you do?” Crap, hate that question. I start thinking about the right answer, and then unconsciously I blurt out, “I’d write. A lot.” Wow. I’m given a week (or whatever) to live and my first thought is to write. Not travel. Not do exciting things. But write. Huh.

She then asks why. I respond that I’ve been to lots of places. I’ve done some fairly interesting things. So I don’t have a great desire to see places or check items off some bucket list. Of course that doesn’t mean I don’t want to see more places and do more things. But if I had to really prioritize given a very limited amount of time, I’d focus on teaching – which for me means writing. I like to think I’ve learned a bunch of stuff (mostly by screwing it up), and I’d want to document that. I figure my road rash and stories would be useful to my kids. But maybe other folks too. Or maybe that’s just my arrogance talking.

I’d write about the stuff I’ve screwed up. I’d write about the stuff I’ve done right. I’d write about the stuff I’d do differently (which wouldn’t be much, by the way). I’d focus on the importance of relationships, and the unimportance of collecting things. And I’d make sure the people I care about had something to remember me. I’ve got a face made for radio, so I’d write.

Then I thought about how lucky I was. If I had a week to live, I’d write. Which, by the way, is pretty much what I do every day. I try to impart some wisdom through each week’s Incite. And our other research all strives to relay the perspectives we build through our travels, hoping it’s all useful to someone. The topics would be a bit different with a different sort of deadline – pun intended. But the tactics wouldn’t. Very interesting.

Then something utterly profound happened. My wife said, “You know, you don’t have to wait to get a death sentence to write that kind of stuff.” Holy crap. She’s right. Sure, life gets in the way, but those are just excuses. Obviously I’ll need to work around my day job a bit, but what the hell am I waiting for? Just write. I think I will. It’s going to be an interesting summer.

-Mike

Photo credits: “Seven Days” originally uploaded by Laurie Pink

Incite 4 U

Security = Money (again): It appears that investor interest is swinging back to security. I guess that’s inevitable, if you wait long enough. A couple weeks ago Bit9 raised $12.5MM and Verdasys raised another $15MM. That kind of money, primarily from existing investors, typically means they think they’ll get good multiples on the investment. They invest much less when they are just trying to keep a company on life support. And it even seems the IPO market could be receptive to security deals. TrustWave filed an S-1 with the SEC this week, and you’d expect a couple of the other high-profile start-ups or private equity buyouts from the last few years to test the waters at some point.
The fundamentals for continued growth in security remain good. The question is whether smaller companies can sustain growth long enough to find an upstream partner, since that’s how this movie ends regardless of whether there is an IPO somewhere in there. Some will, most won’t, and the pendulum will swing back and forth. It always does. – MR

Someone needs a carders’ union: I’m going to be pretty annoyed if there isn’t any NFL this fall, and I know Mike will be too. I couldn’t give a crud about baseball, but I do enjoy my Sunday football. But I have to respect the rights of the players’ union, even if I don’t like their (or the owners’) tactics. I’m not the biggest fan of unions, but can admit that in certain industries we still need them. Take carders (the credit card fraudsters). One poor bloke pleaded guilty to fraud involving $36M in transactions. That’s a pretty good take, right? Well, he only earned somewhere around $150K, which is only 0.4%. That’s a downright crappy margin. He’d be better off opening a convenience store, where at least the margins are 2-5%. Plus he has to make reparations for at least some of the $36M lost to fraud. Seriously, dude – call the Teamsters. No way would they put up with 0.4%, and maybe they could get you some health and retirement benefits. – RM

I see you: I was not surprised that MLB programming on Apple TV would not allow me to view certain games, but I was surprised that my location was not based on my registered address – instead it’s based on the Apple TV’s uplink IP address (assigned by the ISP). Geolocation from IP and gateway has certainly been a hot feature for service providers over the last couple years, with vendors such as PayPal factoring location into fraud detection. This capability continues to evolve, with Northwestern University recently claiming they can pinpoint user locations within half a mile. Their methodology uses a combination of known locations/IP addresses of major landmarks and government buildings, then compares


Why We Didn’t Pick the Cloud (Mostly), and That’s Okay

It’s no secret that we are currently working on a new software platform to deliver actionable security research to a broader market, engage folks, and… umm… feed our families. As you might expect, like any software project, it’s running about 30% late and 70% over budget. I just can’t seem to stop making our developers find exactly the right imagery and user experience to best represent the Securosis brand. Mike has coined a new term, ‘analness’, to describe the gyrations we’ve gone through, but I’m okay with that because we have spent years building our reputation and aren’t about to roll out a huge steaming pile of crap just to hit a delivery date.

As we close in on the finish line, we faced a huge decision on how to host this. Our current provider is pretty good, but we ran into some issues earlier this year that prompted us to look at alternatives. And we are co-hosted, which won’t work once we start loading sensitive content into a paid service. So we began the long evaluation process of picking the right architecture and host. Well, that and satisfying our paranoia regarding site security. Despite being heavy cloud folks, we eventually decided on a dedicated server model offered by a specialized hosting company. Yes, we understand that’s probably counterintuitive, so here’s why we didn’t go the cloud route.

Co-hosting and VPS

For the most part our current site is totally fine with our current load, and our hosting provider is a lot more security-conscious than most. I launched securosis.com as a blog over at Bluehost, on a WordPress co-host. It worked totally fine, but as we started expanding it was clear that platform couldn’t meet our growing needs. We decided to switch to a better content management system (ExpressionEngine), and while we could technically run it there, we decided to go with a more specialized provider (enginehosting.com). We have been mostly happy with the change, even though EH is considerably more expensive, because we get a lot more for what we pay. They also have excellent growth options to expand to a Virtual Private Server or even dedicated boxes if needed. But it’s still a co-host model.

The one problem we hit earlier this year appeared after a major platform upgrade. Our back end became nearly unusable due to performance problems, and when I submitted a support request they kept blaming our configuration or plugins. We are big boys, and willing to accept when we screw up. We turned our system upside down and couldn’t find anything that would kill the performance of the admin console. As it turned out we were right. Another client in our cluster over-used resources – as I had initially suggested. We were bothered by their lack of investigation, and by the (realized) potential for another customer to impact us. That convinced us we needed to get off co-hosting, and onto VPS or cloud. We also had to factor in all the security reasons to drop a co-hosted model once we have content we want to protect.

VPS vs. Cloud

We quickly ruled out VPS. As our knowledge and experience working with various cloud services grew, we saw no reason to pick VPS over a pure cloud model. To be honest, while I see co-hosting surviving for a while, I definitely see the allure of VPS cratering in the next few years, as customers keep comparing VPS offerings against the rapidly evolving public cloud offerings. I decided we would go completely cloud.
Aside from the lack of advantages to VPS, we were conscious of the importance of eating our own dogfood, now that we are working so deeply with the Cloud Security Alliance and advising people on cloud projects. Our criteria for a cloud provider included a security-conscious shop, judged both on what they publish and on checks with various industry connections. We wanted some IPS/firewall and patch management support options to improve our baseline security and reduce our management overhead. As our IT guy, I simply don’t have the time to manage all our patches/fixes myself. If I were caught on an international flight when we needed to block and fix a critical 0day, we could be screwed. That was unacceptable.

Other factors included our plan to use a cloud-based WAF. Not that it could block everything, but the combination of blocking basic scans and providing better analytics was attractive. We also factored in performance, as we know our potential audience is self-limiting, and what we are delivering isn’t very CPU intensive. We need a little beef, and more importantly the capability to grow, but we couldn’t foresee a need for anything too crazy. It’s not like we are Netflix or anything (yet). So there we were – I thought we were all set, until…

From Cloud to Dedicated

I wasn’t fully satisfied with the options I found (all of which cost a heck of a lot more than a basic AWS deployment), but I felt confident that we could get what we needed at a reasonable price. Then we mentioned what we were doing to some trusted friends in the industry. For now I won’t mention who we are working with, but someone we highly respect offers dedicated hosting in a special section of a major data center they lease (their own cage). I am not sure they expected us to take them up on the offer. It’s not like they were soliciting our business – this came up over beer. These folks are as paranoid as we are (maybe more), and aside from hosting the site they will implement some stringent and unusual security controls we couldn’t possibly get anywhere else for any reasonable price. Normally they don’t use this model even with their existing clients, and we are going to be their first test case beyond internal infrastructure. As a bonus, their data center guarantees 100% infrastructure uptime. In writing. (Note: this doesn’t mean our boxes, just their network and power.) Trusted


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.