Friday Summary: May 6, 2011

A few months back one of my dogs knocked over one of my speakers. Sent it flying, actually. A 3', 50lb wood cabinet speaker – as if it wasn't there. The culprit is still a puppy, but when she gets ripping, she can pretty much take out any piece of furniture I own. And she has a big butt. She seems to run into everything butt first, which is impressive as she does not walk backwards. Wife calls her 'J-Lo'. She learned how to spin from playing with my boxer, and now she spins out of control when she is amped up. Big ass, right into a chair… BANG! I miss having music in the living room, so I thought I would solve the problem by bringing out a pair of tower speakers from the back room. They are six feet tall and weigh 180lb each. I thought that was the perfect solution, until she moved the piano half an inch with one of her spins. For the sake of the speakers, and my health, I removed all stereo components from the living room. But I still want music, so I have been searching for small electronics to put on the shelf in the kitchen. My requirements were pretty simple: decent quality music that won't become a projectile of death. I began shopping and found, well, everything. I found hundreds of portable DACs, the size of a cigarette pack, for the iPhone & iPad. There are lots of boom boxes, desktop radios, and miniature receivers. I ordered the iHome IP1 because it got good reviews and – while the audiophile in me hates to admit it – it just looked good. I was really excited when it arrived last week, and I cleared off a space for it, cleaned up the shelf, got everything plugged in, and updated my music library with some fresh tunes. Only problem – it sucked. Or maybe it was defective, I don't really know. It won't play music from an iPhone 4, iPad, or iPod touch – only the iPhone 3GS. And when it did play, it sounded underwater. Ugh. Really freakin' bad. So I am still searching for a good desktop radio that I can stream music to from my iDevices. If you have reasonably priced recommendations let me know. For now I am just playing from the built-in speakers, which is better than nothing. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Mike on the Importance of Application Intelligence.
• Adrian's DR post on How To Respond To The Sony Attacks.

Favorite Securosis Posts
• Adrian Lane: SDLC and Entropy. See Gunnar's take.
• David Mortman: What's Old Is New again. And we wonder why our lives (in security anyway) are described as the "hamster wheel of pain." We repeat the same stuff over and over again. With maybe a twist or two (as Adrian astutely points out), but the plot is the same. So is the end result. Sigh.
• Mike Rothman: Why We Didn't Pick the Cloud (Mostly) and That's OK. Who else gives you such a look into the thought processes behind major decisions? Right, no one. You're welcome.

Other Securosis Posts
• Earth to Symantec: AV doesn't stop the APT.
• Incite 5/4/2011: Free Agent Status Enabled.
• Standards: Should You Care? (Probably Not).
• Software vs. Appliance: Virtual Appliances.
• Software vs. Appliance: Data Collection.

Favorite Outside Posts
• Adrian Lane: VMWare Building Clouds? An interesting look at virtual platform use by cloud providers.
• David Mortman: The Rise of Data-Driven Security. I love it when we get validated by a heavy hitter like Scott.
• Mike Rothman: Summary of the Amazon EC2 and Amazon RDS Service Disruption in the US East Region. Great explanation from Amazon about their EC2 FAIL a few weeks back. You can learn a lot about cloud architecture, as well as get a feel for how complicated it is to really scale. It's like a tightrope walk every time they have to scale (which is probably constantly). This time they fell off and went splat. Let's hope the net is positioned a bit more effectively next time.

Project Quant Posts
• DB Quant: Index.
• NSO Quant: Index of Posts.
• NSO Quant: Health Metrics – Device Health.
• NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
• NSO Quant: Manage Metrics – Deploy and Audit/Validate.
• NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations
• React Faster and Better: New Approaches for Advanced Incident Response.
• Measuring and Optimizing Database Security Operations (DBQuant).
• Network Security in the Age of Any Computing.
• The Securosis 2010 Data Security Survey.
• Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts
• Fake Mac Security Software. It's drive-by malware… if you actually click all the buttons and install it.
• Anonymous claims no involvement in Sony hacks.
• How to disappear completely. Yeah, more Sony mayhem.
• Barracuda Breach Post Mortem Analysis.
• Test-Driving IBM's SmartCloud. Interesting analysis of IBM's 'SmartCloud' trial product. In fairness, it's very early in the development process.
• Zero-Day Attack trends via Krebs. Second installment. Makes you think security companies are not eating their own dog food.
• LastPass Forces Users to Pick Another Password. It's bad when the salt is stolen with the hashed passwords… now it becomes a dictionary attack. If it was a foreign government (wink-wink), they have the resources to crack all the passwords.
• Nikon Image Authentication System Compromised. Interesting read.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to ds, in response to Earth to Symantec: AV doesn't stop the APT.

The reality here is that SYMC is a very successful security vendor with a lot of customers and many solutions. They aren't stupid and press announcements like this aren't driven by ignorance. Sadly, they will sell product to customers based on this. It speaks volumes about the buyer and their ability to understand complex security issues and appropriate remedies. In short, most "security" professionals can't, and many companies don't even have "security" professionals on
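One footnote on the LastPass item, since the salt comment deserves unpacking: a salt defeats precomputed rainbow tables, but once salts are stolen alongside the hashes, each candidate password is only one hash computation away, per user. Here is a minimal sketch of that attack economics – the hash construction, password, and word list are all invented for illustration, and any real system should use a slow, iterated KDF (PBKDF2, bcrypt, scrypt) rather than a single hash round:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Single SHA-256 round for illustration only -- production systems
    # should use an iterated KDF to make each guess expensive.
    return hashlib.sha256(salt + password.encode("utf-8")).digest()

# What ends up in a stolen database: the salt is stored beside the hash.
salt = os.urandom(16)
stored_hash = hash_password("letmein99", salt)

# Attacker side: with the salt in hand, it's a straight dictionary attack --
# one hash per candidate word, no precomputation needed.
for candidate in ["password", "123456", "qwerty", "letmein99"]:
    if hash_password(candidate, salt) == stored_hash:
        print(f"cracked: {candidate}")
        break
```

Which is why the "resources to crack all the passwords" quip holds: the cost scales linearly with dictionary size and user count, and weak master passwords fall quickly.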


Sophos Wishes upon A-star-o

In the security industry, successful companies need both breadth and scale. Security is and will remain an overhead function, so end users must strive to balance broad coverage against efficiency to control, and hopefully reduce, that overhead. Scoff as you may, but integration at all levels of the stack does happen, and that favors bigger companies with broader product portfolios. That trend drove Sophos's rather aggressive move this morning to acquire Astaro, a UTM vendor. I won't speculate on deal size, but Astaro did about $60MM on the top line last year and was profitable. They were also owned by the management team (after a recent buy-out of the investors), so there was no economic driver forcing the deal. So you have to figure Sophos made a generous offer to get it done. And congrats to Sophos for not mentioning APT in the deal announcement – not even once. At least the Europeans can show some restraint.

Deal Rationale

Get big or get out. It's pretty simple, and given the deep private equity pockets (Apax Partners) that acquired Sophos last year, it's not surprising to see them start making aggressive moves to broaden the portfolio. We believe Astaro is a good partner, given the lack of overlap in product lines, general synergies in the target market, and the ability to leverage each other's strengths. Let's hit each of these topics.

First of all, Sophos has no network security products. There are only two must-have mass market security technologies: AV and firewalls. If Sophos is going to be a long-term player in the space they need both. The only overlap is in the content security space, where Sophos has email and web security gateways. But Sophos' products are hardly competitive in that market, so moving customers to Astaro's integrated platform makes sense. We also like the value Sophos' research team can bring to Astaro. Clearly reputation and malware analysis are valuable at all levels of the security stack, and Astaro can make their network security products better immediately by integrating that content into the gateway. Astaro brings a lot of customer intelligence to the table. By that I mean Astaro's real-time link to each gateway in the field, and granular knowledge of what each box is doing, where it's deployed, and what it's running. That kind of intelligence can add value to endpoints as well.

Both companies have also largely targeted the mid-market – although they each point to some enterprise accounts, the reality is that they excel with smaller companies. They'll be strong in EMEA and Asia, but have their work cut out for them in the US. The ability to field a broad product line should help bring additional channel partners onboard, perhaps at the expense of less nimble AV incumbents. There are also some good cultural synergies between the companies. Both European. Both known for strong technology, and not such strong marketing. Given that both endpoint and network security are replacement markets, it's usually about sucking less than the incumbent, and we think the bigger Sophos should be able to grow share on that basis.

Achilles Heel

Keep in mind that Sophos did one other deal of this magnitude, Utimaco, a couple years back, and it turned into a train wreck. The real issue in the success of this deal isn't markets or synergies – it's integration. If they didn't learn anything from the Utimaco situation this won't end well. But current indications are that they will leave Astaro as a stand-alone entity for the time being, while looking for good opportunities for integration – a logical plan. The key will be to make both product lines stronger quickly, with limited integration. Check Point never did much with their endpoint offering because it didn't leverage the capabilities of the perimeter platform, and vice-versa. Sophos can't afford to make that same mistake. We also hope Sophos locked in Astaro's management for a couple years, and will look to leverage some of that talent in bigger roles within Sophos.

Competitive Impact

Having offerings on both the endpoint and the network gives Sophos a differentiated position – among the big players only McAfee has products in both spaces. Given the need for mid-market companies to alleviate the complexity of securing their stuff, having everything under one roof is key. Will Symantec or Trend now go and buy a network security thingy? Probably not in the short term (especially given the lack of compelling choices to buy), but in the long run big security companies need products in both categories. Overall, we like this deal. The devil is in the integration details, but this is the kind of decisive move that can make Sophos one of the long-term survivors in the security space.


Software vs. Appliance: Data Collection

Wrapping up our Software vs. Appliance series, I want to remind the audience that this series was prompted by my desire to spotlight the FUD in Database Activity Monitoring sales processes, and data collection is one of the topics where it shows up. Data collection matters. As much as we would like to say the deployment architecture is paramount for performance and effectiveness, data collection is crucial too, and we need to cover a couple of the competitive topics that get lumped into bake-offs. One of the most common marketing statements for DAM is, “We do not require agents.” This statement is technically correct, but it's (deliberately) completely misleading. Let's delve into the data collection issues that impact the Appliance vs. Software debate:

• Yes, We Have No Agents: No Database Activity Monitoring solution requires an agent. You'll hear this from all of the vendors because they have to say it to address the competitive 'poison pill' left by the previous vendor. All but one DAM product can collect SQL and events without an agent. But the statement “We don't require an agent” is just marketing. In practice all DAM products – software, hardware, and virtual – use agents. It's just a fact. They do this because agents, of one form or another, are the only reliable way to make sure you get all important events. It's how you get the whole picture and capture the activity you need for security and compliance. Nobody serious about compliance and/or security skips installing an agent on the target database.

• No Database Impact: So every DAM vendor has an agent, and you will use yours. It may collect SQL from the network stack by embedding into the OS; or by scanning memory; or by collecting trace, audit, or transaction logs. No vendor can credibly claim they have no impact on the target database. If they say this, they're referring to the inadequate agent-less data collection option you don't use. Sure, the vendor can provide a pure network traffic collection option to monitor for most external threats, but that model fails to collect critical events on the database platform. Don't get me wrong – network capture is great for detecting a subset of security-specific events, and it's even preferable for your less-critical databases, but network scanning fails to satisfy compliance requirements. Agent-less deployments are common, but for cases where the database is a lower priority – those times you want some security controls, but it's not worth the effort to enforce every policy all the time.

• Complete SQL Activity: DAM is focused on collection of database events. Agents that collect from the network protocol stack outside the database, or directly from the network, capture raw unprocessed SQL statements in transit, before they get to the database. For many customers just getting the SQL statement is enough, but for most the result of the SQL statement is just as important. The number of rows returned, or whether the query failed, is essential information. Many network collectors do a good job of query collection, but poor result collection. In some cases they capture only the result code, unreliably – I have seen capture rates as low as 30% in live customer environments. For operations management and forensic security audits this is unacceptable, so you'll need to verify capture rates during vendor review (see the sketch below).

• Database Audit vs. Activity Audit: This is a personal pet peeve, and something that bothers most DAM customers once they are aware of it. If your agent collects data from outside the database, you are auditing activity. If you collect data from inside the database, you are auditing the database. It's that simple. And this is a very important distinction for compliance, where you may need to know database state. It is considerably more difficult to collect from database memory, traces, transaction logs, and audit logs. Using these data sources has more performance impact – anywhere from a bit to much more than activity auditing, depending upon the database and the agent configuration. Worse, database auditing doesn't always pick up the raw SQL statements. But these data sources are used because they provide insight into the state of the database and transactions – multiple statements logically grouped together – which activity monitoring handles less well.

Every DAM platform must address the same fundamental data collection issues, and no one is immune. There is no single 'best' method – each option imposes its own tradeoffs. In the best case, your vendor provides multiple data collection options for you to choose from, and you can select the best fit for each deployment.
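Since the result-collection gap is the thing to test for, here is a minimal sketch of the measurement. The DamEvent record and its field names are hypothetical – invented for illustration, not any vendor's schema – but they show why a captured statement without result metadata is worth much less to an auditor:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DamEvent:
    # Hypothetical event record -- illustrative field names only.
    timestamp: float            # when the statement was observed
    db_user: str                # database account issuing the statement
    source_ip: str              # where the connection came from
    statement: str              # raw SQL as captured
    status_code: Optional[int]  # None when the collector missed the result
    rows_returned: Optional[int]

def result_gap(events: List[DamEvent]) -> float:
    """Fraction of captured statements with no result data -- the gap that
    makes query-only collection a poor fit for forensic audits."""
    if not events:
        return 0.0
    missing = sum(1 for e in events if e.status_code is None)
    return missing / len(events)

events = [
    DamEvent(1304672400.0, "app_svc", "10.1.1.5", "SELECT * FROM customers", 0, 125),
    DamEvent(1304672405.0, "jdoe", "10.1.1.9", "SELECT card_no FROM payments", None, None),
]
print(f"{result_gap(events):.0%} of statements missing results")  # prints 50%
```

A collector that scores badly on exactly this kind of measure is the one that will fail your compliance review, regardless of how well it captures the queries themselves.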


Earth to Symantec: AV doesn’t stop the APT

If you saw a press release titled Symantec Introduces New Security Solutions to Counter Advanced Persistent Threats, what would you expect? Perhaps a detailed security monitoring solution, or maybe they bought a full packet capture solution, or perhaps they really innovated with something interesting? Now what if I told you that it's actually about the latest version of Symantec's endpoint protection product, with a management console for AV and DLP? You'd probably crap your pants from laughing so hard. I know that's what I did, and my laundromat is not going to be happy. It seems someone within Symantec believes that you can stop an APT attack with a little dose of centrally managed AV and threat intelligence. If the NFL was in season right now, Symantec would get a personal foul for ridiculous use of APT. And then maybe another 15 yards for misdirection and hyperbole. To continue my horrible NFL metaphor, Symantec's owners (shareholders) should lock the folks responsible for this crap announcement out of any marketing meetings, pending appeals that should take at least 4-5 years.

From a disclosure standpoint, we got a briefing last week on Big Yellow's Symantec Protection Center, its answer to McAfee's Enterprise Policy Orchestrator (ePO). Basically the product is where ePO was about 5 years ago. It doesn't even gather information from all of Symantec's products. But why would that stop them from making outlandish claims about countering the APT? Rich tore them into little pieces, politely rubbishing, in a variety of ways, their absurd claims that endpoint protection is an answer to stopping persistent attackers. He did it nicely. He told them they would lose all credibility with anyone who actually understands what an APT really is. The folks from Symantec thanked us for the candid feedback. Then they promptly ignored it. Ultimately their need to jump on a bandwagon outweighed their desire to have a shred of truth or credibility in the announcement. Sigh.

Symantec contends that its “community and cloud-based reputation technology” blocks new and unknown threats missed by other security solutions. You know, like the Excel file that pwned RSA/EMC. AV definitely would have caught that, because another company would have been infected using the exact same malware, so the reputation system would kick into gear. Oh! Uh-oh… It seems Symantec cannot tell mass attacks from targeted 0-day attacks. So let me be crystal clear: you cannot stop a persistent attacker with AV. Not gonna happen. I wonder if anyone who actually does security for a living looked at these claims. As my boys on ESPN Sunday Countdown say, “Come on, man!”

I'm sure this won't make me many friends within Big Yellow. But I'm not too worried about that. If I were looking for friends I'd get a dog. I can only hope some astute security marketing person will learn that using APT in this context doesn't help you sell products – it makes you look like an ass. And that's all I have to say about that.


Incite 5/4/2011: Free Agent Status Enabled

Last weekend was a little oasis in the NFL desert that has been this offseason. It looked like there would be court-ordered peace; now maybe not so much. The draft reminded me of the possibilities of the new season, at least for a little while. One of the casualties of this non-offseason has been free agency. You know, where guys who have put in their time shop their services to the highest bidder. It's not a lot different in the workforce. What most folks don't realize is that everyone is a free agent. At all times. My buddy Amrit has evidently been liberated from his Big Blue shackles. Our contributor Dave Lewis also made the break. Both announced “Free Agent Status Engaged.” But to be clear, no one forced either guy to go to work at their current employer each day. They were not restricted (unless a heavy non-compete was in play) from taking a call from a recruiter and working for someone else. That would be my definition of free agency, anyway. But that mentality doesn't appear to be common.

When I first met Dave Shackleford, he was working for a reseller here in ATL. Then he moved over to the Center for Internet Security, and we worked together on a project for them. I was a consultant, but he made it clear that he viewed himself as a consultant as well. In fact, regardless of whether he's working on a contract or as a full-time employee, Dave always thinks of himself as a consultant. Which is frickin' brilliant. Why? Because viewing yourself as a consultant removes any sense of entitlement. Period. Consultants always have to prove their value. Every project, every deliverable, every day. When things get tight, the consultants are the first to go. Fail to execute flawlessly and add sufficient value, and you won't be asked back. That kind of mindset seems useful regardless of job classification, right? Consultants also tend to be good at building relationships and finding champions. They get face time and are always looking for the next project to sink their teeth into. They actively manage their careers because no one else is going to do that for them. Again, that seems like a pretty good approach even inside an organization. Either you are managing your career or it is managing you. Which do you prefer? As happy as I am for Amrit and Dave as they embark on the next steps of their journeys, I wish more folks would consider themselves perpetual free agents and start acting that way. And it's not necessarily about always looking for a bigger and better deal. It's about being in a position to choose your path, not have it chosen for you. -Mike

Incite 4 U

• This is effective? I saw a piece on being an “effective security buyer” by Andreas Antonopoulos and I figured it was about managing the buying process. Like my eBook (PDF) on the topic. But no, it's basically about what to buy, and I have some issues with his guidance. Starting from the first: “never buy a single-purpose tool.” Huh? Never? I say you get leverage where you can, but there are some situations where you have to solve a single problem, with a single control. To say otherwise is naive. Andreas also talks about standards, which may or may not be useful depending on the maturity of what you are buying. Early products, which solve emerging problems, don't know dick about standards. There are no standards at that point. And even if there are, I'd rather get stuff that works than something that plays with some arbitrary standard. But that's just me. To be fair, there is some decent stuff in here, but as always: don't believe everything you read. – MR

• Game over, man! Sony is on track to win the award for most fscked-up breach response of 2011. Any time you have to take your entire customer network down for two weeks, it's bad. Telling 77 million customers their data might be compromised? Even worse. And 10 million of them might have had their credit cards compromised? Oh, joy. But barely revealing any information, and saying things like “back soon”? Heh. Apparently it's all due to SQL injection? Well, I sure hope for their sake it was more complex than xp_cmdshell. But let's be honest: there are some cultural issues at play here, and a breach of this magnitude is no fun for anyone. – RM

• ePurse chaser: eWallets are the easy part of mobile payment security. The wallet is the encrypted container where we store credit cards, coupons, signatures, and other means of identification. The trouble is in authenticating who is accessing the wallet. Every wallet has some form of API to authenticate requests and return requested wallet contents to requesting applications. What worries me about the coming 'eWallet revolution' (which, for the record, started in 1996) is not the wallets themselves, but how financial institutions want to use them: direct access to point-of-sale devices through WiFi, Bluetooth, proximity cards, and other near-field technologies. Effectively, your phone becomes your ATM card. But rather than you putting your card into an ATM, near-field terminals communicate with your phone whenever you are 'near'. See any problems with that? Ever had to replace your credit card because the number was 'hacked'? Ever have to change your password because it was 'snooped' at Starbucks? Every near-field communication medium becomes a new attack vector. Every device you come into contact with has the ability to probe for weakness. The scope of possible damage escalates when you load arbitrary billing and payment onto the phone. And what happens when the cell is cloned and your passwords are discovered through a – possibly unrelated – breach? It's not that we don't want financial capabilities on the phone – it's that users need a one-to-one relationship with the bank to reduce exposure. – AL

• Mac users: BOO! A new version of scareware


Software vs. Appliance: Virtual Appliances

For Database Activity Monitoring, virtual appliances are the result of hardware appliances not fitting into virtualization models – they miss out on the management, hardware consolidation, resource and network abstraction, and even power savings that virtualization provides. Infrastructure as a Service (IaaS) disrupts the hardware model. So DAM vendors pack their application stacks into virtual machine images and sell those. It's a quick win for them, as very few changes are needed, and they escape the limitations of hardware. A virtual appliance is 'built' and configured like a hardware appliance, but delivered without the hardware. That means all the software – both third party and vendor created – contained within the hardware appliance is now wrapped in a virtual machine image. This image is run and managed by a Virtual Machine Manager (VMware, Xen, Hyper-V, etc.), but otherwise functions the same as a physical appliance. In terms of benefits, virtual appliances are basically the opposite of hardware appliances. Like the inhabitants of mirror universes in Star Trek, the participants look alike but act very differently. Sure, they share some similarities – such as ease of deployment and lack of hardware dependencies – but many aspects are quite different from software- or hardware-based DAM.

Advantages over physical hardware include:

• Scale: Taking advantage of the virtual architecture, it's trivial to spin up new appliances to meet demand. Adding new instances is a simple VMM operation. Multiple instances still collect and process events, and send alerts and event data to a central appliance for processing. You still have to deploy software agents, and manage connections and credentials, of course.

• Cloud & Virtual Compatibility: A major issue with hardware appliances is their poor fit in cloud and virtual environments. Virtual instances, on the other hand, can be configured and deployed in virtual networks to both monitor and block suspicious activity.

• Management: Virtual DAM can be managed just like any other virtual machine, within the same operational management framework and tools. Adding resources to the virtual instance is much easier than upgrading hardware. Patching DAM images is easier, quicker, and less disruptive. And it's easy to move virtual appliances to account for changes in the virtual network topology.

Disadvantages include:

• Performance: This is in stark contrast to hardware appliance performance. Latency and performance are both cited by customers as issues. Not running on dedicated hardware has a cost – resources are neither dedicated nor tuned for DAM workloads. Event processing performance is in line with software, which is not a concern. The more serious issue is disk latency and event transfer speeds, both of which are common complaints. Deployment of virtual DAM is no different than most virtual machines – as always, you must consider storage connection latency and throughput. DAM is particularly susceptible to latency – it is designed for real-time monitoring – so it's important to monitor I/O performance and virtual bottlenecks, and adjust accordingly (a simple way to sample this is sketched below).

• Elasticity: In practice the VMM is far more elastic than the application – virtual DAM appliances are very easy to replicate, but don't take full advantage of added resources without reconfiguration. In practice added memory & processing power help, but as with software, virtual appliances require configuration to match customer environments.

• Cost: Cost is not necessarily either an advantage or a problem, but it is a serious consideration when moving from hardware to a virtual model. Surprisingly, I find that customers using virtual environments have more – albeit smaller – databases, and thus more virtual appliances backing those databases. Ultimately, cost depends entirely on the vendor's licensing model. If you're paying on a per-appliance or per-database model, costs go up. To reduce costs, either consolidate database environments or renegotiate pricing.

I did not expect to hear about deconsolidation of database images when speaking with customers. But customer references demonstrate that virtual appliances are added to supplement existing hardware deployments – either to fill in capacity or to address virtual networking issues for enterprise customers. Interestingly, there is no trend of phasing either out in favor of the other – customers stick with the hybrid approach. If you have user or vendor feedback, please comment. Next I will discuss data collection techniques. These are important for a few reasons – most importantly because every DAM deployment relies on a software agent somewhere to collect events. It's the principal data collection option – so the agent affects performance, management, and separation of duties.
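Since disk latency is the most common complaint, it helps to measure it from inside the guest rather than argue about it. Below is a minimal sketch, assuming a Linux-based virtual appliance with shell access and a block device named sda (both assumptions – substitute your environment's device). It samples the kernel's cumulative I/O counters from /proc/diskstats twice and derives average I/O wait and utilization, roughly what iostat reports:

```python
import time

def disk_snapshot(device: str) -> dict:
    """Read cumulative I/O counters for one block device from /proc/diskstats."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return {
                    "ios": int(fields[3]) + int(fields[7]),       # reads + writes completed
                    "wait_ms": int(fields[6]) + int(fields[10]),  # ms spent reading + writing
                    "busy_ms": int(fields[12]),                   # ms with I/O in flight
                }
    raise ValueError(f"device {device!r} not found")

def sample_latency(device: str = "sda", interval: float = 5.0) -> None:
    before = disk_snapshot(device)
    time.sleep(interval)
    after = disk_snapshot(device)
    ios = after["ios"] - before["ios"]
    wait = after["wait_ms"] - before["wait_ms"]
    busy = after["busy_ms"] - before["busy_ms"]
    avg_wait = wait / ios if ios else 0.0
    utilization = 100.0 * busy / (interval * 1000)
    print(f"{device}: {ios} I/Os, {avg_wait:.1f} ms avg wait, {utilization:.0f}% busy")

if __name__ == "__main__":
    sample_latency()
```

If average wait climbs while the event queue backs up, the fix is usually storage placement or a dedicated I/O path for the appliance, not more vCPUs.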


Standards: Should You Care? (Probably Not)

I just wrote up my portions of tomorrow's Incite, and talked a bit about the importance of standards in product selection. But it's hard to treat the subject cogently in 30 words, so let me dig into it a bit more here – mostly because of the prevailing opinion on the importance of standards, and the question of how heavily standards support should figure as a selection criterion. From the news angle, our pals at the Cloud Security Alliance are driving down the standards path, recently partnering with the ISO to get some standards halo on the CSA Guidance. Selfishly, I'm all for it, mostly because wide acceptance of the CSA Guidance means more demand for the CCSK certification. That means more demand for CCSK training, which Securosis is building. So from that perspective it's all good. (Note: Our next CCSK training class will be June 8-9 in San Jose, taught by Rich and Adrian.) But if I can see through my own selfish, economically driven haze, let's take a step back to understand where standards matter and where they don't. Just thinking out loud, here goes:

• Mature markets: Standards matter in mature markets with mature products. In these, you will likely need to support a heterogeneous environment, because buying criteria are more about price/TCO than functionality. So being able to deal with standard interfaces and protocols to facilitate interoperability is a good thing.

• Risk-averse cultures: Yes, this goes hand in hand with mature markets. Most risk-averse organizations aren't buying early market products (before standards have gelled), but when they do, a product that supports a “standard” reduces their perceived risk. This is what the CSA initiative is about. Folks want legitimacy, and for many people legitimacy = standards.

I'm hard pressed to find other situations where standards matter. Did I miss one (or many)? Let me know in the comments. As I tried to describe, standards don't matter when dealing with emerging threats, where people are still figuring out the best way to solve the problem. Standards also don't matter if a company tends to buy everything from a single vendor – assuming the vendor actually integrates their stuff, which isn't a safe assumption (ahem, Big Yellow, ahem. Cough. Barf.) And vendors tend to push their proprietary technology through a standards process for legitimacy. Obviously if a vendor can say their technology is in the process of being standardized, it reduces perceived risk. But the unfortunate truth is that by the time any technology works its way through the standards process, the game has already changed. Twice. So keep that in mind when you are preparing those fancy RFPs asking for all kinds of standards support. Are you asking because you need it, or to reduce your risk? Or maybe just to give the vendor a hard time, which I'm cool with.


SDLC and Entropy

I really enjoy having Gunnar Peterson on the team. Seems like every time we talk in our staff meeting I laugh and learn something – two rare outcomes in this profession. We were having a laugh Friday morning about the tendencies of software development organizations to trip over themselves in their attempts to improve. Several different clients were having the same problem understanding how to apply security to code development. Part of our discussion:

Gunnar: There are no marketing requirements, so no code, right?
Adrian: I'll bet the developers are furiously coding as we speak. No MRD, no problem.
Gunnar: The Product Manager said “You start coding, I'll go find out what the customer wants.”
Adrian: Ironic that what they're doing is technically Agile. Maybe if it's a Rapid Prototyping team I'd have some sympathy, but someone's expecting production code.
Gunnar: I wonder what they think they are building?

Don't talk to me about improving Waterfall or Agile when you can't get your organizational $&!% together. What do I mean by that? Here is an example of something I witnessed:

Phase 1: The Development VP, during an employee review, says, “What the heck have you been doing the last six months?” In a panic, the developer mentions a half-baked idea he had, and a prototype widget he's been working on. An informal demo is scheduled.

Phase 2: VP says, “I love that! That is the coolest thing I have seen in a long time.” The developer's chest swells with pride.

Phase 3: VP says, “Let's put that in the next release.” The developer's brain freezes as he contemplates the engineering challenges of turning a half-baked widget into production code, suddenly realizing there is no time to do any of his other sprint tasks. The VP takes the developer's stunned silence as a commitment and walks away.

Phase 4: Developer says to the Product Manager, “Yeah, we're including the XYZ widget. The VP asked for it so I checked it into the code base.” Product Manager says, “Are you effing crazy? We don't even have tests for it.” And they make it happen because, after all, it's employee review time.

It's not news to many of you, but that's how features get put in; then you 'fix' the feature. Security plays catch-up somewhere down the road, because the feature is too awesome not to put in, or to wait until it's fully sussed out. I used to think this was a process issue, but now I believe it's a byproduct of human nature. Managers don't realize the subtle ways they change others' behavior, and their own excitement over new technology pushes rules right out the window. It's less about changing the process than not blowing up the one you have. Gunnar's take is a little different:

If you're in security, don't assume that you can change process, and don't assume your job is to make process more formal. Instead look at concrete ways to reduce vulnerabilities in the context of the existing process. As any teenage girl knows, don't listen to a word the boy says – watch what he actually does. Likewise, security people working on the SDLC: don't believe the process documents! Instead observe developers in the wild – sit in their cubes and watch what they actually do. If you strip away the PowerPoints, process documents, and grand unified dreams of software development (be they Agile, Scrum, or Rational) this is how real world software development occurs. It's a chaotic and messy process. This assumption leads you in a different direction – not formalism, but winning the hearts and minds of the developers who will deliver on building the security mechanisms, and finding quick and dirty ways to improve security.


What’s Old Is New again

“The entire credit card table was encrypted and we have no evidence that credit card data was taken. The personal data table, which is a separate data set, was not encrypted, but was, of course, behind a very sophisticated security system that was breached in a malicious attack.”

That's from the news, analyst, and Sony PR reports coming out about the PlayStation Network/Qriocity breach. Does anyone trust Sony's statement that the credit card data was not 'taken'? If attackers got the entire customer database, wouldn't you think they grabbed the encrypted card numbers and will attempt to crack them later? Is the comment about “a very sophisticated security system” supposed to make customers feel better, or to generate sympathy for Sony? Does labeling their breached security system “very sophisticated” reduce your faith in the likelihood their crypto and key management systems will withstand scrutiny? How many of you thought the name “Qriocity” was a defacement the first time you read the story?

My general rule over the last three years has been not to write about breaches unless there is something unusual about them. There are just too many, and the questions I asked above could apply to any of the lame-assed breach responses we have been hearing for the last decade. But this one has plenty of angles that make it good spectator sport:

• It's new: It's the first time I have seen someone's network hacked through a piece of dedicated hardware – of their own design.

• It's old: It's the classic (developer) test environment that was the initial point of entry, and, just like so many breaches before it, for some mysterious reason the test environment could access the entire freakin' customer database.

• It's new: I can't think of another major data breach that will have this degree of international impact. I'm not talking about the fraud angle, but rather how governments and customers are reacting.

• It's old: Very little information dribbling out, with half-baked PR “trust us” catchphrases like “All of the data was protected …”

• It's new: Japanese culture values privacy more than any other I am familiar with. Does that mean they'll develop the same dedication to security as they have to quality and attention to detail?

• It's old: It's interesting to me that a culture intensely driven to continuous improvement has an oh-so-common allergic reaction to admitting fault. Sure, I get the 'blameless' angle written about in management texts throughout the 80s, but the lack of ownership here has a familiar ring. Obviously I was not the only one thinking this way.

• It's new: We don't, as a rule, see companies basically shut down divisions in response to breaches, and the rumored rebuild of every compromised system is refreshing.

• It's old: Their consumer advice is to change your password and watch your credit card statements.

Ultimately I am fascinated to see how this plays internationally, and whether this breach has meaningful long-term impact on IT security processes. Yeah, I'm not holding my breath either.


Software vs. Appliance: Software

“It's anything you want it to be – it's software!” – Adrian

Database Activity Monitoring software is deployed differently than DAM appliances. Whereas appliances are usually two-tier event collector / manager combinations which divide responsibilities, software deployments are as diverse as customer environments: stand-alone servers installed in multiple geographic locations, loosely coupled confederations each performing different types of monitoring, hub & spoke systems, everything on a single database server, all the way up to N-tier enterprise deployments. It's more about how the software is configured, and how resources are allocated by the customer to address their specific requirements. Most customers use a central management server communicating directly with software agents which collect events. That said, the management server configuration varies from customer to customer, and evolves over time. Most customers divide the management server functions across multiple machines when they need to increase capacity, as requirements grow. Distributing event analysis, storage, management, and reporting across multiple machines enables tuning each machine to its particular task, and provides additional failover capabilities. Large enterprise environments dedicate several servers to analyzing events, linking those with other servers dedicated to relational database storage. This latter point – the use of relational database storage – is one of the few major differences between the software and hardware (appliance) embodiments, and the focus of the most marketing FUD (Fear, Uncertainty, and Doubt) in this category. Some IT folks consider relational storage a benefit, others a detriment, and some a bit of both; so it's important to understand the tradeoffs. In a nutshell, relational storage requires more resources to house and manage data, but in exchange provides much better analysis, integration, deployment, and management capabilities. Understanding the differences in deployment architecture, and the use of relational storage, is key to appreciating software's advantages.

Advantages of software over appliances include:

• Flexible Deployment: Add resources and tune your platforms specifically to your database environment, taking into account the geographic and logical layout of your network. Whether it's thousands of small databases or one very large database – one location or thousands – it's simply a matter of configuration. Software-based DAM offers a half-dozen different deployment architectures, with variations on each to support different environments. If you choose wrong, simply reconfigure or add resources, rather than buying new appliances.

• Scalability & Modular Architecture: Software DAM scales in two ways: additional hardware resources, and “divide & conquer”. DAM installations scale with processor and memory upgrades, or you can move the installation to a larger new machine to support processing more events. But customers more often choose to scale by partitioning the DAM software deployment across multiple servers – generally placing the DAM engine on one machine, and the relational database on another. This effectively doubles capacity, and each platform can be tuned for its function. This model scales further with multiple event processing engines on the front end, letting the database handle concurrent insertions, or by linking multiple DAM installations via the back end database. Each software vendor offers a modular architecture, enabling you to address resource constraints with very good granularity.

• Relational Storage: Most appliances use flat files to store event data, while software DAM uses relational storage. Flat files are extraordinarily fast at writing new events to disk, supporting higher data capture rates than equivalent software installations. But the additional overhead of the relational platform is not wasted – it provides concurrency, normalization, indexing, backup, partitioning, data encryption, and other services. Insertion rates are lower, while complex reports and forensic analyses are faster (a sketch of this tradeoff appears below). In practice, software installations can directly handle more data than DAM appliances without resorting to third-party tools.

• Operations: As Securosis just went through a deployment analysis exercise, we found that operations played a surprisingly large part in our decision-making process. Software-based DAM looks and behaves like the applications your operations staff already manages. It also enables you to choose which relational platform to store events on – IBM, Oracle, MS SQL Server, MySQL, Derby, or whatever you already have. You can deploy on the OS (Linux, HP/UX, Solaris, Windows) and hardware (HP, IBM, Oracle, Dell, etc.) you prefer and already own. There is no need to re-train IT operations staff, because management fits within existing processes and systems. You can deploy, tune, and refine the DAM installation as needed, with much greater flexibility to fit your model. Obviously customers who don't want to manage extra software prefer appliances, but they are then dependent on vendors or third party providers for support and tuning, and need to provide VPN access to production networks to enable regular maintenance.

• Cost: In practice, enterprise customers realize lower costs with software. Companies that have the leverage to buy hardware at discounts and/or own software site licenses can scale DAM across the organization at much lower total cost. Software vendors offer tiered pricing and site licenses once customers reach a certain database threshold. Cost per DAM installation goes down, unlike appliance pricing, which is basically linear. And the flexibility of software allows more efficient deployment of resources. Site licenses provide cost containment for large enterprises that roll out DAM across the entire organization. Midmarket customers typically don't realize this advantage – at least not to the same extent – but ultimately software costs less than appliances for enterprises.

• Integration: Theoretically, appliance and software vendors all offer integration with third party services and tools. All the Database Activity Monitoring deployment choices – software, hardware, and virtual appliances – offer integration with workflow, trouble-ticket, log management, and access control systems. Some also provide integration with third-party policy management and reporting services. In practice the software model offers additional integration points that provide more customer options. Most of these additional capabilities are thanks to the underlying relational databases – leveraging additional tools and procedural interfaces. As a result, software DAM deployments provide more options for supporting business analytics, SIEM, storage, load balancing, and redundancy.

As I mentioned in the previous post, most of these advantages are not visible during the initial deployment phases.
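To illustrate what relational storage buys you, here is a minimal sketch using Python's built-in sqlite3 module. Everything in it – the table name, columns, and sample events – is invented for illustration; a real deployment would use the site's existing relational platform (Oracle, MS SQL Server, MySQL, etc.) and a vendor-defined schema. The point is the pattern: inserts pay for an index, and forensic queries get that cost back.

```python
import sqlite3

# In-memory database for illustration; hypothetical schema, not any vendor's.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dam_events (
        event_time    TEXT,
        db_user       TEXT,
        statement     TEXT,
        rows_returned INTEGER
    )
""")
# The index is the tradeoff: each insert now does extra work, but forensic
# queries over millions of events avoid full scans.
conn.execute("CREATE INDEX idx_user_time ON dam_events (db_user, event_time)")

conn.executemany(
    "INSERT INTO dam_events VALUES (?, ?, ?, ?)",
    [
        ("2011-05-06T09:00:00", "app_svc", "SELECT * FROM customers", 7700000),
        ("2011-05-06T09:00:05", "jdoe",    "SELECT card_no FROM payments", 42),
        ("2011-05-06T09:00:09", "jdoe",    "SELECT name FROM employees", 12),
    ],
)

# A typical forensic question: who pulled unusually large result sets, and when?
for row in conn.execute(
    "SELECT db_user, event_time, rows_returned FROM dam_events "
    "WHERE rows_returned > 1000 ORDER BY db_user, event_time"
):
    print(row)
```

A flat-file store can append events faster than this, but answering the same question means scanning every record – which is exactly the reporting and forensics gap the relational model closes.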
