Securosis

Research

Friday Summary: May 20, 2011

I stumbled on my last employer’s shutdown plans while rummaging around my old email archives. Those messages were from today’s date 3 years ago – not coincidentally the day Rich and I began to discuss me joining Securosis. At milestones like this I tend to get all philosophical and look back at the change, and what I like and dislike about the move. How do I feel about this change in my career? Where are we as a company, and is it anywhere near what we planned? I had no idea what an analyst really did – I just wanted to help people understand security technologies and be involved much more broadly than just database security. I kinda thought I was getting out of the startup game, but Securosis has the feel of a startup – the freedom to follow our vision, the pressure to focus on what’s most important, the agility in decision making, and the long hours. But it also feels like people appreciate our take on what analysts can be, which makes me think we have a shot at making this little shop a success. Personally, leaving 20+ years of pure technology roles was a big leap. Actually I had no single role – any day might include architecture, product strategy, development, design, evangelism, and team management. But being able to cover a dozen areas in security – and the independence to say whatever I think – gives me a lot of satisfaction. And I love doing research. Unfortunately the single biggest detriment of the job – and it’s a big one – is writing. It’s what I spend the majority of my day doing, and it’s quite possibly my worst skill. I find writing to be a slow and painful process. It’s common to go two days without writing anything substantive, followed by a single day where I crank out 15 pages. That’s nerve-racking when you have deadlines – pretty much every day. I never had this problem coding – why the English language causes me problems that neither C nor Java ever did remains a mystery.
Learning how to write better is one of the more painful processes I have been through. And for those of you who sent me hate mail early on – one of you called me ‘Hitler’ for atrocities against the English language – you are totally correct. A-holes, but still right. From a business standpoint, if there is one singularly important difference you learn when moving from technology to an analyst role, it is perspective. A vendor’s view of what a customer needs is usually off the mark. Vendors do a lot of searching for the ‘secret sauce’, and constructing very logical arguments for why their particular product is needed – even a ‘must have’. But logical vendor arguments are usually wrong and don’t resonate with customers, because they fail to account for the limitations faced by businesses. Customers each work within a set of existing constraints – some mixture of perfectly logical and perfectly absurd – which binds them to a specific perspective and approach to problem solving. I constantly hear vendors say, “Everyone should do X because it makes sense,” to which customers say nothing at all. This is even harder for startups with innovative technologies. How do you know the difference between “Customers just don’t get it yet” and “This innovative product will never be adopted”? The evangelism to educate the market is tough, and it’s easy as a vendor to close your ears to negativity and bad press because they are just part of the evolutionary process. The ability to shine a light on these messaging and strategy issues, and realign companies with customer requirements, is a big part of the value we provide. Vendors are so busy spinning – and get so used to hearing ‘No’ for both good and bad reasons – that they lose perspective. I’ve been there. Several times, in fact. For whatever reason customers tell analysts stuff they would never tell a vendor. I get to see a lot of the inner workings of IT organizations, which has been very educational – and unexpected.
Three years later I find I have two great business partners, I get to interact with extraordinary people, and I work on cool projects. I do really love this job. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Rich quoted on Database Security.
- LonerVamp expands on Adrian’s SIEM: Out with the Old.
- Adrian quoted on BeyondTrust acquisition of Lumigent.
- Adrian’s Dark Reading post on Secure Access to Relational Data.

Favorite Securosis Posts
- Mike Rothman: BeyondTrust Acquires Lumigent Assets. Hindsight is 20/20, and there are lots of lessons in the failure of Lumigent.
- Adrian Lane: VMware Buys Shavlik: One Stop Shop for Virtual Infrastructure? Especially the bit on patch consistency with VMs in storage.
- David Mortman: Defining Failure.
- Rich: SIEM: Out with the Old.

Other Securosis Posts
- Incite 5/18/2011: Trophies.
- Defining Failure.
- Cybersecurity’s RICO Suave: Assessing the Proposed Legislation.

Favorite Outside Posts
- Mike Rothman: What I Wish Someone Had Told Me 4 Years Ago. Great post on being an entrepreneur. Ultimately a large part of success is just doing something. This lesson applies to almost everything.
- Adrian Lane: Marcus Ranum and Gary McGraw talk about software security issues. Gary has deep experience, so his perspective is interesting.
- David Mortman: Attacking webservers via .htaccess. Pure awesomeness.

Project Quant Posts
- DB Quant: Index.
- NSO Quant: Index of Posts.
- NSO Quant: Health Metrics–Device Health.
- NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
- NSO Quant: Manage Metrics–Deploy and Audit/Validate.
- NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations
- React Faster and Better: New Approaches for Advanced Incident Response.
- Measuring and Optimizing Database Security Operations (DB Quant).
- Network Security in the Age of Any Computing.
- The Securosis 2010 Data Security Survey.
- Monitoring up the Stack: Adding Value to SIEM.
Top News and Posts
- More from Krebs on Point of Sale Skimmers.
- Dropbox Fires Back. And I think they are


BeyondTrust Acquires Lumigent Assets

BeyondTrust announced today that it has acquired the assets of Database Activity Monitoring vendor Lumigent. Some of you are saying “Who?” Others, who have been around the DAM space a few years, shake your heads in dismay at what might have been. There was a time – way back in the 2004-2005 timeframe – when Lumigent had a clear leadership position in the Database Activity Monitoring space. They won many head-to-head sales engagements. They had a good sales and marketing team, the best Sarbanes-Oxley reports in the industry, the only viable auditing tool for Sybase, and the only platform that provided “before and after” query values. The latter was the hot feature for forensic audits and regulatory compliance, and every customer wanted it. Greylock, North Bridge, and NetIQ invested. Lumigent was a shining star in the nascent DAM market and they were making a name for themselves. Fast forward 6 years and we have an asset sale. That’s a politically correct term for fire sale. The kind where they’re selling the fixtures off the sinks. So how did it all go so very wrong? There was actually a long series of missteps, so we’ll discuss several major types of FAIL. It’s a classic example of how to plunge into the chasm, land in a fiery mess at the bottom, and get sold for scrap metal: Strike One: Technology. Lumigent never capitalized on their technology lead. Their engineering team must have known that the triggers and stored procedures they used in the early days would not scale, even though early customers preferred them to native audit and tracing – which Lumigent then added to their mix! It seemed like Dumb and Dumber were managing their product roadmap. Sure, they improved data collection over time, but not enough; nor did they ever find a consistent strategy to collect events across all databases. Additionally, they focused on Sybase and MS SQL Server – to the exclusion of Oracle and IBM, who sell a few databases.
Competitors quickly provided more – and better – collection options across all the major platforms. Competitors were easier to deploy and did not kill performance. Don’t get me started on the missed Vulnerability Assessment opportunity. Lest you forget, Lumigent acquired nTier, which was a bad assessment product. Nothing was structurally wrong with it, but it needed a lot of work on policies and reporting to be competitive. During the several years assessment was key to winning deals, Lumigent made no visible investments into the nTier technology. It only covered a couple databases, with only some of the needed policies for security or compliance, when it was acquired. They were not the only vendor stuck in the mud for a while, but the upshot is that they failed to upgrade their product to keep pace. Startups have to innovate, you know? Strike Two: Partnerships. Lumigent heavily courted Microsoft and Sybase. They geared their product strategy to work with these two database vendors to a fault. This helped early on, but both partners wanted far better auditing capabilities – specific to their respective database platforms – before they were willing to really get behind Lumigent. Behind the scenes Lumigent thought acquisition was a sure thing. Not so much – Lumigent neither delivered, nor did they hedge their bets with a heterogeneous solution. When Lumigent failed to provide better auditing, the rumored Microsoft and Sybase acquisitions halted, and both partners had conversations with just about every other DAM vendor. The recent partnership with Deltek was solid, but simply not enough to carry the company. They didn’t just count their chickens before they hatched, they counted them the first time the rooster made eye contact. Strike Three: Misunderstanding the market.
Lumigent’s story shifted from Database Security; to Compliance; to Database Auditing Solutions for Compliance and Security; to Information Centric Security; to Application Governance, Risk & Compliance; and then back to DAM – each step worse than the one before. The App GRC strategy was the most surprising and saddest, as it looked like a desperate attempt to save the firm by re-inventing their market. I appreciated their ingenuity in repackaging DAM into something totally new, and admired the cojones management displayed with their willingness to walk away from their primary market, but I thought they were nuts. And I told them. Rich and I stopped short of begging Lumigent to reconsider their App GRC path, with at least a half dozen reasons it was a bad idea, along with practical experience about how Information Risk Management and GRC messaging missed DAM buying centers. A couple years later that horse died, and Lumigent was back to square one. Very few start-up firms get three strikes. What does this mean for BeyondTrust? The good news is that DAM extends the PowerBroker functionality, providing a means to detect misuse and compromised credentials. The PowerBroker product family is focused on credential and authorization management, but its value is the ability to delegate capabilities without distributing credentials, and fine-grained task-oriented authorization maps. Before the acquisition the PowerBroker platform was geared for preventative security. DAM provides detective capabilities along with a number of compliance reports deeply focused on the database layer. This gives BeyondTrust users some new toys to play with that improve security and broaden the product line. BeyondTrust surely acquired the assets for a song, so they really can’t lose here. And I like the vision. I hope they take a long look at how their customers will use the technology – a few strategic improvements would go a long way to improve customer satisfaction. But there is some bad news. 
First, the Lumigent technology is way behind the curve. For Database Activity Monitoring or Vulnerability Assessment, Lumigent cannot compete head-to-head against other established vendors. The technology lacks consistency and capabilities across the board, including data collection, database platform support, policies, and platform management. For most acquirers that wouldn’t matter – BeyondTrust can at least sell ‘new’ Lumigent functions to their existing accounts to enhance security


SIEM: Out with the Old

About 4 years ago we saw the first big wave of replacements of older email security tools with a second generation we now call ‘content security’. Early email security products were deployed in-house and focused on anti-virus, anti-spam, and mail server integration. The current generation of products offered new SaaS and hybrid deployment models, technology advancements in web and content filtering, more elastic service sets, and centralized web management consoles. And let’s not forget the larger security firms with products lagging far behind the state of the art, milking their cash cows while smaller firms innovated. We see the same wave of succession right now in the SIEM market. First generation products – despite being entrenched – make customers uncomfortable enough to start asking what else is available. They are looking for better, easier, and faster. We hear numerous complaints about existing solutions:

- “We collect every event in the data center, but we can’t answer security questions, only run basic security reports.”
- “We can barely manage our SIEM today, and we plan on rolling event collection out across the rest of the organization in the coming months.”
- “I don’t want to manage these appliances – can this be outsourced?”
- “Do I really need a SIEM, or is log management and ad hoc reporting enough?”
- “Can we please have a tool that does what it says?”
- “Why is this product so effing hard to manage?”

We see new products designed to both improve scalability and come closer to real-time analysis. They can collect events from just about every type of network device and application, normalize, and provide better drill-down capabilities. And there are many new analysis features – including enrichment, attack signature patterns, and application-layer monitoring. The first generation of products are looking old, and I hear more and more unhappiness with today’s entrenched solutions. I ran across Anton Chuvakin’s How to Replace a SIEM? this week.
But his tips apply to a wider audience than just Cisco MARS customers kicking other vendor tires. He offers two excellent vendor migration suggestions that bear repeating. First, leave the existing system running for some time – at least through the migration. This way you are still covered during the migration, and in the event previously collected data is not compatible across systems, you can still run reports and access forensic data. I have seen cases of “rip and replace” where the old system is removed while – or even before – the new system is up and running. That means no coverage for a (potentially extended) period. I sometimes call that ‘optimism’, but you may prefer another term. The sales process is a good time to ensure your (new, hungry) vendor can run in parallel with your existing tool – don’t buy it and then let them tell you that’s an unsupported scenario. Second, have the new vendor help with setup. Deployment issues are some of the most serious problems we hear of. Hiring the vendor not only helps avoid many pitfalls, but also makes it easier and quicker to replicate the rules and reports you currently use. And during the sales process you can negotiate attractive pricing on getting the work done as a condition of the sale. But before you replace a SIEM there are a couple other things you need to consider: Post Mortem: What exactly are the problems with the existing system, and what do you hope to accomplish? It’s not hard to come up with a list of problems and areas for improvement – it’s much harder to vet a new technology to confirm it addresses your demands without adding its own slew of new pitfalls. The problem here is that vendors will tell you they can do whatever you ask for. Realistically, you need to check with other customers who already own and operate new products before you buy – see what their experiences have been. What you have: The SIEM you have was installed for a reason.
Actually, they are normally installed for several reasons, to address a list of business and security problems which grows over time. It’s easy to forget everything your system does when its failings are so easy to see. Make sure you have a complete understanding of what issues are currently being addressed and must be replicated on the new system. This includes compliance and security functions across management, operations, and security organizations. Worse, the existing SIEM likely feeds data to other systems you forgot about. The list you build is almost always much longer than expected. The good news is this process saves time and avoids trouble down the road, and helps form RFP questions and guide proof of concept testing. You want a new product that handles your new wish list, but don’t give up on any core reasons you are already running SIEM. SIEM replacement can be easier than a first installation, but you need to leverage the knowledge you have to make it so. That may sound easy, but it takes work to gather the organizational memory you need and clearly document your goals moving forward.
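The parallel-run advice can be sketched in a few lines of Python. This is an illustrative fan-out, not any vendor’s API – the collector names and event shape are my own assumptions. The point is that during migration every event goes to both the legacy and the new SIEM, and a failure on one destination never costs you coverage on the other.

```python
# Hypothetical sketch: fan each event out to both SIEMs during migration,
# so the legacy system keeps full coverage until cut-over.

def fan_out(event, collectors):
    """Send one event to every collector. A failure on one
    destination must not block delivery to the others."""
    delivered = []
    for name, send in collectors.items():
        try:
            send(event)
            delivered.append(name)
        except Exception:
            pass  # in real tooling: log the failure, keep the other feed alive
    return delivered

# Stand-ins for real forwarders (assumptions, not product interfaces)
old_siem_events, new_siem_events = [], []
collectors = {
    "legacy": old_siem_events.append,
    "new": new_siem_events.append,
}

fan_out({"src": "10.0.0.5", "msg": "failed login"}, collectors)
```

Only after the new system has proven it can replicate the old reports would you drop the "legacy" entry from the map.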


Friday Summary: May 6, 2011

A few months back one of my dogs knocked over one of my speakers. Sent it flying, actually. A 3’, 50lb wood cabinet speaker – as if it wasn’t there. The culprit is still a puppy, but when she gets ripping, she can pretty much take out any piece of furniture I own. And she has a big butt. She seems to run into everything butt first, which is impressive as she does not walk backwards. Wife calls her ‘J-Lo’. She learned how to spin from playing with my boxer, and now she spins out of control when she is amped up. Big ass, right into a chair… BANG! I miss having music in the living room, so I thought I would solve the problem by bringing out a pair of tower speakers from the back room. They are six feet tall and weigh 180lb each. I thought that was the perfect solution, until she moved the piano a half of an inch with one of her spins. For the sake of the speakers, and my health, I removed all stereo components from the living room. But I still want music, so I have been searching for small electronics to put on the shelf in the kitchen. My requirements were pretty simple: decent quality music that won’t become a projectile of death. I began shopping and found, well, everything. I found hundreds of portable DACs, the size of a cigarette pack, for the iPhone & iPad. There are lots of boom boxes, desktop radios, and miniature receivers. I ordered the iHome IP1 because it got good reviews and – while the audiophile in me hates to admit it – it just looked good. I was really excited when it arrived last week: I cleared off a space for it, cleaned up the shelf, got everything plugged in, and updated my music library with some fresh tunes. Only problem – it sucked. Or maybe it was defective, I don’t really know. It won’t play music from an iPhone 4, iPad, or iPod touch – only the iPhone 3GS. And when it did play, it sounded underwater. Ugh. Really freakin’ bad. So I am still searching for a good desktop radio that I can stream music to from my iDevices.
If you have reasonably priced recommendations, let me know. For now I am just playing from the built-in speakers, which is better than nothing. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Mike on the Importance of Application Intelligence.
- Adrian’s DR post on How To Respond To The Sony Attacks.

Favorite Securosis Posts
- Adrian Lane: SDLC and Entropy. See Gunnar’s take.
- David Mortman: What’s Old Is New Again. And we wonder why our lives (in security anyway) are described as the “hamster wheel of pain.” We repeat the same stuff over and over again. With maybe a twist or two (as Adrian astutely points out), but the plot is the same. So is the end result. Sigh.
- Mike Rothman: Why We Didn’t Pick the Cloud (Mostly) and That’s OK. Who else gives you such a look into the thought processes behind major decisions? Right, no one. You’re welcome.

Other Securosis Posts
- Earth to Symantec: AV doesn’t stop the APT.
- Incite 5/4/2011: Free Agent Status Enabled.
- Standards: Should You Care? (Probably Not).
- Software vs. Appliance: Virtual Appliances.
- Software vs. Appliance: Data Collection.

Favorite Outside Posts
- Adrian Lane: VMWare Building Clouds? An interesting look at virtual platform use by cloud providers.
- David Mortman: The Rise of Data-Driven Security. I love it when we get validated by a heavy hitter like Scott.
- Mike Rothman: Summary of the Amazon EC2 and Amazon RDS Service Disruption in the US East Region. Great explanation from Amazon about their EC2 FAIL a few weeks back. You can learn a lot about cloud architecture, as well as get a feel for how complicated it is to really scale. It’s like a tightrope walk every time they have to scale (which is probably constantly). This time they fell off and went splat. Let’s hope the net is positioned a bit more effectively next time.

Project Quant Posts
- DB Quant: Index.
- NSO Quant: Index of Posts.
- NSO Quant: Health Metrics–Device Health.
- NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
- NSO Quant: Manage Metrics–Deploy and Audit/Validate.
- NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations
- React Faster and Better: New Approaches for Advanced Incident Response.
- Measuring and Optimizing Database Security Operations (DB Quant).
- Network Security in the Age of Any Computing.
- The Securosis 2010 Data Security Survey.
- Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts
- Fake Mac Security Software. It’s drive-by malware… if you actually click all the buttons and install it.
- Anonymous claims no involvement in Sony hacks.
- How to disappear completely. Yeah, more Sony mayhem.
- Barracuda Breach Post Mortem Analysis.
- Test-Driving IBM’s SmartCloud. Interesting analysis of IBM’s ‘SmartCloud’ trial product. In fairness, it’s very early in the development process.
- Zero-Day Attack trends via Krebs. Second installment. Makes you think security companies are not eating their own dog food.
- LastPass Forces Users to Pick Another Password. It’s bad when the salt is stolen with the hashed passwords… now it becomes a dictionary attack. If it was a foreign government (wink-wink), they have the resources to crack all the passwords.
- Nikon Image Authentication System Compromised. Interesting read.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to Earth to Symantec: AV doesn’t stop the APT.

The reality here is that SYMC is a very successful security vendor with a lot of customers and many solutions. They aren’t stupid, and press announcements like this aren’t driven by ignorance. Sadly, they will sell product to customers based on this. It speaks volumes about the buyer and their ability to understand complex security issues and appropriate remedies. In short, most “security” professionals can’t, and many companies don’t even have “security” professionals on


Software vs. Appliance: Data Collection

Wrapping up our Software vs. Appliance series, I want to remind the audience this series was prompted by my desire to spotlight the FUD in Database Activity Monitoring sales processes. I have mentioned data collection as one of those topics, and data collection matters. As much as we would like to say the deployment architecture is paramount for performance and effectiveness, data collection is crucial too, and we need to cover a couple of the competitive topics that get lumped into bake-offs. One of the most common marketing statements for DAM is, “We do not require agents.” This statement is technically correct, but it’s (deliberately) completely misleading. Let’s delve into the data collection issues that impact the Appliance vs. Software debate: Yes, We Have No Agents: No Database Activity Monitoring solution requires an agent. You’ll hear this from all of the vendors because they have to say that to address the competitive ‘poison pill’ left by the previous vendor. All but one DAM product can collect SQL and events without an agent. But the statement “We don’t require an agent” is just marketing. In practice all DAM products – software, hardware, and virtual – use agents. It’s just a fact. They do this because agents, of one form or another, are the only reliable way to make sure you get all important events. It’s how you get the whole picture and capture the activity you need for security and compliance. Nobody serious about compliance and/or security skips installing an agent on the target database. No Database Impact: So every DAM vendor has an agent, and you will use yours. It may collect SQL from the network stack by embedding into the OS; or by scanning memory; or by collecting trace, audit, or transaction logs. No vendor can credibly claim they have no impact on the target database. If they say this, they’re referring to the inadequate agent-less data collection option you don’t use.
Sure, the vendor can provide a pure network traffic collection option to monitor for most external threats, but that model fails to collect critical events on the database platform. Don’t get me wrong – network capture is great for detecting a subset of security-specific events, and it’s even preferable for your less-critical databases, but network scanning fails to satisfy compliance requirements. Agent-less deployments are common, but only for cases where the database is a lower priority. It’s for those times you want some security controls, but it’s not worth the effort to enforce every policy all the time. Complete SQL Activity: DAM is focused on collection of database events. Agents that collect from the network protocol stack outside the database, or directly from the network, focus on raw unprocessed SQL statements in transit, before they get to the database. For many customers just getting the SQL statement is enough, but for most the result of the SQL statement is just as important. The number of rows returned, or whether the query failed, is essential information. Many network collectors do a good job of query collection, but poor result collection. In some cases they capture only the result code, unreliably – I have seen capture rates as low as 30% in live customer environments. For operations management and forensic security audits this is unacceptable, so you’ll need to verify during vendor review. Database Audit vs. Activity Audit: This is a personal pet peeve, and something that bothers most DAM customers once they are aware of it. If your agent collects data from outside the database, you are auditing activity. If you collect data from inside the database, you are auditing the database. It’s that simple. And this is a very important distinction for compliance, where you may need to know database state. It is considerably more difficult to collect from database memory, traces, transaction logs, and audit logs.
Using these data sources has more performance impact – anywhere from slightly more to far more than activity auditing, depending upon the database and the agent configuration. Worse, database auditing doesn’t always pick up the raw SQL statements. But these data sources are used because they provide insight into the state of the database and transactions – multiple statements logically grouped together – which activity monitoring handles less well. Every DAM platform must address the same fundamental data collection issues, and no one is immune. There is no single ‘best’ method – every option imposes its own tradeoffs. In the best case, your vendor provides multiple data collection options for you to choose from, and you can select the best fit for each deployment.
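To make the activity-audit vs. database-audit distinction concrete, here is a hypothetical sketch of a normalized DAM event record in Python. The field names and source labels are my own assumptions, not any vendor’s schema; the point is that result metadata (rows returned, status) is often absent from network capture, and the collection point determines which kind of audit you are actually doing.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DamEvent:
    """Illustrative normalized event, not a real product schema."""
    sql: str                              # raw statement, as captured
    source: str                           # 'network', 'os_agent', 'audit_log', ...
    rows_returned: Optional[int] = None   # often missing for network capture
    status: Optional[str] = None          # 'ok', 'error', or None if not seen

    @property
    def audit_scope(self) -> str:
        # Collected inside the database (trace/audit/transaction log)
        # => database audit; collected outside => activity audit.
        return ("database"
                if self.source in ("audit_log", "trace", "txn_log")
                else "activity")

# Same query, two collection points: only the in-database source
# carries the result metadata compliance reports may need.
net = DamEvent("SELECT * FROM accounts", source="network")
log = DamEvent("SELECT * FROM accounts", source="audit_log",
               rows_returned=1542, status="ok")
```

A bake-off checklist can be as simple as asking, for each database, which of these fields the proposed collection method actually populates.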


Software vs. Appliance: Virtual Appliances

For Database Activity Monitoring, virtual appliances exist because hardware appliances don’t fit virtualization models – management, hardware consolidation, resource and network abstraction, and even power savings all work against dedicated boxes. Infrastructure as a Service (IaaS) disrupts the hardware model. So DAM vendors pack their application stacks into virtual machine images and sell those. It’s a quick win for them, as very few changes are needed, and they escape the limitations of hardware. A virtual appliance is ‘built’ and configured like a hardware appliance, but delivered without the hardware. That means all the software – both third party and vendor created – contained within the hardware appliance is now wrapped in a virtual machine image. This image is run and managed by a Virtual Machine Manager (VMware, Xen, Hyper-V, etc.), but otherwise functions the same as a physical appliance. In terms of benefits, virtual appliances are basically the opposite of hardware appliances. Like the inhabitants of mirror universes in Star Trek, the participants look alike but act very differently. Sure, they share some similarities – such as ease of deployment and lack of hardware dependencies – but many aspects are quite different than software or hardware based DAM. Advantages over physical hardware include:

- Scale: Taking advantage of the virtual architecture, it’s trivial to spin up new appliances to meet demand. Adding new instances is a simple VMM operation. Multiple instances still collect and process events, and send alerts and event data to a central appliance for processing. You still have to deploy software agents, and manage connections and credentials, of course.
- Cloud & Virtual Compatibility: A major issue with hardware appliances is their poor fit in cloud and virtual environments. Virtual instances, on the other hand, can be configured and deployed in virtual networks to both monitor and block suspicious activity.
- Management: Virtual DAM can be managed just like any other virtual machine, within the same operational management framework and tools. Adding resources to the virtual instance is much easier than upgrading hardware. Patching DAM images is easier, quicker, and less disruptive. And it’s easy to move virtual appliances to account for changes in the virtual network topology.

Disadvantages include:

- Performance: This is in stark contrast to hardware appliance performance. Latency and performance are both cited by customers as issues. Not running on dedicated hardware has a cost – resources are neither dedicated nor tuned for DAM workloads. Event processing performance is in line with software, which is not a concern. The more serious issue is disk latency and event transfer speeds, both of which are common complaints. Deployment of virtual DAM is no different than most virtual machines – as always, you must consider storage connection latency and throughput. DAM is particularly susceptible to latency – it is designed for real-time monitoring – so it’s important to monitor I/O performance and virtual bottlenecks, and adjust accordingly.
- Elasticity: In practice the VMM is far more elastic than the application – virtual DAM appliances are very easy to replicate, but don’t take full advantage of added resources without reconfiguration. In practice added memory & processing power help, but as with software, virtual appliances require configuration to match customer environments.
- Cost: Cost is not necessarily either an advantage or a problem, but it is a serious consideration when moving from hardware to a virtual model. Surprisingly, I find that customers using virtual environments have more – albeit smaller – databases. And thus they have more virtual appliances backing those databases. Ultimately, cost depends entirely on the vendor’s licensing model. If you’re paying on a per-appliance or per-database model, costs go up.
To reduce costs, either consolidate database environments or renegotiate pricing. I did not expect to hear about deconsolidation of database images when speaking with customers. But customer references show that virtual appliances are added to supplement existing hardware deployments – either to fill in capacity or to address virtual networking issues for enterprise customers. Interestingly, there is no trend of phasing either out in favor of the other; customers stick with the hybrid approach. If you have user or vendor feedback, please comment. Next I will discuss data collection techniques. These are important for a few reasons – most importantly because every DAM deployment relies on a software agent somewhere to collect events. It’s the principal data collection option, so the agent affects performance, management, and separation of duties.
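The agent-to-central-appliance flow described above can be sketched in miniature. This is a toy illustration under my own assumptions – the class and function names are hypothetical, and no real DAM agent works this simply (real agents hook the database’s network stack or memory space, and ship events over an authenticated, encrypted channel):

```python
import json
import queue
import time

class DamAgent:
    """Toy sketch: collect database events locally, forward them in batches."""

    def __init__(self, forward, batch_size=3):
        self.events = queue.Queue()
        self.forward = forward          # callable standing in for the network send
        self.batch_size = batch_size

    def capture(self, user, statement):
        # A real agent captures this from the database itself, not from callers.
        self.events.put({"ts": time.time(), "user": user, "sql": statement})
        if self.events.qsize() >= self.batch_size:
            self.flush()

    def flush(self):
        # Batching trades a little latency for far fewer network round trips.
        batch = []
        while not self.events.empty():
            batch.append(self.events.get())
        if batch:
            self.forward(json.dumps(batch))

# Central server side: parse the batch and alert on suspicious statements.
received = []
def central_server(payload):
    for event in json.loads(payload):
        received.append(event)
        if "drop table" in event["sql"].lower():
            print(f"ALERT: {event['user']} issued {event['sql']}")

agent = DamAgent(central_server)
agent.capture("app_user", "SELECT * FROM orders WHERE id = 7")
agent.capture("app_user", "UPDATE orders SET status = 'shipped'")
agent.capture("intruder", "DROP TABLE customers")   # third event triggers a flush
```

Even this toy version shows why agents dominate the performance and management discussion: they sit in the event path, they buffer, and they decide what crosses the network to the central appliance.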


SDLC and Entropy

I really enjoy having Gunnar Peterson on the team. It seems like every time we talk in our staff meeting I laugh and learn something – two rare outcomes in this profession. We were having a laugh Friday morning about the tendency of software development organizations to trip over themselves in their efforts to improve. Several different clients were having the same problem understanding how to apply security to code development. Part of our discussion:

Gunnar: There are no marketing requirements, so no code, right?
Adrian: I’ll bet the developers are furiously coding as we speak. No MRD, no problem.
Gunnar: The Product Manager said “You start coding, I’ll go find out what the customer wants.”
Adrian: Ironic that what they’re doing is technically Agile. Maybe if it’s a Rapid Prototyping team I’d have some sympathy, but someone’s expecting production code.
Gunnar: I wonder what they think they are building?

Don’t talk to me about improving Waterfall or Agile when you can’t get your organizational $&!% together. What do I mean by that? Here is an example of something I witnessed:

Phase 1: The Development VP, during an employee review, says, “What the heck have you been doing the last six months?” In a panic, the developer mentions a half-baked idea he had, and a prototype widget he’s been working on. An informal demo is scheduled.
Phase 2: VP says, “I love that! That is the coolest thing I have seen in a long time.” The developer’s chest swells with pride.
Phase 3: VP says, “Let’s put that in the next release.” The developer’s brain freezes, thinking about the engineering challenges of turning a half-baked widget into production code, and suddenly realizing there is no time to do any other sprint tasks. The VP takes the developer’s stunned silence as a commitment and walks away.
Phase 4: Developer says to the product manager, “Yeah, we’re including XYZ widget. The VP asked for it so I checked it into the code base.” Product Manager says, “Are you effing crazy? We don’t even have tests for it.”
And they make it happen because, after all, it’s employee review time. It’s not news to many of you, but that’s how features get put in – and then you ‘fix’ the feature. Security plays catch-up somewhere down the road, because the feature is too awesome to leave out, or to delay until it’s fully sussed out. I used to think this was a process issue, but now I believe it’s a byproduct of human nature. Managers don’t realize the subtle ways they change others’ behavior, and their own excitement over new technology pushes rules right out the window. It’s less about changing the process than about not blowing up the one you have. Gunnar’s take is a little different: If you’re in security, don’t assume you can change the process, and don’t assume your job is to make the process more formal. Instead look for concrete ways to reduce vulnerabilities within the existing process. As any teenage girl knows, don’t listen to a word the boy says – watch what he actually does. Likewise, security people working on the SDLC: don’t believe the process documents! Instead observe developers in the wild – sit in their cubes and watch what they actually do. If you strip away the PowerPoints, process documents, and grand unified dreams of software development (be they Agile, Scrum, or Rational), this is how real-world software development occurs. It’s a chaotic and messy process. This assumption leads you in a different direction – not formalism, but winning the hearts and minds of the developers who will deliver the security mechanisms, and finding quick and dirty ways to improve security.


What’s Old Is New Again

“The entire credit card table was encrypted and we have no evidence that credit card data was taken. The personal data table, which is a separate data set, was not encrypted, but was, of course, behind a very sophisticated security system that was breached in a malicious attack.”

That’s from the news, analyst, and Sony PR reports coming out about the PlayStation Network/Qriocity breach. Does anyone trust Sony’s statement that the credit card data was not ‘taken’? If attackers got the entire customer database, wouldn’t you think they grabbed the encrypted card numbers and will attempt to crack them later? Is the comment about “a very sophisticated security system” supposed to make customers feel better, or to generate sympathy for Sony? Does labeling their breached security system “very sophisticated” reduce your faith that their crypto and key management systems will withstand scrutiny? How many of you thought the name “Qriocity” was a defacement the first time you read the story? My general rule over the last three years has been not to write about breaches unless there is something unusual about them. There are just too many, and the questions above could apply to any of the lame-assed breach responses we have been hearing for the last decade. But this one has plenty of angles that make it good spectator sport:

It’s new: It’s the first time I have seen someone’s network hacked through a piece of dedicated hardware of their own design.
It’s old: The classic (developer) test environment was the initial point of entry and, just like in so many breaches before it, for some mysterious reason the test environment could access the entire freakin’ customer database.
It’s new: I can’t think of another major data breach with this degree of international impact. I’m not talking about the fraud angle, but rather how governments and customers are reacting.
It’s old: Very little information dribbling out, with half-baked PR “trust us” catchphrases like “All of the data was protected…”
It’s new: Japanese culture values privacy more than any other country I am familiar with. Does that mean they’ll develop the same dedication to security as they have to quality and attention to detail?
It’s old: It’s interesting to me that a culture intensely driven to continuous improvement has an oh-so-common allergic reaction to admitting fault. Sure, I get the ‘blameless’ angle written about in management texts throughout the 80s, but the lack of ownership here has a familiar ring. Obviously I was not the only one thinking this way.
It’s new: We don’t, as a rule, see companies essentially shut down divisions in response to breaches, and the rumored rebuild of every compromised system is refreshing.
It’s old: Their consumer advice is to change your password and watch your credit card statements.

Ultimately I am fascinated to see how this plays out internationally, and whether this breach has meaningful long-term impact on IT security processes. Yeah, I’m not holding my breath either.


Software vs. Appliance: Software

“It’s anything you want it to be – it’s software!” – Adrian. Database Activity Monitoring software is deployed differently than DAM appliances. Whereas appliances are usually two-tier event collector / manager combinations that divide responsibilities, software deployments are as diverse as customer environments: stand-alone servers installed in multiple geographic locations, loosely coupled confederations each performing different types of monitoring, hub & spoke systems, everything on a single database server, all the way up to N-tier enterprise deployments. It’s about how the software is configured and how the customer allocates resources to address their specific requirements. Most customers use a central management server communicating directly with software agents that collect events. That said, the management server configuration varies from customer to customer, and evolves over time. Most customers divide the management server functions across multiple machines as requirements grow and they need more capacity. Distributing event analysis, storage, management, and reporting across multiple machines enables tuning each machine to its particular task, and provides additional failover capabilities. Large enterprise environments dedicate several servers to analyzing events, linking those with other servers dedicated to relational database storage. This latter point – the use of relational database storage – is one of the few major differences between the software and hardware (appliance) embodiments, and the focus of most of the marketing FUD (Fear, Uncertainty, and Doubt) in this category. Some IT folks consider relational storage a benefit, others a detriment, and some a bit of both; so it’s important to understand the tradeoffs. In a nutshell, relational storage requires more resources to house and manage data, but in exchange provides much better analysis, integration, deployment, and management capabilities.
Understanding the differences in deployment architecture and the use of relational storage is key to appreciating software’s advantages. Advantages of software over appliances include:

  • Flexible Deployment: Add resources and tune your platforms specifically to your database environment, taking into account the geographic and logical layout of your network. Whether it’s thousands of small databases or one very large database – one location or thousands – it’s simply a matter of configuration. Software-based DAM offers a half-dozen different deployment architectures, with variations on each to support different environments. If you choose wrong, simply reconfigure or add resources, rather than buying new appliances.
  • Scalability & Modular Architecture: Software DAM scales in two ways: additional hardware resources, and “divide & conquer”. DAM installations scale with processor and memory upgrades, or you can move the installation to a larger machine to process more events. But customers more often choose to scale by partitioning the DAM deployment across multiple servers – generally placing the DAM engine on one machine and the relational database on another. This effectively doubles capacity, and each platform can be tuned for its function. The model scales further with multiple event processing engines on the front end, letting the database handle concurrent insertions, or by linking multiple DAM installations via the back-end database. Each software vendor offers a modular architecture, letting you address resource constraints with fine granularity.
  • Relational Storage: Most appliances use flat files to store event data, while software DAM uses relational storage. Flat files are extraordinarily fast at writing new events to disk, supporting higher data capture rates than equivalent software installations.
But the additional overhead of the relational platform is not wasted – it provides concurrency, normalization, indexing, backup, partitioning, data encryption, and other services. Insertion rates are lower, but complex reports and forensic analyses are faster. In practice, software installations can directly handle more data than DAM appliances without resorting to third-party tools.

  • Operations: When Securosis recently went through a deployment analysis exercise, we found that operations played a surprisingly large part in our decision-making process. Software-based DAM looks and behaves like the applications your operations staff already manages. It also lets you choose which relational platform to store events on – IBM, Oracle, MS SQL Server, MySQL, Derby, or whatever you already have. You can deploy on the OS (Linux, HP/UX, Solaris, Windows) and hardware (HP, IBM, Oracle, Dell, etc.) you prefer and already own. There is no need to retrain IT operations staff, because management fits within existing processes and systems. You can deploy, tune, and refine the DAM installation as needed, with much greater flexibility to fit your model. Obviously customers who don’t want to manage extra software prefer appliances, but they are then dependent on vendors or third-party providers for support and tuning, and need to provide VPN access to production networks for regular maintenance.
  • Cost: In practice, enterprise customers realize lower costs with software. Companies with the leverage to buy hardware at a discount, and/or that own software site licenses, can scale DAM across the organization at much lower total cost. Software vendors offer tiered pricing and site licenses once customers reach a certain database threshold. Cost per DAM installation goes down, unlike appliance pricing, which is basically linear. And the flexibility of software allows more efficient deployment of resources.
Site licenses provide cost containment for large enterprises that roll out DAM across the entire organization. Midmarket customers typically don’t realize this advantage – at least not to the same extent – but ultimately software costs enterprises less than appliances.

  • Integration: In theory, appliance and software vendors all offer integration with third-party services and tools. All the Database Activity Monitoring deployment options – software, hardware, and virtual appliances – offer integration with workflow, trouble-ticket, log management, and access control systems. Some also provide integration with third-party policy management and reporting services. In practice, the software model offers additional integration points, and thus more customer options. Most of these additional capabilities are thanks to the underlying relational databases – leveraging additional tools and procedural interfaces. As a result, software DAM deployments provide more options for supporting business analytics, SIEM, storage, load balancing, and redundancy.

As I mentioned in the previous post, most of these advantages are not visible during the initial deployment phases.
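The flat-file vs relational storage tradeoff described above can be illustrated with a few lines of code. This is a sketch under my own assumptions – the schema and queries are hypothetical, not any DAM product’s actual design – but it shows the shape of the argument: insertion costs more than appending to a flat file, and in exchange forensic questions become indexed queries instead of log scans.

```python
import sqlite3

# In-memory relational store standing in for a DAM event repository.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dam_events (
        ts      REAL,
        db_user TEXT,
        action  TEXT,
        obj     TEXT
    )
""")
# Indexing is one of the services flat files don't provide.
conn.execute("CREATE INDEX idx_user ON dam_events (db_user)")

events = [
    (1.0, "app_user", "SELECT", "orders"),
    (2.0, "app_user", "UPDATE", "orders"),
    (3.0, "dba",      "ALTER",  "customers"),
    (4.0, "dba",      "SELECT", "credit_cards"),
]
# Each insert pays relational overhead (parsing, indexing, transactions)...
conn.executemany("INSERT INTO dam_events VALUES (?, ?, ?, ?)", events)

# ...but a forensic question ("what did the DBA touch, in order?") is one
# indexed query rather than a scan over every flat file on the appliance.
rows = conn.execute(
    "SELECT action, obj FROM dam_events WHERE db_user = ? ORDER BY ts",
    ("dba",),
).fetchall()
print(rows)  # [('ALTER', 'customers'), ('SELECT', 'credit_cards')]
```

The same structure is what enables the integration points mentioned above – reporting tools, SIEM, and analytics can all speak SQL to the event store directly.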


Friday Summary: April 22, 2011

The Apple-ification of my home continues, as I got an Apple TV as an early birthday present. Tinkerer that I am, I thought “Wouldn’t it be great to hardwire it with Cat5 cable to the Airport Extreme? Download speeds will be awesome.” So I changed the existing phone lines (I’ll never use a POTS land line again) to Ethernet. Which meant changing all the phone jacks, and then the wall plates. And rewiring the central connections. And putting a new router in the closet. And adding new power to the closet. And wiring in a small low-voltage fan. It was the snowball effect, but this was one of the first times I didn’t mind, because I have the Giant Freakin’ Toolbox! It was not always this way. For many years I would find something broken around the house and attempt to fix it. I am a guy, and that’s what I do. You know, something simple like a door latch that’s not working. More often than not, the whole process would just piss me off, because it always involved ‘The Search’. Searching for my tools. Where had they gone? Where was the Torx wrench I needed? Where was the right screwdriver – the right size with the hardened steel tip? What happened to the beautiful German wood chisels I got for Christmas? When you don’t live in your own house for four years (which happened to me with my previous employer) tools disappear. When you don’t have kids, there are only a couple of options for who used them. As far as I can tell the dogs have no interest in carpentry or automotive repair. You know who to ask about tools stored in random locations, but the question “Have you seen…” is just not worth asking. The answer, “No, I have no idea,” just makes me angrier. Best case, your wife will only say “No” – worst case she’ll be pissed at you for insinuating she lost your tools.
Then you stumble upon a tool you weren’t looking for – during your desperate search for the tools you need – in the bathroom cupboard, on top of a picture frame, in a box in the attic, or in that decorative ceramic vase in the dining room. During The Search – which takes longer than actually fixing the busted stuff – I would find other broken things with higher priority than whatever I originally set out to fix. More tool searching ensued. You make a trip to Home Depot right after you post missing-tool flyers around the house with pictures of your orbital sander. You look at the clock and half the day is gone. Two birthdays ago, my wife got me two giant tool chests, one fitting right on top of the other. With their wonder-twin powers they form Giant Freakin’ Toolbox! When she bought them, every guy in the neighborhood – seeing the two boxes in the driveway – showed up and ‘helped’ her assemble them. OK, maybe ‘helped’ is the wrong word, because they did all the work. And maybe her bikini and the free beer helped too, but she managed to get 6 guys over to the house to set up the toolbox. She was miffed to discover the toolboxes and beer were the main attraction, but she got over it. I was so happy with the present that I spent two days going through every square inch of our home to gather up my tools and place them in their new home. Now every tool I own resides there. Every tool is clean. Every drawer is labeled. Almost every tool has been accounted for – except some of the fine German wood chisels, which were destroyed by a friend while prying the heads from a small block Chevy. Now projects take 2-5 minutes – perhaps 7 with cleanup. I built a wall bracket and installed a central vacuum system in a couple hours. I can change a light switch or adjust faucets without thinking about it. The time savings and reduction in frustration are astounding.
I even assembled a small set of basic tools right by the garage door so – ahem – anyone else needing tools can find a hammer, a basic screwdriver, and pliers without rummaging in the tool chest. Giant Freakin’ Toolbox has a lock! I love my Apple electronics, but they don’t compare to Giant Freakin’ Toolbox. Best. Gift. Ever. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich quoted in CIO.in.
  • Rich quoted in ComputerWorld.
  • Adrian’s Dark Reading post on Cloud DB Security.
Favorite Securosis Posts
  • Adrian Lane: Dropbox Should Mimic CrashPlan.
  • Mike Rothman: How to Read and Act on the 2011 Verizon DBIR. This is a gold mine, and you could get buried alive. Rich deciphers it.
  • Rich: Categorizing FUD. My prediction: everyone else also chooses this one.
Other Securosis Posts
  • Oracle CVSS: ‘Partial+’ is ‘Useful-‘.
  • Software vs. Appliance: Appliances.
  • Incite 4/20/2011: Family Parties.
  • New White Paper: React Faster and Better: New Approaches for Advanced Incident Response.
  • Weekend Reading: Security Benchmarking Series.
  • Security Benchmarking, Going Beyond Metrics: Continuous Improvement.
  • Security Benchmarking, Beyond Metrics: You Can’t Benchmark Everything.
Favorite Outside Posts
  • Adrian Lane: My favorite line in the CSA Guidance.
  • Rich: The Science of Why We Don’t Believe Science. In security we need to constantly assess our own cognitive biases. This is a good article that can help you understand risk responses, even though it isn’t about risk. Yes, politics are mentioned, but if you can’t get past that you need to re-evaluate your biases.
  • Mike Rothman: Be wary of the well-certified IT pro. “Certification only goes so far.” What Kevin Beaver said…
Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.
Research Reports and Presentations
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.