Friday Summary: November 5, 2010

November already. Time to clean up the house before seasonal guests arrive. Part of my list of tasks is throwing away magazines. Lots of magazines. For whatever perverse reason, I got free subscriptions to all sorts of security and technology magazines: CIO Insight, Baseline, CSO, Information Week, Dr. Dobb's, Computer XYZ, and whatever else was available. They are sitting around unread, so it’s time to get rid of them. While I was at it I got rid of all the virtual subscriptions to electronic magazines as well. I still read Information Security Magazine, but I download that, and only because I know most of the people who write for it. For the first time since I entered the profession there will be no science, technology, or security magazines – paper or otherwise – coming to my door. I’m sure most of you have already reached this conclusion, but the whole magazine format is obsolete for news. I kept them around just in case they covered trends I missed elsewhere. Well, that, and because they were handy bathroom reading – until the iPad. Skimming through a stack of them as I drop them into the recycling bin, I realize that fewer than one article per magazine would get my attention. When I did stop to read one, I had already read about the topic online, at multiple sites, with far better coverage. The magazine format does not work for news.

I am giving this more consideration than I normally would, because it’s been the subject of many phone calls lately. Vendors ask, “Where do people go to find out about encryption? Where do people find information on secure software development? Will the media houses help us reach our audience?” Honestly, I don’t have answers to those questions. I know where I go: my feed reader, Google, Twitter, and the people I work with. Between those four outlets I can find pretty much anything I need on security. Where other people go, I have no idea. Traditional media is dying. Social media seems to change monthly; and the blogs, podcasts, and feeds that remain strong only do so by shaking up their presentations. Rich feels that people go to Twitter for their security information and advice. I can see that – certainly for simple questions, directions on where to look, or A/B product comparisons. And it’s the perfect medium for speed reading your way through social commentary. For more technical stuff I have my doubts. I still hear more about people learning new things from blogs, conferences, training classes, white papers and – dare I say it? – books! The depth of the content remains inversely proportional to the velocity of the medium.

Oh, and don’t forget to check out the changes to the Securosis site and RSS feeds! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post: Does Compliance Drive Patching?
  • Rich, Martin, and Zach on the Network Security Podcast, episode 219.

Favorite Securosis Posts

  • Rich: IBM Dances with Fortinet. Maybe. Mike reminds us why all the speculation about mergers and acquisitions only matters to investors, not security practitioners.
  • Mike Rothman: React Faster and Better: Response Infrastructure and Preparatory Steps. Rich nails it, describing the stuff and steps you need to be ready for incident response.
  • Adrian Lane: The Question of Agile’s Success.

Other Securosis Posts

  • Storytellers.
  • Download the Securosis 2010 Data Security Survey Report (and Raw Data!)
  • Please Read: Major Change to the Securosis Feeds.
  • React Faster and Better: Before the Attack.
  • Incite 11/3/2010: 10 Years Gone.
  • Cool Sidejacking Security Scorecard (and a MobileMe Update).
  • White Paper Release: Monitoring up the Stack.
  • SQL Azure and 3 Pieces of Flair.

Favorite Outside Posts

  • Rich: PCI vs. Cloud = Standards vs. Innovation. Hoff has a great review of the PCI implications for cloud and virtualization. Guess what, folks – there aren’t any shortcuts, and deploying PCI compliant applications and services on your own virtualization infrastructure will be tough, never mind on public cloud.
  • Adrian Lane: HTTP cookies, or how not to design protocols. Historic perspective on cookies and associated security issues. Chris’ favorite too: an illuminating and thoroughly depressing examination of HTTP cookies, why they suck, and why they still suck.
  • Mike Rothman: Are You a Pirate? Arrington sums up the entrepreneur’s mindset crisply and cleanly. Yes, I’m a pirate!
  • Gunnar Peterson offered: How to Make an American Job Before It’s Too Late.
  • David Mortman: Biz of Software 2010, Women in Software & Frat House “Culture”.
  • James Arlen: Friend of the Show Alex Hutton contributed to the ISO 27005 <=> FAIR mapping handbook.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Ops Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • Gaping holes in mobile payments, via Threatpost.
  • Microsoft warns of 0-day attacks.
  • Serious bugs in Android kernel.
  • Indiana AG sues WellPoint over data breach.
  • Windows phone kill switch.
  • CSO Online’s fairly complete List of Security Laws, Regulations and Guidelines.
  • SecTor 2010: Adventures in Vulnerability Hunting – Dave Lewis and Zach Lanier.
  • SecTor 2010: Stuxnet and SCADA systems: The wow factor – James Arlen.
  • RIAA ass-clowns at it again.
  • Facebook developers sell IDs.
  • Russian-Armenian botnet suspect raked in €100,000 a month.
  • FedRAMP Analysis. It sure looks like a desperate attempt to bypass security analysis in a headlong push for cheap cloud services.
  • Part 2 of JJ’s guide to credit card regulations.
  • Dangers of the insider threat and key management.
  • Included as humor, not news: Software security courtesy of child labor.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Andre Gironda, in response to The Question of Agile’s Success. “To


The Question of Agile’s Success

Ten years after the creation of the Manifesto for Agile Software Development, Paul Krill of Developer World asks: Did it deliver? Unfortunately I don’t think he adequately answered the question in his article. So let me say that the answer is an emphatic “Yes”, as Agile has provided several templates and tools for solving problems with people and process. And it has to be judged a success because it has provided a means to conquer problems other development methodologies could not. That said, I can’t really blame Mr. Krill for meandering around the answer. Even Kent Beck waffled on the benefits: “I don’t have a sound-bite answer for you on that.” … and said Agile has “… contributed to people thinking more carefully about how they develop software. … There’s still a tendency for some people to look for a list of commandments.” It’s tough to cast a black-or-white judgment like “Success” or “Failure” on Agile software development. This is partially because Agile is really a broad concept with different implementations – by design – to address different organizational problems. And it is also because each model can be improperly deployed, or applied to situations it is not suited for. Oh, and let’s not forget your Agile process could be run by morons. All these are impediments to success. Of course ‘Agile’ does not fix every problem every time. There are plenty of Agile projects that have failed – usually despite Agile tools that can spotlight the problems facing the development team. And make no mistake – Agile is there to address your screwy organizational problems, both inside and outside the development team. Kent Beck’s quotes capture the spirit of this ongoing discussion – for many of the Scrum advocates I meet there is a quasi-religious exactitude with which they follow Ken Schwaber’s outline. To me, Agile has always been a form of object-oriented process, and I mix and match the pieces I need. The principal point I am trying to make in my “Agile Development, Security Fail” presentation is that failure to adapt Agile for security makes it harder to develop secure code.


SQL Azure and 3 Pieces of Flair

I have very little social life, so I spent my weekend researching trends in database security. Part of my Saturday was spent looking at Microsoft’s security model for the Azure SQL database platform. Specifically I wanted to know how they plan to address database and content security issues with their cloud-based offering. I certainly don’t follow all things cloud to the degree our friend Chris Hoff over at RationalSurvivability does, but I do attempt to stay current on database security trends as they pertain to cloud and virtual environments. Rummaging around MSDN, looking for anything new on SQL Azure database security, I found Microsoft’s Security Guidelines and Limitations for SQL Azure Database. And I downloaded their Security Guidelines for SQL Azure (docx). All 5 riveting pages of it. I have also been closely following the Oakleaf Systems blog, where I have seen many posts on secure session management and certificate issuance. In fact Adam Langley had an excellent post on the computational costs of SSL/TLS this Saturday. All in all they paint a very consistent picture, but I am quite disappointed in what I see. Most of the technical implementations I have looked at appear sound, but if the public documentation is an accurate indication of the overall strategy, I am speechless. Why, you ask? Firewall, SSL, and user authentication are the totality of the technologies prescribed. Does that remind you of something? This, perhaps? With thanks to Gunnar Peterson, who many years ago captured the essence of most web application security strategies within a single picture. Security minimalism. And if they only want to do the minimum, that’s okay, I guess. But I was hoping for a little content security. Or input validation tools. Or logging. I’m not saying they need to go wild with features, but at this point the burden is on the application developer to roll their own security.
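Since the platform leaves those jobs to the application, here is a rough sketch of what “rolling your own” minimally looks like from the client side: parameterized queries for input handling, plus an application-level audit log. This is a hypothetical illustration, not Microsoft guidance – the driver string, server name, credentials, and table are all placeholder assumptions.

```python
# Hypothetical sketch: application-side input handling and audit logging,
# since the platform itself only prescribes firewall + SSL + authentication.
import logging
import pyodbc  # assumes a suitable ODBC driver is installed

logging.basicConfig(filename="app_audit.log", level=logging.INFO)

conn = pyodbc.connect(
    "Driver={SQL Server};"                      # placeholder driver name
    "Server=tcp:example.database.windows.net;"  # placeholder server
    "Database=exampledb;Uid=appuser;Pwd=CHANGEME;Encrypt=yes;"
)

def lookup_order(order_id: int):
    cursor = conn.cursor()
    # Parameterized query: user input is bound, never concatenated into SQL.
    cursor.execute("SELECT * FROM orders WHERE order_id = ?", (order_id,))
    logging.info("order lookup: id=%s", order_id)  # roll-your-own audit trail
    return cursor.fetchall()
```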


Friday Summary: October 22, 2010

Facebook is for old people. Facebook will ultimately make us more secure. I have learned these two important lessons over the last few weeks. Saying Facebook is for old people is not like saying it’s dead – far from it. But every time I talk computers with people 10-15 years older than me, all they do is talk about Facebook. They love it! They can’t believe they found high school acquaintances they have not seen for 30+ years. They love the convenience of keeping tabs on family and friends from their Facebook page. They are amazed to find relatives who have been out of touch for decades. It’s their favorite web site by far. And they are shocked that I don’t use it. Obviously I will want to once I understand it, so they all insist on telling me about all the great things I could do with Facebook and the wonderful things I am missing. They even give me that look, like I am a complete computer neophyte. One said, “I thought you were into computers?” Any conversation about security and privacy went in one ear and out the other because, as I have been told, Facebook is awesome.

As it always does, this thread eventually leads to the “My computer is really slow!” and “I think I have a virus, what should I do?” conversations. Back when I had the patience to help people out, a quick check of the machine would seldom uncover an actual virus – I never got past the dozen quasi-malicious browser plug-ins, PR-ware tracking scripts sucking up 40% of system resources, or nasty pieces of malware that refused to be uninstalled. Nowadays I tell them to stop visiting every risky site, stop installing all this “free” crap, and for effing sake, stop clicking on email links that supposedly come from your bank or Facebook friends! I think I got some of them to stop clicking email links from their banks. They are, after all, concerned about security. Facebook is a different story – they would rather throw the machine out than change their Facebook habits because, sheesh, why else use the computer?

I am starting to notice an increase in computer security awareness from the general public. Actually, the extent of their awareness is that a lot of them have been hacked. The local people I talk to on a regular basis tell me they, and all their children, have had Facebook and Twitter accounts hacked. It slowed them down for a bit, but they were thankful to get their accounts back. And being newly interested in security, they changed their passwords to ‘12345’ to ensure they will be safe in the future. Listening to the radio last week, two of the DJs had their Twitter accounts stolen. One DJ had a password that was his favorite team name concatenated with the number of his favorite player. He was begging over the air for the ‘hacker’ to return his access so he could tweet about the ongoing National League series. Social media are a big part of their personal and professional lives and, dammit, someone was messing with them!

One of my biggest surprises in Average Joe computer security was seeing Hammacher Schlemmer offer an “online purchase security system”. Yep, it’s a little credit card mag stripe reader with a USB cable. Supposedly it encrypts data before it reaches your computer. I certainly wonder exactly whose public key it might be encrypting with! Actually, I wonder if the device does what it says it does – or anything at all!
I am certain Hammacher Schlemmer sells more Harry Potter wands, knock-off Faberge eggs, and doggie step-up ladders than they do credit card security systems, but clearly they believe there is a market for this type of device. I wonder how many people will see these in their in-flight Sky Mall magazines over the holidays and order a couple for the family. Even for Aunt Margie in Minnesota, so she can safely send electronic gift cards to all the relatives she found on Facebook. Now that she has regained access to her account and set a new password. And that’s how Facebook will improve security for everyone. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Tech Target article on Database Auditing. Adrian’s technical tips on setting up database auditing.
  • Rich at RSA 2010 China.

Favorite Securosis Posts

  • Mike Rothman: Monitoring up the Stack: Climbing the Stack. The end of the MUTS series provides actionable information on where to start extending your monitoring environment.
  • Adrian Lane: Vaults within Vaults.

Other Securosis Posts

  • React Faster and Better: Data Collection/Monitoring Infrastructure.
  • White Paper Goodness: Understanding and Selecting an Enterprise Firewall.
  • Incite 10/20/2010: The Wrongness of Being Right.
  • React Faster and Better: Introduction.
  • New Blog Series: React Faster and Better.
  • Monitoring up the Stack: Platform Considerations.

Favorite Outside Posts

  • Mike Rothman: Reconcile This. Gunnar calls out the hypocrisy of what security folks focus on – it’s great. The bad guys are one thing, but our greatest adversary is probably inertia.
  • Gunnar Peterson: Tidal Wave of Java Exploitation.
  • Adrian Lane: Geek Day at the White House.
  • Chris Pepper: WTF? Apple deprecates Java. Actually they’re dropping the Apple JVM as of 10.7, but do you expect Oracle to build and maintain a high-quality JVM for Mac OS X? A lot of Mac-toting Java developers are looking at each other quizzically today.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations

  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.

Top News and Posts

  • A boatload of Oracle fixes.
  • Judge Clears CAPTCHA-Breaking Case for Criminal Trial.
  • Data theft overtakes physical loss.
  • Malware pushers abuse Firefox warning page.
  • Predator


Monitoring up the Stack: Climbing the Stack

As we have discussed throughout this series, monitoring additional data types can extend the capabilities of SIEM in a number of ways. But you have lots of options for which direction to go, so the real question is: where do you start? Clearly you are not going to start monitoring all of these data types at once, particularly because most require some integration work on your part – often a great deal. Honestly, there are no hard and fast answers on where to start, or on which type of monitoring is most important. Those decisions must be based on your specific requirements and objectives. But we can describe a couple of common approaches for climbing the monitoring stack.

Get More from SIEM

The first path involves organizations simply looking to do more with what they have, squeezing additional value from the SIEM system they already own. They start by collecting data from the monitoring systems already in place, where they already have the data or can easily get it. From there they add capabilities in order, from easiest to hardest. Usually that means file integrity monitoring first. As an additional monitoring capability, file integrity is a bit of a standalone feature, but a critical one, because most attacks have some impact on critical system files and so can be detected this way. Next comes identity monitoring – most SIEM platforms coordinate with server/desktop operations management systems, so this capability is relatively straightforward to add. Why do this? Identity monitoring systems include audit capabilities which provide events to the SIEM, both to audit access control system activity and to map local events back to domain identities. From there it’s a logical progression to add user activity monitoring, leveraging the combination of SIEM functions and identity monitoring data against a set of new rules and dashboards implemented to track user activity. As sophistication increases, third-party web security, endpoint agents, and content analysis tools can provide additional data to fill out a comprehensive view of user activity. Once those activities are mastered, these organizations tackle database and application monitoring. These two data types overlap less in terms of analysis and data collection techniques, provide more specialized analysis, and address detection of a different class of attack. Their implementations also tend to be the most resource intensive, so without a specific catalyst to drive implementation they tend to fall to the bottom of the list.

Responding to Threats

In the second post in this series, we outlined many of the threats that prompt IT organizations to consider monitoring: malware, SQL injection, and other types of system misuse. If managing these threats is the catalyst to extend your monitoring infrastructure, the progression of data types to add will depend entirely on which attacks you need to address. If you’re interested in stopping web attacks, you’ll likely start with application monitoring, followed by database activity and identity monitoring. Malware detection will drive you towards file integrity monitoring initially, and then probably to identity and user activity monitoring, because bad behavior by users can indicate a malware outbreak. If you want to detect botnets, user activity monitoring and identity monitoring are a good start.
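To make that progression concrete, the priority lists above can be jotted down as a simple ordered lookup. This is only a back-of-the-envelope illustration – the threat names and orderings come straight from this series, but the code is not any vendor's model:

```python
# Which monitoring data types to add first, per catalyst, as described above.
# Order matters: earlier entries are the recommended starting points.
PRIORITIES = {
    "web attacks": ["application monitoring", "database activity", "identity"],
    "malware":     ["file integrity", "identity", "user activity"],
    "botnets":     ["user activity", "identity"],
}

def rollout_plan(threats):
    """Merge per-threat priorities into one ordered, de-duplicated plan."""
    plan = []
    for threat in threats:
        for data_type in PRIORITIES.get(threat, []):
            if data_type not in plan:
                plan.append(data_type)
    return plan

print(rollout_plan(["malware", "botnets"]))
# ['file integrity', 'identity', 'user activity']
```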
Your data type priorities will be driven by what you want to detect, based on the greatest risk you perceive to your organization. Though it’s a bit beyond the scope of this research project, we are big fans of threat modeling because it provides structure for what you need to worry about and how to defend against it. With a threat model – even on the back of an envelope – you can map the threats to information your SIEM already provides, and then decide which supplementary add-on functions are necessary to detect attacks.

Privileged Users

One area we tend to forget is the folks who hold the keys to the kingdom: administrators and others with privileged access to the resources that drive your organization. This is also a favorite for the auditors out there – perhaps something to do with low-hanging fruit – and we see a lot of folks look to advanced monitoring to address an audit deficiency. To monitor the activity of your privileged users, you’ll move towards identity and user activity monitoring first. These data types allow you to identify who is doing what, and where, to detect malfeasance. From there you add file integrity monitoring – changing system files is an easy way for someone with access to make sure they keep it, and to hide their trail. Database monitoring comes next, as users changing database access roles can indicate something amiss. The point here is that you’ve probably been doing security far too long to trust anyone, and enhanced monitoring can provide the data you need to understand what those insiders are really doing on your key systems.

Political Land Mines

Any time new technologies are introduced, someone has to do the work. Monitoring up the Stack is no different, and perhaps a bit harder because it crosses multiple organizational fiefdoms and requires consensus, which translates roughly to politics. And politics means you can’t get anything done without cooperation from your coworkers. We can’t stress this enough: many good projects die not because of need, budget, or technology, but due to a lack of interdepartmental cooperation. And why not? Most of the time the people who need the data – or even fund the project – are not the folks who have to manage things on a day-to-day basis. As an example, DAM installation and maintenance falls on the shoulders of database administrators. All they see is more work. Not only do they have to install the product, but they get blamed for any performance and reliability issues it causes. Pouring more salt into the wound, the DAM system is designed to monitor database administrators! Not only is the DBA’s job now harder because they can’t use their favorite
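File integrity monitoring shows up early in every one of these progressions, and the core idea behind it is simple. Here is a toy sketch of the baseline comparison a FIM tool performs – the watch-list paths and file names are hypothetical, and real products also track permissions, owners, and approved change windows:

```python
# Toy file integrity check: hash critical files and flag drift from a baseline.
import hashlib
import json
from pathlib import Path

CRITICAL = ["/etc/passwd", "/etc/ssh/sshd_config"]  # example watch list

def snapshot(paths):
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def save_baseline(path="fim_baseline.json"):
    Path(path).write_text(json.dumps(snapshot(CRITICAL)))

def check(path="fim_baseline.json"):
    baseline = json.loads(Path(path).read_text())
    for f, digest in snapshot(baseline.keys()).items():
        if digest != baseline[f]:
            print(f"ALERT: {f} changed")  # in practice, forward to the SIEM
```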


Monitoring up the Stack: Platform Considerations

So far in the Monitoring up the Stack series, we have focused on additional data types and analysis techniques that extend security monitoring for a deeper and better perspective on what’s happening. That added value is all well and good, but we all know there is no free lunch. So now let’s look at some of the problems, challenges, and extra work that come along with deeper monitoring goodness. We know most of you who have labored with scalability and configuration challenges on your SIEM product were waiting for the proverbial other shoe to drop. Each new data type, and the analysis associated with it, impacts the platform. So in this post we will discuss some of these considerations and think a bit about how to work around the potential issues. To be fair, it’s not all bad news. Some additional data sources are already integrated with the SIEM (as in the case of identity and database activity monitoring), minimizing deployment concerns. However, most options for application, database, user, and file monitoring are not offered as fully integrated features. Monitoring products sometimes need to be set up in parallel – yep, that means another product to deploy, configure, and manage. You’ll configure the separate monitor to feed some combination of events, configuration details, and/or alerts to the SIEM platform – but the integration likely stops there. And each type of monitoring we have discussed has its own idiosyncrasies and special deployment requirements, so the blade cuts both ways: hard-to-get data and real-time analysis for these additional data sources come at a cost. But what fun would it be if everything was standardized and worked out of the box? So you know what you’re getting yourself into, the following is a checklist of platform issues to consider when adding these data types to your monitoring capabilities.

  • Scalability: When adding monitoring capabilities, integrated or standalone, you need additional processing power. SIEM solutions offer distributed models to leverage multi-tier or multi-platform deployments, which may provide the horsepower to process additional data types. You may need to reconfigure your collection and/or analysis architecture to redistribute compute power for these added capabilities. Alternatively, many application and/or database monitoring approaches utilize software agents on the target platform. In some cases this is to access data not otherwise available, or to remove network latency from analysis response times, as well as to distribute the processing load across the organization. Of course there is a downside to agents: overhead and memory consumption can impact the target platform, on top of the normal installation & management headaches. The point is that you need to be aware of where the extra work is being performed, and either absorb that load on the target platforms or add horsepower to the SIEM system. Regardless of the deployment model you choose, you will need additional storage to accommodate the extra data collected. You may already be monitoring some application events through syslog, but transaction history can increase event volume per application by an order of magnitude. All monitoring platforms can be set to filter out events by policy, but filtering too much defeats the purpose of monitoring these other sources in the first place.

  • Integration: There are three principal integration points to consider.
The first is how to get data into the SIEM and integrated with other event types – and, along with it, how to configure the monitors regarding what to look for. Fully integrated SIEM systems account for both policy management and normalization/correlation of events. While you may need to alter some of your correlation rules and reports to take advantage of the new data types, it can all be performed from a single management console. Standalone monitoring systems can easily be configured to send events, configuration settings, and alerts directly to a SIEM, or to drop the data into files for batch processing. SIEM platforms are adept at handling data from heterogeneous sources, so you just change the correlation, event filtering, and data retention rules to account for the additional data. The second – and most challenging – integration point is sharing policies & reports between the two systems (SIEM and standalone monitor). Keep in mind that things like configuration analysis, behavioral monitoring, and file integrity monitoring all work by comparing current results against reference values. Unlike the hard-coded attribute comparisons in most SIEM platforms, these reference values change over time (by definition). Policies need to be flexible enough to handle these dynamic values, so if your SIEM platform can’t cope you’ll need to use the monitoring platform’s own interface for policies, reporting, and data management. We see this with most Database Activity Monitoring platforms, where the SIEM is not flexible enough to alert properly, so customers need to maintain separate rule bases in the two products. Whenever a rule changes on either side, this disconnection requires manual verification that settings remain consistent between the two platforms. Some monitoring tools have import and export features so you can create a master policy set for all servers, and provide policy reports that detail which rules are active for audit purposes. The third point to consider is that most monitoring systems leverage smart agents, with agent deployment and maintenance managed from the console. Most SIEM platforms provide a web-based management console which facilitates centralized management, or even the merging of consoles. But many standalone monitoring systems for content, file integrity, and web application monitoring are Windows-specific applications that can’t easily be merged, and must be managed as standalone applications.

  • Analysis: Each new data type needs its own set of analysis policies, alerting rules, dashboards, and reports. This is really where the bulk of the effort is spent to make these broader data sources available and effective. It’s not just that we have new types of data being collected – the flexibility of the flat-file event storage used within SIEM products adapts readily enough – but that monitoring tools should leverage more than merely attribute analysis.
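As a concrete picture of the loosely coupled option described above – a standalone monitor filtering events by policy and pushing the survivors to the SIEM’s syslog listener – here is a minimal sketch. The host, port, severity scale, and event schema are all assumptions for illustration:

```python
# Sketch: policy-filter events from a standalone monitor, then forward them
# to a SIEM over syslog. Host/port and the event format are hypothetical.
import json
import logging
import logging.handlers

siem = logging.getLogger("siem-feed")
siem.setLevel(logging.INFO)
siem.addHandler(logging.handlers.SysLogHandler(address=("siem.example.com", 514)))

def forward(event: dict, min_severity: int = 5):
    # Filter by policy, but remember: filtering too aggressively defeats
    # the purpose of collecting these extra sources in the first place.
    if event.get("severity", 0) >= min_severity:
        siem.info(json.dumps(event))

forward({"source": "db-monitor", "severity": 7, "msg": "privilege change detected"})
```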


Dead or Alive: Pen Testing

Remember the dead or alive game Howard Stern used to do? I think it was Stern. Not sure if he’s still doing it, because I’m too cheap to subscribe to Sirius for the total of 5 minutes I spend in the car driving between coffee shops. Pen testing has been under fire lately. Ranum has been talking for years about how pen testing sucks. Brian Chess also called pen testing dead at the end of 2008. It’s almost two years later, and the death of pen testing has been greatly exaggerated. Pen testing is not dead. Not by a long shot. But it is changing. And we have plenty of folks weighing in on how this evolution is taking place.

First off is the mouth from the South, Dave Maynor. OK, one of the mouths from the South, because I suspect I am another. Dave made some waves regarding whether to use 0-day exploits in a pen test, and then had to respond when everyone started calling him names. Here’s the thing. Dave is right. The bad guys don’t take an oath when they graduate from bad guy school that they won’t use 0-days. They can and do, and you need to know how you’ll respond. Whether it’s part of a pen test or an incident response exercise doesn’t matter to me. But if you think you don’t need to understand how you’ll respond under fire, you are wrong.

Second, I got to attend a great session by Dave Kennedy and Eric Smith at BSides Atlanta about strategic pen testing. It was presented from the viewpoint of the pen tester, but you can apply a lot of those lessons to how a practitioner runs a pen test in their organization. To start, a pen test is about learning where you can be exploited. If you think it’s about checking a box (for an audit) or making yourself and your team look good, you’ve missed the point. These guys will break your stuff. The question is what you can learn, and how that will change your defensive strategies. The pen testers need to operate in a reasonable semblance of a real world scenario. Obviously you don’t want them taking down your production network. But you can’t put them in a box either. The point is to learn, and unless their charter is broad enough to make a difference, again, you are wasting your time.

Finally, I’ll point to a presentation by Josh Abraham, talking about his “Goal Oriented Pentesting” (PDF) approach. It’s good stuff. Stuff you should know, but probably don’t do. What do all these things have in common? They talk about the need for pen testing to evolve. But by no means are they talking about its death. Listen – at the end of the day, whether you are surprised by what an attacker does to your network is your business. I still believe pen testing can provide insights you can’t get any other way. I think those insights are critical to understanding your security posture. Those enlightened organizations which don’t pen test do so at their own risk. And the rest of us should thank them – they are the slow gazelles, and the lions are hungry.


IT Debt: Real or FUD?

I just ran across Slashdot’s mention of the Measuring and Monitoring Technical Debt study, funded by a research grant. Its basic conclusion is that a failure to modernize software is a form of debt obligation, and companies ultimately must pay off that debt moving forward. Until the modernization process happens, software degrades towards obsolescence or failure. From Andy Kyte at Gartner: “The issue is not just that maintenance keeps on getting deferred, it is that the lack of an application inventory and the absence of a structured review process for the application portfolio. This means the IT management team is simply never aware of the true scale of the problem,” Mr. Kyte said. “This problem, hidden from sight, is getting bigger every year and more difficult to deal with every year.”

I am on the fence regarding the research position – apparently others are as well – and I disagree with many of the assertions, because the cost of inaction needs to be weighed against the cost of overhauls. The cost of migration is significant. Retraining users. Retraining IT. New software and maintenance contracts. The necessary testing environments and regression tests. The custom code that needs to be developed to work with the new software packages. Third-party consulting agreements. New workflow and management system integration. Fixing bugs introduced with the new code. And so on.

In 2008, 60% of the clients of my former firm were running Oracle & IBM versions that were 10 years old – or older. They stayed on those versions because the databases and applications worked. The business functions operated exactly as needed – after 2-5 years of tweaking to get them exactly right. A new migration was considered to be another 2-5 year process. So many firms selected bolt-on, perimeter-based security products, because there was no way to build security into a platform in pure maintenance mode. And they were fine with that, as the business application was designed to a specification that did not account for changes to the security landscape, and depended on network and platform isolation. But the primary system function it was designed for worked, so an overhaul was a non-starter. Yes, the cost of new features and bug fixes on very old software, when needed, was steep. But that’s just it: there were very few features and bug fixes needed. The specifications for business processing were static. Configuration and maintenance costs were at a bare minimum. The biggest reason “the bulk of the budget cut has fallen disproportionately on maintenance activities” was that these firms were not paying for new software and maintenance contracts! Added complexity would have come with new software, not with keeping the status quo. The biggest motivator to upgrade was that older hardware/OS platforms were either too slow or beginning to fail. A dozen or so financial firms I spoke with performed this cost analysis, and felt that every day they did not upgrade saved them money. It was only in segments that require rapid changes to meet shifting markets – retail and shipping come to mind – that corporations benefited from modernization and new functionality to improve customer experience. I’ll be interested to see if this study sways IT organizations to modernize. The “deferred maintenance” message may resonate with some firms, but calling older software a liability is pure FUD.
What I hope the study does is prompt firms to compare their current maintenance costs against the cost of an upgrade plus its new maintenance – the only meaningful comparison is one performed within a customer’s own environment. That way they can intelligently plan upgrades when appropriate, and be aware of the costs in advance. You can bet every sales organization in the country will be delivering a copy of this research paper to their customers, to poke and prod them into spending more money.
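For what it’s worth, that comparison can start as back-of-the-envelope arithmetic. A sketch with purely hypothetical numbers – the structure is the point, not the figures:

```python
# Cumulative cost of the status quo vs. a migration over a planning horizon.
# All figures are hypothetical placeholders, not data from the study.
def cumulative_cost(annual_cost, one_time=0.0, years=5):
    return one_time + annual_cost * years

stay = cumulative_cost(annual_cost=200_000)  # old platform, minimal change
move = cumulative_cost(annual_cost=350_000,  # retraining, contracts, consulting
                       one_time=1_500_000)

print(f"status quo: ${stay:,.0f}   migration: ${move:,.0f}")
```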


Friday Summary: October 8, 2010

Chris Pepper was kind enough to forward this interview with James Gosling on the Basement Coders blog earlier in the week. I seldom laugh out loud when reading blogs, but his “Java, Just Free It” & “Set Java Free” t-shirts that were pissing off Oracle got me going. And the “Google is kind of a funny company because a lot of them have this peace love and happiness version of evil” quote had me rolling on the floor. In fact I found the entire article entertaining, so I recommend reading it all the way through if you have a chance.

James Gosling is an interesting guy, and for someone I have never met, he has had more impact on my career than any other person on the planet. Around Christmas 1995 I downloaded the Java white paper. At the time I was a porting engineer for Oracle, so my job was to get Oracle and Oracle apps to run on different flavors of Unix. The paper hit me like a ton of bricks. It was the first time I had seen a really good object model, one that actually enabled good object-oriented techniques. Most importantly for a porting engineer, Java code could run anywhere without being ported. The writing was on the wall that my particular skill set would decrease in value every day from then on. As soon as I could, I downloaded the JDK and started programming in Java. At the first Java One developers conference in 1996 – and seeing the ‘Green Project’ handheld Gosling described in the interview – I was beyond sold. I was more excited about the possibilities in computer science than ever before. I scripted my Oracle porting job, literally, in Perl and Expect scripts, to free up more time to program Java. I spent my days not-so-clandestinely programming whatever Java projects interested me. Within months I left Oracle just so I could go somewhere, anywhere, and program Java. The startup I landed at happened to be a security start-up. But that white paper was the major catalyst in my career, and it pretty much shaped my professional direction for the next 10 years.

And so it is again – Gosling’s views on NoSQL actually got me to go back and reconsider some of my negative opinions on the movement. I am still not sold, but there are a handful of people I have so much respect for that their vision is enough to prompt me to reinvestigate my beliefs. I hope Mr. Gosling gets another chance to research new technologies … the last time, he set the industry on its ear. – Adrian

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading article on Data Security: You’re Doing It Wrong.
  • Rich gets snarky with the Scwartz PR folks when they profile him.
  • Mike’s Endpoint Security Fundamentals: Part 3.

Favorite Securosis Posts

  • Mike Rothman: Index of NSO Quant Posts. Yeah, pimping out my own research again. But NSOQ was a monumental amount of work, and this provides quick links to all of it.
  • Adrian Lane: Monitoring up the Stack: Identity Monitoring. Gunnar has an excellent grasp of Identity Monitoring, and it shows in this post.
  • Gunnar Peterson: Monitoring up the Stack: Identity Monitoring.
  • Rich: This week’s Incite. In which Mike admits to thousands of people it’s his birthday this week!

Other Securosis Posts

  • Monitoring up the Stack: Identity Monitoring.
  • Incite 10/6/2010: The Answer is 42.
  • Monitoring up the Stack: App Monitoring, Part 2.

Favorite Outside Posts

  • Mike Rothman: Why Wesabe Lost to Mint. Not security related, but important nonetheless. The one that makes things easier on the user wins. Sound familiar, Dr. No? If users have to work too hard, they’ll find ways around your controls. Count on it.
  • Adrian Lane: AT&T, Voice Encryption and Trust.
  • Rich: Verizon releases their big PCI compliance report. Seriously good – this actually ties compliance to breaches.
  • Gunnar Peterson: OAuth Bearer Tokens are a Terrible Idea. This is a sad story, because OAuth gained a ton of traction in version 1.0 (many major sites like Twitter & Netflix are using it), and then in the process of moving OAuth to a full-blown IETF standard the primary security protections were dropped!

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.

Research Reports and Presentations

  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.

Top News and Posts

  • Dennis’s awesome article on Rethinking Stuxnet.
  • FBI Caught Spying. Then they want their toy back? Dumbasses.
  • Record Breaking Patch Tuesday.
  • eBanking Security Guarantees for Gov Institutions. Things are getting bad!
  • LinkedIn Drive-by Malware Attack.


Monitoring up the Stack: DAM, part 2

The odds are that if you already have a SIEM/Log Management platform in place, you already look at some database audit logs. So why would you consider DAM in addition? The real question, when thinking about how far up the stack (and where) to go with your monitoring strategy, is whether adding database activity monitoring data will help with threat detection and other security efforts. To answer that question, consider that DAM collects important events which are not in log files, provides real-time analysis and detection of database attacks, and blocks dangerous queries from reaching the database. These three features together are greater than the sum of their parts.

As we discussed in part 1 on Database Activity Monitoring, database audit logs lack critical information (e.g., SQL statements), events (e.g., system activity), and query results needed for forensic analysis. DAM extends event collection into areas SIEM/Log Management does not venture: parsing database memory, collecting OS and/or protocol traffic, intercepting database library calls, tapping undocumented vendor APIs, and using stored procedures & triggers. Each source contains important data which would otherwise be unavailable. But the value is in turning this extra data into actionable information. Over and above the attribute analysis (who, what, where, and when) that SIEM uses to analyze events, DAM uses lexical, behavioral, and content analysis techniques. By examining the components of a SQL statement – such as the WHERE and FROM clauses, and the type and number of parameters – SQL injection and buffer overflow attacks can be detected. By capturing normal behavior patterns by user and group, DAM effectively detects system misuse and account hijacking. By examining content – as it is both stored and retrieved – injection of code or leakage of credit card numbers can be detected as it occurs.

Once you have these two capabilities, blocking is possible. If you need to block unwanted or malicious events, you need to react in real time, and to deploy the technology in such a way that it can stop the query from being executed. Typical SIEM/LM deployments are designed to efficiently analyze events, which means only after data has been aggregated, normalized, and correlated – too late to stop an attack in progress. By detecting threats before they hit the database, you have the capacity to block or quarantine the activity, and take corrective action. DAM, deployed inline with the database server, can block known threats or provide ‘virtual database patching’ against them.

Those are the reasons to consider augmenting SIEM and Log Management with Database Activity Monitoring. How do you get there? What needs to be done to include DAM technology within your SIEM deployment? There are two options: leverage a standalone DAM product to submit alerts and events, or select a SIEM/Log Management platform that embeds these features. All the standalone DAM products can feed collected events to third-party SIEM and Log Management tools. Some can normalize events so that SQL queries can be aggregated and correlated with other network events. In some cases they can send alerts as well, either directly or by posting them to syslog. Fully integrated systems take this a step further by linking multiple SQL operations together into logical transactions, enriching the logs with event data, or performing subsequent query analysis.
They embed the analysis engine and behavioral profiling tools, allowing for tighter policy integration, reporting, and management. In the past, most database activity monitoring within SIEM products was ‘DAM Light’ – monitoring only network traffic or standard audit logs, and performing very little analysis. Today full-featured options are available within SIEM and Log Management platforms. To restate: DAM products offer much more granular inspection of database events than SIEM, because DAM includes many more options for data collection, along with database-specific analysis techniques. The degree to which you can extract useful information depends on how fully the two are integrated, and how much analysis and event sharing are established. If your requirement is to protect the database, you should consider this technology.
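To make the lexical analysis idea a bit more tangible, here is a deliberately naive sketch of the kind of checks involved: flagging a tautology in the WHERE clause, and drift in the expected bind-variable count. Real DAM engines parse full SQL grammars and profile behavior over time; this is only an illustration, not any product’s algorithm:

```python
# Naive lexical checks on a SQL statement, in the spirit of DAM analysis.
import re

# Matches classic tautologies such as "OR 1=1" or "OR 'a'='a'".
TAUTOLOGY = re.compile(r"\bor\b\s+'?\w+'?\s*=\s*'?\w+'?", re.IGNORECASE)

def suspicious(statement: str, expected_params: int = 1) -> bool:
    where_clause = statement.lower().partition("where")[2]
    if TAUTOLOGY.search(where_clause):
        return True  # tautology appended to the WHERE clause
    # Bind-variable count differs from the profiled norm for this query.
    return statement.count("?") != expected_params

print(suspicious("SELECT * FROM users WHERE id = ? OR 1=1"))  # True
print(suspicious("SELECT * FROM users WHERE id = ?"))         # False
```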


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.