Securosis Research

Friday Summary: May 14, 2010

I was rummaging through the closet yesterday when I came across some old notebooks from college. Yes, I am a pack rat. One of the books contained notes from Computer Science 110: Algorithm Design. Most of the coursework was about making algorithms more efficient and selecting the right algorithm to get the job done. I remember spending weeks on sorting routines: bubble sort, merge sort, heap sort, sorts based upon the Fibonacci sequence, Quicksort, and a few others. We ran all of them against sample data sets, comparing performance and collecting information on best case, median, and worst case results. Obviously with a pre-sorted list they all ran fast, but depending on the size and distribution of the data set our results were radically different.

The more interesting discussion was the worst-case scenarios. One of the techniques for discovering them was the Adversary Technique. Basically the adversary would re-arrange the data to make it as difficult as possible to sort. The premise was that, knowing how the algorithm compared elements (e.g., is X >= Y?), the adversary would re-arrange all the data elements into an order that forced the highest number of comparisons to be made. Some of the sorts were brilliant on average, but would be computing results until the end of time when confronted by a knowledgeable adversary.

All the sort algorithms are long since purged from my memory, and I can truthfully say I have never needed to develop a sorting routine in my entire career. But the adversary technique has been a very useful tool in designing code. I really started using a variant of that method when writing error-handling routines, so they worked efficiently while still handling errors. What is the most difficult result I could send back? When you start trying to think of errors to send back to a calling application, it’s amazing what chaos you can cause. The first time I saw an injection attack – a malicious stream sent back from a .plan file – I thought of the intelligent adversary. This is also a pretty handy concept when writing communication protocols, where you have to establish a trust relationship during multi-phase handshaking – the adversary technique is very good for discovering logic flaws. The intelligent adversary teaches you to ask the right questions, and is useful for identifying unnecessary complexity in code. If you don’t do this already, try a little adversarial role-playing the next time you have design work.
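As a quick illustration of the idea – a throwaway sketch written for this post, not anything from those old notebooks – here is a naive quicksort that always pivots on the first element. On random input it behaves itself; hand it the input an adversary would choose for this pivot rule (an already-sorted list) and the comparison count blows up:

```python
import random

def naive_quicksort(items, counter):
    """Quicksort with a fixed first-element pivot; counts element comparisons."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    counter[0] += len(rest)  # count one comparison per element partitioned
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return naive_quicksort(left, counter) + [pivot] + naive_quicksort(right, counter)

def comparisons(data):
    counter = [0]
    naive_quicksort(list(data), counter)
    return counter[0]

n = 500
print("random input:     ", comparisons(random.sample(range(n), n)))  # roughly n log n
print("adversarial input:", comparisons(range(n)))                    # roughly n^2 / 2
```

The security lesson is the same one from class: design for the adversary’s input, not the average input.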
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich at Dark Reading: A New Way to Choose Database Encryption.
  • Adrian’s featured article on Database Activity Monitoring for Information Security Magazine.
  • Adrian quoted in Goldman Sachs Sued for Illegal Database Access.
  • Rich on the Digital Underground podcast with Dennis Fisher.
  • Other Securosis mentions: the MSDN SDL group response to Monday’s FireStarter, and Robert Graham thinks we’re both full of $#!%. I confess that I am uncertain why Robert thinks our recommendations differ.

Favorite Securosis Posts
  • Rich: Unintended Consequences of Consumerization. One of the very first presentations I ever built as an analyst was on consumerization… mostly because I didn’t really know what I was doing at the time. But one tenet from that presentation still holds true – never underestimate the power of consumers, and we are all consumers.
  • Mike Rothman: We Have Ways of Making You … Use a Password. Yet another example of legislation gone wild…
  • David Mortman and Adrian Lane: FireStarter: Secure Development Lifecycle – You’re Doing It Wrong.

Other Securosis Posts
  • SAP Buys Sybase.
  • Incite 5/12/2010: the Power of Unplugging.
  • Help Build the Mother of All Data Security Surveys.
  • Download Our Kick-Ass Database Encryption and Tokenization Paper.

Favorite Outside Posts
  • Rich: Why I left Facebook. I’m still on Facebook, but I do nothing I remotely consider private there. I only stay on it until there is an alternative to keep me connected with old friends and family. Maybe that’s hypocritical considering some of my other privacy statements.
  • Mike Rothman: Getting the time dimension right. Russell helps us understand security metrics versus risk analysis. “But to make a judgement about security and make decisions about alternative security postures, we need a useful estimate of risk to decide how much security is enough.”
  • David Mortman: The Vulnerability Arms Race.
  • Adrian Lane: A Brief, Incomplete, and Mostly Wrong History of Programming Languages.

Project Quant Posts
  • DB Quant: Planning Metrics (Part 2).
  • DB Quant: Planning Metrics (Part 1).

Research Reports and Presentations
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.
  • Report: Database Assessment.

Top News and Posts
  • HTML 5 and SQL Injection.
  • Cigital has announced the latest BSIMM. Now with three times the number of large development shops who publicly admit that they tend to follow best practices.
  • Anti-Malware Bypass. Interesting use of DoS to avoid detection.
  • Verizon’s Cloud Security Strategy.
  • Facebook and the never-ending privacy discussion. Personally, I used lilSnitch to block everything Facebook. End of discussion.
  • Building their army of hacker commandos, Chris and Jack are indoctrinating children with a weekly regimen of XSS and pummeling drills.
  • Rumors spread: Hoff to become real-life Matthew Sobol. FBI claimed to be watching closely.
  • Open Source IDS. Beta available.
  • Few details, but Visa posted a warning about settlement fraud scams.
  • Stolen Laptop Exposed Data on 207K.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Technically my favorite comment of the week was by David Mortman, professing shock that Andre Gironda actually agreed with someone – on a public forum, no less! But alas, as he did not leave it on the blog, the award has to go to starbuck, in response to Secure Development Lifecycle – You’re Doing It Wrong:

“Before you know it, HR reps will be including “SDL certification” requirements on every engineering job description, without a clue what they are demanding or why, so let’s stop this train before it runs too far off the tracks.”

Damn right. By the way, I


We Have Ways of Making You … Use a Password

MSNBC has an interesting news item: a German court has ruled that all wireless routers must have a password, or the owners will be fined if it is discovered that someone used their connection illegally. From the post:

Internet users can be fined up to euro 100 ($126) if a third party takes advantage of their unprotected WLAN connection to illegally download music or other files, the Karlsruhe-based court said in its verdict. “Private users are obligated to check whether their wireless connection is adequately secured to the danger of unauthorized third parties abusing it to commit copyright violation,” the court said.

OK, so this is yet another lame attempt to stop people from sharing music and movies by trying to make the ‘ISP’ (a router owner in this case) an accessory to the crime. I get that, but a $126.00 fine, in the event someone is caught using your WiFi illegally and is prosecuted, is not a deterrent. But there are interesting possibilities to consider. Would the fine still apply if the password was ‘1234’? What if they had a password, but used WEP? Some routers, especially older ones, use WEP as the default. It’s trivial to breach and recover the password, so is that any better? Do we fine the owner of the router, or do we now fine the producer of the router for implementing crappy security? Or is the manufacturer covered by their 78-page EULA?

Many laws start out benign, just to get a foothold and set a precedent, then turn truly punitive over time. What if the fine were raised to $1,260, or $12,600? Would that alter your opinion? I cannot see an instance where this law makes sense as a deterrent to the actions it levies fines against.


SAP Buys Sybase

I am sitting on the porch reading a Sybase ASE document on transparent database encryption, so it’s ironic that a few minutes ago I got word that SAP bought Sybase for $5.8 billion. SAP posted a press release. This announcement comes right on the heels of their partnership announcement last March.

It’s been my feeling for several years now that relational databases have been on a steady retreat back into the core of the enterprise, from whence they came. Smaller, modular, more agile repositories are in vogue for everything outside enterprise IT data centers. They are easier and more accessible to developers, and they are free. I bring this up because this is one of the ways Sybase has been squeezed out of the enterprise relational database market. Let’s be honest – people looking for a new database now either go cheap and select a PostgreSQL / MySQL platform, or pay for the name brand / stack synergy / bundled pricing discounts of Oracle or IBM. Sybase has been steadily growing over the last five years thanks to new product offerings, but they remain something of an afterthought in the enterprise database market. Sybase does not enter into the discussion of new database sales, so they rely on keeping their current installed base happy and on growing their mobile offerings. And it’s not easy to succeed with Oracle undercutting them on price and IBM going after all their hardware vendor relationships.

SAP levels the playing field for Sybase, putting them in a position to grow and get visibility with a larger body of prospects. Sybase gives SAP a technologically current database platform, an analytics engine, mobile data/device support, and some other tools. Honestly, many of us had been wondering how long it would be before someone like SAP bought them. Sybase could not compete head to head in the relational database space without this relationship – not because of the technology, but due to customer preference for reducing risk by buying from stable providers. I really hate saying it, but the purchase legitimizes Sybase’s products and viability. An established company with over a billion dollars in revenue should not need such endorsements, but when competing with Oracle and IBM, they do.


FireStarter: Secure Development Lifecycle—You’re Doing It Wrong

I wrote last Monday’s FireStarter on Process and Peer Pressure because there were a few things bothering me that I needed to get out of my system, but I saved a lot for later. I didn’t really intend to write this follow-up so soon, but I saw that Cisco announced their own Software Development Lifecycle. I wanted to make some statements on SDL later this year when I begin publishing more concrete Secure Software Development Lifecycle (SSDL in Securosis parlance, SDL for most organizations) guidelines, but Cisco’s announcement changes things. I worry that sheer inertia will prompt the industry as a whole to rubber-stamp SDLs. Before you know it, HR reps will be including “SDL certification” requirements on every engineering job description, without a clue what they are demanding or why, so let’s stop this train before it runs too far off the tracks.

If you are thinking about incorporating Secure Development Lifecycle practices into your software development, that’s great. But if you have read about Microsoft’s SDL, witnessed Microsoft’s success, seen Cisco’s endorsement, and believe their model will work for you, just stop. It’s not going to work for you. It’s based on a lot of factors and assumptions that do not pertain to you. It’s not a template for your requirements. Adopting MS-SDL wholesale is a little like a child putting on adult clothes because they want to be ‘big’. You cannot drop that particular process into your development organization and have it fit. More likely you will break everything. Your team will need to change their skills and priorities, and though it sounds cliché, people are resistant to change. Existing processes need to be adjusted to accommodate secure development processes and techniques. You will need new tools, or to augment existing ones. You will need a whole new class of metrics and tracking. And everything you pick the first time will need several iterations of alteration and adjustment before you get it right – this isn’t Microsoft’s first attempt either.

It’s not that the SDL is bad – it isn’t. Microsoft did an excellent job with their SDL. It’s very well thought out, incorporates most effective defect detection techniques, has clearly evolved through several revisions, and includes intelligent tradeoffs in places where there is no single ‘right’ answer. But it is their SDL, not yours. If you take the SDL Microsoft has described and try to implement it, you will fail. I am talking to the 99% of people out there who would think about implementing SDL and think “Hey, Microsoft published this new thingie for free; let’s use it and save ourselves the time and money!” Wrong. Here’s why:

  • Too Big: This process is geared for very large firms, with lots of resources and a genuine desire to get better. You may not need all of it, and frankly it would be overwhelming to start with. This is huge – I mean really huge. You are not going to swallow this elephant in a single gulp.
  • Organic Evolution: Microsoft’s success is not just the introduction of process and techniques. It was not just hiring a handful of really good people and helping to educate the development staff. The MS-SDL reflects several years of focused evolution, and in software that is a lot. They spent a long time looking at the code and figuring out what was wrong. They developed their own tools to help discover problems. They developed software to help track their progress and provide metrics to demonstrate what worked and what did not. They evolved their own threat modeling. They tried, revised, fixed, and re-implemented most of what they do several times over. Don’t think their publishing a guide can save you this pain – it cannot.
  • Resources: People, Tools, and Time are the three classic resources you have when you build code. Resources are scarce. Always. OK, if you have billions of dollars in the bank, or you are a bank, you might not be quite as pinched for resources. But developing quality code is expensive. Microsoft had the money to hire some of the best people, to buy or build the best tools, a willingness to take additional time for security before releasing software, and then to hire some more of the best people. Your developers work nights and weekends to get the release out the door, then collapse in a heap, dreaming about all the things they wanted to do before the code was released. Cisco? Yeah, they can do this. You? You don’t have the resources to do everything, so you need to pick and choose.
  • Appropriateness of Techniques: Your program calls for white box testing. Great, but you don’t own critical code you rely on. You leverage open source where you can get code, but off-the-shelf software and even Microsoft tools do not provide source code. If you have four thousand web pages, and most of them don’t filter input values, do you really think you are going to fix this in the current release cycle, or are you going to deploy a WAF? If you are starting an application from scratch, your first step will be threat modeling. If you have a huge existing application, forget threat modeling for now – pen testing is probably much more effective and efficient. And it’s not just which techniques, but how you use them. Within all these techniques there are many variations and supporting requirements that need tweaking so they can work for you. We discuss these tradeoffs in the Use Case portion of the Understanding and Selecting a Web Application Security Program white paper, but the point is that the right choice for you is different than the right choice for Microsoft or Cisco, and you can’t discover what’s right for your environment by reading their SDLs.
  • Do what Microsoft did, not what they do: Using the SDL as your program is a really bad thing to do. I really hope people don’t take this as a slam against Microsoft – my point is


Database Security Fundamentals: Encryption

Continuing our theme of quick and effective database security measures, we now move into the data protection phase. The most common (and potentially most effective) security measure for data at rest is encryption, and since we are shooting for fast and effective, we are looking at some form of transparent encryption. Almost every database has transparent encryption built in, and it is effective for securing data files and archives from snooping. Several vendors also offer forms of transparent encryption at the OS/file system level, which behave in a very similar manner, so we will consider those options as well.

It’s ironic that I am writing this post today, as I just completed the final editorial sweep through the Securosis Database Encryption & Tokenization paper. Rich and I will be releasing it tomorrow (Cinco de Mayo), so if you want a much deeper dive into the technology tradeoffs and variations, check the paper out when it becomes available. (Shameless plug: if you are interested in sponsoring the paper, let us know.)

There are a handful of business reasons to use data encryption for databases: to buttress access controls in order to protect against unwanted insider access, to protect data at rest, or to comply with an industry or government regulation. Only the last two are covered by transparent encryption, as the first requires encryption at the application layer. Application-level encryption requires code changes, database changes, and application recertification, so I exclude it from this Fundamentals series. Encryption embedded within disk drives is also transparent, and it protects files on the disk as well. However, purchasing encrypted drives is a significant investment, does not protect exports or tape archives, and does not protect databases moving around virtual environments. Since we are focused on quick wins here, I am limiting the discussion to transparent database options – either using native database capabilities, or through OS/file system support.

Native database encryption features are embedded within the database. The encryption operations are handled behind the scenes, with no changes to the tables, columns, indices, or queries. Enabling the feature requires at most an add-on package, and in some cases is as simple as a handful of DDL statements. The database encrypts the data just prior to writing it to disk, and decrypts it when processing authenticated queries for encrypted data. Key management is either handled internally (with keys stored within system tables and only accessible by DBAs) or externally (with a dedicated key management server). Internal key storage is easier to manage, and simpler in disaster recovery scenarios, at the expense of weaker security. In either case, keys are used without end user interaction (or even knowledge).

File/OS encryption works by intercepting the database’s writes to disk and encrypting data blocks before storing them. Conversely, data is decrypted as the database requests information from disk. Keys are stored within key management services embedded within the encryption product rather than the database, or provided by external key management products. Keep in mind that this type of product can be applied to specific folders where data is stored, not just database files. File/OS encryption is attractive for its ability to address both database and non-database data security issues.

Two options are not a lot, but both transparent options are effective and offer the same business benefits.
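To make “a handful of DDL statements” concrete, here is a rough sketch of what enabling transparent encryption can look like on one platform – SQL Server-style TDE, driven from a Python script via pyodbc. The server, database, certificate names, and passphrases are all placeholders, and other platforms (Oracle, DB2, Sybase ASE) use different syntax, so treat this as an illustration rather than a runbook:

```python
# Rough sketch only: SQL Server-style transparent data encryption (TDE),
# driven from Python via pyodbc. All names and passphrases are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=db-host;DATABASE=master;"
    "UID=dba_user;PWD=dba_password",
    autocommit=True,
)
cursor = conn.cursor()

# 1. The key hierarchy lives in the master database: a master key protected by
#    a passphrase, and a certificate that will protect the database encryption key.
cursor.execute("CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'a long unique passphrase'")
cursor.execute("CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate'")

# 2. Back up the certificate and private key immediately -- without this backup
#    you cannot restore the encrypted database anywhere else.
cursor.execute(
    "BACKUP CERTIFICATE TDECert TO FILE = 'C:\\backup\\TDECert.cer' "
    "WITH PRIVATE KEY (FILE = 'C:\\backup\\TDECert.pvk', "
    "ENCRYPTION BY PASSWORD = 'another long passphrase')"
)

# 3. Create the database encryption key in the target database and turn
#    transparent encryption on.
cursor.execute("USE SalesDB")
cursor.execute(
    "CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
    "ENCRYPTION BY SERVER CERTIFICATE TDECert"
)
cursor.execute("ALTER DATABASE SalesDB SET ENCRYPTION ON")
```

The point is not the exact syntax – it’s that once something like this is in place, the database handles encryption and decryption behind the scenes, which is exactly what makes it a quick win.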
The choice comes down to four factors, in order of importance: performance, cost, versatility, and comfort level. How much does the solution impact transactional throughput? How much does it cost? How many different problems does it solve? How easy is it to use? Or at least this should be the order of importance, but from experience I know some people reverse that order because they know the database and are comfortable with a particular UI. If you are the sole DBA, how comfortable you are with the interface, and how easy it is for you to use, will be the biggest factors, because your time is more valuable than the other considerations. If you have been using Sybase for years and are happy with their tools, odds are you will choose that.

Regardless, if you have the opportunity, running a couple of performance benchmarks is very handy for getting an idea of how much impact encryption will have. It may be 3%, or it may be 12%. Nobody notices 3%, but 12% may mean calls from users. Run some basic performance tests against a) your unaltered database, b) the database with the vendor’s encryption active, and c) an external tool. Understanding the impact on typical database transaction processing really helps with decision making. Get some pricing estimates from vendors. If there are others in your IT organization who already use file/OS encryption, ask them about usage and performance. Yes, this turns a one-day implementation into a two-day task, but it’s worth it. Testing setup and execution will take at least a day, but will give you greater confidence in your decision and make the final rollout a lot easier.

  • Select: The question of which type of transparent encryption to select – native database encryption or external file/OS encryption – is a murky one. Weigh your options and make your selection. Acquire the tool or license.
  • Define Scope: Column level, table level, or entire database? Understand what data you will apply encryption to, read the documentation, and generate your configuration scripts.
  • Configure & Install: Once you have reached this step, you should be able to implement database encryption within an afternoon. Obviously, the first step is to make sure you have a verified backup before you begin. Once you have installed or configured the encryption engine, the first major step will be to generate the keys. Select a good passphrase (not a password) to protect the keys. Produce a verified backup of the key archive. If the keys are stored in a system table, take a fresh backup of the database. If the keys are in an external key management service, before you go any further, make sure you have that backed up and can restore


FireStarter: For Secure Code, Process Is a Placebo—It’s All about Peer Pressure

The other day it hit me: process is not that important to secure code development. Waterfall? Doesn’t matter. Agile process? Secondary. They only frame the techniques that create success. Saying a process helps create secure code is like saying a cattle chute tames a wild Brahma bull. Guidelines, steps, and procedures do little to alter code security – they only determine which code gets worked on. To motivate developers to improve security, try less carrot and more stick. Heck, process is not even a carrot – it’s more like those nylon dividers at the airport that keep polite people from pushing and shoving to the front of the line. No, if you want developers to write secure code, use peer pressure.

Peer pressure is the most effective technique we have for producing secure code. That’s it. Use it every chance you get. It’s the right thing to do.

Don’t believe me? You think pair coding is about cross-training? Please. It’s about peer pressure. Co-workers will realize you suck at coding, and publicly ridicule you for failing to validate input variables. So you up your game and double-check what you are supposed to deliver. Quality assurance teams point out places in the code where you screwed up, and bug counts come up during your raise review. Peer pressure. No developer wants his or her API banned because hackers trampled over it like fans at a Who concert.

If you have taken management classes, you have heard about the Hawthorne Effect, discovered through studies in the 1920s and ’30s. In attempts to increase factory worker output, researchers adjusted working conditions, specifically looking for the optimal lighting level that produced the highest productivity. What they found, however, was that productivity had nothing to do with the light level per se, but went up whenever the light level changed. It was a study, so supervisors paid attention whenever the light changed, to monitor the results. When the workers knew they were being watched, their productivity went up. Peer pressure.

Why do you think we have daily scrum meetings? We do it so you remember what you are supposed to be working on, and we do it in front of all your peers so you feel the shame of falling behind. That’s why we ask everyone in the room to participate. These little sessions are especially helpful at waking up those 20-something team members who were up all night partying with their ‘bros’, or drinking Guinness and watching Manchester United till the wee hours of the morning. You know who you are. We have ‘Sprints’ for the same reason universities have exams: to get you to do the coursework. It’s your opportunity to say, “Oh, S$^)#, I forgot to read those last 8 chapters,” and start cramming for the exam. Only at work you start cramming against the deadline. 30-day sprints just provide more opportunities to prod developers with the stick than, say, 180-day waterfall cycles.

I think Kent Beck had it wrong when he said that unacknowledged fear is the root cause of all software project failures. I think fear of the wrong things causes project failures. We specify priorities so we understand the very minimum we are responsible for, and we work like crazy to get the basics done. Specify security as the primary requirement, verify people are doing their jobs, and you get results. External code review? Peer pressure. Quality assurance? Peer pressure. Automated build failures? Peer pressure. The Velocity concept? Peer pressure. Testers fuzzing your code? Still peer pressure.

Sure, creating stories, checklists, milestones, and threat analyses sets direction – but none of those is a driver. Process frames the techniques we use, and the techniques alter behavior. The techniques that promote peer pressure – which manifests as fear or pride – are the most effective drivers we have. Disagree? Tell me why.


Understanding and Selecting SIEM/LM: Use Cases, Part 2

Use Case #2: Improve Efficiency

Turn back the clock about 5 months – you were finalizing your 2010 security spending, and then you got the news: budgets are going down again. At least they didn’t make you cut staff during the “right-sizing” at the end of 2008, eh? Of course, budget and resources be damned, you are still on the hook to secure the new applications, which will require some new security gadgets and generate more data. And we cannot afford to forget the audit deficiencies detailed in your friendly neighborhood assessor’s last findings. Yes, those have to be dealt with too, sometime in the first quarter, because the audit is scheduled for early May. This may seem like an untenable situation, but it’s all too real. Security professionals must keep looking for opportunities to improve efficiency and do more with less. As we look deeper into this scenario, there are a couple of inevitable situations we have to deal with:

  • Compliance requirements: Government and industry regulations force us to demonstrate compliance – which requires gathering log files, parsing out unneeded events, and turning transactions into human-readable reports to prove you’re doing things right. IT and Security must help Audit determine which events are meaningful, so regulatory controls are based upon complete and accurate information, and internal and external audit teams define how this data is presented.
  • Nothing gets shut down: No matter how hard we try, we cannot shut down old security devices that protect a small portion of the environment. Thus every new device and widget increases the total amount of resources required to keep the environment operational. Given the number of new attack vectors clamoring for new protection mechanisms, this problem is going to get worse, and may never get better.
  • Cost center reality: Security is still an overhead function, and as such it’s expected to work as efficiently as possible. That means no matter what the demands, there will always be pressure to cut costs.

So this use case is all about how SIEM/LM can improve the efficiency of existing staff, allowing them to manage more devices which are detecting more attacks, all while reducing the time from detection to remediation. A tall order, sure, but let’s look at the capabilities we have to accomplish this:

  • Data aggregation: Similar to our React Faster use case, having access to more data means less time is wasted moving between systems (swivel chair management). This increases efficiency and should allow security analysts to support more devices.
  • Dashboards: Since a picture is worth a thousand words, a well-architected security dashboard has to be worth more than that. When trying to support an increasing number of systems, the ability to see what’s happening and gain context with an overview of the big picture is critical.
  • Alerts: When your folks need to increase their efficiency, they don’t have a lot of time to waste chasing down false positives and investigating dead ends. So the ability to fire alerts based on real events rather than gut feel will save everyone a lot of time.
  • Forensic investigations: Once the problem is verified, it becomes about finding the root cause as quickly as possible. The SIEM/LM solution can provide the context and information needed to dig into the attack and figure out the extent of the damage – it’s about working smarter, not harder.
  • Automated policy implementation: Some SIEM/LM tools can build automated policies based on observed traffic. This baseline (assuming it represents normal and healthy traffic) enables the system to start looking for ‘not normal’ activity, which then may require investigation. A toy sketch of this baselining idea follows the list.
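To make the baselining idea concrete, here is a toy sketch – mine, not any vendor’s implementation – that learns an average event rate for each source during a known-good training period, then flags sources that deviate sharply from their own history:

```python
from collections import defaultdict

# Toy baseline: learn the average event count per hour bucket for each source
# during a "known good" period, then flag sources whose current rate deviates
# sharply. Real SIEM/LM baselining is far richer; this only illustrates the idea.
def build_baseline(training_events):
    """training_events: iterable of (source, hour_bucket) tuples."""
    counts = defaultdict(lambda: defaultdict(int))
    for source, hour in training_events:
        counts[source][hour] += 1
    return {src: sum(hours.values()) / float(len(hours)) for src, hours in counts.items()}

def flag_not_normal(baseline, current_counts, threshold=3.0):
    """current_counts: {source: events seen this hour}. Returns 'not normal' sources."""
    alerts = []
    for source, count in current_counts.items():
        expected = baseline.get(source, 0)
        if expected and count > threshold * expected:
            alerts.append((source, count, expected))
    return alerts

# Synthetic training data: fw-01 averages 10 events per hour bucket, vpn-01 averages 2.
baseline = build_baseline([("fw-01", h % 24) for h in range(240)] +
                          [("vpn-01", h % 24) for h in range(48)])
print(flag_not_normal(baseline, {"fw-01": 45, "vpn-01": 2}))  # flags only fw-01
```

Real products baseline far more dimensions (ports, protocols, users, time of day), but the principle is the same: learn ‘normal’ first, then alert on ‘not normal’.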
This use case is really about doing more with what you already have, which has been demanded of security professionals for years. There has been no lack of tools and products to solve problems, but the resources and expertise to take best advantage of those capabilities can be elusive. Without a heavy dose of automation – and most importantly a significant investment in getting the SIEM/LM system configured appropriately – there is no way we can keep up with the bad folks.

Use Case #3: Compliance Automation

You know the feeling you get when you look at your monthly calendar and it shows an upcoming audit? Whatever you were planning to do goes out the window, as you spend countless hours assembling data, massaging it, putting it into fancy checklists and pie charts, and getting ready for the visit from the auditor. Some organizations have folks who just focus on documenting security controls, but that probably isn’t you. So you’ve got to take time from the more strategic, or even just operational, tasks you’ve been working on to prepare for the audit.

And it gets worse, since every regulation has its own vernacular and rule set – even though they are all talking about the same sets of security controls. So there is little you can leverage from last month’s PCI audit to help prepare for next month’s HIPAA assessment. And don’t forget that compliance is not just about technology. There are underlying business processes in play that can put private data at risk, which have to be documented and substantiated as well. This requires more domain expertise than any one person or team possesses. The need to collaborate on a mixture of technical and non-technical tasks makes preparing for an audit that much harder and more resource intensive.

Also keep in mind the opportunity cost of getting ready for audits. For one, time spent in Excel and PowerPoint massaging data is time you aren’t working on protecting information or singing the praises of your security program. And managing huge data sets for multi-national organizations across potentially hundreds of sites requires ninja-level Microsoft Office skills. Drat, don’t have that. As if things weren’t hard enough, regulatory audits tend to be more subjective than objective, which means your auditor’s opinion will make the difference between a rubber stamp and a book of audit deficiencies that will keep your team busy for two years. So getting as detailed


Friday Summary: April 30, 2010

Project Management Judo

In “It’s not about risk,” Shrdlu got me thinking about the problem of perception. A few years back, I noticed one of my IT staff doing something odd. Every couple of weeks, over a period of many months, I would see this person walk into a room with marketing and sales people to attend a half-hour meeting. I was pretty sure the IT staffer did not know these people and had nothing to do with marketing or sales efforts. We were not running any joint projects at the time, so I could not figure out why he was meeting with these other teams. At some point curiosity overcame me, so I asked what was going on, and the IT guy told me they were figuring out how to set up credit card purchases for online software sales. Uh, what?

It had started innocently enough. Someone in sales asked the IT guy if they could have some space on a public FTP server, outside the firewall, to host customer reference documents and user guides. Just benign PDF files. Eager to help, IT made it happen. And it was a success. Soon a sales manager asked for a ‘help’ email account, so an email server was set up on the same box. Marketing got wind of this, and placed their own sales support docs on the server, but asked for a web interface to the documents. Done. A few months later the VP of sales thought there was a lead generation opportunity, so he asked for a sign-in page with logins forwarded to the sales team. Marketing asked if it was possible to simply share the marketing folder to the collateral server to make it easier to push content, and it was finished by day’s end. Each new request was completed as asked. Customers said it would be great if they could pay for some of our upgrades online, so someone in sales said “Absolutely!” and asked the IT guy how quickly taking credit cards could be set up.

This is the point where I enter the story. I call this a “lose-lose, with a side of bad news” situation. I found that I had an unsecured server outside the firewall – with FTP, email, file sharing, and a web server – opening a gaping hole into the network. Worse, the service was already a success, with several groups dependent upon it. I was about to shut down this entire unsanctioned and insecure operation, piss off sales and marketing, and gently admonish an employee who really did nothing but try to be helpful. To further tweak everyone involved, I got to play Scrooge, killing off their Christmas dreams of generating Internet sales before the end of Q4. What started as a simple repository rapidly evolved into a full-service portal, with each step introducing visible benefits, but with security threats not entirely obvious to those requesting the services. And honestly, they did not care, as the customers were happy. Marketing was happy. Sales was happy. IT Guy was happy. Me? Not so much.

Shrdlu points out that “The onus to demonstrate benefit is on those who propose the action be taken.” I get this. In spades. The side of the coin opposite “Mr. Happy Go-getter” is “Mr. Negative Boat-anchor”. It sucks to be the boat anchor. But someone has to be the adult and say ‘No’. Or maybe not say ‘No’ out loud, but make someone else say it for you. There are ways to do this without being labeled “not a team player”. It’s really quite easy to dream up new ways to generate revenue, and everyone wants to make more money. You want to make more money for the company, don’t you? (Try answering that Porcupine Question, in front of your CEO, when a sales guy drops it into your lap.)
Pointing out the flaws and telling people this is a bad idea makes you the bad guy who keeps the company from being successful – or at least positions you as the impediment to success. But asking the right questions, or providing alternative perspectives in a positive way, can make you the smart, cautious person who saved the company from serious problems. It’s tough to sit through project scoping meetings and think about what could go wrong when your peers are all wide-eyed and dreamy about some cool new web service. Based on some hard-learned lessons, I would modify Shrdlu’s point to say you need to find clever ways to make the presenter of the action address the risks. You need to develop some IT Project Judo moves to place both the good and the bad at the feet of those who propose the actions. It’s all in how you go about it.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian at Dark Reading on PCI Token Alternatives.

Favorite Securosis Posts
  • Mike Rothman: Symantec Bets on Data Protection with PGP and GuardianEdge.
  • Rich: FireStarter: Centralize or Decentralize the Security Organization?
  • Adrian Lane: Incite 4/27/2010: Dishwasher Tales. I was re-arranging just before I read this post.
  • David Mortman: Understanding and Selecting SIEM/Log Management: Introduction.

Other Securosis Posts
  • Friday Summary: April 23, 2010.

Favorite Outside Posts
  • Mike Rothman: 10 Quick, Dirty and Cheap Things to Improve Enterprise Security.
  • Rich Mogull: Wozniak, Apple Security, Employee Termination and Gray Powell.
  • Adrian Lane: The Narcissistic Vulnerability Pimp post, along with the responses from Robert Graham, David “Did someone say Pimp?” Maynor, and Russ McRee, purely for their imagery and subtexts.

Project Quant Posts
  • Project Quant: Database Security – Change Management.
  • Project Quant: Database Security – Patch.

Research Reports and Presentations
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.
  • Report: Database Assessment.

Top News and Posts
  • Texas Botnet Herder caught.
  • Metasploit Express.
  • Ponemon Study on Web App Security (registration required). Personally, I need a survey on Ponemon surveys just to keep track. Seems like every time I turn around there is a new one.
  • Brokerage firm fined for data breach.


Understanding and Selecting SIEM/LM: Use Cases, Part 1

When you think about it, security success in today’s environment comes down to a handful of key imperatives. First, we need to improve the security of our environment. We are losing ground to the bad guys, and we’ve got to make some inroads on figuring out what’s being attacked more quickly, and stopping it. Next, we’ve got to do more with less. Yes, it seems the global economy is improving, but we can’t expect to get back to the halcyon days of spend first, ask questions later – ever. With more systems under management we have more to worry about and less time to spend poring over reports, looking for the proverbial needle in the haystack. Given the number of new attacks – counted by any measure you like – we’ve got to increase the efficiency of our resource utilization.

Finally, auditors show up a few times a year, and they want their reports: summary reports, detail reports, and reports that validate other reports. The entire auditor dance focuses on convincing the audit team that you have the proper security controls implemented and effective. That involves a tremendous amount of data gathering, analysis, and reporting just to set up, with continued tweaking over time. It’s basically a full-time job to get ready for the audit, dropped on folks who already have full-time jobs. So we’ve got to automate those functions to the greatest degree possible.

Yes, there are lots of other reasons organizations embrace SIEM and Log Management technology, but these three make up the vast majority of the projects we see funded. So let’s dig into each use case and understand exactly what problem we are trying to solve.

Use Case #1: React Faster

Imagine the typical day of a security analyst. They sit down at their desk, check out their monitors, and start seeing events scroll past. A lot of events, probably millions. Their job is to look at that information, figure out what’s wrong, and identify the root cause of each problem. They probably have alerts set up to report critical issues within their individual system consoles, in an effort to cull the millions of events down to some finite set of things to investigate – per system. So the analyst goes back and forth between the firewall, IPS, and network traffic analysis consoles. If a WAF is deployed, or a database activity monitoring product, they have to deal with those as well. An office chair that swivels easily is a good investment to keep your neck from wearing out.

Security analysts tend to be pretty talented folks, so they do find stuff, based on their understanding of the networks and devices, and their own familiarity with ‘normal’, which allows them to recognize ‘not normal’. There are some events that just look weird but cannot be captured in a policy or rule. Successful reviews arise from the ability of the human analyst to interpret the alerts across the various systems and identify attacks. The issues with this scenario are numerous:

  • Too much data, not enough information: With anywhere from 10 to 2,000 devices to monitor, each generating a couple thousand logs and/or alerts a day, there is plenty of data. The analyst has to turn that data into information, which is a tall order for anyone.
  • Poor signal-to-noise ratio: With that much data, the analyst is likely to investigate only the most obvious attacks. And without some way to reduce the number of alerts to deal with, there will be lots of false positives to wade through, impacting productivity.
  • No “situational awareness”: The new new term in security circles is situational awareness: the concept that anomalous situations are lost in a sea of detail unless the bigger business context is considered. With only events to wade through, a human analyst will lose context and not be able to keep track of the big picture.
  • Too many tools to isolate root cause: Without centralizing data from multiple systems, there is no way to know whether an IPS alert was related to a web attack or some other issue. So the analyst needs to move quickly from system to system to validate and confirm the attack, and to understand the depth of the issue. That approach isn’t particularly efficient, and in an incident situation, time is the enemy.

We’ve written on numerous occasions about the need to react faster, since we can’t predict where the next attack is coming from. The promise of SIEM and Log Management solutions is to help us react faster – and better – and make the world a better place, right? The features and functions a security analyst will employ are:

  • Data aggregation: SIEM/LM solutions aggregate data from many sources, including network, security, servers, databases, applications, etc. – providing the ability to monitor everything. Having all of the events in one place helps avoid missing subtle but important ones.
  • Correlation: Correlation looks for common attributes, and links events together into meaningful bundles. Being able to look at all events in a particular window of time, or everything a specific user did, gives us a meaningful way to investigate security events. This technology provides the ability to perform a variety of correlation techniques to integrate different sources, in order to turn data into useful information. Check out our more detailed view of correlation. (A toy example of a time-window correlation rule follows this list.)
  • Alerting: Automated analysis of correlated events can produce more substantial and detailed alerts, and help identify what needs to be investigated right now.
  • Dashboards: With liberal use of eye candy, SIEM/LM tools take event data and turn it into fancy charts. These charts can assist the analyst in seeing patterns, and more importantly in seeing activity that does not fit a standard pattern, or is not visible when looking at individual log entries.
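As a toy illustration of what a correlation rule does – invented for this post, with a made-up event format – here is a sketch that bundles failed logins by user inside a ten-minute window and raises an alert when one account fails against several different systems:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy correlation rule: bundle login failures by user within a sliding time
# window, and alert when one account fails against several different systems.
WINDOW = timedelta(minutes=10)

def correlate_failed_logins(events, min_systems=3):
    """events: list of dicts with 'time', 'user', 'system', and 'action' keys."""
    failures = defaultdict(list)   # user -> [(time, system), ...] within the window
    alerts = []
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["action"] != "login_failure":
            continue
        recent = [(t, s) for t, s in failures[ev["user"]] if ev["time"] - t <= WINDOW]
        recent.append((ev["time"], ev["system"]))
        failures[ev["user"]] = recent
        systems = {s for _, s in recent}
        if len(systems) >= min_systems:
            alerts.append((ev["user"], sorted(systems), ev["time"]))
    return alerts

# Three failures by the same account on three systems within a few minutes.
start = datetime(2010, 5, 14, 9, 0)
events = [{"time": start + timedelta(minutes=i), "user": "jdoe",
           "system": system, "action": "login_failure"}
          for i, system in enumerate(["vpn", "erp", "database"])]
print(correlate_failed_logins(events))   # [('jdoe', ['database', 'erp', 'vpn'], ...)]
```

A real correlation engine handles many rule types, normalizes dozens of event formats, and scales to millions of events, but the underlying idea – turning scattered events into one meaningful bundle – is the same.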
So ultimately this use case provides the security analyst with a set of automatic eyes and ears to wade through all the data and help identify what’s most important and requires attention now. This is the first


Symantec Bets on Data Protection with PGP and GuardianEdge

Symantec has once again flexed its wallet and bought a spot in the data protection market. By acquiring PGP Corporation for $300MM and GuardianEdge for $70MM in cash, Symantec basically bought the market share lead in endpoint encryption – whatever that means, since encryption is really a number of different markets with distinct buying constituencies and market leaders. We estimate PGP got a multiple of around 4x bookings, and GuardianEdge between 3x and 4x as well, which is pretty generous but not crazy like some of Symantec’s past deals (Vontu, MessageLabs).

So what is Symantec getting in the PGP acquisition? Good FDE. They are getting a well-designed key management product, as well as encryption tools that can be leveraged into the MessageLabs suite of email security tools. PGP also has a lot of desktop encryption customers, which will be a nice bundling option for the endpoint protection suites. While the core encryption technology and key management pieces are very good products, PGP has struggled on the management side. They have not done a very good job of listening to the market, or of addressing ease of use and deployment concerns around Universal Server, especially at the enterprise level. The only thing universal about Universal is how much people hate it. They have been slow to develop mobile and cloud-based services, and their provisioning approach looks like a poor man’s DRM. Good parts, but poorly orchestrated. Looks like they’ll fit right in at Symantec.

GuardianEdge also has a good Full Disk Encryption (FDE) product, which Symantec has been providing via an OEM agreement. Clearly not having an FDE option was a big issue for Symantec, given that their biggest competitors (McAfee, Sophos, & Check Point) have acquired market-leading products and are increasingly bundling them with their endpoint suites. It does raise the question: why acquire GuardianEdge as well? We surmise the decision was based on momentum and product strength. Symantec has been selling GuardianEdge for a while, and having to migrate those customers to PGP would be unpleasant. Additionally, GuardianEdge’s product is strong in the critical places where PGP is weak. They have a much better rights management console, and their endpoint management and smartphone infrastructure are each clearly a step ahead of PGP. On paper, the products from PGP and GuardianEdge are more synergistic than competitive.

Which brings us to the blind spot in these deals: strategy and integration. Symantec must now stitch pieces of technology from these two companies together, which will not be easy. It’s never simple just from a technology perspective, but now Symantec also has to reconcile three separate cultures. They will also need to create an over-arching data protection strategy, including how DLP plays into the architecture. Strategy is not Symantec’s strong suit, but in order to really achieve leverage and earn back their investment, they must communicate a strong data protection strategy and then integrate the products to make it a reality. And there are mixed messages for the target audience: with mobile device support and policy management more tuned for corporate environments, how will these products work for Symantec’s government clients?

I think PGP was one of the first security tools I ever purchased. I have been using their email encryption product for over a dozen years, starting with version 5 way back in the mid-90s. PGP is as close to a household name as you get for encryption. It was always reliable, easy to use, and secure. Their full disk encryption product – as a single-user product – was the best I have used. They have all the pieces you need for mobile device and data encryption, but they have not executed as well as they should have. And as a Mac user, their crappy iPhone support, and warning users that OS X updates would destroy data – several days after the update was announced – were not at all cool.

But those are all personal observations. As far as the market is concerned, encryption is just a tool for security. There are hundreds of use cases for encryption, but ultimately encryption needs to be embedded within applications, email clients, and the OS to have its full impact. Encryption as a standalone market opportunity? Not so much. Which is why the deal makes sense on a number of levels. But as Symantec has proven over the past 5 years, having all the pieces doesn’t make it successful. Just having a giant freakin’ sales force is not enough. The onus is on them to actually execute on these deals. We’ll see if the new Enrique Salem regime has better luck making big deals work.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.