Home Business Payment Security

We have covered this before, but every now and again I run into a new slant on who bears responsibility for online transaction safety. Bank? Individual? If both, where do the responsibilities begin and end?

Over the last year a few friends, ejected from longtime professions by the current economic depression, have started online businesses. A couple of these individuals did not even know what HTML was last year – but now they are building web sites, starting blogs, and … taking credit cards online. It came as a surprise to several of these folks when their payment processors fined them, or disrupted service entirely, because they had failed a remote security audit. It seems the web site itself passed its audit, with a handful of cautionary notices the auditor recommended they address. What failed was the management terminal – the home computer used to dial into the account had several severe issues. What made my friend aware there was a problem at all were the extra charges on his bill for, in essence, having crappy security. What a novel idea to raise awareness and motivate merchants!

I applaud providing merchants the resources to help secure their environments. I also worry that this is a way for payment processors to “pass the buck” and lower their own security obligations. That’s probably because I am a cynic by nature, which is why I ended up in security – but that’s a different story. Not having started a small business that takes credit cards online, I was ignorant of many measures payment processors are taking to raise the bar for security on end-user systems. They send out guidance on basic security measures, conduct assessments, provide results, and suggest additional security measures. In fact, the list of suggested security improvements from the processor – or the processor’s service provider – looks a lot like what is covered in a PCI self-assessment questionnaire: firewall rules, use of admin accounts, egress filtering, and so on. I thought this was pretty cool! But on the other side of the equation, all the credit card billing happens on the web site, without the merchants ever collecting credit card numbers. Good idea? Overkill?

These precautions are absolutely overwhelming for most people, especially one-person shops like the ones my friends operate. They have absolutely no idea what a TCP reset is, or why they failed the test for it. They have never heard of egress filtering. Yet they are expected to apply home office security measures just like large retail merchants. Part of me thinks they need this basic understanding if they are going to conduct commerce online. Another part of me thinks they are being set up for failure.

I spent about 40 minutes on the phone today giving one friend some guidance. My first piece of advice was to set up a virtual environment and use it for banking – and banking only. Then I focused on how to pass the audit. My goals for this conversation were to:

  • Not overwhelm him with technical jargon and concepts he simply did not, and would not, understand.
  • Get him to pass the next audit with minimum effort on his part, and without having to buy any new hardware or software.
  • Call his ISP, bank, and payment processor and wring out of them any tools and assistance they could provide.
  • Turn on the basic Windows firewall and basic router security.

Honestly, the second item was the most important.
Despite this person being really smart, I did not have any faith that he could set things up correctly – certainly not the first time, and perhaps not ever. So I, like many, just got him to where he could “check the box”. I just advised someone to do the minimum to pass a pseudo-PCI audit. Sigh. I’ll be performing penance for the rest of the week.
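For readers in the same boat, the last item on that list – turning on the built-in Windows firewall – is also the easiest to verify. Here is a minimal sketch in Python, assuming a Windows machine where the netsh advfirewall command is available; the output parsing is illustrative rather than robust:

    import subprocess

    def firewall_on_for_all_profiles():
        # netsh advfirewall prints a "State ON/OFF" line per profile.
        output = subprocess.check_output(
            ["netsh", "advfirewall", "show", "allprofiles", "state"],
            text=True,
        )
        states = [line.split()[-1] for line in output.splitlines()
                  if line.strip().startswith("State")]
        return bool(states) and all(s.upper() == "ON" for s in states)

    if __name__ == "__main__":
        print("Firewall enabled for all profiles:",
              firewall_on_for_all_profiles())

It won’t pass an audit by itself, but it is the kind of check a non-technical merchant can run before the assessor does.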


Friday Summary: July 9, 2010

Today is the deadline for RSA speaker submissions, so the entire team was scrambling to get our presentation topics submitted before the server-crashing late rush. One of the things that struck me about the submission suggestions is that general topics are discouraged. RSA notes in the submission guidelines that 60% of the attendees have 10 or more years of security experience. The idea seems to be that with a more advanced audience, introductory and general-audience presentations don’t hold attention, so intermediate and advanced sessions are encouraged. And I bet they are right about that, given the success of other venues like BlackHat, Defcon, and Security B-Sides.

Still, I wonder if that is the right course of action. Has security become a private club? Are we so caught up in the security ‘echo chamber’ that we forget about the mid-market folks without the luxury of full-time security experts on staff? Perhaps security just is not very interesting without some novel new hack. Regardless, it seems like it’s the same group of us, year after year, talking about the same set of issues and problems.

From my perspective software developers are the weakest link in the security chain. Most coders don’t have 10 years of security experience. Heck, they don’t have two! Only a handful of people I know have been involved in secure code development practices for 10 years or more. Getting developers up to speed on security would be one of the biggest wins, but advanced security topics may well be inaccessible to them. At issue is the balancing act between cutting-edge security discussions that keep researchers up to date, and educating the people who can benefit most.

I was thinking about this during our offsite this week, while Rich and Mike talked about having their kids trained in martial arts when they are old enough. They want the kids to be able to protect themselves when necessary, and were discussing likely scenarios and which art forms they felt would be most useful for, well, not getting their asses kicked. They also want the kids to derive many of the same secondary benefits – respect, commitment, confidence, and attention to detail – many of us unwittingly gained when our parents sent us to martial arts classes. As the two were talking about their kids’ basic introduction to personal security, it dawned on me that this is really the same issue for developers. Not to be condescending and equate coders to children, but what was bugging me was the focus on the leaders in the security space at the expense of opening up the profession to a wider audience. Basic education on security skills doesn’t just build up a specific area of knowledge every developer needs – the entire approach to secure code development makes for better programmers. It reinforces, in a meaningful way, the basic development processes we are taught to follow.

I am not entirely sure what the ‘right’ answer is, but RSA is the biggest security conference, and application developers seem to be a very large potential audience that would greatly benefit from basic exposure to general issues and techniques. Secure code development practices and tools are, and hopefully will remain for the foreseeable future, a major category for growth in security awareness and training. Knowledge of these tools, processes, and techniques makes for better code.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich and Adrian in Essentials Guide to Data Protection.
  • Rich on Top Three Steps to Simplify DLP without Compromise.
  • Rich quoted at Dark Reading on University Data Breaches.

Favorite Securosis Posts

  • Adrian Lane: School’s out for Summer.
  • Mike Rothman: Incite 7/7/2010: The Mailbox Vigil. Normally I don’t toot my own horn, but this was a good deal of analysis. Fairly balanced and sufficiently snarky…
  • David Mortman: School’s out for Summer.
  • Rich: Understanding and Selecting SIEM/LM: Selection Process.

Other Securosis Posts

  • Uh, not so much.

Favorite Outside Posts

  • Adrian Lane: Atlanta Has Dubious Honor of Highest Malware Infection Rate. This was probably not meant to be humorous, but the map of giant bugs just cracked me up. Does this help anyone?
  • Rich: Top Apps Largely Forgo Windows Security Protections. There is more to vulnerability than the operating system. We can only hope these apps get on board with tactics that will make them (and us) harder to pwn.
  • Mike Rothman: RiskIT – Does ISACA Suffer from Dunning-Kruger? Hutton is at it again, poking at another silly risk management certification. I’m looking forward to my “Apparently OK” Risk Certificate arriving any day now.
  • Pepper: HSBC mailing activated debit cards. And to make it better, they didn’t agree that this is a serious problem.
  • David Mortman: The New Distribution of The 3-Tiered Architecture Changes Everything.

Project Quant Posts

  • DB Quant: Protect Metrics, Part 2, Patch Management.
  • DB Quant: Manage Metrics, Part 1, Configuration Management.
  • DB Quant: Protection Metrics, Part 4, Web Application Firewalls.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Regional Trojan Threat Targeting Online Banks.
  • Is Breaking a CAPTCHA a crime?
  • A little more on Cyberwar.

Blog Comment of the Week

This week’s winner is … no one. We had a strong case of ‘blog post fail’ so I guess we cannot expect comments.


Understanding and Selecting SIEM/LM: Integration

They say that no man is an island, and in the security space that’s very true. No system is, either – especially systems tasked with some kind of security management. We expect SIEM and Log Management platforms to suck in every piece of information they can to help with event correlation and analysis, but when it comes down to it, security management is just one aspect of an enterprise’s management stack. SIEM/Log Management is only one discipline in the security management chain, and must feed some portion of its analysis to supporting systems. So integration is clearly key, both to getting value from SIEM/LM, and to making sure the rest of the organization is on board with buying and deploying the technology. A number of enterprise IT management systems should integrate with the SIEM/Log Management platform – ranging from importing data sources, to sending alerts, up to participating in an IT organization’s workflow. We’ve broken the integrations up into inbound (receiving data from another tool) and outbound (sending data/alerts to another tool).

Inbound integration

  • Security management tools: We discussed this a bit when talking about data collection, regarding the importance of broadening the number of data sources for analysis and reporting. These systems include vulnerability management, configuration management, change detection, network behavioral analysis, file integrity monitoring, endpoint security consoles, etc. Typically integration with these systems is via custom connectors, and most SIEM/LM players have relationships with the big vendors in each space.
  • Identity Management: Identity integration was discussed in the last post on advanced features, and is another key source of data for the SIEM/LM platform. This can include user and group information (to streamline deployment and ongoing user management) from enterprise directory systems like Active Directory and LDAP, as well as provisioning and entitlement information to implement user activity monitoring. These integrations also tend to be via custom connectors.

Because these inbound integrations tend to require custom connectors to get proper breadth and fidelity of data, it’s a good idea to learn a bit about each vendor’s partner program. Vendors use these programs to gain access to the engineering teams behind their data sources, but more importantly to devote the resources to developing rules, policies, and reports that take advantage of the additional data.

Outbound integration

  • IT GRC: Given that SIEM/Log Management gathers information useful to substantiate security controls for compliance purposes, it is clearly helpful to be able to send that information to a broader IT GRC (Governance, Risk, and Compliance) platform, which presumably manages the compliance process at a higher level. So integration with whatever IT GRC platform is in use within your organization (if any) is an important consideration when deciding to acquire SIEM/Log Management technology.
  • Help Desk: The analysis performed within the SIEM/Log Management platform provides information about attacks in progress, and usually requires some type of remediation action once an alert is validated. To streamline fixing these issues, it’s useful to be able to submit trouble tickets directly into the organization’s help desk system to close the loop.
    Some SIEM/Log Management platforms have a built-in trouble ticket system, but we’ve found that capability is infrequently used, since all companies large enough to use SIEM/LM also have some kind of external help desk system. Look for the ability not only to send alerts (with sufficient detail to allow the operations team to quickly fix the issue), but also to receive information back when a ticket is closed, and to automatically close the alert within the SIEM platform – see the sketch at the end of this post.
  • CMDB: Many enterprises have also embraced configuration management database (CMDB) technology to track IT assets and ensure that configurations adhere to corporate policies. When trying to ensure changes are authorized, it’s helpful to be able to send indications of changes at the system and/or device level to the CMDB for confirmation.

Again, paying attention to each vendor’s partner program and announced relationships can yield valuable information about the frequency of true enterprise deployment, as large customers demand their vendors work together – often forcing some kind of integration. It also pays to ask vendor references about the integration offerings – because issuing a press release does not mean the integration is functional, complete, or useful.
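To make the help desk round trip concrete, here is a minimal sketch in Python. The REST endpoint, the ticket fields, and the close_alert() callback are hypothetical placeholders – not any particular product’s API:

    import json
    import urllib.request

    HELPDESK_URL = "https://helpdesk.example.com/api/tickets"  # hypothetical

    def open_ticket_for_alert(alert):
        # Send a validated SIEM alert to the help desk, with enough
        # detail for the operations team to fix the issue quickly.
        body = json.dumps({
            "title": "SIEM alert: " + alert["rule"],
            "severity": alert["severity"],
            "source_host": alert["host"],
            "raw_events": alert["events"],
        }).encode()
        req = urllib.request.Request(
            HELPDESK_URL, data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["ticket_id"]

    def on_ticket_closed(ticket_id, siem):
        # Close the loop: when the help desk resolves the ticket,
        # automatically close the corresponding alert in the SIEM.
        siem.close_alert(ticket_id=ticket_id)

The important part is the second function: without the return path, alerts pile up in the SIEM long after operations has fixed the underlying problem.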


Understanding and Selecting SIEM/LM: Advanced Features

We’ve already discussed the basic features of a SIEM/Log Management platform: collection, aggregation and normalization, correlation and alerting, reporting and forensics, and deployment architectures. But those posts cover the core functions – the part every product in the space brings to the table. As markets evolve and vendors push to further differentiate themselves, more and more capabilities are integrated into the platforms. In the case of SIEM/LM, this means pumping more data into the analysis engine, and making the engine smarter. The idea is to make 1+1 produce 5, as multiple data types provide more insight than a single source – that’s the concept, anyway. To be clear, having more data does not directly make the product any better. The only way to really leverage additional data is to build correlation rules, alerts, and reports that utilize it. Let’s take a tour through some of the advanced data types you’ll see integrated into SIEM/LM platforms.

Flow

Network flow data consists of the connection records that stream out of a router or switch. These small and simple data files/streams typically list source, destination, and packet type. Flow was the first new data type which, when integrated with event and log data, really made the systems smarter: it allowed the system to establish a baseline and scan for anomalous network traffic as the first indication of a problem. An entire sub-market of network management – network behavioral analysis – revolves around analyzing and visualizing flow data to understand the traffic dynamics of networks, and pinpointing performance and capacity issues before they impact users. Many of the NBA vendors have been unsuccessfully trying to position their products in the security market, but in combination with events and logs, flow data is very useful.

As an example, consider a typical attack where a web server is compromised and then used as a pivot to further compromise an application server and the backend database server. The data needs to be exfiltrated in some way, so the attackers establish a secure pipe to an external zombie device. But the application server doesn’t typically send data to an external device, so flow data would show an anomalous traffic flow. At that point an administrator could analyze the logs, with correlated activity showing a new account created on the database server, and identify the breach. Could an accurate correlation rule have caught the reconnaissance and subsequent compromise of the servers? Maybe. But the network doesn’t lie, and at some point the attackers need to move the data. These strange network flows can be a great indicator of a successful attack – but remember, strange flows only appear after the attack has occurred, so flow data is really about reacting faster to attacks already underway. Even more powerful is the ability to set up compound correlation rules, which factor in specific events and flow scenarios. Setting up these rules is complicated and they require a lot of tuning, but once the additional data stream is in place, there are many options for how to leverage it.

Identity

Everyone wants to feel like more than just a number, but when talking about SIEM/Log Management, your IP address is pretty much who you are. You can detect many problems by just analyzing all traffic indiscriminately, but this tends to generate plenty of false positives. What about the scenario where a privileged user makes a change on a key server?
Maybe they used a different device, with a different IP address. This would show up as an unusual address for that action, and could trigger an alert. But if the system were able to leverage identity information to know the same privileged user was making the change, all would be well, right? That’s the idea behind identity integration with SIEM/LM. Basically, the analysis engine pulls in directory information from the major directory stores (Active Directory & LDAP) to understand who is in the environment and what groups they belong to, which indicates what access rights they have. Other identity data – such as provisioning and authentication information – can be pulled in to enable advanced analysis, perhaps pinpointing a departed user accessing a key system.

The holy grail of identity integration is user activity monitoring. Yup, Big Brother lives – and he always knows exactly what you are doing. In this scenario you’d set up a baseline for a group of users (such as accounting clerks), including which systems they access, who they communicate with, and what they do. There are a handful of other attributes that help identify a single user even when generic service accounts are used. Then you look for anomalies, such as an accounting clerk accessing the HR system, making a change on a sensitive server, or even sending data to his/her Gmail account. This isn’t a smoking gun, per se, but it does give administrators a place to look for issues. Again, additional data types beyond plain event logs can effectively make the system smarter and streamline problem identification.

Database Activity Monitoring

Recently SIEM/LM platforms have been integrating Database Activity Monitoring (DAM), which collects very detailed information about what is happening to critical data stores. As with the flow data discussed above, DAM can serve up activity and audit data for SIEM. These sources not only provide more data, but add context for analysis, helping with both correlation and forensic analysis. Securosis has published plenty of information on DAM, which you can check out in our research library. The purpose of DAM integration is to drive analysis deeper into database transactions, gaining the ability to detect patterns which indicate successful compromise or misuse. As a simple example, suppose a mobile user gets infected at Starbucks (like that ever happens!) and then unwittingly provides access to the corporate network, and the attacker proceeds to compromise the database. The DAM device monitors the transactions to and from the database, and should see
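To make the flow discussion above a bit more concrete, here is a minimal sketch in Python of the baseline-and-anomaly idea: learn which destinations each internal server normally talks to, then flag flows to new destinations – like an application server suddenly pushing data to an external host. The flow record fields are assumptions for illustration, not any vendor’s format:

    from collections import defaultdict

    class FlowBaseline:
        def __init__(self):
            self.known = defaultdict(set)  # source -> destinations seen

        def train(self, flows):
            for f in flows:  # f = {"src": ..., "dst": ..., "bytes": ...}
                self.known[f["src"]].add(f["dst"])

        def anomalies(self, flows):
            for f in flows:
                if f["dst"] not in self.known[f["src"]]:
                    yield f  # candidate for correlation with event logs

    baseline = FlowBaseline()
    baseline.train([{"src": "appserver", "dst": "db01", "bytes": 4096}])
    suspect = [{"src": "appserver", "dst": "203.0.113.9", "bytes": 10000000}]
    for flow in baseline.anomalies(suspect):
        print("anomalous flow:", flow)

A real NBA engine baselines volumes, ports, and time-of-day as well, but the principle is the same: the network doesn’t lie.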


Understanding and Selecting SIEM/LM: Data Management

We covered SIEM and Log Management deployment architectures in depth to underscore how different models are used to deal with scalability and data management issues. In some cases these deployment choices are driven by the underlying data handling mechanism within the product. In other words, each platform stores and manages data differently, and these decisions have significant impact on product scalability, data management, and reporting & forensics capabilities. Here we discuss the different internal data storage models, with the advantages and disadvantages of each.

Relational Database

In the early days of this technology, most SIEM and log management systems were built on relational database engines to store events and log records. In this model the SIEM platform maps data attributes from each data source into database columns, so each event is stored in a single database row. There are numerous advantages to this model, including:

  • Data Validation – As data is inserted into the column, the database verifies data type and range settings. Records that fail integrity checks indicate corrupted files, and are omitted from the import, with notification to administrators.
  • Event Consistency – An event from a Cisco router now looks just like an event from a Juniper router, and vice-versa, as events are normalized before being stored in the table.
  • Reporting – Reports are easier to generate from validated data columns, and the database can format data when generating the report. Reports run far faster thanks to column indices, effectively filtering and ordering events.
  • Analytics – An RDBMS facilitates complex queries across all available attributes, inspected content, and correlation.

This model for data storage has fallen out of favor due to the overhead of data insertion: as each row is inserted the database must perform the checks and periodically rebuild indices. As daily event volumes scaled from millions to hundreds of millions and billions, this overhead became problematic, and resulted in significant scalability issues with SIEM offerings built on RDBMS. Further, data that does not fit into the tables defined in the relational model is typically left out. Unless there is some other method to maintain the fidelity and integrity of the original event records, this is problematic for forensics. This “selective memory” can also result in data accuracy issues, as truncated records may not correlate properly and can hamper analysis. As a result SIEM/LM architectures based on RDBMS are waning, as products in this space re-architect their backend data stores to address these issues. On the other hand, RDBMS storage is not totally dead – some vendors have chosen to streamline data insertion, basically by turning off some RDBMS checks and integrity verification. Others use an RDBMS to supplement a flat file architecture (described below), leveraging the advantages above for reporting and forensics.

Flat File

Flat files, or just ‘files’, are now the most common way to store events for SIEM and Log Management. Files serve as a blank canvas for the vendor, who can introduce any structure they choose to help define, format, and delineate events. Anything that helps with correlation and speeds up future searches is included, and each vendor has their own secret sauce for building files. Each file typically contains a day’s events, possibly from a single source, with each event clearly delineated.
The files (in some cases each event) can be tagged with additional information – this is called “log enrichment”. These tags offer some of the contextual benefits of a relational database, and help to define attributes. Some vendors even include a control structure similar to VSAM files. The events may be stored in their raw form, or normalized prior to insertion. Flat files offer several advantages:

  • Performance – Since normalization (to the degree necessary) happens before data insertion, there is very little work to be performed at insertion time compared to a relational database. Data is stored as quickly as the physical media can handle, and is often available immediately for searching and analysis.
  • Flexibility – Stored events are not limited to specific normalized columns as they are in a relational database, but can take any form. Changes to internal file formats are much easier.
  • Search – Searches can be performed without understanding the underlying structures, using simple keyword search. At least one log management vendor provides a Google-style search capability across data files. Alternately, search can rely on tags and keywords established by the vendor.

The flat file tradeoffs are twofold. First, any data management capabilities – such as indexing and data integrity – must be built from scratch by the vendor, since no RDBMS capabilities are provided by the underlying platform. This means the SIEM/LM vendor must provide any needed facilities for data integrity, normalization, filtering, and indexing. Second, there is an efficiency tradeoff. Some vendors tag, index, and normalize prior to insertion; others initially record raw events, later re-reading the data to normalize it, and then rewrite the reformatted data. The latter method offers faster insertion, at the expense of greater total storage and processing requirements.

The good news is that a few years ago most vendors saw the scalability wall of RDBMS approaching, and began investing in their own back-end data management environments. At this point many platforms feature purpose-built high-performance data stores, and we believe this will be the underlying architecture for these products moving forward. Of course, we don’t live in an either/or world, so many of the platforms combine some RDBMS capabilities with flat file aspects. Yes, the answer can be ‘both’.
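As a rough illustration of the flat file model, here is a minimal sketch in Python that enriches events with tags and appends them to a per-day file as self-delimiting JSON records. The tagging logic and field names are invented for illustration – each vendor’s real secret sauce is considerably more sophisticated:

    import json
    import time

    def enrich(raw_event, source):
        # "Log enrichment": tag the event so later searches are fast,
        # but keep the raw record intact for forensics.
        return {
            "ts": time.time(),
            "source": source,
            "tags": ["auth"] if "login" in raw_event else [],
            "raw": raw_event,
        }

    def append_event(raw_event, source):
        day = time.strftime("%Y%m%d")
        # One file per day; each line is one normalized event.
        with open("events-%s.log" % day, "a") as f:
            f.write(json.dumps(enrich(raw_event, source)) + "\n")

    append_event("failed login for admin from 10.0.0.5", "sshd")

Note how little work happens at insertion time – that is exactly why this model scales where row-by-row RDBMS insertion does not.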


Friday Summary: June 25, 2010

Thursday was totally shot. I wasted the entire day standing around – eight hours and twenty-nine minutes standing in line. I got in line at 5:50 AM and did not get back in my car until 3:00. Yep, it was Apple iPhone day. And I did not have a reservation.

If you like people-watching, this is about as much fun as you will ever have. There were some 700 people at the mall by 6:30 AM. Close to me in line were two women with infants, and they were there all day. There were small children with their grandparents. The guy next to me had a shattered foot from a recent car accident. There were people calling their bosses, not to call in sick, but to tell them they were taking the day off to buy iPhones. These people were freakin’ dedicated. I have not stood in line for any length of time since 1983, trying to get a good seat for Return of the Jedi. I have not stood in line without knowing whether I would get what I was there for since the Tutankhamun exhibit in, what, 1979? This is not something I do, but I wanted the phone.

Actually I did not want the ‘phone’, but everything else. I wanted a (mostly) secure way to get email on the road. I wanted a mobile device to surf the web. I wanted a way to find Thai food. I wanted a better camera. I wanted a way to get directions when traveling. I wanted to have music with me. I wanted to access files in Dropbox whenever and wherever. And the BlackBerry did none of these things well, if at all. Plus, as a device, the BlackBerry is a poorly-engineered turd in comparison. I was just done with it, and I wanted the iPhone, and I wanted it before Black Hat. So there I stood, for eight and a half hours, holding a place in line for a guy with a broken foot so he could sit on the mall couch.

I have to say the Apple employees were great. Every 30 minutes they brought us water and Starbucks coffee. Every 15 minutes they brought snacks. They sent employees into the line to chat. They brought sample phones and sat with us, in line, to demo the differences. They thanked us for sticking it out. They asked us if we needed anything, holding places in line and bringing food. They took care of every part of the transaction, including dealing with AT&T and their inability to process credit cards without dialing up Equifax. Great products and great service … it’s like I was transported back in time to an age when those things mattered.

All in all I am glad I waited it out and got my phone. The camera is amazing. The display is crystal-clear. The phone does not have the hideous ‘pops’ in audio that blow my ears out, or randomly shut off for 20 seconds, like the BlackBerry did. And the FaceTime feature works really well, for what it’s worth. Would I do it again? Would I stand there for 8.5 hours? Ask me in another 25 years.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Chris Pepper gives us The Sun Also Sets. Time to kick Oracle a little. What a bloody fiasco!
  • Adrian’s Dark Reading post on Open Source Database Security Issues.
  • Rich on the Network Security Podcast, number 202.

Favorite Securosis Posts

  • Rich: Understanding and Selecting SIEM/LM: Deployment Models. Adrian and Mike do a great job of diagramming out the different deployment models. Really clear.
  • Mike Rothman: The Open Source Database Security Project. Adrian needs to flex his database security kung-fu, and we aren’t going to get in his way. Help him out – it’s a great project.
  • Adrian Lane: Trustwave Acquires Breach.
    I have not seen anyone openly discuss the apparent conflicts of interest, nor how this changes PCI compliance, the way Rich has captured it.

Other Securosis Posts

  • Understanding and Selecting a Tokenization Solution: Introduction.
  • Are Secure Web Apps Possible?
  • Incite 6/23/2010: Competitive Fire.
  • FireStarter: Is Full Disk Encryption without Pre-Boot Secure?
  • Return of the Security Start-up?
  • Friday Summary: June 18, 2009.
  • Doing Well by Doing Good (and Protecting the Kids).

Favorite Outside Posts

  • Rich: Why the Disclosure Debate Doesn’t Matter. Dennis nails it. Bad guys don’t give a rat’s ass what we think of disclosure; they still have plenty to own us with.
  • Mike Rothman: Security Intelligence: Defining APT Campaigns. Good analysis of what’s involved in detecting a multi-faceted complex intrusion, from Mike Cloppert. If you have a great forensics person who is good at this, pay them more. Those skills are gold.
  • Adrian Lane: Anti-WAF Software Only Security Zealotry. Only because Jeremiah wrote this before I did.

Project Quant Posts

  • DB Quant: Manage Metrics, Part 1, Configuration Management.
  • DB Quant: Protection Metrics, Part 4, Web Application Firewalls.
  • DB Quant: Protect Metrics, Part 3, Masking.
  • DB Quant: Protect Metrics, Part 2, Encryption.
  • DB Quant: Protect Metrics, Part 1, DAM Blocking.
  • NSO Quant: Manage IDS/IPS Process Map.
  • DB Quant: Monitoring Metrics, Part 2, Audit.
  • DB Quant: Monitoring Metrics, Part 1, DAM.
  • NSO Quant: Manage Firewall Process Map.
  • DB Quant: Secure Metrics, Part 4, Shield.
  • DB Quant: Secure Metrics, Part 3, Restrict Access.
  • DB Quant: Secure Metrics, Part 2, Configure.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.
  • Report: Database Assessment.

Top News and Posts

  • Firefox & Opera updates.
  • Improving HTTPS Side Channel Attacks.
  • Google wins Viacom suit.
  • MS plans 10 new patches. SharePoint and IE are the big ones.
  • Cyber Thieves Rob Treasury Credit Union.
  • Ukrainian arrested in India on TJX data-theft charges. These incidents go on for years, not days or even months.
  • iPhone PIN code worthless. Rich published on this a long time ago, but automounting on Ubuntu is new and disturbing. Previously people believed you had to jailbreak


The Open Source Database Security Project

I am thinking about writing a guide to securing open source databases, including verification queries. Do you all think that would be useful?

For the most part, when I write about database security, I write about generic approaches that apply to all database platforms. I think this is helpful for database managers, as well as security and IT professionals whose projects span multiple database types. When writing the Database Security Fundamentals series, my goal was to provide a universal checklist of database security basics that anyone with basic DBA skills could work through in a week. DBAs who work in large enterprises may have established guidelines, but small and medium sized firms generally don’t, and I wanted the series to provide awareness of what to look for and what to do. I also find that mainstream Oracle DBAs tune out because I don’t provide specific queries or discuss native features. The downside is that the series covers what to do, but not how to do it. By taking a more abstract look at the problems to be solved across security and compliance, I cannot provide specific details that will help with Oracle, Sybase, Teradata, PostgreSQL, or others – there are simply too many policies for too many platforms for me to cover sufficiently. Most DBAs know how to write the queries to fulfill the policies I outlined. For the non-DBA security or IT professional, I recognize that what I wrote leaves a gap between what you should do and how to do it. To close this gap you have a couple of options:

  • Acquire tools like DAM, encryption, and assessment from commercial vendors
  • Participate on database chat boards and ask questions
  • RTFM
  • Make friends with a good DBA

Yes, there are free tools out there for assessment, auditing, and monitoring. They provide limited value, and that may be sufficient for you. I find the free assessment tools are pretty bad, because they usually only work for one database and their policies are miserably out of date. Further, if you try to get assessment from a commercial vendor, they don’t cover open source databases like Derby, PostgreSQL, MySQL, and Open Ingres. These platforms are totally underserved by the security community, but most have very large installed user bases. You have to dig for information, and cobble together stuff for anything that is not a large commercial platform.

So here is what I am thinking: through the remainder of the year I am going to write a security guide to open source databases. I will create an overview for each of the platforms (PostgreSQL, Derby, Ingres, and MySQL), and cover the basics for passwords, communications security, encryption options, and so forth, including specific assessment policies and rules for baselining the databases. Every week I’ll provide a couple new rules for one platform, and I will write some specific assessment policies as well. This is going to take a little resourcefulness on my part, as I am not even sure my test server boots at this point, and I have never used Derby, but what the heck – I think it will be fun. We will post the assessment rules much like Rich and Chris did for the ipfw Firewall Rule Set.

So what do you think? Should I include other databases? Should I include under-served but non-open-source databases such as MS Access and Teradata? Anyone out there want to volunteer to test scripts (because frankly I suck at query execution plans and optimization nowadays)?
Let me know, because I have been kicking this idea around for a while, but it’s not fully fleshed out, and I would appreciate your input.
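To give a taste of the format I have in mind, here is one candidate assessment rule for MySQL, sketched in Python – flagging accounts with empty passwords. It assumes the MySQLdb driver and the 5.x-era mysql.user schema, and it is an example of the approach rather than a finished audit policy:

    import MySQLdb

    def accounts_without_passwords(host="localhost", user="audit", passwd=""):
        # Policy: no database account may have an empty password.
        conn = MySQLdb.connect(host=host, user=user, passwd=passwd)
        cur = conn.cursor()
        cur.execute("SELECT User, Host FROM mysql.user WHERE Password = ''")
        findings = cur.fetchall()
        conn.close()
        return findings  # each row is a policy failure to remediate

    for account, host in accounts_without_passwords():
        print("FAIL: account %r at host %r has no password" % (account, host))

Each platform in the guide would get a set of rules like this, plus the rationale for why the check matters.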


Understanding and Selecting SIEM/LM: Deployment Models

We have covered the major features and capabilities of SIEM and Log Management tools, so now let’s discuss architecture and deployment models. Each architecture addresses a specific issue, such as coverage for remote devices, scaling across hundreds of thousands of devices, real-time analysis, or handling millions of events per second. Each has advantages and disadvantages in analysis performance, reporting performance, scalability, storage, and cost. There are four models to discuss: ‘flat’ central collection, hierarchical, ring, and mesh. As a caveat, none of these deployment models is mutually exclusive. Some regions may deploy a flat model, but send information up to a central location via a hierarchy. These are not absolutes, just guidelines to consider as you design your deployment to solve the specific use cases driving your project.

Flat

The original deployment model for SIM and log management platforms was a single server that collected and consolidated log files. In this model all log storage, normalization, and correlation occurs within a central appliance. All data collection methods (agent, flow, syslog, etc.) are available, but data is always stored in the same central location. A flat model is far simpler to deploy. All data and policies reside in a single location, so there are no policy or data synchronization issues. But ultimately a flat central collection model is limited in scalability, processing, and the quantity of data it can manage. A single installation provides a fixed amount of processing and storage, and reporting becomes progressively harder and slower as data sets grow. Truth be told, we only see this kind of architecture for “checkbox compliance”, predominantly at smaller companies with modest data collection needs. The remaining models address the limitations of this base architecture.

Ring

In the Ring model – or what Mike likes to call the Moat – you have a central SIEM server ringed by many log collection devices. Each logger in the ring is responsible for collecting data from event sources. These log archives are also used to support distributed reporting. The log devices send a normalized and filtered (so substantially reduced) stream of events to the master SIEM device, which is responsible for correlation of events and analysis. This architecture was largely designed to address scalability limitations of some SIEM offerings. It wasn’t cost effective to scale the SIEM engine to handle mushrooming event traffic, so surrounding the SIEM centerpiece with logging devices allowed it to analyze the most critical events, while providing a more cost-effective scaling mechanism. The upside of this model is that simple (cheaper) high-performance loggers do the bulk of the heavy lifting, and the expensive SIEM components provide the meat of the analysis. This model addresses scalability and data management issues, while reducing the need to distribute code and policies among many different devices.

There are a couple issues with the ring model. The biggest problem remains a lack of integration between the two systems. Management tools for the data loggers and the SIEM may be linked together with some type of dashboard, but you quickly discover the two-headed monster of two totally separate products under the covers.
Similarly, log management vendors were trying to graft better analysis and correlation onto their existing products, resulting in a series of acquisitions that provided log management players with SIEM. Either way, you end up with two separate products trying to solve a single problem. This is not a happy “you got your chocolate in my peanut butter” moment, and it will continue to be a thorny issue for customers until vendors fully integrate their SIEM and log management offerings, as opposed to marketing band-aid dashboards as integrated products.

Mesh

The last model we want to discuss is the mesh deployment. The mesh is a group of interrelated systems, each performing full log management and SIEM functions for a small part of the environment. Basically this is a cluster of SIEM/LM appliances, each a functional peer with full analysis, correlation, filtering, storage, and reporting for local events. The servers can all be linked together to form a mesh, depending on customer needs. While this model is more complex to deploy and administer, and requires a purpose-built data store to manage high-speed storage and analysis, it does solve several problems. For organizations that require segregation of both data and duties, the mesh model is unmatched. It provides the ability to aggregate and correlate specific segments or applications on specific subsets of servers, making analysis and reporting flexible. Unlike the other models, it can divide and conquer processing and storage requirements flexibly, depending on the requirements of the business rather than the scalability limitations of the product being deployed.

Each vendor’s product is capable of implementing two or more of these models, but typically not all of them. Each product’s technical design (particularly the datastore) dictates which deployment models are possible. Additionally, the level of integration between the SIEM and Log Management pieces has an effect as well. As we said in our introduction, every SIEM vendor offers some degree of log management capability, and most Log Management vendors offer SIEM functions. This does not mean the offerings are fully integrated, by any stretch. Deployment and management costs are clearly affected by product integration or lack thereof, so do your due diligence during the purchase process to understand the underlying product architecture, and the limitations and compromises necessary to make the product work in your environment.
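Returning to the ring model for a moment, the division of labor is easy to sketch in code: the loggers archive everything locally and forward only a normalized, filtered subset to the central SIEM. Here is a minimal sketch in Python, where the severity scheme and the forward() transport are assumptions for illustration:

    SEVERITY = {"debug": 0, "info": 1, "warn": 2, "critical": 3}

    def normalize(raw):
        sev, _, msg = raw.partition(" ")
        return {"severity": sev, "message": msg}

    def collect(raw_events, local_store, forward, threshold="warn"):
        for raw in raw_events:
            event = normalize(raw)
            local_store.append(event)  # full archive stays on the logger
            if SEVERITY[event["severity"]] >= SEVERITY[threshold]:
                forward(event)  # reduced stream to the central SIEM

    archive = []
    collect(["info user logged in", "critical disk tampering detected"],
            archive, forward=lambda e: print("to SIEM:", e))

The cheap loggers absorb the full event volume; only the events worth correlating cross the network to the expensive analysis tier.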


Friday Summary: June 11, 2010

This Monday’s FireStarter prompted a few interesting behind-the-scenes conversations with a handful of security vendors, centering on product strategy in the face of the recent acquisitions in Database Activity Monitoring. The questions were mostly around the state of the database activity monitoring market, where it is going, and how the technology complements and competes with other security technologies. But what I consider a common misconception came up in all of these exchanges, having to do with the motivation behind Oracle’s and IBM’s recent acquisitions. The basic premise went something like: “Of course IBM and Oracle made investments into DAM – they are database vendors. They needed this technology to secure databases and monitor transactions. Microsoft will be next to step up to the plate and acquire one of the remaining DAM vendors.”

Hold on. Not so fast! Oracle did not make these investments simply as a database vendor looking to secure its database. IBM is a database vendor, but that is more coincidental to the Guardium acquisition than a direct driver for their investment. Security and compliance buyers are the target here. That is a different buying center than for database software, or just about any hardware or business software purchase. I offered the following parallel to one vendor: if these acquisitions are the database equivalent of SIEM monitoring and auditing the network, that logic implies we should expect Cisco and Juniper to buy SIEM vendors – but they haven’t. It’s the operations and security management companies who make these investments. The customer for DAM technologies is the operations or security buyer. That’s not the same person who evaluates and purchases databases and financial applications. And it’s certainly not a database admin! The DBA is only an evaluator of efficacy and ease of use during a proof of concept.

People think that Oracle and IBM, who made splashes with the Secerno and Guardium purchases, were the first big names in this market, but that is not the case. Database tools vendor Embarcadero and security vendor Symantec both launched and folded failed DAM products long ago. Netezza is a business intelligence and data warehousing firm. Fortinet describes themselves as a network security company. Quest (DB tools), McAfee (security), and EMC (data and data center management) have all kicked the tires at one time or another, because their buyers have shown interest. None of these firms are database vendors, but their customers buy technologies to help reduce management costs, facilitate compliance, and secure infrastructure. I believe the Guardium and Secerno purchases were made for operations and security management. It made sense for IBM and Oracle to invest, but not because of their database offerings. These investments were logical because of their other products, because of their views of their role in the data center, and thanks to their respective visions for operations management. Ultimately that’s why I think McAfee and EMC need to invest in this technology, and Microsoft doesn’t.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading post: Massachusetts Data Privacy Standard: Comply Or Not?
  • Rich quoted in Entrepreneur Magazine.
  • Mike quoted in Information Security Magazine.
  • Adrian quoted in Open Source Databases Pose Unique Security Challenges.
  • Rich, Zach, and Martin on episode 200 of the Network Security Podcast.

Favorite Securosis Posts

  • Rich: Draft Data Security Survey for Review.
    It’s been a weird week over here, as all of our posts were nuts and bolts for various projects. I got some great feedback on this draft survey, with a few more comments I need to post, but it could also use more review if any of you have the time.
  • Mike Rothman: FireStarter: Get Ready for Oracle’s New WAF. Oracle has a plan. But it’s a secret. Speculating about it is fun.
  • David Mortman: FireStarter: Get Ready for Oracle’s New WAF. Welcome, Oracle, to the first WAFs club.
  • Adrian Lane: One of our meatier Quant posts: Configure.

Other Securosis Posts

  • Incite 6/9/2010: Creating Excitement.
  • Draft Data Security Survey for Review.
  • Friday Summary: June 4, 2010.

Favorite Outside Posts

  • Rich: Why sensible people reject the truth. While it isn’t security specific, this article from New Scientist discusses some of the fascinating reasons people frequently reject science and facts which conflict with their personal beliefs. As security professionals our challenges are often more about understanding people than technology.
  • Mike Rothman: Not so much an “E” ticket. Magical ideas about how TSA can be more Mouse-like, from Shrdlu.
  • David Mortman: Google Changed Reputation and Privacy Forever.
  • Adrian Lane: Raffael Marty wrote a really good post on Maturity Scale for Log Management and Analysis.

Project Quant Posts

  • DB Quant: Secure Metrics, Part 4, Shield.
  • DB Quant: Secure Metrics, Part 3, Restrict Access.
  • DB Quant: Secure Metrics, Part 2, Configure.
  • DB Quant: Secure Metrics, Part 1, Patch.
  • NSO Quant: Monitor Process Map.
  • DB Quant: Discovery Metrics, Part 4, Access and Authorization.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • Microsoft, Apple Ship Security Updates, via Brian Krebs.
  • Mass SQL Injection Attack, from our friends over at Threatpost.
  • Good advice: Three things to harden OpenSSH on Linux.
  • Is correlation killing the SIEM market?
  • Windows Help Centre vuln, and some commentary on disclosure.
  • Digital River sues over data breach.
  • IT lesson from BP disaster.
  • AT&T leaked iPad owner data. This one correctly points out that it’s an AT&T breach, rather than pretending it was an Apple problem to scare up traffic.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Usually when a comment starts with “This is a terrific idea …” it gets deleted as blog spam, but not this week, as the best comment goes to DMcElligott, in response to Rich’s Draft Data Security Survey for Review:

    This is a terrific idea. I am very curious about the results you see from this. My suggestions: In the regulation questions


Understanding and Selecting SIEM/LM: Reporting and Forensics

Reporting and forensics are the principal products of a SIEM system. We have pushed, prodded, and poked at the data to get it into a manageable format, so now we need to put it to use. Reports and forensic analysis are the features most users work with on a day to day basis. Collection, normalization, correlation, and all the other things we do are just to get us to the point where we can conduct forensics and report on our findings. These features play a big part in customer satisfaction, so while we’ll dig in to describe how the technology works, we will also discuss what to look for when making buying decisions.

Reporting

For those of us who have been in the industry for a long time, the term ‘reporting’ brings back bad memories. It evokes hundreds of pages of printouts on tractor feed paper, with thousands of entries, each row looking exactly the same as the last. It brings to mind hours of scanning these lines, yellow highlighter in hand, marking unusual entries. It brings to mind tailoring reports to include new data, excluding unneeded columns, importing files into print services, and hoping nothing got messed up in a way that would require starting over from the beginning. Those days are fortunately long gone, as SIEM and Log Management have evolved to automate much of this work, providing graphical representations that allow viewing data in novel ways. Reporting is a key capability precisely because this process used to be just plain hard work.

To evaluate the reporting features included in SIEM/LM, we need to understand what reporting is, and the stages of the reporting process. You will notice from the description above that there are several different steps in the production of reports, and depending on your role, you may see reporting as basically one of these subtasks. The term ‘reporting’ is a colloquialism that encompasses a group of activities: selecting, formatting, moving, and reviewing data are all parts of the reporting process.

So what is reporting? At its simplest, reporting is just selecting a subset of the data we previously captured for review, focused analysis, or a permanent record (‘artifact’) of activity. Its primary use is to put data into an understandable form, so we can analyze activity and substantiate controls without having to comb through lots of irrelevant stuff. The report comprises the simplified view needed to facilitate review or, as we will discuss later, forensic analysis. We also should not be constrained by the traditional definition of a report as a stack of papers (or these days, a PDF) – our definition of reporting embraces views within an interface that facilitate analysis and investigation. The second common use is to capture and record events that demonstrate completion of an assigned task. These reports are historical records kept for verification. Trouble-ticket work orders and regulatory reports are common examples, where a report is created and ‘signed’ by both the producer of the report and an auditor. These snapshots of events may be kept within, or stored separately from, the SIEM/LM system.

There are a few basic aspects of reporting to pay close attention to when evaluating SIEM/LM capabilities:

  • What reports are included with the standard product?
  • How easy is it to manage and automate reports?
  • How easy is it to create new, ad-hoc reports?
  • What export and integration options are available?
For many standard tasks and compliance needs, pre-built reports are provided by the vendor to lower costs and speed up product deployment. At minimum, vendors provide canned reports for PCI, Sarbanes-Oxley, and HIPAA. We know that compliance is the reason many of you are reading this series, and will be the reason you invest in SIEM. Reports embody the tangible benefit to auditors, operations, and security staff. Just keep in mind that 2,000 built-in reports is not necessarily better than 100, despite vendor claims. Most end users rely on 10-15 reports on an ongoing basis, and those must be automated and customized to the user’s requirements. Most end users want to feel unique, so they like to customize the reports – even when the built-in reports are fine. But there is a real need for ad-hoc reports in forensic analysis and when implementing new rules. Most policies take time to refine, to be sure we collect only the data we need, and that what we collect is complete and accurate. So the reporting engine needs to make this process easy, or the user experience suffers dramatically. Finally, the data within the reports is often shared across different audiences and applications. The ability to export raw data for use with third-party reporting and analysis tools is important, and demands careful consideration during selection.

People say end users buy interface and reports, and that is true for the most part. We call that broad idea user experience, and although many security professionals minimize the focus on reporting during the evaluation process, that can be a critical mistake. Reports are how you will show value from the SIEM/LM platform, so make sure the engine can support the information you need to show.

Forensics

It was just this past January that I read an “analyst” report on SIEM, where the author felt forensic analysis was policy driven. The report claimed that you could automate forensic analysis and do away with costly forensic investigations. Yes, you could have critical data at your fingertips by setting up policies in advance! I nearly snorted beer out my nose! Believe me: if forensic analysis were that freaking easy, we would detect events in real time and stop them from happening! If we knew in advance what to look for, there would be no reason to wait until afterwards to perform the analysis – instead we would alert on it. And this is really the difference between alerting on data and forensic analysis of the same data. We need to correlate data from multiple sources and have a
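As a toy illustration of that ‘select a subset’ step, here is a minimal sketch in Python of an ad-hoc report that filters one day’s events and rolls them up by host. The event fields are assumptions; a real platform runs the equivalent against its own data store:

    from collections import Counter

    def failed_login_report(events, date):
        relevant = [e for e in events
                    if e["date"] == date and e["type"] == "failed_login"]
        by_host = Counter(e["host"] for e in relevant)
        # Reviewers read the roll-up first, then drill into raw records.
        return sorted(by_host.items(), key=lambda kv: -kv[1])

    events = [
        {"date": "2010-06-11", "type": "failed_login", "host": "web01"},
        {"date": "2010-06-11", "type": "failed_login", "host": "web01"},
        {"date": "2010-06-11", "type": "login", "host": "db01"},
    ]
    for host, count in failed_login_report(events, "2010-06-11"):
        print(host, count, "failed logins")

This is the easy, automatable end of the spectrum – which is exactly why it is reporting, not forensics.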


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.