Wednesday, June 02, 2010

Incite 6/2/2010: Smuggler’s Blues

By Mike Rothman

Given the craziness of my schedule, I don’t see a lot of movies in the theater anymore. Hard to justify the cost of a babysitter for a movie, when we can sit in the house and watch movies (thanks, Uncle Netflix!). But the Boss does take the kids to the movies because it’s a good activity, burns up a couple hours (especially in the purgatory period between the end of school and beginning of camp), and most of the entertainment is pretty good.

It does give me some angst, though, to see two credit card receipts from every outing. The first is for the tickets, and that’s OK. The movie studios pay lots to produce these fantasies, so I’m willing to pay for the content. It’s the second transaction, from the snack bar, that makes me nuts. My snack bar tab is usually as much as the tickets. Each kid needs a drink, some kind of candy, and possibly popcorn. All super-sized, of course.

And it’s not even that we want the super size of anything; that’s the only option. You can pay $4 for a monstrous soda, which they call small. Or $4.25 for something even bigger. If you can part with $4.50, you get enough pop to keep a village thirst-free for a month.

And don’t get me started on the popcorn. First of all, I know it’s nutritionally terrible. They may use different oil now, but in the portions they sell, you could again feed a village. But don’t think the movie theaters aren’t looking out for you. If you get the super-duper size, you get free refills of both popcorn and soda. Of course, you’d need to be the size of an elephant to knock down more than two gallons of soda and a feedbag of popcorn, but at least they are giving something back.

So we’ve been trying something a bit different, born of necessity. The Boss can’t eat the movie popcorn due to food allergies, so she smuggles in her own popcorn. And usually a bottle of water. You know what? It works. It’s not like the 14-year-old ticket attendant is going to give me a hard time.

I know, it’s smuggling, but I don’t feel guilty at all. I’d be surprised if the monstrous soda costs the theater more than a quarter, but they charge $4 for it. So I’m not going to feel bad about sneaking in a small bag of Raisinettes or Goobers with a Diet Coke. I’ll chalk it up to a healthy lifestyle: reasonable portions, and lighter on my wallet. Sounds like a win-win to me.

– Mike.

Photo credits: “Movie Night Party” originally uploaded by Kid’s Birthday Parties


Incite 4 U

  1. Follow the dollar, not the SLA – Great post by Justin James discussing the reality of service level agreements (SLAs). I know I’ve advised many clients to dig in and get preferential SLAs to ensure they get what they contract for, but ultimately it may be cheaper for the service provider to violate the SLA (and pay the fine) than it is to meet the agreement. I remember telling the stories of HIPAA compliance, and the reality that some health care organizations faced millions of dollars of investment to get compliant. But the fines were five figures. Guess what they chose to do. Yes, Bob, the answer was roll the dice. Same goes for SLAs, so there are a couple lessons here. 1) Try to get teeth in your SLA. The service provider will follow the money, so if the fine costs them more, they’ll do the right thing. 2) Have a Plan B. Contingencies and containment plans are critical, and this is just another reason why. When considering services, you cannot make the assumption that the service provider will be acting in your best interest. Unless your best interest is aligned with their best interest. Which is the reality of ‘cloud’. – MR

  2. It just doesn’t matter – I’m always pretty skeptical of poorly sourced articles on the Internet, which is why the Financial Times report of Google ditching Microsoft Windows should be taken with a grain of salt. While I am sometimes critical of Google, I can’t imagine they would really be this stupid. First of all, at least some of the attacks they suffered from China were against old versions of Windows – as in Internet Explorer 6, which even isolated troops of Antarctic chimpanzees know not to touch. Then, unless you are running one of the more obscure ultra-secure Unix variants, no version of OS X or Linux can stand up to a targeted attacker with the resources of a nation state. Now, if they want some diversity, that’s a different story, but the latest versions of Windows are far more hardened than most of the alternatives – even my little Cupertino-based favorite. – RM

  3. Hack yourself, even if it’s unpopular… – I’ve been talking about security assurance for years. Basically this means trying to break your own defenses and seeing where the exposures are, by any means necessary. That means using live exploits (with care) and/or leveraging social engineering tactics. But when I read stories like this one from Steve Stasiukonis, where there are leaks and the tests are compromised, or the employees actually initiate legal action against the company and pen tester, I can only shake my head. Just to reiterate: the bad guys don’t send messages to the chairman saying “I IZ IN YER FILEZ, READIN YER STUFFS!” They don’t worry about whether their tactics are “illegal human experiments”; they just rob you blind and pwn your systems. Yes, it may take some political fandango to get the right folks on board with the tests, but the alternative is to clean up the mess later. – MR

  4. Walk the walk – A while back we were talking about getting started in security over at The Network Security Podcast, and one bit of consensus was that you should try to spend some time on a help desk, as a developer, or as a systems or network administrator before jumping into security. Basically, spend some time in the shoes of your eventual clients. Jack Daniel suggests going a step further: “think like a defender”. Whenever I see someone whining about how bad we are at security, or how stupid someone is for not making “X” threat their top priority, odds are they either never spent time in an operational IT position, or have since forgotten what it’s like. And quite a few defenders seem to forget the practical realities of keeping users up and running on a daily basis. Hell, the same goes for researchers who forget the pressures of developing on budget and on target. Whatever your role in security, try to understand what it is like on the other side. – RM

  5. Good enough needs to be good enough… – Interesting and short piece on fudsec.com this week from Duncan Hoopes, addressing whether the concept of “good enough” permeating the web world is a good or bad thing for security. At times like these, the pragmatist in me bubbles to the surface. We have to work with our budgets and resources as they are. We could always use more, but probably aren’t going to get it. So we rely on “good enough” by necessity, not as a primary goal. But the reality is we can never really be done, right? So our constant focus on reacting faster and on incident response is driven by the reality that no matter how much we do, it’s not enough. Gosh, it would be great to have HiFi security. You know, whatever you need to really solve the problem. But that never lasts, and soon enough you’d need an AM radio with a single speaker, because that’s all the money left in the budget. – MR

  6. Carry on – To my mind, David Mortman’s post on Broken Promises and Mike Rothman’s post on In Search of … Solutions are two parts of the same idea. Does a technology solve, partially or completely, the business problem it’s positioned to solve? Mike complains that vendors trying to pass off a mallet as a mouse trap just don’t cut it, and customers need to ask for a better mouse trap. Mort is saying: stop bitching about the mouse trap being imperfect when it at least solves much of the problem. These posts, along with Jack Daniel’s post on Time for a new mantra, are more about the security community’s frustration at its inability to make meaningful changes. Seriously, being a security professional today is like being an anti-smoking advocate … in 1955. It’s difficult for the business community to care about unknown consequences or unknown damages, or even to believe proposed security precautions will help. But security professionals self-flagellate over our inability to get management to understand the problem, vendors’ failure to make better products, and IT departments’ failure to efficiently implement security programs. Ultimately security teams and vendors are not the agents of change – the business has to be, and it will be a long time before businesses embrace security as a required function. –AL

  7. The more social, the less secure – Later today Rich will post some of his ideas on privacy vs. security. So without stealing any of his thunder, let’s take a look specifically at Facebook. Boaz examines the privacy and security debate by candidly assessing what Facebook does or does not need to do relative to security. A vociferous few are calling for Zuckerberg’s head on a stick because monetizing eyeballs usually involves some erosion of privacy. But in reality, whether Facebook’s privacy policy is right or wrong, not restrictive enough, or whatever, like with most other security, 99.99% of users just don’t care. You are dealing with asshats who constantly post pictures and comments that put themselves in compromising positions. You can talk until you are blue in the face, but they won’t change because they don’t see a problem. Maybe they will someday, and maybe they won’t. We security folks see the issue differently, but we are literally the lunatic fringe here. As Boaz says, “For individuals, the risks of collaborative web services are far outweighed by the benefits.” From an enterprise perspective, we must continue to do the right thing to protect our users’ data, but in reality most don’t care until their nudie pictures show up on tmz.com, and then some of them will tell all their friends. – MR

—Mike Rothman

Tuesday, June 01, 2010

DB Quant: Discovery Metrics, Part 4, Access and Authorization

By Adrian Lane

At this point we have set up the access controls strategy in the Planning phase, and collected information on the databases and applications under our control. Now we analyze existing access control and authorization settings. There are two basic efforts in this phase: 1) determining how the system that implements access controls is configured, and 2) determining how permissions are granted by that system. Permissions analysis may be the more difficult of the two, depending on which access control methods you use. Things get more complicated if you use domain or local system credentials rather than just internal database credentials, because externally managed accounts may map to different privileges inside the database than they appear to hold externally: for example, a standard user account on the domain which has administrative privileges within the database.

Groups and roles, how each is configured, and how permissions are allocated to applications, service accounts, end users, and administrators all require considerable discovery work and analysis. For all but the smallest organizations, these review items can take weeks to cover. Once again, this task can be performed manually, but we strongly advise vulnerability and configuration assessment tools to support your efforts.
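The domain-versus-database mapping problem described above is straightforward to sketch. The account records, credential sources, and role names below are illustrative assumptions, not the output of any particular assessment tool:

```python
# Hypothetical sketch: cross-reference where each account's credential is
# managed against the privileges it maps to inside the database.

# Each account: where its credential lives, and the database roles it maps to.
accounts = [
    {"name": "svc_etl",   "source": "domain",   "db_roles": {"etl_writer"}},
    {"name": "jdoe",      "source": "domain",   "db_roles": {"db_owner"}},
    {"name": "app_login", "source": "database", "db_roles": {"app_reader"}},
]

# Assumed names for administrative roles inside the database.
ADMIN_ROLES = {"db_owner", "sysadmin", "securityadmin"}

def find_hidden_admins(accounts):
    """Flag accounts whose credential is managed outside the database
    (domain or local OS) but which hold administrative roles inside it."""
    return [a["name"] for a in accounts
            if a["source"] != "database" and a["db_roles"] & ADMIN_ROLES]

print(find_hidden_admins(accounts))  # → ['jdoe']
```

This is the "standard domain user with DBA rights inside the database" case: invisible if you only review domain group membership, which is why the internal mapping needs its own pass.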

We’ve slightly updated our process to:

  1. Determine Scope
  2. Setup
  3. Scan
  4. Analyze & Report

Determine Scope

Variable Notes
Time to list databases This may be a subset of databases, preferably prioritized
Time to determine authorization methods Database, domain, local, and mixed mode are common options

Setup

Variable Notes
Capital and time costs to acquire and install tools for automated assessments Optional
Time to request and obtain access permissions
Time to establish baselines for group and role configurations Policy is the high-level requirement; rule is the technical query for inspection. Vendors provide these with the tools, but they may require tuning for your internal requirements and environment.
Time to create custom report templates to review permissions Data privacy, operational control, and security require different views of settings to verify authorization settings

Scan

Variable Notes
Time to enumerate groups, roles, and accounts
Time to scan database and domain access configuration
Time to scan password configuration Aging policies, reuse, failed login, and inactivity lockouts
Time to scan passwords for compliance Optional
Time to record results

Analyze & Report

Variable Notes
Time to map admin roles Verify DBA permissions are divided among separate roles
Time to review service account and application access rights
Time to verify DB system mapping to domain access
Time to evaluate user accounts and privileges Verify users are assigned the correct groups and roles, and groups and roles have reasonable access
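The "verify DBA permissions are divided among separate roles" item amounts to confirming that no single role holds every administrative capability. A toy sketch of that check, with assumed role and capability names:

```python
# Separation-of-duties sketch: flag any role that holds the complete set of
# administrative capabilities. Role and capability names are assumptions.

role_grants = {
    "backup_admin":   {"backup", "restore"},
    "security_admin": {"grant", "revoke", "audit"},
    "schema_admin":   {"create", "alter", "drop"},
}

# The full set of admin capabilities known to exist.
ALL_ADMIN_CAPS = set().union(*role_grants.values())

def roles_holding_everything(role_grants):
    """Return roles that hold every admin capability: a separation-of-duties
    failure if this list is non-empty."""
    return [r for r, caps in role_grants.items() if caps >= ALL_ADMIN_CAPS]

assert roles_holding_everything(role_grants) == []  # duties are divided here
```

In practice the grants come out of the scan results from the previous step; the analysis itself is just set arithmetic like this, repeated across every database in scope.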

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment

—Adrian Lane

On “Security engineering: broken promises”

By David Mortman

Recently Michal Zalewski posted a rant about the state of security engineering in Security engineering: broken promises. I posted my initial response to this on Twitter: “Great explanation of the issue, zero thoughts on solutions. Bored now.” I still stand behind that response. As a manager, problems without potential solutions are useless to me. The solutions don’t need to be deep technical solutions – sometimes the solution is to monitor or audit. Sometimes the solution is to do nothing, accept the risk, and make a note of it in case it comes up in conversation or an audit.

But as I’ve mulled over this post over the last two weeks, there is more going on here. There seems to be a prevalent attitude among security practitioners in general, and researchers in particular, that if they can break something it’s completely useless. There’s an old Yiddish saying that loosely translates to: “To a thief there is no lock.” We’re never going to have perfect security, so picking on something for being imperfect is just disingenuous and grandstanding.

We need to be asking ourselves a pragmatic question: Does this technology or process make things better? Just about any researcher will tell you that Microsoft’s SDL has made their lives much harder, and they have to work a lot more to break stuff. Is it perfect? No, of course not! But is it a lot better than it used to be for all involved (except the researchers Microsoft created the SDL to impede)? You betcha. Are CWE and CVSS perfect? No! Were they intended to be? No! But again, they’re a lot better than what we had before. Can we improve them? Yes, CVSS continues to go through revisions and will get better. As will the Risk Management frameworks.

So really, while bitching is fun and all, if you’re not offering improvements, you’re just making things worse.

—David Mortman

FireStarter: In Search of… Solutions

By Mike Rothman

A holy grail of technology marketing is to define a product category. Back in the olden days of 1998, it was all about establishing a new category with interesting technology and going public, usually on nothing more than a crapload of VC money and a few million eyeballs.

Then everything changed. The bubble popped, money dried up, and all those companies selling new products in new categories went bust. IT shops became very risk averse – only spending money on established technologies. But that created a problem: analysts still had to sell more tetragon reports, which requires a steady supply of new product categories.

My annoyance with these product categories hit a fever pitch last week when LogLogic announced a price decrease on their SEM (security event management) technology. Huh? Seems they dusted off the SEM acronym after years on the shelf. I thought Gartner had decreed that it was SIEM (security information and event management) when it got too confusing between the folks who did SEM and SIM (security information management) – all really selling the same stuff. Furthermore, log management is now part of that deal. Do they dare argue with the great all-knowing oracles in Stamford?

Not that this expanded category definition is controversial. We’ve even posted that log management or SIEM isn’t a stand-alone market – rather it’s the underlying storage platform for a number of applications for security and ops professionals.

The lesson most of us forget is that end users don’t care what you call the technology, as long as you solve their problems. Maybe the project is compliance automation or incident investigation. SIEM/Log Management can be used for both. IT-GRC solutions can fit into the first bucket, while forensic toolkits fit into the latter. Which of course confuses the hell out of most end users. What do they buy? And don’t all the vendors say they do everything anyway?

The security industry – along with the rest of technology – focuses on products, not solutions. It’s about the latest flashing light in the new version of the magic box. Sure, most of the enterprise companies send their folks to solution selling school. Most tech company websites have a “solution” area, but in reality it’s always an afterthought.

Let’s consider the NAC (network access control) market as another example. Lots of folks think Cisco killed the NAC market by making big promises and not delivering. But ultimately, end users didn’t care about NAC – they cared about endpoint assessment and controlling guest access, and they solved those problems through other means.

Again, end users need to solve problems. They want answers and solutions, but they get a steady diet of features and spiels on why one box is better than the competitors. They get answers to questions they don’t ask. No wonder most end users turn off their phones and don’t respond to email.

Vendors spin their wheels talking about product category leadership. Who cares? Actually, Rich reminded me that the procurement people seem to care. We all know how hard it is to get a vendor in the wrong quadrant (or heaven forbid no quadrant at all) through the procurement gauntlet. Although the users are also to blame for accepting this behavior, and the dumb and lazy ones even like it. They wait for a vendor to come in and tell them what’s important, as opposed to figuring out what problem needs to be solved. From where I sit, the buying dynamic is borked, although it’s probably just as screwy in other sectors.

So what to do? That’s a good question, and I’d love your opinion. Should vendors run the risk of not knowing where they fit by not identifying with a set of product categories – and instead focus on solutions and customer problems? Should users stop sending out RFPs for SIEM/Log Management, when what they are really buying is compliance automation? Can vendors stop reacting to competitive speeds and feeds? Can users actually think more strategically, rather than whether to embrace the latest shiny upgrade from the default vendor?

I guess what I’m asking is whether it’s possible to change the buying dynamic. Or should I just quiet down, accept the way the game is played, and try to like it?

—Mike Rothman

Friday, May 28, 2010

The Hidden Costs of Security

By Mike Rothman

When I was abroad on vacation recently, the conversation got to the relative cost of petrol (yes, gasoline) in the States versus pretty much everywhere else. For those of you who haven’t travelled much, fuel tends to be 70-80% more expensive elsewhere. Why is that?

It comes down to the fact that the US Government bears many of the real costs of providing a sufficient stream of petroleum, in the form of military, diplomatic, and other spending in the Middle East to keep the oil flowing. I’m not going to descend into either politics or energy dynamics here, but suffice it to say we’d be investing a crapload more money in alternative energy if US consumers had to directly bear the full brunt of what it costs to pull oil out of the Middle East.

With that thought in the back of my mind, I checked out one of Bejtlich’s posts last weekend which talked about the R&D costs of the bad guys. Basically these folks run businesses like anyone else. They have to invest in their ‘product’, which is finding new vulnerabilities and exploiting them. They also have to invest in “customer service,” which is basically staying invisible once they are inside to avoid detection.

And these costs are significant, but compared to the magnitude of the ‘revenue’ side of their equation, I’m sure they are happy to make the investment. Cyber-fraud is big business.

But what about other hidden costs of providing security? We had a great discussion on Monday with the FireStarter talking about value/loss metrics, but do these risk models take into account some of the costs we don’t necessarily see as part of security?

Like our network traffic. How much bandwidth is wasted on reconnaissance traffic looking for holes in our perimeters? What about the amount of your inbound pipe congested with spam, which you need to analyze and then drop. One of the key reasons anti-spam services took off is because the bandwidth demand of spam was transferred to the service provider.

What would we do differently if we had to allocate those hidden costs to the security team? I know, at the end of the day it’s all just overhead, but what if? Would it change our behavior or our security architectures? I suspect we’d focus much more on providing clean pipes and having more of our security done in the cloud, removing some of these hidden costs from our IT stack. That makes economic sense, and we all know most of what we do ultimately is driven by economics.
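The spam example is easy to put rough numbers on. A back-of-envelope sketch; every figure here is an illustrative assumption to replace with your own:

```python
# Back-of-envelope sketch of one hidden cost: the inbound bandwidth consumed
# by spam you receive, analyze, and then drop. All inputs are assumptions.

messages_per_day = 100_000   # total inbound mail volume (assumed)
spam_fraction    = 0.90      # share of inbound mail that is spam (assumed)
avg_msg_kb       = 75        # average message size in KB (assumed)
cost_per_gb      = 0.08      # blended bandwidth cost in $/GB (assumed)

spam_gb_per_year = messages_per_day * spam_fraction * avg_msg_kb * 365 / (1024 ** 2)
annual_cost      = spam_gb_per_year * cost_per_gb

print(f"~{spam_gb_per_year:,.0f} GB/year of spam, ~${annual_cost:,.2f} in bandwidth alone")
```

The raw bandwidth number usually turns out small; the larger hidden costs are the filtering infrastructure, the analysis compute, and the staff time, which is exactly why shifting the whole pile to a service provider was attractive.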

How about the costs of cleaning up an incident? Yes, there are some security costs in there from the standpoint of investigation and forensics, but depending on the nature of the attack there will be legal and HR resources required, which usually don’t make it into the incident post-mortem. Or what about the opportunity cost of 1,000 folks losing their authentication tokens and being locked out of the network? Or the time it takes a knowledge worker to jump through hoops to get around aggressive web filtering rules? Or the cost of false positives on the IPS that block legitimate business traffic and break critical applications?

We know how big the security budget is, but we don’t have a firm grasp of what security really costs our businesses. If we did, what would we do differently? I don’t necessarily have an answer, but it’s an interesting question. As we head into Memorial Day weekend here in the US, we need to remember obviously, all the soldiers who give all. But we also need to remember the ripple effect of every action and reaction to the bad guys. Every time I go through a TSA checkpoint in an airport, I’m painfully aware of the billions spent each month around the world to protect air travel, regardless of whether terrorists will ever attack air travel again. I guess the same analogy can be used with security. Regardless of whether you’re actually being attacked, the costs of being secure add up. Score another one for the bad guys.

—Mike Rothman

DB Quant: Discovery and Assessment Metrics, Part 3, Assess Vulnerabilities and Configuration

By Adrian Lane

By this point we have discovered all databases and identified our key databases based on the sensitivity of their data, importance to business units, and connected applications. Now it’s time to find potential security issues, and decide whether the databases meet our security and configuration requirements. Some of this can be performed manually, but as with network security we strongly advise vulnerability and configuration assessment tools.

The cost metrics associated with configuration and vulnerability analysis typically run higher the first time the process is put in place. Investigating policies, installing tools, and implementing rules are all time-consuming. Once the process is established the total amount of work falls off dramatically, with relatively small incremental investments of time for each round of scanning.
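That cost curve is simple to illustrate: a large one-time setup investment amortized over repeated scan rounds, plus a small per-round cost. The hour figures below are illustrative assumptions, not benchmarks:

```python
# Toy amortization model for the first-time vs. incremental cost observation.

SETUP_HOURS       = 120   # acquire tools, build policies and rules (one time, assumed)
INCREMENTAL_HOURS = 8     # run and review one scheduled scan round (assumed)

def avg_hours_per_scan(rounds):
    """Average effort per scan round after `rounds` rounds."""
    return (SETUP_HOURS + INCREMENTAL_HOURS * rounds) / rounds

for n in (1, 4, 12):
    print(f"after {n:>2} rounds: {avg_hours_per_scan(n):.1f} hours/round")
```

The point for budgeting is that a single scan looks wildly expensive, while a quarterly or monthly cadence drives the per-round cost toward the incremental floor.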

As a reminder, the process is:

  1. Define Scans
  2. Setup
  3. Scan
  4. Distribute Results

Define Scans

Variable Notes
Time to list databases This may be a subset of databases, preferably prioritized
Time to gather internal requirements Security, operations, and internal audit groups. These should feed directly from the standards established in the Plan phase
Time to identify tasks/workflow Should be a one-time effort
Time to collect updated vulnerability lists CERT or other threat alerts
Time to collect configuration requirements You should have this from the Plan phase, but may need to update or refine. Also, these need to be updated regularly to account for software patches. This includes patch levels, security checklists from database vendors, and checklists from third parties such as NIST and the Center for Internet Security.
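The configuration requirements gathered above ultimately become machine-checkable rules. A minimal sketch of such a check, with setting names and expected values that are assumptions for illustration, not any vendor's actual checklist:

```python
# Sketch: compare scanned database settings against a configuration baseline.

# Baseline: expected value for each setting (assumed names and values).
baseline = {
    "remote_login_enabled":   False,
    "audit_trail":            "ON",
    "password_lifetime_days": 90,
}

# What a scan of one database turned up (assumed).
scanned = {
    "remote_login_enabled":   True,   # deviates from baseline
    "audit_trail":            "ON",
    "password_lifetime_days": 90,
}

def deviations(baseline, scanned):
    """Return {setting: (expected, found)} for every setting that differs."""
    return {k: (v, scanned.get(k)) for k, v in baseline.items()
            if scanned.get(k) != v}

print(deviations(baseline, scanned))  # → {'remote_login_enabled': (False, True)}
```

Assessment tools ship with rule sets that do essentially this at scale; the "time to update externally supplied policies and rules" line in Setup is the effort of tuning those shipped baselines to your own standards.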

Setup

Variable Notes
Capital and time costs to acquire and install tools for automated assessments Optional
Time to contact database owners to obtain access
Time to update externally supplied policies and rules Policy is the high-level requirement; rule is the technical query for inspection. Vendors provide these with the tools, but they may require tuning for your internal requirements and environment.
Time to create custom rules from internal and external policies Additional policies and rules not provided by an outside party

Scan

Variable Notes
Time to run active scan
Time to scan host configuration This is the host system for the database
Time to scan database patches
Time to scan database configuration Internal scan of database settings
Time to scan database for vulnerabilities (Internal) e.g., access settings, admin roles, use of encryption
Time to scan database for vulnerabilities (External) e.g., network settings, external stored procedures
Time to rerun scans

Distribute Results

Variable Notes
Time to save scan results
Time to filter and prioritize scan results by requirements Divide data by stakeholder (security, ops, audit)
Time to generate report(s) and distribute

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps

—Adrian Lane

Friday Summary: May 28, 2010

By Adrian Lane

We get a lot of requests to sponsor this blog. We got several this week. Not just the spammy “Please link with us,” or “Host our content and make BIG $$$” stuff. And not the PR junk that says “We are absolutely positive your readers would just love to hear what XYZ product manager thinks about data breaches,” or “We just released 7.2.2.4 version of our product, where we changed the order of the tabs in our web interface!” Yeah, we get fascinating stuff like that too. Daily. But that’s not what I am talking about. I am talking about really nice, personalized notes from vendors and others interested in supporting the Securosis site. They like what we do, they like that we are trying to shake things up a bit, and they like the fact that we are honest in our opinions. So they write really nice notes, and they ask if they can give us money to support what we do.

To which we rather brusquely say, “No”.

We don’t actually enjoy doing that. In fact, that would be easy money, and we like as much easy money as we can get. More easy money is always better than less. But we do not accept either advertising on the site or sponsorship because, frankly, we can’t. We just cannot have the freedom to do what we do, or promote security in the way we think best, if we accept payments from vendors for the blog. It’s like the classic trade-off in running your own business: sacrifice of security for the freedom to do things your own way. We don’t say “No,” to satisfy some sadistic desire on our part to be harsh. We do it because we want the independence to write what we want, the way we want.

Security is such a freakin’ red-headed stepchild that we have to push pretty hard to get companies, vendors, and end users to do the right thing. We are sometimes quite emphatic, to knock someone off the rhythm of that PowerPoint presentation they have delivered a hundred times, somehow without ever critically examining its content or message. If we don’t, they will keep yakking on and on about how they address “Advanced Persistent Threats”. Sometimes we spotlight the lack of critical reasoning on a customer’s part, to expose the fact that they are driven by politics without a real plan for securing their environment. We do accept sponsorship of events and white papers, but only after the content has gone through community review and everyone has had a chance to contribute. Many vendors, and a handful of end users who talk with us on the phone, know we can be pretty harsh at times, and they still ask if they can economically support our research. And we still say, “No”. But we appreciate the interest, and we thank you all for participating in our work.

On to the Summary:


Favorite Securosis Posts

  • Rich: Code Re-engineering. This applies to so much more than code. I’ve been on everything from mountain rescues to woodworking projects where the hardest decision is to stop patching and nuke it from orbit. We are not mentally comfortable throwing away hours, days, or years of work; and the ability to step back, analyze, and start over is rare in any society.
  • Mike Rothman: Code Re-engineering. Adrian shows his development kung fu. He should get pissed off more often.
  • David Mortman: Gaming the Tetragon.
  • Adrian Lane: The Secerno Technology. Just because you need to understand what this is now that Oracle has their hands on it.

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Jack, in response to FireStarter: The Only Value/Loss Metric That Matters.

All of the concerns that have been raised about estimating impact are legitimate. Part of the problem with many approaches to date, however, is that they’ve concentrated on asset value and not clearly differentiated that from asset liability. Another challenge is that we tend to do a poor job of categorizing how loss materializes.

What I’ve had success with in FAIR is to carve loss into two components–Primary and Secondary. Primary loss occurs directly as a result of an event (e.g., productivity loss due to an application being down, investigation costs, replacement costs, etc.), while Secondary loss occurs as a consequence of stakeholder reactions to the event (e.g., fines/judgments, reputation effects, the costs associated with managing both of those, etc.). I also sub-categorize losses as materializing in one or more of six forms (productivity, response, replacement, competitive advantage, fines/judgments, and reputation).

With the clarity provided by differentiating between the Primary and Secondary loss components, and the six forms of loss, I find it much easier to get good estimates from the business subject matter experts (e.g., Legal, Marketing, Operations, etc.). To make effective use of these estimates we use them as input to PERT distribution functions, which then become part of a Monte Carlo analysis.

Despite what some people might think, this is actually a very straightforward process, and simple spreadsheet tools remove the vast majority of the complexity. Besides results that stand up to scrutiny, another advantage is that a lot of the data you get from the business SMEs is reusable from analysis to analysis, which streamlines the process considerably.
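Jack’s PERT-plus-Monte-Carlo pipeline really is straightforward; here is a minimal sketch using only the Python standard library. The loss estimates below are hypothetical, purely to show the mechanics:

```python
import random
import statistics

def pert_sample(low, mode, high, lamb=4.0):
    """Draw one sample from a Beta-PERT distribution defined by a
    subject-matter expert's (low, most-likely, high) estimate."""
    alpha = 1 + lamb * (mode - low) / (high - low)
    beta = 1 + lamb * (high - mode) / (high - low)
    return low + random.betavariate(alpha, beta) * (high - low)

# Hypothetical SME estimates for one loss event, in dollars,
# covering a few of the primary and secondary loss forms.
loss_forms = {
    "productivity": (10_000, 40_000, 120_000),   # primary
    "response":     (5_000, 15_000, 50_000),     # primary
    "fines":        (0, 25_000, 500_000),        # secondary
    "reputation":   (0, 50_000, 1_000_000),      # secondary
}

def simulate(trials=10_000):
    """Monte Carlo: total loss per trial, summed across loss forms."""
    return [sum(pert_sample(*est) for est in loss_forms.values())
            for _ in range(trials)]

random.seed(42)
totals = simulate()
print(f"median loss: ${statistics.median(totals):,.0f}")
print(f"95th percentile: ${sorted(totals)[int(0.95 * len(totals))]:,.0f}")
```

The point of the distribution (rather than a single number) is exactly what the comment describes: the output stands up to scrutiny because it carries the SME's uncertainty through to the result.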

—Adrian Lane

Thursday, May 27, 2010

Understanding and Selecting SIEM/LM: Aggregation, Normalization, and Enrichment

By Adrian Lane

In the last post on Data Collection we introduced the complicated process of gathering data. Now we need to understand how to put it into a manageable form for analysis, reporting, and long-term storage for forensics.

Aggregation

SIEM platforms collect data from thousands of different sources because these events provide the data we need to analyze the health and security of our environment. In order to get a broad end-to-end view, we need to consolidate what we collect onto a single platform. Aggregation is the process of moving data and log files from disparate sources into a common repository. Collected data is placed into a homogeneous data store – typically purpose-built flat file repositories or relational databases – where analysis, reporting, and forensics occur, and archival policies are applied.

The process of aggregation – compiling these dissimilar event feeds into a common repository – is fundamental to Log Management and most SIEM platforms. Data aggregation can be performed by sending data directly into the SIEM/LM platform (which may be deployed in multiple tiers), or an intermediary host can collect log data from the source and periodically move it into the SIEM system. Aggregation is critical because we need to manage data in a consistent fashion: security, retention, and archive policies must be systematically applied. Perhaps most importantly, having all the data on a common platform allows for event correlation and data analysis, which are key to addressing the use cases we have described.
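Mechanically, aggregation amounts to tagging each record with its source and collection time and writing everything to one store. A toy Python sketch – the file names and JSON-lines format are invented for illustration, where real deployments use agents, syslog, and purpose-built repositories:

```python
import json
import tempfile
import time
from pathlib import Path

def aggregate(sources, repository_path):
    """Pull raw log lines from disparate sources into a single
    common repository, tagging each record with its origin and
    collection time."""
    with open(repository_path, "a", encoding="utf-8") as repo:
        for source_name, log_path in sources.items():
            for line in Path(log_path).read_text().splitlines():
                record = {
                    "source": source_name,        # firewall, IDS, server...
                    "collected_at": time.time(),  # when we collected it
                    "raw": line,                  # untouched original event
                }
                repo.write(json.dumps(record) + "\n")

# Demo with two fabricated event feeds standing in for real devices.
workdir = Path(tempfile.mkdtemp())
(workdir / "fw.log").write_text("deny tcp 10.0.0.5:443\n")
(workdir / "ids.log").write_text("ALERT sid:2001 from 10.0.0.9\n")

aggregate({"firewall": workdir / "fw.log", "ids": workdir / "ids.log"},
          workdir / "events.jsonl")

for rec in (workdir / "events.jsonl").read_text().splitlines():
    print(json.loads(rec)["source"])
```

Keeping the raw line untouched inside each record is what preserves the forensics value discussed below; the tags are what make consistent retention and archive policies possible.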

There are some downsides to aggregating data onto a common platform. The first is scale: analysis becomes exponentially harder as the data set grows. Centralized collection means huge data stores, greatly increasing the computational burden on the SIEM/LM platform. Technical architectures can help scale, but ultimately these systems require significant horsepower to handle an enterprise’s data. Systems that utilize central filtering and retention policies require all data to be moved and stored – typically multiple times – increasing the burden on the network.

Some systems scale using distributed processing, where filtering and analysis occur outside the central repository, typically at the distributed data collection points. This reduces the compute burden on the central server and allows processing to occur on smaller, more manageable data sets. It does require that policies, along with the code to process them, be distributed and kept current throughout the network. Distributed agent processes are a handy way to “divide and conquer”, but increase IT administration requirements. This strategy also adds a computational burden on the data collection points, degrading their performance and potentially slowing them enough to drop incoming data.

Data Normalization

If the process of aggregation is to merge dissimilar event feeds into one common platform, normalization takes it one step further by reducing the records to just the common event attributes. As we mentioned in the data collection post, most data sources collect exactly the same base event attributes: time, user, operation, network address, and so on. Facilities like syslog not only group the common attributes, but provide a means to collect supplementary information that does not fit the basic template. Normalization is where known data attributes are fed into a generic template, and anything that doesn’t fit is simply omitted from the normalized event log. After all, to analyze we want to compare apples to apples, so we throw away the oranges for the sake of simplicity.
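In code, normalization is little more than mapping each source’s native field names onto a fixed template and discarding what doesn’t fit. A minimal sketch – the template and per-source mappings below are invented for illustration (real products ship hundreds of them):

```python
# Fixed template of common event attributes; anything a source
# sends beyond these is omitted from the normalized record.
TEMPLATE = ("time", "user", "operation", "src_address")

# Hypothetical per-source mappings from native field names to
# the template fields.
MAPPINGS = {
    "unix_syslog": {"ts": "time", "uid": "user", "msg": "operation",
                    "host": "src_address"},
    "winevt":      {"TimeCreated": "time", "SubjectUserName": "user",
                    "EventID": "operation", "IpAddress": "src_address"},
}

def normalize(source, raw_event):
    """Map a source-specific event dict onto the common template.
    Fields with no mapping (the 'oranges') are thrown away."""
    mapping = MAPPINGS[source]
    normalized = {field: None for field in TEMPLATE}
    for native_key, value in raw_event.items():
        if native_key in mapping:
            normalized[mapping[native_key]] = value
    return normalized

event = {"TimeCreated": "2010-05-27T09:14:02", "SubjectUserName": "alane",
         "EventID": 4624, "IpAddress": "192.168.1.10", "Keywords": "Audit"}
print(normalize("winevt", event))
# The extra "Keywords" attribute is simply dropped.
```

A real platform would also carry a pointer back to the raw record, which is exactly the ‘drill-down’ link discussed next.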

Depending upon the SIEM or Log Management vendor, the original non-normalized records may be kept in a separate repository for forensics purposes prior to later archival or deletion, or they may simply be discarded. In practice, discarding original data is a bad idea, since the full records are required for any kind of legal enforcement. Thus, most products keep the raw event logs for a user-specified period prior to archival. In some cases, the SIEM platform keeps a link to the original event in the normalized event log which provides ‘drill-down’ capability to easily reference extra information collected from the device.

Normalization allows for predictable and consistent storage for all records, and indexes these records for fast searching and sorting, which is key when battling the clock in investigating an incident. Additionally, normalization allows for basic and consistent reporting and analysis to be performed on every event regardless of the data source. When the attributes are consistent, event correlation and analysis – which we will discuss in our next post – are far easier.

Technically, normalization is no longer a requirement on current platforms. It was a necessity in the early days of SIEM, when storage and compute power were expensive commodities, and SIEM platforms used relational database management systems for back-end data management. Advances in indexing and searching unstructured data repositories now make it feasible to store the full source data, retaining the original records and eliminating normalization overhead.

Enriching the Future

In reality, we are seeing a number of platforms doing data enrichment, adding supplemental information (like geo-location, transaction numbers, application data, etc.) to logs and events to enhance analysis and reporting. Enabled by cheap storage and Moore’s Law, and driven by ever-increasing demand to collect more information to support security and compliance efforts, we expect more platforms to increase enrichment. Data enrichment requires a highly scalable technical architecture, purpose-built for multi-factor analysis and scale, making tomorrow’s SIEM/LM platforms look very similar to current business intelligence platforms.
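The supplemental-attribute style of enrichment is easy to sketch. The lookup table below is a stand-in for a real geo-IP database, purely for illustration:

```python
# Hypothetical prefix-to-country table; a real platform would
# query a geo-IP database or an identity store instead.
GEO_DB = {"66.249.": "US", "81.2.": "GB", "192.168.": "internal"}

def geo_lookup(ip):
    """Crude prefix match against our toy geo table."""
    for prefix, country in GEO_DB.items():
        if ip.startswith(prefix):
            return country
    return "unknown"

def enrich(event):
    """Add supplemental attributes to a normalized event without
    touching the original fields."""
    enriched = dict(event)
    enriched["geo"] = geo_lookup(event.get("src_address", ""))
    # Later analysis passes could append identity matches,
    # transaction IDs, or behavioral scores the same way.
    return enriched

print(enrich({"time": "2010-05-27T09:14:02", "user": "alane",
              "operation": "login", "src_address": "81.2.69.160"}))
```

The enriched record is strictly a superset of the original, so reporting built on the base template keeps working while new analytics get the extra attributes.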

But that just scratches the surface in terms of enrichment, because data from the analysis can also be added to the records. Examples include identity matching across multiple services or devices, behavioral detection, transaction IDs, and even rudimentary content analysis. It is somewhat like having the system take notes and extrapolate additional meaning from the raw data, making the original record more complete and useful. This is a new concept for SIEM, so what enrichment will ultimately encompass is anyone’s guess. But as the core functions of SIEM have standardized, we expect vendors to introduce new ways to derive additional value from the sea of data they collect.


Other Posts in Understanding and Selecting SIEM/LM

  1. Introduction.
  2. Use Cases, Part 1.
  3. Use Cases, part 2.
  4. Business Justification.
  5. Data Collection.

—Adrian Lane

Wednesday, May 26, 2010

DB Quant: Discovery And Assessment Metrics (Part 2) Identify Apps

By Adrian Lane

Now that we know where the databases are located, we need to find sensitive data inside them, determine how applications connect to databases, and what database features and functions the applications depend on. Applications are often inflexible, requiring particular user accounts or connection types to function properly. They may even be coded to use database features that are considered vulnerabilities by the security team. Data discovery is key, because of course it’s necessary to know the type and location of sensitive data before controls can be established. The entire scanning process requires special access provided by the owners of the databases, as well as the platforms and networks that support them.

For some of you in small and medium businesses, especially in cases where you are the sole database administrator, these granular steps will seem like overkill. For mid-to-large enterprises, with hundreds of databases supporting thousands of applications with sensitive data scattered throughout them, these steps are necessary for forming security policies and meeting compliance. Also consider that some of the automated scanning tools behave like a virus or an attacker, requiring both credentials to access the DB and coordination with security countermeasures and staff.

As a reminder, the process is as follows:

  1. Plan
  2. Setup
  3. Identify Dependent Applications
  4. Identify Database Owners
  5. Discover Data
  6. Document

Plan

Variable – Notes
Time to assemble list of databases – Feeds from the Enumerate Databases step
Time to define data types of interest – The sensitive data you want to discover, such as credit card numbers
Time to map locations and schedule scans – Databases will reside on different domains, subnets, etc. This is the time to develop a scanning plan based on location

Setup

Variable – Notes
Capital and time to acquire tools for discovery automation – Optional; DB discovery tools from the previous phase may provide this
Time to define patterns, expressions, and signatures – e.g., what sensitive data looks like
Time to contact business units & network staff
Time to configure discovery tool – Optional
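The patterns and signatures referenced above usually come down to regular expressions plus a validation step. A minimal sketch for credit card discovery – real tools add context checks and data sampling:

```python
import re

# Candidate card numbers: 13-16 digits, optionally space/dash separated.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number):
    """Luhn checksum: weeds out most random digit strings that
    merely look like card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan(text):
    """Return candidate card numbers found in a text sample."""
    return [m.group() for m in CARD_PATTERN.finditer(text)
            if luhn_valid(m.group())]

sample = "order 1234, card 4111-1111-1111-1111, ref 1234567890123"
print(scan(sample))  # only the Luhn-valid candidate survives
```

The same structure – broad pattern, then a validator to cut false positives – applies to SSNs, account numbers, and most other sensitive data types.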

Identify Dependent Applications

Variable – Notes
Time to schedule and perform review/run scan
Time to identify applications using the database – Based on connections and/or service account credentials
Time to catalog application dependencies and connection types – Most items can be discovered without DB credentials
Time to repeat steps – As needed

Identify Database Owners

Variable – Notes
Time to identify database owners – The real-world owner, not just the DBA account name
Time to obtain access and credentials – Usually a dedicated account is established for this analysis

Discover Data

Variable – Notes
Time to schedule and run scan – For automated scans
Time to compile table/schema locations – For manual discovery
Time to examine schema and data – For manual discovery
Time to adjust rules and repeat scans – For automated scans

Document

Variable – Notes
Time to filter results and compile report – Gather data names, types, and locations
Time to generate report(s)

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases

—Adrian Lane

Quick Wins with DLP Presentation

By Rich

Yesterday I gave this presentation as a webcast for McAfee, but somehow my last 8 slides got dropped from the deck. So, as promised, here is a PDF of the slides.

McAfee is hosting the full webcast deck over at their blog. Since we don’t host vendor materials here at Securosis, here is the subset of my slides. (You might still want to check out their full deck, since it also includes content from an end user).

Presentation: Quick Wins with DLP

—Rich

Gaming the Tetragon

By Mike Rothman

Rich highlighted a great post from Rocky DeStefano of Visible Risk in today’s Incite:

Blame the addicts – When I was working at Gartner, nothing annoyed me more than those client calls where all they wanted me to do was read them the Magic Quadrant and confirm that yes, that vendor really is in the upper right corner. I could literally hear them checking their “talked to the analyst” box. An essential part of the due diligence process was making sure their vendor was a Leader, even if it was far from the best option for them. I guess no one gets fired for picking the upper right. Rocky DeStefano nails how people see the Magic Quadrant in his Tetragon of Prestidigitation post. Don’t blame the analyst for giving you what you demand – they are just giving you your fix, or you would go someplace else. – RM

Rocky is dead on – there are a number of constituencies that leverage information like the Magic Quadrant, and they all have different perspectives on the report. I don’t need to repeat what Rocky said, but I want to add a little more depth about each of the constituencies and provide some anecdotes from my travels.

To be clear, Gartner (and Forrester, for that matter) place all sorts of caveats on their vendor rankings. They say not to use them to develop a short list, and they want clients to call to discuss their specific issues. But here’s the rub: They know far too many organizations use the MQ as a crutch to support either their own laziness and stupidity, or to play the game and support decisions they’ve already made.

Institutionally they don’t care. As Rich pointed out, (most of) the analysts hate it. But the vendor rankings represent enough revenue that they don’t want to mess with them. Yes, that’s a cynical view, but at the end of the day both of the big IT research shops are public companies and they have to cater to shareholders. And shareholders love licensing 10-page documents for $20K each to 10 vendors.

Rocky uses three cases to illuminate his point. The first is the veteran information security professional – those folks (if they have a clue) know that they’ve got to focus their short list on vendors close to the Leader Quadrant. If not, they’ll spend more time justifying a lesser-ranked vendor than implementing the technology. It’s just not worth the fight. So they don’t. They pick the best vendor from the leader quadrant and move on.

This leads us to the second case, the executive, who basically doesn’t care about the technology, but has a lot of stuff on his/her plate and figures if a vendor is a leader, they must have lots of customers calling Gartner and their stuff can’t be total crap. Most of the time, they’d be right.

And the third case is vendors. Rocky makes some categorizations about the different quadrants, which are mostly accurate. Vendors in the “niche” space (bottom left) don’t play into the large enterprise market, or shouldn’t be. Those in the “challenger” quadrant (top left) are usually big companies with products they bundle into broad suites, so the competitiveness of a specific offering is less important.

Those in the “visionary” sector (bottom right) delude themselves into thinking they’ve got a chance. They are small, but Gartner thinks they understand the market. In reality it doesn’t matter because the vast majority of the market – dumb and/or lazy information security professionals – see the MQ like this:

Dumb and Lazy is no way to go through life...

In most enterprise accounts the only vendors with a chance are the ones in the leader quadrant, so placement in this quadrant is critical. I’ve literally had CEOs and Sales VPs take out a ruler and ask why our arch-nemesis was 2mm to the right of our dot. 2 frackin millimeters. You may think I’m kidding, but I’m not.

So many of the high-flying vendors make it their objective to spend whatever resources it takes to get into the leader quadrant. They have customers call into Gartner with inquiries about their selection process (even though the selection is already made) to provide data points about the vendor. Yes, they do that, and the vendors provide talking points to their clients. They show up at the conferences and take full advantage of their 1on1 meeting slots. They buy strategy days.

To be clear, you cannot buy a better placement on the MQ. But you can buy access, which gives a vendor a better opportunity to tell their story, which in many cases results in better placement. Sad but true. Vendors can game the system to a degree.

Which is why Rich, Adrian, and I made a solemn blood oath that we at Securosis would never do a vendor ranking. We’d rather focus our efforts on the folks who want advice on how to do their job better. Not those trying to maximize their Tetris time.

—Mike Rothman

Code Re-engineering

By Adrian Lane

I just ran across a really interesting blog post by Joel Spolsky from last April: Things You Should Never Do, Part 1. Actually, the post pissed me off. This is one of those hot-button topics that I have had to deal with several times in my career, and have had to manage in the face of entrenched beliefs. His statement is that you should never rewrite a code base from scratch. The reasoning is “No major firm has ever successfully survived a product rewrite. Just look at Netscape … ” Whatever.

I am a fixer. I was the guy who was able to make code reliable. I was the guy who found and fixed the obscure bugs. As I progressed in my career and started to manage teams of developers, more often than not I was handed the really crummy re-engineering projects because I could fix the problems and make customers happy. Sometimes success is its own penalty.

I have inherited code so bad that bug fixes cost 4x in time and usually created new bugs in the process. I have inherited huge bodies of Java code written entirely as if Java were a 3G procedural language – ignoring the object-oriented paradigm completely. I have been tasked with fixing code that – for a simple true/false comparison – made 12 comparisons, 8 database insertions, and 7 deletions, causing a 180x performance penalty. I have inherited code so bad it broke the compiler. I have inherited code so bad that you could not change a back-end database query without breaking the GUI! It takes a real gift for bad programming to do these things.

There are times when the existing code – all or part – simply needs to be thrown away. There are times that code is so tightly intertwined that you cannot simply fix one piece at a time. And in some cases there are really good business reasons, like your major customers say your code is crap and needs to be thrown away. Bad code can bleed a company to death with lost sales, brand impairment, demoralization, and employee turnover.

That said, I agree with Joel’s basic premise that re-writing your product can kill your company. And I even agree about a lot of the social behaviors he describes that create failure. There is absolutely no reason to believe that the people who developed bad code the first time will not do the same thing the next time. But I don’t agree that you should never rewrite. I don’t agree that it has never been done successfully. I know because I have done it successfully. Twice. Out of three attempts, but hey, I got the important projects right.

We tend not to hear about successful rewrites because the companies that carried it off really don’t want everyone knowing that previous versions were terrible. They would rather focus on happy customers and competitive products. It’s very likely that companies who need to rewrite code will screw up a second time. Honestly, there are a lot more historic rewrite flameouts than success stories. Companies know what they want to fix in the code, but they don’t understand what they need to fix in the company. I contend this is because there are company behaviors that promote failure, and if they did it once, they are likely to do it again. And again. Until, mercifully, the company goes down in flames. There are a lot of reasons why re-architecture and re-implementation projects fail. In no particular order …

  1. Big eyes: You are the chief developer and you hate your current product. You have catalogued everything that is wrong with it and how you would fix it. You have extensive lists of features you would like to implement. You have a grand vision of how this product should function, how it should be architected, and how it will be implemented. This causes your re-engineering effort to fail because you think that you are going to build perfect software, tackle every problem, and build every feature, in the first revision. And you commit to do so, just to get the project green-lighted.
  2. Resources: Your current product sucks. It really sucks. It has atrocious quality and low performance, and is miserable to manage. It’s so freaking bad that customers ask for their money back, and sales falter. This causes your re-engineering effort to fail because there is simply not enough time, and not enough revenue, to pay for your rebuild. Not with customers breathing down management’s neck, and investors looking for the quick “liquidity event”. So marketing keeps on marketing, sales keeps on selling, and you keep on supporting the old mess you have.
  3. Bad blood: When your car gets old and dies, you don’t expect someone to give you a new one for free. When your crappy old code no longer supports your customers, in essence you need to pay for new code. Yes, it is unfortunate that you bought a lemon last time, but you need to make additional investments in time and development resources, and fix the problems that led you down the wrong path. Your project fails because management is so bitter about the failure that they muck around with development practices, apply more pressure, and try to get more involved with day-to-day development, when the opposite is needed.
  4. Expectations: Not only is the development team excited at not having to work on the atrocious code you have now, but they are really looking forward to working on a product with a semi-modern design. The whole department is buzzing, and so is management! This causes your re-engineering effort to fail because the Chickens think that not only are you going to deliver perfect software, but you are going to deliver every feature and function of the old crappy product, as well as a handful of new and extraordinary features. And it’s unlikely that management will let you adjust the ship date to accommodate the new demands. If they do, the temptation is to keep working until it’s perfect, but nothing is perfect – this is a good way to keep coding while Rome burns.
  5. Sales: In all the excitement, Sales bragged to a couple major customers about what an amazing new product the development team is building, and it solves all the problems you have today. The customers think this is great, and say “Call us when it’s ready.” Sales grind to a halt. This causes your re-engineering effort to fail because you now have two months to develop what you estimated would take 18.
  6. People: The people who were terrible coders are still on the team because nobody wanted to fire them. The managers who forced releases out the door early, before implementation and QA were complete, are still with the company. The executive who threatens employees’ jobs if they fail to deliver on time is still with the company. The Product Manager who fails to do market research to validate bright ideas is still at the company. The engineering ‘leaders’ with no clue about process or leadership skills are still leading when they should be coding. The effort failed before it began.

Re-engineering efforts can fail for a whole new set of reasons, in addition to whatever wrecked the initial project. And unfortunately rewrites always begin at a disadvantage, because management is already miffed that the last development project failed. Building software is risky, but re-engineering can work. If you want to get it right the second time, you need to perform the same critical evaluation of people and processes that you (hopefully) performed on the technology. You will end up overhauling much of the organization, including management, to avoid the technical and leadership failures of the past. If all of this has not scared you off, consider code re-engineering.

—Adrian Lane

Incite 5/26/2010: Funeral for a Friend

By Mike Rothman

I don’t like to think of myself as a sentimental guy. I have very few possessions that I really care about, and I don’t really fall into the nostalgia trap. But I was shaken this week by the demise of a close friend. We were estranged for a while, but about a year ago we got back in touch and now that’s gone.

Lots of miles on this leather...

I know it’s surprising, but I’m talking about my baseball glove, a Wilson A28XX, vintage mid-1980s. You see, I got this glove from my Dad when I entered little league, some 30+ years ago. It was as big as most of my torso when I got it. The fat left-handed kid always played first base, so I had a kick-ass first baseman’s glove and it served me well. I stopped playing in middle school (something about being too slow as the bases extended to 90 feet), played a bit of intramural in college, and was on a few teams at work through the years.

A few of my buddies here in ATL are pretty serious softball players. They play in a couple leagues and seem to like it. So last year I started playing for my temple’s team in the Sunday morning league with lots of other old Jews. I dug my glove out of the trunk, and amazingly enough it was still very workable. It was broken in perfectly and fit my hand like a glove (pun intended). It was like a magnet – if the ball was within reach, that glove swallowed it and didn’t give it up.

But the glove was showing signs of age. I had replaced the laces in the webbing a few times over the years, and the edges of the leather were starting to fray. Over this weekend the glove had a “leather stroke”, when the webbing fell apart. I could have patched it up a bit and probably made it through the summer season, but I knew the glove was living on borrowed time.

So I made the tough call to put it down. Well, not exactly down, since the leather is already dead, but I went out and got a new glove. Like with a trophy wife, my new glove is very pretty. A black leather Mizuno. No scratches. No imperfections. It even has a sort-of new-car smell. I’ll be breaking it in all week and hopefully it’ll be ready for practice this weekend.

For an anti-nostalgia guy, this was actually hard, and it will be weird taking the field with a new rig. I’m sure I’ll adjust, but I won’t forget.

– Mike

Photo credits: “Leather and Lace” originally uploaded by gfpeck


Incite 4 U

I want to personally thank Rich and the rest of the security bloggers for really kicking it into gear over the past week. Where my feed reader had been barren of substantial conversations and debate for (what seemed like) months, this week I saw way too much to highlight in the Incite. Let’s keep the momentum going. – Mike.

  1. Focus on the problem, not the category – Stepping back from my marketing role has given me the ability to see how ridiculous most of security marketing is. And how we expect the vendors to lead us practitioners out of the woods, and blame them when they find another shiny object to chase. I’m referring to NAC (network access control), and was a bit chagrined by Joel Snyder’s and Shimmy’s attempts to point the finger at Cisco for single-handedly killing the NAC business. It’s a load of crap. To be clear, NAC struggled because it didn’t provide must-have capabilities for customers. Pure and simple. Now clearly Cisco did drive the hype curve for NAC, but amazingly enough end users don’t buy hype. They spend money to solve problems. It’s a cop-out to say that smaller vendors and VCs lost because Cisco didn’t deliver on the promise of NAC. If the technology solved a big enough problem, customers would have found these smaller vendors and Cisco would have had to respond with updated technology. – MR

  2. I can haz your ERP crypto – Christopher Kois noted on his blog that he had ‘broken’ the encryption on Microsoft Dynamics GP, the accounting package in the Dynamics suite from the Great Plains acquisition. While examining encrypted data fields in the database, he noticed odd behavioral changes when altering the data: if he changed a single character, only two bytes of encrypted data changed. With most block ciphers, if you change a single character in the plaintext, you get radically different output. Through trial and error he figured out the encryption used was a simple substitution cipher – and without too much trouble Kois was able to map the substitution keys. While Microsoft Dynamics does run on MS SQL Server, some components still rely upon Pervasive SQL. Christopher’s discovery does not mean that MS SQL Server is secretly using the ancient Caesar Cipher, but rather that some remaining portion of Great Plains does. It does raise some interesting questions: how do you verify sensitive data has been removed from Pervasive? If the data remains in Pervasive, even under a weak cipher, will your data discovery tools find it? Does your discovery tool even recognize Pervasive SQL? – AL

  3. Blame the addicts – When I was working at Gartner, nothing annoyed me more than those client calls where all they wanted me to do was read them the Magic Quadrant and confirm that yes, that vendor really is in the upper right corner. I could literally hear them checking their “talked to the analyst” box. An essential part of the due diligence process was making sure their vendor was a Leader, even if it was far from the best option for them. I guess no one gets fired for picking the upper right. Rocky DeStefano nails how people see the Magic Quadrant in his Tetragon of Prestidigitation post. Don’t blame the analyst for giving you what you demand – they are just giving you your fix, or you would go someplace else. – RM

  4. Compliance and security: brothers in arms – It’s amazing to me that we are still gnashing our teeth over the fact that senior management budgets for compliance and doesn’t give a rat’s ass about security. Also nice to see Anton emerge from his time machine trip back to 2005, and realize that compliance doesn’t provide value. Continuing the riff on AndyITGuy’s rant about compliance vs. security, we have to eliminate that line of thinking. Compliance and security are not at odds. It’s not an either/or proposition. Smart practitioners buy solutions to security problems, which can be positioned and paid for out of the compliance budget. When pitching any security project to senior management, you’ve got no shot unless you can either show how it increases the top line (pretty much impossible), decreases spend (hard), or helps meet a compliance mandate. So stop thinking compliance is the enemy. It’s your friend – your rich friend who needs to pay for all your security stuff. – MR

  5. Give me a Q and an A, and that spells FAIL? – They say it takes years to build credibility, and a minute to lose it. IBM is dealing with some of that in the security space after distributing infected USB sticks at a trade show. I don’t even know what to say. No one thought to actually test the batch of tchotchkes? Even if only to make sure the right content was there? But let’s not focus on the sheer idiocy of IBM here, but on what to do to protect yourself and your organization. First, turn off AutoRun – since that is how most USB stick malware will get executed. Second, don’t be a USB whore. Just because it’s shiny and has the logo of your favorite vendor doesn’t mean you should stick it in your machine. Have a little self-respect, will ya? Maybe the AV vendors (all of whom have detected this malware since 2008) can position themselves as the morning-after pill for promiscuous USB use. Or device control software can be positioned as a USB condom. Ah, the possibilities are endless. – MR

  6. That’s not a bus – it’s a steamroller – I talked with a client this week who was struggling to maintain security controls while adopting cloud computing. They are moving to a hosted email system, but in the process may lose their DLP solution. It’s a problem they could have planned around, but decisions were made without security involved at the right level. Another client had a similar issue: they traded away their DLP so they could switch to cloud-based web content security. DanO over at Techdulla raises some similar issues as he reminds us that no matter what we think, security folks will likely be held responsible, even if the data has been shipped to the cloud. Steamrollers and buses are easy to dodge, but only if you keep your eyes open and spot them early enough. – RM

  7. Digging a grave for Brightmail? – We don’t publish a lot on anti-spam technologies, even though email and content security are core coverage areas for us. It is just not very interesting to discuss the meaningful differences between 99.2% and 99.6% effectiveness, or what it means when vendors swap positions from month to month depending upon the spam technique du jour. It is an unending cat-and-mouse game between spammers and new techniques to detect and block email spam, and things fluctuate very quickly. That said, every now and again we run across something interesting, such as Symantec Brightmail Gateway Decertified by ICSA Labs, because they dropped below 97% effectiveness. It is actually big news when a major email security vendor “stops meeting one or more requirements, or is no longer in daily testing”. But as I have not seen an EOL announcement, it looks like this is a rather passive-aggressive way of notifying customers that they are moving away from supporting Brightmail anti-spam and moving forward with MessageLabs’ service-based solution. Unless of course the Brightmail anti-spam team was on vacation this week and accidentally fell below 97% success, but I am betting this was at least half intentional. It’s still somewhat surprising, as I assumed there were still a handful of ASPs using the Brightmail engine. It will be interesting to see how Symantec covers this in press releases in the coming weeks. – AL

  8. Roland Garros it ain’t: the IT certification racket – I’ve ranted about the value of certifications a lot. But I never miss an opportunity to poke fun at the entire certification value chain. Case in point: Etherealmind’s observation about the joke of CCIE certification. Most vendor certifications fall into the same category. It’s about passing the test so the VAR can say they have X% of staff certified, or the IT shop can show proficiency with their key vendors. Unfortunately most of the training isn’t designed to actually teach anything – it’s designed to get students past the test. As Rich said about security awareness training, we need a “no security professional left behind” program to ensure the folks doing things are actually competent. I know – details, details. But it won’t happen – as long as hiring managers focus on who has the paper rather than what they know, it’ll be the same old, same old. – MR

—Mike Rothman

Tuesday, May 25, 2010

Understanding and Selecting SIEM/LM: Data Collection

By Adrian Lane

The first four posts in our SIEM series dealt with understanding what SIEM is and what problems it solves. Now we move into how to select the right product/solution/service for your organization, which involves digging into the technology behind SIEM and log management platforms. We start with the foundation of every SIEM and Log Management platform: data collection. This is where we collect data from the dozens of different types of devices and applications we monitor. ‘Data’ has a pretty broad meaning – here it typically refers to event and log records, but can also include flow records, configuration data, SQL queries, and any other type of standard data we want to pump into the platform for analysis.

It may sound easy, but being able to gather data from every hardware and software vendor on the planet in a scalable and reliable fashion is incredibly difficult. With over 20 vendors in the Log Management and SIEM space, and each vendor using different terms to differentiate their products, it gets very confusing. In this series we will define vendor-neutral terms to describe the technical underpinnings and components of log data collection, to level-set what you really need to worry about. In fact, while log files are what is commonly collected, we will use the term “data collection”, as we recommend gathering more than just log files.

Data Collection Overview

Conceptually, data collection is very simple: we just gather the events from different devices and applications on our network to understand what is going on. Each device generates an event each time something happens, and these events are collected into a single repository known as a log file (although it could actually be a database). There are only four components to discuss for data collection, and each provides a pretty straightforward function. Here are the functional components:


Fig 1. Agent data collector


Fig 2. Direct connections to the device


Fig 3. Log file collection

  1. Source: There are many different sources – including applications, operating systems, firewalls, routers & switches, intrusion detection systems, access control software, and virtual machines – that generate data. We can even collect network traffic, either directly from the network or from routers that support Netflow-style feeds.
  2. Data: This is the artifact telling us what actually happened. The data could be an event, which is nothing more than a finite number of data elements to describe what happened. For example, this might record someone logging into the system or a service failure. Minimum event data includes the network address, port number, device/host name, service type, operation being performed, result of the operation (success or error code), user who performed the operation, and timestamp. Or the data might just be configuration information or device status. In practice, event logs are pretty consistent across different sources – they all provide this basic information. But each offers additional data, including context. Additional data types may include things such as NetFlow records and configuration files. In practice, most of the data gathered will be events and logs, but we don’t want to arbitrarily restrict our scope.
  3. Collector: This connects to a source device, directly or indirectly, to collect the events. Collectors take different forms: they can be agents residing on the source device (Fig. 1), remote code communicating over the network directly with the device (Fig. 2), code writing to a dedicated log repository (Fig. 3), or receivers accepting a log file stream. A collector may be provided by the SIEM vendor or a third party (normally the vendor of the device being monitored). Further, the collector functions differently depending upon the idiosyncrasies of the device. In most cases the source need only be configured once, and events will be pushed directly to the collector or into a neutral log file read by it. In some cases, the collector must continually request that data be sent, polling the source at regular intervals.
  4. Protocol: This is how the collector communicates with the source. This is an oversimplification, of course, but think of it as a language or dialect the two agree upon for communicating events. Unfortunately there are lots of them! Sometimes the collector uses an API to communicate directly with the source (e.g., OPSEC LEA APIs, MS WMI, RPC, or SDEE). Sometimes events are streamed over networking protocols such as SNMP, Netflow, or IPFIX. Sometimes the source drops events into a common file/record format, such as syslog, Windows Event Log, or syslog-ng, which is then read by the collector. Additionally, third-party applications such as Lasso and Snare provide these features as a service.
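To make the collector and protocol components concrete, here is a minimal sketch of a collector-side parser for classic BSD syslog (RFC 3164-style) lines. The field names and regex are our own simplification for illustration – real collectors handle many more formats and edge cases.

```python
# A minimal parser for classic BSD syslog lines (RFC 3164-style),
# the kind of record a file- or stream-based collector ingests.
import re

SYSLOG_RE = re.compile(
    r"<(?P<pri>\d{1,3})>"                            # PRI = facility*8 + severity
    r"(?P<timestamp>\w{3} [ \d]\d \d\d:\d\d:\d\d) "  # e.g. "Jun  2 10:15:00"
    r"(?P<host>\S+) "
    r"(?P<message>.*)"
)

def parse_syslog(line: str) -> dict:
    m = SYSLOG_RE.match(line)
    if not m:
        return {"raw": line}        # keep unparsable events rather than drop them
    event = m.groupdict()
    pri = int(event.pop("pri"))
    event["facility"], event["severity"] = divmod(pri, 8)
    return event

evt = parse_syslog("<34>Jun  2 10:15:00 fw01 deny tcp 10.1.1.9:4432 -> 8.8.8.8:53")
print(evt["host"], evt["facility"], evt["severity"])  # fw01 4 2
```

Even in this toy version you can see the UN-interpreter problem coming: every source that deviates from the common layout needs its own parsing rules.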

Data collection is conceptually simple, but the thousands of potential variations make implementation a complex mess. It resembles a United Nations meeting: you have a whole bunch of people talking in different languages, each with a particular agenda of items they feel are important, and different ways they want to communicate information. Some are loquacious and won’t shut up, while others need to be poked and prodded just to extract the simplest information. In a nutshell, it’s up to the SIEM and Log Management platforms to act as the interpreters, gathering the information and putting it into some useful form.

Tradeoffs

Each model for data collection has trade-offs. Agents can be a powerful proxy, allowing the SIEM platform to use robust (sometimes proprietary) connection protocols to safely and reliably move information off devices; in this scenario device setup and configuration is handled during agent installation. Agents can also take full advantage of native device features, and can tune and filter the event stream. But agents have fallen out of favor somewhat. SIEM installations cover thousands of devices, which means agents can be a maintenance nightmare, requiring considerable time to install and maintain. Further, agents’ processing and data storage requirements on the device can affect stability and performance. Finally, most agents require administrative access, which creates an additional security concern on each device.

Another common technique streams events to log files, such as syslog or the Windows Event Log. These files may reside on the device, be streamed to another server, or be sent directly to the log management system. The benefit of this method is that data arrives already formatted in a common protocol and layout. Further, if the events are collected in a file, this removes concerns about synchronization issues and events lost prior to collection – both problems when working directly with some devices. Unfortunately general-purpose logging systems require some data normalization, which can lose detail.
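The “common protocol and layout” benefit is visible from the source side too: an application that emits events in syslog’s standard format can be ingested by any collector that speaks syslog. A sketch, with the event content invented for illustration:

```python
# Emitting an application event in the common BSD syslog layout (RFC 3164),
# so a generic collector can ingest it without a custom parser.
# The firewall event below is made up for illustration.
import time

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def to_syslog(facility: int, severity: int, host: str, msg: str,
              when: time.struct_time) -> str:
    pri = facility * 8 + severity    # the PRI header packs both codes into one number
    stamp = "{} {:2d} {:02d}:{:02d}:{:02d}".format(
        MONTHS[when.tm_mon - 1], when.tm_mday,
        when.tm_hour, when.tm_min, when.tm_sec)
    return "<{}>{} {} {}".format(pri, stamp, host, msg)

line = to_syslog(4, 2, "fw01", "deny tcp 10.1.1.9:4432 -> 8.8.8.8:53",
                 time.struct_time((2010, 6, 2, 10, 15, 0, 2, 153, -1)))
print(line)  # <34>Jun  2 10:15:00 fw01 deny tcp 10.1.1.9:4432 -> 8.8.8.8:53
```

Note what the format does not carry: everything device-specific gets squeezed into the free-text message field, which is exactly where the detail loss mentioned above comes from.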

Some older devices, especially dedicated control systems, simply do not offer full-featured logging, and require API-level integration to collect events. These specialized devices are much more difficult to work with, and require dedicated full-time connections to collect event trails, creating both a maintenance nightmare and a performance penalty on the devices. In these cases you do not have a choice – you need a synchronous connection in order to capture events.
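For devices like these, the collector ends up in a poll loop against the device API, tracking a checkpoint so events are neither re-fetched nor dropped. A sketch of the idea, with the device API stubbed out since every vendor’s is different:

```python
# Sketch of a polling collector for a device that only exposes events
# through an API. `DeviceAPI` is a stand-in -- every vendor's API differs.

class DeviceAPI:
    """Stub device: hands back (sequence_number, event) pairs."""
    def __init__(self):
        self._events = [(1, "login admin"), (2, "config change"), (3, "logout admin")]

    def fetch_since(self, last_seq: int):
        return [(s, e) for s, e in self._events if s > last_seq]

def poll_once(device: DeviceAPI, checkpoint: int, sink: list) -> int:
    """One poll cycle: fetch new events, hand them to the sink,
    and advance the checkpoint so nothing is fetched twice."""
    for seq, event in device.fetch_since(checkpoint):
        sink.append(event)
        checkpoint = seq
    return checkpoint

collected: list = []
cp = poll_once(DeviceAPI(), 0, collected)
print(cp, collected)  # 3 ['login admin', 'config change', 'logout admin']
```

A real collector runs this loop continuously – the dedicated, always-on connection that makes these sources a maintenance and performance headache.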

Understand that data collection is not an either/or proposition. Depending on the breadth of your monitoring efforts, you may need to use every technique on some subset of device types and applications. Go into the project with your eyes open, recognizing the different types of collection, and the associated nuances and complexity of each.

In the next post we’ll talk about what to do with all this collected data: prepare it for analysis, which means normalization.

—Adrian Lane

A Phish Called Tabby

By Mike Rothman

Thanks to Aza Raskin, this week we learned of a new phishing attack, dubbed “tabnabbing” by Brian Krebs. It opens a tab (unbeknownst to the user), changes the favicon, and does a great job of impersonating a web page – a webmail client, a bank site, or any other phishing target. Through the magic of JavaScript, the tabs can be controlled, and the attack is very hard to detect because it preys on users’ familiarity with common webmail and banking interfaces.

So what do you do? You can run NoScript in your Firefox browser to prevent the JavaScript from running (unless you idiotically allowed JavaScript on a compromised page). Another option is leveraging a password manager. Both Rich and I have professed our love for 1Password on the Mac. 1Password puts a button in your browser; when logging in, it brings up a choice of credentials for that specific domain and automatically fills in the form. So when I go to Gmail, logging in is as easy as choosing one of the 4 separate logins I use on google.com domains.

Now if I navigate to the phishing site, which looks exactly like Gmail, I’d still be protected. 1Password would not show me any stored logins for that domain, since presumably the phisher must use a different domain. This isn’t foolproof because the phisher could compromise the main domain, host the page there, and then I’m hosed. I could also manually open up 1Password and copy/paste the login credentials, but that’s pretty unlikely. I’d instantly know something was funky if my logins were not accessible, and I’d investigate. Both of these scenarios are edge cases and I believe in a majority of situations I’d be protected.
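The protection comes from strict domain matching: the manager only offers credentials whose stored domain matches the site you are actually on, so a lookalike domain matches nothing. A simplified sketch of the idea – 1Password’s real matching is more sophisticated, and the naive “last two labels” rule below mishandles TLDs like .co.uk:

```python
# Why a password manager resists phishing: credentials are keyed to a
# domain, and a lookalike domain matches nothing. Naive sketch only --
# real managers use proper public-suffix lists, not "last two labels".
from urllib.parse import urlparse

VAULT = {"google.com": ("mike", "extremely-long-random-password")}

def registered_domain(host: str) -> str:
    return ".".join(host.split(".")[-2:])   # naive eTLD+1; breaks on .co.uk etc.

def logins_for(url: str):
    host = urlparse(url).hostname or ""
    return VAULT.get(registered_domain(host))

print(logins_for("https://mail.google.com/mail"))    # credentials offered
print(logins_for("https://gmail.secure-login.net"))  # None -- the phishing page gets nothing
```

The empty credential list is the tell: if the manager suddenly has nothing to offer on a page that looks like Gmail, something is funky.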

I’m not familiar with password managers on Windows, but if they have similar capabilities, we highly recommend you use one. So not only can I use an extremely long password on each sensitive site, I get some phishing protection as a bonus. Nice.

—Mike Rothman