Monday, October 12, 2009

Personal Information Dump

By Adrian Lane

Interesting story of a San Francisco commercial landlord who found 46 boxes of personal information and financial data for thousands of people left behind by a failed title company.

The boxes were the detritus of what was until last year a thriving business, Financial Title. Then the economy tanked, and the company folded up its locations all across California, including the one Tookoian rented to it.

“They basically abruptly closed shop,” he said as he walked past the company’s logo still affixed to a white wall. “Turned the lights off, closed the door and walked away.”

Despite all of the data breaches and crazy stuff we see in the data security profession, I am still shocked at this type of carelessness. I expect to see prosecutors go after the owners of the company for failure to exercise their custodial responsibilities for these records.

Ridout says the Federal Trade Commission has implemented new laws requiring businesses to properly dispose of sensitive personal information. So far, an Illinois mortgage company was fined $50,000 for throwing personal records in a dumpster. But fines like that are rare.

And after his good deed of having the records destroyed, the landlord still had to pay the bill. Perhaps the FTC will set an example in this case.

—Adrian Lane

Friday, October 09, 2009

Friday Summary - October 9, 2009

By Adrian Lane

A lot of “not” this week. I was not at SecTor, although I understand it was a good time. I am not going to Oracle Open World. I should be going, but too many projects are either just beginning or still unfinished for me to travel to the Bay Area, visit old friends, and find a good bar to hang out at. That is lots of fun I will not be having. I will not be going to Atlanta in November, as the TechTarget event for data security has been knocked off the calendar. And I am not taking a free Mexican holiday in Puerta de Cancun or wherever Rich is enjoying himself. Oh well, the weather has been awesome in Phoenix.

With the posts for Dark Reading this week I spent a bunch of time rummaging around for old database versions and looking through notes for database audit performance testing. Some of the old Oracle 7.3 tests with nearly 50% transactional degradation still seem unreal, but I guess it should not be surprising that auditing features in older databases are a problem. They were not designed to audit transactions the way we do today. They were designed to capture a sample of activity so administrators could understand how people were using the database. Performance and resource allocation were the end goals. Once a sample was collected, auditing was turned off. Security was not really a consideration, and no thought was given to compliance. Yet the priorities have been turned upside down: these features now fill a critical compliance need, but require careful deployment.

While I was at RSA this year, one database vendor pointed out that some security vendors were citing this 50% penalty as what you could expect. Bollocks! Database security and compliance vendors who do not use native database auditing would like you to embrace this performance myth. They have a competitive offering to sell, so the more people fear performance degradation, the better their odds of selling you an alternative for the task. I hear DBAs complain a lot about using native auditing features because it used to be a huge performance problem, and they would field complaints from database and application users.

Auditing produces a lot of data. Something has to be done with that data. It needs to be parsed for significant events, reported on, acted upon, erased or backed up, or some combination thereof. In the past, database administrators performed these functions manually, or wrote scripts to partially automate the responsibility, and rewrote them any time something within IT changed. As a form of self-preservation, DBAs in general do not like accepting this responsibility. And I admit, it takes a little time to get it set up right, and you may even discover some settings to be counter-intuitive. However, auditing is a powerful tool and it should not be dismissed out of hand. It is not my first choice for database security; no way, no how! But for compliance reporting and control validation, especially for SOX, it’s really effective. Plus, much of this burden can be removed by using third party vendors to handle the setup, data extraction, cleanup, and reporting.

Anyway, enough about database auditing. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Adam in response to Mortman’s Online Fraud Report:

It’s sort of hard to answer without knowing more about what data he has, but what I’d like is raw data, anonymized to the extent needed, and shared in both data and analyzed forms, so other people can apply their own analysis to the data.

—Adrian Lane

Wednesday, October 07, 2009

Online Fraud Report: What Would You Want To See?

By David Mortman

So a buddy of mine from back when I was on the customer side contacted me recently. He’s at a new company doing some very interesting work on detecting certain classes of online fraud and measuring the amount of malware on websites. So far he’s gathered some fascinating data on just how bad the problem is, and I’m trying to convince him that he should start publishing some of his aggregate data in a quarterly or semi-annual report. He is very interested, but would love some community input on what the report should look like, which brings me to you.

Some of what should be in such a report is obvious – such as the rate of detected fraud, overall and by industry vertical. The rest isn’t so clear, and that’s where you all come in. Put on whatever hat you like – CISO, CFO, security researcher, risk officer, consumer, or whatever else – and tell me what you would like to see in such a report. Are there things you hate about other reports? Or things you wish they covered which they never do? Throw out your requests, rants, comments, ideas, and questions in the comments and I’ll collect them all together and summarize them in a future post. If this really takes off, I’ll move it over to the forums.

—David Mortman

Tuesday, October 06, 2009

Visa’s Data Field Encryption

By Adrian Lane

I was reading Martin McKeay’s blog this morning and saw his reference to Visa’s Data Field Encryption white paper. Martin’s point that Visa is the author, rather than the PCI Council, is a good one. Now that I’ve read the paper, I don’t think Visa is putting it out as a sort of litmus test on behalf of the Council; instead Visa is taking a stand on which technologies they want endorsed. And if that is the case, Rich’s prediction that “Tokenization Will Become the Dominant Payment Transaction Architecture” will come true far faster than we anticipated.

A couple observations about the paper:

… data field encryption renders cardholder data useless to criminals in the event of a merchant data breach …

Use robust key management solutions…

and

Visa has developed best practices to assist merchants in evaluating the new encryption…

Use an alternate account or transaction identifier for business processes that requires[sic] the primary account number…

The recommendations could describe tokenization or format preserving encryption, but it looks to me like they have tokenization in mind. And by tokenization I mean the PAN and other sensitive data are fully encrypted at the POS, and the response back to the merchant is a token. I like the fact that their goals do not dictate technology choices, and are worded in such a way that they should not be obsolete within a year.

But the document appears to have been rushed to publication. For example, goal #4 is to protect devices that perform cryptographic operations against physical or logical compromise. It’s the cryptographic operations you want to protect; the device should be considered expendable and not sensitive to compromise.

Similarly, goal #1 states:

Limit cleartext availability of cardholder data and sensitive authentication data to the point of encryption and the point of decryption.

But where is the “point of encryption”? It’s one thing to be at the POS terminal, but if this is a web transaction environment, does that mean it happens at the web server? At the browser level? Somewhere else? Reading the document it seems clear that the focus is on POS and not web-based transactional security, which looks like a big mistake to me.

Martin already pointed out that the authors lumped encryption and hashing into a single domain, but that may have been deliberate, to make the document easier to read. But if clarity was the goal, who thought “Data Field Encryption” was a good term? It does not describe what is being protected. And with tokenization, encryption is only part of the picture. If you are a web application or database developer, you will see why this phrase is really inappropriate.

Make no mistake – Visa has put a stake in the ground and it will be interesting to see how the PCI Council reacts.

—Adrian Lane

Database Audit Events

By Adrian Lane

I have attended a lot of database developer events and DBA forums around the country in the last 6 years. One benefit of attending lectures by database administrators for database administrators is the wealth of information on tools, tricks, and tips for managing databases. And not just the simple administrative tasks, but clever ways to accomplish more complex tasks. A lot of these tricks never seem to make it into the mainstream, instead remaining part of the DBA’s exclusive repertoire. I wish I had kept better notes. And unfortunately I am not going to Oracle Open World, but I wanted to for this very reason.

As part of a presentation I worked on a number of years ago at one of these events, I provided an overview of the common elements in the audit logs. I wanted to show how to comb through logs to find events of interest. I have placed a catalog of audit events for several relational database platforms into the Database Security section of our research library. For those of you interested in “roll your own” database auditing, it may be useful. I have listed out the auditable events for Sybase, Oracle, SQL Server, and DB2. I had a small shell script that would grab the events I was interested in from the audit trail, place them into a separate file, and then clean up the reviewed audit logs or event monitor resource space. What you choose to do with the data will vary.
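For anyone curious what that script looked like, here is a rough Python equivalent. It is only a sketch, assuming the audit trail has already been exported to a flat text log; the file name and event keywords are placeholders you would swap for the events cataloged in the library files.

```python
import shutil
from pathlib import Path

# Placeholder inputs: an exported audit log and the event types you care about.
AUDIT_LOG = Path("audit_trail.log")
EVENTS_OF_INTEREST = ("DROP TABLE", "GRANT", "ALTER USER", "FAILED LOGIN")

def extract_events(log_path: Path, keep_path: Path) -> int:
    """Copy lines mentioning interesting events into a separate file; return the count."""
    kept = 0
    with log_path.open() as src, keep_path.open("w") as dst:
        for line in src:
            if any(event in line.upper() for event in EVENTS_OF_INTEREST):
                dst.write(line)
                kept += 1
    return kept

if __name__ == "__main__":
    count = extract_events(AUDIT_LOG, Path("events_of_interest.log"))
    print(f"kept {count} events of interest")
    # Archive and truncate the reviewed log to reclaim space, as the old shell
    # script did; adjust retention to whatever your audit policy requires.
    shutil.copy(AUDIT_LOG, AUDIT_LOG.with_suffix(".reviewed"))
    AUDIT_LOG.write_text("")
```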

As part of my latest submission to Dark Reading, I referred to the essential auditable events most commonly required for regulatory and security efforts. These files list out the specifics for each of those suggestions. If anyone in the community would like to contribute similar information for MySQL or even Postgres, I will add those to the library as well.

—Adrian Lane

Friday, October 02, 2009

Friday Summary- October 2, 2009

By Rich

I hate to admit it, but I have a bad habit of dropping administrative tasks or business development to focus on the research. It’s kind of like my programmer days – I loved coding, but hated debugging and documentation. But eventually I realize I haven’t invoiced for a quarter, or have forgotten to tell prospects we have stuff they can pay for. Those are the nights I don’t sleep very well.

Thus I’ve spent a fair bit of time this week catching up on things. I still have more invoices to push out, and I spent a lot of time editing materials for our next papers and my contributions to the next version of the Cloud Security Alliance Guidance report. I even updated our retainer programs for users, vendors, and investors. Not that I’ve sent them to anyone – I sort of hate getting intrusive sales calls, so I assume I’m annoying someone if I mention they can pay me for stuff. Probably not the best trait for an entrepreneur.

So I’m looking forward to a little downtime next week as my wife and I head off for vacation. It starts tonight at a black tie charity event at the Phoenix Zoo (first time I’ll be in a penguin suit in something like 10 years). Then, on Monday, we head to Puerto Vallarta for a 5-day vacation we won in a raffle at… the Phoenix Zoo. It’s our first time away from the baby since we had her, so odds are that instead of hanging out at the beach or diving we’ll be sleeping about 20 hours a day.

We’ll see how that goes.

And with that, on to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Slavik in response to SQL Injection Prevention:

Hi Adrian, good stuff.

I just wanted to point out that the fact that you use stored procedures (or packages) is not in itself a protection against SQL injection. It’s enough to briefly glance at the many examples on milw0rm to see how even Oracle with their supplied built-in packages can make mistakes and be vulnerable to SQL injections that will allow an attacker to completely control the database. I agree that if you use only static queries then you’re safe inside the procedure but it does not make your web application safe (especially with databases that support multiple commands in the same call like SQL server batches). Of course, if you use dynamic queries, it’s even worse. Unfortunately, there are times when dynamic queries are necessary and it makes the code very difficult to write securely.

The most important advice regarding SQL injection I would give developers is to use bind variables (parametrized queries) in their applications. There are many frameworks out there that encourage such usage and developers should utilize them.

—Rich

Thursday, October 01, 2009

SQL Injection Prevention

By Adrian Lane

The team over at Dark Reading was kind enough to invite me to blog on their Database Security portal. This week I started a mini-series on threat detection and prevention by leveraging native database features, beginning with a post on using stored procedures to combat SQL injection attacks. But those posts are fairly short and written for a different audience. Here, I will be cross-posting additional points and advanced content I left out of those articles.

My goal was to demystify how stored procedures can help combat SQL injection. There are other options to detect and block SQL injection attacks, many of which have been in use with limited success for some time now.

What can you do about SQL injection? You can patch your database to block known threats. You can buy firewalls to try to intercept these rogue statements, but application and general network firewalls have shown only limited effectiveness. You need a very clear signature for the threat, as well as a policy written so that it does not break your application. Many Database Activity Monitoring vendors can block queries before they arrive at the database. Early DAM versions detected SQL injection based on exact pattern matching that was easy for attackers to avoid, back when DAM policy management could not accommodate business policy issues; this resulted in too many false negatives, too many false positives, and deadlocked applications. These platforms are now much better at policy management and enforcement. There are memory scanners to examine statement execution and parameters, as well as lexical and content analyzers to detect and block (with fair success). Some employ a hybrid approach, with assessment to detect known vulnerabilities, and database/application monitoring to provide ‘virtual patching’ as a complement.

I have witnessed many presentations at conferences during the last two years demonstrating how a SQL injection attack works. Many vendors have also posted examples on their web sites showing how easy it is to compromise an unsecured database with SQL injection. At the end of the session, “how to fix” is left dangling. “Buy our product and we will fix this problem for you” is often the implication. That may be true or false, but you do not necessarily need a product to do this, and a bolt-on product is not always the best way. Most are reactive and not 100% effective.

As an application developer and database designer, I always took SQL injection attacks personally. The only reason a SQL injection attack succeeded was a flaw in my code, and probably a bad one. The applications I produced in the late 90s and early 2000s were immune to this form of attack (unless someone snuck an ad hoc query into the code somewhere without validating the inputs) because of stored procedures. Some of you might note this was before SQL injection was fashionable, but as part of my testing efforts I adopted early forms of fuzzing scripts to do range testing and try everything possible to get the stored procedures to crash. Binary inputs and obtuse ‘where’ clauses were two such variations. I used to write a lot of code in stored procedures and packages. And I used to curse and swear a lot, as packages (Oracle’s version, anyway) are syntactically challenging. Demanding. Downright rigorous in enforcing data type requirements, making it very difficult to transition data to and from Java applications. But it was worth it. Stored procedures are incredibly effective at stopping SQL injection, though they can be a pain in the ass for more complex objects. From the programmer and DBA perspectives, they are also a powerful way to control the behavior of queries in your database. And if you have ever had a junior programmer put a three-table Cartesian product select statement into a production database, you understand why having only certified queries stored in your database as part of quality control is a very good thing (you don’t need a botnet to DDoS a database, just an exuberant young programmer writing the query to end all queries). And don’t get me started on the performance gains stored procedures offer, or this would be a five-page post …
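For application code that cannot route everything through stored procedures, bind variables (parameterized queries) provide much of the same protection. Below is a minimal Python sketch using the standard DB-API placeholder style; the users table, column names, and driver are illustrative assumptions, and the placeholder character varies by driver.

```python
import sqlite3  # any DB-API 2.0 driver works the same way; sqlite3 uses "?" placeholders

def find_user(conn, username: str):
    """Look up a user with a bound parameter instead of string concatenation.

    The driver sends the value separately from the SQL text, so input like
    "alice' OR '1'='1" is treated as data, never as SQL.
    """
    cur = conn.cursor()
    cur.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Vulnerable equivalent, for contrast -- never do this:
#   cur.execute("SELECT id, name FROM users WHERE name = '" + username + "'")
```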

If you like waiting around for your next SQL injection 0-day patch, keep doing what you have been doing.

—Adrian Lane

Wednesday, September 30, 2009

Tokenization Will Become the Dominant Payment Transaction Architecture

By Rich

I realize I might be dating myself a bit, but to this day I still miss the short-lived video arcade culture of the 1980’s. Aside from the excitement of playing on “big hardware” that far exceeded my Atari 2600 or C64 back home (still less powerful than the watch on my wrist today), I enjoyed the culture of lining up my quarters or piling around someone hitting some ridiculous level of Tempest.

One thing I didn’t really like was the whole “token” thing. Rather than playing with quarters, some arcades (pioneered by the likes of that other Big Mouse) issued tokens that would only work on their machines. On the upside you would occasionally get 5 tokens for a dollar, but overall it was frustrating as a kid. Years later I realized that tokens were a parental security control – worthless for anything other than playing games in that exact location, they kept the little ones from buying gobs of candy two heartbeats after a pile of quarters hit their hands.

With the increasing focus on payment transaction security due to the quantum-entangled forces of breaches and PCI, we are seeing a revitalization of tokenization as a security control. I believe it will become the dominant credit card transaction processing architecture until we finally dump our current plain-text, PAN-based system.

I first encountered the idea a few years ago while talking with a top-tier retailer about database encryption. Rather than trying to encrypt all credit card data in all their databases, they were exploring the possibility of concentrating the numbers in one master database, and then replacing the card numbers with “tokens” in all the other systems. The master database would be highly hardened and encrypted, and keep track of which token matched which credit card. Other systems would send the tokens to the master system for processing, which would then interface with the external transaction processing systems.

By swapping out all the card numbers, they could focus most of their security efforts on one central system that’s far easier to control. Sure, someone might be able to hack the application logic of some server and kick off an illicit payment, but they’d have to crack the hardened master server to get card numbers for any widespread fraud.

We’ve written about it a little bit in other posts, and I have often recommended it directly to users, but I probably screwed up by not pushing the concept on a wider basis. Tokenization solves far more problems than trying to encrypt in place, and while complex it is still generally easier to implement than alternatives. Well-designed tokens fit the structure of credit card numbers, which may require fewer application changes in distributed systems. The assessment scope for PCI is reduced, since card numbers are only in one location, which can reduce associated costs. From a security standpoint, it allows you to focus more effort on one hardened location. Tokenization also reduces data spillage, since there are far fewer locations which use card numbers, and fewer business units that need them for legitimate functions, such as processing refunds (one of the main reasons to store card numbers in retail environments).

Today alone we were briefed on two different commercial tokenization offerings – one from RSA and First Data Corp, the other from Voltage. The RSA/FDC product is a partnership where RSA provides the encryption/tokenization tech FDC uses in their processing service, while Voltage offers tokenization as an option to their Format Preserving Encryption technology. (Voltage is also partnering with Heartland Payment Systems on the processing side, but that deal uses their encryption offering rather than tokenization).

There are some extremely interesting things you can do with tokenization. For example, with the RSA/FDC offering, the card number is encrypted on collection at the point of sale terminal with the public key of the tokenization service, then sent to the tokenization server, which returns a token that still “resembles” a card number (it passes the Luhn check and might even include the same last 4 digits – the rest is random). The real card number is stored in a highly secured database up at the processor (FDC). The token is the stored value on the merchant site, and since it’s paired with the real number on the processor side, it can still be used for refunds and such. This particular implementation always requires the original card for new purchases, but only the token for anything else.

Thus the real card number is never stored in the clear (or even encrypted) on the merchant side. There’s really nothing to steal, which eliminates any possibility of a card number breach (according to the Data Breach Triangle). The processor (FDC) is still at risk, so they will need to use a different set of technologies to lock down and encrypt the plain text numbers. The numbers still look like real card numbers, reducing any retrofitting requirements for existing applications and databases, but they’re useless for most forms of fraud. This implementation won’t work for recurring payments and such, which they’ll handle differently.
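To make that token format concrete, here is a rough sketch of generating a random, Luhn-valid token that keeps the last four digits. It is purely illustrative: a real service pairs each token with the PAN in a hardened vault, checks for collisions, and never derives the token from the card number in any reversible way.

```python
import secrets

def luhn_checksum(digits: str) -> int:
    """Standard Luhn mod-10 checksum over a string of digits (0 means valid)."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # every second digit from the right gets doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10

def make_token(pan: str) -> str:
    """Return a random token the same length as the PAN that keeps the last
    four digits and still passes the Luhn check."""
    body = [secrets.randbelow(10) for _ in range(len(pan) - 4)]
    for candidate in range(10):             # one digit can always fix the checksum
        body[0] = candidate
        token = "".join(map(str, body)) + pan[-4:]
        if luhn_checksum(token) == 0:
            return token
    raise AssertionError("unreachable: some leading digit always satisfies Luhn")

# Example: token = make_token("4111111111111111")
# The merchant stores only the token; the tokenization service keeps the
# token-to-PAN mapping in its own encrypted database.
```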

Over the past year or so I’ve become a firm believer that tokenization is the future of transaction processing – at least until the card companies get their stuff together and design a stronger system. Encryption is only a stop-gap in most organizations, and once you hit the point where you have to start making application changes anyway, go with tokenization.

Even payment processors should be able to expand use of tokenization, relying on encryption to cover the (few) tokenization databases which still need the PAN.

Messing with your transaction systems, especially legacy databases and applications, is never easy. But once you have to crack them open, it’s hard to find a downside to tokenization.

—Rich

Tuesday, September 29, 2009

Realistic Security

By David J. Meier

Finally, it’s here: my first post! Although I doubt anyone has been holding their breath, I have had a much harder than anticipated time trying to nail down my first topic. This is probably due in part to the much larger and more focused audience at Securosis than I have ever written for in the past. That said, I’d like to thank Rich and Adrian for supporting me in this particular role and I hope to bring a different perspective to Securosis with increased frequency as I move forward.

Last week a situation came up that led to a heated discussion with a colleague (I have a bad habit of forgetting that not everyone enjoys heated debate as much as I do). Actually, the argument only heated up when he mentioned that vulnerability scanning and penetration testing aren’t required to validate a security program. At this point I was thoroughly confused, because when I asked how he could measure the effectiveness of such a security program without those tools, he didn’t have a response. Another bad habit: I prefer debating with someone who actually justifies their positions.

My position is that if you can’t measure or test the effectiveness of your security, you can’t possibly have a functioning security program.

For example, let’s briefly use the Securosis “Building a Web Application Security Program” white paper as a reference. If I take the lifecycle outline (now please turn your PDFs to page 11, class) there’s no possible way I can fulfill the Secure Deployment step without using VA and pen testing to validate our security controls are effective. Similarly, consider the current version of PCI DSS without any pen testing – again you fail in multiple requirement areas. This is the point at which I start formulating a clearer perspective on why we see security failing so frequently in certain organizations.

I believe one of the major reasons we still see this disconnect is that many people have confused compliance, frameworks, and checklists with what’s needed to keep their organizations secure. As a consultant, I see it all the time in my professional engagements. It’s like taking the first draft blueprints for a car, building said car, and assuming everything will work without any engineering, functional, or other tests. What’s interesting is that our compliance requirements are evolving to reflect, and close, this disconnect.

Here’s my thought: year over year compliance is becoming more challenging from a technical perspective. The days of paper-only compliance are now dead. Those who have already been slapped in the face with high visibility breach incidents can probably attest (but never will) that policy said one thing and reality said another. After all they were compliant – it can’t be their fault that they’ve been breached after they complied with the letter of the rules.

Let’s make a clear distinction between how security is viewed from a high level that makes sense (well, at least to me) by defining “paper security” versus “realistic security”. From the perspective of the colleague I was talking with, he believed that all controls and processes on paper would somehow magically roll over into the digital boundaries of infrastructure as he defined them. The problem is: how can anyone write those measures if there isn’t any inherent technology mapping during development of the policies? Likewise how can anyone validate a measure’s existence and future validity without some level of testing? This is exactly the opposite of my definition of realistic security. Realistic security can only be created by mapping technology controls and policies together within the security program, and that’s why we see both the technical and testing requirements growing in the various regulations.

To prove the point that the technical requirements in compliance are only getting better defined, I did some quick spot checking between DSS 1.1 and 1.2.1. Take a quick look at a few of the technically specific things expanded in 1.2.1:

  • 1.3.6 states: ‘…run a port scanner on all TCP ports with “syn reset” or “syn ack” bits set’ – new as of 1.2.
  • 6.5.10 states: “Failure to restrict URL access (Consistently enforce access control in presentation layer and business logic for all URLs.)” – new as of 1.2.
  • 11.1.b states: “If a wireless IDS/IPS is implemented, verify the configuration will generate alerts to personnel” – new as of 1.2.

Anyone can see the changes between 1.1 and 1.2.1 are relatively minor. But think about how, as compliance matures, both its scope and specificity increase. This is why it seems obvious that technical requirements, as well as direct mappings to frameworks and models for security development, will continue to be added and expanded in future revisions of compliance regulations.

This, my friends, is on track with what “realistic security” is to me. It can succinctly be defined as a never-ending Test Driven Development (TDD) methodology applied to a security posture: if it is written in your policy, then you should be able to test and verify it; and if you can’t, don’t, or fail during testing, then you need to address it. Rinse, wash, and repeat. Can you honestly say those reams of printed policy are what you have in place today? C’mon – get real(istic).
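As a toy illustration of that test-driven mindset (the hosts, ports, and policy statements below are made-up placeholders), a policy line like “no Telnet exposed on production web servers” can be turned into a check that runs on every change:

```python
import socket

# Hypothetical policy statements mapped to a verifiable condition: a (host, port)
# pair that is expected to be closed from wherever this test runs.
POLICY = {
    "no Telnet on production web servers": ("web01.example.com", 23),
    "no direct database access from outside the DMZ": ("db01.example.com", 1433),
}

def port_is_closed(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection attempt is refused, unreachable, or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded, so the port is open
    except OSError:
        return True

def test_policy_is_enforced():
    failures = [statement for statement, (host, port) in POLICY.items()
                if not port_is_closed(host, port)]
    assert not failures, f"policy statements failing verification: {failures}"
```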

—David J. Meier

Digital Ant Swarms

By Adrian Lane

A friend of mine emailed yesterday, admonishing me for not writing about the Digital Ants concept discussed on Dailytech. I think it’s because he wanted me to call B.S. on the story. It seems that some security researchers are trying to mimic the behavior of ants in computer defenses to thwart attackers. From the article:

Security researchers found inspiration in the common ant. Describes Wake Forest University Professor of Computer Science Errin Fulp, “In nature, we know that ants defend against threats very successfully. They can ramp up their defense rapidly, and then resume routine behavior quickly after an intruder has been stopped. We were trying to achieve that same framework in a computer system.”

WFU created digital “ants” – utilities that migrate from computer to computer over networks searching for threats. When one locates a threat, others congregate on it, using so-called “swarm intelligence”. The approach allows human researchers to quickly identify and quarantine dangerous files by watching the activity of the ants.

This seems like nature’s reaction du jour. Many have written about the use of ‘helpful viruses’ and viral technologies (cheese worm (PDF), anti-porn worm, wifi worm, etc.) to combat hostile computer worms and viruses. Helpful virus code finds exploits the same way a harmful virus would, but then patches the defect – curing the system instead of reproducing. But the helpful viruses tend to become attack vectors themselves, or ‘fix’ things in very unintended ways, compounding the problem.

Ants behave very differently than viruses. Real ants fill a dual role, both gathering food and defending the colony. Besides access controls, few security products can make this claim. Second, ants can detect threats. Software and systems are only marginally effective at this, even with different pieces operating (hopefully) as a coordinated unit. Finally, ants bite. They have the ability to defend themselves individually, as well as work effectively as a group. In either case they pose a strong deterrent to attack, something seldom seen in the digital domain.

Conceptually I like the idea of being able to respond to a threat systemically, with different parts of the system reacting to different threats. On the threat detection side this makes sense as well, as many subtle attacks can only be identified with information gathered from different parts of the system. SEM/SIEM has slowly been advancing this science for some time now, and it is a core piece of the ADMP concept for web application security, where detection and prevention are systemic. It is not the idea of a swarm that makes it effective, but holistic detection combined with multiple, different reactions by systems that can provide a meaningful response. So I am not saying ant swarming behavior applied to computer security is B.S., but “ramping up responses” is not the real problem – detection and appropriate reactions are.

—Adrian Lane

Monday, September 28, 2009

IDM: It’s A Process

By David Mortman

IDM fascinates me, if only because it is such an important base for a good security program. Despite this, many organizations (even ones with cutting edge technology) haven’t really focused on solving the issues around managing users’ identity. This is, no doubt, in part due to the fact that IDM is hard in the real world. Businesses can have hundreds if not thousands of applications (GM purportedly had over 15,000 apps at one point) and each application itself can have hundreds or thousands of roles within it. Combine this with multiple methods of authentication and authorization, and you have a major problem on your hands which makes digging into the morass challenging to say the least.

I also suspect IDM gets ignored because it does not involve playing with fun toys, and as a result it doesn’t get appropriate attention from the technophiles. Don’t get me wrong – there are some great technologies out there to help solve the problem, but no matter what tools you have at your disposal, IDM is fundamentally not a technology problem but a process issue. I cannot possibly emphasize this enough. In the industry we love to say that security is about People, Process, and Technology. Well, IDM is pretty much all about Process, with People and Technology supporting it. Process is an area that many security folks have trouble with, perhaps due to lack of experience. This is why I generally recommend that security be part of designing the IDM processes, policies, and procedures – but that the actual day to day stuff be handled by the IT operations teams, who have the experience and discipline to make it work properly.

DS had a great comment on my last post, which is well worth reading in its entirety, but today there is one part I’d like to highlight because it nicely shows the general process that should be followed regardless of organization size:

While certainly not exhaustive, the above simple facts can help build a closed loop process.

  1. When someone changes roles, IT gets notified how.
  2. A request is placed by a manager or employee to gain access to a system.
  3. If employee request, manager must(?) approve.
  4. If approved as “in job scope” by manager, system owner approves.
  5. IT (or system owner in decentralized case) provisions necessary access. Requester is notified.

Five steps, not terribly complicated and easy to do, and essentially what happens when someone gets hired. For termination, all you really need are steps 1, 2, and 5 – but in reverse. This process can even work in large decentralized organizations, provided you can figure out (a) the notification/request process for access changes and (b) a work flow process for driving through the above cycle.

(a) is where the Info Sec team has to get outside the IT department and talk to the business. This is huge. I’ve talked in the past about the need for IT to understand the business, and IDM is a great example of why. This isn’t directly about business goals or profit/loss margins, but rather about understanding how the business operates on a day to day basis. Don’t assume that IT knows what applications are being used – in many organizations IT only provides the servers, and sometimes only the servers for the basic infrastructure. So sit down with the various business units and find out what applications/services are being used, what process they are using today to provision users, who is handling that process, and what changes (if any) they’d like to see. This is an opportunity to figure out which applications/services need to be part of your IDM initiative (driven by compliance, audit, corporate mandate, etc.) and which ones currently aren’t relevant. It has the added benefit of discovering where data is flowing, which is key not only to compliance mandates under HIPAA, SOX, and the European Data Directive (to name a few), but also incredibly handy when electronic discovery is necessary. Once all this data has been gathered, you can evaluate the various technologies available and see if they can help. This could be anything from a web app to manage change requests, to workflow (see below), to a full-scale automated access provisioning and de-provisioning system, driven by the approval process.

Once you’ve solved (a), (b) is comparatively straightforward and another place where technology can make life easier. The best part is that your organization likely has something like this deployed for other reasons, so the additional costs should be relatively low. Once your company/department/university/etc. grows to a decent size and/or starts to decentralize, manually following the process will become more and more cumbersome, especially as the number of supported applications goes up. A high rate of job role changes within the organization has a similar effect. So some sort of software that automatically notifies employees when they have tasks will greatly streamline the process and help people get the access they need much more quickly. Workflow software is also a great source of performance metrics and can help provide the necessary logs when dealing with audit or compliance issues.
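To make the closed loop concrete, here is a toy sketch of the kind of state machine such workflow software drives. It is illustrative only; real products add notification, escalation, and hooks into provisioning systems, and the names here are made up.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    REQUESTED = auto()
    MANAGER_APPROVED = auto()
    OWNER_APPROVED = auto()
    PROVISIONED = auto()
    DENIED = auto()

@dataclass
class AccessRequest:
    requester: str
    system: str
    state: State = State.REQUESTED
    history: list = field(default_factory=list)   # doubles as the audit trail

    def _move(self, new_state: State, actor: str) -> None:
        self.history.append((self.state.name, new_state.name, actor))
        self.state = new_state

    def manager_approve(self, manager: str) -> None:
        assert self.state is State.REQUESTED
        self._move(State.MANAGER_APPROVED, manager)

    def owner_approve(self, owner: str) -> None:
        assert self.state is State.MANAGER_APPROVED
        self._move(State.OWNER_APPROVED, owner)

    def provision(self, operator: str) -> None:
        assert self.state is State.OWNER_APPROVED
        self._move(State.PROVISIONED, operator)
        # a real system would notify the requester here (step 5)

    def deny(self, actor: str) -> None:
        self._move(State.DENIED, actor)

# Usage sketch:
#   req = AccessRequest("jdoe", "ERP")
#   req.manager_approve("jdoe_manager"); req.owner_approve("erp_owner"); req.provision("it_ops")
#   req.history now provides the log an auditor would ask for.
```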

As I mentioned above, the business reality for many organizations is far from pristine or clear, so in my next post I’ll explore those issues in more depth. For now, suffice it to say that until you address them, the above process will work best in a small company with fewer apps and authentication methods. If you are involved in a larger, more complex organization, all is not lost. In that case, I highly recommend that you not try to fix things all at once, but start with a group or sub-group within the organization and roll out there first. Once you’ve worked out the kinks, you can roll in more and more groups over time.

—David Mortman

Friday, September 25, 2009

Friday Summary - September 25, 2009

By Rich

I get some priceless email on occasion, and I thought this one was too good not to pass along. Today’s Friday summary introduction is an anonymous guest post … if it missed any cliches I apologize in advance.


Internal Memorandum

To: Employees
From: Mega-Corp Executive Team
Subject: Messaging
Date: Friday, September 25th, 2009

Due to growing media scrutiny and misquotes in major trade publications (because you people can’t get the pitch right), we ask you to be cautious in dealing with the press and analysts. Please. All we ask is that our employees “stay on message.” We, Mega-Corp, Inc., pay (way too much) to have (young imbecilic twits at) large advertising agencies (put down their drugs and alcohol long enough to) work closely with our (also over-paid) CMO to carefully develop media strategies to effectively leverage our assets to increase consumer awareness and increase sales, thus generating a win-win situation for all. We cannot allow rogue employees to torpedo our finely crafted multi-media approach by deviating from the platforms that have been developed. This will ensure that all of our clients receive the same message. It is also designed to increase brand awareness by striking just the right balance between being edgy enough to appeal to the hipper, young crowd while at the same time conveying stability to our long-term clients. It is especially important at the present, due to current market conditions. While our competitors are focused on their competitive disadvantages, we see this current market as a great opportunity for growth. Our marketing campaign has thus placed its emphasis on all of the added benefits that the new product line now offers. We will not allow iconoclastic employees to remain fixated on their complaints that we have increased prices. We believe that the new price line is a bargain for the new features that we have added, and our marketing campaign is designed to educate our clients and potential clients about the new features that they will be receiving. It does not help when our branch-office employees continuously snipe that the clients do not want the new features. That is only true because we have failed to educate them sufficiently about the benefits of those features. We are confident that if we all work together to remain on message and educate our clients and potential clients about what we have to offer, we can expect double-digit growth over the next eight quarters. In doing so, we trust that our competitors will not be able to catch up to our technological advances during that period of time, and thus we will not be forced to offer new products for the next 30 months. In fact, we are so confident of this that we have right-sized our R&D department, thus proving once again that we can be lean and mean. I also know that many of you have seen reports that suggest we plan further layoffs of up to 30%. We have no idea where the 30% figure came from and can say without equivocation that no such reduction is currently planned. Of course we will always remain nimble enough to provide reasonable value to our shareholders, but I know that all of you believe that is our most sacred obligation and would want nothing less. In concluding, I cannot stress enough the importance of staying on message and delivering a united theme to the public as we continue to unveil our marketing strategy across so many different media platforms, including the new emphasis on social media. There can be no doubt that our innovative use of Twitter will in itself dramatically increase sales over the next several quarters. I look forward to seeing those who do remain with us next year when we gather via video-conferencing for the annual employee meeting.

Thank you for your attention in this matter (or you will be part of the next 30%),

Mega-Corp Executive Team

The Death Star,
Moon of Endor,
Outer Rim


And with that, on to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from ds in response to Incomplete Thought: Why is Identity and Access Management Hard?:

I’d argue that most companies have both technology and process problems and cannot (but try to) solve bad process with (often dubious) technology, hence they fail and assume it was the fault of their slick IDM.

I also think that there is a major difference in what a top line IDM product offers when compared to a more niche workflow powered provisioning system, and that the vendors in this market don’t highlight this well enough (e.g., Quest’s ARS may be just the trick for a SMB using only AD for AAA, but they want to sell against the big boys with virtual directories and other complex solutions). Dividing this market makes sense.

After all the dust settles, IDM is a perfect example case of a process problem that could be, but doesn’t have to be, supported by technology to be more efficient. Companies need to nail down some key challenges first:

-How do IT and HR find out about job change activities (joining, moving, leaving). Don’t assume HR will know… sometimes a job change is simply an informal promotion or an assignment to a special project. If there isn’t a solid notification route, fix that first. Doing so builds relationships, surfaces problems that support the effort and ensure buy in at all levels. Then move on.

-What is the motivation for better identity management (e.g., regulatory or commercial compliance?). Knowing this can help constrain initial scope. Without limits, the project will grow too fast and fail.

-What systems are in scope.

-Who in the business “owns” those systems.

-Who in the business “uses” those systems (and why/how).

While certainly not exhaustive, the above simple facts can help build a closed loop process.

  1. When someone changes roles, IT gets notified <how>.
  2. A request is placed by <manager or employee> to gain access to a system
  3. If employee request, manager <must?> approves
  4. If approved as “in job scope” by manager, system owner approves
  5. IT (or system owner in decentralized case) provisions necessary access. Requestor is notified.

Process for new hires, terminations and other elements in the lifecycle are just as easy to think through. Org wrinkles may make them more or less complicated, but essentially the point is to have an approval process where the right folks make the decision. Knowing and keeping track of those folks is a challenge, but not impossible.

Long story short, don’t think technology until the process is in place.

(or not so short … but well stated)

—Rich

Thursday, September 24, 2009

Database Encryption Benchmarking

By Adrian Lane

Database benchmarking is hard to do. Any of you who followed the performance testing wars of the early 90’s, or the adoption and abuse of TPC-C and TPC-D, know that the accuracy of database performance testing is a long-standing sore point. With database encryption the same questions of how to measure performance rear their heads. But in this case there are no standards. That’s not to say the issue is not important to customers – it is. You tell a customer encryption will reduce throughput by 10% or more, and your meeting is over. End of discussion. Just the fear of potential performance issues has hindered the adoption of database encryption.

This is why it is incredibly important for all vendors who offer encryption for databases (OS/file system encryption vendors, drive manufacturers, database vendors, etc.) to be able to say their impact is below 10%. Or as far below that number as they can. I am not sure where it came from, but it’s a mythical number in database circles. Throughout my career I have been doing database performance testing of one variety or another. Sometimes I have come up with a set of tests that I thought exercised the full extent of the system. Other times I created test cases to demonstrate high and low water marks of system performance. Sometimes I captured network traffic at a customer site as a guide, so I could rerun those sessions against the database to get a better understanding of real world performance. But it is never a general use case – it’s only an approximation of that customer and their applications.

When testing database encryption performance several variables come into play:

What queries?

Do users issue 50 select statements for every insert? Are the queries a balance of selects, inserts, and deletes? Do I run the queries back to back as fast as I can, like a batch job, or do I introduce latency between requests to simulate end users? Do I select queries that run only against encrypted data, or all data? Generally when testing database performance, with or without encryption, we select a couple different query profiles to see what types of queries cause problems. I know from experience I can create a test case that will drop throughput by 1%, and another that will drop it by 40% (I actually had a junior programmer unknowingly design and deploy a query with a Cartesian join that crashed our live server instantly, but that’s another story).

What type of encryption?

Full disk? Tablespace? Column level? Table level? Media? Each variant has advantages and disadvantages. Even if they are all using AES-256, there will be differences in performance. Some have hardware acceleration; some limit the total amount of encrypted data by partitioning data sets; others must contend for thread, memory, and processor resources. Column level encryption encrypts less data than full disk encryption, so the comparison is not quite apples to apples. And if the encrypted column is used as an index, it can have a significant impact on query performance, mitigating that advantage.

What percentage of queries are against encrypted data?

Many customers have the ability to partition sensitive data to a database or tablespace that is much smaller than the overall data set. This means that many queries do not require encryption and decryption, and the percentage impact of encryption on these systems is generally quite good. It is not unreasonable to see encryption reduce throughput across the entire database by only 1% over a 24-hour period, while reducing throughput against the encrypted tablespace by 25%.

What is the load on the system?

Encryption algorithms are CPU intensive. They take some memory as well, but it’s CPU you need to pay attention to. With two identical databases, you can get different performance results if the load on one system is high. Sounds obvious, I know, but this is not quite as apparent as you might think. For example, I have seen several cases where the impact of encryption on transactional throughput was a mere 5%, but the CPU load grew by 40%. If the CPUs on the system do not have sufficient ‘headroom’, you will slow down encryption, overall read/write requests (both encrypted and unencrypted), and the system’s transactional throughput over time. Encryption’s direct slowdown of encrypted traffic is quite different than its impact on overall system load and general responsiveness.

How many cryptographic operations can be accomplished during a day is irrelevant. How many cryptographic operations can be accomplished under peak usage during the day is a more important indicator. Resource consumption is not linear, and as you get closer to the limits of the platform, performance and throughput degrade at a greater than linear rate.

How do you know what impact database encryption will have on your database? You don’t. Not until you test. There is simply no way for any vendor of a product to provide a universal test case. You need to test with cases that represent your environment. You will see numbers published, and they may be accurate, but they seldom reflect your environment and so are generally useless.
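For what it’s worth, the kind of harness I have in mind does not need to be complicated. The sketch below replays a query mix for a fixed window and reports transactions per second; the connections, query list, and think time are placeholders for your own workload, and you would run it against encrypted and unencrypted configurations at both idle and peak load.

```python
import time

def run_mix(conn, queries, duration_sec=60, think_time=0.0):
    """Replay a query mix for a fixed wall-clock window and return transactions/second.

    `conn` is any DB-API connection and `queries` is a list of (sql, params)
    tuples representing your own select/insert/delete balance."""
    cur = conn.cursor()
    executed = 0
    deadline = time.monotonic() + duration_sec
    while time.monotonic() < deadline:
        for sql, params in queries:
            cur.execute(sql, params)
            if sql.lstrip().upper().startswith("SELECT"):
                cur.fetchall()          # drain the result set
            else:
                conn.commit()           # writes only count once committed
            executed += 1
            if think_time:
                time.sleep(think_time)  # simulate end users instead of a batch job
    return executed / duration_sec

# Usage sketch (all placeholders): compare the same mix before and after
# enabling encryption, and watch CPU utilization while it runs.
#   baseline_tps  = run_mix(plain_conn, my_query_mix)
#   encrypted_tps = run_mix(encrypted_conn, my_query_mix)
```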

—Adrian Lane

A Bit on the State of Security Metrics

By Rich

Everyone in the security industry seems to agree that metrics are important, but we continually spin our wheels in circular debates on how to go about them. During one such email debate I sent the following. I think it does a reasonable job of encapsulating where we’re at:

  1. Until Skynet takes over, all decisions, with metrics or without, rely on human qualitative judgement. This is often true even for automated systems, since they rely on models and decision trees programmed by humans, reflecting the biases of the designer.
  2. This doesn’t mean we shouldn’t strive for better metrics.
  3. Metrics fall into two categories – objective/measurable (e.g., number of systems, number of attacks), and subjective (risk ratings). Both have their places.
  4. Smaller “units” of measurement tend to be more precise and accurate, but more difficult to collect and compile to make decisions… and at that point we tend to introduce more bias. For example, in Project Quant we came up with over 100 potential metrics to measure the costs of patch management, but collecting every one of them might cost more than your patching program. Thus we had to identify key metrics and rollups (bias), which also reduces accuracy and precision in calculating total costs. It’s always a trade-off (we’d love to do future studies comparing the results of using all metrics vs. key metrics, to see if the deviation is material).
  5. Security is a complex system based on a combination of biological (people) and computing elements. Thus our ability to model will always have a degree of fuzziness. Heck, even doctors struggle to understand how a drug will affect a single individual (that’s why some people need medical attention 4 hours after taking the blue pill, but most don’t).
  6. We still need to strive for better security metrics and models.

My personal opinion is that we waste far too much time on the fuzziest aspects of security (ALE, anyone?), instead of focusing on more constrained areas where we might be able to answer real questions. We’re trying to measure broad risk without building the foundations to determine which security controls we should be using in the first place.

—Rich

Stupid FUD: Weird Nominum Interview

By Rich

We see a lot of FUD on a daily basis here in the security industry, and it’s rarely worth blogging about. But for whatever reason this one managed to get under my skin.

Nominum is a commercial DNS vendor that normally targets large enterprises and ISPs. Their DNS server software includes more features than the usual BIND installation, and was originally designed to run in high-assurance environments. From what I know, it’s a decent product. But that doesn’t excuse the stupid statements from one of their executives in this interview that’s been all over the interwebs the past couple days:

Q: In the announcement for Nominum’s new Skye cloud DNS services, you say Skye ‘closes a key weakness in the internet’. What is that weakness?

A: Freeware legacy DNS is the internet’s dirty little secret – and it’s not even little, it’s probably a big secret. Because if you think of all the places outside of where Nominum is today – whether it’s the majority of enterprise accounts or some of the smaller ISPs – they all have essentially been running freeware up until now. Given all the nasty things that have happened this year, freeware is a recipe for problems, and it’s just going to get worse.

Q: Are you talking about open-source software?

A: Correct. So, whether it’s Eircom in Ireland or a Brazilian ISP that was attacked earlier this year, all of them were using some variant of freeware. Freeware is not akin to malware, but is opening up those customers to problems.

By virtue of something being open source, it has to be open to everybody to look into. I can’t keep secrets in there. But if I have a commercial-grade software product, then all of that is closed off, and so things are not visible to the hacker.

Nominum software was written 100 percent from the ground up, and by having software with source code that is not open for everybody to look at, it is inherently more secure.

I would respond to them by saying, just look at the facts over the past six months, at the number of vulnerabilities announced and the number of patches that had to made to Bind and freeware products. And Nominum has not had a single known vulnerability in its software.

The word “bullsh**” comes to mind. Rather than going on a rant, I’ll merely include a couple of interesting reference points:

  • Screenshot of a cross-site scripting vulnerability on the Nominum customer portal.
  • Link to a security advisory in 2008. Gee, I guess it’s older than 6 months, but feel free to look at the record of DJBDNS, which wasn’t vulnerable to the DNS vuln.
  • As for closed source commercial code having fewer vulnerabilities than open source, I refer you to everything from the recent SMB2 vulnerability, to pretty much every proprietary platform vs. FOSS in history. There are no statistics to support his position. Okay, maybe if you set the scale for 2 weeks. That might work, “over the past 2 weeks we have had far fewer vulnerabilities than any open source DNS implementation”.

Their product and service are probably good (once they fix that XSS, and any others that are lurking), but what a load of garbage in that interview…

—Rich