Thursday, June 17, 2010

DB Quant: Monitoring Metrics, Part 2, Audit

By Rich

Our next step in the Monitor phase is Audit. While monitoring is a real-time activity that typically requires third-party products, auditing generally relies on native database features; DAM products also offer audit as a core function, but audit is available without them.

Our Audit process is:

  1. Scope
  2. Define
  3. Deploy
  4. Document and Report

Scope

Variable | Notes
Time to identify databases
Time to determine audit requirements | Some of this assessment occurs in the Planning phase

Define

Variable | Notes
Time to select data collection method
Time to identify users, objects, and transactions to monitor
Time to specify filtering
Cost of storage to support auditing

Deploy

Variable | Notes
Time to set up and configure auditing
Time to integrate with existing systems | e.g., SIEM, log management
Time to implement log file cleanup

Document and Report

Variable | Notes
Time to document
Time to define reports
Time to generate reports | Ongoing, depending on reporting cycle
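
To show how these variables combine, here is a minimal sketch of rolling the Audit metrics above up into a cost estimate. Every hour figure, the hourly rate, and the storage cost are hypothetical placeholders, not numbers from the model; substitute your own measurements for each variable.

```python
# Hypothetical roll-up of the Audit metrics into a phase cost estimate.
# All hours, the rate, and the storage figure are placeholder assumptions.
HOURLY_RATE = 85.0  # fully loaded cost per staff hour

audit_hours = {
    # Scope
    "identify_databases": 4,
    "determine_audit_requirements": 6,
    # Define
    "select_data_collection_method": 3,
    "identify_users_objects_transactions": 8,
    "specify_filtering": 2,
    # Deploy
    "set_up_and_configure_auditing": 10,
    "integrate_with_siem_log_mgmt": 6,
    "implement_log_file_cleanup": 2,
    # Document and Report
    "document": 3,
    "define_reports": 4,
    "generate_reports": 1,  # recurring; multiply by reporting cycles
}

storage_cost = 500.0  # storage to support auditing, per period

total = sum(audit_hours.values()) * HOURLY_RATE + storage_cost
print(f"Estimated Audit phase cost: ${total:,.2f}")
```

The point is simply that each process step maps to a measurable variable, so the phase cost is a straightforward sum.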

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Secure Metrics, Part 4, Shield
  31. DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring

—Rich

Wednesday, June 16, 2010

Take Our Data Security Survey & Win an iPad

By Rich

One of the biggest problems in security is that we rarely have a good sense of which controls actually improve security outcomes. This is especially true for newer areas like data security, filled with tools and controls that haven’t been as well tested or widely deployed as things like firewalls.

Thanks to all the great feedback you sent in on our drafts, we are happy to kick off our big data security survey. This one is a bit different than most of the others you’ve seen floating around, because we are focusing more on the effectiveness (technically, the perceived effectiveness) of controls rather than on losses & incidents. We do have some incident-related questions, but only what we need to feed into the effectiveness results.

As with most of our surveys, we’ve set this one up so you can take it anonymously, and all the raw results (anonymized, in spreadsheet format) will be released after our analysis.

Since we have a sponsor for this one (Imperva), we actually have a little budget and will be giving away a 32GB WiFi iPad to a random participant. You don’t need to provide an email address to take the survey, but you do if you want the iPad. If we get a lot of respondents (say over 200) we’ll cough up for more iPads so the odds stay better than the lottery.

Click here to take the survey, and please spread the word. We designed it to only take 10-20 minutes. Even if you aren’t doing a lot with data security, we need your responses to balance the results.

With our surveys we also use something called a “registration code” to keep track of where people found out about it. We use this to get a sense of which social media channels people use. If you take the survey based on this post, please use “Securosis”. If you re-post this link, feel free to make up your own code and email it to us, and we will let you know how many people responded to your referral – get enough and we can give you a custom slice of the data.

Thanks! Our plan is to keep this open for a few weeks.

—Rich

DB Quant: Monitoring Metrics, Part 1, DAM

By Rich

Now that we’ve completed the Secure phase, it’s time to move on to metrics for the Monitor phase. We break this into two parts: Database Activity Monitoring and Auditing.

We initially defined the Database Activity Monitoring process as:

  1. Define
  2. Develop Policies
  3. Deploy
  4. Document

But based on feedback and some overlap with the Planning section, we are updating it to:

  1. Prepare
  2. Deploy
  3. Document
  4. Manage

Prepare

Variable | Notes
Cost of DAM tool
Time to identify and profile monitored database | Identify the database to monitor and its configuration (e.g., DBMS, platform, connection methods)
Time to define rule set | Based on policies determined in the Planning phase. If this wasn’t done during planning, move those metrics into this phase.
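
Since rule definition tends to dominate the Prepare phase, a concrete picture may help. Below is a minimal sketch of a DAM rule set expressed as data; every product has its own policy language, so the field names, rule names, and matching logic here are purely illustrative assumptions.

```python
# Hypothetical DAM rules as data: which users, objects, and transactions
# to watch, and what to do on a match. Field names are illustrative only.
dam_rules = [
    {
        "name": "non_app_access_to_cardholder_data",
        "actions": ["SELECT", "UPDATE", "DELETE"],
        "objects": ["credit_cards", "billing"],   # tables to monitor
        "exclude_users": ["app_service_acct"],    # the expected access path
        "response": "alert",
    },
]

def matches(rule, event):
    """Return True when a monitored database event trips the rule."""
    return (event["action"] in rule["actions"]
            and event["object"] in rule["objects"]
            and event["user"] not in rule["exclude_users"])

# A DBA querying the card table directly should fire the rule:
event = {"user": "dba01", "action": "SELECT", "object": "credit_cards"}
assert matches(dam_rules[0], event)
```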

Deploy

Variable | Notes
Time to deploy DAM tool
Time to configure policies
Time to test deployment

Document

Variable | Notes
Time to document activation and deployed rules
Time to record code changes in source control system

Manage

Variable | Notes
Time to monitor for policy violations
Time to handle incidents
Time to tune policies

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Secure Metrics, Part 4, Shield

—Rich

Incite 6/16/2010: Fenced in

By Mike Rothman

I spent last weekend at my 20th college reunion. I dutifully flew into Ithaca, NY to see many Cornell friends and (fraternity) brothers. It was a great trip, but I did have an experience that reminded me I’m no spring chicken any more.

I guess I could consider the unbelievable hangover I had on Saturday morning as the first indication that I can’t behave like a 20-year-old and expect no consequences. But it gets better. We were closing da Palms on Saturday night and an undergrad called me over because he had about 3/4 of a pitcher left and graciously asked for some help. I scurried over (because who turns down free beer?) and we started chatting.

So he asked me, “When did you graduate?” I responded that I was Class of 1990. He looked at me cross-eyed and I figured he was just respecting my beer drinking prowess. Not so much. He then said, “Wow. I was born in 1989.” Uh. This kid was crapping his pants when I graduated from college. I literally have T-shirts that are older than this guy. That put everything into perspective: 20 years is a long time.

Of course the campus has changed a lot as well. Lots more buildings, but the biggest change was the ever-present fences. In the last year, there have been numerous suicides on campus. It’s actually very sad that kids today can’t deal with the pressure and have no perspective that whatever it is, and however hard it feels, it will pass. So they jump off any number of bridges overlooking Ithaca’s beautiful gorges. Splat.

So the Cornell administration figured one way to stop the jumpers is to put 10-foot-high fences on all the bridges. It now looks more like a detainment camp than an Ivy League university. That’s sad too. Cornell is one of the most beautiful places I’ve ever been. Now not so much. It’s still a campus, it just feels different.

Being the engineers many of my friends are, we tried to come up with better solutions. The ideas (after a number of beers, as I recall) ranged from a big airbag at the bottom of the gorge to a high-speed blower to keep the jumper suspended in air (like those Vegas rides). We also talked about nets and other ideas, though of course none were really feasible.

I guess I’ll just have to become accustomed to the fences, and remember how things were. With the understanding that like my ability to recover quickly from a night of binge drinking, some things are destined to stay in the past.

– Mike.

Photo credits: “Fenced In” originally uploaded by Mike Rothman


Incite 4 U

  1. Getting to know your local Hoover – No, this isn’t about vacuums, but about getting to know your local law enforcement personnel. It seems the FBI is out there educating folks about how and when to get them involved in breaches. The Bureau is also taking a more proactive stance in sharing information with the financials and other corporates. All this is good stuff, and a key part of your incident response plan needs to be interfacing with law enforcement. So defining your organization’s rules of engagement sooner rather than later is a good thing. – MR

  2. String theory – Kelly Jackson Higgins had the most interesting post of the past week, covering Dan Kaminsky’s announcement of Interpolique. Actually, the story is mostly a pre-announcement for Dan’s Black Hat presentation in Vegas later this summer, but the teaser is intriguing. The tool that Kaminsky is describing would automatically format code – with what I assume is some type of pre-compiler – making it far more difficult to execute injected code via string variables. The only burden on the developer would be to define strings in such a way that the pre-compiler recognizes them and corrects the code prior to compilation/execution. That and remembering to run the tool. This is different than something like Arxan, which acts like a linker after compilation. Philosophically both approaches sound like good ideas. But Interpolique should be simpler to implement and deploy, especially if Recursion Ventures can embed the technology into development environments. Dan is dead right that “… string-injection flaws are endemic to the Web, cross all languages …” – the real question is whether this stops injection attacks across all languages. I guess we have to wait until Black Hat to find out. – AL

  3. Hatfields and McCoys, my ass – Evidently there is a feud between Symantec and McAfee. I guess a VP shot another VP and now the clans have been at war for generations. Computer security changes fundamentally every couple years. And fervent competition is always a good thing for customers. Prices go down and innovation goes up. But to say the AV market is a two-horse race seems wrong. To get back to the Coke vs. Pepsi analogy used in this story, in this market Dr. Pepper and 7Up each have a shot because some customers decide they need a fundamentally different drink. Security is about much more than just the endpoint, and if the Hatfields or McCoys take their eyes off the Microsofts and the HPs, they will end up in the annals of history, like the DECs and the Wangs. – MR

  4. Speed may kill… – Sophos is hoping that the security industry has a short memory. They just announced a ‘Live Protection’ offering in their endpoint suite that uses a cloud service to push signature updates. Right, that’s not novel, but they are using speed as the differentiator. So you can get real-time updates. Of course that assumes you won’t have a Bad DAT(e) try to slip your devices a roofie that renders them useless. Needless to say, there is a bunch of marketing hocus-pocus going on here, since Sophos is also talking about their speed gain resulting from not pushing full signature updates, but doing some analysis in the cloud. Ah, calling Dr. Latency – this is something most other endpoint vendors are already doing. In any case, as our friends from McAfee showed a whole bunch of customers, sometimes it pays to wait a few hours before pushing a signature update. – MR

  5. Fail Whale II, the sequel – If you were on Twitter this morning, you were probably up to your eyeballs in AT&T FAIL on iPhone 4 pre-orders. Yes, the accusation that AT&T was deliberately killing its own site because they ran out of iPhones was funny but untrue! Most people simply could not make it through the session without some form of timeout or “service unavailable” message due to an overburdened (underprovisioned) system. But I was reading on Gizmodo about how user sessions were being compromised and you could randomly access other people’s accounts. With screen shots to prove it! As if AT&T’s reputation were not tarnished enough, their Internet capabilities are as bad as their cell coverage. Then AT&T released a support message saying “We have been unable to replicate the issue …” Awesome! Probably because the support techs were actually inside the firewall, rather than outside, where the thrashing load-balancing routers were spitting out customer data to anyone and everyone visiting the site. Their claim that “information displayed did not include call-detail records, social security numbers, or credit card information” is ridiculous. If they could not reproduce the issue, how could they know that information was not accessible, even if it wasn’t (supposed to be) shown on the pricing page? As with many of the nation’s banks, “too big to fail” needs to be supplanted by “too F’ed up to fix”. – AL

  6. Skills you can sell – Since I’ve stepped off the corporate ladder, I’m not overly concerned with career management. My primary concern is to make sure that Rich and Adrian don’t walk me to the (virtual) door. But almost everyone else needs to think about what’s next. Dark Reading has a good analysis of what kinds of skills are in demand now, including incident response, compliance, and security clearances for government work. Surprisingly enough application security isn’t at the top of the list, and given the skills gap between the number of qualified folks and the number of exposed apps, that’s strange to me. But I guess apathy isn’t a good hiring manager and clearly there is application security apathy in spades throughout the industry. – MR

—Mike Rothman

Tuesday, June 15, 2010

Need to know the time? Ask the consultant.

By Mike Rothman

You all know the story. If you need to know the time, ask the consultant, who will then proceed to tell you the time from your own watch. We all laugh, but there is a lot of truth in this joke – as there usually is. Consultants are a necessary evil for many of us. We don’t have the leeway to hire full time employees (especially when Wall Street is still watching employee rolls like hawks), but we have too much work to do. So we bring in some temporary help to get stuff done.

I’ve been a consultant, and the Securosis business still involves some project-oriented work. The problem is that most organizations don’t utilize their consultants properly. My thinking was triggered by a post on infoseccynic.com from 2009 (hat tip to infosecisland) that discusses the most annoying consultants.

It’s easy to blame the consultant when things go wrong, and sometimes they are to blame. You tend to run into the dumb, lame, and lazy consultants; and sometimes it’s too late before you realize the consultant is taking you for a ride. Each of the profiles mentioned in the annoying consultant post is one of those. They waste time, they deliberate, and they ride the fence because it usually ends up resulting in more billable hours for them.

Having been on both sides of the fence with consultants, here are a few tips to get the most out of temporary resources.

  1. Scope tightly – Like it or not, consultants need to be told what to do. Most project managers suck at that, but then get pissed when the consultant doesn’t read their minds. Going into any project, have a tight scoping document and a process for changes.
  2. Fixed price – Contracting for a project at a fixed cost will save you a lot of heartburn. There is no incentive for the consultant to take more time if they are paid the same whether the project takes 5 hours or 10. And if you have specified a process for changes, then there are no surprises if/when the scope evolves.
  3. Demand accountability – This gets back to Management 101. Does the consultant do a weekly or daily status report (depending on the project)? Do you read them the riot act when they miss dates? Some consultants will take you for a ride, but only if you let them.
  4. Change the horse – Many project managers are scared to get rid of an underperforming consultant. One of the reasons you got temporary help in the first place is to avoid HR issues if it doesn’t work out. Make sure you have a clear ‘out’ clause in the contract, but if it isn’t working, don’t waste time deliberating – just move on.
  5. Pay for value – Some folks have very specialized skills and those skills are valuable. But the best folks in the world demand a premium because they’ll get the job done better and faster than someone else. Don’t be penny wise and pound foolish. Get the right person and let them do the work – you’ll save a lot in the long term.
  6. Be accountable – Ultimately the success (or failure) of any project lies at the feet of the project manager. It’s about proper scoping, lining up executive support, working the system, lining up the resources, and getting the most out of the project team. When things go wrong, ultimately it’s the project manager’s fault. Don’t point fingers – fix the problem.

So go back and look at the annoying consultant profiles mentioned in the post above. If any of those folks are on your project teams, man (or woman) up and take care of business. As I’ve said a zillion times over the years, I’m not in the excuses business. Neither are you. Consultants are a necessary evil, but they can be a tremendous resource if utilized effectively.

—Mike Rothman

NSO Quant: Manage Firewall Process Map

By Mike Rothman

After posting the monitor process map to define a high-level process for monitoring firewalls, IDS/IPS, and servers, we can now look at the process for managing these devices. In this post we’ll tackle firewalls.

Remember the Quant process depends on you to keep us honest. Our primary research and experience in the trenches give us a good idea, but there are nuances to fighting these battles every day. So if something seems a bit funky, let us know in the comments.

Keep the philosophy of Quant in mind: the high-level process framework is intended to cover all the tasks involved. That doesn’t mean you need to do everything listed, but this should be a fairly exhaustive list. Individual organizations can then pick and choose the appropriate steps.

When contrasting the monitor process with management, the first thing that becomes apparent is the reality that the policies drive the use of the device(s), but when you need to make a change, the heavy process orientation kicks in. Why? Because making a mistake or unauthorized change can have severe ramifications, like exposing critical data to the entire Internet. Right, that’s bad. So there are a lot of checks and balances in the change management process to ensure any changes are authorized and tested, and won’t create a ripple effect of mayhem.

Policy Management

In this phase, we define what ports, protocols, and (increasingly) applications are allowed to traverse the firewall. Depending on the nature of what is protected and the sophistication of the firewall, the policies may include source and destination addresses, application behavior, and user entitlements.

Policy Review

At times a firewall rule set resembles a junk closet. There are lots of things in there, but no one can quite remember what everything is for or who it belongs to. So it is a best practice to periodically review firewall policy and prune rules that are obsolete, duplicative, risky (providing unwanted exposure), or otherwise unneeded. Catalysts for policy review may include signature updates (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy updates resulting from the operational management of the device (the change management process described below).

Define/Update Policies & Rules

[Chart: Policy Hierarchy]

This entails defining the depth and breadth of the firewall policies – including which ports, protocols, and applications are allowed to traverse the firewall. Time-limited policies may also be deployed, to support short-term access for specific applications or user communities. Additionally, the policies vary depending on the primary use case, which might include perimeter deployment or network segmentation. Logging, alerting, and reporting policies are also defined in this step.

It’s important here to consider the hierarchy of policies that will be implemented on the devices. The chart above shows a sample hierarchy, with organizational policies at the highest level, which may then be supplemented (or even supplanted) by business unit or geographic policies. Those feed the specific policies and/or rules implemented at each location, which then filter down to a particular device. Designing a hierarchy to properly leverage policy inheritance can either dramatically increase or decrease the complexity of the rule set.
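
As a rough illustration of the inheritance idea, the sketch below resolves a device’s effective rule set by merging each level of a hypothetical hierarchy, with more specific levels overriding broader ones. The level names and rules are made up for the example; real firewall managers implement inheritance in their own ways.

```python
# Hypothetical policy hierarchy: more specific levels override broader
# ones when resolving the effective rule set for a single device.
org      = {"outbound_web": "allow", "inbound_any": "deny", "ftp": "deny"}
bu       = {"inbound_vpn": "allow"}      # business unit additions
location = {}                            # nothing special at this site
device   = {"ftp": "allow"}              # documented local exception

def effective_rules(*levels):
    """Merge policies from broadest to most specific; later levels win."""
    merged = {}
    for level in levels:
        merged.update(level)
    return merged

rules = effective_rules(org, bu, location, device)
assert rules["ftp"] == "allow"   # device-level override beats the org default
```

A flat structure would force you to restate every organizational rule on every device; a well designed hierarchy lets most devices inherit nearly everything, which is why it can dramatically shrink (or, done badly, balloon) the rule set.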

Initial deployment of the firewall policies should include a Q/A process to ensure none of the rules impacts the ability of critical applications to communicate either internally or externally.

Document Policy Changes

As the planning stage is an ongoing process, documentation is important for operational and compliance purposes. This step lists and details whatever changes have been made to the policies.

Change Management

This phase encompasses additions, deletions, and other changes to the firewall rules.

Change Request

Based on the activities in the policy management phase, some type of policy/rule change will be requested.

Authorize

Authorization involves ensuring the requestor is allowed to request the change, as well as determining the relative priority of the change to slot into an appropriate change window. Prioritize based on the nature of the policy update and potential risk of the attack occurring. Then build out a deployment schedule based on your prioritization, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if the change involves downtime or changes to application usage.

Test & Assess Impact

Develop test criteria, perform any required testing, analyze the results, and approve the rule change for release once it meets your requirements. Testing should include monitoring the operation and performance impact of the change on the device. Changes may be implemented in “log-only” mode to understand their impact before approving them for production deployment.

Approve

With an understanding of the impact of the change(s), the request is either approved or denied.

Deploy Change

Prepare the target device(s) for deployment, deliver the change, and install/activate.

Confirm

Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure production systems are not disrupted.
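
Taken together, the change management steps above form a linear workflow with a gate at each stage. Here is a minimal sketch of that flow; the state names and change record fields are hypothetical, not any ticketing system’s schema.

```python
# Hypothetical encoding of the change workflow: a request passes each
# gate in order, and failing any gate (e.g., testing shows impact) stops it.
GATES = ["requested", "authorized", "tested", "approved", "deployed", "confirmed"]

def advance(change, passed):
    """Move a change to the next gate, or reject it if this gate failed."""
    if not passed:
        change["state"] = "rejected"
    else:
        change["state"] = GATES[GATES.index(change["state"]) + 1]
    return change

change = {"id": 1042, "rule": "permit tcp/8443 from DMZ", "state": "requested"}
for _ in range(len(GATES) - 1):
    change = advance(change, passed=True)   # every review and test succeeds
assert change["state"] == "confirmed"
```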

Emergency Update

In some cases, including data breach lockdowns and imminent zero-day attacks, a change to the firewall policy rules must be made immediately. A process to short-cut the full change process should be established and documented, ensuring proper authorization for immediate changes and that they can be rolled back in case of unintended consequences.

Other Considerations

Health Monitoring and Maintenance

This phase involves ensuring the firewalls are operational and secure. This includes monitoring the devices for availability and performance. If performance measured here is inadequate, this may drive a hardware upgrade. Additionally, software patches (for either functionality or security) are implemented in this phase. We’ve broken out this step due to the operational nature of the function. This doesn’t relate directly to security or compliance, but can be a significant management cost for these devices, and thus should be modeled separately.

Incident Response/Management

For this Quant project, we are considering the monitoring and management processes as separate, although many organizations (especially managed service providers) consider device management a superset of device monitoring.

So the firewall management process flow does not include incident investigation, response, validation, or management. Please refer to the monitoring process flow for those activities.

We are looking forward to your comments and feedback. Fire away.


Network Security Operations Quant posts

  1. Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them.
  2. NSO Quant: Monitor Process Map

—Mike Rothman

Top 5 Security Tips for Small Business

By Rich

We in the security industry tend to lump small and medium businesses together into “SMB”, but there are massive differences between a 20-person retail outlet and even a 100-person operation. These suggestions are specifically for small businesses with limited resources, based on everything we know about the latest threats and security defenses.

The following advice is not conditional – there really isn’t any safe middle ground, and these recommendations aren’t very expensive. These are designed to limit the chance you will be hit with attacks that compromise your finances or ability to continue business operations, and we’re ignoring everything else:

  1. Update all your computers to the latest operating systems and web browsers – this is Windows 7 or Mac OS X 10.6 as of this writing. On Windows, use at least Internet Explorer 8 or Firefox 3.6 (Firefox isn’t necessarily any more secure than the latest versions of IE). On Macs, use Firefox 3.6. Most small businesses struggle with keeping malware off their computers, and the latest operating systems are far more secure than earlier versions. Windows XP is nearly 10 years old at this point – odds are most of your cars are newer than that.
  2. Turn on automatic updates (Windows Update, or Software Update on Mac) and set them to check and automatically install patches daily. If this breaks software you need, find an alternative program rather than turning off updates. Keeping your system patched is your best security defense, because most attacks exploit known vulnerabilities. But since those vulnerabilities are converted to attacks within hours of becoming public (when the patch is released, if not earlier), you need to patch as quickly as possible.
  3. Use a dedicated computer for your online banking and financial software. Never check email on this system. Never use it to browse any Web site except your bank’s. Never install any applications other than your financial application. You can do this by setting up a non-administrative user account and then setting parental controls to restrict which Web sites it can visit. A cheap computer costs about $200 (for a new PC) or $700 (for a new Mac mini), and this blocks the single most common method bad guys use to steal money from small businesses: compromising a machine and then stealing credentials via a software key logger. Currently, the biggest source of financial losses for small business is malicious software sniffing your online bank credentials, which are then used to transfer funds directly to money mules. This is a better investment than any antivirus program.
  4. Arrange with your bank to require in-person or phone confirmation for any transfers over a certain amount, and check your account daily. Yes, “react faster” is applicable here as well. The sooner you learn about an attempt to move money from your account, the more likely you’ll be able to stop it. Remember that business accounts do not have the same fraud protections as consumer accounts, and if someone transfers your money out because they broke into your online banking account, it is very unlikely you will ever recover the funds.
  5. Buy backup software that supports both local and remote backups, like CrashPlan. Back up locally to hard drives, and keep at least one backup for any major systems off-site but accessible. Then subscribe to the online backup service for any critical business files. Remember that online backups are slow and can take a long time to restore, which is why you want something closer to home. Joe Kissell’s Take Control of Mac OS X Backups is a good resource for developing your backup strategy, even if you are on Windows 7 (which includes some built-in backup features). Hard drives aren’t designed to last more than a few years, and all sorts of mistakes can destroy your data.

Those are my top 5, but here are a few more:

  • Turn on the firewalls on all your computers. They can’t stop all attacks, but do reduce some risks, such as if another computer on the network (which might just mean in the same coffee shop) is compromised by bad guys, or someone connects an infected computer (like a personal laptop) to the network.
  • Have employees use non-administrator accounts (standard users) if at all possible. This also helps limit the chances of those computers being exploited, and if they are, will limit the exploitation.
  • If you have shared computers, use non-administrator accounts and turn on parental controls to restrict what can be installed on them. If possible, don’t even let them browse the web or check email (this really depends on the kind of business you have… if employees complain, buy an iPad or spare computer that isn’t needed for business, and isn’t tied to any other computer). Most exploits today are through email, web browsing, and infected USB devices – this helps with all three.
  • Use an email service that filters spam and viruses before they actually reach your account.
  • If you accept payments/credit cards, use a service and make sure they can document that their setup is PCI compliant, that card numbers are encrypted, and that any remote access they use for support has a unique username and password that is changed every 90 days. Put those requirements into the contract. Failing to take these precautions makes a breach much more likely.
  • Install antivirus from a major vendor (if you are on Windows). There is a reason this is last on the list – you shouldn’t even think about this before doing everything else above.

—Rich

Monday, June 14, 2010

If You Had a 3G iPad Before June 9, Get a New SIM

By Rich

If you keep up with the security news at all, you know that on June 9th the email addresses and device ICC-IDs of at least 114,000 3G iPad subscribers were exposed.

Leaving aside any of the hype around disclosure, FBI investigations, and bad PR, here are the important bits:

  1. We don’t know if bad guys got their hands on this information, but it is safest to assume they did.
  2. For most of you, having your email address potentially exposed isn’t a big deal. It might be a problem for some of the famous and .gov types on the list.
  3. The ICC-ID is the unique code assigned to the SIM card (its layout is sketched after this list). This isn’t necessarily tied to your phone number, but…
  4. It turns out there are trivial ways to convert the ICC-ID into the IMSI here in the US according to Chris Paget (someone who knows about these things).
  5. The IMSI is the main identifier your mobile operator uses to identify your phone, and is tied to your phone number.
  6. If you know an IMSI, and you are a hacker, it greatly aids everything from location tracking to call interception. This is a non-trivial problem, especially for anyone who might be a target of an experienced attacker… like all you .gov types.
  7. You don’t make phone calls on your iPad, but any other 3G data is potentially exposed, as is your location.
  8. Everything you need to know is in this presentation from the Source Boston conference by Nick DePetrillo and Don Bailey: http://www.sourceconference.com/bos10pubs/carmen.pdf
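
For the curious, an ICC-ID follows the ITU-T E.118 layout: an ‘89’ telecom prefix, a country code, an issuer identifier, an account number, and a trailing Luhn check digit. The ICC-ID-to-IMSI conversion itself is carrier-specific and not shown here; this minimal sketch only validates the check digit, using an illustrative sample number.

```python
def luhn_valid(number: str) -> bool:
    """Validate the trailing Luhn check digit (ICC-IDs, payment cards)."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# '89' telecom prefix + country/issuer/account digits + Luhn check digit
print(luhn_valid("89014103211118510720"))  # True for this sample ICC-ID
```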

Realistically, very few iPad 3G owners will be subject to these kinds of attacks, even if bad guys accessed the information, but that doesn’t matter. Replacing the SIM card is an easy fix, and I suggest you call AT&T up and request a new one.

—Rich

Friday, June 11, 2010

Insider Threat Alive and Well

By Mike Rothman

Is it me or has the term “insider threat” disappeared from security marketing vernacular? Clearly insiders are still doing their thing. Check out a recent example of insider fraud at Bank of America. The perpetrator was a phone technical support rep, who would steal account records when someone called for help. Awesome.

Of course, the guy got caught. Evidently trying to sell private sensitive information to an undercover FBI agent is risky. It is good to see law enforcement getting ahead of some issues, but I suspect for every one of these happy endings (since no customers actually lost anything) there are hundreds who get away with it. It’s a good idea to closely monitor your personal banking and credit accounts, and make sure you have an identity theft response plan. Unfortunately it’s not if, but when it happens to you.

Let’s put our corporate security hats back on and remember the reality of our situation. Some attacks cannot be defended against – not proactively, anyway. This crime was committed by a trusted employee with access to sensitive customer data. BofA could not do business without giving folks access to sensitive data. So locking down the data isn’t an answer. It doesn’t seem he used a USB stick or any other technical device to exfiltrate the data, so there isn’t a specific technical control that would have made a difference.

No product can defend against an insider with access and a notepad. The good news is that insiders with notepads don’t scale very well, but that gets back to risk management and spending wisely to protect the most valuable assets from the most likely attack vectors. So even though the industry isn’t really talking about insider threats much anymore (we’ve moved on to more relevant topics like cloud security), fraud from insiders is still happening and always will. Always remember there is no 100% security, so revisit that incident response plan often.

—Mike Rothman

Friday Summary: June 11, 2010

By Adrian Lane

This Monday’s FireStarter prompted a few interesting behind-the-scenes conversations with a handful of security vendors, centering on product strategy in the face of the recent acquisitions in Database Activity Monitoring. The questions were mostly around the state of the database activity monitoring market, where it is going, and how the technology complements and competes with other security technologies. But what I consider a common misconception came up in all of these exchanges, having to do with the motivation behind Oracle’s and IBM’s recent acquisitions. The basic premise went something like: “Of course IBM and Oracle made investments into DAM – they are database vendors. They needed this technology to secure databases and monitor transactions. Microsoft will be next to step up to the plate and acquire one of the remaining DAM vendors.”

Hold on. Not so fast!

Oracle did not make these investments simply as a database vendor looking to secure its database. IBM is a database vendor, but that is more coincidental to the Guardium acquisition than a direct driver for their investment. Security and compliance buyers are the target here. That is a different buying center than for database software, or just about any hardware or business software purchases.

I offered the following parallel to one vendor: if these acquisitions are the database equivalent of SIEM monitoring and auditing the network, then that logic implies we should expect Cisco and Juniper to buy SIEM vendors, but they don’t. It’s more the operations and security management companies who make these investments. The customer of DAM technologies is the operations or security buyer. That’s not the same person who evaluates and purchases database and financial applications. And it’s certainly not a database admin! The DBA is only an evaluator of efficacy and ease of use during a proof of concept.

People think that Oracle and IBM, who made splashes with Secerno and Guardium purchases, were the first big names in this market, but that is not the case. Database tools vendor Embarcadero and security vendor Symantec both launched and folded failed DAM products long ago. Netezza is a business intelligence and data warehousing firm. Fortinet describes themselves as a network security company. Quest (DB tools), McAfee (security) and EMC (data and data center management) have all kicked the tires at one time or another because their buyers have shown interest. None of these firms are database vendors, but their customers buy technologies to help reduce management costs, facilitate compliance, and secure infrastructure.

I believe the Guardium and Secerno purchases were made for operations and security management. It made sense for IBM and Oracle to invest, but not because of their database offerings. These investments were logical because of their other products, because of their views of their role in the data center, and thanks to their respective visions for operations management. Ultimately that’s why I think McAfee and EMC need to invest in this technology, and Microsoft doesn’t.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Usually when a comment starts with “This is a terrific idea …” it gets deleted as blog spam, but not this week, as the best comment goes to DMcElligott, in response to Rich’s Draft Data Security Survey for Review.

This is a terrific idea. I am very curious about the results you see from this.

My suggestions: In the regulation questions I would include some reference to the financial regulatory agencies like FINRA, SEC, NYSE, etc. to cover the banking and financial sector better.

I would also be curious about the level of implementation and the accuracy confidence. Where a data security implementation has been completed what level of confidence do you have in the results (maybe a 1-10 rating)? And are there any user interactions for any data? I assume the confidence level feeds the willingness to interact with an end user.

Best of luck with the survey.

—Adrian Lane

Thursday, June 10, 2010

Understanding and Selecting SIEM/LM: Reporting and Forensics

By Adrian Lane

Reporting and Forensics are the principal products of a SIEM system. We have pushed, prodded, and poked at the data to get it into a manageable format, so now we need to put it to use. Reports and forensic analysis are the features most users work with on a day to day basis. Collection, normalization, correlation and all the other things we do are just to get us to the point where we can conduct forensics and report on our findings. These features play a big part in customer satisfaction, so while we’ll dig in to describe how the technology works, we will also discuss what to look for when making buying decisions.

Reporting

For those of us who have been in the industry for a long time, the term ‘reporting’ brings back bad memories. It evokes hundreds of pages of printouts on tractor feed paper, with thousands of entries, each row looking exactly the same as the last. It brings to mind hours of scanning these lines, yellow highlighter in hand, marking unusual entries. It brings to mind the tailoring of reports to include new data, excluding unneeded columns, importing files into print services, and hoping nothing got messed up which might require restarting from the beginning.

Those days are fortunately long gone, as SIEM and Log Management have evolved their capabilities to automate a lot of this work, providing graphical representations that allow viewing data in novel ways. Reporting is a key capability precisely because this process used to be such hard work. To evaluate the reporting features included in SIEM/LM, we need to understand what reporting is, and the stages of the reporting process. You will notice from the description above that there are several different steps in the production of reports, and depending on your role, you may see reporting as basically one of these subtasks. The term ‘reporting’ is a colloquialism used to encompass a group of activities: selecting, formatting, moving, and reviewing data are all parts of the reporting process.

So what is reporting? At its simplest, reporting is just selecting a subset of the data we previously captured for review, focused analysis, or a permanent record (‘artifact’) of activity. Its primary use is to put data into an understandable form, so we can analyze activity and substantiate controls without having to comb through lots of irrelevant stuff. The report comprises the simplified view needed to facilitate review or, as we will discuss later, forensic analysis. We also should not be constrained by the traditional definition of a report, which is a stack of papers (or in modern days a PDF). Our definition of reporting can embrace views within an interface that facilitate analysis and investigation.
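
Put another way, a report is essentially a saved, repeatable query plus formatting and distribution. A minimal sketch of that idea follows; the report definition fields and event schema are hypothetical, not any product’s API.

```python
# Hypothetical saved report: a filter, the columns to show, and the
# automation details. Field names are illustrative assumptions.
report_def = {
    "name": "Failed logins, daily",
    "filter": {"type": "login_failure"},
    "columns": ["time", "user", "source_ip"],
    "schedule": "daily 06:00",
    "recipients": ["secops@example.com"],
}

def run_report(events, definition):
    """Select the matching subset of captured events and keep only the
    requested columns -- the 'simplified view' described above."""
    rows = [e for e in events
            if all(e.get(k) == v for k, v in definition["filter"].items())]
    return [{c: e[c] for c in definition["columns"]} for e in rows]

events = [
    {"type": "login_failure", "time": "2010-06-10 02:14",
     "user": "mrothman", "source_ip": "10.0.4.12", "raw": "..."},
    {"type": "config_change", "time": "2010-06-10 09:30",
     "user": "dba01", "source_ip": "10.0.7.3", "raw": "..."},
]
print(run_report(events, report_def))  # one row: the failed login
```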

The second common use is to capture and record events that demonstrate completion of an assigned task. These reports are historic records kept for verification. Trouble-ticket work orders and regulatory reports are common examples, where a report is created and ‘signed’ by both the producer of the report and an auditor. These snapshots of events may be kept within, or stored separately from, the SIEM/LM system.

There are a few basic aspects of reporting that we want to pay close attention to when evaluating SIEM/LM reporting capabilities:

  1. What reports are included with the standard product?
  2. How easy is it to manage and automate reports?
  3. How easy is it to create new, ad-hoc reports?
  4. What export and integration options are available?

For many standard tasks and compliance needs, pre-built reports are provided by the vendor to lower costs and speed up product deployment. At minimum, vendors provide canned reports for PCI, Sarbanes-Oxley, and HIPAA. We know that compliance is the reason many of you are reading this series, and will be the reason you invest in SIEM. Reports embody the tangible benefit to auditors, operations, and security staff. Just keep in mind that 2000 built-in reports is not necessarily better than 100, despite vendor claims. Most end users typically use 10-15 reports on an ongoing basis, and those must be automated and customized to the user’s requirements.

Most end users want to feel unique, so they like to customize the reports – even if the built-in reports are fine. But there is a real need for ad-hoc reports in forensic analysis and implementation of new rules. Most policies take time to refine, to be sure that we collect only the data we need, and that what we collect is complete and accurate. So the reporting engine needs to make this process easy, or the user experience suffers dramatically.

Finally, the data within the reports is often shared across different audiences and applications. The ability to export raw data for use with third-party reporting and analysis tools is important, and demands careful consideration during selection.

People say end users buy interface and reports, and that is true for the most part. We call that broad idea ‘user experience’, and although many security professionals minimize the focus on reporting during the evaluation process, that can be a critical mistake. Reports are how you will show value from the SIEM/LM platform, so make sure the engine can support the information you need to show.

Forensics

It was just this past January that I read an “analyst” report on SIEM, where the author felt forensic analysis was policy driven. The report claimed that you could automate forensic analysis and do away with costly forensic investigations. Yes, you could have critical data at your fingertips by setting up policies in advance! I nearly snorted beer out my nose! Believe me: if forensic analysis was that freaking easy, we would detect events in real time and stop them from happening! If we know in advance what to look for, there is no reason to wait until afterwards to perform the analysis – instead we would alert on it. And this is really the difference between alerting on data and forensic analysis of the same data. We need to correlate data from multiple sources and have a real live human being make a judgement call. Let’s be clear: these pseudo-analyst claims and vendor promotional fluff (you know who they are) are complete BS, and do a disservice to end users by creating absurd expectations.

Now that I’m off the soapbox, let’s take a step back. Forensic analysis is conducted by trained security and network analysts to investigate an event, or more likely a sequence of events, indicating fraud or misuse. An analyst may have an idea what to look for in advance, but more often you don’t actually know what you are looking for, and you need to navigate through thousands of events to piece together what happened and understand the breadth of the damage. This involves rewriting queries over and over to drill down and look at data, using different methods of graphing and visualization before finding the proverbial needle in the haystack.

The use cases for forensic analysis are numerous, including examination of past events and data to determine what happened in your network, OS, or application. This may be to verify something that was supposed to happen actually occurred, or to better understand whether strange activity was fraud or misuse. You might need forensic analysis for simple health checks on equipment and business operations. You may need it to scan user activity to support disciplinary actions against employees. You might even need to provide data to law enforcement to pursue criminal data breaches.

Unlike correlation and alerting, where we have automated analysis of events, forensic analysis is largely manual. Fortunately we can leverage collection, normalization, and correlation – much of the data has already been collected, aggregated, and indexed within the SIEM/LM platform.

A forensic analysis usually starts with data provided by a report, an alert, or a query against the SIEM/LM repository. We start with an idea of whether we are interested in specific application traffic, strange behavior from a host, or pretty much an infinite number of things that could be suspicious. We select data with the attributes we are interested in, gathering information we need to analyze events and validate whether the initial suspicious activity is much ado about nothing, or indicates a major issue.

These queries may be as simple as “Show all failed logins for user ‘mrothman’”, or as specific as “Show events from all firewalls, between 1 and 4 am, that involved this list of users”. It is increasingly common to examine application-layer or database activity to provide context for business transactions – for example, “list all changes to the general ledger table where the user was not ‘GA_Admin’ or the application was not ‘GA_Registered_App’”.
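
Whatever query language a given product exposes, queries like these boil down to filtering normalized events on a handful of attributes. Here is a minimal sketch of the first two examples above, against a hypothetical event schema (field names are assumptions, not a vendor’s):

```python
from datetime import datetime

# Hypothetical normalized event schema; field names are illustrative.
def failed_logins(events, user):
    """'Show all failed logins for user X'"""
    return [e for e in events
            if e["type"] == "login_failure" and e["user"] == user]

def firewall_events_in_window(events, users, start_hour=1, end_hour=4):
    """'Show firewall events between 1 and 4 am involving these users'"""
    return [e for e in events
            if e["device_class"] == "firewall"
            and start_hour <= e["time"].hour < end_hour
            and e["user"] in users]

events = [{"type": "login_failure", "user": "mrothman",
           "device_class": "firewall", "time": datetime(2010, 6, 10, 2, 30)}]
assert failed_logins(events, "mrothman")
assert firewall_events_in_window(events, users={"mrothman"})
```

The drill-down the text describes is just this loop repeated: adjust the filter, look at the results, and refine again.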

There are a couple important capabilities we need to effectively perform forensic analysis:

  1. Custom queries and views of data in the repository
  2. Access to correlated and normalized data
  3. Drill-down to view non-normalized or supplementary data
  4. Ability to reference and access older data
  5. Speed, since forensics is usually a race against time (and attackers)

Basically the most important capability is to enable a skilled analyst to follow their instincts. Forensics is all about making their job easier by facilitating access, correlation, and viewing of data. They may start with a set of anomalous communications between two devices, but end up looking at application logs and database transactions to prove a significant data breach. If queries take too long, data is manipulated, or data is not collected, the investigator’s ability to do his/her job is hindered. So the main role of SIEM/LM in forensics is to streamline the process.

To be clear, the tool only makes the process faster and more accurate. Without a strong incident response process, no tool can solve the problem. Although we all get very impressed by a zillion built-in reports and cool drill-down investigations during a vendor demo, don’t miss the forest for the trees. SIEM/Log Management platforms can only streamline a process that already exists. And if the process is bad, you’ll just execute on that bad process faster.

—Adrian Lane

Wednesday, June 09, 2010

Incite 6/9/2010: Creating Excitement

By Mike Rothman

Some businesses are great at creating excitement. Take Apple, for instance. They create demand for their new (and upgraded) products, which creates a feeding frenzy when the public can finally buy the newest shiny object. 2 million iPads in 60 days is astounding. I suspect they’ll move a bunch of iPhone 4 units on June 24 as well (I know I’ll be upgrading mine and the Boss’). They’ve created a cult around their products, and it generates unbelievable excitement whenever there is a new toy to try.

Last week I was in the Apple store dropping my trusty MacBook Pro off for service. The place was buzzing, and the rest of the mall was pretty much dead. This was 3 PM on a Thursday, but you’d think it was Christmas Eve from looking at the faces of the folks in the store. Everything about the Apple consumer experience is exciting. You may not like them, you may call me a fanboy, but in the end you can’t argue with the results. Excitement sells.

If you have kids, you know all about how Disney creates the same feeling of excitement. Whether it’s seeing a new movie or going to the theme parks, this is another company that does it right. We recently took the kids down to Disneyworld, and it sure didn’t seem like the economy was crap inside the park. Each day it was packed and everyone was enjoying the happiest place on Earth, including my family. One night we stayed at a Disney property. It’s not enough to send a packet of information and confirmations a few months ahead of the trip. By the time you are ready to go, the excitement has faded. So Disney sends an email reminding you of the great time you are about to have a few days before you check in. They give you lots of details about your resort, with fancy pictures of people having a great time. The message is that you will be those people in a few days. All your problems will be gone, because you are praying in the House of the Mouse. Brilliant.

I do a lot of business travel and I can tell you I’m not excited when I get to Topeka at 1am after being delayed for 3 hours at O’Hare. No one is. But it’s not like any of the business-oriented hotels do anything to engage their customers. I’m lucky if I get a snarl from the front desk attendant as I’m assigned some room near the elevator overlooking the sewage treatment facility next door. It’s a friggin’ bed and a place to shower. That’s it.

It just seems to me these big ‘hospitality’ companies could do better. They can do more to engage their customers. They can do more to create a memorable experience. I expect so little that anything they do is upside. I believe most business travelers are like me. So whatever business you are in, think about how you can surprise your customers in a positive fashion (yes, those pesky users who keep screwing everything up are your customers) and create excitement about what you are doing.

I know, we do security. It’s not very exciting when it’s going well. But wouldn’t it be great if a user was actually happy to see you, instead of thinking, “Oh, crap, here comes Dr. No again, to tell me not to surf pr0n on the corporate network”? Think about it. And expect more from yourself and everyone else you do business with.

– Mike.

Photo credits: “Magic Music Mayhem 3 (Explored)” originally uploaded by Express Monorail


Incite 4 U

  1. Microsoft cannot fix stupid – The sage Rob Graham is at it again, weighing in on Google’s alleged dictum to eradicate Microsoft’s OS from all their desktops, because it’s too hard to secure. Rob makes a number of good points in the post, relative to how much Microsoft invests in security and the reality that Windows 7 and IE 8 are the most secure offerings out there. But ultimately it doesn’t matter because it’s human error that is responsible for most of the successful attacks. And if we block one path the attackers find another – they are good that way. So what to do? Do what we’ve always done. Try to eliminate the low hanging fruit that makes the bad guy’s job too easy, and make sure you have a good containment and response strategy for when something bad does happen. And it will, whatever OS you use. – MR

  2. Fight the good fight – Apparently “Symantec believes security firms should eradicate ‘false positives’ ”. I imagine that this would be pretty high on their list. Somewhere between “Rid the world of computer viruses” and “Wipe out all spam”. And I love their idea of monitoring social network sites such as Facebook and online fora to identify false positives, working tirelessly to eliminate the threat of, what was it again? Yeah, misdiagnosis. In fact, I want to help Symantec. I filled out my job application today because I want that job. Believe me, I could hunt Facebook, Twitter, and YouTube all day, looking for those false positives and misdiagnosis thingies. Well, until the spam bots flood these sites with false reports of false positives. Then I’d have to bring the fight to the sports page for false positive detection, or maybe check out those critical celebrity false positives. It sounds like tough work, but hey, it’s a noble cause. Keep up the good fight, guys! – AL

  3. Good intentions – I always struggle with “policy drift”: the tendency to start from a compliant state but lose that over time due to distractions, pressure, and complacency. For example, I’m pretty bad at keeping my info in our CRM tool up to date. That’s okay, because so are Mike and Adrian. As Mathias Thurman writes over at Computerworld, this can be a killer for something crucial like patch management. Mathias describes his difficulties in keeping systems up to date, especially those pesky virtual machines. The policies are there, everyone even started from a known good state, but the practical realities of running a day-to-day IT shop and *gasp* testing those patches throw a monkey wrench into the system. – RM

  4. Logging as infrastructure… – As Adrian and I continue plowing through the Understanding and Selecting a SIEM/Log Management series, one of the things we may not have explicitly mentioned was that data collection is really an infrastructure function, and there will be applications that run on top to provide solutions to the usage demands. Seems everyone is still hung up on the category names, but Sam Curry on RSA’s blog gets it right. Every user (not just large enterprises) should be figuring out how to leverage the data they are collecting. Whether it’s for security, efficiency, or compliance reporting, things like forensics and correlation can be useful to pretty much any practitioner. Of course, that doesn’t make them any easier to do, but the first step on that path is to consider data collection an infrastructure function, not just a hermetically sealed security problem solved with an isolated security product. – MR

  5. Must read from Ivan – I’m skipping the usual pithy title and intro to simply point you to Ivan Arce’s response to Michal Zalewski’s recent post on software security. Ivan is flat-out one of the best security writers and thinkers out there. In this post Ivan lays out a compelling review of the pitfalls of formal models in secure software engineering, and it applies equally well to general security defenses. The key line, and a major theme in one of my current presentations, is, “Michal’s first argument simply points out that devising mathematical-logical formal models to define and implement security usually goes awry in the presence of real world economic actors, and that the information security discipline would benefit more from adopting knowledge, practices and experience from other fields such as sociology and economics, rather than seeking purely technical solutions. I agree.” I prefer cognitive science to sociology since it’s a bit of a harder science, but everything in our industry is driven by how people act, and the economics that influence their behavior. – RM

  6. New Math – Does piracy occur? Yep. Does it have economic impact? Absolutely. But you have to ask yourself why someone would conduct a study like this: Piracy Cost Game Industry $41.5 Billion. Forget for a moment that the students conducting this survey failed their courses in logic, statistics, and finance, and focus on why this survey was commissioned. Is it about piracy and theft? Was it so game companies know whether they need to adjust their business and pricing models to combat the problem? Is it to gauge whether they should change their protection model? The answer is “D”, none of the above. This is paid PR to influence legislators into thinking they are going to make billions in extra tax revenue if they can legislate away this bad behavior. Dangle that carrot in front of politicians so they will do your bidding. An adjustment to the law will hopefully coax some extra revenue out of a handful of thieves – er, customers – without cost to the company, all without having to change their technology, pricing, or behavior. So the politicians don’t generate 1/1000th of what they were promised because the survey is based on totally bogus numbers, but they do get to pass a law, making it a total win/win! And when those billions in revenue fail to materialize, you can blame the government! Now, where is my trillion dollars? I have a budget deficit to erase! – AL

  7. Binary as a second language – I’m sure a lot of folks working in an HP data center are feeling distinctly uncomfortable now. In fact, 9,000 of them will get a lot more uncomfortable as they are replaced by automation, as HP makes a $1B investment to fully automate its data centers. It raises the question of your own value to your organization. Can you be automated? Replaced by a machine? We’d like to think not, but 9,000 folks will soon realize their assumptions were wrong. So always keep in mind that value is proven every day. The other aspect of the story is that HP is adding 6,000 sales and service reps. So it’s that time again to revisit your choice of career and make sure you are on the right path. Many data center ops folks are doing other things on the side, like buying a Subway franchise. Kidding aside, HP is on the cutting edge, but the trend toward replacing ops folks isn’t going away. It may be time to start thinking about Plan B. – MR

—Mike Rothman

Tuesday, June 08, 2010

DB Quant: Secure Metrics, Part 4, Shield

By Adrian Lane

This portion of the Secure phase is to ‘shield’ databases from threats such as SQL injection, buffer overflows, and other common attacks. The idea is that patching only addresses known weaknesses, and only after you apply the patch, but some products detect and block activity that looks unusual, providing a temporary reprieve that buys you time to patch. What we are advocating in this step is not what you will find in recommended best practices from your database vendor, but it is increasingly common as an aid for database security. Shielding can be as simple as re-mapping port numbers to avoid automated probing, or as complex as virtual patching via a web application firewall or activity monitoring platform. It may include changes to the database, such as stored procedures to perform input validation, or might involve changes to the calling application. In any of these cases, the analysis of threats and countermeasures is part of the database security process and must be accounted for in your cost estimates.
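
Since shielding may reach into the calling application, here is a minimal sketch of that application-side option, using Python’s standard-library sqlite3 module purely as a stand-in for a production database driver; the users table and queries are hypothetical illustrations, not part of the Quant model:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'")
    return cur.fetchall()

def find_user_shielded(conn, username):
    # Shielded: the driver binds the value separately from the SQL text,
    # so the input can never be interpreted as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    print(find_user_shielded(conn, "alice"))          # [(1, 'alice')]
    print(find_user_shielded(conn, "x' OR '1'='1"))   # [] - injection attempt fails
```

The same idea applies whether the change lands in the application, a stored procedure, or a WAF rule: keep the data separate from the SQL text.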

We define our Shield process as:

  1. Identify Threats
  2. Specify Countermeasures
  3. Implement
  4. Document

Identify Threats

Variable | Notes
Time to identify databases at risk | e.g., those vulnerable to a known attack or containing particularly sensitive information
Time to review ingress/egress points and network protocols | The routes via which the database is accessible, including through applications (see the sketch after this table)
Time to identify threats and exploitable trust relationships | e.g., SQL injection, unpatched vulnerabilities, application hijacking, etc.
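
To give a sense of what reviewing ingress points can look like, here is a minimal sketch that probes a few well-known database listener ports from a given network vantage point. The hostname is a hypothetical placeholder, and a real review would also cover application paths, not just open ports:

```python
import socket

# Common default listener ports; adjust for your environment.
DB_PORTS = {
    1433: "SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
}

def probe(host, ports, timeout=2.0):
    """Report which database ports accept a TCP connection from this vantage point."""
    for port, label in sorted(ports.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"{host}:{port} ({label}) is reachable")
        except OSError:
            print(f"{host}:{port} ({label}) is filtered or closed")

if __name__ == "__main__":
    probe("db.example.com", DB_PORTS)  # hypothetical host
```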

Specify Countermeasures

Variable | Notes
Time to identify countermeasures | i.e., what countermeasures are available, including external options or internal changes, and how they should be configured/implemented
Time to develop regression test cases | Shielding affects database operations and must be tested (see the sketch after this table)
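
As an example of the regression-testing line item, here is a minimal sketch using Python’s unittest with the standard-library sqlite3 module as a stand-in for the production database. The idea is that the same functional checks run before and after a shielding change, so breakage is caught immediately; the table and queries are hypothetical:

```python
import sqlite3
import unittest

class ShieldRegressionTests(unittest.TestCase):
    """Run these before and after applying a shielding change (new trigger,
    WAF rule, port remap, etc.) to confirm legitimate operations still work."""

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

    def test_insert_and_read_back(self):
        self.conn.execute("INSERT INTO orders (total) VALUES (?)", (19.99,))
        row = self.conn.execute(
            "SELECT total FROM orders WHERE id = 1").fetchone()
        self.assertAlmostEqual(row[0], 19.99)

    def test_update(self):
        self.conn.execute("INSERT INTO orders (total) VALUES (?)", (1.0,))
        self.conn.execute("UPDATE orders SET total = ? WHERE id = 1", (2.0,))
        row = self.conn.execute(
            "SELECT total FROM orders WHERE id = 1").fetchone()
        self.assertEqual(row[0], 2.0)

if __name__ == "__main__":
    unittest.main()
```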

Implement

Variable | Notes
Time to adjust database configuration or functions | e.g., new triggers/stored procedures
Time to adjust firewall/IPS/WAF rules
Time to install new security controls | e.g., WAF, VPN, etc.
Time to verify changes via regression tests

Document

Variable | Notes
Time to document changes | Firewall rules, etc.
Time to record code changes in source control system

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access

—Adrian Lane

Monday, June 07, 2010

DB Quant: Secure Metrics, Part 3, Restrict Access

By Adrian Lane

This portion of the Secure phase is the reconfiguration of access control and authorization settings. Its conceptual simplicity belies the hard work involved, as it is one of the most tedious and time-consuming of all database security tasks. Merely reviewing the permissions assigned to groups and roles is hard enough, but verifying that just the right users are assigned to each and every role and group can take days or even weeks. Additionally, many DBAs do not fully appreciate the seriousness of misconfigured database authentication: subtle errors can serve as a wide-open avenue for hackers to assume DBA credentials – tricking the database into trusting them.

Automation is extremely useful in the discovery and analysis process, but when it comes down to it, a great deal of manual analysis and verification is required to complete these tasks.
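
As one example of what the automated portion of that review can look like, here is a minimal sketch that dumps role memberships from the PostgreSQL system catalog via the psycopg2 driver. The connection string is a hypothetical placeholder, and other platforms expose equivalent views (e.g., Oracle’s DBA_ROLE_PRIVS):

```python
import psycopg2

# Hypothetical connection string -- substitute your own.
DSN = "dbname=appdb user=auditor host=db.example.com"

def list_role_memberships(dsn):
    """Print which logins are members of which roles, per the pg catalog."""
    query = """
        SELECT r.rolname AS role, m.rolname AS member
        FROM pg_auth_members am
        JOIN pg_roles r ON r.oid = am.roleid
        JOIN pg_roles m ON m.oid = am.member
        ORDER BY r.rolname, m.rolname;
    """
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(query)
            for role, member in cur.fetchall():
                print(f"{member} is a member of {role}")

if __name__ == "__main__":
    list_role_memberships(DSN)
```

The output is only the raw material for the manual step: someone still has to decide whether each membership is appropriate.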

Our process is:

  1. Review Access/Authentication
  2. Determine Changes
  3. Implement
  4. Document

Review Access/Authentication

Variable | Notes
Time to review users and access control settings | May have been completed in review phase
Time to identify authentication method
Time to compare authentication method with policy | e.g., domain, database, mixed mode, etc.

Determine Changes

Variable | Notes
Time to identify user permission changes
Time to identify group and role membership adjustments
Time to identify changes to password policy settings
Time to identify dormant or obsolete accounts | See the sketch after this table
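
To illustrate the dormant/obsolete account line item, here is a minimal sketch of one common approach: diff the accounts that actually exist in the database against an approved roster (e.g., an HR or directory export). The account sets shown are hypothetical placeholders for real queries and exports:

```python
# Hypothetical inputs: in practice, pull db_accounts from the database's
# user catalog and approved from an HR or directory export.
db_accounts = {"alice", "bob", "carol", "old_etl_job", "tmp_consultant"}
approved = {"alice", "bob", "carol"}

# Accounts with no matching approved user are candidates for removal.
for account in sorted(db_accounts - approved):
    print(f"review/remove: {account}")
# review/remove: old_etl_job
# review/remove: tmp_consultant
```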

Implement

Variable | Notes
Time to alter authentication settings/methods
Time to reconfigure and remove user accounts
Time to implement new groups and roles, and adjust memberships
Time to reconfigure service accounts | e.g., generic application and DBA accounts

Document

Variable | Notes
Time to document changes
Time to document accepted configuration variances

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure

—Adrian Lane

Draft Data Security Survey for Review

By Rich

Hey everyone,

As mentioned the other day, I’m currently putting together a big data security survey to better understand what data security technologies you are using, and how effective they are.

I’ve gotten some excellent feedback in the comments (and a couple of emails), and have put together a draft survey for final review before we roll this out. A couple of things to keep in mind if you have the time to take a look:

  • I plan on trimming this down more, but I wanted to err on the side of including too many questions/options rather than too few. I could really use help figuring out what to cut.
  • Everyone who contributes will be credited in the final report.
  • After a brief bit of exclusivity (45 days) for our sponsor, all the anonymized raw data will be released to the community so you can perform your own analysis. This will be in spreadsheet format, just the same as I get it from SurveyMonkey.

The draft survey is up at SurveyMonkey for review, because it is a bit too hard to replicate here on the site.

To be honest, I almost feel like I’m cheating when I develop these on the site with all the public review, since the end result is way better than what I would have come up with on my own. Hopefully giving back the raw data is enough to compensate all of you for the effort.

—Rich