By Adrian Lane
I am now switching gears to talk about some of the ‘detective’ measures that help with forensic analysis of transactions and activity. The preventative measures discussed previously are great for protecting your system from known attacks, but they don’t help detect fraudulent misuse or failure of business processes. For that we need to capture the events that make up the business processes and analyze them. Our basic tool is database auditing, which provides plenty of useful information.
Before I get too far into this discussion, it’s worth noting that the term ‘transactions’ is an arbitrary choice on my part. I use it to highlight both that audit data can group statements into logical sequences corresponding to particular business functions, and that audit trail analysis is most useful when looking for insider misuse – rather than external attacks. Audit trails are much more useful for detecting what was changed, rather than what was accessed, and for forensic examination of database ‘state’. There are easier and more efficient ways of cataloging SELECT statements and simple events like failed logins.
Usually at this point I provide a business justification for auditing of transactions or specific events, and some use cases as examples of how it helps. I will skip that this time, as you already know that auditing is built into every database, and captures database queries, transactions, and important system changes. You probably already use audit logs to see which actions are most common so you can set up indices and tune your most common queries. You may even use auditing to detect suspect activity, perform forensic audits, or address a specific compliance mandate. At the very least you need some form of database auditing enabled on production databases to answer the question “What the &!$^% happened?” after a database crash. Regardless of your reasons, auditing is essential for security and compliance.
In this post I will focus on capturing transactions and alterations to the database. What type of analysis you do, how long you keep the data, and what reports you create are secondary. I am focusing on gathering the audit trail rather than what to do with it next. What’s critical here is understanding what data you need, and how best to capture it.
All databases have some type of audit function. The ‘gotcha’ is that use of database auditing needs careful consideration to avoid storage, performance, and management nightmares. Depending on the vendor and how the feature is deployed, you can gather detailed transactional information, filter unwanted transactions to get a succinct view of database activity, and do so with only modest performance impact. Yes, I said modest performance impact. This remains a hot-button issue for most DBAs and is easy to mess up, so planning and basic tests form a bulk of this phase.
- Benchmark: Find a test system, gather a bunch of queries that reflect user activity on your system, and run some benchmarks. Turn on the auditing or tracing facility and rerun the benchmark. Wince and swear repeatedly at the performance degradation you just witnessed. Aren’t you glad you did this on a test system? You have a baseline for best and worst case performance.
- Select Audit options. Oracle, SQL Server, DB2, and Sybase have multiple options for generating audit trails. Don’t use Oracle’s fine-grained auditing when normal auditing will suffice. Don’t use event monitors on DB2 if you have many different types of events to collect. Which auditing option you choose will dramatically affect performance and data volumes.
- Examine the audit capture options, and select only the event types you need. If you only care about user events, don’t bother collecting all the object events. If you only care about failed logins and changes to system privileges, filter out meta-data and data changes.
- Examine buffer space, tablespace, block utilization, and other resource tuning options. Audit data are static in size, so their data blocks can be set to ‘write-only’, thus saving space. For audit trails that store data within database tables, you can pre-allocate table space and blocks to reduce latency from space allocation.
- Rerun the benchmarks and see what helps performance. Generally these steps provide significant performance gains. No more cursing the database vendor should be needed.
- Filter: Get rid of specific actions you don’t need. For example, batch updates may fall outside your area of interest, yet comprise a significant fraction of the audit log, and can therefore be parsed out. Or you may want to audit all transactions while the database is running, but not need events from database startup. In some scenarios the database can filter these events, which improves performance. If the database does not provide this type of row-level filtering, you can add a WHERE clause to the extraction query or use a script to whittle down the extracted data.
- Implement: Take what you learned and apply it in the production environment. Verify that the audit trail is being collected.
- Ad-hoc Analysis: Review the logs periodically, focusing on failures of system functions and logins that indicate system probing. Any policy or report that you generate may miss unusual events, so I recommend occasional ad hoc analysis to detect anything hinky.
- Record: Document the audit settings as well as the test results, as you will need them in the future to understand the impact of increased auditing levels. Communicate use of audit to users, and warn them that there will be a performance hit due to regulatory and security requirements. Create a log retention policy. This is necessary – even if the policy simply states that you will collect audit trails and delete them at the end of every week, write it down. Many compliance requirements let you define retention however you choose, so be proactive. You can always change it in the future.
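If you end up whittling down the extracted data with a script, as in the Filter step above, it doesn’t take much code. Here’s a minimal Python sketch – the record layout and event names are hypothetical, so adjust them to whatever your extraction actually produces:

```python
# Filter extracted audit rows, dropping event types we don't need.
# The field names and event types here are made-up examples.
def filter_audit_rows(rows, excluded_events=("BATCH_UPDATE", "DB_STARTUP")):
    """Keep only audit rows whose event type is not in the excluded set."""
    return [r for r in rows if r.get("event") not in excluded_events]

rows = [
    {"event": "LOGIN_FAILED", "user": "app1"},
    {"event": "BATCH_UPDATE", "user": "etl"},
    {"event": "GRANT", "user": "dba"},
    {"event": "DB_STARTUP", "user": "system"},
]
kept = filter_audit_rows(rows)
print([r["event"] for r in kept])  # ['LOGIN_FAILED', 'GRANT']
```

Filtering in the database itself is still preferable when available, since rows you never extract cost nothing to store or parse.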
More advanced considerations:
- Automate: I recommend that you automate as much of the process as you can, once you have done the initial analysis and configuration. Collection of the audit trail, filtering, purging old records, and long-term storage are all tasks that can be automated through simple scripts and scheduled jobs.
- Integrate: Reporting services, event management, and change management are all services that help automate security tasks based on audit data.
- Review: Periodically review log tuning and filtering to determine if the settings are still appropriate. Most organizations collect more data over time rather than less, but who knows? You may want to run some sanity checks on the performance benchmarks every now and again, as vendors make improvements or offer new options.
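To make the Automate point concrete, here’s a sketch in Python of a retention pass that archives and purges old audit records. The record layout and the 7-day window are assumptions for illustration, not recommendations:

```python
import datetime

# Sketch of an automated retention pass: move audit records older than the
# retention window to an archive, keep the rest in the working set.
RETENTION = datetime.timedelta(days=7)  # placeholder; set per your policy

def apply_retention(records, now, archive):
    """Split records at the retention cutoff; archived records are appended
    to `archive`, and the still-current records are returned."""
    cutoff = now - RETENTION
    keep = []
    for rec in records:
        if rec["ts"] < cutoff:
            archive.append(rec)  # long-term storage would happen here
        else:
            keep.append(rec)
    return keep

now = datetime.datetime(2010, 4, 8)
records = [
    {"ts": datetime.datetime(2010, 3, 25), "event": "GRANT"},
    {"ts": datetime.datetime(2010, 4, 7), "event": "LOGIN_FAILED"},
]
archive = []
records = apply_retention(records, now, archive)
print(len(records), len(archive))  # 1 1
```

Run something like this from your scheduler of choice, and the retention policy you wrote down in the Record step enforces itself.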
The standard audit capabilities provided by database vendors provide ample information for compliance reporting, but in cases where SELECT statements are not captured, they are of limited use for security reviews. So in the next post I will go over event analysis to discuss other data collection options and essential events to evaluate.
Remember: if you are a DBA, and people within your company are requiring that you provide them with log files, this is a good thing. It means that they are the ones who will need to review transactions for suspicious activity. They can delve through the reports. You may have more work in setting up the reports and auditing options, but overall this is a good tradeoff.
Posted at Thursday 8th April 2010 8:15 pm
By Mike Rothman
Now that we’ve established a process to make sure our software is sparkly new and updated, let’s focus on the configurations of the endpoint devices that connect to our networks. Silly configurations present another path of least resistance for the hackers to compromise your devices. For instance, there is no reason to run FTP on an endpoint device, and your standard configuration should factor that in.
Define Standard Builds
Initially you need to define a standard build, or more likely a few standard builds. Typically these cover desktops (no sensitive data, and sensitive data), mobile employees, and maybe kiosks. There probably isn’t a lot of value in going broader than those 4 profiles, but that will depend on your environment.
A good place to start is one of the accepted configuration benchmarks available in the public domain. Check out the Center for Internet Security, which produces configuration benchmarks for pretty much every operating system and many of the major applications. To see your tax dollars at work (if you live in the US, anyway), also consult NIST, especially if you are in the government. Its SCAP configuration guides provide a similar enumeration of specific settings to lock down your machines.
To be clear, we need to balance security with usability, and some of the configurations suggested in the benchmarks clearly impact usability. So it’s about figuring out what will work in your environment, documenting those configurations, getting organizational buy-in, and then implementing.
It also makes sense to put together a list of authorized software as part of the standard builds. You can have this authorized software installed as part of the endpoint build process, but it also provides an opportunity to revisit policies on applications like iTunes, QuickTime, Skype, and others which may not yield a lot of business value and have a history of vulnerability. We’re not saying these applications should not be allowed – you’ve got to figure that out in the context of your organization – but you should take the opportunity to ask the questions.
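Checking what’s actually installed against the authorized software list boils down to a set difference. A quick Python sketch, with hypothetical package names – in practice the inventory would come from your asset or endpoint management tool:

```python
# Report software installed on an endpoint that isn't on the authorized
# list from the standard build. Package names are invented examples.
def unauthorized(installed, allowed):
    """Return the sorted list of installed packages not in the allowed set."""
    return sorted(set(installed) - set(allowed))

allowed = {"office-suite", "pdf-reader", "vpn-client", "browser"}
installed = ["office-suite", "browser", "itunes", "p2p-client"]
print(unauthorized(installed, allowed))  # ['itunes', 'p2p-client']
```

Whether the hits get removed, approved, or just documented is the policy question above – the diff just tells you where to ask it.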
As you define your standard builds, at least on Windows, you should turn on anti-exploitation technologies. These technologies make it much harder to gain control of an endpoint through a known vulnerability. I’m referring to DEP (data execution prevention) and ASLR (address space layout randomization), though Apple is also implementing similar capabilities in their software.
To be clear, anti-exploitation technology is not a panacea – as the winners of Pwn2Own at CanSecWest show us every year, especially with applications that don’t support it (d’oh!) – but these technologies do make it harder to exploit the vulnerabilities in compatible software.
- Running as a standard user – We’ve written a bit on the possibilities of devices running in standard user mode (as opposed to administrator mode), and you should consider this option when designing secure configurations, especially to help enforce authorized software policies.
- VPN to Corporate – Given the reality that mobile users will do something silly and put your corporate data at risk, one technique to protect them is to run all their Internet traffic through the VPN to your site. Yes, it may add a bit of latency, but at least the traffic will be running through the web gateway and you can both enforce policy and audit what the user is doing. As part of your standard build, you can enforce this network setting.
Implementing Secure Configurations
Once you have the set of secure configurations for your device profiles, how do you start implementing them? First make sure everyone buys into the decisions and understands the ramifications of going down this path, especially if you plan to stop users from installing certain software or block other device usage patterns. Constantly asking for permission can be dangerously annoying, so choosing the right threshold for confirmations is a critical aspect of designing a policy. If the end users feel they need to go around the security team and their policies to get the job done, everyone loses.
Once the configurations are locked and loaded, you need to figure out how much work is required for implementation. Next you assess the existing endpoints against the configurations. Lots of technologies can do this, ranging from Windows management tools, to vulnerability scanners, to third party configuration management offerings. The scale and complexity of your environment should drive the selection of the tool.
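Whatever tool you choose, the assessment itself amounts to comparing each endpoint’s settings against the benchmark and reporting deviations. A toy Python sketch – the setting names here are invented, and a real benchmark enumerates hundreds of them:

```python
# Compare an endpoint's reported settings against the secure baseline and
# return every setting that deviates. Setting names are illustrative only.
BASELINE = {"ftp_enabled": False, "dep_enabled": True, "aslr_enabled": True}

def deviations(endpoint_settings, baseline=BASELINE):
    """Return {setting: actual_value} for settings that differ from baseline."""
    return {k: v for k, v in endpoint_settings.items()
            if k in baseline and v != baseline[k]}

endpoint = {"ftp_enabled": True, "dep_enabled": True, "aslr_enabled": True}
print(deviations(endpoint))  # {'ftp_enabled': True}
```

A commercial configuration management tool does exactly this at scale, plus remediation – but the logic is no more mysterious than the diff above.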
Then plan to bring those non-compliant devices into the fold. Yes, you could just flip the switch and make the changes, but since many of the configuration settings will impact user experience, it makes sense to do a bit of proactive communication to the user community. Of course some folks will be unhappy, but that’s life. More importantly, this should help cut down help desk mayhem when some things (like running that web business from corporate equipment) stop working.
Discussion of actually making the changes brings us to automation. For organizations with more than a couple dozen machines, a pretty significant ROI is available from investing in some type of configuration management. Again, it doesn’t have to be the Escalade of products, and you can even look at things like Group Policy Objects in Windows. The point is making manual changes on devices is idiotic, so apply the level of automation that makes sense in your environment.
Finally, we also want to institutionalize the endpoint configurations, and that means we need to get devices built using the secure configuration. Since you probably have an operations team that builds the machines, they need to get the image and actually use it. But since you’ve gotten buy-in at all steps of this process, that shouldn’t be a big deal, right?
Next up, we’ll discuss the anti-malware space and what makes sense on our endpoints.
Other posts in the Endpoint Security Fundamentals Series
Posted at Thursday 8th April 2010 12:07 pm
By Mike Rothman
Come on, admit it. Unless you have Duke Blue Devil blood running through your veins (and a very expensive diploma on the wall) or had Duke in your tournament bracket with money on the line, you were pulling for the Butler Bulldogs to prevail in Monday night’s NCAA Men’s Basketball final. Of course you were – everyone loves the underdog.
If you think of all the great stories through history, the underdog has always played a major role. Think David taking down Goliath. Moses leading the Israelites out of Egypt. Pretty sure the betting line had long odds on both those scenarios. Think of our movie heroes, like Rocky, Luke Skywalker, Harry Potter, and the list goes on and on. All weren’t supposed to win and we love the fact that they did. We love the underdogs.
Unfortunately reality intruded on our little dream, and on Monday Butler came up a bucket short. But you still felt good about the game and their team, right? I can’t wait for next year’s season to see whether the little team that could can show it wasn’t all just a fluke (remember George Mason from 2006?).
And we love our underdogs in technology, until they aren’t underdogs anymore. No one really felt bad when IBM got railed when mainframes gave way to PCs. Unless you worked at IBM, of course. Those damn blue shirts. And when PCs gave way to the Internet, lots of folks were happy that Microsoft lost their dominance of all things computing. How long before we start hating the Google? Or the Apple?
It’ll happen because there will be another upstart taking the high road and showing how our once precious Davids have become evil, profit-driven Goliaths. Yup, it’ll happen. It always does. Just think about it – Apple’s market cap is bigger than Wal-Mart. Not sure how you define underdog, but that ain’t it.
Of course, unlike Rocky and Luke Skywalker, the underdog doesn’t prevail in two hours over a Coke and popcorn. It happens over years, sometimes decades. But before you go out and get that Apple logo tattooed on your forearm to show your fanboi cred, you may want to study history a little. Or you may become as much a laughingstock as the guy who tattooed the Zune logo on his arm. I’m sure that seemed like a good idea at the time, asshat. The mighty always fall, and there is another underdog ready to take its place.
If we learn anything from history, we should know the big dogs will always let us down at some point. So don’t get attached to a brand, a company, or a gadget. You’ll end up as disappointed as the guy who thought The Phantom Menace would be the New Hope of our kids’ generation.
Photo credits: “Underdog Design” originally uploaded by ChrisM70 and “Zune Tattoo Guy” originally uploaded by Photo Giddy
Incite 4 U
What about Ritalin? – Shrdlu has some tips for those of us with an, uh, problem focusing. Yes, the nature of the security managers’ job is particularly acute, but in reality interruption is the way of the world. Just look at CNN or ESPN. There is so much going on I find myself rewinding to catch the headlines flashing across the bottom. Rock on, DVR – I can’t miss that headline about… well whatever it was about. In order to restore any level of productivity, you need to take Shrdlu’s advice and delegate, while removing interruptions – like email notifications, IM and Twitter. Sorry Tweeps, but it’s too hard to focus when you are tempted by links to blending an iPad. It may be counter-intuitive, but you do have to slow down to speed up at times. – MR
Database security is a headless chicken – As someone who has been involved with database security for a while, it comes as no surprise that this study by the Enterprise Strategy Group shows a lack of coordination is a major issue. Anyone with even cursory experience knows that security folks tend to leave the DBAs alone, and DBAs generally prefer to work without outside influence. In reality, there are usually 4+ stakeholders – the DBA, the application owner/manager/developer, the sysadmin, security, and maybe network administration (or even backup, storage, and…). Everyone views the database differently, each has different roles, and half the time you also have outside contractors/vendors managing parts of it. No wonder DB security is a mess… pretty darn hard when no one is really in charge (but we sure know who gets fired first if things turn south). – RM
Beware of surveys bearing gifts – The PR game has changed dramatically over the past decade. Now (in the security business anyway) it’s about sound bites, statistics, and exploit research. Without at least one of those three, the 24/7 media circus isn’t going to be interested. Kudos to Bejtlich, who called out BeyondTrust for trumping up a “survey” about the impact of running as a standard user. Now to be clear, I’m a fan of this approach, and Richard acknowledges the benefits of running as a standard user as well. I’m not a fan of doing a half-assed survey, but I guess I shouldn’t be surprised. It’s hard to get folks interested in a technology unless it’s mandated by compliance. – MR
e-Banking and the Basics – When I read Brian Krebs’ article on ‘e-Banking Guidance for Banks & Businesses’, I was happy to see someone offering pragmatic advice on how to detect and mitigate the surge of on-line bank fraud. What shocked me is that the majority of his advice was basic security and anti-fraud steps, and it was geared towards banks! They are not already doing all this stuff? Oh, crap! Does that mean most of these regional banks are about as sophisticated as an average IT shop about security – “not very”? WTF? You don’t monitor for abnormal activity already? You don’t have overlapping controls in place already? You don’t have near-real-time fraud detection in place already? You’re a freaking bank! It’s 2010, and you are not requiring 3rd factor verification for sizable Internet transfers already? I suspect that security will be a form of business Darwinism, and you’ll be out of business soon enough for failing to adapt. Then someone else will worry about your customers. I just hope they don’t get bankrupted before you finish flailing and failing. – AL
If you can’t beat them, OEM – When you have an enterprise firewall that isn’t a market leader in a mature market, what do you do? That’s the challenge facing McAfee. The former Secure Computing offering (Sidewinder) still has a decent presence in the US government, but hasn’t done much in the commercial sector, and isn’t going to displace the market leaders like Cisco, Juniper, or Check Point by hoping some ePO fairy dust changes things. So McAfee is partnering with other folks to integrate firewall capabilities into network devices. A while back they announced a deal with Brocade (the former Foundry switch folks) and this week did a deal with Riverbed to have the firewall built into the WAN optimization box. Clearly security and network stuff need to come together cleanly (something Cisco and Juniper have been pushing) and folks like Foundry and Riverbed had no real security mojo. But the real question is whether this is going to help McAfee capture any share in network security. I’m skeptical because it’s not like the folks using Brocade switches or Riverbed gear aren’t doing security now, and an OEM relationship doesn’t provide the perceived integration that will make a long-term difference. – MR
Compliance owns us – No surprise – yet another survey shows that compliance drives security spending, even though it doesn’t always align with enterprise priorities. Forrester performed a study, commissioned by RSA and Microsoft, on where dollars go compared to the information assets an organization prioritizes. The study did an okay job of constraining the normally fuzzy numbers around losses (limiting costs to hard dollars), but I’m a bit skeptical that organizations are tracking them well in the first place. Some of the conclusions are pretty damn weak, especially considering how they structured the study, but it’s still worth a read to judge attitudes – even if the value numbers are crap. While imperfect, it’s a better methodology than the vast majority of this kind of research. As I’ve said before – I think our compliance obsession is the natural result of the current loss economics, and until we can really measure the costs of IP loss, nothing will change. – RM
If not the FCC, then who? – In a clever move, Comcast successfully argued against net neutrality claims, contending that because the FCC deregulated the Internet, it has no basis to force compliance with a policy that is not embodied in law. Rather than debate the merits of net neutrality itself, they side-stepped the issue. As there is no other governing body that could enforce the policy at this time, Comcast is getting its way. The corporate equivalent of a cold-blooded murderer getting off on a technicality. But this is a Pyrrhic victory, because now we get to see all those clever tools that hide content and protocols from the Chinese government unleashed closer to home, so Verizon, AT&T and Comcast are going to end up having to move the data regardless. Hopefully the public will find a suitable way to avoid broadband providers’ bureaucracy and legislation at the same time. – AL
Are you grin frackking me? – Funny article here on the Business Insider about a former consultant (now a VC) who called bunk on his entire organization, which was basically feeding everyone a load of crap about their capabilities. I’ve been using the term “grin fscker” for years to represent someone who tells you what you want to hear, but has no intention of following through. Sometimes I call them on it, sometimes I don’t – and that’s my bad. The only way to deal with grin fscking is to call it out and shove the grin fscker’s nose in the poop. As the post explains, the buck should stop here. If someone is being disingenuous, it’s everyone’s responsibility to call that out. – MR
Posted at Wednesday 7th April 2010 7:00 am
By Mike Rothman
Running old software is bad. Bad like putting a new iPad in a blender. Bad because all software is vulnerable software, and with old software even unsophisticated bad guys have weaponized exploits to compromise the software. So the first of the Endpoint Security Fundamentals technical controls is to make sure you run updated software.
Does that mean you need to run the latest version of all your software packages? Can you hear the rejoicing across the four corners of the software ecosystem? Actually, it depends. What you do need to do is make sure your endpoint devices are patched within a reasonable timeframe. Like one minute before the weaponized exploit hits the grey market (or shows up in Metasploit).
Assess your (software) assets
Hopefully you have some kind of asset management thing, which can tell you what applications run in your environment. If not, your work gets a bit harder, because the first step requires you to inventory software. No, it’s not about license enforcement – it’s about risk assessment. You need to figure out your software vendors’ track records on producing secure code, and then on patching exploits as they are discovered. You can use sites like US-CERT and Secunia, among others, to figure this out. Your anti-malware vendor also has a research site where you can look at recent attacks by application.
You probably hate the word prioritize already, but that’s what we need to do (again). Based on the initial analysis, stack rank all your applications and categorize into a few buckets.
- High Risk: These applications are in use by 50M+ users, thus making them high-value targets for the bad guys. Frequent patches are issued. Think Microsoft stuff – all of it, Adobe Acrobat, Firefox, etc.
- Medium Risk: Anything else that has a periodic patch cycle and is not high-risk. This should be a big bucket.
- Low Risk: Those apps which aren’t used by many (security by obscurity) and tend to be pretty mature, meaning they aren’t updated frequently.
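If you want to mechanize the stack ranking, it can be as simple as scoring each application on user base and patch frequency. The thresholds in this Python sketch are arbitrary placeholders – tune them to your own analysis:

```python
# Bucket applications into risk tiers by install base and patch cadence.
# The cutoffs are illustrative, not recommendations.
def risk_bucket(users, patches_per_year):
    """Return 'high', 'medium', or 'low' for an application."""
    if users >= 50_000_000 and patches_per_year >= 4:
        return "high"    # huge target population, frequent patches
    if patches_per_year >= 1:
        return "medium"  # anything with a periodic patch cycle
    return "low"         # obscure and rarely updated

apps = {  # hypothetical inventory: name -> (user base, patches/year)
    "pdf-reader": (500_000_000, 12),
    "niche-crm": (20_000, 2),
    "legacy-tool": (5_000, 0),
}
print({name: risk_bucket(u, p) for name, (u, p) in apps.items()})
```

The point isn’t the exact numbers – it’s that once the ranking is written down, everyone argues about the cutoffs instead of re-litigating every application.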
Before we move on to the updating/patching process, while you assess the software running in your environment, it makes sense to ask whether you really need all that stuff. Even low-risk applications provide attack surface for the bad guys, so eliminating software you just don’t need is a good thing for everyone. Yes, it’s hard to do, but that doesn’t mean we shouldn’t try.
Defining the Update/Patch Process
Next you need to define what your update and patching process is going to be – and yes, you’ll have three different policies for high, medium and low risk applications. The good news is your friends at Securosis have already documented every step of this process, in gory detail, through our Patch Management Quant research.
At a very high level, the cycle is: Monitor for Release/Advisory, Evaluate, Acquire, Prioritize and Schedule, Test and Approve, Create and Test Deployment Package, Deploy, Confirm Deployment, Clean up, and Document/Update Configuration Standards. Within each phase of the cycle, there are multiple steps.
Not every step defined in PM Quant will make sense for your organization, so you can pick and choose what’s required. The requirement is to have a defined, documented, and operational process, and to have answered the following questions for each of your categories:
- Do you update to the latest version of the application? Within how soon after its release?
- When a patch is released, how soon should it be applied? What level of testing is required before deployment?
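Once you’ve answered those questions, the policy for each category can be captured as data and checked mechanically. A Python sketch – the patch windows shown are illustrative, not recommendations:

```python
# A per-tier patch policy expressed as data: risk tier -> maximum days
# allowed from patch release to deployment. Windows are placeholders.
POLICY = {
    "high": 7,
    "medium": 30,
    "low": 90,
}

def overdue(tier, days_since_release):
    """True if a patch for this tier has been outstanding past its window."""
    return days_since_release > POLICY[tier]

print(overdue("high", 10), overdue("low", 10))  # True False
```

A nightly job that flags overdue patches per tier turns the policy from a document into something that actually nags you.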
In a perfect world, everything should be patched immediately and all software should be kept at the latest version. Unless you are talking about Microsoft Vista <grin>. But we all know the world isn’t perfect and there are real economic and resource dependencies to tightening the patch window and buying software updates – and discovering more bugs in the patches themselves. So all these factors need to be weighed when defining the process and policies. There is no right or wrong answer – it’s a matter of balancing economic reality against risk tolerance.
Also keep in mind that patching remote and mobile users is a different animal, and you have to factor that into the process. Many of these folks connect infrequently and may not have access to high-bandwidth connections. Specifying a one-day patch window for installing a 400MB patch at a mobile office in the jungle may not be practical.
Tools and Automation
Lots of tools can help you automate your software updating and patching process. They range from full-fledged asset and configuration management offerings to fairly simple patching products. It’s beyond the scope of this series to really dig into the nuances of configuration/patch management, but we’ll just say here that any organization with more than a couple hundred users needs a tool. This is a topic we’ll cover in detail later this year.
The next endpoint control we’ll discuss is Secure Configurations, so stay tuned.
Other posts in the Endpoint Security Fundamentals Series
Posted at Tuesday 6th April 2010 7:00 pm
Today I read two very different posts on what to look for when hiring, and how to get started in the security field. Each clearly reflects the author’s experiences, and since I get asked both sides of this question a lot, I thought I’d toss my two cents in.
First we have Shrdlu’s post over at Layer 8 on Bootstrapping the Next Generation. She discusses the problem of bringing new people into a field that requires a fairly large knowledge base to be effective.
Then over at Errata Security, Marisa focuses more on how to get a job through the internship path (with a dollop of self-promotion). As one of our industry’s younger recruits, who successfully built her own internship, she comes from exactly the opposite angle.
My advice tends to walk a line slightly in the middle of the two, and varies depending on where in security you want to go.
When someone asks me how to get started in security I tend to offer two recommendations:
- Start with a background as a systems and network administrator… probably starting with the lowly help desk. This is how I got started (yes, I’m thus biased), and I think these experiences build a strong foundation that spans most of the tasks you’ll later deal with. Most importantly, they build experience on how the real world works – even more so than starting as a developer. You are forced to see how systems and applications are really used, learn how users interact with technology, and understand the tradeoffs in keeping things running on a day to day basis. I think even developers should spend some time on the help desk or cleaning up systems – while I was only a mediocre developer from a programming standpoint, I became damn good at understanding user interfaces and workflows from the few years I spent teaching people how to unhide their Start menus and organize those Windows 3.1 folders.
- Read a crapload of action thriller and spy novels, watch a ton of the same kinds of movies, and express your inner paranoid. This is all about building a security mindset, and it is just as important as any technical skills. It’s easy to say “never assume”, but very hard to put it into practice (and to be prepared for the social consequences). You are building a balanced portfolio of paranoia, cynicism, and skepticism. Go do some police ride-alongs, become an EMT, train in a hard martial art, join the military, or do whatever you need to build some awareness. If you were the kid who liked to break into school or plan your escape routes for when the commies (or yankees) showed up, you’re perfect for the industry. You need to love security.
The best security professionals combine their technical skills, a security mindset, and an ability to communicate (Marisa emphasized public speaking skills) with a wrapper of pragmatism and an understanding of how to balance the real world sacrifices inherent to security.
These are the kinds of people I look for when hiring (not that I do much of that anymore). I don’t care about a CISSP, but want someone who has worked with users and understands technology from actual experience rather than a library shelf, or a pile of certificates.
In terms of entry-level tracks, we are part of a complex profession and thus need to specialize. Even security generalists now need to have at least one deep focus area. I see the general tracks as:
- Operational Security – The CISO track. Someone responsible for general security in the organization. Usually comes from the systems or network track, although systems integration is another option.
- Secure Coder – Someone who either programs security software, or is responsible for helping secure general (non-security-specific) code. Needs a programmer’s background, but I’d also suggest some more direct user interaction if they’re used to coding in a closet with pizzas slipped under the door at irregular intervals.
- Security Assessor (or Pen Tester) – Should ideally come out of the coder or operations track. I know a lot of people are jumping right into pen testing, but the best assessors I know have practical experience on the operational side of IT. That provides much better context for interpreting results and communicating with clients. The vulnerability researcher or penetration tester who speaks in absolutes has probably spent very little time on the defensive or operational side of security.
You’ll notice I skipped a couple options – like the security architect. If you’re a security architect and you didn’t come from a programming or operational background, you likely suck at your job. I also didn’t break out security management – mostly since I hate managers who never worked for a living. To be a manager, start at the bottom and work your way up. In any case, if you’re ready for either of those roles you’re past these beginner’s steps, and if you want to get there, this is how to begin.
To wrap this up: when hiring, look for someone with experience outside security and mentor them through the transition if they have the right mindset. Yes, this means it’s hard to start directly in security, but I’m okay with that. It only takes a couple years in a foundational role to gain the experience, and if you have a security mindset you’ll be contributing to security no matter your operational role. So if you want to work in security, develop the mindset and jump on every security opportunity that pops up. As either a manager or recruit, also understand the different focus of each career track.
Finally, in terms of certifications, focus on the ‘low-level’ technical ones, often from outside security. A CISSP doesn’t teach you a security mindset, and as Shrdlu said it’s insane that something that is supposed to take 5 years of operational experience is a baseline for hiring – and we all know it’s easy to skirt the 5-year rule anyway.
I’m sure some of you have more to add to this one…
Posted at Tuesday 6th April 2010 3:07 pm
By Mike Rothman
One of the hardest things to do in security is to discover what really works. It’s especially hard on the endpoint, given the explosion of malware and the growth of social-engineering driven attack vectors. Organizations like ICSA Labs, av-test.org, and VirusBulletin have been testing anti-malware suites for years, though I don’t think most folks put much stock in those results. Why? Most of the tests yield similar findings, which means all the products are equally good. Or more likely, equally bad.
I know I declared the product review dead, but every so often you still see comparative reviews – such as Rob Vamosi’s recent work on endpoint security suites in PCWorld. The rankings of the 13 tested are as follows (in order):
- Top Picks: Norton Internet Security 2010, Kaspersky Internet Security 2010, AVG Internet Security 9.0
- Middle Tier: Avast, BitDefender, McAfee, Panda, PC Tools, Trend Micro, and Webroot
- Laggards: ESET, F-Secure, and ZoneAlarm
The PCWorld test was largely driven by a recent av-test.org study into malware detection. But can one lab produce enough information (especially in a single round of testing) to really understand which product works best? I don’t think so, because my research in this area has shown that 3 testing organizations can produce 10 different results. A case in point is the NSS Labs test from August of last year. Their rankings are as follows, ranked by malware detection rates: Trend Micro, Kaspersky, Norton, McAfee, Norman, F-Secure, AVG, Panda, and ESET. Some similarities, but also a lot of differences.
More recently, NSS did an analysis of how well the consumer suites detected the Aurora attacks (PDF), which got so much air play in January. Their results were less than stellar: only McAfee entirely stopped the original attack and a predictable variant two weeks out. ESET and Kaspersky performed admirably as well, but it’s bothersome that most of the products we use to protect our endpoints have awful track records like this.
If you look at the av-test ratings and then compare them to the NSS tests, the data shows some inconsistencies – especially with vendors like Trend Micro who are ranked much higher by NSS but close to the bottom by av-test; and AVG which is ranked well by av-test but not by NSS. So what’s the deal here?
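One way to make “inconsistent” concrete is a rank correlation between the two lists. This is only a rough sketch: the PCWorld piece firmly orders just its top three, so the middle and bottom positions below are my assumed fill-ins, not published scores.

```python
# Hedged sketch: quantify how much two labs' rankings disagree, using
# Spearman rank correlation over the products both labs tested.
# The pcworld ranks beyond the top 3 are assumptions, not lab data.

def spearman_rho(ranks_a, ranks_b):
    """Spearman correlation for two untied rankings of the same items."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d2) / (n * (n * n - 1))

# Rank of each product within the set both labs covered (1 = best).
products = ["Norton", "Kaspersky", "AVG", "McAfee", "Panda", "Trend Micro", "F-Secure", "ESET"]
pcworld  = [1, 2, 3, 4, 5, 6, 7, 8]   # assumed order; only the top 3 are firm
nss      = [3, 2, 6, 4, 7, 1, 5, 8]   # relative order from the NSS list

rho = spearman_rho(pcworld, nss)
print(round(rho, 2))  # well short of 1.0 -- the labs only partly agree
```

A rho near 1.0 would mean the labs basically agree; anything down in the 0.4-0.5 range says their methodologies are measuring different things.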
Your guess is as good as mine. I know the NSS guys and they focus their tests pretty heavily on what they call “social engineering malware,” which are legit downloads with malicious code hidden in the packages. This kind of malware is much harder to detect than your standard malware sample that’s been on the WildList for a month. Reputation and advanced file integrity monitoring capabilities are critical to blocking socially engineered malware, and most folks believe these attacks will continue to proliferate over time.
Unfortunately, there isn’t enough detail about the av-test.org tests to really know what they are digging into. But my spidey sense tingles about the objectivity of their findings when you read this December report from av-test.org, commissioned by Trend. It concerns me that av-test.org had Trend close to the bottom in a number of recent tests, but changed their testing methodology a bit for this one, and shockingly: Trend came out on top. WTF? There is no attempt to reconcile the findings across different sets of av-test.org tests, but I’d guess it has something to do with green stuff changing hands.
Moving forward, it would also be great to see some of the application whitelisting products tested alongside the anti-malware suites – for detection, blocking, and usability. That would be interesting.
If I’m an end user trying to decide between these products, I’m justifiably confused. Personally, I favor the NSS tests – if only because they provide a lot more transparency about how they did their tests. The inconsistent information being published by av-test.org is a huge red flag for me.
But ultimately you probably can’t trust any of these tests, so you have a choice to make. Do you care about the test scores or not? If not, then you buy based on what you would have bought anyway: management and price. It probably makes sense to disqualify the bottom performers in each of the tests, since for whatever reason the testers figured out how to beat them, which isn’t a good sign.
In the end you will probably kick the tires yourself: pick a short list (2 or 3 packages) and run them side by side through a gauntlet of malware you’ve found in your organization. Or contract with a testing lab to run a test against your specific criteria. But that costs money and takes time, neither of which we have a lot of.
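If you do run your own gauntlet, the bookkeeping is trivial – something like this sketch, where the product names, sample count, and verdicts are all invented for illustration:

```python
# Hedged sketch: tally how each shortlisted product did against your own
# malware corpus. Verdicts are invented placeholders, not real test data.

results = {  # one verdict per sample: True = detected/blocked
    "Product A": [True, True, False, True, True],
    "Product B": [True, False, False, True, True],
}

def detection_rate(verdicts):
    """Fraction of samples the product caught."""
    return sum(verdicts) / len(verdicts)

rates = {name: detection_rate(v) for name, v in results.items()}
print(rates)  # {'Product A': 0.8, 'Product B': 0.6}
```

The hard part isn’t the math – it’s assembling a sample set that actually reflects what hits your users, rather than a stale WildList.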
The Bottom Line
The truth may be out there, but Fox Mulder has a better chance of finding it than you. So we focus on the fundamentals of protecting not just the endpoints, but also the networks, servers, applications, and data. Regardless of the effectiveness of the anti-malware suites, your other defenses should help you both block and detect potential breaches.
Posted at Tuesday 6th April 2010 1:00 pm
By Mike Rothman
As we discussed in the last ESF post on prioritizing the most significant risks, the next step is to build, communicate, and execute on a triage plan to fix those leaky buckets. The plan consists of the following sections: Risk Confirmation, Remediation Plan, Quick Wins, and Communication.
Coming out of the prioritize step, before we start committing resources and/or pulling the fire alarm, let’s take a deep breath and make sure our ranked list really represents the biggest risks. How do we do that? Basically by using the same process we used to come up with the list. Start with the most important data, and work backwards based on the issues we’ve already found.
The best way I know to get everyone on the same page is to have a streamlined meeting between the key influencers of security priorities. That involves folks not just within the IT team, but also probably some tech-savvy business users – since it’s their data at risk. Yes, we are going to go back to them later, once we have the plan. But it doesn’t hurt to give them a heads up early in the process about what the highest priority risks are, and get their buy-in early and often throughout the process.
Now comes the fun part: we have to figure out what’s involved in addressing each of the leaky buckets. That means figuring out whether you need to deploy a new product, or optimize a process, or both. Keep in mind that for each of the discrete issues, you want to define the fix, the cost, the effort (in hours), and the timeframe commitment to get it done. No, none of this is brain surgery, and you probably have a number of fixes on your project plan already. But hopefully this process provides the needed incentive to get some of these projects moving.
Once the first draft of the plan is completed, start lining up the project requirements with the reality of budget and availability of resources. That way when it comes time to present the plan to management (including milestones and commitments), you have already had the visit with Mr. Reality so you can stick to what is feasible.
As you are doing the analysis to build the remediation plan, it’ll be obvious that some fixes are cheap and easy. We recommend you take the risk (no pun intended) and take care of those issues first, regardless of where they end up on the risk priority list. Why? We want to build momentum behind the endpoint security program (or any program, for that matter), and that involves showing progress as quickly as possible. You don’t need to ask permission for everything.
The hallmark of any pragmatic security program (read more about the Pragmatic philosophy here) is frequent communications and senior level buy-in. So once we have the plan in place, and an idea of resources and timeframes, it’s time to get everyone back in the room to get thumbs up for the triage plan.
You need to package up the triage plan in a way that makes sense to the business folks. That means thinking about business impact first, reality second, and technology probably not at all. These folks want to know what needs to be done, when it can get done, and what it will cost.
We recommend you structure the triage pitch roughly like this:
- Risk Priorities – Revisit the priorities everyone has presumably already agreed to.
- Quick Wins – Go through the stuff that’s already done. That will usually put the bigwigs in a good mood, since things are already in motion.
- Milestones – These folks don’t want to hear the specifics of each project. They want the bottom line. When will each of the risk priorities be remediated?
- Dependencies – Now that you’ve told them what you need to do, tell them what constraints you are operating under. Are there budget issues? Are there resource issues? Whatever it is, make sure you are very candid about what can derail efforts and impact milestones.
- Sign-off – Then you get them to sign in blood as to what will get done and when.
Dealing with Shiny Objects
To be clear, getting to this point tends to be straightforward. Senior management knows stuff needs to get done, and your initial plan should present a good way to get those things done. But the challenge is only beginning, because as you start executing on your triage plan, any number of other priorities will pop up that absolutely, positively need to be dealt with.
In order to have any chance to get through the triage list, you’ll need to be disciplined about managing expectations relative to the impact of each shiny object on your committed milestones. We also recommend a monthly meeting with the influencers to revisit the timeline and recast the milestones – given the inevitable slippages due to other priorities.
OK, enough of this program management stuff. Next in this series, we’ll tackle some of the technical fundamentals, like software updates, secure configuration, and malware detection.
Other posts in the Endpoint Security Fundamentals Series
Posted at Monday 5th April 2010 11:45 pm
By Adrian Lane
When you think of database virtualization, do you think this term means:
a) Abstracting the database installation/engine from the application and storage layers.
b) Abstracting the database instance across multiple database installations or engines.
c) Abstracting the data and tables from a specific database engine/type, to make the dependent application interfaces more generic.
d) Abstracting the data and tables across multiple database installations/engines.
e) Moving your database to the cloud.
f) All of the above.
I took a ‘staycation’ last month, hanging around the house to do some spring cleaning. Part of the cleaning process was cutting through the pile of unread technical magazines and trade rags to see if there was anything interesting before I threw them into the garbage. I probably should have just thrown them all away: of the half dozen articles I read on the wonderful things database virtualization can do for you, not one offered a consistent definition. In most cases the answer was f) – they used the term “database virtualization” to mean all of the above, without actually mentioning that it can have more than one definition. One particularly muddled piece at eWeek last October used all of the definitions interchangeably – within a single article.
Databases have been using abstraction for years. Unfortunately these database techniques are often confused with other forms of platform, server, or application virtualization – which run on top of a hypervisor using any of several different techniques (full, emulated, application, para-virtualization, etc.). To further confuse things, other forms of abstraction, such as the object-relational mapping layers within the applications that use the database, do not virtualize resources at all. Let’s take a closer look at the options and differentiate between them:
a) This is the form most commonly called “database virtualization”. It’s more helpful to think of it as application virtualization, because the database is an application. Sure, the classic definition of a database is simply a repository of data, but from a practical standpoint databases are managed by an application. SQL Server, Oracle, and MySQL are all applications that manage data.
b) This option can also be a database virtualization model. We often call this clustering, and many DBAs will be confused if you call it virtualization. Note that a) & c) are not mutually exclusive.
c) This is not a database virtualization model, but rather an abstraction model. It is used to decouple specific database functions from the application, as well as to enable more powerful 4GL object-oriented programming rather than dealing directly with 3GL routines and queries. The abstraction is handled within the application layer through a service like Hibernate, rather than through system virtualization software like Xen or VMware.
d) Not really database virtualization, but abstraction. Most DBAs call this ‘partitioning’, and the model has been available for years, with variants from multiple database vendors.
e) The two are unrelated. Chris Hoff summarized the misconception well when he said “Virtualization is not a requirement for cloud computing, but the de-facto atomic unit of the digital infrastructure has become the virtual machine”. Actually, I am paraphrasing from memory, but I think that provides the essence of why people often equate the two.
This is important for two reasons. One, the benefits that can be derived depend heavily on the model you select. Not every benefit is available with every model, so these articles are overly optimistic. Two, the deployment model affects security of the data and the database. What security measures you can deploy and how you configure them must be reconsidered in light of the options you select.
Posted at Monday 5th April 2010 8:14 pm
By Mike Rothman
As we start to dig into the Endpoint Security Fundamentals series, the first step is always to figure out where you are. Since hope is not a strategy, you can’t just make assumptions about what’s installed, what’s configured correctly, and what the end users actually know. So we’ve got to figure that out, which involves using some of the same tactics our adversaries use.
The goal here is twofold: first, you need to figure out what presents a clear and present danger to your organization, and put a triage plan in place to remediate those issues. Second, you need to manage expectations at all points in this process. That means documenting what you find (no matter how ugly the results) and communicating it to management, so they understand what you are up against.
To be clear, although we are talking about endpoint security here, this prioritization (and triage) process should be the first steps in any security program.
Assessing the Endpoints
In terms of figuring out your current state, you need to pay attention to a number of different data sources – all of which yield information to help you understand the current state. Here is a brief description of each and the techniques to gather the data.
- Endpoints – Yes, the devices themselves need to be assessed for updated software, current patch levels, unauthorized software, etc. You may have a bunch of this information via a patch/configuration management product or as part of your asset management environment. To confirm that data, we’d also recommend you let a vulnerability scanner loose on at least some of the endpoints, and play around with automated pen testing software to check for exploitability of the devices.
- Users – If we didn’t have to deal with those pesky users, life would be much easier, eh? Well, regardless of the defenses you have in place, one ill-timed click by a gullible user and you are pwned. You can test users by sending around fake phishing emails and other messages with fake bad links. You can also distribute some USB keys and see how many people actually plug them into machines. These “attacks” will determine pretty quickly whether you have an education problem, and what other defenses you may need to overcome those issues.
- Data – I know this is about endpoint security, but Rich will be happy to know doing a discovery process is important here as well. You need to identify devices with sensitive information (since those warrant a higher level of protection) and the only way to do that is to actually figure out where the sensitive data is. Maybe you can leverage other internal efforts to do data discovery, but regardless, you need to know which devices would trigger a disclosure if lost/compromised.
- Network – Clearly devices already compromised need to be identified and remediated quickly. The network provides lots of information to indicate compromised devices. Whether it’s looking at network flow data, anomalous destinations, or alerts on egress filtering rules – the network is a pretty reliable indicator of what’s already happened, and where your triage efforts need to start.
Keep in mind that it is what it is. You’ll likely find some pretty idiotic things happening (or confirm the idiotic things you already knew about), but that is all part of the process. The idea isn’t to get overwhelmed, it’s to figure out how much is broken so you can start putting in place a plan to fix it, and then a process to make sure it doesn’t happen so often.
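As a rough illustration of the network data source above, a first-pass triage filter over flow records might look like this sketch. The hostnames, field names, and threshold are invented, and real flow exports (NetFlow, sFlow) carry different fields:

```python
# Hedged sketch: flag endpoints whose flow records show traffic to
# destinations outside a known-good set, or unusually large outbound
# transfers. All names and numbers are placeholders for illustration.

KNOWN_GOOD = {"update-server", "mail-gateway", "crm-saas"}
EGRESS_BYTES_THRESHOLD = 50_000_000  # 50 MB in one flow: worth a look

flows = [
    {"src": "laptop-17", "dst": "update-server", "bytes": 1_200_000},
    {"src": "laptop-23", "dst": "198.51.100.9",  "bytes": 75_000_000},
    {"src": "desktop-4", "dst": "mail-gateway",  "bytes": 300_000},
]

def suspicious(flow):
    """Unknown destination or oversized outbound transfer."""
    return flow["dst"] not in KNOWN_GOOD or flow["bytes"] > EGRESS_BYTES_THRESHOLD

triage_first = [f["src"] for f in flows if suspicious(f)]
print(triage_first)  # laptop-23: unknown destination AND a lot of data moved
```

A filter this crude will generate noise, of course – the point is that even simple egress rules surface the devices your triage plan should start with.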
Prioritizing the Risks
Prioritization is more art than science. After spending some time gathering data from the endpoints, users, data, and network, how do you know what is most important? Not to be trite, but it’s really a common sense type thing.
For example, if your network analysis showed a number of endpoints already compromised, it’s probably a good idea to start by fixing those. Likewise, if your automated pen test showed you could get to a back-end datastore of private information via a bad link in an email (clicked on by an unsuspecting user), then you have a clear and present danger to deal with, no?
After you are done fighting the hottest fires, the prioritization really gets down to who has access to sensitive data and making sure those devices are protected. This sensitive data could be private data, intellectual property, or anything else you don’t want to see on the full-disclosure mailing list. Hopefully your organization knows what data is sensitive, so you can figure out who has access to that data and build the security program around protecting that access.
In the event there is no internal consensus about what data is important, you can’t be bashful about asking questions like, “why does that sales person need the entire customer database?” and “although it’s nice that the assistant to the assistant controller’s assistant wants to work from home, should he have access to the unaudited financials?” Part of prioritizing the risk is to identify idiotic access to sensitive data.
And not everything can be a Priority 1.
Jumping on the Moving Train
In the real world, you don’t get to stop everything and start your security program from scratch. You’ve already got all sorts of assessment and protection activities going on – at least we hope you do. That said, we do recommend you take a step back and not be constrained to existing activities. Existing controls are inputs to your data gathering process, but you need to think bigger about the risks to your endpoints and design a program to handle them.
At this point, you should have a pretty good idea of which endpoints are at significant risk and why. In the next post, we’ll discuss how to build the triage plan to address the biggest risks and get past the fire fighting stage.
Endpoint Security Fundamentals Series
Posted at Friday 2nd April 2010 8:27 pm
By Adrian Lane
We have one last process to define in our Quant for Database Security series before moving into more specific metrics. Here we cover the Change Management task of the Manage phase. The steps and process flow in this task strongly resemble the patching process introduced in the previous section. Rather than looking to the database vendor for patch advisories, you will be looking at internal workflow, product development, and trouble-ticketing work requests for changes to database structure, stored procedures, application interfaces, indices, views, and data extraction/masking.
Security is not something that typically comes to mind when thinking about change management. For those of you who support databases that back large web applications, a lot of daily adjustments and maintenance will be security-related, in the same way that database patches are as likely to be security-related as updates to the core functionality. As the costs of these exercises are on par with patching work, we need to account for the time required to keep databases running effectively.
The following is our outline of the high-level steps, with an itemization of the costs to consider when accounting for your database change management process.
- Time to monitor for work requests, assess priority, and identify target databases for maintenance.
- Time to update trouble-ticket system with workflow status.
- Schedule & Prepare
- Time to map requests to specific changes.
- Time to clarify any ambiguity in the requests, and schedule according to criticality.
- Time to create scripts, gather import files, verify parameter settings, checkpoint the database, and create database backups as needed.
- Time to make changes, run scripts, export data files, and restart the database.
- Time to verify that changes are in place and perform basic sanity testing of structural modifications. This may include functional tests or regression testing with new application logic.
- Time to document the changes, update workflow, and update trouble-ticket systems.
- Time to archive backups or custom scripts.
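The itemized steps above roll up into a simple per-change cost model. Here is a minimal sketch – the step hours and the loaded hourly rate are placeholders, not Quant survey data:

```python
# Hedged sketch: roll per-step times into a cost for one database change.
# Step names and hours are illustrative placeholders.

HOURLY_RATE = 85  # assumed loaded cost per DBA hour

change_steps = {  # hours per step, mirroring the list above
    "monitor_requests": 0.5,
    "update_tickets": 0.25,
    "schedule_and_prepare": 1.0,
    "make_changes": 2.0,
    "verify_and_test": 1.5,
    "document": 0.5,
    "archive": 0.25,
}

total_hours = sum(change_steps.values())
print(total_hours, total_hours * HOURLY_RATE)  # hours and dollar cost
```

Multiply that per-change figure by your monthly change volume and the point of the post becomes obvious: this work is on par with patching, and it belongs in the budget the same way.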
In our next post we will change gears, so to speak, and start digging into the metrics.
Posted at Friday 2nd April 2010 7:30 pm
By Adrian Lane
It’s the new frontier. It’s like the “Wild West” meets the “Barbary Coast”, with hostile Indians and pirates all rolled into one. And like those places, lawless entrepreneurialism is a major part of the economy. That was the impression I got reading Robert Mullins’ The biggest cloud on the planet is owned by … the crooks. He examines the resources under the control of Conficker-based worms and compares them to the legitimate cloud providers. I liked his post, as considering botnets in terms of their position as cloud computing leaders (by resources under management) is a startling concept. Realizing that botnets offer 18 times the computational power of Google and over 100 times that of Amazon Web Services is astounding. It’s fascinating to see how the shady and downright criminal have embraced technology – and in many cases drive innovation. I would also be interested in comparing total revenue and profitability between, say, AWS and a botnet. We can’t, naturally, as we don’t really know how much revenue spam and bank fraud yield. Plus the business models are different and botnets enjoy abnormally low overhead – but I am willing to bet criminals are much more efficient than Amazon or Google.
I feel like I am watching a Battlestar Galactica rerun, where the humans can’t use networked computers because the Cylons hack into them as fast as they find them. And the sheer numbers of hacked systems support that image. It was apropos that Andy the IT Guy asked Should small businesses quit using online banking. Unfortunately the answer is yes. It’s just not safe for most merchants who do not – and who do not want to – have a deep understanding of computer security. Nobody really wants to go back to the old model where they drive to the bank once or twice a week and wait in line for half an hour, just so the new teller can totally screw up their deposit. Nor do they want to buy dedicated computers just for online banking, but that may be what it comes down to, as Internet banking is just not safe for novices. Yet we keep pushing onward, encouraged by so many businesses to do more of our business online (saving them processing costs). Don’t believe me? Go to your bank, and they will ask you to please use their online systems. Fun times.
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Martin McKeay, for offering practical advice in response to Help a Reader: PCI Edition.
Unluckily, there isn’t a third party you can appeal to, at least as far as I know. My suggestion would be to get both your Approved Scanning Vendor and your hosting provider on the same phone call and have the ASV explain in detail to the hosting provider the specifics of vulnerabilities that have been found on the host. Your hosting provider may be scanning your site with a different ASV, or not at all, and receiving different information than you’re seeing. Or it may be that they’re in compliance and your ASV is generating false positives in your report. Either way, it’s going to be far easier for them to communicate directly at a technical level than for you to try to act as an intermediary between the two.
I’d also politely point out to your host that their lack of communication is costing you money, and if it continues you may have to take your business elsewhere. If they’re not willing to support you, why should you continue to pay them money? Explore your contract; you may have the option of subtracting the amount of the fines from your payment to them. Money always gets their attention.
There are too many variables involved for there to be a solid answer to this, these are just my suggestions. If you have a relationship with a QSA I’d strongly suggest you get them involved as well.
Posted at Friday 2nd April 2010 6:40 am
By Mike Rothman
As we continue building out coverage on more traditional security topics, it’s time to focus some attention on the endpoint. For the most part, many folks have just given up on protecting the endpoint. Yes, we all go through the motions of having endpoint agents installed (on Windows anyway), but most of us have pretty low expectations for anti-malware solutions. Justifiably so, but that doesn’t mean it’s game over. There are lots of things we can do to better protect the endpoint, some of which were discussed in Low Hanging Fruit: Endpoint Security.
But let’s not get the cart ahead of the horse. First off, nowadays there are lots of incentives for the bad guys to control endpoint devices. There is usually private data on the device, including nice things like customer databases – and with the strategic use of keyloggers, it’s just a matter of time before bank passwords are discovered. Let’s not forget about intellectual property on the devices, since lots of folks just have to have their deepest darkest (and most valuable) secrets on their laptop, within easy reach. Best of all, compromising an endpoint device gives the bad guys a foothold in an organization, and enables them to compromise other systems and spread the love.
The endpoint has become the path of least resistance, mostly because of the lack of sophistication of the folks using said devices to do crazy Web 2.0 stuff. All that information sharing certainly seemed like a good idea at the time, right? Regardless of how wacky the attack, it seems at least one stupid user will fall for it. Between web application attacks like XSS (cross-site scripting), CSRF (cross-site request forgery), social engineering, and all sorts of drive-by attacks, compromising devices is like taking candy from a baby. But not all the blame can be laid at the feet of users, because many attacks are pretty sophisticated, and even hardened security professionals can be duped.
Combine that with the explosion of mobile devices, whose owners tend to either lose them or bring back bad stuff from coffee shops and hotels, and you’ve got a wealth of soft targets. And as the folks tasked with protecting corporate data and ensuring compliance, we’ve got to pay more attention to locking down the endpoints – to the degree we can. And that’s what the Endpoint Security Fundamentals series is all about.
Philosophy: Real-world Defense in Depth
As with all of Securosis’ research, we focus on tactics that maximize impact for minimal effort. In the real world, we may not have the ability to truly lock down the devices, since those damn users want to do their jobs. The nerve of them! So we’ve focused on layers of defense – not just from the standpoint of technology, but also looking at what we need to do before, during, and after an incident.
- Prioritize – This will warm the hearts of all the risk management academics out there, but we do need to start the process by understanding which endpoint devices are most at risk because they hold valuable data, for a legitimate business reason – right?
- Assess the current status – Once we know what’s important, we need to figure out how porous our defenses are, so we’ll be assessing the endpoints.
- Focus on the fundamentals – Next up, we actually pick that low hanging fruit and do the things we should be doing anyway. Yes, things like keeping software up to date, leveraging what we can from malware defenses, and using newer technologies like personal firewalls and HIPS. Right, none of this stuff is new, but not enough of us do it. Kind of like… no, I won’t go there.
- Building a sustainable program – It’s not enough to just implement some technology. We also need to do some of those softer management things, which we don’t like very much – like managing expectations and defining success. Ultimately we need to make sure the endpoint defenses can (and will) adapt to the changing attack vectors we see.
- Respond to incidents – Yes, it will happen to you, so it’s important to make sure your incident response plan factors in the reality that an endpoint device may be the primary attack vector. So make sure you’ve got your data gathering and forensics kits at the ready, and also have an established process for when a remote or traveling person is compromised.
- Document controls – Finally, the auditor will show up and want to know what controls you have in place to protect those endpoints. So you also need to focus on documentation, ensuring you can substantiate all the tactics we’ve discussed thus far.
The ESF Series
To provide a little preview of what’s to come, here is how the series will be structured:
- Prioritize: Finding the Leaky Buckets
- Triage: Fixing the Leaky Buckets
- Fundamentals: Leveraging existing technologies (a few posts covering the major technology areas)
- The Endpoint Security Program: Systematizing Protection
- Incident Response: Responding to an endpoint compromise
- Compliance: Documenting Endpoint Controls
As with all our research initiatives, we count on you to keep us honest. So check out each piece and provide your feedback. Tell me why I’m wrong, how you do things differently, or what we’ve missed.
Posted at Thursday 1st April 2010 7:00 pm
By Adrian Lane
It’s tough for me to write a universal quick configuration management guide for databases, because the steps you take will be based upon the size, number, and complexity of the databases you manage. Every DBA works in a slightly different environment, and configuration settings get pretty specific. Further, when I got started in this industry, the cost of the database server and the cost of the database software were more than a DBA’s yearly salary. It was fairly common to see one database admin for one database server. By the time the tech bubble burst in 2001, it was common to see one database administrator tending to 15-20 databases. Now that number may approach 100, and it’s not just a single database type, but several. The greater complexity makes it harder to detect and remedy simple mistakes that lead to database compromises.
That said, reconfiguring a database is a straightforward task. Database administrators know it involves little more than changing a parameter value in a file or management UI and, in the worst case, restarting the database. And the majority of parameters, outside the user settings we have already discussed, will remain static over time. The difficulties are knowing which settings are appropriate for database security, and keeping settings consistent and up to date across a large number of databases. Research and ongoing management are what make this step more challenging.
The following is a set of basic steps to establish and maintain database configuration. This is not meant to be a process per se, but just a list of tasks to be performed.
- Research: How should your databases be configured for security? We have already discussed many of the major topics with user configuration management and network settings, and patching takes care of another big chunk of the vulnerabilities. But that still leaves a considerable gap. Researching which settings you need to be concerned with, and the proper values for your databases, will comprise the bulk of your work for this exercise. All database vendors provide recommended configurations and security settings, and it does not take very long to compare your configuration to the standard. There are also some free assessment tools with built-in policies that you can leverage, and your own team may have policies and recommendations. Third party researchers provide detailed information on blogs as well, as do CERT and MITRE advisories.
- Assess & Configure: Collect the configuration parameters and find out how your databases are configured, then make changes according to your research. Pay particular attention to areas where users can add or alter database functions, such as cataloging databases and nodes in DB2, or UTL_FILE settings in Oracle. Pay attention to OS-level settings as well: verify that the database is installed under a non-IT, non-domain-administration account, and that things like shared memory access and read permissions on database data files are restricted. Also note that assessment can verify audit settings, ensuring your monitoring and auditing facilities generate the appropriate data streams for other security efforts.
- Discard What You Don’t Need: Databases come with tons of stuff you may never need: test databases, advanced features, development environments, web servers, and other extras. Remove modules & services you don’t use. Not using replication? Remove those packages. These services may or may not be secure, but their absence ensures they are not providing open doors for hackers.
- Baseline and Document: Document the approved configuration baseline for your databases. This serves as a reference for all administrators and as a guideline for detecting misconfigured systems. The baseline saves you from re-researching the correct settings, and the documentation will help you and your team members remember why certain settings were chosen.
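The assess-and-baseline loop above boils down to comparing collected parameters against the documented baseline. Here is a minimal sketch of that comparison; note that the parameter names and values are hypothetical illustrations, and in practice you would pull the current settings from the database itself rather than a hard-coded dictionary.

```python
# Compare a database's current configuration against a documented
# baseline and report any drift. Parameter names/values are hypothetical.

def find_drift(baseline: dict, current: dict) -> dict:
    """Return {parameter: (expected, actual)} for every mismatch,
    including parameters missing from the current configuration."""
    drift = {}
    for param, expected in baseline.items():
        actual = current.get(param)
        if actual != expected:
            drift[param] = (expected, actual)
    return drift

# The documented baseline (what 'Baseline and Document' produced).
baseline = {
    "remote_login_passwordfile": "EXCLUSIVE",
    "audit_trail": "DB",
    "local_infile": "OFF",
}

# Settings collected from a target database (illustrative).
current = {
    "remote_login_passwordfile": "EXCLUSIVE",
    "audit_trail": "NONE",  # drifted from the baseline
    "local_infile": "OFF",
}

for param, (expected, actual) in find_drift(baseline, current).items():
    print(f"{param}: expected {expected!r}, found {actual!r}")
```

Trivial, yes, but run on a schedule and archived, this is exactly the scan-and-save habit described under Automation below.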
A little more advanced:
- Automation: If you work on a team with multiple DBAs, there will be plenty of changes you are not aware of, and some of them may be out of spec. If you can, run configuration scans on a regular basis and save the results. It’s a proactive way to ensure configurations do not wander too far out of specification as you maintain your systems. Even if you do not review every scan, if something breaks you at least have the data needed to determine what changed and when, for after-the-fact forensics.
- Discovery: It’s a good idea to know what databases are on your network and what data they contain. As databases are being embedded into many applications, they surreptitiously find their way onto your network. If hacked, they provide launch points for other attacks and leverage whatever credentials the database was installed with, which you hope was not ‘root’. Data discovery is a little more difficult to do, and comes with separation of duties issues (DBAs should not be looking at data, just database setup), but understanding where sensitive data resides is helpful in setting table, group, and schema permissions.
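The first half of discovery, finding the databases themselves, can be as simple as sweeping hosts for the well-known listener ports. This sketch uses only the standard library; the port list is illustrative, and real discovery tools also fingerprint service banners rather than trusting port numbers alone.

```python
import socket

# Default listener ports for common database engines (illustrative list).
DB_PORTS = {
    1433: "SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "DB2",
}

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(hosts):
    """Yield (host, port, engine) for every open database port found."""
    for host in hosts:
        for port, engine in DB_PORTS.items():
            if probe(host, port):
                yield host, port, engine
```

Pointing `discover()` at a subnet’s hosts gives a rough inventory of listeners. Data discovery – figuring out what sensitive data lives inside those databases – is the harder, separate problem noted above.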
Just as an aside on the topic of configuration management: during my career I have helped design and implement database vulnerability assessment tools. I have written hundreds of policies for database security and operations, covering most relational database platforms and several non-relational platforms. I am a big fan of automating configuration data collection and analysis. And frankly, I am a big fan of having someone else write vulnerability assessment policies, because it is difficult and time-consuming work. So I admit I have a bias toward using assessment tools for configuration management. I hate to recommend tools for an essentials guide, as I want this series to stick to lightweight stuff you can do in an afternoon, but the reality is that you cannot reasonably research vulnerability and security settings for a database in an afternoon. It takes time and a willingness to learn about some of the esoteric database features attackers will exploit. Once the initial research is done, keeping the database configuration in check is not that difficult. As the number and variety of databases under your management grows, you will need some help automating the job, so my practical advice is: plan on grabbing a tool or writing some scripts.
There are a couple of free assessment tools you can use to help automate the process and quickly identify areas of interest, so grab one and review the results. There are professional tools with much greater depth and breadth of functionality, but those are outside our scope here. Granted, if you are managing iSeries, MySQL, or Teradata, pickings may be slim, but most databases are covered, and policies for other platforms can offer guidance on the specific issues you need to be concerned with. If you are handy with a scripting language or stored procedures, you can write your own scripts to automate these tasks. This approach works very well, as long as you have the time to write the scripts, proper system access, and the scripts are secured from non-DBAs.
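If you go the script route, even a crude pass over a configuration file catches a lot. As a sketch, the following scans an INI-style database config (the format roughly follows MySQL’s my.cnf) for a few risky settings. The policy list here is a made-up illustration for this example, not a vetted benchmark, so substitute the settings your own research turned up.

```python
# Flag risky settings in an INI-style database configuration file.
# POLICIES maps setting -> (allowed value, reason); None means the
# setting should not be present at all. This list is illustrative.

POLICIES = {
    "local_infile": ("OFF", "allows reading server-side files via LOAD DATA"),
    "skip_grant_tables": (None, "disables all authentication"),
    "old_passwords": ("0", "weak legacy password hashing"),
}

def check_config(lines):
    """Return a list of (setting, reason) findings from config lines."""
    settings = {}
    for raw in lines:
        line = raw.split("#", 1)[0].strip()   # strip comments/whitespace
        if not line or line.startswith("["):  # skip blanks and sections
            continue
        key, _, value = line.partition("=")
        # Normalize dashes vs underscores, as MySQL option names allow both.
        settings[key.strip().lower().replace("-", "_")] = value.strip() or None

    findings = []
    for key, (allowed, reason) in POLICIES.items():
        if key not in settings:
            continue
        if allowed is None or settings[key] != allowed:
            findings.append((key, reason))
    return findings

sample = [
    "[mysqld]",
    "local_infile = ON",
    "skip-grant-tables",
    "old_passwords = 0",
]
for setting, reason in check_config(sample):
    print(f"WARNING: {setting}: {reason}")
```

A script like this is trivially extended with your own policies, which is the whole point: the hard part is the research, not the code.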
Posted at Thursday 1st April 2010 5:00 pm
Update: Lancope posted some new information positioning this as a complement, not a substitute, to DLP. Looks like the marketing folks might have gotten a little out of control.
I’ve been at this game for a while now, but sometimes I see a piece of idiocy that makes me wish I was drinking some chocolate milk so I could spew it out my nose in response to the sheer audacity of it all.
Today’s winner is Lancope, who astounds us with their new “data loss prevention” solution that detects breaches using a Harry Potter-inspired technique that completely eliminates the need to understand the data. Actually, according to their extremely educational marketing paper, analyzing the content is bad, because it’s really hard! Kind of like math. Or common sense.
Lancope’s far superior alternative monitors your network for any unusual activity, such as a large file transfer, and generates an alert. You don’t even need to look at packets! That’s so cool! I thought the iPad was magical, but Lancope is totally kicking Apple’s ass on the enchantment front. Rumor is your box is even delivered by a unicorn. With wings!
I’m all for netflow and anomaly detection. It’s one of the more important tools for dealing with advanced attacks. But this Lancope release is ridiculous – I can’t even imagine the number of false positives. Without content analysis, or even metadata analysis, I’m not sure how this could possibly be useful. Maybe paired with real DLP, but they are marketing it as a stand-alone option, which is nuts. Especially when DLP vendors like Fidelis, McAfee, and Palisade are starting to add data traffic flow analysis (with content awareness) to their products.
Maybe Lancope should partner with a DLP vendor. One of the weaknesses of many DLP products is that they do a crappy job of looking across all ports and protocols. Pretty much every product is capable of it, but most of them require a large number of boxes with severe traffic or analysis limitations, because they aren’t overly speedy as network devices (with some exceptions). Combining one with something like Lancope, where you could point the DLP at target traffic, could be interesting… but damn, netflow alone clearly isn’t a good option.
Lancope, thanks for a great DLP WTF with a side of BS. I’m glad I read it today – that release is almost as good as the ThinkGeek April Fool’s edition!
Posted at Thursday 1st April 2010 4:12 pm
By David Mortman
One of our readers recently emailed me with a major dilemma. They need to keep their website PCI compliant in order to keep using their payment gateway to process credit card transactions. Their PCI scanner is telling them they have vulnerabilities, while their hosting provider tells them they are fine. Meanwhile our reader is caught in the middle, paying fines.
I don’t dare to use my business e-mail address, because it would disclose my business name. I have been battling with my website host and security vendor concerning the Non-PCI Compliance of my website. It is actually my host’s IP address that is being scanned and for several months it has had ONE Critical and at least SIX High Risk scan results. This has caused my Payment Gateway provider to start penalizing me $XXXX per month for Non-PCI compliance. I wonder how long they will even keep me. When I contact my host, they say their system is in compliance. My security vendor is saying they are not. They are each saying I have to resolve the problem, although I am in the middle. Is there not a review board that can resolve this issue? I can’t do anything with my host’s system, and don’t know enough gibberish to even interpret the scan results. I have just been sending them to my host for the last several months.
There is no way this is the first or last time this has happened, or will happen, to someone in this situation. This sort of thing is bound to come up in compliance situations where the customer doesn’t own the underlying infrastructure, whether it’s a traditional hosted offering, an ASP, or the cloud. How do you recommend the reader – or anyone else stuck in this situation – proceed? How would you manage being stuck between two rocks and a hard place?
Posted at Wednesday 31st March 2010 1:26 pm