ESF: Controls: Secure Configurations

Now that we’ve established a process to make sure our software is sparkly new and updated, let’s focus on the configurations of the endpoint devices that connect to our networks. Silly configurations present another path of least resistance for hackers to compromise your devices. For instance, there is no reason to run an FTP server on an endpoint device, and your standard configuration should factor that in.

Define Standard Builds

Initially you need to define a standard build, or more likely a few standard builds: typically one for desktops without sensitive data, one for desktops with sensitive data, one for mobile employees, and maybe one for kiosks. There probably isn’t a lot of value in going broader than those 4 profiles, but that will depend on your environment.

A good place to start is one of the accepted configuration benchmarks available in the public domain. Check out the Center for Internet Security, which produces configuration benchmarks for pretty much every operating system and many of the major applications. To see your tax dollars at work (if you live in the US, anyway), also consult NIST, especially if you are in the government. Its SCAP configuration guides provide a similar enumeration of specific settings to lock down your machines. To be clear, we need to balance security with usability, and some of the configurations suggested in the benchmarks clearly impact usability. So it’s about figuring out what will work in your environment, documenting those configurations, getting organizational buy-in, and then implementing.

It also makes sense to put together a list of authorized software as part of the standard builds. You can have this authorized software installed as part of the endpoint build process, but it also provides an opportunity to revisit policies on applications like iTunes, QuickTime, Skype, and others which may not yield a lot of business value and have a history of vulnerabilities. We’re not saying these applications should not be allowed – you’ve got to figure that out in the context of your organization – but you should take the opportunity to ask the questions.

Anti-Exploitation

As you define your standard builds, at least on Windows, you should turn on anti-exploitation technologies, which make it much harder to gain control of an endpoint through a known vulnerability. I’m referring to DEP (data execution prevention) and ASLR (address space layout randomization), though Apple is also implementing similar capabilities in its software. To be clear, anti-exploitation technology is not a panacea for protection – as the winners of Pwn2Own at CanSecWest show us every year, especially with applications that don’t support it (d’oh!). But the technologies do make it harder to exploit vulnerabilities in compatible software.

Other Considerations

  • Running as a standard user – We’ve written a bit on the possibilities of devices running in standard user mode (as opposed to administrator mode), and you should consider this option when designing secure configurations, especially to help enforce authorized software policies.
  • VPN to Corporate – Given the reality that mobile users will do something silly and put your corporate data at risk, one technique to protect them is to run all their Internet traffic through the VPN to your site. Yes, it may add a bit of latency, but at least the traffic will run through the web gateway, so you can both enforce policy and audit what the user is doing. As part of your standard build, you can enforce this network setting.
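
To make the “document those configurations” step concrete, here is a minimal sketch of what a machine-readable standard build profile might look like. The profile names, setting keys, and software lists are hypothetical placeholders for illustration, not CIS or NIST benchmark content.

```python
# A minimal sketch of standard builds captured as data rather than tribal
# knowledge. Profile names, setting keys, and software lists are hypothetical.
from dataclasses import dataclass

@dataclass
class BuildProfile:
    name: str
    authorized_software: set      # apps allowed on this build
    banned_services: set          # e.g., no FTP server on endpoints
    required_settings: dict       # anti-exploitation and other toggles
    route_traffic_through_vpn: bool = False

PROFILES = {
    "desktop_standard": BuildProfile(
        name="desktop_standard",
        authorized_software={"office_suite", "browser", "pdf_reader"},
        banned_services={"ftpd", "telnetd"},
        required_settings={"dep_enabled": True, "aslr_enabled": True},
    ),
    "mobile_employee": BuildProfile(
        name="mobile_employee",
        authorized_software={"office_suite", "browser", "vpn_client"},
        banned_services={"ftpd", "telnetd"},
        required_settings={"dep_enabled": True, "aslr_enabled": True,
                           "disk_encryption": True},
        route_traffic_through_vpn=True,   # send all Internet traffic home
    ),
}

if __name__ == "__main__":
    for profile in PROFILES.values():
        print(f"{profile.name}: requires {profile.required_settings}, "
              f"bans {sorted(profile.banned_services)}")
```

Keeping the profiles in one place like this makes the assessment and automation steps discussed next much easier to script.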
Implementing Secure Configurations

Once you have the set of secure configurations for your device profiles, how do you start implementing them? First make sure everyone buys into the decisions and understands the ramifications of going down this path, especially if you plan to stop users from installing certain software or block other device usage patterns. Constantly asking for permission can be dangerously annoying, so choosing the right threshold for confirmations is a critical aspect of designing a policy. If the end users feel they need to go around the security team and their policies to get the job done, everyone loses.

Once the configurations are locked and loaded, you need to figure out how much work is required for implementation. Next you assess the existing endpoints against the configurations (a simple version of that comparison is sketched after the series links at the end of this post). Lots of technologies can do this, ranging from Windows management tools, to vulnerability scanners, to third party configuration management offerings. The scale and complexity of your environment should drive the selection of the tool.

Then plan to bring the non-compliant devices into the fold. Yes, you could just flip the switch and make the changes, but since many of the configuration settings will impact user experience, it makes sense to do a bit of proactive communication with the user community. Of course some folks will be unhappy, but that’s life. More importantly, this should help cut down help desk mayhem when some things (like running that web business from corporate equipment) stop working.

Discussion of actually making the changes brings us to automation. For organizations with more than a couple dozen machines, a pretty significant ROI is available from investing in some type of configuration management. Again, it doesn’t have to be the Escalade of products, and you can even look at things like Group Policy Objects in Windows. The point is that making manual changes on devices is idiotic, so apply the level of automation that makes sense in your environment.

Finally, we also want to institutionalize the endpoint configurations, and that means getting devices built with the secure configuration from the start. Since you probably have an operations team that builds the machines, they need to get the image and actually use it. But since you’ve gotten buy-in at all steps of this process, that shouldn’t be a big deal, right?

Next up, we’ll discuss the anti-malware space and what makes sense on our endpoints.

Other posts in the Endpoint Security Fundamentals Series:

  • Introduction
  • Prioritize: Finding the Leaky Buckets
  • Triage: Fixing the Leaky Buckets
  • Controls: Update and Patch
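
Here is the assessment sketch referenced above: a minimal, hypothetical example of comparing inventory data (however you collect it – management tools, scanners, or a configuration management product) against the documented baseline. The baseline keys and inventory records are made up for illustration.

```python
# A minimal sketch of assessing endpoints against a documented baseline.
# The setting names and inventory records are hypothetical; in practice this
# data would come from your management tools or scanner exports.

BASELINE = {"dep_enabled": True, "aslr_enabled": True, "ftpd_running": False}

inventory = [
    {"host": "wks-001", "dep_enabled": True,  "aslr_enabled": True,  "ftpd_running": False},
    {"host": "wks-002", "dep_enabled": False, "aslr_enabled": True,  "ftpd_running": True},
]

def assess(device, baseline):
    """Return the settings on this device that deviate from the baseline."""
    return [key for key, wanted in baseline.items() if device.get(key) != wanted]

non_compliant = {}
for device in inventory:
    gaps = assess(device, BASELINE)
    if gaps:
        non_compliant[device["host"]] = gaps

# Feed this list into your communication plan and configuration management
# tooling (e.g., Group Policy) rather than touching devices by hand.
for host, gaps in sorted(non_compliant.items()):
    print(f"{host} is out of policy: {', '.join(gaps)}")
```

The point of the sketch is the workflow, not the tool: identify the gaps first, communicate, then let your configuration management system make the changes.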


Database Security Fundamentals: Auditing Transactions

I am now switching gears to talk about some of the ‘detective’ measures that help with forensic analysis of transactions and activity. The preventative measures discussed previously are great for protecting your system from known attacks, but they don’t help detect fraudulent misuse or failure of business processes. For that we need to capture the events that make up the business processes and analyze them. Our basic tool is database auditing, which provides plenty of useful information.

Before I get too far into this discussion, it’s worth noting that the term ‘transactions’ is an arbitrary choice on my part. I use it to highlight both that audit data can group statements into logical sequences corresponding to particular business functions, and that audit trail analysis is most useful when looking for insider misuse – rather than external attacks. Audit trails are much more useful for detecting what was changed, rather than what was accessed, and for forensic examination of database ‘state’. There are easier and more efficient ways of cataloging SELECT statements and simple events like failed logins.

Usually at this point I provide a business justification for auditing of transactions or specific events, and some use cases as examples of how it helps. I will skip that this time, as you already know that auditing is built into every database, and captures database queries, transactions, and important system changes. You probably already use audit logs to see which actions are most common so you can set up indices and tune your most common queries. You may even use auditing to detect suspect activity, to perform forensic audits, or to address a specific compliance mandate. At the very least you need some form of database auditing enabled on production databases to answer the question “What the &!$^% happened” after a database crash. Regardless of your reasons, auditing is essential for security and compliance.

In this post I will focus on capturing transactions and alterations to the database. What type of analysis you do, how long you keep the data, and what reports you create are secondary. I am focusing on gathering the audit trail rather than what to do with it next. What’s critical here is understanding what data you need, and how best to capture it. All databases have some type of audit function. The ‘gotcha’ is that use of database auditing needs careful consideration to avoid storage, performance, and management nightmares. Depending on the vendor and how the feature is deployed, you can gather detailed transactional information, filter unwanted transactions to get a succinct view of database activity, and do so with only modest performance impact. Yes, I said modest performance impact. This remains a hot-button issue for most DBAs and is easy to mess up, so planning and basic tests form the bulk of this phase.

  • Benchmark: Find a test system, gather a bunch of queries that reflect user activity on your system, and run some benchmarks. Turn on the auditing or tracing facility and rerun the benchmark. Wince and swear repeatedly at the performance degradation you just witnessed. Aren’t you glad you did this on a test system? Now you have a baseline for best and worst case performance. (A minimal benchmark harness is sketched at the end of this post.)
  • Tune: Select your audit options. Oracle, SQL Server, DB2, and Sybase have multiple options for generating audit trails. Don’t use Oracle’s fine-grained auditing when normal auditing will suffice. Don’t use event monitors on DB2 if you have many different types of events to collect. Which auditing option you choose will dramatically affect performance and data volumes. Examine the audit capture options, and select only the event types you need. If you only care about user events, don’t bother collecting all the object events. If you only care about failed logins and changes to system privileges, filter out metadata and data changes. Examine buffer space, tablespace, block utilization, and other resource tuning options. Audit data are static in size, so their data blocks can be set to ‘write-only’, saving space. For audit trails that store data within database tables, you can pre-allocate table space and blocks to reduce latency from space allocation. Rerun the benchmarks and see what helps performance. Generally these steps provide significant performance gains. No more cursing the database vendor should be needed.
  • Filter: Get rid of specific actions you don’t need. For example, batch updates may fall outside your area of interest, yet comprise a significant fraction of the audit log, and can therefore be parsed out. Or you may want to audit all transactions while the database is running, but not need events from database startup. In some scenarios the database can filter these events, which improves performance. If the database does not provide this type of row-level filtering, you can add a WHERE clause to the extraction query or use a script to whittle down the extracted data.
  • Implement: Take what you learned and apply it in the production environment. Verify that the audit trail is being collected.
  • Ad-hoc Analysis: Review the logs periodically, focusing on failures of system functions and logins that indicate system probing. Any policy or report that you generate may miss unusual events, so I recommend occasional ad hoc analysis to detect anything hinky.
  • Record: Document the audit settings as well as the test results, as you will need them in the future to understand the impact of increased auditing levels. Communicate the use of auditing to users, and warn them that there will be a performance hit due to regulatory and security requirements. Create a log retention policy. This is necessary – even if the policy simply states that you will collect audit trails and delete them at the end of every week, write it down. Many compliance requirements let you define retention however you choose, so be proactive. You can always change it in the future.

More advanced considerations:

  • Automate: I recommend that you automate as much of the process as you can, once you have done the initial analysis and configuration. Collection of the audit
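
As referenced in the Benchmark step above, here is a minimal sketch of a benchmark harness. It uses an in-memory SQLite database purely as a stand-in so the example runs anywhere; for real testing you would point it at your test Oracle, SQL Server, DB2, or Sybase instance with the appropriate driver, and toggle the vendor’s audit options between runs.

```python
# A minimal sketch of the Benchmark step: time a representative workload,
# then enable auditing on the test system and time it again. SQLite is used
# only as a stand-in so the sketch is runnable; swap in your real driver
# (cx_Oracle, pyodbc, ibm_db, etc.) and your captured workload.
import sqlite3
import statistics
import time

def run_workload(conn, statements):
    """Execute the captured workload once and return elapsed seconds."""
    cur = conn.cursor()
    start = time.perf_counter()
    for stmt in statements:
        cur.execute(stmt)
    conn.commit()
    return time.perf_counter() - start

def benchmark(make_conn, statements, runs=5):
    """Average several runs of the workload to smooth out noise."""
    conn = make_conn()
    return statistics.mean(run_workload(conn, statements) for _ in range(runs))

def test_db():
    conn = sqlite3.connect(":memory:")  # replace with a connection to your test database
    conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    return conn

workload = ["INSERT INTO t VALUES (1, 'a')", "SELECT val FROM t WHERE id = 1"]

before = benchmark(test_db, workload)
# Enable the audit or trace options you are evaluating on the test system
# (outside this script), then measure again.
after = benchmark(test_db, workload)
print(f"without auditing: {before:.4f}s  with auditing: {after:.4f}s")
```

Record both numbers alongside the audit settings you tested, per the Record step above, so you can judge the cost of any future increase in auditing levels.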


Friday Summary: April 9, 2010

So I’m turning 39 in a couple of weeks. Not that 39 is one of those milestone birthdays, but it leaves me with only 365 days until I can not only no longer trust myself (as happened when I turned 30), but I supposedly can’t even trust my bladder anymore. I’m not really into birthdays with ‘0’ at the end having some great significance, but I do think they can be a good excuse to reflect on where you are in life. Personally I have an insanely good life – I run my own company, have a great family, enjoy my (very flexible) job, and have gotten to do some pretty cool things over the years. Things like “fly a jet,” “drive over 100 MPH with lights and sirens on,” “visit 6 of 7 continents,” “compete in a national martial arts tournament” (and lose to a 16 year old who hadn’t discovered beer yet), “rescue people from mountains,” “get choppered into a disaster,” “ski patrol at a major resort,” “meet Jimmy Buffett,” and even “write a screenplay” (not a good screenplay, but still).

But there are a few things I haven’t finished yet, and that last year before 40 seems like a good chance to knock one or two off. Here are my current top 5, and I’m hoping to finish at least one:

  • Get my pilot’s license.
  • Visit Antarctica (the only continent I haven’t been on).
  • Sail the Caribbean Captain Ron style.
  • Run a marathon.
  • Finish an Olympic-distance triathlon (I’ve done sprint distance already).

I’m open to suggestions, and while the marathon/triathlon are the cheapest, I’d kind of like to get that pilot’s license. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Living with Windows: security. Rich wrote this up for Macworld.
  • Effort Will Measure Costs Of Monitoring, Managing Network Security: Open-source Network Security Operations Quant goes live.

Favorite Securosis Posts

  • Rich: Anti-Malware Effectiveness: The Truth Is out There. Lies, damn lies, and testing.
  • Adrian Lane: Database Virtualization and Abstraction.
  • Mike Rothman: FireStarter: Nasty or Not, Jericho is Irrelevant. I know this is from last week, but the comments on the post this week were pretty intense (and good).
  • David Mortman: Who to Recruit for Security, How to Get Started, and Career Tracks.

Other Securosis Posts

  • Database Security Fundamentals: Auditing Transactions.
  • ESF: Controls: Secure Configurations.
  • Incite 4/7/2010: Everybody Loves the Underdog.
  • ESF: Controls: Update and Patch.
  • ESF: Triage: Fixing the Leaky Buckets.
  • ESF: Prioritize: Finding the Leaky Buckets.

Favorite Outside Posts

  • Rich: The Spider That Ate My Site. Okay, I hate to admit this but I did something similar to our back end management interface. When admin buttons like “delete” are merely links, a simple spider can cause some serious damage.
  • Adrian Lane: How did Wikileaks decrypt the video? Robert Graham’s post, for indirectly calling BS on this.
  • David Mortman: Bring your Cloud to Work in Iraq.
  • Mike Rothman: Preview or vaporware? Announcements of technology that doesn’t exist annoy the crap out of me. These are the PR guidelines for how to do it (and not annoy guys like me) from Schwartz.
  • Chris Pepper: Researchers Trace Data Theft to Intruders in China.

Project Quant Posts

  • Project Quant: Database Security – Change Management.
  • Project Quant: Database Security – Patch.

Top News and Posts

  • Anton: CISecurity Metrics Move Ahead.
  • Are PDFs Worm-able?
  • IWM and Shadow Server Project Report: Shadows in the Clouds.
  • Researcher Releases ‘Qubes’ Hardened OS.
  • Researcher Details New Class Of Cross-Site Scripting Attack.
  • Meta-Information Cross Site Scripting. Still looks like persistent XSS to me, but still an interesting variant.
  • Removing the RSA Security 1024 V3 Root. An orphaned root certificate. Now I don’t feel so bad about forgetting to renew domain registration.
  • Patch Tuesday Info.
  • Customers Sue Countrywide Financial Over Theft And Sale Of Personal Data: Class-action suit seeks $20 million as well as answers about company’s involvement. Very interesting, as it appears the claimants believe the data theft was organized and possibly condoned.
  • Stolen IDs used to file fake tax returns.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Paul Simmonds, in response to FireStarter: Nasty or Not, Jericho is Irrelevant.

Having just read the RFI response from a major software vendor, who’s marketing BS manages to side-step all the questions designed to get to the bottom of “is this secure”, then the answer is YES, we do need the nasty questions. More importantly they may be obvious but we as purchasers are not asking them, and the vendors are not volunteering the information (mainly because what they supply is inherently insecure). And then we wonder why we are in the state we are in??


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments, just like everyone else’s. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.