
FireStarter: Admin access, buh bye

It seems I’ve been preoccupied lately with telling all of you about the things you shouldn’t do anymore. Between blowing away firewall rules and killing security technologies, I guess I’ve become that guy. Now get off my lawn! But why stop now – I’m on a roll. This week, let’s take on another common practice that ends up being an extraordinarily bad idea – running user devices with administrator access. Let’s slay that sacred cow.

Once again, most of you security folks with any kind of kung fu are already here. You’d certainly not let an unsophisticated user run with admin rights, right? They might do something like install software, or even click on a bad link and get pwned. Yeah, it’s been known to happen. Thus the time has come for the rest of the great unwashed to get with the program. Your standard build (you have a standard build, right? If not, we’ll talk about that next week in the Endpoint Fundamentals Series) should dictate that user devices run as standard users. Take that and put it in your peace pipe.

Impact

Who cares? Why are we worried about whether a user runs as a standard or admin user? To state the obvious: admin rights on endpoint devices represent the keys to the kingdom. These rights allow users to install software, mess with the registry under Windows, change all sorts of config files, and generally do whatever they want on the device. Which wouldn’t be that bad if we only had to worry about legitimate users.

Today’s malware doesn’t require user interaction to do its damage, especially on devices running with local admin access. All the user needs to do is click the wrong link and it’s game over – malware installed and defenses rendered useless, all without the user’s knowledge. Except when they are offered a great deal on “security software,” get a zillion pop-ups, or notice their machine grinding to a halt.

To be clear, users can still get infected when running in standard mode, but the attack typically ends at logout. Without access to the registry (on Windows) and other key configuration files, malware generally expires when the user’s session ends.

Downside

As with most of the tips I’ve provided over the past few weeks, user grumpiness may result once they start running in standard mode. They won’t be able to install new software, and some of their existing applications may break. So use this technique with care. That means you should actually test whether every application runs in standard user mode before you pull the trigger. Business-critical applications probably need to be left alone, as offensive as that is. Most applications should run fine, making this decision a non-issue, but use your judgement on allowing certain users (especially folks who can vote you off the island) to keep admin rights to run a specific application.

That said, I recommend you stamp your feet hard and fast if an application doesn’t play nicely in standard mode. Yell at your vendors or internal developers to jump back into the time machine and catch up with the present day. It’s ridiculous that in 2010 an end-user facing application would require admin rights. You also have the “no soup for you” card, which is basically the tough-crap response when someone gripes about needing admin rights. Yes, you need a lot of internal mojo to pull that off, but that’s what we are all trying to build, right? Being able to do the right security thing and make it stick is the hallmark of an effective CISO.

Discuss

So there you have it. This is a FireStarter, so fire away, sports fans. Why won’t this work in your environment? What creative, I mean ‘legitimate’, excuses are you going to come up with now to avoid doing proper security hygiene on the endpoints? This isn’t that hard, and it’s time. So get it done, or tell me why you can’t…
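If you want to see how far you have to go, start with a quick inventory of who is still running with admin rights. Here is a minimal sketch in Python for Windows endpoints, using only the standard library; the shell32 IsUserAnAdmin check and the parsing of net localgroup output are illustrative, not a replacement for whatever your endpoint management tooling already reports.

    # Minimal sketch: report whether the current Windows session has admin
    # rights and who sits in the local Administrators group.
    # Windows only, standard library only; adapt for real inventory work.
    import ctypes
    import getpass
    import subprocess

    def session_is_admin() -> bool:
        """True if the current process token carries administrator rights."""
        try:
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except AttributeError:
            return False  # ctypes.windll only exists on Windows

    def local_admins() -> list[str]:
        """Parse members of the local Administrators group from 'net localgroup'."""
        out = subprocess.run(["net", "localgroup", "Administrators"],
                             capture_output=True, text=True, check=True).stdout
        lines = out.splitlines()
        # Members appear between the dashed separator and the status line.
        start = next(i for i, line in enumerate(lines) if line.startswith("---")) + 1
        return [line.strip() for line in lines[start:]
                if line.strip() and not line.startswith("The command")]

    if __name__ == "__main__":
        print(f"{getpass.getuser()} running as admin: {session_is_admin()}")
        print("Local Administrators:", ", ".join(local_admins()))

In a managed environment you would pull this from Group Policy or your endpoint management console rather than a script, but the point stands: if ordinary users show up as admins, the standard build isn’t doing its job.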


Counterpoint: Admin Rights Don’t Matter the Way You Think They Do

Update – Based on feedback, I failed to distinguish that I’m referring to normal users running as admin. Sysadmins and domain admins definitely shouldn’t be running with their admin privileges except when they need them. As you can read in the comments, that’s a huge risk.

When I was reviewing Mike’s FireStarter on yanking admin rights from users, it got me thinking about whether admin rights really matter at all. Yes, I realize this is a staple of security dogma, but I think the value of admin rights is completely overblown, for two reasons:

  • There are plenty of bad things an attacker can do in userland without needing admin rights. You can still install malware and access everything the user can.
  • Lack of admin privileges is little more than a speed bump (if even that) for many kinds of memory corruption attacks. Certain buffer overflows and other attacks that directly manipulate memory can get around rights restrictions and run as root, admin, or worse. For example, if you exploit a kernel flaw with a buffer overflow (including flaws in device drivers), you are running in Ring 0 and fully trusted, no matter what privilege level the user was running as.

If you read through the vulnerability updates on various platforms (Mac, PC, whatever), there are always a bunch of attacks that still work without admin rights. I’m also completely ignoring privilege escalation attacks, but we all know they tend to get patched at a slower pace than remote exploitation vulnerabilities.

This isn’t to say that removal of admin rights is completely useless – it’s very useful for keeping users from mucking up your desktop images – but from a defensive standpoint, I don’t think restricting user rights is nearly as effective as is often claimed.

My advice? Do not rely on standard user mode as a security defense. It’s useful for locking down users, but has only limited effectiveness for stopping attacks. When you evaluate pulling admin rights, don’t think it will suddenly eliminate the need for other standard endpoint security controls.
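To make the first point concrete: per-user persistence on Windows doesn’t require admin rights at all. Here is a minimal sketch, assuming a stock Windows box and only the Python standard library, that checks whether the current (standard) user can write to the per-user Run key – a classic autostart location malware uses without ever touching the machine-wide areas that admin rights protect.

    # Minimal sketch: show that a standard user can modify a per-user
    # autostart location (the HKCU Run key) without admin rights.
    # Windows only, standard library only. Illustrative, not a hardening tool.
    import winreg

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    def hkcu_run_is_writable() -> bool:
        """Return True if the current user can set per-user autostart values."""
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                                winreg.KEY_SET_VALUE) as key:
                # Write and immediately remove a harmless marker value.
                winreg.SetValueEx(key, "SecurosisDemo", 0, winreg.REG_SZ, "harmless")
                winreg.DeleteValue(key, "SecurosisDemo")
            return True
        except OSError:
            return False

    if __name__ == "__main__":
        print("Per-user autostart writable without admin rights:",
              hkcu_run_is_writable())

Anything the user can run can do the same thing, which is why pulling admin rights blunts machine-wide compromise but does little for the data and credentials the user can already reach.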


Litchfield Discloses Oracle 0-Day at Black Hat

During Black Hat last week, David Litchfield disclosed that he had discovered a 0-day in Oracle 11g which allowed him to acquire administrative level credentials. Until today, I was unaware that the attack details were made available as well, meaning anyone can bounce the exploit off your database server to see if it is vulnerable. From the NetworkWorld article, the vulnerability is…

    … the way Java has been implemented in Oracle 11g Release 2, there’s an overly permissive default grant that makes it possible for a low privileged user to grant himself arbitrary permissions. In a demo of Oracle 11g Enterprise Edition, he showed how to execute commands that led to the user granting himself system privileges to have “complete control over the database.” Litchfield also showed how it’s possible to bypass Oracle Label Security used for managing mandatory access to information at different security levels.

As this issue allows for arbitrary escalation of privileges in the database, it’s pretty much a complete compromise. At least Oracle 11g R2 is affected, and I have heard but not confirmed that 10g R2 is as well. This is serious and you will need to take action ASAP, especially for installations that support web applications. And if your web applications are leveraging Oracle’s Java implementation, you may want to take the servers offline until you have implemented the workaround.

From what I understand, this is an issue with the Public user having access to the Java services packaged with Oracle. I am guessing that the appropriate workaround is to revoke the Public user permissions granted during the installation process, or to lock that account out altogether. There is no patch available at this time, but that should serve as a temporary workaround. Actually, it should be a permanent workaround – after all, you didn’t really leave the ‘Public’ user account enabled on your production server, did you?

I have been saying for several years that there is no such thing as public access to your database. Ever! You may have public content, but the public user should not just have its password changed – it should be fully locked out. Use a custom account with specific grant statements. Public execute permission to anything is ill advised, but in some cases can be done safely. Running default ‘Public’ permissions is flat-out irresponsible. You will want to review all other user accounts that have access to Java and ensure that no other accounts have public access – or access provided by default credentials – until a patch is available.

Update

A couple of database assessment vendors were kind enough to contact me with more details on the hack, confirming what I had heard. Application Security Inc. has published more specific information on this attack and on workarounds. They are recommending removing the execute permissions as a satisfactory workaround. That is the most up-to-date information I can find.
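If you want to script the interim workaround, it amounts to finding the Java-related packages that PUBLIC can execute and revoking those grants. Below is a rough sketch using the python-oracledb driver; the name filter is a heuristic placeholder, so pull the actual object names from the Application Security Inc. advisory (or your own review of DBA_TAB_PRIVS), and test on a non-production instance first, since revoking from PUBLIC can break applications that rely on those default grants.

    # Rough sketch: list (and optionally revoke) PUBLIC execute grants on
    # Java-related packages. Assumes the python-oracledb driver and DBA-level
    # credentials; the LIKE filters are placeholders - use the object names
    # from the vendor advisory rather than trusting a name match.
    import oracledb

    DSN = "dbhost:1521/ORCL"  # placeholder connection details

    def revoke_public_java_execute(user: str, password: str,
                                   dry_run: bool = True) -> None:
        with oracledb.connect(user=user, password=password, dsn=DSN) as conn:
            cur = conn.cursor()
            cur.execute("""
                SELECT owner, table_name
                  FROM dba_tab_privs
                 WHERE grantee = 'PUBLIC'
                   AND privilege = 'EXECUTE'
                   AND (table_name LIKE '%JAVA%' OR table_name LIKE '%JVM%')
            """)
            for owner, obj in cur.fetchall():
                stmt = f'REVOKE EXECUTE ON "{owner}"."{obj}" FROM PUBLIC'
                if dry_run:
                    print("would run:", stmt)
                else:
                    print("running:", stmt)
                    cur.execute(stmt)  # REVOKE is DDL; it commits implicitly

    # revoke_public_java_execute("dba_account", "password", dry_run=True)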


Rock Beats Scissors, and People Beat Process

My mentors in engineering management used to define their job as managing people, process, and technology. Those three realms, and how they interact, are a handy way to conceptualize organizational management responsibilities. We use process to frame how we want people to behave – trying to promote productivity, foster inter-group cooperation, and minimize mistakes. The people are the important part of the equation, and the process is there to help make them better as a group. How you set up process directly impacts productivity, arranges priorities, and creates or reduces friction. Subtle adjustments to process are needed to account for individuals, group dynamics, and project specifics.

I got to thinking about this when reading Microsoft’s Simple Implementation of SDL. I commented on some of the things I liked about the process, specifically the beginning steps (paraphrased):

  • Educate your team on the ground rules.
  • Figure out what you are trying to secure.
  • Commit to gate insecure code (a minimal gating sketch appears at the end of this post).
  • Figure out what’s busted.

Sounds simple, and conceptually it is, but in practice this is really hard. The technical analysis of the code is difficult, but implementing the process is a serious challenge. Getting people to change their behavior is hard enough, but with diverging company goals in the mix, it’s nearly impossible. Adding the SDL elements to your development cycle is going to cause some growing pains and probably take years. Even if you agree with all the elements, there are several practical considerations that must be addressed before you adopt the SDL – so you need more than the development team to embrace it.

The Definition of Insanity

I heard Marcus Ranum give a keynote last year at Source Boston on the Anatomy of the Security Disaster, and one of his basic points was that merely identifying a bad idea rarely adjusts behavior, and when it does, it’s generally only because failure is imminent. When initial failure conditions are noticed, as much effort is spent on finger-pointing and “Slaughter of the Innocents” as on learning and adjusting from mistakes. With fundamental process re-engineering, even with a simplified SDL, progress is impossible without wider buy-in and a coordinated effort to adapt the SDL to local circumstances. To hammer this point home, let’s steal a page from Mike Rothman’s Pragmatic series and imagine a quick conversation:

CEO to shareholders: “Subsequent to the web site breach we are reacting with any and all efforts to ensure the safety of our customers and continue trouble-free 24×7 operation. We are committed to security… we have hired one of the great young minds in computer security: a talented individual who knows all about web applications and exploits. He’s really good and makes a fine addition to the team! We hired him shortly after he hacked our site.”

Project Manager to programmers: “OK guys, let’s all pull together. The clean-up required after the web site hack has set us back a bit, but I know that if we all focus on the job at hand we can get back on track. The site’s back up and most of you have access to source code control again, and our new security expert is on board! We freeze code two weeks from now, so let’s focus on the goal and…”

Did you see that? The team was screwed before they started. Management’s covered, as someone is now responsible for security. And project management and engineering leadership must get back on track, so they begin doing exactly what they did before, but will push for project completion harder than ever. Process adjustments? Education? Testing? Nope. The existing software process is an unending cycle. That unsecured merry-go-round is not going to stop so you can fix it before moving on. As we like to say in software development: we are swapping engines on a running car. Success is optional (and miraculous, when it happens).

Break the Process to Fix It

The Simplified SDL is great, provided you can actually follow the steps. While I have not employed this particular secure development process yet, I have created similar ones in the past. As a practical matter, to make changes of this scope, I have always had to do one of three things:

  • Recreate the code from scratch under the new process. Old process habits die hard, and the code evaluation sometimes makes it clear that a retrofit would require more work than a complete rewrite. This makes other executives very nervous, but in my experience it has been the most efficient path. You may not have this option.
  • Branch off the code, with the sub-branch in maintenance while the primary branch lives on under the new process. I halted new feature development until the team had a chance to complete the first review and education steps. Much more work and more programming time (meaning more programmers committed), but better continuity of product availability, and less executive angst.
  • Move responsibility for the code to an entirely new team, trained on security and adhering to the new process. There is a learning curve for engineers to become familiar with the old code, but weaknesses found during review tend to be glaring, and no one’s ego gets hurt when you rip the expletive out of it. Also, the new engineers have no investment in the old code, so they can be more objective about it.

If you don’t break out of the old process and behaviors, you will generally end up with a mixed bag of … stuff.

Skills Section

As in the first post, I assume the goal of the simplified version of the process is to make this effort more accessible and understandable for programmers. Unfortunately, it’s much tougher than that. As an example, when you interview engineering candidates and discuss their roles, their skill level is immediately obvious. The more seasoned and advanced engineers and managers talk about big-picture design and architecture, they talk about tools and process, and they discuss the tradeoffs of their choices. Most newbies are not even aware
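Since “commit to gate insecure code” is the step teams most often hand-wave away, here is what a gate can look like in its most minimal form: a script the build server runs that fails the build when a static analysis report contains findings at or above an agreed severity. This is a sketch under assumptions – the findings.json filename, its structure, and the severity field are invented for illustration, so wire it to whatever scanner and thresholds your process actually defines.

    # Minimal sketch of a CI security gate: fail the build when a static
    # analysis report contains findings at or above an agreed severity.
    # The report format (findings.json, a list of objects with "severity",
    # "rule", and "file" fields) is hypothetical - adapt to your scanner.
    import json
    import sys

    SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}
    BLOCKING_LEVEL = "high"  # the threshold agreed to in the process docs

    def gate(report_path: str) -> int:
        with open(report_path) as fh:
            findings = json.load(fh)
        blockers = [f for f in findings
                    if SEVERITY_ORDER.get(f.get("severity", "low"), 1)
                    >= SEVERITY_ORDER[BLOCKING_LEVEL]]
        for f in blockers:
            print(f"BLOCKED: {f.get('rule', 'unknown rule')} in {f.get('file', '?')}")
        return 1 if blockers else 0  # a non-zero exit code fails the build

    if __name__ == "__main__":
        sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))

The point isn’t the script – it’s that the gate is automated and non-negotiable, so the “push for completion harder than ever” reflex described above can’t quietly waive it.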


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.