Friday Summary: February 12, 2010

Chris was kind enough to forward me Game Development in a Post-Agile World this week. What I know about game development could fit on the head of a pin. Still, one of the software companies I worked for was incubated inside a much larger video game development company, and I was always very interested in watching the game team dynamics and how they differed from the teams I ran. The game developers did not have a lot of overlapping skills, and the teams were – whether they knew it or not – built around the classical “surgical team” structure. There was always a single, clear leader of the team, and that person was usually both technically and creatively superior. The teams were small, and if they had a formalized process, I was unaware of it. It appeared that they figured out their task, built the tools they needed to support the game, and then built the game. There was consistency across the teams, and they appeared to be very successful in their execution.

Regardless, back to the post. When I saw the title I thought this would be a really cool examination of Agile in a game development environment. After the first 15 pages or so, I realized there is not a damned thing about video game development in the post. What is there, though, is a really well-done examination of the downsides of Agile development. I wrote what I thought was a pretty fair post on the subject this week, but this post is better! While I was focused on the difficulties of changing an entrenched process, and their impact on developing secure code, this one takes a broader perspective and looks at different Agile methodologies along a continuum of how people-oriented the variations are. The author then looks at how moving along the continuum alters creativity, productivity, and stakeholder involvement. If you are into software development processes, you’re probably a little odd, but you will very much enjoy this post!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

It’s the week of Rich Mogull, Media Giant:

  • First, Rich is on the cover of the March issue of Information Security Magazine.
  • Then, Rich managed to snag the cover story in this month’s Macworld magazine. It’s all about security issues for consumers, and is only mildly Mac-specific. (How cool is it to be at Macworld and have the cover of the magazine at the same time? Congrats Rich! –Adrian) (Are folks there carrying around ‘your’ issue? Has anyone asked you to sign one, or a body part? –Chris)
  • Rich’s cover handily eclipsed the print appearance of Chris’ Google Voice piece (page 56).
  • And, lest he be accused of being an old media lackey, Rich wrote a TidBITS article on iPads in the enterprise.
  • Adrian’s Dark Reading post on Amazon SimpleDB.

Favorite Securosis Posts

  • David Mortman: Misconceptions of a DMZ.
  • Mike Rothman: Adrian’s post on People over Process. This may be the best piece written on the blog this year. Just awesome.
  • Rich: Adrian’s post on SDL and Process.
  • Adrian Lane: Mike’s post on The Death of Product Reviews.
  • David Meier: This week’s Incite.

Other Securosis Posts

  • Database Security Fundamentals: Database Access Methods
  • Choose Your Own Whitepaper Adventure (and Upcoming Papers)
  • Network Security Fundamentals: Correlation
  • Counterpoint: Correlation Is Useful, but Threat Assessment Is Fundamental
  • Litchfield Discloses Oracle 0-Day at Black Hat
  • FireStarter: Admin access, buh bye
  • Counterpoint: Admin Rights Don’t Matter the Way You Think They Do
  • RSVP for the Securosis and Threatpost Disaster Recovery Breakfast
  • Kill. IE6. Now.
Favorite Outside Posts

  • Rich: Gunnar nails the truth on our relationship with China. Before you start touting terms like ‘cyberwar’, you need to understand the economics of the situation. They pwned us, and it has nothing to do with technology.
  • David Mortman: Answering APT Misconceptions. An unmuddying of some of the APT waters. Hallelujah!
  • Mike Rothman: Making Progress Matters Most. Bejtlich gets at the heart of keeping team members engaged and productive. Get the fsck out of their way.
  • Adrian Lane: Martin’s post on PCI Compliance and Public Clouds. Despite the site advertisements, it was my favorite this week.

Project Quant Posts

  • Project Quant: Database Security – Masking
  • Project Quant: Database Security – WAF

Top News and Posts

  • FEDs want cell phone tracking. And who wouldn’t?
  • Critical Adobe Update.
  • Dave Lewis at LiquidMatrix calls out an anonymous vendor for factually incorrect FUD. This is a short but important post. Just today I got an email from a vendor who wanted to tell me the “top 3 ways DLP fails”. Successful vendors market on the strength of their products, not the (sometimes fictional) weaknesses of others’.
  • Hackers steal $50k; bank says ‘tough’.
  • Microsoft calls for Congress to get involved with cloud computing security. Since they can’t even agree on the best way to run the country into the ground, I’m not expecting any government action.
  • Mudge hits DARPA. This could be exceptionally good, depending on where he starts dropping the cash.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to ds, in response to Rich’s Counterpoint: Admin Rights Don’t Matter the Way You Think They Do:

“I think that this post is dangerous. While many will understand the difference between removing admin rights from a desktop for the user and restricting/managing admin rights for sysadmins, the distinction isn’t explicitly stated, and some may take this to mean dealing with admin rights isn’t necessary as a blanket statement.”


Counterpoint: Correlation Is Useful, but Threat Assessment Is Fundamental

So it’s probably apparent that Mike and I have slightly different opinions on some security topics, such as Monitoring Everything (or not). But sometimes we have exactly the same viewpoint, for slightly different reasons. Correlation is one of the latter cases. I don’t like correlation. Actually, I am bitter that I have to do correlation at all. But we have to do it because log files suck. Please, I don’t want log management and SIEM vendors to get all huffy with that statement: it’s not your fault. Pretty much all data sources lack sufficient information for forensics, and are deficient in the tuning and filtering options that could make them better. Application developers did not have security in mind when they created the log data, and I have no idea what the inventors of the Event Log had in mind when they spawned that useless stream of information, but it’s just not enough.

I know this series is on network fundamentals, but I want to raise an example outside the network space to clearly illustrate the issue. With database auditing, the database audit trail is the most accurate reflection of database transactional history and database state. It records exactly what operations are performed. It is a useful centerpiece of auditing, but critical system events are missing from the audit trail, and it does not capture the raw SQL queries sent to the database. The audit trail is useful to an extent, but to enforce most policies for compliance, or to perform a meaningful forensic investigation, you must have additional sources of information. (There are a couple vendors out there who, at this very moment, are tempted to comment on how their platform solves this issue with their novel approach. Please, resist the temptation.) And keep in mind that relational database platforms do a better job of creating logs than most networks, platforms, or devices.

Log file forensics are a little like playing a giant game of 20 questions, and each record is the answer to a single question. You find something interesting in the firewall log, but you have to look elsewhere to get a better idea of what is going on. You look at an access control log file, and now it really looks like something funny is going on, but now you need to check the network activity files to try to estimate intent. But wait – the events don’t match up one for one, activity against the applications does not map one-to-one with the log file, and the time stamps are skewed. Now what? Content matching, attribute matching, signatures, or heuristics?

Which data sources you select depends on the threats you are trying to detect and, possibly, react to. The success of correlation is largely dependent on how well you size up threats and figure out which combination of log events is needed. And which data sources you choose. Oh, and then how well you develop the appropriate detection signatures. And then how well you maintain those policies as threats morph over time. All these steps take serious time and consideration.

So do you need correlation? Probably. Until you get something better. Regardless of the security tool or platform you use for threat detection, the threat assessment is critical to making it useful. Otherwise you are building a giant Cartesian monument to the gods of useless data.
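To make the 20-questions game concrete, here is a minimal sketch of the kind of time-window matching a correlation rule performs. The event fields, sample records, and the 30-second clock-skew allowance are all illustrative assumptions, not any vendor’s implementation.

```python
from datetime import datetime, timedelta

# Hypothetical, already-parsed events; real sources each need their own parser.
firewall_events = [
    {"time": datetime(2010, 2, 12, 3, 14, 5), "src": "10.1.1.7", "action": "allow", "port": 1433},
]
login_events = [
    {"time": datetime(2010, 2, 12, 3, 14, 41), "src": "10.1.1.7", "user": "sa", "result": "failure"},
]

SKEW = timedelta(seconds=30)  # assumed tolerance for clock skew between sources

def correlate(fw_events, auth_events, window=timedelta(minutes=2)):
    """Pair allowed inbound connections with failed logins from the same
    source address inside a time window, allowing for skewed timestamps."""
    hits = []
    for fw in fw_events:
        if fw["action"] != "allow":
            continue
        for auth in auth_events:
            if auth["src"] != fw["src"] or auth["result"] != "failure":
                continue
            if -SKEW <= auth["time"] - fw["time"] <= window + SKEW:
                hits.append((fw, auth))
    return hits

for fw, auth in correlate(firewall_events, login_events):
    print(f"possible brute force: {fw['src']} hit port {fw['port']}, failed login as {auth['user']}")
```

Note that even this toy rule encodes a threat assessment – failed logins shortly after an allowed connection are what we decided to care about. Pick the wrong assumption and the output is just more data to wade through.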


Litchfield Discloses Oracle 0-Day at Black Hat

During Black Hat last week, David Litchfield disclosed that he had discovered a 0-day in Oracle 11g which allowed him to acquire administrative-level credentials. Until today, I was unaware that the attack details had been made available as well, meaning anyone can bounce the exploit off your database server to see if it is vulnerable. From the NetworkWorld article, the vulnerability is in:

… the way Java has been implemented in Oracle 11g Release 2, there’s an overly permissive default grant that makes it possible for a low privileged user to grant himself arbitrary permissions. In a demo of Oracle 11g Enterprise Edition, he showed how to execute commands that led to the user granting himself system privileges to have “complete control over the database.” Litchfield also showed how it’s possible to bypass Oracle Label Security used for managing mandatory access to information at different security levels.

As this issue allows arbitrary escalation of privileges in the database, it’s pretty much a complete compromise. Oracle 11g R2 is affected at a minimum, and I have heard, but not confirmed, that 10g R2 is as well. This is serious and you will need to take action ASAP, especially for installations that support web applications. And if your web applications leverage Oracle’s Java implementation, you may want to take the servers offline until you have implemented the workaround.

From what I understand, this is an issue with the Public user having access to the Java services packaged with Oracle. I am guessing that the appropriate workaround is to revoke the Public user permissions granted during the installation process, or to lock that account out altogether. There is no patch available at this time, but revoking those permissions should serve as a temporary workaround. Actually, it should be a permanent workaround – after all, you didn’t really leave the ‘Public’ user account enabled on your production server, did you? I have been saying for several years that there is no such thing as public access to your database. Ever! You may have public content, but the public user should not just have its password changed – it should be fully locked out. Use a custom account with specific grant statements. Public execute permission on anything is ill-advised, though in some cases it can be done safely. Running default ‘Public’ permissions is flat-out irresponsible. You will want to review all other accounts that have access to Java and ensure that none of them have public access – or access provided by default credentials – until a patch is available.

Update: A couple of database assessment vendors were kind enough to contact me with more details on the hack, confirming what I had heard. Application Security Inc. has published more specific information on this attack and on workarounds. They are recommending removing the execute permissions as a satisfactory workaround. That is the most up-to-date information I can find.
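If you want to see what your own installation grants before applying the workaround, here is a minimal sketch that lists PUBLIC execute grants on Java-related packages. It assumes the cx_Oracle driver, a DBA-level audit account, and an illustrative package-name filter; treat it as a starting point for review, not an authoritative list of the affected objects.

```python
import cx_Oracle  # assumes the Oracle client libraries and cx_Oracle driver are installed

# Placeholder credentials and DSN -- use a read-only audit account you control.
conn = cx_Oracle.connect("audit_user", "********", "dbhost/ORCL")

QUERY = """
    SELECT owner, table_name, privilege
      FROM dba_tab_privs
     WHERE grantee = 'PUBLIC'
       AND privilege = 'EXECUTE'
       AND (table_name LIKE 'DBMS_JAVA%' OR table_name LIKE 'DBMS_JVM%')
"""

cursor = conn.cursor()
cursor.execute(QUERY)
rows = cursor.fetchall()
if not rows:
    print("No PUBLIC execute grants found on the Java packages checked.")
for owner, obj, priv in rows:
    # Each hit is a candidate for: REVOKE EXECUTE ON <owner>.<object> FROM PUBLIC
    print(f"PUBLIC has {priv} on {owner}.{obj}")
```

Test any revocation in a non-production environment first; applications that quietly depend on these grants will break.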


Rock Beats Scissors, and People Beat Process

My mentors in engineering management used to define their job as managing people, process, and technology. Those three realms, and how they interact, are a handy way to conceptualize organizational management responsibilities. We use process to frame how we want people to behave – trying to promote productivity, foster inter-group cooperation, and minimize mistakes. The people are the important part of the equation, and the process is there to help make them better as a group. How you set up process directly impacts productivity, sets priorities, and creates or reduces friction. Subtle adjustments to process are needed to account for individuals, group dynamics, and project specifics.

I got to thinking about this when reading Microsoft’s Simple Implementation of SDL. I commented on some of the things I liked about the process, specifically the beginning steps of (paraphrased):

  • Educate your team on the ground rules.
  • Figure out what you are trying to secure.
  • Commit to gating insecure code.
  • Figure out what’s busted.

Sounds simple, and conceptually it is, but in practice this is really hard. The technical analysis of the code is difficult, but implementing the process is a serious challenge. Getting people to change their behavior is hard enough, but with diverging company goals in the mix, it’s nearly impossible. Adding the SDL elements to your development cycle is going to cause some growing pains and probably take years. Even if you agree with all the elements, there are several practical considerations that must be addressed before you adopt the SDL – so you need more than the development team to embrace it.

The Definition of Insanity

I heard Marcus Ranum give a keynote last year at Source Boston on the Anatomy of the Security Disaster, and one of his basic points was that merely identifying a bad idea rarely adjusts behavior, and when it does, it’s generally only because failure is imminent. When initial failure conditions are noticed, as much effort is spent on finger-pointing and “Slaughter of the Innocents” as on learning and adjusting from mistakes. With fundamental process re-engineering, even with a simplified SDL, progress is impossible without wider buy-in and a coordinated effort to adapt the SDL to local circumstances. To hammer this point home, let’s steal a page from Mike Rothman’s Pragmatic series and imagine a quick conversation:

CEO to shareholders: “Subsequent to the web site breach we are reacting with any and all efforts to ensure the safety of our customers and continue trouble-free 24×7 operation. We are committed to security … we have hired one of the great young minds in computer security: a talented individual who knows all about web applications and exploits. He’s really good and makes a fine addition to the team! We hired him shortly after he hacked our site.”

Project Manager to programmers: “OK guys, let’s all pull together. The clean-up required after the web site hack has set us back a bit, but I know that if we all focus on the job at hand we can get back on track. The site’s back up and most of you have access to source code control again, and our new security expert is on board! We freeze code two weeks from now, so let’s focus on the goal and …”

Did you see that? The team was screwed before they started. Management is covered, as someone is now responsible for security. And project management and engineering leadership must get back on track, so they begin doing exactly what they did before, but push for project completion harder than ever. Process adjustments? Education? Testing? Nope. The existing software process is an unending cycle. That unsecured merry-go-round is not going to stop so you can fix it before moving on. As we like to say in software development: we are swapping engines on a running car. Success is optional (and miraculous, when it happens).

Break the Process to Fix It

The Simplified SDL is great, provided you can actually follow the steps. While I have not employed this particular secure development process yet, I have created similar ones in the past. As a practical matter, to make changes of this scope, I have always had to do one of three things:

  • Recreate the code from scratch under the new process. Old process habits die hard, and the code evaluation sometimes makes it clear that a retrofit would require more work than a complete rewrite. This makes other executives very nervous, but in my experience it has been the most efficient path. You may not have this option.
  • Branch off the code, with the sub-branch in maintenance while the primary branch lives on under the new process. I halted new feature development until the team had a chance to complete the first review and education steps. Much more work and more programming time (meaning more programmers committed), but better continuity of product availability, and less executive angst.
  • Move responsibility for the code to an entirely new team, trained on security and adhering to the new process. There is a learning curve while the engineers become familiar with the old code, but weaknesses found during review tend to be glaring, and no one’s ego gets hurt when you rip the expletive out of it. Also, the new engineers have no investment in the old code, so they can be more objective about it.

If you don’t break out of the old process and behaviors, you will generally end up with a mixed bag of … stuff.

Skills Section

As in the first post, I assume the goal of the simplified version of the process is to make this effort more accessible and understandable for programmers. Unfortunately, it’s much tougher than that. As an example, when you interview engineering candidates and discuss their roles, their skill level is immediately obvious. The more seasoned and advanced engineers and managers talk about big-picture design and architecture; they talk about tools and process, and they discuss the tradeoffs of their choices. Most newbies are not even aware


Comments on Microsoft Simplified SDL

I spent the last couple hours poring over the Simplified Implementation of the Microsoft SDL. I started taking notes and making comments, and realized that I have so much to say on the topic it won’t fit in a single post. I have been yanking stuff out of this one and trying to cover just the highlights, but I will have a couple follow-ups as well. But before I jump into the details and point out what I consider a few weaknesses, let me just say that this is a good outline. In fact, I will go so far as to say that if you perform each of these steps (even half-assed), your applications will be more secure. Much more secure, because the average software development shop is not performing these functions. There is a lot to like here, but full adoption will be difficult, due to the normal resistance to change of any software development organization.

Before I come across as too negative, let’s take a quick look at the outline and what’s good about it. For the sake of argument, let’s agree that the complexity of Microsoft’s full program is the motivation for this simplified implementation of the SDL. Lightweight, agile, simple, and modular are common themes for every development tool, platform, and process that enjoys mainstream success. Security should be no different, so let’s say this process variant meets our criteria. Microsoft’s Simplified Implementation of the Secure Development Lifecycle (SDL) is a set of companion steps to augment software development processes. From Microsoft’s published notes on the subject, it appears they picked the two most effective security tasks from each phase of software development. The steps are clear, and their recommendations align closely with the web application security research we performed last year.

What I like about it:

  • Phased Approach: The phases map well to most development processes. Using Microsoft’s recommendations, I can improve every step of the development process, and focus each person’s training on the issues they need to account for.
  • Appropriate Guidelines: Microsoft’s recommendations on tools and testing are sound and appropriate for each phase.
  • Education: The single biggest obstacle for most development teams is ignorance of how threats are exploited, and what they are doing wrong. Starting with education covers the biggest problem first.
  • Leaders and Auditors: Appointing a ‘Champion’ or leader gives someone responsibility for improving security, and having an auditor should ensure that the steps are being followed effectively.
  • Process Updates and Root Cause Analysis: This is a learning effort. No offense intended, but the first cycle through the process is going to be as awkward as a first date. Odds are you will modify, improve, and streamline everything the second time through.

Here’s what needs some work:

  • Institutional Knowledge: In the implementation phase, how do you define what an unsafe function is? Microsoft knows, because they have been tracking and classifying attacks on their applications for a long time. They have deep intelligence on the subject. They understand the specifics and the threat classes. You? You have the OWASP top ten threats, but probably less understanding of your code. Maybe you have someone on your team with great training or practical experience. But this process will work better for Microsoft because they understand what sucks about their code and how attackers operate. Your first code analysis will be a mixed bag. Some companies have great engineers who find enough to keep the entire development organization busy for ten years. Others find very little and are dependent on tools to tell them what to fix. Your effectiveness will come with experience and a few lumps on your forehead. (A trivial example of what a banned-function check looks like is sketched at the end of this post.)
  • Style: When do you document safe vs. unsafe style? Following the point above, your development team, as part of their education, should have a security style guide. It’s not just what you get taught in the classroom, but proper use of the specific tools and platforms you rely on. New developers come on board all the time, and you need to document your environment and style so they can learn the specifics.
  • Metrics: What are the metrics to determine accountability and effectiveness? The Simplified Implementation mentions metrics as a way to guide process compliance, and retrospective metrics to help gauge what is effective. Metrics are also the way to decide what a risky interface or unsafe function actually is. But the outline only pays lip service to this requirement.
  • Agile: The people who most need this are web developers using Agile methods, and this process does not necessarily map to those methods. I mentioned this earlier.

And the parts I could take or leave:

  • Tools: Is the latest tool always the best tool? To simplify their process, Microsoft introduced a couple security oversimplifications that might help or hurt. New versions of linkers and compilers have bugs as well.
  • Threat Surface: Statistically speaking, if you have twice as many objects and twice as many interfaces, you will generally make more mistakes and have more errors. I get the theory. In practice, or at least in my experience, threat surface is an overrated concept. I find the real issue is test coverage of new APIs and functions, which is why you prioritize tests and manual inspection for new code over older code.
  • Prioritization: Microsoft has a bigger budget than you do. You will need to prioritize what you do, which tools to buy first, and how to roll out tools in production. Some analysis is required to determine which training and products will be most effective.

All in all, a good general guide to improving development process security, and they have reduced the content to a manageable length. This provides a useful structure, but there are some issues regarding how to apply this type of framework to real-world programming environments, which I’ll touch on tomorrow.
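To make the banned-function point concrete, the check itself is the easy part; the institutional knowledge is the list. Here is a minimal sketch, assuming a C codebase and a locally maintained banned list (the three entries below are common examples, not Microsoft’s actual banned-API list):

```python
import re
from pathlib import Path

# Assumed, locally maintained banned list with suggested replacements.
BANNED = {"strcpy": "strcpy_s or strlcpy", "sprintf": "snprintf", "gets": "fgets"}

PATTERN = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

def scan(root="src"):
    """Report calls to banned C runtime functions under the given directory."""
    findings = []
    for path in Path(root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for match in PATTERN.finditer(line):
                func = match.group(1)
                findings.append((path, lineno, func, BANNED[func]))
    return findings

for path, lineno, func, replacement in scan():
    print(f"{path}:{lineno}: {func}() is banned; consider {replacement}")
```

The scan is trivial; deciding what belongs in the banned list, and what the approved replacements are, is where the institutional knowledge (and the style guide mentioned above) comes in.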


Database Security Fundamentals: Access & Authorization

This is part 2 of the Database Security Fundamentals series. In part 1, I provided an overview. Here I will cover basic access and authorization issues. First, the basics:

  • Reset Passwords: Absolutely the first basic step is to change all default passwords. If I need to break into a database, the very first thing I am going to try is to log into a known account with a default password. Simple, fast, and it rarely gets noticed. You would be surprised (okay, maybe not surprised, but definitely disturbed) at how often the default SA password is left in place. (A quick check for leftover defaults is sketched after this list.)
  • Public & Demonstration Accounts: If you are surprised by default passwords, you would be downright shocked at how effectively a skilled attacker can leverage ‘Scott/Tiger’ and similar demonstration accounts to take control of a database. Relatively low levels of permissions can be parlayed into administrative functions, so lock out unused accounts or delete them entirely. Periodically verify that they have not reverted because of a re-install or account reset.
  • Inventory Accounts: Inventory the accounts stored within the database. You should already have reset critical DBA accounts and locked out unneeded ones, but re-inventory to ensure you did not miss any. There are always service accounts and, with some database platforms, specific login credentials for add-on modules. Standard accounts created during database installation are commonly subject to exploit, providing access to data and database functions. Keep a list so you can compare over time.
  • Password Strength: There is lively debate about how much strong passwords and password rotation improve security. Keystroke loggers and phishing attacks ignore these measures. On the other hand, the fact that there are ways around these precautions doesn’t mean they should be skipped, so my recommendation is to activate password strength checks for all accounts. Having run penetration tests on databases, I can tell you from first-hand experience that weak passwords are pretty easy to guess; with a little time and an automated login program you can break most in a matter of hours. If I have a few live databases I can divide the password dictionary and run the checks in parallel, for a linear time savings. This is really easy to set up, and a basic implementation takes only a couple minutes. A couple more characters of (required) password length, and a requirement for numbers or special characters, both make guessing substantially more difficult.
  • Authentication Methods: Choose domain authentication or database authentication – whichever works for you. I recommend domain authentication, but the point is to pick one and stick with it. Do not mix the two, or confusion and shifting responsibilities will create security gaps later – cleaning up those messes is never fun. Do not rely on the underlying operating system for authentication, as that would sacrifice separation of duties, and OS compromise would automatically provide control over the data and database as well.
  • Educate: Educate users on the basics of password selection and data security. Teach them how to pick a word or phrase that is easy to remember, such as something they see visually each day, or perhaps something from childhood. Now show them simple substitutions of the letters with special characters and numbers. It makes the whole experience more interesting and less of a bureaucratic annoyance, and will reduce your support calls.

All these steps are easy to do.
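As a sanity check for the first two items, here is a minimal sketch that tries a handful of well-known default and demonstration credentials against a SQL Server instance over ODBC. The credential list, server name, and driver string are illustrative assumptions; adjust them for your platform, and only run this against databases you are responsible for.

```python
import pyodbc  # assumes an ODBC driver for your database platform is installed

# Well-known default/demo pairs -- extend this list for your platform.
DEFAULT_CREDS = [("sa", ""), ("sa", "sa"), ("scott", "tiger"), ("admin", "admin")]

SERVER = "dbserver01"  # placeholder hostname

def check_defaults(server=SERVER):
    """Return the default credential pairs that still allow a login."""
    live = []
    for user, password in DEFAULT_CREDS:
        conn_str = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={server};UID={user};PWD={password}"
        )
        try:
            pyodbc.connect(conn_str, timeout=5).close()
            live.append((user, password))
        except pyodbc.Error:
            pass  # login refused -- this is the outcome you want
    return live

for user, password in check_defaults():
    print(f"Default credentials still active: {user!r} / {password!r}")
```

An empty result is the goal; anything printed goes straight onto the remediation list.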
Everything I mentioned above you should be able to tackle in an afternoon for one or two critical databases. Once you have accomplished them, the following steps are slightly more complicated, but offer greater security. Unfortunately this is where most DBAs stop, because these steps make administration more difficult.

  • Group and Role Review: List out user permissions, roles, and groups to see who has access to what. Ideally, review each account to verify users have just enough authorization to do their jobs. This is a great idea, which I always hated. First, it requires a few recursive queries to build the list (a minimal sketch of that expansion appears at the end of this post), and second, the list gets huge for any non-trivial number of users. And actually using the list to remove ‘extraneous’ permissions gets you complaining users, such as receptionists who run reports on behalf of department administrators. This whole process is time-consuming and often unpleasant, but do it anyway. How rigorously you pursue excess rights is up to you, but you should at least identify glaring issues where normal users have access to admin functions. For those of you who get to work with application developers, this is your opportunity to advise them to keep permission schemes simple.
  • Division of Administrative Duties: If you did not hate me for the previous suggestion, you probably will after this one: divide administrative tasks between different admins. Specifically, perform all platform maintenance under an account that cannot access the database, and vice versa. You need to separate the two, and this is really not optional. For small shops it seems ridiculous to log out as one user and log back in as another, but it negates the domino effect: when one account gets breached, it does not mean every system must be considered compromised. If you are feeling really ambitious, or your firm employs multiple DBAs, relational database platforms provide advanced access control provisions to segregate database admin tasks such as archival and schema maintenance, improving security and fraud detection.
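The ‘recursive queries’ complaint above is really about expanding nested roles into each account’s effective permissions. Here is a minimal, platform-neutral sketch of that expansion, assuming you have already pulled the grant edges (user-to-role and role-to-privilege) out of your platform’s catalog views; the sample grants are invented for illustration.

```python
from collections import defaultdict

# Assumed grant edges pulled from your platform's catalog: (grantee, granted role or privilege).
GRANTS = [
    ("alice", "reporting_role"),
    ("reporting_role", "read_sales"),
    ("reporting_role", "admin_role"),   # the kind of surprise a review should surface
    ("admin_role", "drop_any_table"),
    ("bob", "read_sales"),
]

def effective_grants(edges):
    """Flatten nested role grants into each grantee's effective set."""
    direct = defaultdict(set)
    for grantee, granted in edges:
        direct[grantee].add(granted)

    def expand(name, seen):
        result = set()
        for granted in direct.get(name, ()):
            if granted in seen:
                continue  # guard against circular grants
            seen.add(granted)
            result.add(granted)
            result |= expand(granted, seen)
        return result

    return {grantee: expand(grantee, set()) for grantee in direct}

for grantee, grants in sorted(effective_grants(GRANTS).items()):
    print(f"{grantee}: {sorted(grants)}")
```

Run against real catalog data, the line for ‘alice’ – a reporting user who transitively holds drop_any_table – is exactly the glaring issue the review is meant to catch.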


FireStarter: Agile Development and Security

I am a big fan of the Agile project development methodology, especially Agile with Scrum. I love the granularity and focus the approach requires. I love that at any given point in time you are working on the most important feature or function. I love the derivative value of communication, and the subtle form of peer pressure that Scrum meetings produce. I love that if mistakes are made you do not go too far in the wrong direction, resulting in higher productivity and fewer software projects that are total disasters. I think Agile is the biggest advancement in code development in the last decade, as it addresses issues of complexity, scalability, focus, and bureaucratic overhead. But it comes with one huge caveat: Agile hurts secure code development. There, I said it. Someone had to.

The Agile process, and even the Scrum leadership model, hamstrings development in the area of building secure products. Security is not a freakin’ task card. Logic flaws are not well-documented, discrete tasks to be assigned. Project managers (and unfortunately most ScrumMasters) learned security by skimming a ‘For Dummies’ book at Barnes & Noble while waiting for their lattes, but these are the folks making the choices as to what security makes it into the iterations. Just like general IT security, we end up wrapping the Agile process in a security blanket or bolting on security after the code is complete, because the process as we know it is not well suited to secure development.

I know there will be several of you out there saying “Prove it! Show us a study or research evidence that supports your theory.” I can’t. I don’t have meaningful statistical data to back up my claim. But that does not mean it’s not true, and there is anecdotal evidence to support what I am saying. For example:

  • The average Sprint duration of two weeks is simply too short for meaningful security testing. Fuzzing and black box testing are infeasible with nightly builds or pre-release sanity checks.
  • Trust assumptions between code modules or system functions, where multiple modules process requests, cannot be fully exercised and tested within the Agile timeline. White box testing can be effective, but face it – security assessments don’t fit into neat 4-8 hour windows.
  • In the same way Agile products deviate from design and architecture specifications, they deviate from systemic analysis of trust and code dependencies. It’s a classic forest-for-the-trees problem: the efficiency and focus gained by skipping over big-picture details necessarily come at the expense of understanding how the system and data are used as a whole. Agile is great at dividing and conquering what you know, but not so great for dealing with the abstract.
  • Secure code development is not like fixing bugs, where you have a stack trace to follow. Secure code development is more about coding principles that lead to better security. In the same way Agile can’t help enforce code ‘style’, it won’t help with secure coding guidelines. (Secure) style verification is an advantage of peer programming and inherent in code reviews, but not intrinsic to Agile.
  • The person on the Scrum team with the least knowledge of security, the Product Manager, prioritizes what gets done. Project managers generally don’t track security testing, and they are not incented to get security right. They are incented to get the software over the finish line. If they track bugs on the product backlog, they probably have a task card buried somewhere, but don’t understand the threats.
  • Security personnel are chickens in the project: they do not gate code acceptance the way they traditionally could in waterfall testing, and may have limited exposure to developers.
  • The fact that major software development organizations are modifying or wrapping Agile with other frameworks to compensate for security is evidence of the difficulty of applying security practices directly.
  • The forms of testing that fit within Agile are more likely to get done. If they don’t fit, they are usually skipped (especially at crunch time), or have to be scheduled outside the development cycle.

It’s not just that the granular focus on tasks makes it harder to verify security at the code and system levels. It’s not just that the features are the focus, or that the wrong person is making security decisions. It’s not just that the quick turnaround in code production precludes some forms of testing known to be effective at identifying security issues. It’s not just that it’s hard to bucket security into discrete tasks. It’s all that and more.

We’re not going to see a study that compares Waterfall with Agile for security benefits. Putting together similar development teams to create similar products under two development methodologies to prove this point is not practical. I have run Agile and Waterfall projects of a similar nature in parallel, and while Agile had overwhelming advantages in a number of areas, security was not one of them. If you are moving to Agile, great – but you will need to evolve your Agile process to accommodate security.

What do you think? How have you successfully integrated secure coding practices with Agile? This is a FireStarter, so discuss in the comments.


Friday Summary: January 29, 2010

I really enjoy making fun of marketing and sales pitches. It’s a hobby. At my previous employer, I kept a book of stupid and nonsensical sayings I heard salespeople make – kind of my I Ching by sociopaths. I would even parrot back nonsense slogans and jargon at opportune moments. Things like “No excuses,” “Now step up to the plate and meet your commitments,” “Hold yourself accountable,” “The customer is first, don’t forget that,” “We must find ways to support these efforts,” “The hard work is done, now you need to complete a discrete task,” “All of your answers are YES YES YES!” and “Allow us to position for success!” Usually these were thrown out in a desperate attempt to get the engineering team to spend $200k to close a $40k deal.

Mainstream media marketing uses a similar ham-fisted belligerence in its messaging – trying to tie all your hopes, dreams, and desires to their product. My wife and I used to sit in front of the TV and call out all the overt and subliminal messages in commercials, like how buying a certain waffle iron would get you laid, or a vacuum cleaner that created marital bliss and made you the envy of your neighbors. Some of the pharmaceutical ads are the best, as you can turn off the sound altogether and just gaze at the imagery and try to guess whether they are selling Viagra, allergy medicine, or eternal happiness. But playing classical music and, in a reassuring voice, having a cute cartoon figure tell people just how smart they are, is surprisingly effective at getting them to pay an extra $.25 per gallon for gasoline.

But I must admit I occasionally find myself swayed by marketing when I thought I was more or less impervious. Worse, when it happens, I can’t even figure out what triggered the reaction. This week was one of those rare occasions. Why the heck is it that I need an iPad? More to the point, what void is this device filling, and why do I think it will make my life better? And that stupid little video was kind of condescending and childish … but I still watched it. And I still want one. Was it the design? The size? Maybe it’s because I know my newspaper is dead and I want some new & better way to get information electronically at the breakfast table? Maybe I want to take a browser with me when I travel, and not a phone trying to pretend it can display web pages? Maybe it’s because this is a much more appropriate design for a laptop? I don’t know, and I don’t care. This thing looks cool and useful in a way the Kindle just cannot compare to. I want to rip Apple for calling this thing ‘magical’ and ‘revolutionary’, but dammit, I want one.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich, Martin, and Zach on this week’s Network Security Podcast.

Favorite Securosis Posts

  • Rich: Adrian’s start to the Database Security Fundamentals series.
  • Mike: Rich’s FireStarter on APT. I’m so over APT at this point, but Rich provides some needed rationality in the midst of all the media frenzy.
  • Adrian: Rich’s series on Pragmatic Data Security is getting interesting with the Define Phase.
  • Mort: Low Hanging Fruit: Security Management takes Adam’s posts on the topic and fleshes them out.
  • Meier: Security Strategies for Long-Term, Targeted Threats. “Advanced Persistent Threat” just does not cut it.

Other Securosis Posts

  • Pragmatic Data Security: Define Phase
  • Incite 1/27/2010: Depending on the Kids
  • Network Security Fundamentals: Default Deny
  • The Certification Myth
  • Pragmatic Data Security: Groundwork
  • FireStarter: Security Endangered Species List

Favorite Outside Posts

  • Rich: Who doesn’t love a good cheat sheet? How about a couple dozen, all compiled into a nicely organized list?
  • Mike: Daniel Miessler throws a little thought-experiment bomb on pushing everyone through a web proxy farm for safer browsing. An interesting concept, and I need to analyze this in more depth next week.
  • Adrian: Stupid: A Metalanguage For Cryptography. Very cool idea. Very KISS!
  • Mort: Managing to the biggest risk. More awesomeness from shrdlu. I particularly love the closer: “So I believe politics can affect both how you assess and prioritize your security risks, and how you go about mitigating them. If you had some kind of magic Silly String that you could spray into your organization to highlight the invisible political tripwires, you’d have a much broader picture of your security risk landscape.”
  • Meier: I luvs secwerity. I also like Tenable’s post on Understanding the New Massachusetts Data Protection Law.

Project Quant Posts

  • Project Quant: Database Security – Encryption
  • Project Quant: Project Comments
  • Project Quant: Database Security – Protect through Monitoring
  • Project Quant: Database Security – Audit

Top News and Posts

  • Krebs’ article on the Texas bank preemptively suing a customer.
  • Feds boost breach fines.
  • Politics and Security.
  • Groundspeed: a Firefox add-on for web application pen testers.
  • PCI QSAs, certifications to get new scrutiny.
  • It’s The Adversaries Who Are Advanced And Persistent.
  • The EFF releases a tool to see how private/unique your browser is.
  • Intego releases their 2009 Mac security report. It’s pretty balanced.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Yeah, I am awarding myself a consolation prize for my comment in response to Mike’s post on Security Management, but I have to award this week’s best comment to Andre Gironda, in response to Mike’s post on The Certification Myth:

I usually throw up some strange straw-man and other kinds of confusing arguments like in my first post. But for this one, I’ll get right to the point: Does anyone know if China{|UK|AU|NZ|Russia|Taiwan|France} has a military directive similar to Department of Defense Directive 8570, thus requiring CISSP and/or GIAC certifications in various information assurance roles? Does anyone disagree that China has information superiority compared to the US, and potentially due in part to the existence of DoDD 8570? If China only hires the best (and not just the brown-nosers), then this would stand to achieve them a significant advantage, right? Could it be that instead


Database Security Fundamentals: Introduction

I have been part of 5 different startups, not including my own, over the last 15 years. Every one of them has sold, or attempted to sell, enterprise software. So it is not surprising that when I provide security advice, by default it is geared toward an enterprise audience. And oddly, when it comes to security, large enterprises are a little further ahead of the curve. They have more resources and people dedicated to the subject than small and medium sized businesses, and their coverage is much more diverse. But security advice does not always transfer well from one audience to the other. The typical SMB IT security team is one person. Or, in the case of database security, the DBA and the security practitioner are one and the same. The time they have to spend on learning and performing security tasks is significantly less, and the money they have to spend on security tools and automation is typically minimal.

To remedy that, I am creating a series of posts on pragmatic, hands-on tasks for database security. I’ll provide clear and actionable steps to protect your database and the data it stores. This series is geared to small IT shops that just need a straightforward checklist for database security. We’re not covering advanced security here, and we’re not talking about huge database installations with thousands of users, but rather the everyday security stuff you can do in an afternoon. And to keep costs low, I will focus on the security functions built into the database itself.

  • Access: User and administrative security, and security on the avenues into and out of the database.
  • Configuration: Database settings and setup that affect security and protect database functions from subversion or unauthorized alteration. I’ll go into the issue of reliance on the operating system as well.
  • Audit: An examination of activity, transactions, and anomalous events.
  • Data Protection: In cases where the database cannot protect access to information, we will cover techniques to prevent information from being lost or stolen.

The goal here is to protect the data stored within the database. We often lose sight of this goal, as we spend so much time focusing on the container (i.e., the database) and less on the data and how it is used. Of course I will cover database security – much of which will come up in the access control and configuration sections – but I will include security around the data and database functions as well.


Data Discovery and Databases

I periodically write for Dark Reading, contributing to their Database Security blog. Today I posted What Data Discovery Tools Really Do, introducing how data discovery works within relational database environments. As with many of the posts I write for them, I try not to use the word ‘database’ to preface every description, as it gets repetitive, but sometimes that context is really important. Ben Tomhave was kind enough to let me know that the post was referenced on the eDiscovery and Digital Evidence mailing list. One comment there was, “One recurring issue has been this: If enterprise search is so advanced and so capable of excellent granularity (and so touted), why is ESI search still in the boondocks?” I want to add a little color to the Dark Reading post, as well as touch on an issue with data discovery for ESI.

Automated data discovery is a relatively new feature for data management, compliance, and security tools. Specifically in regard to relational databases, the limitations of these products have only become an issue in the last couple years due to growing need – particularly for accuracy of analysis. The methodologies for rummaging around and finding stuff are effective, but the analysis methods still have a way to go. That’s why we are beginning to see labeling and content inspection. With the growing use of flat file and quasi-relational databases, look for labeling and Google-style search to become commonplace.

In my experience, metadata-based data discovery was about 85% effective. Having said that, the number is totally bogus. Why? Most of the stuff I was looking for was easy to find, because the databases were constructed by someone who was good at database design, using good naming conventions and accurate column definitions. In reality you can throw the 85% number out, because if a web application developer is naming columns “Col1, Col2, Col3, … Col56”, and defining them all as text fields up to 50 characters long, your effectiveness will be 0%. If you do not have labeling or content analysis to support the discovery process, you are wasting your time. Further, with some of the ISAM and flat file databases, the discovery tools do not crawl the database content properly, forcing some vendors to upgrade to support other forms of data management and storage. Given the complexity of environments and the mixture of data and database types, both the discovery and analysis components must continue to evolve.

Remember that a relational database is highly structured, with columns and tables fully defined at creation time. Data that is inserted goes through integrity checks and, in some cases, must conform to referential integrity checks as well. The odds of automated tools finding useful information in such databases are far higher because you have definitive descriptions. In flat files or scanned documents? All bets are off.

As part of a project I conducted in early 2009, I spoke with a number of attorneys in California and Arizona regarding legal document discovery and management. In that market, document discovery is a huge business, and there is a lot of contention in legal circles regarding its use. In terms of legal document and data discovery, the process and tools are very different from database data discovery. From what I have witnessed, and from explanations by people who sit on steering committees for issues pertaining to legal ESI, very little of the data is ever in a relational database.
The tools I saw were pure keyword and string pattern matching on flat files. Some of the large firms may have document management software that is a little more sophisticated, but much of it is pure flat file server scanning with reports, because of the sheer volume of data. What surprised me during my discussions was that document management is becoming a huge issue, as large legal firms attempt to win cases by flooding smaller firms with so many documents that they cannot even process the results of the discovery tools. They simply do not have adequate manpower, and it undermines their ability to process their case files. The fire around this market has to do with politics, not technology. The technology sucks too, but that’s secondary suckage.
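To ground the earlier point about metadata-based discovery versus content analysis, here is a minimal sketch of both passes against a throwaway SQLite table. The column-name keywords and the card-number regex are invented for illustration; real discovery tools carry much larger dictionaries plus validation logic such as Luhn checks.

```python
import re
import sqlite3  # stands in for any database that exposes catalog metadata

NAME_HINTS = re.compile(r"ssn|credit|card|salary|passwd|password", re.IGNORECASE)
CARD_SHAPE = re.compile(r"\b(?:\d[ -]?){15,16}\b")  # crude card-number shape, no Luhn check

def discover(conn, sample_rows=50):
    """Flag columns by name (metadata pass), then sample their content (content pass)."""
    findings = set()
    cur = conn.cursor()
    tables = [r[0] for r in cur.execute("SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        columns = [r[1] for r in cur.execute(f"PRAGMA table_info({table})")]
        for col in columns:
            if NAME_HINTS.search(col):
                findings.add((table, col, "column name match"))
        for row in cur.execute(f"SELECT * FROM {table} LIMIT {sample_rows}"):
            for col, value in zip(columns, row):
                if isinstance(value, str) and CARD_SHAPE.search(value):
                    findings.add((table, col, "content match"))
    return findings

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE web_form (col1 TEXT, col2 TEXT)")  # badly named, as in the post
conn.execute("INSERT INTO web_form VALUES ('Adrian', '4111 1111 1111 1111')")
for table, col, reason in sorted(discover(conn)):
    print(f"{table}.{col}: {reason}")
```

The metadata pass finds nothing here because the columns are named col1 and col2; the content pass still flags col2. That gap is exactly why a naming-convention-only approach falls apart on sloppily designed schemas.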


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.