Thanks to my wife’s job at a hospital, yesterday I was able to finally get my H1N1 flu shot. While driving down, I was also listening to a science podcast talking about the problems when the government last rolled out a big flu vaccine program in the 1970s. The epidemic never really hit, and there was a much higher than usual complication rate with that vaccine (don’t let this scare you off – we’ve had 30 years of improvement since then). The public was justifiably angry, and the Ford administration took a major hit over the situation.
Recently I also read an article about the Y2K “scare”, and how none of the fears panned out. Actually, I think it was a movie review for 2012, so perhaps I shouldn’t take it too seriously.
In many years of being involved with risk-based careers, from mountain rescue and emergency medicine to my current geeky stuff, I’ve noticed a consistent tendency for most people to see risk management successes as failures. Rather than believing that the hype was real and we actually succeeded in preventing a major negative event, most people merely interpret the situation as an overhyped fear that failed to manifest. They thus focus on the inconvenience and cost of the risk mitigation, as opposed to its success.
Y2K is probably one of the best examples. I know of many cases where we would have experienced major failures if it weren’t for the hard work of programmers and IT staff. We faced a huge problem, worked our asses off, and got the job done. (BTW – if you are a runner, this Nike Y2K commercial is probably the most awesomest thing ever.)
This behavior is something we constantly wrestle with in security. The better we do our job, the less intrusive we (and the bad guys) are, and the more invisible our successes. I’ve always felt that security should never be in the spotlight – our job is to disappear and not be noticed. Our ultimate achievement is absolute normalcy.
In fact, our most noticeable achievements are failures. When we swoop in to clean up a major breach, or are dangling on the end of a rope hanging off a cliff, we’ve failed. We failed to prevent a negative event, and are now merely cleaning up.
Successful risk management is a failure because the more we succeed, the more we are seen as irrelevant.
Posted at Tuesday 17th November 2009 8:17 am
(1) Comments •
By Adrian Lane
I was working at Unisys two decades ago when I first got into the discussion of what traits, characteristics, or skills to look for in programmer candidates we interviewed. One of the elder team members shocked me when he said he tried to hire musicians regardless of prior programming experience. His feeling was that anyone could learn a language, but people who wrote music understood composition and flow, far harder skills to teach. At the time I thought I understood what he meant, that good code has very little to do with individual statements or the programming language used. And the people he hired did make mistakes with the language, but their applications were well thought out. Still, it took 10 years before I fully grasped why this approach worked.
I got to thinking about this today when Rich forwarded me the link to Esther Schindler’s post “If the comments are ugly, the code is ugly”.
Perhaps my opinion is colored by my own role as a writer and editor, but I firmly believe that if you can’t take the time to learn the syntax rules of English (including “its” versus “it’s” and “your” versus “you’re”), I don’t believe you can be any more conscientious at writing code that follows the rules. If you are sloppy in your comments, I expect sloppiness in the code.
Thoughtful and well written, but horseshit none the less! Worse, this is a red herring. The quality of code lies in its suitability to perform the task it was designed to do. The goal should not be to please a spell checker.
Like it or not, there are very good coders who are terrible at putting comments into the code, and what comments they provide are gibberish. They think like coders. They don’t think like English majors. And yes, I am someone who writes like English was my second language, and code like Java was my first. I am just more comfortable with the rules and uses. We call Java and C++ ‘languages’, which seems to invite comparison or cause some to equate these two things. But make no mistake: trying to extrapolate some common metric of quality is simply nuts. It is both a terrible premise, and the wrong perspective for judging a software developer’s skills. Any relevance of human language skill to code quality is purely accidental.
I have gotten to the point in my career where a lack of comments in code can mean the code is of higher quality, not lower. Why? Likely a document-first, code-later process was followed. When I first started working with seasoned architects, we documented everything long before any code was written. And we had an entire hierarchy of documents, with the first layer covering the goals of the project, the second layer covering the major architectural components and data flow, the third layer covering design issues and choices, and finally documentation at the object level. These documents were checked into the source code control system along with the code objects for reference during development. There were fewer comments in the code, but a lot more information was readily available.
Good programs may have spelling errors in the comments. They may not have comments at all. They may have one or two logic flaws. Mostly irrelevant. I call the above post a red herring because it tries to judge software quality using spelling as a metric, as opposed to more relevant attributes such as:
- The number of bugs in any given module (on a per-developer basis if I can tell).
- The complexity or effort required to fix these bugs.
- How closely the code matches the design specifications.
- Uptime during stress testing.
- How difficult it is to alter or add functionality not provided for in the original design.
- The inclusion of debugging flags and tools.
- The inclusion of test cases with source code.
The number of bugs is far more likely to be an indicator of sloppiness, mis-reading the design specification, bad assumptions, or bogus use cases. The complexity of the fix usually tells me, especially with new code, if the error was a simple mistake or a major screw-up. Logic errors need to be viewed in the same way. Finally, test cases and debugging built into the code are a significant indicator that the coder was thinking about the important data points in the code. Witnessing code behavior has been far more helpful for debugging code than inline comments. Finding ‘breadcrumbs’ and debugging flags is a better indication of a skilled coder than concise grammatically correct comments.
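To make the bug-count and fix-effort metrics concrete, here is a trivial sketch of the kind of roll-up I mean – the records and field names are hypothetical, not pulled from any particular bug tracker:

```python
from collections import Counter

# Hypothetical defect export: one record per bug found in testing.
bugs = [
    {"module": "billing", "developer": "alice", "fix_hours": 1.5},
    {"module": "billing", "developer": "bob",   "fix_hours": 12.0},
    {"module": "reports", "developer": "alice", "fix_hours": 0.5},
]

bugs_per_module = Counter(b["module"] for b in bugs)
bugs_per_developer = Counter(b["developer"] for b in bugs)

# Effort to fix, per module, as a crude proxy for how serious the mistakes were.
effort_per_module = Counter()
for b in bugs:
    effort_per_module[b["module"]] += b["fix_hours"]

print(bugs_per_module)       # Counter({'billing': 2, 'reports': 1})
print(bugs_per_developer)    # Counter({'alice': 2, 'bob': 1})
print(effort_per_module)     # Counter({'billing': 13.5, 'reports': 0.5})
```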
I know some very good architects whose code and comments are sloppy. There are a number of reasons for this, primarily that coding is no longer their primary job. Most follow coding practices because heck, they wrote them. And if they are responsible for peer review this is a form of self preservation and education for their reviewees. But their most important skill is an understanding of business goals, architecture, and 4GL design. These are the people I want laying out my object models. These are the people I want stubbing out objects and prototyping workflow. These are the people I want choosing tools and platforms. Attention to detail is a prized attribute, but some details are more important than others. The better code I have seen comes from those who have the big picture in mind, not those who fuss over coding standards. Comments save time if professional code review (outsourced or peer) is being used, but a design specification is more important than inline comments.
There is another angle to consider here, and that is coding in the open source community is a bit different than working for “The Man”. This is because the eyes of your peers are on you. Not just one or two co-workers, but an entire community. Peer pressure is a great way to get better quality code. Misspellings will earn you a few private email messages pointing out your error, but sloppy programming habits invite public ridicule and scorn. Big motivator.
Still, I maintain in-code comments are of limited value – an old model for development that went out of fashion with Pascal in the enterprise. We have source code control systems that allow us to keep documentation with code segments. Better still are design documents that describe what should be, whereas code comments describe what ‘is’ and explain the small idiosyncrasies of the implementation caused by language, platform, or compatibility limitations.
Spelling as a quality indicator… God, if it were only that easy!
Posted at Monday 16th November 2009 2:48 pm
(8) Comments •
By Rich and Adrian Lane
One of the more vexing problems in information security is our lack of frameworks and metrics for measuring and optimizing security efforts – specifically, for measuring the efficiency and effectiveness of security programs. This makes it more difficult both to improve our processes, and to communicate our value to non-technical decision makers.
I’m not saying we don’t have any metrics. In recent years we’ve come a long way, with developments such as the Center for Internet Security’s Consensus Metrics and the work of Andrew Jaquith and the Security Metrics community. For the most part these metrics fall into two broad categories: program metrics, and risk/threat models.
One area that has been generally lacking – not to take anything away from the other two categories – is detailed process-oriented models for improving efficiency and effectiveness within specific security areas. In other words, instead of just determining whether a particular process is an overall improvement, such as by measuring time to patch managed systems (efficiency) or percentage of overall systems patched (effectiveness), we lack tools for examining the individual steps within the process for finer-grained changes. Such detailed measurements can help us figure out how much it costs to patch, identify where and why our patching might be slower than desired (and thus how to make it faster), and determine why certain systems fall between the gaps and aren’t patched. Our higher-level models help us evaluate risk and overall security programs, while detailed metrics would be useful for performance optimization.
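As a toy illustration of the difference between those two program-level numbers (the data below is invented, and not part of any particular model), both can be computed from little more than a patch log:

```python
from datetime import date

# Hypothetical patch records: when the patch shipped, and when (if ever) each system got it.
patch_log = [
    {"system": "web01", "released": date(2009, 10, 13), "applied": date(2009, 10, 20)},
    {"system": "web02", "released": date(2009, 10, 13), "applied": date(2009, 11, 2)},
    {"system": "db01",  "released": date(2009, 10, 13), "applied": None},
]

patched = [p for p in patch_log if p["applied"] is not None]

# Efficiency: average days from release to deployment on the systems we actually patched.
avg_days_to_patch = sum((p["applied"] - p["released"]).days for p in patched) / len(patched)

# Effectiveness: percentage of systems that received the patch at all.
coverage = 100.0 * len(patched) / len(patch_log)

print(f"Average time to patch: {avg_days_to_patch:.1f} days")  # 13.5 days
print(f"Patch coverage: {coverage:.0f}%")                       # 67%
```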
Our first attempt at building a security performance optimization model focused on patch management, and we called it Project Quant. Over about 6 months we built a standard process framework for patch management, with heavy community participation, and then identified a series of detailed metrics for each step in the process. We ended up with about 40 steps in 10 main phases, with well over 100 potential metrics, prioritized so you can focus on a few key areas, because few people have the resources to measure them all.
About a month ago we were approached by a database security vendor to see if we could do the same thing for database security. This vendor, Application Security Inc., wanted an open, public, objective framework to measure the potential costs associated with database security. As with the initial Project Quant, which was sponsored by Microsoft, we agreed to proceed with the project as long as we could maintain our Totally Transparent Research policy. In other words, all the work has to be done in public, and the sponsor must participate through the same public mechanisms (comments and forum posts) as anyone else.
This project aligns very well with our research coverage, and we’ve been looking for an excuse to build out more-detailed database security process models for some time now. We also realized the format we used for Project Quant works well for other process-based metrics models. Thus we’re proud to introduce Project Quant for Database Security, and we will now refer to the initial project as Project Quant for Patch Management.
Based on what we have learned to date in Project Quant, this is how the project will proceed:
- We will, with community involvement, build out a high-level process framework for database security (see the patch management cycle for an example).
- Once the high level process looks good, we will build out detailed steps for each phase of the higher-level process, and solicit public feedback and involvement.
- We will build out sub-phase processes that help define tasks, and identify metrics for each step. Metrics will be hard costs in dollars (hardware/software), or time to complete the step (a rough cost roll-up along these lines is sketched just after this list). In some cases we will also include some effectiveness metrics (e.g., success vs. failure rates), but the primary focus of the model is costs/efficiency.
- We will classify the metrics by importance and identify key metrics. We learned in the first Project Quant that it’s easy to identify a large number of potential metrics, but most people need only focus on a few that they can measure with a reasonable investment – once again, some metrics are expensive enough to measure that they would be a poor investment for some (or even most) organizations.
- Where possible, we will support the research with open surveys and interviews.
- Absolutely all the research will be conducted out in the open to maintain objectivity. All public comments will be retained as part of the project record, and no comments will be filtered except for spam and off-topic content. The sponsor is only allowed to participate through the same public mechanisms, so their financial involvement can’t influence the result. (As with all our contracts, if our objectivity requirements mean the final result doesn’t meet the sponsor’s needs, they don’t have to pay.)
- Anyone can participate – other security vendors, database and security professionals, database vendors, or anyone with too much time on their hands. If you work for a database or database security vendor, we ask that you disclose the company you are with.
- All materials will be released under a Creative Commons license.
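To show what we mean by the cost/efficiency focus, here is a rough sketch of the kind of per-step roll-up the model supports. The step names, hours, and rates are placeholders for illustration only, not the actual framework:

```python
HOURLY_RATE = 85.0  # assumed fully loaded cost per staff hour

# Hypothetical measurements for one cycle of a database security process.
steps = [
    {"step": "Enumerate databases",   "hours": 6.0,  "hard_cost": 0.0},
    {"step": "Assess configurations", "hours": 14.0, "hard_cost": 1200.0},  # e.g., share of a tool license
    {"step": "Remediate findings",    "hours": 22.0, "hard_cost": 0.0},
    {"step": "Document and report",   "hours": 4.0,  "hard_cost": 0.0},
]

total = 0.0
for s in steps:
    cost = s["hours"] * HOURLY_RATE + s["hard_cost"]
    total += cost
    print(f'{s["step"]:<22} ${cost:>8.2f}')
print(f'{"Total cost per cycle":<22} ${total:>8.2f}')
```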
Since database security is more diverse than patch management, we expect to identify multiple sub-processes as part of an overall program. For example, assessment and monitoring aren’t necessarily part of a contiguous cycle like most of the phases of patch management. Because this scope is also wider, we don’t plan on delving into the same level of detail on the metrics as we did with patch management. To be honest, we probably went too deep, and included far more metrics than anyone could reasonably collect using current technologies.
In terms of timeline we are shooting to complete this project around the end of January or early February.
So let us know what you think. We’ll start posting initial thoughts on the process model tomorrow, and start cranking through it from there. We’ll keep all material in the Project Quant site, and will update the Research Library to reflect that we’re now expanding Quant into other security areas. You can find a complete Table of Contents in the Process Framework post.
Posted at Monday 16th November 2009 1:31 pm
(4) Comments •
I was talking with security researcher Mike Bailey over the weekend, and there’s a lot of confusion around his disclosure last week of a combination of issues with Adobe Flash that lead to some worrisome exploit possibilities. Mike posted his original information and an FAQ. Adobe responded, and Mike followed up with more details.
The reason this is a bit confusing is that there are 4 related but independent issues that contribute to the problem.
- Flash ignores file extensions and content headers. The Flash player built into all of our browsers will execute any file that has Flash file headers. This means it ignores HTTP content headers. Some sites assume that content can’t execute because they don’t label it as runnable in the HTML or through the HTTP headers. If they don’t specifically filter the content type, though, and allow a Flash object anywhere in the page, it will run – in their context. Running in the context of the containing page/site is expected, but execution despite content labeling is often unexpected and can be dangerous. Most sites now filter or otherwise mark images and some other major uploadable content types, but if they accept a .zip file or a document and don’t filter it (and many sites do filter), the content will run.
- Flash files can impersonate other file types. A bad guy can take a Flash program, append a .zip file, and give it a .zip file extension. To any ZIP parser, that’s a valid zip file, and not a Flash file. This also applies to other file types, such as the .docx/pptx/xlsx zipped XML formats preferred by current versions of MS Office. As mentioned above, many servers screen potentially-unsafe file types such as zip. Such hybrid files are totally valid zip archives, but simultaneously executable Flash files. If the site serves up such a file (as many bulletin boards and code-sample sites do), the Flash plugin will manage to recognize and execute the Flash component, even though it looks more like a zip file to humans and file scanners.
Thus we have four problems – three of which Adobe can fix – that create new exploit scenarios for attackers. Attackers can sneak Flash files into places where they shouldn’t run, and can design these malicious applications to allow them to manipulate the hosting site in ways that shouldn’t be possible. This works on some common platforms if they enable file uploads (Joomla, Drupal), as well as some of the sites Mike references in his posts.
This isn’t an end-of-the-world kind of problem, but is serious enough that Adobe should address it. They should force Flash to respect HTTP headers, and could easily filter out “disguised” Flash files. Flash should also respect the same origin policy, and not allow the hosting site to affect the presenting site.
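To illustrate what filtering “disguised” Flash files could look like on the site side (this is my own rough sketch of one possible upload check, not anything Adobe or Mike published), a server can reject any upload whose first bytes are the Flash magic numbers, regardless of extension or declared content type:

```python
# SWF headers: uncompressed, zlib-compressed, and the later LZMA-compressed variant.
FLASH_MAGIC = (b"FWS", b"CWS", b"ZWS")

def looks_like_flash(data: bytes) -> bool:
    """True if the payload starts with a SWF header, whatever its name or Content-Type claims."""
    return data[:3] in FLASH_MAGIC

def accept_upload(filename: str, data: bytes) -> bool:
    # The extension and declared type are irrelevant here – the Flash player ignores them too,
    # so the only reliable signal is the content itself.
    if looks_like_flash(data):
        print(f"Rejected {filename}: content is a Flash object")
        return False
    return True

# A 'zip' that is really an SWF with an archive appended still starts with a SWF header.
assert accept_upload("samples.zip", b"CWS" + b"\x00" * 64) is False
assert accept_upload("notes.txt", b"plain text") is True
```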
This issue is definitely more serious than Adobe is saying, and hopefully they’ll change their position and fix the parts of it that are under their control.
Posted at Monday 16th November 2009 10:39 am
(1) Comments •
By David Mortman
I recently had the pleasure to present at a local CIO conference. There were about 50 CIOs in the room, ranging from .edu folks, to start-ups, to the CIOs of major enterprises including a large international bank and a similarly large insurance company. While the official topic for the event was “the cloud”, there was a second underlying theme – that CIOs needed to learn how to talk to the business folks on their terms and also how to make sure that IT wasn’t being a roadblock but rather an enabler of the business. There was a lot of discussion and concern about the cloud in general – driven by business’ ability to take control of infrastructure away from IT – so while everybody agreed that communicating with the business should always have been a concern, the cloud has brought this issue to the fore.
This all sounds awfully familiar, doesn’t it? For a while now I’ve been advocating that we as an industry need to be doing a better job communicating with the business, and I stand behind that argument today. But I hadn’t realized how fortunate I was to work with several CIOs who had already figured it out. It’s now pretty clear to me that many CIOs are still struggling with this, and that it is not necessarily a bad thing. It means, however, that while the CIO is still an ally as you work to communicate better with the business, it is now important to keep in mind that the CIO might be more of a direct partner than a mentor. Either way, having someone to work with on improving your messaging is important – it’s like having an editor (Hi Chris!) when writing. That second set of eyes is really important for ensuring the message is clear and concise.
Posted at Monday 16th November 2009 9:51 am
(1) Comments •
We talk a lot about the role of anonymization on the Internet. On one hand, it’s a powerful tool for freedom of speech. On the other, it creates massive security challenges by greatly reducing attackers’ risk of apprehension.
The more time I spend in security, the more I realize that economics plays a far larger role than technology in what we do.
Anonymization, combined with internationalization, shifts the economics of online criminal activity. In the old days to rob or hurt someone you needed a degree of physical access. The postal and phone systems reduced the need for this access, but also contain rate-limiters that reduce scalability of attacks. Physical access corresponds to physical risk – particularly the risk of apprehension. A lack of sufficient international cooperation (or even consistent international laws), combined with anonymity, and the scope and speed of the Internet, skew the economics in favor of the bad guys. There is a lower risk of capture, a lower risk of prosecution, limited costs of entry, and a large (global) scope for potential operations.
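One way to make that concrete is to compare the attacker’s rough expected value under the two sets of conditions. The numbers below are invented purely to show the shape of the math, not real estimates:

```python
def expected_value(gain, p_success, p_caught, penalty, cost_of_entry):
    """Crude expected payoff for a single criminal operation."""
    return p_success * gain - p_caught * penalty - cost_of_entry

# Old world: physical access required, real risk of apprehension, one victim at a time.
physical = expected_value(gain=5_000, p_success=0.5, p_caught=0.2,
                          penalty=100_000, cost_of_entry=500)

# Online: anonymous, cross-border, cheap to enter, and scalable to thousands of victims.
online = expected_value(gain=5_000 * 1_000, p_success=0.05, p_caught=0.005,
                        penalty=100_000, cost_of_entry=2_000)

print(f"Physical crime expected value: {physical:>12,.0f}")  # negative: usually not worth it
print(f"Online crime expected value:   {online:>12,.0f}")    # strongly positive under these assumptions
```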
Heck, with economics like that, I feel like an idiot for not being a cybercriminal.
In security circles we spend a lot of time talking about the security issues of anonymity and internationalization, but these really aren’t the problem. The real problem isn’t the anonymity of users, but the anonymity of losses.
When someone breaks into your house, you know it. When a retailer loses inventory to shrinkage, the losses are directly attributable to that part of the supply chain, and someone’s responsible. But our computer security losses aren’t so clear, and in fact are typically completely hidden from the asset owner. Banking losses due to hacking are spread throughout the system, with users rarely paying the price.
Actually, that statement is completely wrong. We all pay for this kind of fraud, but it’s hidden from us by being spread throughout the system, rather than tied to specific events. We all pay higher fees to cover these losses. Thus we don’t notice the pain, don’t cry out for change, and don’t change our practices. We don’t even pick our banks or credit cards based on security any more, since they all appear the same.
Losses are also anonymized on the corporate side. When an organization suffers a data breach, does the business unit involved suffer any losses? Do they pay for the remediation out of their departmental budget? Not in any company I’ve ever worked with – the losses are absorbed by IT/security.
Our system is constructed in a manner that completely disrupts the natural impact of market forces. Those most responsible for their assets suffer minimal or no direct pain when they experience losses. Damages are either spread through the system, or absorbed by another cost center.
Now imagine a world where we reverse this situation. Where consumers are responsible for the financial losses associated with illicit activity in their accounts. Where business unit managers have to pay for remediation efforts when they are hacked. I guarantee that behavior would quickly change.
The economics of security fail because the losses are invisibly transferred away from those with the most responsibility. They don’t suffer the pain of losses, but they do suffer the pain/inconvenience of security. On top of that, many of the losses are nearly impossible to measure, even if you detect them (non-regulated data loss). No wonder they don’t like us.
Security professionals ask me all the time when users will “get it”, and management will “pay attention”. We don’t have a hope of things changing until those in charge of the purse strings start suffering the pain associated with security failures.
It’s just simple economics.
Posted at Friday 13th November 2009 1:33 pm
(2) Comments •
I have to be honest. I’m getting tired of this whole “security is failing, security professionals suck” meme.
If the industry was failing that badly all our bank accounts would be empty, we’d be running on generators, our kids would all be institutionalized due to excessive exposure to porn, email would be dead, and all our Amazon orders would be rerouted to Liberia… but would never show up because of all the falling planes crashing into sinking cargo ships.
I’m not going to say we don’t have serious problems! We do, but we are also far from complete failure. Just as any retail supply chain struggles with shrinkage (theft), any organization of sufficient size will struggle with data shrinkage and security penetrations.
Are we suffering losses? Hell, yes. Are they bad? Most definitely. But these losses clearly haven’t hit the point where the pain to society has sufficiently exceeded our tolerance. Partially I think this is because the losses are unevenly distributed and hidden within the system, but that’s another post. I don’t know where the line is that will kick the world into action, but suspect it might involve sudden unavailability of Internet porn and LOLCats email.
Those of us deeply embedded within the security industry forget that the vast majority of people responsible for IT security across the world aren’t necessarily in dedicated positions within large enterprises. I’d venture a bet that if we add up all the 1-2 person security teams in SMB (many only doing security part-time), and other IT professionals with some security responsibilities, that number would be a pretty significant multiple of all the CISSPs and SANS graduates in the world.
It’s ridiculous for us to tell these folks that they are failing. They are slammed with day to day operational tasks, with no real possibility of ever catching up. I heard someone say at Gartner once that if we froze the technology world today, buying no new systems and approving no new projects, it would still take us 5 years to catch up.
Security professionals have evolved… they just have far too much to deal with on a daily basis. We also forget that, as with any profession, most of the people in it just want to do their jobs and go home at night, perhaps 10% are really good and always thinking about it, and at least 30% are lazy and suck. I might be too generous with that 30% number.
Security, and security professionals, aren’t failing. We lose some battles and win others, and life goes on. At some point the world feels enough pain and we get more resources to respond. Then we reduce that pain to an acceptable level, and we’re forgotten again.
That said, I do think life will be more interesting once losses aren’t hidden within the system (and I mean inside all kinds of businesses, not just the financial world). Once we can tie data loss to pain, perhaps priorities will shift. But that’s for another post…
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
Top News and Posts
Blog Comment of the Week
This week’s best comment comes from Mike Rothman in response to Compliance vs. Security:
Wow. Hard to know where to start here. There is a lot to like and appreciate about Corman’s positions. Security innovation has clearly suffered because organizations are feeding the compliance beast. Yes, there is some overlap - but it’s more being lucky than good when a compliance mandate actually improves security.
The reality is BOTH security and compliance do not add value to an organization. I’ve heard the “enabling” hogwash for years and still don’t believe it. That means organizations will spend the least amount possible to achieve a certain level of “risk” mitigation - whether it’s to address security threats or compliance mandates. That is not going to change.
What Josh is really doing is challenging all of us to break out of this death spiral, where we are beholden to the compliance gods and that means we cannot actually protect much of anything. Compliance is and will remain years behind the real threats.
Posted at Thursday 12th November 2009 9:52 pm
(1) Comments •
By Adrian Lane
A couple weeks ago, we began an internal discussion about DNS security and X.509 certificates. It dawned on me that those of you who have never worked with certificates may not understand what they are or what they are for. Sure, you can go to the X.509 Wiki, where you get the rules for usage and certificate structure, but that’s a little like trying to figure out football by reading the rule book. If you are asking, “What the heck is it and what is it used for?”, you are not alone.
An X.509 certificate is used to make an authoritative statement about something. A real life equivalent would be “Hi, I’m David, and I live at 555 Main Street.” The certificate holder presents it to someone/something in order to prove they are who they say they are, in order to establish trust. X.509 and other certificates are useful because the certificate provides the necessary information to validate the presenter’s claim and to authenticate the certificate itself. Like a driver’s license with a hologram, but much better. The recipient examines the certificate’s contents to decide if the presenter is who they say they are, and then whether to trust them with some privilege.
Certificates are used primarily to establish trust on the web, and rely heavily on cryptography to provide the built-in validation. Certificates are always signed by a chain of authority. If the root of the chain is trusted, the user or application can extend that level of trust to some other domain/server/user. If the recipient doesn’t already trust the top signing authority, the certificate is ignored and no trust is established. In a way, an X.509 certificate is a basic embodiment of data centric security, as it contains both information and some rules of use.
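To make the “authoritative statement” idea concrete, here is a small sketch using Python’s cryptography package (my choice of tooling for illustration, nothing mandated by the standard) that reads a certificate and prints the claims a recipient would evaluate before extending trust:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# Load a PEM-encoded certificate, e.g., one saved from a web server or a CA bundle.
with open("example_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The "statement": who the certificate claims to identify, who vouches for it, and for how long.
print("Subject:    ", cert.subject.rfc4514_string())
print("Issuer:     ", cert.issuer.rfc4514_string())
print("Not before: ", cert.not_valid_before)
print("Not after:  ", cert.not_valid_after)
print("Fingerprint:", cert.fingerprint(hashes.SHA256()).hex())

# Trust only flows if the issuer chains up to a root the recipient already trusts;
# checking that chain (and revocation) is the recipient's job, not the certificate's.
```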
Most certificates state within themselves what they are used for, and yes, they can be used for purposes other than validating web site identity/ownership, but in practice we don’t see diverse uses of X.509 certificates. You will hear that X.509 is an old format, that it’s not particularly flexible or adaptable. All of which is true and why we don’t see it used very often in different contexts. Considering that X.509 certificates are used primarily for network security, but were designed a decade before most people had even heard of the Internet, they have worked considerably better than we had any right to expect.
Posted at Thursday 12th November 2009 4:12 pm
(0) Comments •
I just read about some Georgia Tech researchers working on remote security techniques that carriers could use to help manage attacks on cell phones.
Years ago I used to focus on a similar issue: how mobile malware was something that carriers would eventually be responsible for stopping, and that’s why we wouldn’t really need AV on our phones. That particular prediction was clearly out of date before the threat ever reared its ugly head.
These days our phones are connected nearly as much to WiFi, Bluetooth, and other networks as they are to the carrier’s network. Thus it isn’t hard to see malware that checks to see which network interface is active before sending out any bad packets (DDOS is much more effective over WiFi than EDGE/3G anyway). This could circumvent the carrier, leaving malware to propagate over local networks.
Then again, perhaps we’ll all have super-high-speed carrier-based networks on some 6G technology before phone malware is prevalent, and we’ll be back on carrier networks again for most of our connectivity. In which case, if it’s AT&T, the network won’t be reliable enough for any malware to spread anyway.
Posted at Thursday 12th November 2009 3:43 pm
(0) Comments •
How often have you heard the phrase, “Never assume” (insert the cheesy catch phrase that was funny in 6th grade here)?
For the record, it’s wrong.
When designing our security, disaster recovery, or whatever, the problem isn’t that we make assumptions, it’s that we make the wrong assumptions. To narrow it down even more, the problem is when we make false assumptions, and typically those assumptions skew towards the positive, leaving us unprepared for the negative. Actually, I’ll narrow this down even more… the one assumption to avoid is a single phrase: “That will never happen.”
There’s really no way to perform any kind of forward-looking planning without some basis for assumptions. The trick to avoiding problems is that these assumptions should generally skew to the negative, and must always be justified, rather than merely accepted. It’s important not to make all your decisions based on worst cases, because that leads to excessive costs. Exposing all the assumptions helps you examine the corresponding risk tolerance.
For example, in mountain rescue we engaged in non-stop scenario planning, and had to make certain assumptions. We assumed that a well-cared-for rope under proper use would only break at its tested breaking strength (minus knots and other calculable factors). We didn’t assume that breaking strength was whatever the manufacturer printed on the label – we used our own internal value, determined through testing. We would then build in a minimum 3:1 safety factor to account for unexpected dynamic strains/wear/whatever. In the field we were constantly calculating load levels in our heads, and would even occasionally break out a dynamometer to confirm. We also tested every single component in our rescue systems – including the litter we’d stick the patient into, just in case someone had to hang off the end of it.
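The arithmetic behind those calculations is simple enough to sketch; the breaking strength and loads below are placeholders, not our team’s actual test data:

```python
# Hypothetical numbers for a rescue rope system.
tested_breaking_strength = 40.0   # kN, from our own destructive testing, not the label
knot_efficiency = 0.70            # knots and bends reduce rope strength by a calculable factor
safety_factor = 3.0               # minimum margin we designed to

effective_strength = tested_breaking_strength * knot_efficiency
max_working_load = effective_strength / safety_factor

print(f"Effective strength with knots: {effective_strength:.1f} kN")   # 28.0 kN
print(f"Maximum allowed system load:   {max_working_load:.1f} kN")     # 9.3 kN

# Field check: two rescuers plus patient plus litter, expressed as force in kN.
static_load = (2 * 0.9) + 0.9 + 0.2
assert static_load < max_working_load, "Redesign the system before loading it"
```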
Our team was very heavy with engineers, but that isn’t the case with other rescue teams. Most of them used a 10:1 safety factor, but didn’t perform the same kinds of testing or calculations we did. There’s nothing wrong with that… although it did give our team a little more flexibility.
I was recently explaining the assumptions I used to derive our internal corporate security, and realized that I’ve been using a structured assumptions framework that I haven’t ever put in writing (until now). Since all scenario planning is based on assumptions, and the trick is to pick the right assumptions, I formalized my approach in the shower the other night (an image that has likely scarred all of you for life). It consists of four components:
- Assumption: The assumption itself, stated plainly.
- Reasoning: The basis for the assumption.
- Indicators: Specific cues that indicate whether the assumption is accurate or if there’s a problem in that area.
- Controls: The security/recovery/safety controls to mitigate the issue.
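If you want to keep these somewhere more structured than a text file, the whole framework fits in a tiny record; this is just one way to capture it:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SecurityAssumption:
    """One entry in the assumptions framework: the claim, its justification, and its consequences."""
    assumption: str                                        # the statement itself
    reasoning: str                                         # why we believe it
    indicators: List[str] = field(default_factory=list)    # cues that it is holding or failing
    controls: List[str] = field(default_factory=list)      # controls the assumption drives

example = SecurityAssumption(
    assumption="Our website will be hacked.",
    reasoning="Third-party platform, co-hosted, and the easiest thing we expose.",
    indicators=["Unexpected admin accounts", "HTML rendering re-enabled in comments"],
    controls=["No HTML in comments", "Single-site passwords", "Separate hosting", "Layered backups"],
)
```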
Here’s how I put it in practice when developing our security:
Assumption: Securosis in general, and myself specifically, are a visible target.
Reasoning: We are extremely visible and vocal in the security community, and as such are more than just a target of opportunity. We also have strong relationships within the vulnerability research community, where directed attacks to embarrass individuals are not uncommon. That said, we aren’t at the top of an attacker’s list – there is no financial incentive to attack us, nor does any of our work directly interfere with the income of cybercriminal organizations. While we deal with some non-public information, it isn’t particularly valuable in a financial context. Thus we are a target, but the motivation would be to embarrass us and disrupt our operations, not to generate income.
Indicators: A number of our industry friends have been targeted and successfully attacked. Last year one of my private conversations with one such victim was revealed as part of an attack. For this particular assumption, no further indicators are really needed.
Controls: This assumption doesn’t drive specific controls, but does reinforce a general need to invest heavily in security to protect against a directed attack by someone willing to take the time to compromise myself or the company. You’ll see how this impacts things with the other assumptions.
Assumption: While we are a target, we are not valuable enough to waste a serious zero-day exploit on.
Reasoning: A zero-day capable of compromising our infrastructure will be too financially valuable to waste on merely embarrassing a gaggle of analysts. This is true for our internal infrastructure, but not necessarily for our web site.
Indicators: If this assumption is wrong, it’s possible one of our outbound filtering layers will register unusual activity, or we will see odd activity from a server.
Controls: Outbound filtering is our top control here, and we’ve minimized our external surface area and compartmentalized things internally. The zero-day would probably have to target our individual desktops, or our mail server, since we don’t really have much else. Our web site is on a less common platform, and I’ll talk more about that in a second. There are other possible controls we could put in place (from DLP to HIPS), but unless we have an indication someone would burn a valuable exploit on us, they aren’t worth the cost.
Assumption: Our website will be hacked.
Reasoning: We do not have the resources to perform full code analysis and lockdown on the third party platform we built our site on. Our site is remotely co-hosted, which also opens up potential points of attack. It is the weakest link in our infrastructure, and the easiest point to attack short of developing some new zero-day against our mail server or desktops.
Indicators: Unusual activity within the site, or new administrative user accounts. We periodically review the back-end management infrastructure for indicators of an ongoing compromise, including both the file system and the content management system. For example, if HTML rendering in comments was suddenly turned on, that would be an indicator.
Controls: We deliberately chose a service provider and platform with better than average security records, and security controls not usually available for a co-hosted site. We’ve disabled any HTML rendering in comments/forum posts, and promote use of NoScript when visiting our site to reduce user exposure when it’s compromised. On our side, we mandate single-site passwords for all the staff, which are not reused anywhere else. The site is hosted separately from our other infrastructure. I encourage everyone to use a single site browser that is locked down to only render content from our site (to avoid XSS/CSRF). I use two different layers to ensure I can only access the site, and nothing but the site, from my dedicated browser. Thus our own site shouldn’t be able to be used to compromise any other part of our infrastructure when someone finally pops it. Also, right now we don’t store sensitive information about any visitors on the site (no PII). When we do start offering for-pay products, we will use external credit card processing, pay for ongoing penetration testing, and remind our users to never reuse their site password anyplace else. We have a multi-level backup scheme to minimize lost data when the site is finally hacked.
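One cheap way to automate the periodic review mentioned under Indicators is a simple baseline-and-diff check. The paths below are hypothetical, not our actual site layout:

```python
import hashlib
import json
import pathlib

# Hypothetical files whose silent change would be an indicator of compromise.
WATCHED = ["config/settings.php", "templates/comments.tpl"]
BASELINE = pathlib.Path("baseline.json")

def snapshot() -> dict:
    return {p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest() for p in WATCHED}

def check() -> None:
    current = snapshot()
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline recorded")
        return
    baseline = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print(f"ALERT: {path} changed since baseline – review before assuming a legitimate update")

if __name__ == "__main__":
    check()
```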
Assumption: Our mail server is the most valuable target for an attacker.
Reasoning: Assuming our attacker is out to steal proprietary information or just embarrass us, our mail server is the best target (except for maybe my personal desktop). That’s where our sensitive client information is, and we pretty much give everything else away for free.
Indicators: Either a rise in attack activity on our mail server, or new outbound connections/accounts.
Controls: We have multiple layers of security on the mail server. It’s on an isolated network with nothing else on that network segment to compromise. This is the one area I don’t want to discuss in detail, but we have at least two filtering layers to get to the server (more than just a firewall), and outbound connection restrictions with a serious deny-all policy. Our mail server is locked up in my house (no remote admins, no other sites on the server that could be compromised to get to us), but not connected to my home network. The server itself is locked down pretty tight – we don’t even allow AV/anti-spam on the server since that could be a vector for attack (in other words, we minimize message processing). There’s even more, but despite what they say a little obscurity is sometimes good for security. If someone can get this server, they’ve fracking earned it.
This is already longer than I planned, but you can see the process. I’ve done the same thing for my day to day system and laptop, with a set of corresponding controls. Despite all this I’ll probably be hacked someday, but it will take a hack of a lot of time and effort since I always assume I’m under attack, and take precautions far above normal best practices. My goal is to make the effort to get to me high enough that to succeed, someone will have to give up far more lucrative financial opportunities. Even bad guys need to feed their families.
Assumptions are good… as long as you understand the reasoning, define indicators to track if they are right or wrong over time, and use them to develop corresponding controls.
Posted at Thursday 12th November 2009 9:17 am
(6) Comments •
You can ignore this post if you aren’t interested in the for-pay side of Securosis (in other words, if you don’t want to give us any cash).
I try not to put too much of the business side here in the blog feed, but we’re doing our 2010 planning and have made some changes to our services. Since we continue to grow, we needed to formalize things a little more than we have in the past. Being transparent, we don’t have to hide any of this. So if you are looking for some independent analysis, here’s what’s on our plate for 2010.
For anything that’s public facing (whitepapers, webcasts, speaking) it has to comply with our objectivity standards (Totally Transparent Research). All of our services are open to users, vendors or the investment community, but I doubt any of you user types wants to sponsor a whitepaper.
- Retainer Programs: For 2010 we’ve split this into 2 levels – Basic and Premier. Basic is defined and priced based on the number of dedicated hours you think you might want per quarter (the lowest is 3), and includes unlimited “short” contact (quick emails/calls). Premier is our new program, priced at the level our largest retainer clients tended to go with ($7,500/quarter), but now includes unlimited hour-long calls, and up to 5 “extended inquiries” for deeper work. All retainer programs include discounts for all our other services, especially on-site days, which vary based on the tier of the retainer.
- Advisory Projects: These are custom scoped projects to meet specific objectives. Yes, pretty much just like consulting, even though we give it a fancy analyst name.
- On-Site Advisory Days: These are for on-site strategy work, although we can combine them with speaking engagements.
- White Papers/Published Research: We are formalizing our research agenda for the year and many of our papers are open for sponsorship. Sponsors cannot influence the content of the paper, but they also don’t have to pay if the paper ends up not meeting their needs. We do accept proposals for paper/research ideas, but if they don’t match our coverage agenda, or are biased by the nature of the topic, we can’t be involved. We do not do any ghostwriting. Right now we have a couple slots open on our database encryption and vulnerability assessment papers, and topics for 2010 include cloud computing (specifically a paper on data security in the cloud, and another on cloud-based security services), some data and application security topics we’re trying to narrow down, and additional work on patch management and Project Quant. There will be a bunch more, but we haven’t fully planned out our 2010 agenda yet.
- Webcasts/Speaking/Presentations: The usual – topics must be in our coverage areas and meet objectivity requirements, but we are otherwise pretty open on topics.
- Videocasts: Within the next few weeks we will be releasing our first in a series of short videos designed to supplement our other research. We’ve invested a lot of time and resources to be able to produce something a heck of a lot better than blurry talking heads, and some of these will be open for sponsorship. The first two should be ready within the next couple weeks – one on content analysis techniques, and the other on database activity monitoring collection techniques. They will average 5-15 minutes long, with laser focus on a specific topic.
And that’s it. We have some other exciting stuff in the works (all for the user community), but nothing we’re ready to announce yet.
Posted at Wednesday 11th November 2009 2:06 pm
(0) Comments •
By David J. Meier
At lunch last week, location-based privacy came up. I actively opt in to a monitoring service, which gets me a discount on insurance for a vehicle I own. My counterpart stated that they would never agree to anything of the sort because of the inherent breach of personal privacy and security. I responded that the privacy statement explicitly reads that the device does not contain GPS, nor does the company track the vehicle’s location. But even if the privacy statement said the opposite – should I care? Is location directly tied to some aspect of my life that might negatively impact me? And ultimately is security really tied to privacy in this context?
In a paper by Janice Tsai, Who’s Viewed You? The Impact of Feedback in a Mobile Location-Sharing Application (PDF) the abstract’s last line states, “…our study suggests that peer opinion and technical savviness contribute most to whether or not participants thought they would continue to use a mobile location technology.” This makes sense as I would self-qualify my ability to understand the technology enough to be able to control and measure the level of exposure I may create. Although the paper’s focus is ultimately on the feedback (or lack thereof) that these location-based services provide, it still contains interesting information. The thing that most intrigued me is that it never actually correlated privacy to security. I expected there to be a definitive point where users complained about being less secure somehow because they were being tracked. But nothing like that appeared.
I continued on my journey, looking to tie location-based privacy to security, and ran across another paper with a more promising title: “Location-Based Services and the Privacy-Security Dichotomy” by K. Michael, L. Perusco, and M. G. Michael. The paper provides much more warning of “security compromise” and “privacy risk”, but the problem remains – again, this paper doesn’t provide any hard evidence of how these location-based services actually create a security risk. In fact it’s more the opposite – they state that if we are willing to give up privacy, then our personal security may be increased. The authors mention the obvious risks, including lack of control and data leakage, but at this point, I’m still unsatisfied and have yet to find a clear understanding of how or why using a location-based service might ultimately make me less secure. So maybe it’s simply not so, and perhaps the real problem is outlined in section 3.2 of the paper: “The Human Need for Autonomy”.
Let’s be honest – it’s more psychological than anything, with a placeholder for obvious exceptions, the most notable being stalker scenarios linked to domestic abuse. Even in this scenario it may be a stretch to say that location-based services are really the root cause of decreased personal security. Sure, an angry ex may guess or even know a password to a webmail account and skim location data from communications, but the same could be done by picking the lock on a residence and stealing a daily planner. It’s an area that can easily be argued from either side, depending on how you define it in the end.
We’d like to think that nobody is tracking us, but we all carry mobile phones, we’re all recorded daily by countless cameras, we all badge in at work using RFID, we all swipe payment cards, and we all use the Internet (I’m generalizing “we” based on content distribution here, but flame if you must). Services like Google Latitude, Skyhook Wireless, and Yahoo! Fire Eagle add a level of usability, but in the grand scheme of things do they really impact your personal security? Probably not. In the meantime, my fellow netizens, we can at least make light of the situation while we discuss what it is and isn’t. It’s a place, no matter where we are, that can mockingly be referred to as: Oceania – because try as you might, someone is watching.
–David J. Meier
Posted at Tuesday 10th November 2009 5:35 pm
(6) Comments •
By Adrian Lane
Reading Bill Brenner’s PCI Security a Devil, ‘Like No Child Left Behind’, I had the impression that Brenner’s summary of Joshua Corman’s presentation boiled down to: Joshua was %#!*$ crazy. In a nutshell:
“Organizations have made PCI DSS and compliance in general the basis of their information security policies,” he said. “They’re basing security on sloppy logic from Visa and MasterCard and in the process are ignoring some very bad state-sponsored threats. As a community, we have not evolved at all.”
You have to read the whole article to fully grasp Corman’s nuances, and note that some of the inflammatory additions seem to be Bill’s, rather than direct quotes from Joshua. Still, while there are points I agree with, Corman seems to have connected the dots arbitrarily. Not only do I not see general security policies being based on compliance initiatives, I don’t buy the argument that compliance comes at the expense of security. Is there overlap? Absolutely. But the recognized lack of security is motivated by completely different forces. In the presence of evidence that many organizations are doing the absolute minimum to comply with regulations, how can you suppose that they would voluntarily invest in security without compliance requirements? Why would companies take a risk-based approach to spending efficiently, when they really don’t want to spend at all?
To me, companies embody the approach of The Three Wise Monkeys: “See no evil. Hear no evil. Speak no evil.”
Regulations espouse the ideals of safety, security and efficacy, and companies want tasks performed cheaply, quickly, and easily. Regulation is supposed to alter the way companies do business, providing guidance on how to realize the ideal. Companies often handle compliance as just another task, and try to address it from within the same processes the compliance mandate is designed to reform. If companies could be trusted to come close to the ideals and intentions, we would not have auditors.
Part of Corman’s presentation seems to be a derivative of his 8 Dirty Secrets presentation (summarized), where part 6 discusses how “Compliance Threatens Security”. Do I think that security product vendors are “…offering products that do everything from offer PCI compliance out of the box to ultimate cure-alls for healthcare entities coping with the demands of HIPAA”? Absolutely. But this was the cheapest, fastest and easiest way to comply. Take Sarbanes-Oxley as an example: products like Database Activity Monitoring and Log Management are the only way to achieve some of the required controls over automated financial systems that process millions of transactions a day. The fact that these unique data collection and analysis capabilities came from a security vendor is incidental. The security investment was made to satisfy a compliance mandate, not for the sake of security. The fact that the tools provide security as well is a by-product for many vendors and customers, often considered unimportant or incidental.
If I was going to create my own Dirty Little Secret list, I would say most companies treat security as “Don’t Ask, Don’t Tell”. Security tools that are bought to fulfill compliance have a bad habit of illuminating threats companies really don’t want to know about. They want to pass their compliance audits and not worry about the other problems discovered … those just lead to additional expenses. If you doubt my cynical perspective, look at how most firms react when told their corporate network is host to 5,000 bots that just commenced a DDOS attack on another company: they tend to threaten suit for invasion of privacy or libel. Another example we see is that a high percentage of companies have web application firewalls for PCI, but run them as monitors rather than proxies! They need a WAF to comply with PCI, so they bought one, but no one mandated they use it effectively. Security professionals really care about security, but executive management cares precisely as much as legal and finance tell them to.
I think security is a really hard problem, and far too often our attempts at security are flawed. I just don’t see any evidence that risk management is subjugated to compliance.
Posted at Monday 9th November 2009 5:41 pm
(5) Comments •
- Do not expect human behavior to change. You can affect habits, but not behavior.
- No security problem ever goes away. People have hit each other over the head with rocks, and cracked safes, for as long as rocks and safes have existed (which is why safes were invented, of course), and they will continue to hit each other with rocks and crack safes. Problems get better or worse, but never disappear.
Posted at Monday 9th November 2009 3:02 pm
(1) Comments •
By Adrian Lane
I was playing around with Google Dashboard this morning. After reading the CNET post on Google’s Data Liberation Project, and Google’s announcement of DataLiberation.org, I could not help but get excited about what they were doing. Trying to be ‘open’ and ‘liberate’ data sounds great!
Many web services make it difficult to leave their services – you have to pay them for exporting your data, or jump through all sorts of technical hoops – for example, exporting your photos one by one, versus all at once. We believe that users – not products – own their data, and should be able to quickly and easily take that data out of any product without a hassle. We’d rather have loyal users who use Google products because they’re innovative – not because they lock users in. You can think of this as a long-term strategy to retain loyal users, rather than the short-term strategy of making it hard for people to leave.
We’ve already liberated over half of all Google products, from our popular blogging platform Blogger, to our email service Gmail, and Google developer tools including App Engine. In the upcoming months, we also plan to liberate Google Sites and Google Docs (batch-export).
Awesome! I jumped right in as I had two very specific things to address. I wanted to see if I could remove some information from Google that would change Google search behavior. Those issues are:
1. After I responded to a friend’s email inquiry a few months ago (sent to my Gmail account) regarding a piece of electronics equipment, I started to see ads for that product in my search results. I have no interest in the product and it does not belong in my search results.
2. I do a lot of driving and I use Google and Amazon maps. Google has started altering my route endpoints arbitrarily. I own a home, but the address is not registered as my home address anywhere except tax records, and has never been used in any online search, much less a Google map search (for very specific reasons). But Google Maps has been altering the endpoints of my routes to direct me to this property; it’s not an address I want to travel to and I did not enter it. How Google found it and then associated it with me is an interesting question in and of itself, but to arbitrarily assume I want to go there is both annoying and disconcerting.
So I plunged right in and found: zero. Nothing that showed any of that data, nor how it was being used. Oh well. I guess my expectations were far too high. So I took a step back and looked at exactly what Google is offering.
Digging in, what does the concept of “liberated” data get me? To “… easily take that data out of any product without a hassle” is a nice idea. Medical records, photos, and social media site contents would be great to have copies of. But making digital copies is trivial, and I don’t think Google is talking about removal from products or services, but taking a copy and importing that copy into another app or service. Looking at the Dashboard, control and management is absent. To put this into context, when I think of data management, I think of the Data Security Lifecycle concept that Rich and I present at conferences. Data ownership and management is totally different than getting a copy. Most people will read this ‘take’ in a non-digital, real-world analog sense, meaning to ‘remove’. Google is using the digital sense, where ‘take’ is closer to ‘propagate’.
Furthermore, I am not sure just what exactly they mean by an “an open web run on open standards”. Is Google offering an open data format? An open API to control or manage data? Or do they mean all web data being open to web search (Google), and available to as many applications (Google) and services (Google) as you care to use?
It sounded so good, but unfortunately there does not seem to be anything of substance behind the press releases! That’s why I think this is all window dressing. Call me a skeptical security guy, but it looks like Google is taking a page out of Microsoft’s handbook, in that they are creating a tool to combat user fears and concerns, but data storage and management become tied more closely to Google, not less. Taking data from one place to another provides additional attributes and context that increases its value. Google remains in control and it will be very difficult to argue who owns that data.
Posted at Monday 9th November 2009 12:00 am
(1) Comments •