Today I was mildly snarky on the Security Metrics email list when a few people suggested that instead of talking about cloud computing we should talk about shared infrastructure. In their minds, ‘shared’ = ‘cloud’. I fully acknowledge that I may be misinterpreting their point, but this is a common thread I hear. Worse yet, very frequently when I discuss security risks, other security professionals key in on multitenancy as their biggest concern in cloud computing.
To be honest, multitenancy may be the least interesting aspect of the cloud from a security perspective.
Shared infrastructure and applications are definitely a concern – I don’t mean to say they pose no risk. But multitenancy is more an emergent property of cloud computing than an essential characteristic – and yes, I am deliberately using NIST terms.
In my humble opinion – please tell me if I’m wrong in the comments – the combination of resource pooling (via abstraction) and orchestration/automation creates the greatest security risk. This is primarily for IaaS and PaaS, but also can apply to SaaS when it isn’t just a standard web app.
With abstraction and automation we add a management layer that effectively network-enables direct infrastructure management. Want to wipe out someone’s entire cloud with a short bash script? Not a problem if they don’t segregate their cloud management and harden admin systems. Want to instantly copy the entire database and make it public? That might take a little PHP or Ruby code, but well under 100 lines.
In neither of those cases is relying on shared resources a factor – it is the combination of APIs, orchestration, and abstraction.
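To make that concrete, here is a minimal sketch of how little code a cloud-wide wipe takes once the management plane is API-enabled. The client class is a toy stand-in so the sketch runs anywhere – the method names are hypothetical, not any real provider SDK:

```python
# Hypothetical sketch: the "wipe someone's entire cloud" script from above.
# FakeCloudClient stands in for any IaaS management API; real SDK calls
# differ by provider. The point is the size of the script, not the names.

class FakeCloudClient:
    """Toy stand-in for a provider SDK, so the sketch runs anywhere."""
    def __init__(self, instance_ids):
        self.instances = set(instance_ids)
        self.terminated = []

    def list_instances(self):
        return sorted(self.instances)

    def terminate(self, instance_id):
        self.instances.discard(instance_id)
        self.terminated.append(instance_id)

def wipe_cloud(client):
    """Terminate every instance the credentials can see."""
    victims = client.list_instances()
    for instance_id in victims:
        client.terminate(instance_id)
    return victims

client = FakeCloudClient(["web-1", "web-2", "db-1"])
wipe_cloud(client)
print(client.instances)  # empty set – the entire "cloud" is gone
```

With real credentials and a real SDK this is the same handful of lines, which is exactly why segregating and hardening the management plane matters more than who shares your hypervisor.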
These risks aren’t fully obvious until you start really spending time using and studying the cloud directly – as opposed to reading articles and research reports. Even our cloud security class only starts to scratch the surface, although we are considering running a longer version where we spend a bunch more time on it.
The good news is that these are also very powerful security enablers, as you will see later today or tomorrow when I get up some demo code I have been working on.
Posted at Tuesday 9th July 2013 1:38 am
(2) Comments •
As some of you know, I’ve always been pretty critical of quantitative risk frameworks for information security, especially the Annualized Loss Expectancy (ALE) model taught in most of the infosec books. It isn’t that I think quantitative is bad, or that qualitative is always materially better, but I’m not a fan of funny math.
Let’s take ALE. The key to the model is that your annual predicted losses are the losses from a single event, times the annual rate of occurrence. This works well for some areas, such as shrinkage and laptop losses, but is worthless for most of information security. Why? Because we don’t have any way to measure the value of information assets.
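For reference, the model’s arithmetic is trivial – which is part of why it gets taught. A one-line sketch, with figures invented purely for illustration:

```python
# The textbook formula: Annualized Loss Expectancy (ALE) is the Single
# Loss Expectancy (SLE) times the Annualized Rate of Occurrence (ARO).
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

# The tractable case: laptops, where replacement cost is knowable.
# Invented figures: 40 laptops a year at $2,500 each.
print(ale(2500, 40))  # 100000
```

The formula only works when both inputs are measurable – which is exactly what breaks down for information assets.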
Oh, sure, there are plenty of models out there that fake their way through this, but I’ve never seen one that is consistent, accurate, and measurable. The closest we get is Lindstrom’s Razor, which states that the value of an asset is at least as great as the cost of the defenses you place around it. (I consider that an implied or assumed value, which may bear no correlation to the real value).
I’m really only asking for one thing out of a valuation/loss model:
The losses predicted by a risk model before an incident should equal, within a reasonable tolerance, those experienced after an incident.
In other words, if you state that asset X has $Y value, when you experience a breach or incident involving X, you should experience $Y + (response costs) in losses. I added “within a reasonable tolerance” because I don’t think we need complete accuracy, but we should at least be in the ballpark. You’ll notice this also means we need a framework, process, and metrics to accurately measure losses after an incident.
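As a sketch, the requirement reduces to a single predicate. The default 25% tolerance here is my arbitrary placeholder – nothing above specifies a number:

```python
# Does the valuation model hold up after an incident? True when the
# experienced loss matches the prediction ($Y + response costs) within
# the stated tolerance. The 25% default is a placeholder assumption.
def model_holds(asset_value, response_costs, experienced_loss, tolerance=0.25):
    predicted = asset_value + response_costs
    return abs(experienced_loss - predicted) <= tolerance * predicted

print(model_holds(100_000, 10_000, 115_000))  # True – in the ballpark
print(model_holds(100_000, 10_000, 500_000))  # False – the model failed
```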
If someone comes into my home and steals my TV, I know how much it costs to replace it. If they take a work of art, maybe there’s an insurance value or similar investment/replacement cost (likely based on what I paid for it). If they steal all my family photos? Priceless – since they are impossible to replace and I can’t put a dollar sign on their personal value. What if they come in and make a copy of my TV, but don’t steal it? Er… Umm… Ugh.
I don’t think this is an unreasonable position, but I have yet to see a risk framework with a value/loss model that meets this basic requirement for information assets.
Posted at Monday 24th May 2010 9:30 am
(34) Comments •
On Friday I asked a simple question over Twitter and then let myself get dragged into a rat-hole of a debate that had people pulling out popcorn and checking the latest odds in Vegas. (Not the odds on who would win – that was clear – but rather on the potential for real bloodshed).
And while the debate strayed from my original question, it highlighted a major problem we often have in the security industry (and probably the rest of life, but I’m not qualified to talk about that).
A common logical fallacy is to assume that a possibility is a probability. That because something can happen, it will happen. It’s as if we tend to forget that the likelihood something will happen (under the circumstances in question) is essential to the risk equation – be it quantitative, qualitative, or whatever.
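In its crudest quantitative form, the equation the fallacy ignores looks like this (figures invented for illustration):

```python
# The crudest form of the risk equation: expected loss is likelihood
# times impact. Both example scenarios below use made-up numbers.
def risk(likelihood, impact):
    return likelihood * impact

movie_plot = risk(0.000001, 10_000_000)  # spectacular but vanishingly rare
phishing = risk(0.25, 1_000)             # mundane but common
print(movie_plot < phishing)  # True – mere possibility doesn't dominate
```

A spectacular-but-rare event can carry far less risk than a mundane-but-common one, which is the whole point of keeping likelihood in the equation.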
Throughout the security industry we continually burn our intellectual capital by emphasizing low-probability events.
“Mac malware might happen, so all Mac users should buy antivirus or they’re smug and complacent.” This forgets that the odds of an average Mac user being infected by any type of malware are so low as to be unmeasurable – and lower than the odds of their system breaking due to problems with AV software. Sure, it might change. It will probably change, but we can’t predict that with any certainty, and until then our response should match the actual (current) risk.
Bluetooth attacks are another example. Possible? Sure. Probable? Not unless you’re at a security or hacker conference.
There are times, especially during scenario planning, to assume that anything that can happen will happen. But when designing actual security controls, we can’t treat all threats as equally likely.
Possible isn’t probable. The mere possibility of something is rarely a good reason to make a security investment.
Posted at Monday 7th December 2009 2:32 pm
(21) Comments •
We talk a lot about the role of anonymization on the Internet. On one hand, it’s a powerful tool for freedom of speech. On the other, it creates massive security challenges by greatly reducing attackers’ risk of apprehension.
The more time I spend in security, the more I realize that economics plays a far larger role than technology in what we do.
Anonymization, combined with internationalization, shifts the economics of online criminal activity. In the old days, to rob or hurt someone you needed a degree of physical access. The postal and phone systems reduced the need for this access, but also contain rate-limiters that reduce the scalability of attacks. Physical access corresponds to physical risk – particularly the risk of apprehension. A lack of sufficient international cooperation (or even consistent international laws), combined with anonymity and the scope and speed of the Internet, skews the economics in favor of the bad guys. There is a lower risk of capture, a lower risk of prosecution, limited costs of entry, and a large (global) scope for potential operations.
Heck, with economics like that, I feel like an idiot for not being a cybercriminal.
In security circles we spend a lot of time talking about the security issues of anonymity and internationalization, but these really aren’t the problem. The real problem isn’t the anonymity of users, but the anonymity of losses.
When someone breaks into your house, you know it. When a retailer loses inventory to shrinkage, the losses are directly attributable to that part of the supply chain, and someone’s responsible. But our computer security losses aren’t so clear, and in fact are typically completely hidden from the asset owner. Banking losses due to hacking are spread throughout the system, with users rarely paying the price.
Actually, that statement is completely wrong. We all pay for this kind of fraud, but it’s hidden from us by being spread throughout the system, rather than tied to specific events. We all pay higher fees to cover these losses. Thus we don’t notice the pain, don’t cry out for change, and don’t change our practices. We don’t even pick our banks or credit cards based on security any more, since they all appear the same.
Losses are also anonymized on the corporate side. When an organization suffers a data breach, does the business unit involved suffer any losses? Do they pay for the remediation out of their departmental budget? Not in any company I’ve ever worked with – the losses are absorbed by IT/security.
Our system is constructed in a manner that completely disrupts the natural impact of market forces. Those most responsible for their assets suffer minimal or no direct pain when they experience losses. Damages are either spread through the system, or absorbed by another cost center.
Now imagine a world where we reverse this situation. Where consumers are responsible for the financial losses associated with illicit activity in their accounts. Where business unit managers have to pay for remediation efforts when they are hacked. I guarantee that behavior would quickly change.
The economics of security fail because the losses are invisibly transferred away from those with the most responsibility. They don’t suffer the pain of losses, but they do suffer the pain/inconvenience of security. On top of that, many of the losses are nearly impossible to measure, even if you detect them (non-regulated data loss). No wonder they don’t like us.
Security professionals ask me all the time when users will “get it”, and management will “pay attention”. We don’t have a hope of things changing until those in charge of the purse strings start suffering the pain associated with security failures.
It’s just simple economics.
Posted at Friday 13th November 2009 1:33 pm
(2) Comments •
This is a great day for security researchers, and a bad day for anyone with a bank account.
First up is the release of the 2009 Verizon Data Breach Investigations Report. This is now officially my favorite breach metrics source, and it’s chock full of incredibly valuable information. I love the report because it’s not based on bullshit surveys, but on real incident investigations. The results are slowly spreading throughout the blogosphere, and we won’t copy them all here, but a few highlights:
- Verizon’s team alone investigated cases that resulted in the loss of 285 million records. That’s just them, never mind all the other incident response teams.
- Most organizations do a crap job with security – this is backed up with a series of metrics on which security controls are in place and how incidents are discovered.
- Essentially no organizations really complied with all the PCI requirements – but most got certified anyway.
Liquidmatrix has a solid summary of highlights, and I don’t want to repeat their work. As they say,
Read pages 46-49 of the report and do what it says. Seriously. It’s the advice that I would give if you were paying me to be your CISO.
And we’ll add some of our own advice soon.
Next is an article on organized cybercrime by Brian Krebs THAT YOU MUST GO READ NOW. (I realize it might seem like we have a love affair with Brian or something, but he’s not nearly my type). Brian digs beyond the report, and his investigative journalism shows what many of us believe to be true – there is a concerted attack on our financial system that is sophisticated, organized, and based out of Eastern Europe.
I talked with Brian and he told me,
You know all those breaches last year? Most of them are a handful of groups.
Here are a couple great tidbits from the article:
For example, a single organized criminal group based in Eastern Europe is believed to have hacked Web sites and databases belonging to hundreds of banks, payment processors, prepaid card vendors and retailers over the last year. Most of the activity from this group occurred in the first five months of 2008. But some of that activity persisted throughout the year at specific targets, according to experts who helped law enforcement officials respond to the attacks, but asked not to be identified because they are not authorized to speak on the record.
One hacking group, which security experts say is based in Russia, attacked and infiltrated more than 300 companies – mainly financial institutions – in the United States and elsewhere, using a sophisticated Web-based exploitation service that the hackers accessed remotely. In an 18-page alert published to retail and banking partners in November, VISA described this hacker service in intricate detail, listing the names of the Web sites and malicious software used in the attack, as well as the Internet addresses of dozens of sites that were used to offload stolen data.
Steve Santorelli, director of investigations at Team Cymru, a small group of researchers who work to discover who is behind Internet crime, said the hackers behind the Heartland breach and the other break-ins mentioned in this story appear to have been aware of one another and unofficially divided up targets. “There seem, on the face of anecdotal observations, to be at least two main groups behind many of the major database compromises of recent years,” Santorelli said. “Both groups appear to be giving each other a wide berth to not step on each others’ toes.”
Keep in mind that this isn’t the same old news. We’re not talking about the usual increase in attacks, but a sophistication and organizational level that developed materially in 2007-2008.
To top it all off, we have this article over at Wired on PIN cracking. This one also ties in to the Verizon report. Another quote:
“We’re seeing entirely new attacks that a year ago were thought to be only academically possible,” says Sartin. Verizon Business released a report Wednesday that examines trends in security breaches. “What we see now is people going right to the source … and stealing the encrypted PIN blocks and using complex ways to un-encrypt the PIN blocks.”
If you read more deeply, you learn that the bad guys haven’t developed some quantum crypto, but are taking advantage of weak points in the system where the data is unencrypted, even if only in memory.
Really fascinating stuff, and I love that we’re getting real information on real breaches.
Posted at Wednesday 15th April 2009 10:35 am
(0) Comments •
By Adrian Lane
Reading yet another comment on yet another blog about “what good is ABC technology because I can subvert the process” or “we should not use XYZ technology because it does not stop the threats” … I feel a rant coming on. I get seriously annoyed when I hear these blanket statements about how some technologies are no good because they can be subverted. I appreciate zeal in researchers, but am shocked by people’s myopia in applied settings. Seriously, is there any technology that cannot be compromised?
I got a chance to chat with an old friend on Friday and he reminded me of a basic security tenet … most security precautions are nothing more than ‘speed bumps’. They are not fool-proof, not absolute in the security they offer, and do not stand on their own without support. What they do is slow attackers down, making it more difficult and expensive – in time, money, and processing power – for them to achieve their goals. While I may not be able to brute force an already encrypted file, I can subvert most encryption systems, especially if I can gain access to the host. Can I get by your firewall? Yes. Can I get spam through your email filter? Absolutely. Can I find holes in your WAF policy set? Yep. Write malware that goes undetected, escalate user privileges, confuse your NAC, poison your logs, evade IDS, compromise your browser? Yep. But I cannot do all of these things at the same time. Some will slow me down while others detect what I am doing. With enough time and attention there are very few security products or solutions that would not succumb to attack under the right set of circumstances – but not all of them at one time. We buy anti-spam, even though it is not 100% effective, because it makes the problem set much smaller. We try not to click email links or visit suspect web sites because we know our browsing sessions are completely at risk. When we have solid host security to support encryption systems, we drop the odds of system compromise dramatically.
If you have ever heard me speak on security topics, you will have heard a line that I throw into almost every presentation: embrace insecurity! If you go about selecting security technologies thinking that they will protect you from all threats under all circumstances, you have already failed. Know that all your security measures are insecure to some degree. Admit it. Accept it. Understand it. Then account for it. One of the primary points Rich and I were trying to make in our Web Application Security paper was that there are several ways to address most issues, and that it’s like fitting pieces of a puzzle together to get reasonable security against your risks in a cost-effective manner. Which technologies and process changes you select depends on the threats you need to address, so adapt your plans to cover for these weaknesses.
Posted at Tuesday 24th March 2009 10:05 am
(3) Comments •
Nate Silver is one of those rare researchers with the uncanny ability to send your brain spinning off on unintended tangents totally unrelated to the work he’s actually documenting. His work is fascinating more for its process than its conclusions, and often generates new introspections applicable to our own areas of expertise. Take this article in Esquire where he discusses the concept of recency bias as applied to financial risk assessments.
Recency bias is the tendency to skew data and analysis towards recent events. In his economic example he compares the risk of a market crash in 2008 using data from the past 60 years vs. the past 20. The difference is staggering: one major downturn every 8 years (using 60 years of data) vs. one every 624 years (using only 20). As with all algorithms, input selection deeply skews output results, with the potential for cataclysmic conclusions.
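The mechanics are easy to sketch. The counts below are illustrative stand-ins for the article’s comparison, not Silver’s actual data (his 20-year figure came from a volatility model, not a raw count):

```python
# Same naive estimator, different windows of history, wildly different
# base rates. Counts are invented for illustration.
def implied_return_period(years_in_window, downturns_in_window):
    """Naive base-rate estimate: one downturn every N years."""
    if downturns_in_window == 0:
        return float("inf")  # a calm window "proves" crashes never happen
    return years_in_window / downturns_in_window

print(implied_return_period(60, 7))  # roughly one downturn every 8.6 years
print(implied_return_period(20, 0))  # inf
```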
In the information security industry I believe we just as frequently suffer from a selective inverse recency bias – giving greater credence to historical data than to more recent information, while editing out the anomalous events that should drive our analysis more than the steady state. Actually, I take that back: it isn’t just information security, but safety and security in general, and it is likely of deep evolutionary psychological origin. We cut out the bits and pieces we don’t like, while pretending the world isn’t changing.
Here’s what I mean: in security we often tend to assume that what’s worked in the past will continue to work in the future, even though the operating environment around us has completely changed. At the same time, we allow recency bias to intrude and selectively edit out our memories of negative incidents after some arbitrary time period. We assume what we’ve always done will always work, forgetting all the times it didn’t.
From an evolutionary psychology point of view (assuming you go in for that sort of thing) this makes perfect sense. For most of human history, what worked for the past 10, 20, or 100 years still worked well for the next 10, 20, or 100 years. It’s only relatively recently that the rate of change in society (our operating environment) accelerated to high levels of fluctuation within a single human lifetime. On the opposite side, we’ve likely evolved to overreact to short term threats over long term risks – I doubt many of our ancestors were the ones contemplating the best reaction to the tiger stalking them in the woods; our ancestors clearly got their asses out of there at least fast enough to procreate at some point.
We tend to ignore long term risks and environmental shifts, then overreact to short term incidents.
This is fairly pronounced in information security where we need to carefully balance historical data with our current environment. Over the long haul we can’t forget historical incidents, yet we also can’t assume that what worked yesterday will work tomorrow.
It’s important to use the right historical data in general, and more recent data in specific. For example, we know major shifts in technology lead to major new security threats. We know that no matter how secure we feel, incidents still occur. We know that human behavior doesn’t change, people will make mistakes, and are predictably unpredictable.
On the other hand, firewalls only stop a fraction of the threats we face, application security is now just as important as network security, and successful malware utilizes new distribution channels and propagation vectors.
Security is always a game of balance. We need to account for the past, without assuming its details are useful when defending against specific future threats.
Posted at Tuesday 17th February 2009 5:21 pm
(0) Comments •
I just read a great article on the Heartland breach, which I’ll talk more about later. There is one quote in there that really stands out:
End-to-end encryption is far from a new approach. But the flaw in today’s payment networks is that the card brands insist on dealing with card data in an unencrypted state, forcing transmission to be done over secure connections rather than the lower-cost Internet. This approach avoids forcing the card brands to have to decrypt the data when it arrives.
While I no longer think PCI is useless, I still stand by the assertion that its goal is to reduce the risks of the card companies first, and only peripherally reduce the real risk of fraud. Thus cardholders, merchants, and banks carry both the bulk of the costs and the risks. And here’s more evidence of its fundamental flaws.
Let’s fix the system instead of just gluing on more layers that are more costly in the end. Heck, let’s bring back SET!
Posted at Friday 30th January 2009 12:20 pm
(5) Comments •
You’ve probably noticed that we’ve been a little quieter than usual here on the blog. After blasting out our series on Building a Web Application Security Program, we haven’t been putting up much original content.
That’s because we’ve been working on one of our tougher projects over the past 2 weeks. Adrian and I have both been involved with data (information-centric) security since long before we met. I was the first analyst to cover it over at Gartner, and Adrian spent many years as VP of Development and CTO at data security startups. A while back we started talking about models for justifying data security investments. Many of our clients struggle with the business case for data security, even though they know its intrinsic value. All too often they are asked to use ROI or other inappropriate models.
A few months ago one of our vendor clients asked if we were planning any research in this area. We initially thought they wanted yet another ROI model, but once we explained our positions they asked to sign up and license the content. Thus, in the very near future, we will be releasing a report (also distributed by SANS) on The Business Justification for Data Security. (For the record, I like the term information-centric better, but we have to acknowledge the reality that “data security” is more commonly used).
Normally we prefer to develop our content live on the blog, as with the application security series, but this was complex enough that we felt we needed to form a first draft of the complete model, then release it for public review. Starting today, we’re going to release the core content of the report for public review as a series of posts. Rather than making you read the exhaustive report, we’re reformatting and condensing the content (the report itself will be available for free, as always, in the near future). Even after we release the PDF we’re open to input and intend to continuously revise the content over time.
The Business Justification Model
Today I’m just going to outline the core concepts and structure of the model. Our principal position is that you can’t fully quantify the value of information; it changes too often, and doesn’t always correlate to a measurable monetary amount. Sure, it’s theoretically possible, but practically speaking we assume the first person to fully and accurately quantify the value of information will win the Nobel Prize.
Our model is built on the foundation that you quantify what you can, qualify the rest, and use a structured approach to combine those results into an overall business justification. We purposely designed this as a business justification model, not a risk/loss model. Yes, we talk about risk, valuation, and loss, but only in the context of justifying security investments. That’s very different from a full risk assessment/management model.
Our model follows four steps:
- Data Valuation: In this step you quantify and qualify the value of the data, accounting for changing business context (when you can). It’s also where you rank the importance of data, so you know if you are investing in protecting the right things in the right order.
- Risk Estimation: We provide a model to combine qualitative and quantitative risk estimates. Again, since this is a business justification model, we show you how to do this in a pragmatic way designed to meet this goal, rather than bogging you down in near-impossible endless assessment cycles. We provide a starting list of data-security specific risk categories to focus on.
- Potential Loss Assessment: While it may seem counter-intuitive, we break potential losses out from the risk estimate, since a single kind of loss may map to multiple risk categories. Again, you’ll see we combine the quantitative and the qualitative. As with the risk categories, we also provide you with a starting list.
- Positive Benefits Evaluation: Many data security investments also contain positive benefits beyond just reducing risk/losses. Reduced TCO and lower audit costs are just two examples.
After walking through these steps we show how to match the potential security investment to these assessments and evaluate the potential benefits, which is the core of the business justification. A summarized result might look like:
- Investing in DLP content discovery (data at rest scanning) will reduce our PCI related audit costs by 15% by providing detailed, current reports of the location of all PCI data. This translates to $xx per annual audit.
- Last year we lost 43 laptops, 27 of which contained sensitive information. Laptop full drive encryption for all mobile workers effectively eliminates this risk. Since Y tool also integrates with our systems management console and tells us exactly which systems are encrypted, this reduces our risk of an unencrypted laptop slipping through the gaps by 90%.
- Our SOX auditor requires us to implement full monitoring of database administrators of financial applications within 2 fiscal quarters. We estimate this will cost us $X using native auditing, but the administrators will be able to modify the logs, and we will need Y man-hours per audit cycle to analyze logs and create the reports. Database Activity Monitoring costs $Y, which is more than native auditing, but by correlating the logs and providing the compliance reports it reduces the risk of a DBA modifying a log by Z%, and reduces our audit costs by 10%, which translates to a net potential gain of $ZZ.
- Installation of DLP reduces the chance of protected data being placed on a USB drive by 60%, the chances of it being emailed outside the organization by 80%, and the chance an employee will upload it to their personal webmail account by 70%.
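The arithmetic behind a bullet like the laptop example is simple enough to sketch. All inputs are the invented figures from the bullet, not real data:

```python
# The laptop bullet's arithmetic: 27 of 43 lost laptops held sensitive
# data; the control (verified encryption coverage) is assumed to be 90%
# effective. All figures are illustrative.
def expected_exposures(lost_devices, sensitive_fraction, control_effectiveness):
    """Expected sensitive-data exposures per year after applying a control."""
    return lost_devices * sensitive_fraction * (1 - control_effectiveness)

before = expected_exposures(43, 27 / 43, 0.0)  # no control: 27 exposures
after = expected_exposures(43, 27 / 43, 0.9)   # with control: about 2.7
print(round(before, 1), round(after, 1))
```

The justification then hangs on whether reducing roughly 24 expected exposures a year is worth the cost of the tool – which is the business question, not a security one.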
We’ll be detailing more of the sections in the coming days, and releasing the full report early next month. But please let us know what you think of the overall structure. Also, if you want to take a look at a draft (and we know you) drop us a line…
We’re really excited to get this out there. My favorite parts are where we debunk ROI and ALE.
Posted at Thursday 22nd January 2009 7:44 am
(6) Comments •
It looks like China is thinking about requiring in-depth technical information on all foreign technology products before they will be allowed into China.
I highly suspect this won’t actually happen, but you never know. If it does, here is a simple risk related IQ test for management:
- Will you reveal your source code and engineering documents to a government with a documented history of passing said information on to domestic producers, who often clone competitive technologies and sell them below the market price you’d like?
- Do you have the risk tolerance to accept domestic Chinese abuse of your intellectual property should you reveal it?
If the answer to 1 is “yes” and 2 is “no”, the IQ is “0”. Any other answer shows at least a basic understanding of risk tolerance and management.
I worked a while back with an Indian company that engaged in a partnership with China to co-produce a particular high value product. That information was promptly stolen and spread to other local manufacturers.
I don’t have a problem with China, but not only do they culturally view intellectual property differently than we do, there is a documented history of what the Western world would consider abuse of IP. If you can live with that, you should absolutely engage with that market. If you can’t accept the risk of IP theft, stay away.
(P.S. – This is also true of offshore development. Stop calling me after you have offshored and asking how to secure your data. You know, closing barn doors and cows and all).
Posted at Wednesday 10th December 2008 1:21 pm
(1) Comments •
I despise the very concept of mortality. That everything we were, are, and can be comes to a crashing close at some arbitrary deadline. I’ve never been one to accept someone telling me to do something just because “that’s the way it is”, and I feel pretty much the same way about death. Having seen far more than my fair share of it, I consider it nothing but random and capricious.
For those who follow Twitter, yesterday afternoon mortality bitch-slapped me upside the head. I found out that my cholesterol is two points shy of the thin black line that defines “high”. Being thirty-seven, a lifetime athlete, and a relatively healthy eater since my early twenties, my number shouldn’t even be on the same continent as “high”, never mind the same zip code. I clearly have my parents’ genes to blame, and since my father passed away many years ago of something other than heart disease, I get to have a long conversation with my mother this weekend about her poor gene selection. I might bring up the whole short thing while I’m at it (seriously, all I asked for was 5’9”).
I tend to look at situations like this as risk management problems. With potential mitigating actions, all of which come at a cost, and a potential negative consequence (well, negative for me), it slots nicely into a risk-based approach. It also highlights the single most important factor in any risk analysis – integrity. If you deceive yourself (or others) you can never make an effective risk decision. Let’s map it out:
Asset Valuation - Really fracking high for me personally, $2M to the insurance company (time limited to 20 years), and somewhere between zero and whatever for the rest of the world (and, I suspect, a few negative values circulating out there).
Risk Tolerance - Low. Oh sure, I’d like to say “none”, but the reality is if my risk tolerance was really 0, I’d mentally implode in a clash of irreconcilable risk factors as fear of my house burning around me conflicts with the danger of a meteor smashing open my skull like a ripe pumpkin when I walk outside. Since anything over 100 years old isn’t realistically quantifiable (and 80 is more reasonable), I’ll call 85 the low end of my tolerance, with no complaints if I can double that.
Risk/Threat Factors - Genetics, lifestyle, and medication. This one is pretty easy, since there are really only three factors that affect the outcome (in this dimension; I’m skipping cancer, accidents, and those freaky brain-eating bacteria found in certain lakes). I can only change two of the factors, each of which comes with both a financial cost and, for lack of a better word, a “pleasure” cost.
Risk Analysis - I’m going to build three scenarios:
- Since some of my cholesterol is good to normal (HDL and triglycerides), and only part of it bad (LDL and total serum), I can deceive myself into thinking I don’t need to do anything today and ignore the possibility of slowly clogging my arteries until a piece of random plaque breaks off and kills me in excruciating pain at an inconvenient moment. Since that’s what everyone else tends to do, we’ll call this option “best practices”.
- I can meet with my doctor, review the results, and determine which lifestyle changes and/or medication I can start today to reduce my long-term risks. I can reduce my intake of certain foods, switch to things like Egg Beaters, and increase my intake of high-fiber food and veggies. I’ll pay an additional financial cost for higher quality food, a time cost for the extra workouts, and a “pleasure” cost for fewer chocolate chip cookies. In exchange for giving up those french fries and gooey burritos I’ll be healthier overall and live a higher quality of life until I’m disemboweled by an irate ostrich while on safari in Africa.
- I can immediately switch to a completely heart-healthy diet and disengage from any activity that increases my risk of premature death (and isn’t all death premature?). I’ll never eat another cookie or french fry, and I’ll move to a monastery in a meteor-free zone to eliminate all stress from my life as I engage in whatever the latest medical journals define as the optimum diet and exercise plan. I will lead a longer, lower quality life until I’m disemboweled by an irate monk who is sick of my self-righteous preaching and mid-chant calisthenics. We’ll call this option the “consultant/analyst” recommendations.
Risk Decision and Mitigation Plan - Those three scenarios represent the low, middle, and high options. In every case there is a cost, but the cost falls either in the short term or the long term. None of the scenarios guarantees success. This is where the integrity comes in: I’ve tried to qualify all the appropriate costs in each scenario, and I don’t try to fool myself into thinking I can avoid those costs to steer myself towards the easy decision.
It would be easy to look at my various cholesterol levels and current lifestyle, then decide that maybe if I read the numbers from a certain angle nothing bad will happen. Or maybe I can just hang out without making changes until the numbers get worse, and fix things then. On the other end, I could completely deceive myself and decide that a bunch of extreme efforts will fix everything and I can completely control the end result, ignoring the cost and all the other factors out there.
But if I’m really honest to myself, I know that despite my low tolerance for an early death, I’m unwilling to pay the costs of extreme actions.
Thus I’m going to make immediate changes to my diet that I know I can tolerate in the long term, I’ll meet with my doctor and start getting annual tests, and I’ll slip less on my fitness plan when work gets out of control. I’m putting metrics in place that I can track over time, taking a programmatic approach, and not pretending I can control everything or completely eliminate the risk. If those changes aren’t enough, I’ll re-evaluate to build a more effective program and consider investing in medication.
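For fun, the scenario comparison above can be sketched as a toy expected-cost calculation. To be clear: every probability and cost below is invented purely for illustration (the only numbers from the post are the $2M valuation and the 20-year policy), so treat this as a sketch of the method, not of my actual health odds.

```python
# Toy expected-cost comparison of the three mitigation scenarios.
# All probabilities and annual costs are made up for illustration.

scenarios = {
    "best practices (do nothing)":  {"annual_cost": 0,    "p_early_death": 0.20},
    "moderate lifestyle changes":   {"annual_cost": 1500, "p_early_death": 0.10},
    "consultant/analyst (extreme)": {"annual_cost": 6000, "p_early_death": 0.07},
}

ASSET_VALUE = 2_000_000  # the insurance company's number from above
YEARS = 20               # matches the time-limited policy

def expected_cost(s):
    """Mitigation spend over the horizon, plus the probability-weighted loss."""
    return s["annual_cost"] * YEARS + s["p_early_death"] * ASSET_VALUE

for name in sorted(scenarios, key=lambda k: expected_cost(scenarios[k])):
    print(f"{name}: expected cost ${expected_cost(scenarios[name]):,.0f}")
```

With these made-up numbers the middle option wins, which matches the gut call: the extreme plan buys only a small extra risk reduction at a much higher cost. The point of the exercise is the integrity part, not the arithmetic: if you fudge the inputs, the model just launders the decision you already wanted.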
Here’s the secret of risk management: integrity. No risk framework, quantification scheme, or qualitative approach can ever compensate for self-deception. Nearly every major risk analysis failure comes down to someone, somewhere (if not everyone) closing their eyes and skewing the system to give a desired result. And the higher the stakes, the more likely we are to fool ourselves.
Posted at Tuesday 9th December 2008 2:30 am
When we talk about security threats we tend to break them down into all sorts of geeky categories. Sometimes we use high-level terms like client-side, targeted attack, or web application vulnerability. Other times we dig in and talk about XSS, memory corruption, and so on. You’ll notice we tend to mix in vulnerabilities when we talk about threats, but when we do, hopefully in our heads we’re following the proper taxonomy and actually thinking about that vulnerability being exploited, which is closer to a threat.
Anyway, none of that matters.
In security there are only two kinds of threats that affect us:
- Noisy threats that break things people care about.
- Quiet threats everyone besides security geeks ignores, because they don’t screw up anyone’s ability to get their job done or browse ESPN during lunch.
We get money for noisy threats, and get called paranoid freaks for trying to prevent quiet threats (which can still lose our organizations a boatload of money, but don’t interfere with the married CEO’s ability to flirt with the new girl in marketing over email).
Compliance, spam, AV, and old-school network attacks are noisy threats. Data breaches (unless you get caught), web app attacks, virtualization security, and most internal stuff are quiet threats.
Don’t believe me? Slice up your budget and see how much you spend preventing noisy vs. quiet threats. It’s often our own little version of security theater. And if you really want to understand a vertical market, one of the best things you can do is break out noisy vs. quiet for that market, and you’ll know what you’ll get money for.
The problem is, noisy vs. quiet may bear little to no relationship to your actual risk and losses, but that’s just human nature.
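The budget-slicing exercise above is simple enough to sketch in a few lines. The line items and dollar figures here are entirely hypothetical, but the shape of the result is what you'll usually see: the noisy side dominates.

```python
# Hypothetical budget-slicing exercise: tag each security line item as
# defending against "noisy" or "quiet" threats, then see where the money
# actually goes. All categories and figures are invented for illustration.

budget = [
    ("antivirus",               "noisy", 120_000),
    ("anti-spam gateway",       "noisy",  40_000),
    ("network IDS/IPS",         "noisy",  90_000),
    ("compliance audits",       "noisy", 150_000),
    ("web app security",        "quiet",  30_000),
    ("database monitoring",     "quiet",  20_000),
    ("virtualization security", "quiet",  10_000),
]

def split_by_threat_type(items):
    """Sum spend per threat category."""
    totals = {"noisy": 0, "quiet": 0}
    for _name, kind, amount in items:
        totals[kind] += amount
    return totals

totals = split_by_threat_type(budget)
grand = sum(totals.values())
for kind, amount in totals.items():
    print(f"{kind}: ${amount:,} ({amount / grand:.0%})")
```

Run the same exercise with your real budget lines and compare the split against where your actual losses come from; the gap between the two is the interesting number.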
Posted at Sunday 9th November 2008 11:46 pm
Over at Emergent Chaos, Adam raises the question of whether we are seeing more data breaches, or just more data breach reporting. His post is inspired by a release from the Identity Theft Resource Center stating that they’ve already matched the 2007 breach numbers this year.
Personally, I think it’s a bit of both, and we’re many years away from any accurate statistics for a few reasons:
- Breaches are underreported. As shown in the TJX case, not every company performs a breach notification (TJX reported; other organizations did not). I know of a case where a payment processor was compromised, records were lost for some financial services firms that ran through them, and only one of the three or four companies involved performed its breach notification. Let’s be clear: they absolutely knew they had a legal requirement to report, and that their customer information was breached, and they didn’t.
- Breaches are underdetected. I picked on some of the other companies fleeced along with TJX that later failed to report, but it’s reasonable that at least some of them never knew they were breached. I’d guess fewer than 10% of companies holding PII even have the means to detect a breach.
- Breaches do not correlate with fraud. Something else we’ve discussed here before. In short, there isn’t necessarily any correlation between a “breach” notification and any actual fraud, so the value of breach notification statistics is limited. A lost backup tape may contain 10 million records, yet I can’t find a single case where a lost tape correlated with fraud. My gut is that hacking attacks result in more fraud, but even that is essentially impossible to prove with today’s accounting.
- There’s no national standard for a breach, never mind an international standard. Every jurisdiction has its own definition. While many follow the California standard, many others do not.
Crime statistics are some of the most difficult to gather and normalize on the planet. Cybercrime statistics are even worse.
With all that said I need to go call Bank of America since we just got a breach notification letter from them, but it doesn’t reveal which third party lost our information. This is our third letter in the past few years, and we haven’t suffered any losses yet.
Posted at Tuesday 23rd September 2008 10:44 am
I got an interesting email right before I ran off on vacation from Mark on a PCI issue he blogged about:
13. Arrangements must be made to configure the intrusion detection system/intrusion prevention system (IDS/IPS) to accept the originating IP address of the ASV. If this is not possible, the scan should be originated in a location that prevents IDS/IPS interference.
I understand what the intention of this requirement is. If your IPS is blacklisting the scanner IPs then ASVs don’t get a full assessment, because they are a loud and proud scan rather than a targeted attack… However, blindly accepting the originating IP of the scanner leaves the hosts vulnerable to various attacks. Attackers can simply reference various public websites to see what IP addresses they need to use to bypass those detective or preventive controls.
I figured no assessor would ask their client to open up big holes just to do a scan, but lo and behold, after a little bit of research it turns out this is surprisingly common. Back to email:
It came up when I was told by my ASV (Approved Scanning Vendor) that I had to exclude their IPs. They also provided me with the list of IPs to exclude. Both [redacted] and [redacted] have told me I needed to bypass the IDS. When I asked about the exposure they were creating, both told me that their “other customers” do this and it isn’t a problem for them.
If your ASV can’t perform a scan/test without having you turn off your IDS/IPS, it might be time to look for a new one. Especially if their source IPs are easy to figure out.
For the record, “everyone else does it” is the dumbest freaking reason in the book. Remember the whole jumping off a bridge thing your mom taught you?
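If you do have to accommodate a scanner, one less-bad option is to make the exclusion conditional rather than permanent: only let the ASV's addresses bypass the IDS/IPS during a pre-agreed scan window. Here's a minimal sketch of that decision logic; the address range and window times are hypothetical, and a real deployment would hook this into whatever policy mechanism your IDS/IPS actually supports.

```python
# Sketch of time-limited allowlisting for an ASV scan, instead of a
# permanent IDS/IPS exclusion. The network and window are hypothetical.

from datetime import datetime
from ipaddress import ip_address, ip_network

ASV_NETWORK = ip_network("203.0.113.0/24")  # example range from the ASV

# Agreed scan window (start, end), e.g. in UTC
SCAN_WINDOW = (datetime(2008, 9, 20, 2, 0), datetime(2008, 9, 20, 6, 0))

def bypass_ips(src_ip: str, now: datetime) -> bool:
    """True only when the source is the ASV *and* a scan is scheduled."""
    start, end = SCAN_WINDOW
    return ip_address(src_ip) in ASV_NETWORK and start <= now <= end
```

Outside the window the ASV range gets normal IDS/IPS treatment, so an attacker who looks up the published scanner addresses gains nothing most of the time. It doesn't eliminate the exposure, but it shrinks it from "always" to a few scheduled hours.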
Posted at Friday 19th September 2008 5:21 am
By Adrian Lane
A very thought-provoking ‘Good until Reached For’ post over on Gunnar Peterson’s site this week. Gunnar ties together a number of recent blog threads to show, through the current financial crisis, how security and risk management best practices were not applied. There are many angles to this post, and Gunnar covers a lot of ground, but the concept that really resonated with me is automation of a process without verification.
From a personal angle, having a wife who is a real estate broker and many friends in the mortgage and lending industries, I have been hearing quiet complaints for several years now that buyers were not meeting the traditional criteria. People with $40k a year in household income were buying half-million-dollar homes. A lot of this was attributed to the entire loan approval process being automated in order to keep up with market demands. Banks automated the verification process to improve throughput and turnaround because there was demand for home loans. Mortgage brokers steered their clients to the banks known to have the fastest turnaround, mostly because those were the institutions that were not closely scrutinizing loans. This pushed more banks to further streamline and cut corners for faster turnaround in order to stay competitive; the business was originating loans, as that is how they made money.
The other angle, which was quite common, was that many mortgage brokers had learned to ‘game the system’ to get questionable loans through. For example, if a lender was known to have a much higher approval rate for college graduates than non-graduates given equal FICO scores, mortgage brokers would state that the buyer had a college degree, knowing full well that no one was checking the details. Verification of ‘stated income’ was minimal and thus often fudged. Property appraisers were often pushed to come up with valuations that were not in line with reality, as banks were not independently managing this portion of the verification process. When it came right down to it, the data was simply not trustworthy.
The quote from Ian Grigg above is interesting as well. I wonder if the comments are tongue in cheek, as I am not sure automation killed the core skill; rather, it detached personal supervision in some cases, and in others overwhelmed the individuals responsible because they could not stay competitive and still perform the necessary checks. As with software development, if it comes down to adding new features or being secure, new features almost always win. With competition between banks to make money in this GLBA-fueled land grab, good practices were thrown out the door as an impediment to revenue. If you look at the loan process and the various checkpoints and verifications that occur along the way, it is very similar in nature to the Sarbanes-Oxley goal of verifying accounting practices within IT. But rather than protecting investors from accounting oversights, these controls are in place to protect the banks from risk. Bypassing these controls is very disconcerting, as these banks understand financial history and risk exposure better than anyone.
I think that captures the gist of why sanity checks in the process are so important: to make sure we are not fundamentally missing the point of the effort and destroying all the safeguards for security and risk going in. More and more, we will see business processes automated for efficiency and timeliness; however, software needs to meet not only its functional specifications but its risk specifications as well. Ultimately this is why I believe that securing business processes is an inside-out game. Rather than bolting security and integrity onto the infrastructure, checks and balances need to be built into the software. This concept is not all that far from what we do today with unit testing and building debugging capabilities into software, but it needs to encompass audit and risk safeguards as well. Gunnar’s point about ‘Design for Failure’ really hits home when viewed in the context of the current crisis.
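To make the "checks and balances in the software" idea concrete, here is a toy sketch of built-in verification for an automated loan pipeline, written in the spirit of unit tests: each check either passes or flags the application for human review. The field names and thresholds are entirely invented for illustration, not taken from any real underwriting system.

```python
# Toy sanity checks built into an automated loan-approval process.
# Field names and thresholds are hypothetical, for illustration only.

def sanity_checks(app: dict) -> list[str]:
    """Return a list of red flags; an empty list means no human review needed."""
    flags = []
    # Loan size should bear some relationship to household income.
    if app["loan_amount"] > 4 * app["stated_income"]:
        flags.append("loan exceeds 4x stated income")
    # Stated income should be backed by documentation, not just trusted.
    if app["documented_income"] < 0.9 * app["stated_income"]:
        flags.append("stated income not supported by documents")
    # Appraisals should be cross-checked against comparable sales.
    if app["appraisal"] > 1.25 * app["comparable_sales_value"]:
        flags.append("appraisal far above comparable sales")
    return flags
```

The $40k-income buyer of a half-million-dollar home from the example above trips the first check immediately, which is exactly the point: the check costs nothing per application, and any flag routes the loan back to a human instead of letting the automation approve it unsupervised.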
Posted at Thursday 18th September 2008 2:22 am