By Adrian Lane
Off-topic post …
My wife is constantly reading about the banks and lending institutions, and likes to read to me every gory detail she learns. Occasionally I do listen. About a month ago she made the comment “If the banks do go under, we’ll have to go back to cash. That will be strange.” I thought about it for a while and I realized just how true that was. I seldom carry cash. I do a lot of my shopping on the Internet. Can’t really do that with cash very well. I used the credit card for everything … even the occasional Starbucks triple-shot-Hoff-inspired-venti-iced-coffee-with-splenda-shaken-not-stirred gets a credit card swipe. Then my wife says “Let’s see if we can go for a month without spending on the credit card. Just cash!” Being the contrarian that I am, I decided “What the heck, let’s try it.”
We failed miserably.
The whole thing about spending cash is you have to go somewhere and get cash before you can spend it. An important small step. We had a minor medical emergency and we were not about to slow down and get cash first. When you take enough out to cover expenses, the bank tellers get weird and antsy, like you are doing something wrong. Trying our best, by the end of the month we looked at the results, and we were still 60% credit card, 40% cash by dollar amount. But overall, our spending was down quite a bit. We hear about how much easier it is psychologically to spend money when it’s not cash, and now I see just how true that is. Either you reel yourself in because you are not sure you have enough cash on you, or you feel a little more attached to the money than to the abstract concept of money, and hold back on some purchases that are not necessary. So we are going to try it again this month, and we think we can reverse this to 60% cash, 40% CC.
I have never been mugged. I have never had my wallet stolen, and I am not really worried about carrying some cash around. I have had fraudulent charges on my credit card, more than a dozen times, and I am constantly worried about my bill having bogus charges. I usually state that the reason I use credit cards so much is that I have reduced risk. Lost or stolen, I am only liable for $50.00. Airline tickets and hotels are a nightmare without a credit card. And I would never buy something online without the ability to shield myself from bogus merchants. But my perspective has changed: in most common situations cash has lower risk than credit, and that realization has changed my behavior in a positive way.
It’s been an interesting experiment, and I think we are going to keep doing it for a while.
Posted at Tuesday 3rd March 2009 1:52 pm
(3) Comments •
By Adrian Lane
It’s Friday again and time for the summary. It’s been a yin & yang kind of week for me, with mixed blessings and curses all around.
On the down side, Friday is always the day for bad news. It’s the day that Fannie Mae, Countrywide and others announce impending disaster so as to lessen the impact on the market. I just have to wonder if they learned that from Office Space. Based upon what I am seeing in the press, and some things here in Arizona, this Friday will be no exception as I expect there to be another big bank announcement. Four friends have lost jobs in the last week and are struggling to find any work, and I am going to have to help a friend move this weekend because their house is going back to the bank. One person I know had someone access their bank account with a fake ATM card, and my next door neighbor got a call Tuesday from Wells Fargo as someone was trying to make a “Phone Cash Advance” on their account. And yet another indication that the system is broken is the credit shell game, with Experian no longer willing to sell credit scores to consumers. Technically, they were not doing it before, but when pushed to sell consumers the real FICO scores, instead of the “FAKO’s” they have been providing, they decided to bow out. Should we just go back to cash? That would solve a lot of problems.
On the positive side we here at Securosis are in a very good mood and have high hopes for the future. Principal among the reasons for this is we are officially on “Nugget watch”, or rather we are waiting for the little Mogull to arrive soon. Mom is in good health and spirits while Rich is furiously decorating, arranging and preparing for the arrival. Male nesting … it’s simultaneously cute and sad to watch. But I have to say, the baby’s room looks great! Stay tuned as I will post something as soon as I hear more news.
I had several conversations with different SIM/SEM vendors this week and I view the changes as positive. It’s no longer “Gee, look at all this neat data we have” nor trying to convince customers how great aggregation is (gaak!), and more about using that data to solve business problems and building some intelligence into the products. Rich and I are seeing some very cool things happening around encryption and key management that should make a lot of people very happy, and we will begin the encryption series we promised in the next couple weeks. And it looks like Motorola found some loose change under the couch, spinning out Good Technology to Visto; Visto should be able to put the technology to good use. That’s all positive! Rich & I are both wrapping up a couple of interesting projects and about to commence on new ones as well so things are busy. I am even starting to get excited about going to Source Boston and seeing a bunch of friends. Maybe we will even get to see where Mr. Hoff lands!
Rolling into the weekend I am focused on the positive, so here it is, the week in review:
Favorite Securosis Posts:
Favorite Outside Posts:
Top News and Posts:
Blog Comment of the Week:
Allen Barronov on Will This Be The Next PCI Requirement Addition:
If you are putting money down, I’ll take you up on it; let me just get some poor sucker’s credit card details in case I lose.
On a serious note: DLP is very reactive.
One advantage is that your CEO doesn’t have to say (quoting from Bob Carr) “we were alerted by Visa”, which sounds very weak and can really be read as “we had no idea that people stole information from us until someone else told us about it”. This is apparently quite normal.
Proactive is to analyse the entire PCI process from start to end and secure it accordingly.
A few companies that I have had the privilege of working for have firewalled their “process network” off from their main business network. The reason to do this is really to protect availability. If a virus hits the business network then the (real) money-making part of the business can still function - there may be pain but the gadgets still get made/gathered/fixed/etc.
A payment processing business should think the same way: PCI transmission is different from normal network traffic, and they should separate it accordingly. If Sue from Accounts gets a virus on her PC, it should not impact PCI processing in any way (CIA).
I really like DLP but it is not a cure for bad network design.
I guess the answer is layers. Good network design (based on Business Processes) with DLP to catch the drips.
“You know what else everyone likes? Parfaits.” Donkey in Shrek.
Now, I am off for some more stealth photography.
Posted at Saturday 28th February 2009 4:40 pm
(3) Comments •
By Adrian Lane
While both Rich and I predicted this would happen, I admit I am still slightly surprised: Netezza has acquired Tizor for $3.1M in cash. Netezza press release here, and while I do not see a press release issued from either vendor, xconomy has the story here. Surprising in the sense that I would not have expected a data warehousing vendor to acquire a database monitoring & auditing company. My guess is it’s the auto-discovery features that most interest them. But like many companies that provide data management and analysis, Netezza may be finding that their customers are asking for security and even compliance facilities around the data they manage. In that case, this move could really pay off.
I am certain that they were hoping for more, but $3M in cash is a pretty good return for their investors given the current market conditions and the competitiveness of the DAM market. While it is my personal opinion, I have never considered the Tizor technology a class-leading product. It took them a very long time to adapt the network monitoring appliance into a competitive product that met market demand. Their audit offering was not endorsed by companies I know who have evaluated the technology. They had some smart people over there, but like many of the DAM competitors, they struggled to understand the customer buying center and lacked the laser focus that vendors like Guardium have demonstrated. But they made consistent upgrades to the product, and the auto-discovery option last year was a very smart move. All in all, Netezza is getting value, and the Tizor investors are getting about $3M more than they would have a few months from now.
I have to admit that my timing of these events has been wrong … I thought this transaction would have happened at or by the end of last year, and I am still waiting for more. But the DAM vendors who are not profitable have a huge problem: move too quickly and you kill your value; move too slowly and you are out of business. Sometimes the due diligence process takes a while.
Check back later, as I will update the post as I hear more, or if Rich weighs in on this subject.
Posted at Friday 27th February 2009 9:06 pm
(2) Comments •
I was getting a little excited when I read this article over at NetworkWorld about how the PCI Council will be releasing a prioritized roadmap for companies facing compliance. It’s a great idea: instead of flogging companies with a massive list of security controls, it will prioritize those controls and list specific milestones.
Now before I get to the fun part, I want to quote myself from one of my posts on PCI:
Going back to CardSystems, a large majority of major breaches involve companies that were PCI compliant, including (probably) Hannaford. TJX is an open question. In many cases, the companies involved were certified but found to be non-compliant after the breach, which indicates a severe breakdown in the certification process.
Now on to the fun (emphasis added by moi):
Businesses that are compliant with PCI standards have never been breached, says Bob Russo, general manager of the PCI Security Standards Council, or at least he’s never seen such a case. Victims may have attained compliance certification at some point, he says, but none has been in compliance at the time of a breach, he says.
What a load of shit. With the volume of breaches we’ve seen, this either means the standard and certification process are fundamentally broken, or companies have had their certifications retroactively revoked for political reasons. As I keep saying, PCI is really about protecting the card companies first, at as little cost to them as possible, and everyone else comes a distant second. It could be better, and the PCI Council has the power to make it so, but only if the process is fixed with more accountability for assessors, a revised assessment/audit process (not annual), a change to real end-to-end encryption, and a real R&D effort to fix the fundamental flaws in the system, instead of layering on patches that can never completely work.
You could also nominate me for the PCI Council Board of Advisors. I’m sure that would be all sorts of fun.
Seriously – we can fix this thing, but only by fixing the core of the program, not by layering on more controls and requirements.
Posted at Friday 27th February 2009 12:26 pm
(15) Comments •
By Adrian Lane
Just ran across this article on workers “stealing company data” on the BBC news web site. The story is based upon a recent Ponemon study (who else?) of former employees and the likelihood they will steal company information. It turns out that most of those polled will in fact take something with them. The Ponemon numbers are not surprising as this tracks closely with traditional forms of employee theft across most industries. What got me shaking my head was the sheer quantity of FUD being thrown out with the raw data.
A “surging wave” of activity? You bet there is! And it tightly corresponds to the number of layoffs. I am guessing when I say that the point Kevin Rowney of Vontu (now part of Symantec) was trying to make is that companies do very little to protect information from insiders, especially during layoffs. But the author makes it sound as if insider theft is bringing about the collapse of western civilization.
What I don’t believe we can do here is try to justify security spending by saying “Look at these losses in revenue! They are staggering! We’re getting killed by insider theft!” These companies are in trouble to begin with, which is why they are laying people off. Ex-employees may be taking information because their accounts are still active, or they may have left with it at the time they were fired. But just because an employee walked out with the information does not necessarily mean that the company suffered a loss. That data has to be used in some manner that affects the value of the company, or results in lost sales. And the capability for ex-employees to do this, especially in this economy, is probably going down, not up.
The employee who has backup tapes in their closet may dream about “sticking it” to their former employer, but odds are high that the information the employee has will never result in the company suffering damages. Heck, they would actually have to land a new job before that could happen. I know some HR reps who probably envision their ex-employees contacting underground ‘connections’ to sell off backup tapes, but how many employees do you really think can carry this off? You think they are going to sell it on eBay? Call a competitor? We have seen how that turns out. No use, no loss.
“I had a very strong work ethic. The problem was my ethics in work.”
There is also a huge double standard here, where most companies propagate the very activity they decry. When I worked at a brokerage, one of our biggest fears was that an employee would steal one of our “books of business” and take it to another brokerage; that was when I first learned about the difficulties of protecting data from insiders and enforcing proper use. On the flip side, it was expected that every broker who interviewed had their own “book of business”. If they didn’t, they were ‘losers’ or some other expletive right out of Glengarry Glen Ross. Having existing relationships that could immediately bring clients into the organization was one of the top 5 considerations for employment. Most salesmen, attorneys, financiers and executives are considered not just for the skills they possess, but for the relationships they have and the knowledge they bring to the position. That knowledge is typically in their heads, their Rolodexes and their iPhones. I am not saying that they did not have paper or electronic backups as well, as 15% of the respondents admitted they did. My point is that companies cry foul that they are the victims of insider theft, when in reality they fired or laid off an employee, and that employee took a job with a competitor. I have trouble calling that an insider attack.
Posted at Thursday 26th February 2009 11:03 am
(1) Comments •
A month or so ago I was invited by Jeremiah Grossman to help judge the Top 10 Web Hacking Techniques of 2008 (my fellow judges were Hoff, H D Moore, and Jeff Forristal).
The judging ended up being quite a bit harder than I expected- some of the hacks I was thinking of were from 2007, and there were a ton of new ones I managed to miss despite all the conference sessions and blog reading. Of the 70 submissions, I probably only remembered a dozen or so… leading to hours of research, with a few nuggets I would have missed otherwise.
I was honored to participate, and you can see the results over here at Jeremiah’s blog.
Posted at Wednesday 25th February 2009 1:09 pm
(0) Comments •
Had a very interesting call today with a client in the pharma research space. They would like to protect clinical study data as it moves to researchers’ computers, but are struggling with the best approach. On the call, I quickly realized that DLP, or a content tracking tool like Verdasys (who also does endpoint DLP), would be ideal. The only problem? They need Windows, Mac, and Linux support.
I couldn’t think offhand of any DLP/tracking tool (or even DRM) that works on all 3 platforms. This is an open call for you vendors to hit me up if you can help.
For you end users, where we ended up was with a few potential approaches:
- Switch to a remote virtual/hosted desktop for handling the sensitive data, such as Citrix or VMware.
- Use Database Activity Monitoring to track who pulls the data.
- Endpoint encryption to protect the data from loss, but it won’t help when it’s moved to inappropriate locations.
- Network DLP to track it in email, but without the endpoint coverage it leaves a really big hole.
- Content discovery to keep some minimal tracking of where it ends up (for managed systems), but that means opening up SMB/CIFS file sharing on the endpoint for admin access, which is itself a security risk.
- Distributed encryption, which *does* have cross platform support, but still doesn’t stop the researcher from putting the data someplace it shouldn’t be, which is their main concern.
While this is one of those industries (research) with higher Mac/cross platform use than the average business, this is clearly a growing problem thanks to the consumerization of IT.
This situation also highlights how no single-channel solution can really protect data well. It’s the mix of network, endpoint, and discovery that really allows you to reduce risk without killing business process.
Posted at Wednesday 25th February 2009 12:54 pm
(4) Comments •
I’m almost willing to bet money on this one…
Due to the nature of the recent breaches, such as Hannaford, where data was exfiltrated over the network, I highly suspect we will see outbound monitoring and/or filtering in the next revision of the PCI DSS. For more details on what I mean, refer back to this post.
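To make that prediction concrete: by “outbound monitoring and/or filtering” I mean something like a default-deny egress policy on the cardholder data segment, so exfiltrated data has nowhere quiet to go. Here is a rough iptables sketch; the addresses and ports are placeholders for illustration, not a recommendation, and certainly not anything the PCI Council has published.

```shell
# Sketch of default-deny egress filtering for a cardholder-data segment.
# All addresses and ports below are placeholders for illustration only.
iptables -P OUTPUT DROP                                   # default-deny outbound
iptables -A OUTPUT -o lo -j ACCEPT                        # allow loopback
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp -d 203.0.113.10 --dport 443 -j ACCEPT  # payment processor
iptables -A OUTPUT -p udp -d 192.0.2.53 --dport 53 -j ACCEPT     # internal DNS only
iptables -A OUTPUT -j LOG --log-prefix "EGRESS-DENY: "           # log everything else
```

Monitoring is the other half: even where you cannot block, logging denied outbound connections gives you the kind of exfiltration trail that, in the Hannaford-style breaches, the card brands currently discover for you.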
Consider this your first warning.
Posted at Saturday 21st February 2009 2:40 pm
(4) Comments •
Last Friday Adrian sent me an IM that he was just about finished with the Friday summary. The conversation went sort of like this:
Me: I thought it was my turn? Adrian: It is. I just have a lot to say.
It’s hard to argue with logic like that.
This is a very strange week here at Securosis Central. My wife was due to deliver our first kid a few days ago, and we feel like we’re now living (and especially sleeping) on borrowed time. It’s funny how procreation is the most fundamental act of any biological creature, yet when it happens to you it’s, like, the biggest thing ever! Sure, our parents, most of our siblings, and a good chunk of our friends have already been through this particular rite of passage, but I think it’s one of those things you can never understand until you go through it, no matter how much crappy advice other people give you or books you read.
Just like pretty much everything else in life.
Onto the week in review:
Webcasts, Podcasts, Outside Writing, and Conferences:
Favorite Securosis Posts:
Favorite Outside Posts:
Top News and Posts:
Blog Comment of the Week: Sharon on New Database Configuration Assessment Options
IMO mValent should be compared with CMDB solutions. They created a compliance story which in those days (PCI) resonates well.
You probably know this as well as I do (now I’m just giving myself some credit) but database vulnerability assessment should go beyond the task of reporting configuration options and which patches are applied. While those tasks are very important I do see the benefits of looking for actual vulnerabilities. I do not see how Oracle will be able to develop (or buy), sell and support a product that can identify security vulnerabilities in its own products.
Having said that, I am sure that many additional customers would look and evaluate mValent. The CMDB giants (HP, IBM and CA) should expect more competitive pressure.
Posted at Saturday 21st February 2009 1:39 pm
(1) Comments •
By Adrian Lane
Oracle has acquired mValent, the configuration management vendor. mValent provides an assessment tool to examine the configuration of applications. Actually, they do quite a bit more than that, but I want to focus on the value to database security and compliance in this post. This is a really good move on Oracle’s part, as it fills a glaring hole that has existed for some time in their security and compliance offerings. I have never understood why Oracle did not provide this as part of OEM, as every Oracle event I have attended in the last 5 years has had sessions where DBAs swap scripts to assess their databases. Regardless, they have finally filled the gap. It provides them with a platform to implement their own best practice guidelines, and gives customers a way to implement their own security, compliance and operational policies around the database and (I assume) other application platforms. Sadly, many companies have not automated their database configuration assessments; the market remains wide open, and this is a timely acquisition.
While the value proposition for this technology will be spun by Oracle’s marketing team in a few dozen different ways (change management, compliance audits, regulatory compliance, application controls, application audits, compliance automation, etc), don’t get confused by all of the terms. When it comes down to it, this is an assessment of application configuration. And it does provide value in a number of ways: security, compliance and operations management. The basic platform can be used in many different ways all depending upon how you bundle the policy sets and distribute reports.
Also keep in mind that a ‘database audit’ and ‘database auditing’ are two completely different things. Database auditing is about examining transactions. What we are talking about here is how the database is configured and deployed. To avoid the deliberate market confusion on the vendors’ part, here at Securosis we will stick to the terms Vulnerability Assessment and Configuration Assessment to describe the work being performed.
Tenable Network Security has also announced on their blog that they now have the ability to perform credentialed scans of the database. This means that Nessus is no longer just a pen-test style patch level checker, but a credentialed/peer-based configuration assessment tool. By ‘credentialed’ I mean that the scanning tool has a user name and password with some access rights to the database. This type of assessment provides a lot more functionality, because far more information is available to you than through a penetration test. This is a necessary progression for the product, as the ports, quite specifically the database ports, no longer return sufficient information for a good assessment of patch levels, or any of the important configuration details.
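The idea reduces to a simple pattern: authenticate, read the actual settings, and diff them against a policy. Below is a minimal sketch; the parameter names, required values, and the canned fetch_config stub are all invented for illustration (a real scanner such as Nessus logs in, queries the database’s configuration views, and ships far richer policy sets).

```python
# Minimal sketch of a credentialed configuration assessment: with real
# credentials we read actual settings instead of guessing from open ports.
# Parameter names and required values here are invented for illustration.

policy = {
    "remote_login_passwordfile": "EXCLUSIVE",
    "audit_trail": "DB",
    "sec_max_failed_login_attempts": "10",
}

def fetch_config(credentials):
    # A real scanner would log in with `credentials` and query the
    # database's configuration views; canned values keep this self-contained.
    return {
        "remote_login_passwordfile": "EXCLUSIVE",
        "audit_trail": "NONE",
        "sec_max_failed_login_attempts": "10",
    }

def assess(credentials):
    """Compare actual settings against the policy; return the deviations."""
    actual = fetch_config(credentials)
    findings = []
    for param, required in policy.items():
        value = actual.get(param, "<missing>")
        if value != required:
            findings.append(f"{param}: expected {required}, found {value}")
    return findings

for finding in assess({"user": "scanner", "password": "example"}):
    print(finding)  # audit_trail: expected DB, found NONE
```

The credential is the whole point: none of these values are visible from the network side, which is why a pure port-level scan can no longer judge patch levels or configuration.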
If you want to produce meaningful compliance reports, this is the type of scan you need. While I occasionally rip Tenable as this is something they should have done two years ago, it is really a great advancement for them, as it opens up the compliance and operations management buying centers. Tenable must be considered a serious player in this space, as theirs is a low-cost, high-value option. They will continue to win market share as they flesh out the policy set to include more of the industry best practices and compliance tests.
Oracle will represent an attractive option for many customers, and they should be able to immediately leverage their existing relationships. While not cutting-edge or best-of-breed in this class, I expect many customers will adopt it, as it will be bundled with what they are already buying, or because the investment is considered lower risk when you go with the world’s largest business software vendor. On the opposite end of the spectrum, companies who do not view this as business critical but still want thorough scans will employ the cost-effective Tenable solution. Vendors like Fortinet, with their database security appliance, and Application Security’s AppDetective product will be further pressed to differentiate their offerings to compete with the perceived top and bottom ends of the market. Things should get interesting in the months to come.
Posted at Wednesday 18th February 2009 6:52 pm
(1) Comments •
I loved being a firefighter. In what other job do you get to speed around running red lights, chop someone’s door down with an axe, pull down their ceiling, rip down their walls, cut holes in their roof with a chainsaw, soak everything they own with water, and then have them stop by the office a few days later to give you the cookies they baked for you?
Now, if you try and do any of those things when you’re off duty and the house isn’t on fire, you tend to go to jail. But on duty and on fire? The police will arrest the homeowner if they get in your way.
Society has long accepted that there are times when the public interest outweighs even the most fundamental private rights. Thus I think it is long past time we applied this principle to cybersecurity and authorized appropriate intervention in support of national (and international) security.
One of the major problems we have in cybersecurity today is that the vulnerabilities of the many are the vulnerabilities of everyone. All those little unpatched home systems out there are the digital equivalent of burning houses in crowded neighborhoods. Actually, it’s probably closer to a mosquito-infested pool an owner neglects to maintain. Whatever analogy you want to use, in all cases it’s something that, if it were the physical world, someone would come to legally take care of, even if the owner tried to stop them.
But we know of multiple cases on the Internet where private researchers (and likely government agencies) have identified botnets or other compromised systems being used for active attack, yet due to legal fears they can’t go and clean the systems. Even when they know they have control of the botnet and can erase it and harden the host, they legally can’t. Our only option seems to be individually informing ISPs, which may or may not take action, depending on their awareness and subscriber agreements.
Here’s what I propose. We alter the law and empower an existing law enforcement agency to proactively clean or isolate compromised systems. This agency will be mandated to work with private organizations who can aid in their mission. Like anything related to the government, it needs specific budget, staff, and authority that can’t be siphoned off for other needs.
When a university or other private researcher discovers some botnet they can shut down and clean out, this law enforcement agency can review and authorize action. Everyone involved is shielded from being sued short of gross negligence. The same agency will also be empowered to work with international (and national) ISPs to take down malicious hosting and service providers (legally, of course). Again, this specific mission must be mandated and budgeted, or it won’t work.
Right now the bad guys operate with impunity, and law enforcement is woefully underfunded and undermandated for this particular mission. By engaging with the private sector and dedicating resources to the problem, we can make life a heck of a lot harder for the bad guys. Rather than just trying to catch them, we devote as much or more effort to shutting them down.
Call me an idealist.
(I don’t have any digital pics from firefighting days, so that’s a more-recent hazmat photo. The bandana is to keep sweat out of my eyes; it’s not a daily fashion choice).
Posted at Wednesday 18th February 2009 12:50 pm
(10) Comments •
Nate Silver is one of those rare researchers with the uncanny ability to send your brain spinning off on unintended tangents totally unrelated to the work he’s actually documenting. His work is fascinating more for its process than its conclusions, and often generates new introspections applicable to our own areas of expertise. Take this article in Esquire where he discusses the concept of recency bias as applied to financial risk assessments.
Recency bias is the tendency to skew data and analysis towards recent events. In the economic example he uses, he compares the risk of a market crash in 2008 using data from the past 60 years vs. the past 20. The difference is staggering: one major downturn every 8 years (using 60 years of data) vs. one every 624 years (using only 20 years of data). As with all algorithms, input selection deeply skews output results, with the potential for cataclysmic conclusions.
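The window effect is easy to reproduce. This toy sketch (with invented incident years, not Silver’s actual data) shows how the same history yields a sober estimate or a rosy one depending purely on the lookback window you choose.

```python
# Toy illustration of recency bias: identical history, different lookback
# windows, wildly different risk estimates. Incident years are invented.
incident_years = [1950, 1958, 1966, 1974, 1982]  # quiet after 1982

def estimated_interval(current_year, window):
    """Average years between incidents seen in the last `window` years."""
    hits = [y for y in incident_years if y >= current_year - window]
    if not hits:
        return float("inf")  # no incidents observed -> "it never happens"
    return window / len(hits)

print(estimated_interval(2008, 60))  # 12.0 -> one incident every ~12 years
print(estimated_interval(2008, 20))  # inf  -> the quiet window hides the risk
```

Shrink the window past the last anomaly and the model concludes the anomaly cannot happen, which is exactly the failure mode the 2008 risk models exhibited.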
In the information security industry I believe we just as frequently suffer from selective inverse recency bias: giving greater credence to historical data over more recent information, while editing out the anomalous events that should drive our analysis more than the steady state. Actually, I take that back: it isn’t just information security, but safety and security in general, and it is likely of deep evolutionary psychological origin. We cut out the bits and pieces we don’t like, while pretending the world isn’t changing.
Here’s what I mean: in security we often assume that what’s worked in the past will continue to work in the future, even though the operating environment around us has completely changed. At the same time, we allow recency bias to intrude and selectively edit out our memories of negative incidents after some arbitrary time period. We assume what we’ve always done will always work, forgetting all the times it didn’t.
From an evolutionary psychology point of view (assuming you go in for that sort of thing) this makes perfect sense. For most of human history what worked for the past 10, 20, or 100 years still worked well for the next 10, 20, or 100 years. It’s only relatively recently that the rate of change in society (our operating environment) accelerated to high levels of fluctuation in a single human lifetime. On the opposite side, we’ve likely evolved to overreact to short term threats over long term risks- I doubt many of our ancestors were the ones contemplating the best reaction to the tiger stalking them in the woods; our ancestors clearly got their asses out of there at least fast enough to procreate at some point.
We tend to ignore long term risks and environmental shifts, then overreact to short term incidents.
This is fairly pronounced in information security where we need to carefully balance historical data with our current environment. Over the long haul we can’t forget historical incidents, yet we also can’t assume that what worked yesterday will work tomorrow.
It’s important to use the right historical data in general, and more recent data in specific. For example, we know major shifts in technology lead to major new security threats. We know that no matter how secure we feel, incidents still occur. We know that human behavior doesn’t change, people will make mistakes, and are predictably unpredictable.
On the other hand, firewalls only stop a fraction of the threats we face, application security is now just as important as network security, and successful malware utilizes new distribution channels and propagation vectors.
Security is always a game of balance. We need to account for the past, without assuming its details are useful when defending against specific future threats.
Posted at Tuesday 17th February 2009 5:21 pm
(0) Comments •
By Adrian Lane
It’s Friday the 13th, and I am in a good mood. I probably should not be, given that every conversation seems to center around some negative aspect of the economy. I started my mornings this week talking with one person after another about a possible banking collapse, and then moved to a discussion of Sirius/XM going under. Others are furious about the banking bailout, as it rewards failure. Tuesday of this week I was invited to speak at a business luncheon on data security and privacy, so I headed down the hill to find the sides of the road filled with cars and ATVs for sale. Cheap. I get to the parking lot and find it empty but for a couple of pickup trucks, all for sale. The restaurant we are supposed to meet at shuttered its doors the previous night and went out of business. We move two doors down to the pizza joint, where the TV is on and the market is down 270 points and will probably be worse by the end of the day. Still, I am in a good mood. Why? Because I feel like I was able to help people.
During the lunch we talked about data security and how to protect yourself online, and the majority of these business owners had no idea about the threats to them, both physical and electronic, and no idea what to do about them. They do now. What was surprising was that everyone seemed to have recently been the victim of a scam, or someone else in their family had been. One person had their checks photographed at a supermarket, and someone made impressive forgeries. One had their ATM account breached, with no clue as to how or why. Another had fraudulent credit card charges. Despite all the bad news, I am in a good mood because I think I helped some people stay out of future trouble simply by sharing information you just don’t see in the newspapers or mainstream press.
This leads me to the other point I wanted to discuss: Rich posted this week on “An Analyst Conundrum” and I wanted to make a couple of additional points. No, not just about my being cheap … although I admit there is a group of people who capture the prehistoric moths that fly out of my wallet on the rare occasions it opens … but that is not the point of this comment. What I wanted to say is that we take this Totally Transparent Research process pretty seriously, and we want all of our research and opinions out in the open. We like being able to share where our ideas and beliefs come from. Don’t like it? You can tell us, and everyone else who reads the blog, that we are full of BS; what’s more, we don’t edit comments. One other amazing aspect of conducting research in this way has been comments on what we have not said. More specifically, every time I have pulled content I felt was important but confused the overall flow of a post, readers pick up on it. They make note of it in the comments. I think this is awesome! It tells me that people are following our reasoning. It keeps us honest. It makes us better. Right or wrong, the discussion helps the readers in general, and it helps us know what your experiences are.
Rich would prefer that I write faster and more often than I do, especially with the white papers. But odd as it may seem, I have to believe the recommendations I make; otherwise I simply cannot put the words down on paper. No passion, no writing. The quote Rich referenced was from an email I sent him late Sunday night after struggling with recommending a particular technology over another; I quite literally could not finish the paper until I had solved that puzzle in my own mind. If I don’t believe it, based upon what I know and have experienced, I cannot put it out there. And I don’t really care if you disagree with me, as long as you let me know why what I said is wrong, and how I screwed up. Moreover, I especially don’t care if the product vendors or security researchers are mad at me. For every vendor that is irate with what I write, there is usually one who is happy, so it’s a zero-sum game. And if security researchers were not occasionally annoyed with me, there would be something wrong, because we tend to be a rather cranky group when others do not share our personal perspective on the way things are. I would rather have end users be aware of the issues and walk into any security effort with their eyes open. So I feel good about getting these last two series completed, as I think they offer good advice that will help people in their jobs. Hopefully you will find what we do useful!
On to the week in review:
Webcasts, Podcasts, Outside Writing, and Conferences:
Favorite Securosis Posts:
Favorite Outside Posts:
Top News and Posts:
Blog Comment of the Week:
Jack on The Business Justification for Data Security: Measuring Potential Loss:
A question/observation regarding the “qualifiable losses” you describe:
Isn’t the loss of “future business” a manifestation of damaged reputation? Likewise, reduced “customer loyalty”? After all, it seems to me that reputation is nothing more than how others view an organization’s value/liability proposition and/or the moral/ethical/competence of its leadership. It’s this perception that then determines customer loyalty and future business.
With this in mind, there are many events (that aren’t security-related) that can cause a shift in perceived value/liability, etc., and a resulting loss of market share, growth, cost of capital, etc. In my conversations with business management, many companies (especially larger ones) experience such events more frequently than most people realize, it’s just that (like most other things) the truly severe ones are less frequent. These historical events can provide a source of data regarding the practical effect of reputation events that can be useful in quantified or qualified estimates.
Next week … an all-Rich Friday post!
Posted at Saturday 14th February 2009 1:02 pm
By Adrian Lane
So far in this series we have discussed how to assess both the value of the information your company uses and the potential losses should your data be stolen. The bad news is that security spending only mitigates some portion of the threats; it cannot eliminate them. While we would like our solutions to eradicate threats, it’s usually more complicated than that. Fortunately there is some good news: security spending commonly addresses other areas of need, and has additional tangible benefits that should be factored into the overall evaluation. For example, the collection, analysis, and reporting capabilities built into most data security products – when used with a business processing perspective – supplement existing applications and systems in management, audit, and analysis. Security investment can also readily be leveraged to reduce compliance costs, improve systems management, efficiently analyze workflows, and gain a better understanding of how data is used and where it is located. In this post, we want to make short mention of some of the positive, tangible aspects of security spending that you should consider. We will put this into the toolkit at the end of the series, but for now, we want to discuss cost savings and other benefits.
Reduced compliance/audit costs
Regulatory initiatives require that certain processes be monitored for policy conformance, with subsequent verification to ensure those policies and controls align appropriately with compliance guidelines. Since most security products examine business processes for suspected misuse or security violations, there is considerable overlap with compliance controls. Certain provisions in the Gramm-Leach-Bliley Act (GLBA), Sarbanes-Oxley (SOX), and the Health Insurance Portability and Accountability Act (HIPAA) call for security, process controls, or transactional auditing. While data security tools and products focus on security and appropriate use of information, their policies can be structured to address compliance as well.
Let’s look at a couple ways security technologies assist with compliance:
- Access controls assist with separation of duties between operational, administrative, and auditing roles.
- Email security products provide pretexting protection, as required by GLBA.
- Activity Monitoring solutions perform transactional analysis, and with additional policies can provide process controls for end-of-period adjustments (SOX) as well as address ‘safeguard’ requirements in GLBA.
- Security platforms separate the roles of data collection, data analysis, and policy enforcement, and can direct alerts to appropriate audiences outside security.
- Collection of audit logs, combined with automated filtering and encryption, address common data retention obligations.
- DLP, DRM, and encryption products assist in compliance with HIPAA and appropriate use of student records (FERPA).
- Filtering, analysis, and reporting help reduce audit costs by providing auditors with necessary information to quickly verify the efficacy and integrity of controls; gathering this information is typically an expensive portion of an audit.
- Auditing technologies provide a view into transactional activity, and establish the efficacy and appropriateness of controls.
Data security products collect information and events that have relevance beyond security. By design they provide a generic tool for the collection, analysis, and reporting of events that serve regulatory, industry, and business processing controls; automating much of the analysis and integrating with other knowledge management and response systems. As a result they can enhance existing IT systems in addition to their primary security functions. The total cost of ownership is reduced for both security and general IT systems, as the two reinforce each other – possibly without requiring additional staff. Let’s examine a few cases:
- Automating inspection of systems and controls on financial data reduces manual inspection by Internal Audit staff.
- Systems Management benefits from automating tedious inspection of information services, verifying that services are configured according to best practices; this can reduce breaches and system downtime, and ease the maintenance burden.
- Security controls can ensure business processes are followed and detect failure of operations, generating alerts in existing trouble ticketing systems.
Your evaluation process focuses on determining if you can justify spending some amount of money on a certain product or to address a specific threat. That laser focus is great, but data security is an enterprise issue, so don’t lose sight of the big picture. Data security products overlap with general risk reduction, similar to the way these products reduce TCO and augment other compliance efforts. When compiling your list of tradeoffs, consider other areas of risk & reward as well.
- Assessment and penetration technologies discover vulnerabilities and reduce exposure; keeping data and applications safe helps protect networks and hosts.
- IT systems interconnect and share data. Stopping threats in one area of business processing can improve reliability and security in connected areas.
- Discovery helps analysts understand risk exposure by locating data, recording how it is used throughout the enterprise, and ensuring compliance with usage policies.
Also keep in mind that we are providing a model to help you justify security expenditures, but that does not mean our goal is to promote security spending. Our approach is pragmatic, and if you can achieve the same result without additional security products to support your applications, we are all for that. In much the same way that security can reduce TCO, some products and platforms have security built in, avoiding the need for additional security expenditures. We recognize that data security choices are typically the last to be made, after deployment of the applications for business processing and after infrastructure choices to support those applications. But if you’re lucky enough to have built-in tools, use them.
Posted at Friday 13th February 2009 6:07 pm
I can’t believe I forgot to post this, but Martin was off in Chicago for work this week and Adrian joined me as guest host for the Network Security Podcast. We recorded live at my house, so the audio may sound a little different. If you listen really carefully, you can hear an appearance by Pepper the Wonder Cat, our Chief of Everything Officer here at Securosis.
The complete episode is here: Network Security Podcast, Episode 137, February 10, 2009 Time: 32:50
Posted at Friday 13th February 2009 12:34 pm