Securosis Research

Is There Any DLP or Data Security On Mac/Linux?

Had a very interesting call today with a client in the pharma research space. They would like to protect clinical study data as it moves to researchers’ computers, but are struggling with the best approach. On the call, I quickly realized that DLP, or a content tracking tool like Verdasys (who also does endpoint DLP), would be ideal. The only problem? They need Windows, Mac, and Linux support. I couldn’t remember offhand any DLP/tracking tool (or even DRM) that works on all 3 platforms. This is an open call for you vendors to hit me up if you can help. For you end users, where we ended up was with a few potential approaches:

  • Switch to a remote virtual/hosted desktop, such as Citrix or VMware, for handling the sensitive data.
  • Use Database Activity Monitoring to track who pulls the data.
  • Endpoint encryption to protect the data from loss, though it won’t help once the data is moved to inappropriate locations.
  • Network DLP to track it in email, but without endpoint coverage this leaves a really big hole.
  • Content discovery to keep some minimal tracking of where it ends up (for managed systems), but that means opening up SMB/CIFS file sharing on the endpoint for admin access, which is itself a security risk.
  • Distributed encryption, which *does* have cross-platform support, but still doesn’t stop a researcher from putting the data someplace it shouldn’t be, which is their main concern.

While research is one of those industries with higher Mac/cross-platform use than the average business, this is clearly a growing problem thanks to the consumerization of IT. This situation also highlights how no single-channel solution can really protect data well. It’s the mix of network, endpoint, and discovery that really allows you to reduce risk without killing business process.
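To make the content-discovery option concrete, here is a minimal sketch of what such a scan does under the hood: walk a file tree and flag files containing a pattern that marks sensitive data. The study-ID format below is hypothetical, and real DLP products use far richer content fingerprinting than a single regex; this just illustrates why the technique works on any platform with a filesystem.

```python
# Minimal sketch of cross-platform content discovery: walk a directory
# tree and flag files matching a pattern that marks sensitive clinical
# data. The study-ID format is hypothetical; real DLP tools use much
# richer fingerprinting (partial document matching, hashes, etc.).
import os
import re

STUDY_ID = re.compile(rb"CS-\d{4}-\d{6}")  # hypothetical clinical study ID format

def discover(root):
    """Return paths of files whose first 1 MB contains a study ID."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if STUDY_ID.search(f.read(1 << 20)):
                        hits.append(path)
            except OSError:
                pass  # skip unreadable files rather than aborting the scan
    return hits
```

The catch the post identifies still applies: to run this against endpoints from a central server, you need remote file access (e.g. SMB/CIFS), which is itself a risk.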


Top 10 Web Hacking Techniques of 2008

A month or so ago I was invited by Jeremiah Grossman to help judge the Top 10 Web Hacking Techniques of 2008 (my fellow judges were Hoff, HD Moore, and Jeff Forristal). The judging ended up being quite a bit harder than I expected; some of the hacks I was thinking of were from 2007, and there were a ton of new ones I managed to miss despite all the conference sessions and blog reading. Of the 70 submissions, I probably only remembered a dozen or so, leading to hours of research, with a few nuggets I would have missed otherwise. I was honored to participate, and you can see the results over at Jeremiah’s blog.


Friday Summary, February 20, 2009

Last Friday Adrian sent me an IM that he was just about finished with the Friday summary. The conversation went sort of like this: Me: I thought it was my turn? Adrian: It is. I just have a lot to say. It’s hard to argue with logic like that. This is a very strange week here at Securosis Central. My wife was due to deliver our first kid a few days ago, and we feel like we’re now living (and especially sleeping) on borrowed time. It’s funny how procreation is the most fundamental act of any biological creature, yet when it happens to you it’s, like, the biggest thing ever! Sure, our parents, most of our siblings, and a good chunk of our friends have already been through this particular rite of passage, but I think it’s one of those things you can never understand until you go through it, no matter how much crappy advice other people give you or how many books you read. Just like pretty much everything else in life. I suppose I could use this as a metaphor for the first time you suffer a security breach or something, but it’s Friday and I’ll spare you my over-pontification. Besides, there’s all sorts of juicy stuff going on out there in the security world, and far be it from me to waste your time with random drivel when I already do that the other 6 days of the week. Especially since you need to go disable JavaScript in Adobe Acrobat. On to the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences: Brian Krebs joined us on the Network Security Podcast.

Favorite Securosis Posts: Rich: I love posts that stir debate, and A Small, Necessary Change for National Cybersecurity sure did the job. Adrian: Database Configuration Assessment Options.

Favorite Outside Posts: Adrian: Rothman nails it this week with I’m a HIPAA, Hear Me Roar. Rich: Amrit on How Cloud, Virtualization, and Mobile Computing Impact Endpoint Management in the Enterprise. I almost think he might be being a little conservative on his time estimates.

Top News and Posts: Kaminsky supports DNSSEC. His full slides are here. No, he’s not happy about it. Is there a major breach hiding out there? There is a major Adobe Acrobat exploit. Disable JavaScript now. Verizon is implementing spam blocking. Nice, since they are one of the worst offenders and all. Sendio (email security) lands $3M. Glad we didn’t call that market dead. Microsoft sued over XP downgrade costs. Next, they’ll be sued for using the color blue in their logo. (Note to self: call lawyer.) Much goodness at Black Hat DC. Too much to cover with individual links. Metasploit turns attack back on attackers. Stupid n00bs.

Blog Comment of the Week: Sharon on New Database Configuration Assessment Options: IMO mValent should be compared with CMDB solutions. They created a compliance story which in those days (PCI) resonated well. You probably know this as well as I (now I’m just giving myself some credit) but database vulnerability assessment should go beyond the task of reporting configuration options and which patches are applied. While those tasks are very important, I do see the benefits of looking for actual vulnerabilities. I do not see how Oracle will be able to develop (or buy), sell and support a product that can identify security vulnerabilities in its own products. Having said that, I am sure that many additional customers will look at and evaluate mValent. The CMDB giants (HP, IBM and CA) should expect more competitive pressure.


Will This Be The Next PCI Requirement Addition?

I’m almost willing to bet money on this one… Due to the nature of the recent breaches, such as Hannaford, where data was exfiltrated over the network, I highly suspect we will see outbound monitoring and/or filtering in the next revision of the PCI DSS. For more details on what I mean, refer back to this post. Consider this your first warning.


A Small, Necessary, Legal Change For National Cybersecurity

I loved being a firefighter. In what other job do you get to speed around running red lights, chop someone’s door down with an axe, pull down their ceiling, rip down their walls, cut holes in their roof with a chainsaw, soak everything they own with water, and then have them stop by the office a few days later to give you the cookies they baked for you? Now, if you try to do any of those things when you’re off duty and the house isn’t on fire, you tend to go to jail. But on duty and on fire? The police will arrest the homeowner if they get in your way. Society has long accepted that there are times when the public interest outweighs even the most fundamental private rights. Thus I think it is long past time we applied this principle to cybersecurity and authorized appropriate intervention in support of national (and international) security. One of the major problems we have in cybersecurity today is that the vulnerabilities of the many are the vulnerabilities of everyone. All those little unpatched home systems out there are the digital equivalent of burning houses in crowded neighborhoods. Actually, it’s probably closer to a mosquito-infested pool an owner neglects to maintain. Whatever analogy you want to use, in all cases it’s something that, if it were in the physical world, someone would come legally take care of, even if the owner tried to stop them. But we know of multiple cases on the Internet where private researchers (and likely government agencies) have identified botnets or other compromised systems being used for active attack, yet due to legal fears they can’t go and clean the systems. Even when they know they have control of the botnet and could erase it and harden the host, they legally can’t. Our only option seems to be individually informing ISPs, which may or may not take action, depending on their awareness and subscriber agreements. Here’s what I propose.
We alter the law and empower an existing law enforcement agency to proactively clean or isolate compromised systems. This agency will be mandated to work with private organizations who can aid in its mission. Like anything related to the government, it needs specific budget, staff, and authority that can’t be siphoned off for other needs. When a university or other private researcher discovers a botnet they can shut down and clean out, this law enforcement agency can review and authorize action. Everyone involved is shielded from being sued, short of gross negligence. The same agency will also be empowered to work with international (and national) ISPs to take down malicious hosting and service providers (legally, of course). Again, this specific mission must be mandated and budgeted, or it won’t work. Right now the bad guys operate with impunity, while law enforcement is woefully underfunded and undermandated for this particular mission. By engaging with the private sector and dedicating resources to the problem, we can make life a heck of a lot harder for the bad guys. Rather than just trying to catch them, we devote as much or more effort to shutting them down. Call me an idealist. (I don’t have any digital pics from my firefighting days, so that’s a more recent hazmat photo. The bandana is to keep sweat out of my eyes; it’s not a daily fashion choice.)


New Database Configuration Assessment Options

Oracle has acquired mValent, the configuration management vendor. mValent provides an assessment tool to examine the configuration of applications. Actually, they do quite a bit more than that, but I want to focus on the value to database security and compliance in this post. This is a really good move on Oracle’s part, as it fills a glaring hole they have had for some time in their security and compliance offerings. I have never understood why Oracle did not provide this as part of OEM, as every Oracle event I have been to in the last 5 years has had sessions where DBAs swap scripts to assess their databases. Regardless, they have finally filled the gap. It provides them with a platform to implement their own best practice guidelines, and gives customers a way to implement their own security, compliance, and operational policies for the database and (I assume) other application platforms. Sadly, many companies have not automated their database configuration assessments, so the market remains wide open, making this a timely acquisition. While the value proposition for this technology will be spun by Oracle’s marketing team in a few dozen different ways (change management, compliance audits, regulatory compliance, application controls, application audits, compliance automation, etc.), don’t get confused by all the terms. When it comes down to it, this is an assessment of application configuration, and it provides value in a number of ways: security, compliance, and operations management. The basic platform can be used in many different ways, all depending upon how you bundle the policy sets and distribute reports. Also keep in mind that a ‘database audit’ and ‘database auditing’ are two completely different things: database auditing is about examining transactions, while what we are talking about here is how the database is configured and deployed.
To avoid the deliberate market confusion on the vendors’ part, here at Securosis we will stick to the terms Vulnerability Assessment and Configuration Assessment to describe the work being performed. Tenable Network Security has also announced on their blog that they now have the ability to perform credentialed scans of the database. This means that Nessus is no longer just a pen-test-style patch level checker, but a credentialed, peer-based configuration assessment tool. By ‘credentialed’ I mean that the scanning tool has a user name and password with some access rights to the database. This type of assessment provides a lot more functionality, because far more information is available to you than through a penetration test. This is a necessary progression for the product, as the ports (quite specifically the database ports) no longer return sufficient information for a good assessment of patch levels, or any of the important configuration details. If you want to produce meaningful compliance reports, this is the type of scan you need. While I occasionally rip Tenable, as this is something they should have done two years ago, it is really a great advancement for them because it opens up the compliance and operations management buying centers. Tenable must be considered a serious player in this space, as theirs is a low-cost, high-value option. They will continue to win market share as they flesh out the policy set to include more of the industry best practices and compliance tests. Oracle will represent an attractive option for many customers, and they should be able to immediately leverage their existing relationships. While not cutting-edge or best-of-breed in this class, I expect many customers will adopt it, either because it is bundled with what they are already buying, or because the investment is considered lower risk when you go with the world’s largest business software vendor.
On the opposite end of the spectrum, companies who do not view this as business critical but still want thorough scans will employ the cost-effective Tenable solution. Vendors like Fortinet, with their database security appliance, and Application Security, with their AppDetective product, will be further pressed to differentiate their offerings to compete with the perceived top and bottom ends of the market. Things should get interesting in the months to come.
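As a rough illustration of the credentialed configuration assessment discussed above, the sketch below logs in with database credentials, reads configuration settings, and compares them against a policy. SQLite PRAGMAs stand in for a production database’s system views (such as Oracle’s V$PARAMETER or PostgreSQL’s pg_settings), and the policy values are illustrative, not a vetted benchmark.

```python
# Minimal sketch of a credentialed configuration assessment: connect with
# real database credentials, read configuration settings, and compare them
# against a policy. SQLite PRAGMAs stand in for a production database's
# system views; the expected values here are illustrative only.
import sqlite3

POLICY = {
    "foreign_keys": 1,   # referential integrity should be enforced
    "secure_delete": 1,  # overwrite deleted content on disk
}

def assess(conn):
    """Return a list of (setting, actual, expected) policy violations."""
    findings = []
    for setting, expected in POLICY.items():
        actual = conn.execute(f"PRAGMA {setting}").fetchone()[0]
        if actual != expected:
            findings.append((setting, actual, expected))
    return findings

# A real scan would connect over the network with a scan account's
# credentials; an in-memory database keeps the sketch self-contained.
conn = sqlite3.connect(":memory:")
print(assess(conn))
```

The point of the credentials is visible even in this toy: none of these settings are observable from a port scan, only from an authenticated session.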


Selective Inverse Recency Bias In Security

Nate Silver is one of those rare researchers with the uncanny ability to send your brain spinning off on unintended tangents totally unrelated to the work he’s actually documenting. His work is fascinating more for its process than its conclusions, and often generates new introspections applicable to our own areas of expertise. Take this article in Esquire where he discusses the concept of recency bias as applied to financial risk assessments. Recency bias is the tendency to skew data and analysis towards recent events. In his economic example he compares the risk of a market crash in 2008 using data from the past 60 years vs. the past 20. The difference is staggering: one major downturn every 8 years (using 60 years of data) vs. a downturn every 624 years (using only 20 years of data). As with all algorithms, input selection deeply skews output results, with the potential for cataclysmic conclusions. In the information security industry I believe we just as frequently suffer from selective inverse recency bias: giving greater credence to historical data over more recent information, while editing out the anomalous events that should drive our analysis more than the steady state. Actually, I take that back; it isn’t just information security, but safety and security in general, and it likely has a deep evolutionary psychological origin. We cut out the bits and pieces we don’t like, while pretending the world isn’t changing. Here’s what I mean: in security we often assume that what’s worked in the past will continue to work in the future, even though the operating environment around us has completely changed. At the same time, we allow recency bias to intrude and selectively edit out our memories of negative incidents after some arbitrary time period. We assume what we’ve always done will always work, forgetting all those times it didn’t work.
From an evolutionary psychology point of view (assuming you go in for that sort of thing) this makes perfect sense. For most of human history, what worked for the past 10, 20, or 100 years still worked well for the next 10, 20, or 100 years. It’s only relatively recently that the rate of change in society (our operating environment) accelerated to high levels of fluctuation within a single human lifetime. On the opposite side, we’ve likely evolved to overreact to short-term threats while discounting long-term risks; I doubt many of our ancestors were the ones contemplating the best reaction to the tiger stalking them in the woods; our ancestors clearly got their asses out of there at least fast enough to procreate at some point. We tend to ignore long-term risks and environmental shifts, then overreact to short-term incidents. This is fairly pronounced in information security, where we need to carefully balance historical data with our current environment. Over the long haul we can’t forget historical incidents, yet we also can’t assume that what worked yesterday will work tomorrow. It’s important to use the right historical data in general, and more recent data in specific. For example, we know major shifts in technology lead to major new security threats. We know that no matter how secure we feel, incidents still occur. We know that human behavior doesn’t change: people will make mistakes, and are predictably unpredictable. On the other hand, firewalls only stop a fraction of the threats we face, application security is now just as important as network security, and successful malware utilizes new distribution channels and propagation vectors. Security is always a game of balance. We need to account for the past, without assuming its details are useful when defending against specific future threats.
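The windowing effect Silver describes can be seen with a toy frequency estimator: feed the same naive calculation different amounts of history and the risk estimate swings wildly. The crash years below are hypothetical, and this is far simpler than Silver’s actual model; it only shows the direction of the skew.

```python
# Toy illustration of recency bias: the same naive estimator gives very
# different answers depending on how much history it is fed. CRASH_YEARS
# is hypothetical data, not a real market series.
CRASH_YEARS = [1957, 1962, 1970, 1974, 1982, 1987, 2000, 2002]

def years_per_crash(history_start, history_end=2008):
    """Average years between crashes, using only the chosen window of history."""
    window = history_end - history_start
    count = sum(1 for y in CRASH_YEARS if history_start <= y < history_end)
    return window / count if count else float("inf")

print(years_per_crash(1948))  # 60 years of data: one crash every 7.5 years
print(years_per_crash(2003))  # 5 years of data: crashes "never" happen (inf)
```

Shrink the window past the last anomaly and the estimator concludes crashes never happen, which is exactly the selective editing the post describes.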


Friday Summary, February 13, 2009

It’s Friday the 13th, and I am in a good mood. I probably should not be, given that every conversation seems to center around some negative aspect of the economy. I started my mornings this week talking with one person after another about a possible banking collapse, and then moved to a discussion of Sirius/XM going under. Others are furious about the banking bailout, as it’s rewarding failure. Tuesday of this week I was invited to speak at a business luncheon on data security and privacy, so I headed down the hill to find the sides of the road filled with cars and ATVs for sale. Cheap. I got to the parking lot and found it empty but for a couple of pickup trucks, all for sale. The restaurant we were supposed to meet at had shuttered its doors the previous night and gone out of business. We moved two doors down to the pizza joint, where the TV was on and the market was down 270 points, and would probably be worse by the end of the day. Still, I am in a good mood. Why? Because I feel like I was able to help people. During the lunch we talked about data security and how to protect yourself online, and the majority of these business owners had no idea about the threats to them, both physical and electronic, and no idea what to do about them. They do now. What was surprising was that everyone seemed to have recently been the victim of a scam, or someone else in their family had been. One person had their checks photographed at a supermarket, and someone made impressive forgeries. One had their ATM account breached, but no clue as to how or why. Another had false credit card charges. Despite all the bad news, I am in a good mood because I think I helped some people stay out of future trouble simply by sharing information you just don’t see in the newspapers or mainstream press. This leads me to the other point I wanted to discuss: Rich posted this week on “An Analyst Conundrum” and I wanted to make a couple of additional points.
No, not just about my being cheap … although I admit there is a group of people who capture the prehistoric moths that fly out of my wallet during its rare openings … but that is not the point of this comment. What I wanted to say is that we take this Totally Transparent Research process pretty seriously, and we want all of our research and opinions out in the open. We like being able to share where our ideas and beliefs come from. Don’t like it? You can tell us, and everyone else who reads the blog, that we are full of BS; what’s more, we don’t edit comments. One other amazing aspect of conducting research this way has been the comments on what we have not said. More specifically, every time I have pulled content I felt was important but that confused the overall flow of a post, readers pick up on it. They make note of it in the comments. I think this is awesome! It tells me that people are following our reasoning. It keeps us honest. It makes us better. Right or wrong, the discussion helps the readers in general, and it helps us know what your experiences are. Rich would prefer that I write faster and more often than I do, especially with the white papers. But odd as it may seem, I have to believe the recommendations I make, otherwise I simply cannot put the words down on paper. No passion, no writing. The quote Rich referenced was from an email I sent him late Sunday night after struggling with recommending a particular technology over another; I quite literally could not finish the paper until I had solved that puzzle in my own mind. If I don’t believe it, based upon what I know and have experienced, I cannot put it out there. And I don’t really care if you disagree with me, as long as you let me know why what I said is wrong, and how I screwed up. More, I especially don’t care if the product vendors or security researchers are mad at me. For every vendor that is irate with what I write, there is usually one who is happy, so it’s a zero-sum game.
And if security researchers were not occasionally annoyed with me there would be something wrong, because we tend to be a rather cranky group when others do not share our personal perspective of the way things are. I would rather have the end users be aware of the issues and walk into any security effort with their eyes open. So I feel good in getting these last two series completed as I think it is good advice and I think it will help people in their jobs. Hopefully you will find what we do useful! On to the week in review: Webcasts, Podcasts, Outside Writing, and Conferences: In a nepotistic extravaganza during Martin’s absence, this week’s network podcast included both Rich & Adrian, with Rich sharing a few rumors on the Heartland breach. Adrian was interviewed by SC Magazine on the Los Alamos Lab’s missing computers. Rich wrote up the Mac OS X Security Update for TidBITS. Macworld released their Security Superguide, with Rich & Chris as authors. Much to their surprise! Rich participated in an SC Magazine webcast on PCI. Rich moderated the WhiteHatWorld.com Thought Leadership Roundtable on Cloud Computing Security. (Sorry, replay link isn’t up yet.) Favorite Securosis Posts: Rich: Recent Breaches- How To Limit Malicious Outbound Connections. There are a couple of great comments with additional information, including one from Big Bad Mike Rothman, who is not dead yet. Adrian: An Analyst Conundrum for, well, the ten or so reasons I mentioned above. Favorite Outside Posts: Adrian: Showing some love for Dre … Talking about why WAF


Los Alamos Missing Computers

Yahoo! News is reporting that the Los Alamos nuclear weapons research facility is missing some 69 computers, according to an internal memo released by a watchdog group. Either they have really bad inventory controls, or they have a kleptomaniac running around the lab. Even for a mid-sized organization this is a lot, especially given the nature of their business. Granted, the senior manager says this does not mean there was a breach of classified information, and I guess I should give him the benefit of the doubt, but I have never worked at a company where sensitive information did not flow like water around the organization, regardless of policy. The requirement may be to keep classified information off unclassified systems, but unless those systems are audited, how would you know? How could you verify what might be missing? We talk a lot about endpoint security and the need to protect laptops, but really, if you work for an organization that deals with incredibly sensitive information (you know, like nuclear secrets) you need to encrypt all of the media, whether it is mobile or not. There are dozens of vendors that offer software encryption, and most of the disk manufacturers are coming out with encrypted drives. And you are probably aware, if you read this blog, that we are proponents of DLP in certain cases; this type of policy enforcement for the movement of classified information would be a good example. You would think organizations like this would be ahead of the curve in this area, but apparently not.


Adrian Appears on the Network Security Podcast

I can’t believe I forgot to post this, but Martin was off in Chicago for work this week and Adrian joined me as guest host for the Network Security Podcast. We recorded live at my house, so the audio may sound a little different. If you listen really carefully, you can hear an appearance by Pepper the Wonder Cat, our Chief of Everything Officer here at Securosis. The complete episode is here: Network Security Podcast, Episode 137, February 10, 2009 (Time: 32:50). Show Notes: Arizona tracking drug prescriptions. I swear that stuff was from my shoulder surgery, officer! Kaspersky hacked. We mostly talk about the response. Metasploit to add an online service component. We can’t wait to learn more about what they are going to offer beyond password cracking. Melissa Hathaway appointed head of White House office of cybersecurity. We talk about some new info we have on the Heartland breach that isn’t in the press yet, so no link.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.