Incite 5/19/2010: Benefits of Bribery

Don’t blink – you might miss it. No, I’m not talking about my prowess in the bedroom, but the school year. It’s hard to believe, but Friday is the last day of school here in Atlanta. What the hell? It feels like a few weeks ago we put the twins’ name tags on and put them on the bus for their first day of kindergarten. The end of school also means it’s summertime. Maybe not officially, but it’s starting to feel that way. I do love the summer. The kids do as well, and what’s not to love? Especially if you are my kids. There is the upcoming Disney trip, the week at the beach, the 5-6 weeks of assorted summer camp(s), and lots of fun activities with Mom. Yeah, they’ve got it rough.

Yet we still face the challenge of keeping the kids grounded when they are faced with a life of relative abundance. Don’t get me wrong, I know how fortunate I am to be able to provide my kids with such rich experiences as they grow up. But XX1 got our goats over the weekend, when one of her friends got an iPod touch for her birthday. Of course, her reaction was “Why can’t I have an iPod touch? All my friends have them.” Thankfully the Boss was there, as I doubt I would have responded well to that line of questioning. She calmly told XX1 that with an attitude like that, she’ll be lucky if we don’t take away all her toys. And that she needs to be grateful for what she has, not focused on what she doesn’t. To be clear, not all of her friends have iPod touches. She is prone to exaggeration, like her Dad.

What she doesn’t know is our plan to give her a hand-me-down iPhone once we upgrade this summer. (Of course I’m upgrading, come on, now!) I think we need to tie it to some kind of achievement. Maybe if she works hard on her school exercises over the summer. Or is nice to her sister (yes, that is a problem). Or whatever kind of behavior we want to incent at any given time. There’s nothing like having a big anchor over her head to drag out every time she misbehaves. That’s right, it’s a bribe. I’m sure there are better ways than bribery to get the kids to do what we want. I’m just not sure what they are, and nothing we’ve tried seems to work like putting that old carrot out there and waiting for Pavlov to work his magic. – Mike.

Photo credits: “Unplug for safety” originally uploaded by mag3737

Incite 4 U

Where is the Blog Love? – I’m going to break the rules and link to one of my own posts. On Monday I called out the decline of blogging. Basically, people have either moved to Twitter or left the community discussion completely. Twitter is great, but it can’t replace a good blog war. In response, Andy the IT Guy, DanO, and LoverVamp jumped back on the scene. These are 3 sites I used to read every day (and still do, when they are updated), and maybe we can start rebuilding the community. Why is that important? Because blogs provide a more nuanced, permanent archive of knowledge, with more reasoned debate than Twitter, however wonderful, can sustain. – RM

Critical Infrastructure Condition Critical – We all take uninterrupted power for granted. Yet we security folks understand how vulnerable the critical infrastructure is to cyber-attacks. Dark Reading has an interesting interview with Joe Weiss, who has written a book about how screwed we are. A lot of the discussion sounds very similar to every other industry that requires the regulatory fist of God to come crashing down before they fix anything. And NERC CIP is only a start, since it exempts the stuff that is really interesting, like networks and the actual control systems. Unfortunately it will take a massive outage caused by an attack to change anything. But we all know that, because we’ve seen this movie before. – MR

Desktop, The Way You Want It – I am a big fan of desktop virtualization, and I am surprised it has gotten such limited traction. I think people view it ass-backwards. The label “dumb terminal” is in the back of people’s minds, and that’s not a progressive model. But desktop virtualization is much, much more than a refresh of the dumb terminal model. The ability to contain the work environment in a virtual server makes things a heck of a lot easier for IT, and benefits the employee, who can access a fully functional desktop from anywhere inside – and possibly outside – the company. Citrix giving each employee $2,100 to buy their own computer for work is a very smart idea. The benefits to Citrix are numerous. Every employee gets to pick the computer they want, for better or worse, and they are now invested in their choice, rather than considering a work laptop to be a disposable loaner. The work environment is kept safe in a virtual container, and employees still get fully mobile computing. Every user becomes a tester for the company’s desktop virtualization environment, bringing diverse environments under the microscope. And it shows how they can blend work and home environments without compromising one for the other. This is a good move and makes sense for SMB and enterprise computing environments. – AL

Security 5.0 – HTML5 is coming down the pipe, and Veracode has some great advice on what to keep an eye on from a security perspective. Not to show my age, but I remember hand-coding sites in HTML v1, and how exciting it was when things like JavaScript started appearing. Any time we have one of these major transitions we see security issues crop up, and as you start leveraging all the new goodness it never hurts to start looking at security early in


How to Survey Data Security Outcomes?

I received a ton of great responses to my initial post looking for survey input on what people want to see in a data security survey. The single biggest request is to research control effectiveness: which tools actually prevent incidents. Surveys are hard to build, and while I have been involved with a bunch of them, I am definitely not about to call myself an expert. There are people who spend their entire careers building surveys. As I sit here trying to put the question set together, I’m struggling for the best approach to assess outcome effectiveness, and figure it’s time to tap the wisdom of the crowd. To provide context, this is the direction I’m headed in the survey design. My goal is to have the core question set take about 10-15 minutes to answer, which limits what I can do a bit.

Section 1: Demographics

The basics, much of which will be anonymized when we release the raw data.

Section 2: Technology and process usage

I’ll build a multi-select grid to determine which technologies are being considered or used, and at what scale. I took a similar approach in the Project Quant for Patch Management survey, and it seemed to work well. I also want to capture a little of why someone implemented a technology or process. Rather than listing all the elements, here is the general structure. For each technology/process, respondents select an adoption stage:

  • Not Considering
  • Researching
  • Evaluating
  • Budgeted
  • Selected
  • Internal Testing
  • Proof of Concept
  • Initial Deployment
  • Protecting Some Critical Assets
  • Protecting Most Critical Assets
  • Limited General Deployment
  • General Deployment

And to capture the primary driver behind the implementation:

  • Directly Required for Compliance (but not an audit deficiency)
  • Compliance Driven (but not required)
  • To Address an Audit Deficiency
  • In Response to a Breach/Incident
  • In Response to a Partner/Competitor Breach or Incident
  • Internally Motivated (to improve security)
  • Cost Savings
  • Partner/Contractual Requirement

I know I need to tune these better and add some descriptive text, but as you can see I’m trying to characterize not only what people have bought, but what they are actually using, as well as to what degree and why. Technology examples will include things like network DLP, Full Drive Encryption, Database Activity Monitoring, etc. Process examples will include network segregation, data classification, and content discovery (I will tweak the stages here, because ‘deployment’ isn’t the best term for a process).

Section 3: Control effectiveness

This is the tough one, where I need the most assistance and feedback (and I already appreciate those of you with whom I will be discussing this stuff directly). I’m inclined to structure this in a similar format, but instead of checkboxes use numerical input. My concern with numerical entry is that I think a lot of people won’t have the numbers available. I can also use a multi-select with None, Some, or Many, but I really hate that level of fuzziness and hope we can avoid it. Or I can do a combination, with both numerical and ranges as options. We’ll also need a time scale: per day, week, month, or year. Finally, one of the tougher areas is that we need to characterize the type of data, its sensitivity/importance, and the potential (or actual) severity of the incidents. This partially kills me, because there are fuzzy elements here I’m not entirely comfortable with, so I will try to constrain them as much as possible using definitions.

I’ve been spinning some design options, and trying to capture all this information without taking a billion hours of each respondent’s time isn’t easy. I’m leaning towards breaking severity out into four separate meta-questions, and dropping the low end to focus only on “sensitive” information – which if lost could result in a breach disclosure or other material business harm:

  • Major incidents with Personally Identifiable Information or regulated data (PII, credit cards, healthcare data, Social Security Numbers). A major incident is one that could result in a breach notification, material financial harm, or high reputation damage – in other words, something that would trigger an incident response process and involve executive management.
  • Major incidents with Intellectual Property (IP). A major incident is one that could result in material financial harm due to loss of competitive advantage, public disclosure, contract violation, etc. Again, something that would trigger incident response and involve executive management.
  • Minor incidents with PII/regulated data. A minor incident would not result in a disclosure, fines, or other serious harm – something managed within IT, security, and the business unit without executive involvement.
  • Minor incidents with IP.

Within each of these categories, we will build our table question to assess the number of incidents and false positive/negative rates. For each technology, respondents will report:

  • Incidents Detected
  • Incidents Blocked
  • Incidents Mitigated (incident occurred but loss mitigated)
  • Incidents Missed
  • False Positives Detected

each on a Per Day, Per Month, Per Year, or N/A time scale.

There are some other questions I want to work in, but these are the meat of the survey and I am far from convinced I have it structured well. Parts are fuzzier than I’d like, I don’t know how many organizations are mature enough to even address outcomes, and I have a nagging feeling I’m missing something important. So I could really use your feedback. I’ll fully credit everyone who helps, and you will all get the raw data to perform your own analyses.
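To make the Section 2 grid concrete, here is a minimal, purely illustrative sketch (in Python) of how the structure above might be modeled. The enum and field names are just paraphrases of the draft stages and drivers, not a final question set:

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    """Adoption stage for each technology/process (one selection per row)."""
    NOT_CONSIDERING = "Not Considering"
    RESEARCHING = "Researching"
    EVALUATING = "Evaluating"
    BUDGETED = "Budgeted"
    SELECTED = "Selected"
    INTERNAL_TESTING = "Internal Testing"
    PROOF_OF_CONCEPT = "Proof of Concept"
    INITIAL_DEPLOYMENT = "Initial Deployment"
    PROTECTING_SOME_CRITICAL = "Protecting Some Critical Assets"
    PROTECTING_MOST_CRITICAL = "Protecting Most Critical Assets"
    LIMITED_GENERAL = "Limited General Deployment"
    GENERAL = "General Deployment"

class Driver(Enum):
    """Primary reason the technology/process was implemented."""
    REQUIRED_FOR_COMPLIANCE = "Directly Required for Compliance"
    COMPLIANCE_DRIVEN = "Compliance Driven (but not required)"
    AUDIT_DEFICIENCY = "To Address an Audit Deficiency"
    OWN_BREACH = "In Response to a Breach/Incident"
    PARTNER_BREACH = "In Response to a Partner/Competitor Breach or Incident"
    INTERNAL = "Internally Motivated (to improve security)"
    COST_SAVINGS = "Cost Savings"
    CONTRACTUAL = "Partner/Contractual Requirement"

@dataclass
class GridResponse:
    """One row of the Section 2 grid: a technology plus its stage and driver."""
    technology: str       # e.g., "Network DLP", "Database Activity Monitoring"
    stage: Stage
    primary_driver: Driver

# Example respondent row
row = GridResponse("Network DLP", Stage.INITIAL_DEPLOYMENT, Driver.AUDIT_DEFICIENCY)
print(row)
```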


Understanding and Selecting SIEM/LM: Business Justification

It’s time to resume our series on Understanding and Selecting a SIEM/Log Management solution. We have already discussed what problems this technology solves, with Use Cases 1 & 2, but that doesn’t get a project funded. Next we need to focus on making the business case for the project and examine how to justify the investment in bean counter lingo.

End User Motivations and Business Justification

Securosis has done a lot of work on the motivation for security investments. Unfortunately our research shows budgets are allocated to visceral security issues people can see and feel, rather than being based on critical consideration of risks to the organization. In other words, it’s much harder to get the CEO to sign off on a six-figure investment when you can’t directly demonstrate a corresponding drop in profit or an asset loss. Complicating matters, in many cases – such as the theft of a credit card – it’s someone else who suffers the loss. Thus compliance and/or regulation is really the only way to justify investments to address the quiet threats.

The good news relative to SIEM and Log Management is that the technology is really about improving efficiency by enhancing the ability to deal with the mushrooming amount of data generated by network and security devices. Or being able to detect an attack designed to elude a firewall or IPS (but not both). Or even making reporting and documentation (for compliance purposes) more efficient. You can build a model to show improved efficiency, so of all security technologies, you’d figure SIEM/Log Management would be pretty straightforward to justify. Of course, putting together a compelling business justification does not always result in a funded project. Remember, when money gets tight (and when is money not tight?) sometimes it’s easier to flog employees to work harder, as opposed to throwing high-dollar technology at the problem. Yes, the concept of automation is good, but quantifying the real benefits can be challenging.

Going Back to the Well

Our efforts are also hamstrung by a decade of mismatched expectations relative to security spending. Our finance teams have seen it all, and in lots of cases haven’t seen the tangible value of the security technology. So they are justifiably skeptical of yet another ROI model showing a two-week payback on a multi-million dollar investment. Yes, that’s a bit facetious, but only a bit. When justifying any investment, we need to ensure we don’t attempt to measure what can’t be accurately measured, which inevitably causes the model to collapse under its own cumbersome processes and assumptions. We also need to move beyond purely qualitative reasoning, which produces hard-to-defend results. Remember that security is an investment that produces neither revenue nor fully quantifiable results, so trying to model it that way is asking for failure.

Ultimately, having both bought and sold security technology for many years, we’ve come to the conclusion that end user motivations can be broken down pretty simply into two buckets: Save Money or Make Money. Any business justification needs to very clearly show the bean counters how the investment will either add to the top line or help improve the bottom line. And that argument is far more powerful than eliminating some shadowy threat that may or may not happen. Although, depending on the industry, implementing log management (in some form) is not optional. There are regulations (namely PCI) that specifically call out the need to aggregate, parse, and analyze log files. So the point of justification becomes what kind of infrastructure is needed, and at what level of investment – since solutions range from free to millions of dollars.

To understand where our economic levers are as we build the justification model, we need to get back to the use cases (Part 1, Part 2), and show how these can justify the SIEM/Log Management investment. We’ll start with two use cases, which are pretty straightforward to justify because there are hard costs involved.

Compliance Automation

The reality is most SIEM/Log Management projects come from the compliance budget. Thus compliance automation is a “must do” business justification, because regulatory or compliance requirements must be met. These are not options. For example, if your board of directors mandates new Sarbanes-Oxley controls, you are going to implement them. If your business accepts credit cards for Internet transactions, you are going to comply with the PCI Data Security Standard. But how do you justify a tool to make the compliance process more efficient? Get out your stopwatch and start tracking the time it takes you to prepare for these audits. Odds are you know how long it took to get ready for your last audit, and the auditor is going to continue looking over your shoulder – asking for more documentation on policies, processes, controls, and changes. The business case is based on the fact that the amount of time it takes to prepare for the audit is going to keep going up, and you need automation to keep those costs under control. Whether the audit preparation budget gets allocated for people or tools shouldn’t matter. So you pay for SIEM/Log Management with the compliance budget, but the value accrues to the security function and streamlines operations. Sounds like a win/win to us.
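To make the stopwatch argument concrete, here is a rough, back-of-the-envelope sketch of that compliance automation math. Every number below is a hypothetical placeholder – you would substitute your own audit-prep hours, loaded labor rates, and tool costs:

```python
# Hypothetical payback model for compliance automation.
# All figures are placeholders -- plug in your own audit-prep hours, rates, and quotes.

audit_prep_hours_per_year = 800      # hours spent gathering logs, reports, and evidence
loaded_hourly_rate = 90.0            # fully loaded cost per staff hour (USD)
expected_reduction = 0.5             # fraction of prep time automation is expected to remove

tool_cost_year_one = 75_000.0        # license + deployment (hypothetical)
tool_cost_ongoing = 20_000.0         # annual maintenance/operations (hypothetical)

annual_savings = audit_prep_hours_per_year * loaded_hourly_rate * expected_reduction
year_one_net = annual_savings - tool_cost_year_one
ongoing_net = annual_savings - tool_cost_ongoing

print(f"Estimated annual labor savings: ${annual_savings:,.0f}")
print(f"Year one net: ${year_one_net:,.0f}")
print(f"Ongoing annual net: ${ongoing_net:,.0f}")
```

With these placeholder numbers the year one net is negative, which is exactly why the argument rests on audit-prep time continuing to grow, not on a two-week payback.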
Operational Efficiency

Our next use case is about improving efficiency, and this is relatively straightforward to justify. If you look back at the past few years, the perimeter defenses of your organization have expanded significantly. This perimeter sprawl is due to purpose-built devices being implemented to address specific attack vectors. Think email gateway, web filter, SSL VPN, application-aware firewall, web application firewall, etc. All of these have a legitimate place in a strong perimeter. But each device requires management to set policies, monitor activity, and act on potential attacks. Each system requires time to learn, time to manage, and time to update – which requires people, and additional people aren’t really in the spending plan nowadays. Operational efficiency means less time


Is Twitter Making Us Dumb? Bloggers, Please Come Back

When I first started the Securosis blog back in 2006, I didn’t really know what to expect. I already had access to a publishing platform (Gartner), and figured blogging would let me talk about the sorts of things that didn’t really fit my day job. What I didn’t expect, what totally stunned me, was the incredible value of participating in a robust community holding intense debates, in the open, on the permanent record. Debates of the written word, which to be cogent in any meaningful way take at least a little time to cobble together and spell check. I realized that the true value of blogging isn’t that anyone can publish anything, but the inter-blog community that develops as we cross-link and cross-comment.

It’s how Mike Rothman and I went from merely nodding acquaintances at various social functions to full business partners. I met Chris Hoff when I blogged that I was rolling through his home town, and he then took me out to dinner. Since then we’ve paired up for 2 years of top-rated sessions at the RSA Conference, and become good friends. Martin McKeay went from some dude I’d never heard of to another close friend, with whom I now podcast on a weekly basis. And those three are just the tip of the list.

Blogging also opened my world in ways I could never have anticipated. This open dialog fundamentally changed opinions and positions by exposing me to a wider community. Gartner was great, but very insular. I talked with other Gartner analysts, Gartner customers, and vendors… all a self-selecting community. With blogging, I found myself talking with everyone from CEOs to high school students. At least I used to, because I feel like that community, that experience, is gone.

The community of interlinked blogs that made such an impact on me seems to be missing. Sure, we have the Security Blogger’s Network and the Meetup at RSA, but as I go through my daily reading and writing, it’s clear that we aren’t interacting at nearly the level of even 2 years ago. Fewer big debates, fewer comments (generally), and fewer discussions on the open record. I’m not the only one feeling the loss. Every Tuesday and Thursday we try to compile the best of the security web for the Securosis Incite and Friday Summary, and the pickings have been slim for a while now. There are only so many times we can link back to Gunnar, Bejtlich, or the New School. Heck, when we post the FireStarter on Monday, our goal isn’t to get comments on our site (although we like that), but to spur debate and discussion on everyone else’s sites.

As you can tell by the title, I think Twitter is a major factor. Our multi-post debates are now compressed into 140 characters. Not that I dislike Twitter – I love it (maybe too much) – but while it can replace a post that merely links to a URL, it can’t replace the longer dialog or discussions of blogging. I’m too lazy to run the numbers, but I’ve noticed a definite reduction in comments on our blog, and blogging in general, as Twitter rises in popularity. I’ve had people flat-out tell me they’ve given up on blogging to focus on Twitter. Correlation isn’t causation, and the plural of anecdote isn’t data, but anyone who was on the scene a few years ago easily sees the change.

When I brought this up in our internal chat room, Chris Pepper said:

It’s a good point that if you have a complicated thought, it’s probably better to stew on it and build a post than to type whatever you can fit in 140 characters, hit Return, then sigh with relief that you don’t have to think about it any more.
Dear Bloggers, Please come back. I miss you. -me


FireStarter: Killing the Next Generation

As a former marketing guy, I’m sensitive to meaningless descriptors that obfuscate the value a product brings to a customer. Seeing Larry Walsh’s piece on next generation firewalls versus UTM got my blood boiling, because it’s such a meaningless argument. It’s time we slay the entire concept of ‘next generation’ anything. That’s right, I’m saying it. The concept of a next generation is a load of crap.

The vendor community has taken to calling incremental iterations ‘next generation’ because they can’t think of a real reason customers should upgrade their gear. Maybe the new box is faster, so the 2% of users out there actually maxing out their gear get some relief. Maybe it’s a little more functional, or adds a bit more device support. Again, this hardly ever provides enough value to warrant an upgrade. But time and time again, we hear about next generation this or next generation that. It makes me want to hurl. I guess we can thank the folks at Microsoft, who perfected the art of forced upgrades with little to no value-add. Even today they continue to load office suites with feature after feature we don’t need. If you don’t believe me, open up that old version of Word 2003 and it’ll work just fine.

Let’s consider the idea of the “next generation firewall,” which I highlighted in last week’s Incite with announcements from McAfee and SonicWall. Basically SonicWall’s is bigger and McAfee’s does more with applications. I would posit that neither of these capabilities is unique in the industry, nor are they disruptive in any way. Which is the point. To me, ‘next generation’ means disruption of the status quo. You could make the case that Salesforce.com disrupted the existing CRM market with an online context for the application. A little closer to home, you could say the application white listing guys are poised to disrupt the endpoint security agent – that is, if they overcome the perception that the technology screws up the user experience. For these kinds of examples, I’m OK with ‘next generation’ for true disruption.

But here’s the real problem, at least in the security space: end users are numb. They hear ‘next generation’ puffery from vendors and they shut down. Remember, end users don’t care whether the technology is first, second, third, or tenth generation. They care whether a vendor can solve the problem. What example(s) do we have of a ‘next generation’ product/category really being ‘next generation’? Right, not too many. We can peek into the library and crack open the Innovator’s Dilemma again. The next generation usually emerges from below (kind of like UTM), targeting a smaller market segment with similar capabilities delivered at a much better price point. Eventually the products get functional enough to displace enterprise products, and that is your next generation.

Riddle me this, Batman: what am I missing here? And all you marketing folks lurking (I know you’re out there), tell me why you continue to stand on the crutch of ‘next generation’, as opposed to figuring out what is important to end users. I’d really like to know.

Photo credit: “BPL’s Project Next Generation” originally uploaded by The Shifted Librarian


Talking Database Assessment with Imperva

I will be presenting a webinar: “Understanding and Selecting a Database Assessment Solution” with Imperva this Wednesday, May 19th at 11am PST / 2pm EST. I’ll cover the deployment models, key features, and ways to differentiate assessment platforms. I’ll spend a little more time on applicability for compliance, as that is the key driver for adoption now, but cover other use cases as well. You can register and sign up for the webinar. As always, if you have questions you would like addressed, you can email me prior to the presentation.


Friday Summary: May 14, 2010

I was rummaging through the closet yesterday when I came across some old notebooks from college. Yes, I am a pack rat. One of the books contained notes from Computer Science 110: Algorithm Design. Most of the coursework was looking for ways to make algorithms more efficient, and to select the right algorithm to get the job done. I remember spending weeks on sorting routines: bubble sort, merge sort, heap sort, sorts based upon the Fibonacci sequence, Quicksort, and a few others. All of which we ran against sample data sets; comparing performance; and collecting information on best case, median, and worst case results. Obviously with a pre-sorted list they all ran fast, but depending on the size and distribution of the data set our results were radically different.

The more interesting discussion was the worst-case scenarios. One of the techniques for discovering them was the Adversary Technique. Basically the adversary would re-arrange the data to make it as difficult as possible to sort. The premise was that, knowing the algorithm compared elements (e.g., is X >= Y?), the adversary would re-arrange all the data elements into an order that forced the highest number of comparisons to be made. Some of the sorts were brilliant on average, but would be computing results until the end of time when confronted by a knowledgeable adversary.

All the sort algorithms are long since purged from my memory, and I can truthfully say I have never needed to develop a sorting routine in my entire career. But the adversary technique has been a very useful tool in designing code. I really started using a variant of that method for writing error-handling routines so they worked efficiently while still handling errors. What is the most difficult result I could send back? When you start trying to think of errors to send back to a calling application, it’s amazing what chaos you can cause. The first time I saw an injection attack – a malicious stream sent back from a .plan file – I thought of the intelligent adversary. This is also a pretty handy concept when writing communication protocols, where you have to establish a trust relationship during multi-phase handshaking – the adversary technique is very good for discovering logic flaws. The intelligent adversary teaches you to ask the right questions, and is useful for identifying unnecessary complexity in code. If you don’t do this already, try a little adversarial role-playing the next time you have design work.
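For anyone who never sat through that class, here is a minimal sketch of the adversary idea: a deliberately naive quicksort that always pivots on the first element. Hand it already-sorted data – the adversary’s choice for this pivot rule – and its comparison count degrades from roughly n log n to n(n-1)/2:

```python
# A deliberately naive quicksort (always pivots on the first element) to illustrate the
# adversary idea: already-sorted input forces the worst case for this pivot choice.
import random

comparisons = 0

def quicksort(items):
    global comparisons
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    less, greater = [], []
    for x in rest:
        comparisons += 1          # count every element-vs-pivot comparison
        (less if x < pivot else greater).append(x)
    return quicksort(less) + [pivot] + quicksort(greater)

n = 500
for label, data in [("random input     ", random.sample(range(n), n)),
                    ("adversarial input", list(range(n)))]:   # sorted list = adversary's pick
    comparisons = 0
    quicksort(data)
    print(label, comparisons)     # roughly n log n vs. n*(n-1)/2 = 124,750 for n = 500
```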
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich at Dark Reading: A New Way to Choose Database Encryption.
  • Adrian’s featured article on Database Activity Monitoring for Information Security Magazine.
  • Adrian quoted in Goldman Sachs Sued for Illegal Database Access.
  • Rich on the Digital Underground podcast with Dennis Fisher.
  • Other Securosis mentions: the MSDN SDL group’s response to Monday’s FireStarter. Robert Graham thinks we’re both full of $#!%. I confess that I am uncertain why Robert thinks our recommendations differ.

Favorite Securosis Posts

  • Rich: Unintended Consequences of Consumerization. One of the very first presentations I ever built as an analyst was on consumerization… mostly because I didn’t really know what I was doing at the time. But one tenet from that presentation still holds true – never underestimate the power of consumers, and we are all consumers.
  • Mike Rothman: We Have Ways of Making You … Use a Password. Yet another example of legislation gone wild…
  • David Mortman and Adrian Lane: FireStarter: Secure Development Lifecycle – You’re Doing It Wrong.

Other Securosis Posts

  • SAP Buys Sybase.
  • Incite 5/12/2010: the Power of Unplugging.
  • Help Build the Mother of All Data Security Surveys.
  • Download Our Kick-Ass Database Encryption and Tokenization Paper.

Favorite Outside Posts

  • Rich: Why I left Facebook. I’m still on Facebook, but I do nothing I remotely consider private there. I only stay on it until there is an alternative to keep me connected with old friends and family. Maybe that’s hypocritical considering some of my other privacy statements.
  • Mike Rothman: Getting the time dimension right. Russell helps us understand security metrics versus risk analysis. “But to make a judgement about security and make decisions about alternative security postures, we need a useful estimate of risk to decide how much security is enough.”
  • David Mortman: The Vulnerability Arms Race.
  • Adrian Lane: A Brief, Incomplete, and Mostly Wrong History of Programming Languages.

Project Quant Posts

  • DB Quant: Planning Metrics (Part 2).
  • DB Quant: Planning Metrics (Part 1).

Research Reports and Presentations

  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.
  • Report: Database Assessment.

Top News and Posts

  • HTML 5 and SQL Injection.
  • Cigital has announced the latest BSIMM. Now with three times the number of large development shops who publicly admit that they tend to follow best practices.
  • Anti-Malware Bypass. Interesting use of DoS to avoid detection.
  • Verizon’s Cloud Security Strategy.
  • Facebook and the never-ending privacy discussion. Personally, I used lilSnitch to block everything Facebook. End of discussion.
  • Building their army of hacker commandos, Chris and Jack are indoctrinating children with a weekly regimen of XSS and pummeling drills. Rumors spread: Hoff to become real-life Matthew Sobol. FBI claimed to be watching closely.
  • Open Source IDS. Beta available.
  • Few details, but Visa posted a warning about settlement fraud scams.
  • Stolen Laptop Exposed Data on 207K.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. Technically my favorite comment of the week was by David Mortman, professing shock that Andre Gironda actually agreed with someone, on a public forum no less! But alas, as he did not leave it on the blog, the award has to go to starbuck, in response to Secure Development Lifecycle – You’re Doing It Wrong:

“Before you know it, HR reps will be including “SDL certification” requirements on every engineering job description, without a clue what they are demanding or why, so let’s stop this train before it runs too far off the tracks.”

Damn right. By the way, I


Unintended Consequences of Consumerization

The ripple effect – how a small change creates a major exposure down the line – continues to amaze me. That’s why I enjoyed the NetworkWorld post on how the iPad brings a nasty surprise. The story is about how the ability of iPads to connect to the corporate network exposed a pretty serious hole in one organization’s network defenses: a minor change to the authentication mechanism for WiFi smart phones allowed unauthorized devices to connect to the corporate network. It’s an interesting read, but we really need to consider the issues with the story.

First, clearly this guy was not scanning (at all) for rogue devices, or even new devices on the network. That’s a no-no. In my React Faster philosophy, one of the key facets is to know your network (and your servers and apps too), which enables you to know when something is amiss. Like having iPads (unauthorized devices) connecting to your corporate network.

So how do you avoid this kind of issue? Yes, I suspect you already know the answer. Monitoring Everything gets to the heart of what needs to happen. I’ll also add the corollary that you should be hacking yourself to expose potential issues like this. Your run-of-the-mill pen test would expose this issue pretty quickly, because the first step involves enumerating the network and trying to get a foothold inside. But only if an organization systematically tries to compromise its own defenses.

Most importantly, this represented a surprise for the security manager. We all know surprise = bad for a security person. There are clear lessons here. The iPad won’t be the last consumer-oriented device attempting to connect to your network. So your organization needs a policy to deal with these new kinds of devices, as well as defenses to ensure random devices can’t connect to the corporate network – unless the risk of such behavior is understood and accepted. Every device connecting to the network brings risk. It’s about understanding that risk and allowing the business folks to determine whether the risk is worth taking.
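As a minimal illustration of the “know your network” point – not a substitute for real rogue-device detection – here is a sketch that diffs the MAC addresses currently seen on the network against a known-device inventory. The file names and formats are hypothetical; you would feed it whatever your ARP/DHCP data actually looks like:

```python
# Minimal sketch of "know your network": diff currently-seen MAC addresses against a
# known-device inventory and flag anything new. File names/formats are hypothetical.

def load_macs(path):
    """Read one MAC address per line, ignoring blanks and comment lines."""
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

known = load_macs("known_devices.txt")      # your maintained inventory (hypothetical file)
seen = load_macs("current_arp_macs.txt")    # e.g., exported from your ARP/DHCP data

rogues = seen - known
if rogues:
    print("Unknown devices on the network:")
    for mac in sorted(rogues):
        print("  ", mac)
else:
    print("No unknown devices seen.")
```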


Incite 5/12/2010: the Power of Unplugging

I’m crappy at vacations. It usually takes me a few days to unwind and relax, and then I blink and it’s time to go home and get back into the mess of daily life. But it’s worse than that – even when I’m away, I tend to check email and wade through my blog posts and basically not really disconnect. So the guilt is always there. As opposed to enjoying what I’m doing, I’m worried about what I’m not doing and how much is piling up while I’m away. This has to stop. It’s not fair to the Boss or the kids or even me. I drive pretty hard and I’ve always walked the fine line between passion and burnout.

I’m happy to say I’m making progress, slowly but surely. Thanks to Rich and Adrian, you probably didn’t notice I’ve been out of the country for the past 12 days and did zero work. But I was, and it was great. Leaving the US really forces me to unplug, mostly because I’m cheap. I don’t want to pay $1.50 a minute for cell service and I don’t want to pay the ridonkulous data roaming fees. So I don’t. I just unplug. OK, not entirely. When we got to the hotel at night, I usually connected to the hotel network to clean out my email, quickly peruse the blog feeds, and call the kids (Skype FTW). Although WiFi is usually $25-30 per day and locked to one device, so I probably only connected half the days we were away.

The impact on my experience was significant. When I was on the tour bus, or at dinner with my friends, or at an attraction – I didn’t have my head buried in the iWhatever. I was engaged. I was paying attention. And it was great. I always prided myself on being able to multi-task, which really means I’m proficient at doing a lot of things poorly at the same time. When you don’t have the distractions or interruptions or other shiny objects, it’s amazing how much richer the experience is. No matter what you are doing.

Regardless of the advantages, I suspect unplugging will always remain a battle for me, even on vacation. Going out of the US makes unplugging easy. The real challenge will be later this summer, when we do a family vacation. I may just get a prepay phone and forward my numbers there, so I have emergency communications, but I don’t have the shiny objects flashing at me…

But now that I’m thinking about it, why don’t more of us unplug during the week? Not for days at a time, but hours. Why can’t I take a morning and turn off email, IM, and even the web, and just write. Or think. Or plan world domination. Right, the only obstacle is my own weakness. My own need to feel important by getting email and calls and responding quickly. So that’s going to be my new thing. For a couple-hour period every week, I’m going to unplug. Am I crazy? Would that work for you? It’s an interesting question. Let’s see how it goes. – Mike

Photo credits: “Unplug for safety” originally uploaded by mag3737

Incite 4 U

Attack of the Next Generation Firewalls… – Everyone hates the term ‘next generation’, but every vendor seems to want to convince the market they’ve got the next best widget and it represents the new new thing. Example 1 is McAfee’s announcement of the next version of Firewall Enterprise, which adds application layer protection. Not sure why that’s next generation, but whatever. It makes for good marketing. Example 2 is SonicWall’s SuperMassive project, which is a great name, but seems like an impedance mismatch, given SonicWall’s limited success in the large enterprise. And it’s the large enterprise that needs 40Gbps throughput. My point isn’t to poke at marketing folks. OK, maybe a bit. But for end users, you need to parse and purge any next generation verbiage and focus on your issues. Then deploy whatever generation addresses the problems. – MR

Cry Havok and Let Slip the Lawyers – I really don’t know what to think of the patent system anymore. On one hand are the trolls who buy IP, wait for someone else to actually make a product, and then sue their behinds. On the other is the fact that patents do serve a valuable role in society, providing economic incentive for innovation – but only when managed well. I’m on the road and thus haven’t had a chance to dig into F5’s lawsuit against Imperva for patent infringement on the WAF. So I don’t know if this is the real deal, or a play to bleed funds or sow doubt with prospects, but I do know who will win in the end… the lawyers. – RM

Bait and Switch – According to The Register, researchers have successfully demonstrated an attack that bypasses all AV protection. “It works by sending them a sample of benign code that passes their security checks and then, before it’s executed, swaps it out with a malicious payload.” and “If a product uses SSDT hooks or other kind of kernel mode hooks on similar level to implement security features it is vulnerable.” I do not know what the real chances for success are, but the methodology is legit. SSDT has been used for a while now as an exploit path, but this is the first time I have heard of someone tricking what are essentially non-threadsafe checker utilities. A simple code change to the scheduler priorities will fix the immediate issue, but undoubtedly with side effects to application responsiveness. What most interests me about this is that it illustrates a classic problem we don’t see all that often: timing attacks. Typically this type of hack requires intimate knowledge of how the targeted code works, so it is less common. I am betting we’ll see this trick applied to other applications in the near future. –
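The Bait and Switch item above describes what is essentially a time-of-check-to-time-of-use race. The sketch below is not the SSDT-hook exploit itself, just a minimal, generic illustration of the pattern: one thread checks a file and approves it, while another swaps the contents in the window before the “execute” step reads it:

```python
# Minimal, generic illustration of a time-of-check-to-time-of-use race.
# This is NOT the SSDT-hook exploit itself -- just the general "check, then swap" pattern.
import os
import tempfile
import threading
import time

path = os.path.join(tempfile.mkdtemp(), "payload.txt")
with open(path, "w") as f:
    f.write("BENIGN")

def scanner_then_executor():
    with open(path) as f:
        content = f.read()
    print("scan result:", "clean" if content == "BENIGN" else "blocked")
    time.sleep(0.5)                      # window between the check and the use
    with open(path) as f:
        print("what actually ran:", f.read())

def attacker():
    time.sleep(0.1)                      # wait until the scan has already passed
    with open(path, "w") as f:
        f.write("MALICIOUS")

t1 = threading.Thread(target=scanner_then_executor)
t2 = threading.Thread(target=attacker)
t1.start(); t2.start()
t1.join(); t2.join()
```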


We Have Ways of Making You … Use a Password

MSNBC has an interesting news item: a German court is ordering all wireless routers to have a password, or the owners will be fined if it is discovered that someone used their connection illegally. From the post:

Internet users can be fined up to euro 100 ($126) if a third party takes advantage of their unprotected WLAN connection to illegally download music or other files, the Karlsruhe-based court said in its verdict. “Private users are obligated to check whether their wireless connection is adequately secured to the danger of unauthorized third parties abusing it to commit copyright violation,” the court said.

OK, so this is yet another lame attempt to stop people from sharing music and movies by trying to make the ‘ISP’ (a router owner in this case) an accessory to the crime. I get that, but a $126.00 fine – in the event someone is caught using your WiFi illegally and is prosecuted – is not a deterrent. But there are interesting possibilities to consider. Would the fine still apply if the password was ‘1234’? What if they had a password, but used WEP? Some routers, especially older routers, use WEP as the default. It’s trivial to breach and gain access to the password, so is that any better? Do we fine the owner of the router, or do we now fine the producer of the router for implementing crappy security? Or is the manufacturer covered by their 78-page EULA?

Many laws start out benign, just to get a foothold and set precedent, then turn truly punitive over time. What if the fine was raised to $1,260, or $12,600? Would that alter your opinion? I cannot see an instance where this law makes sense as a deterrent to the actions it levies fines against.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.