Incite 2/20/2013: Tartar Wars

5 years. It doesn’t seem that long. It seems like yesterday I was on the phone screaming at the office manager of my (previous) dentist. He told the Boss something and then backtracked on it, and I had to write a check to fix the problem. I had just dropped my dental insurance, and that little optional procedure wasn’t going to be covered as he had said it would. I told them to pound sand, which was a good move – I settled for perhaps 30% of the cost 18 months later, before it went to collection. But at the same time, I dropped the dentist. He violated my trust and that was that.

Though I seemed to have forgotten to find a new one. This was pretty uncharacteristic – I had been going every 6 months for cleanings since I was a kid. I had a handful of cavities but my teeth were in great shape. But none of my pals had a dentist they liked, so I kind of forgot about it. No big deal, I’ll find one. Sooner or later. And one year became two years, which then turned into 5.

Turns out a friend of ours recently moved his dental practice around the corner, so I had a new guy I trusted. Combined with the call I got last week about the Boss needing a root canal (she hadn’t been in 5 years either), I knew it was time. The fact that Arthur Treacher’s famous Tartar Sauce was caked onto my teeth notwithstanding, it was time to pay my penance and go in.

First of all, my guy does it right. Most folks hate the dentist, so he staffs his office with the nicest people on Earth. I wasn’t in a great mood, and within a minute they had me smiling and chatting it up. That is nothing short of amazing, given my general state of grumpiness. They were all super helpful, and by the time my hygienist got through my health forms and X-rays, I knew her life story. Then she proceeded to sandblast my teeth for 35 minutes to clean them off. Evidently a lot of crap sticks to your teeth over 5 years. Yes, it was uncomfortable. But penance is never pleasant. At least she gave my gums a rest halfway through.

A little polish, a bunch of floss, and I was ready to meet with the big man. I was a little apprehensive because I figured with all the plaque build-up my teeth must be a train wreck. He cracks some jokes and then pokes and prods with his tools. Oh crap, here it comes… 3 new cavities and about 5 other areas to watch. Wow, it could have been a lot worse. I guess all that fluoride my Mom made me take when I was a kid worked okay. Of course he did mention my habit of grinding my teeth. Evidently that’s my subconscious way of dealing with the stress and paranoia of being me. It’s not causing too much damage right now, but I’ll need to be more aware and cut it out. Evidently I need to find another stress outlet. Maybe some vendor will have a nice squeeze toy or punching bag to give away at the RSAC next week.

He also made an impassioned plea for me to floss more. I hate flossing. I mean hate. But hey, if it means I won’t have to get more fillings next year and the year after that, then I’ll just do it. I have declared war on tartar, and that damn floss is a key armament in my arsenal, so I have no choice. A man’s got to do what a man’s got to do.

–Mike

Photo credits: Thong Lor dentist originally uploaded by Mrs Hilksom

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Network-based Threat Intelligence
  • Quick Wins with NBTI
  • Following the Trail of Bits
  • Understanding the Kill Chain

Understanding Identity Management for Cloud Services
  • Architecture and Design
  • Integration

Newly Published Papers
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

Attribution. Meh. Indicators. WIN!: With the Mandiant APT1 report making mass market waves yesterday (Rich covered it, and Adrian has some thoughts below), attribution is now big news. John Sawyer discussed this on Dark Reading last week, of course quoting the Mandiant PR machine. His point is that attribution is hard, and the kind of profiling and work done by Mandiant is required to really be sure who a specific attacker is. And although Jeffrey Carr brings up some decent points about considering other actors before attributing (though he has no way to know to what degree Mandiant considered competing hypotheses), the reality is that Mandiant did the work and showed with reasonable certainty that the specific actor is who they think it is. But will this ultimately do anything besides force the attackers to change tactics and reconsider their OpSec? Probably not, but that misses the point. What will be most valuable is the hundreds of indicators published with the research (a toy example of putting such indicators to work appears at the end of this post). Kudos to Mandiant for that. – MR

Siri, build me a cloud: If you have been paying any attention to anything I have written or said on cloud security over the past couple of years (something I’m definitely not about to assume), you know I’m a huge fan of cloud automation and software defined security. We really cannot manage cloud security manually, and need to take lessons from the whole DevOps movement to become much more efficient in protecting cloud instances. One thing I have mentioned frequently is use of tools like Chef and Puppet for configuration automation (in the
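To make the indicator point from the first item concrete, here is a minimal sketch (in Python) of consuming a published list of C&C domains against your own DNS logs. The file names and log format are hypothetical, and real indicator feeds cover far more than domains (IPs, certificates, file hashes), but the mechanics are this simple:

```python
# Hypothetical inputs: a published indicator list of C&C domains
# (one per line) and your own DNS log as "client_ip domain" lines.
with open("apt1_domains.txt") as f:
    bad_domains = {line.strip().lower().rstrip(".") for line in f if line.strip()}

with open("dns_queries.log") as f:
    for line in f:
        client, domain = line.split()
        # Normalize and check each lookup against the indicator set
        if domain.lower().rstrip(".") in bad_domains:
            print(f"{client} resolved known C&C domain: {domain}")
```

The point isn’t the script – it’s that published indicators turn “who attacked us?” into a question you can actually ask your own data.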


Cars, Babes, and Money: It’s RSAC Time

Now that we have posted our RSA Conference Guide, we can get back to lampooning the annual ritual of trying to get folks to scan their badges on the show floor. Great perspective here from Ranum on the bad behavior you’ll see next week, all in the name of lead generation. I’m not sure if I should be howling or repulsed by this idea:

“this afternoon I was standing in my studio looking at some high-heeled stripper shoes (in my size), some fishnet stockings, and a knife-pleated Japanese Schoolgirl skirt (also in my size) and thinking “It’s too cold to do this …” Or something like that. My plan was to take a photograph of myself in “booth uniform” from the waist down, and my normal business-casual-slacker from the waist up. Because I threatened my boss that I’d work our booth at the conference wearing high heels and stockings.”

Ranum in high heels and stockings is probably a pretty effective way to get out of jury duty as well. Marcus figures booth babes with platform shoes establish solid security credibility, right? What about vehicles?

“I also wanted to see if we could get an old WWII Sherman Tank to park by our booth, because apparently having a ridiculously irrelevant vehicle parked at your booth says a great deal about how well your products work.”

I wonder how much the union workers at Moscone would charge to place a Sherman tank on the show floor? But more seriously, what do these irrelevant vehicles have to do with security? Damn Ranum, asking these kinds of questions:

“How does dollars spent, length of inseam, or miles per hour correlate to telling us something useful about: The quality of the product? How well it meets customers’ needs? How easy the product is to use? The company’s ability to innovate? Actually – it tells me quite a lot. It tells me I’m looking at a company that has a marketing organization that’s as out of touch as the management team that approved that booth set-up. Here’s a good idea: replace the Ferrari with a cardboard cut-out of a Ferrari and use the money you just saved to hire a new marketing team.”

But evidently there is another way:

“And I remember how, last year, I went by Palo Alto’s booth and Nir Zuk, the founder, was doing the pitches to a massive crowd – and answering some pretty crunchy technical questions, too. (No: Nir was not in a miniskirt.) That’s the kind of performance that would impress me if I were shopping for a company to invest in on their IPO. That’s the kind of performance that might interest me enough to take a look at their product – instead of their founder’s butt.”

Though if a security company founder has a butt worth looking at, well, I’m probably OK with that… Yes, I’m kidding. See you next week at RSAC…


Network-Based Threat Intelligence: Quick Wins with NBTI

As we get back into Network-Based Threat Intelligence, let’s briefly revisit our first two posts. We started by highlighting the Kill Chain, which delved into the typical attack process advanced malware uses to achieve the attacker’s mission – usually some kind of data exfiltration. Next we asked the 5 key questions (who, what, where, when, and how) to identify indicators of an advanced malware attack that can be captured by monitoring network traffic. With these indicators we can deploy sensors to monitor network traffic, and hopefully identify devices exhibiting bad behavior before real damage and exfiltration occur. That’s the concept behind the Early Warning System.

Deployment

As described, network-based threat intelligence requires monitoring key network segments for indicators of attack traffic (typically command and control). Many organizations have extensive and sprawling network infrastructure, so you probably cannot monitor everything initially. So it’s about prioritizing networks to give yourself the best chance to get the Quick Win and hopefully break the Data Breach Triangle. So where do you start?

The first and easiest place to start monitoring is your egress pipes to the Internet. Today’s malware systematically uses downloaders to get the latest and greatest attack code, which means the compromised device needs to communicate with the outside world at some point. This Internet communication offers your best opportunity to identify devices as compromised, if you monitor your egress networks and can isolate these communications. Besides providing an obvious choke point for identifying command and control traffic, egress connections tend to carry less bandwidth than internal network segments, making egress monitoring more practical than full internal monitoring.

We have long advocated full network packet capture to enable advanced analytics and forensics on network traffic. As part of our React Faster and Better research, we described the Full Packet Capture Sandwich: deploying network capture devices on the perimeter and in front of particularly critical data stores. That approach is completely synergistic with network-based threat intelligence – you are already capturing the network traffic, so you can look for command and control indicators within it. Of course, if full packet capture isn’t deployed (perhaps because it’s beyond the sophistication of your operations team), you can monitor the networks using purpose-built sensors looking specifically for these indicators. In that scenario real-time network-based threat intelligence feeds integrated into the system are critical, because you only get one chance to identify C&C traffic – you aren’t capturing it for later.

Another place for network traffic monitoring is internal DNS infrastructure. As described previously in the series, DNS request patterns can indicate domain generation algorithms and/or automated (rather than human) connection requests to the C&C network (a toy example of one such heuristic appears below). Unless your organization is a telecom carrier you won’t have access to massive amounts of DNS traffic, but large enterprises running their own DNS can certainly identify trends and patterns within their infrastructure by monitoring DNS.

Finally, in terms of deployment, you will always have the push/pull of inline vs. out-of-band approaches to network security.
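Here is the toy DNS example promised above: a minimal Python sketch of one common heuristic for spotting DGA-style lookups – scoring the character entropy of the queried label, since machine-generated domains tend to look random. The threshold, length cutoff, and sample log entries are all invented for illustration; a real sensor would combine this with NXDOMAIN rates, query timing, and threat intelligence feeds.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Character-level entropy; DGA-generated labels tend to score high."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain, entropy_threshold=3.5):
    """Crude heuristic: long, high-entropy second-level labels are suspicious.
    The threshold is illustrative, not tuned."""
    label = domain.rstrip(".").split(".")[0]
    return len(label) >= 12 and shannon_entropy(label) > entropy_threshold

# Hypothetical stream of (client_ip, queried_domain) pairs from DNS logs
queries = [
    ("10.1.2.3", "x7k2q9zpt4vbn1.com"),
    ("10.1.2.4", "www.example.com"),
]
for client, domain in queries:
    if looks_like_dga(domain):
        print(f"possible DGA lookup from {client}: {domain}")
```

No single lookup proves anything – which is exactly why this kind of signal should be just one voice in the “jury” described below.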
Remember that network-based threat intelligence is a largely reactive approach for identifying command and control traffic, which indicates a compromised device. In fact the entire Early Warning System concept is based on shortening the window between compromise and detection, rather than trying to prevent compromise. Of course it would be even better to identify C&C traffic on the egress pipe and block it, preventing compromised devices from communicating with attackers. But we need to be cautious with the bane of every security practitioner: the false positive. So before you block traffic or kill an IP session, you need to be sure you are right. Most organizations want the ability to disrupt attack traffic, but very few actually use it. Most “active network controls”, including network-based malware detection devices, are implemented in monitoring/alerting mode, because most practitioners consider impacting a legitimate connection far worse than missing an attack.

A jury of (network) peers

So you have deployed network monitors – what now? How can we get that elusive Quick Win to show immediate value from network-based threat intelligence? You want to identify compromised devices based on communication patterns, but you don’t want to wrongly convict or disrupt innocent devices. So let’s dust off an analogy dating back to the anti-spam battles: the jury. During the early spam battles, analyzing email to identify unsolicited messages (spam) involved a number of analysis techniques (think 30-40) used to determine intent. None of those techniques was 100% reliable alone, but in combination, using a reasonable algorithm to properly weight each technique’s effectiveness, spam could be detected with high reliability. That “spam cocktail” still underlies many of the email security products in use today.

You will use the same approach to weigh all the network-based malware indicators and determine whether a device is compromised, based on what you see from the network. It’s another cocktail approach, where each jury member looks at a different indicator to determine guilt or innocence. The jury foreman – your analysis algorithm – makes the final determination of compromise (a toy sketch follows below). By analyzing all the traffic from your key devices, you should be able to identify the clearly compromised ones.

This type of detection provides the initial Quick Win. You had a compromised device that you didn’t know was compromised until you monitored the traffic it generated. That’s a win for monitoring & analysis! Don’t worry about whether you will find anything with this approach. In just about any reasonably-sized enterprise, the network will show a handful to a few dozen compromised devices. Nothing personal, folks, but we have yet to come across an environment of a few thousand machines without any compromised devices. It’s just statistics. Employees click on stuff, and that’s all she wrote. The real questions are how well you know which devices are compromised, how severe the issues are, and how quickly you have to take action.

Intelligence-driven focus

Once you have identified which devices you believe have been compromised, your incident response process kicks in. Given resource constraints, it would likely be impractical to fully investigate every device, analyze each one, isolate
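As promised, a minimal sketch of the jury idea, assuming a handful of made-up indicator names and weights: each network-based indicator votes, the foreman combines the weighted votes, and devices over a threshold are flagged for response. Real systems weigh many more indicators with tuned weights; this just shows the shape of the cocktail.

```python
# Hypothetical indicator weights: how strongly each network observation
# suggests compromise. Names and values are invented for illustration.
INDICATOR_WEIGHTS = {
    "known_cc_destination": 0.9,  # traffic to a known C&C address
    "dga_style_dns": 0.6,         # DGA-looking DNS lookups
    "beaconing_interval": 0.5,    # regular, machine-like callbacks
    "odd_port_protocol": 0.3,     # e.g., non-DNS traffic over port 53
}

def compromise_score(observed):
    """The 'jury foreman': combine weighted votes into a single score.
    A simple capped sum; real products use better math."""
    return min(1.0, sum(INDICATOR_WEIGHTS.get(i, 0.0) for i in observed))

def verdict(device, observed, threshold=0.7):
    score = compromise_score(observed)
    if score >= threshold:
        print(f"{device}: likely compromised (score {score:.2f}) -> kick off response")
    else:
        print(f"{device}: watch list (score {score:.2f})")

verdict("laptop-042", ["dga_style_dns", "beaconing_interval"])  # flagged
verdict("desktop-117", ["odd_port_protocol"])                   # just watched
```

Note the deliberate bias: one strong indicator or two moderate ones convict, while a single weak signal just earns a spot on the watch list – exactly the false-positive caution discussed above.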


Trust us, our CA is secure

Given the number of recent high profile CA compromises, it seems some of the folks who milk the SSL cash cow figured they should do something to soothe customer concerns about integrity. So what to do? What to do? Put a security council together to convince customers you take security seriously. From Dark Reading’s coverage of the announcement:

“We felt SSL needed a leader,” says Jeremy Rowley, associate general counsel for DigiCert, which, along with Comodo, Entrust, GlobalSign, Go Daddy, Symantec, and Trend Micro, today officially launched the new organization. “We felt a group of CAs, rather than one CA,” was a better approach, he says.

So the group will push for OCSP Stapling, and then other technologies to be determined. But it’s not a standards body. So what is it again?

“CASC is not a standards body. Instead, we will work on helping people understand the critical policies on SSL and … promote best practices in advancing the trust of CA operations,” DigiCert’s Rowley says. “Our main goal is to be an authoritative resource on SSL.”

Guess these guys forgot that the weakest link breaks the chain. And out of the hundreds of root certs in the typical browser, one of those CAs will be the next weakest link.

Photo credit: “Trust us, we’re expert” originally uploaded by Phauly


RSA Conference Guide 2013: Security Management and Compliance

Given RSA’s investment in security management technology (cough, NetWitness, cough) and the investments of the other big RSAC spenders (IBM, McAfee, HP), you will see a lot about the evolution of security management this year. We alluded to this a bit when talking about Security Big Data Analytics in our Key Themes piece, but let’s dig in a bit more…

SIEM 3.0? We can’t even get SIEM 1.0 working.

The integration of logs and packet capture is now called Security Analytics, and we will hear a lot about how SIEM is old news and needs to evolve into Security Analytics to process, index, search, and report on scads of data. Make that two scads of data. So the buzz at the show will be all about NoSQL data structures, MapReduce functions, Pigs, and all sorts of other things that are basically irrelevant to getting your job done. Instead of getting caught up in the tsunami of hype, focus at the show on a pretty simple concept: how are these new tools going to help you do your job better, today or maybe tomorrow? Don’t worry about the 5-year roadmap of technology barely out of the lab. Can the magic box tell you things you don’t know? Can it look for stuff you don’t know to look for? You need to understand enough to make sure you aren’t trading one boat anchor, which you could never get to work, for another shinier anchor. So focus heavily on your use cases for the tool. You know, boring and unsexy things like alerting, forensics, and reporting, as we discussed in Selecting SIEM and Security Management 2.0 in days gone by. We do expect these new data models, analysis capabilities, and the ability to digest packet traffic and other data sources to make a huge difference in the effectiveness of security management platforms. But it’s still early, so keep a skeptical eye on show-floor marketing claims.

Deeper Integration (Big IT’s Security Revenge)

Big IT got religion over the past two years about how important security is to things like, well, everything. So they wrote big checks, bought lots of companies, and mostly let them erode and hemorrhage market share. The good news is that at least some of the Big IT players learned the error of their ways, reorganized for success, and have done significant integration – all aimed at positioning their security management platforms in the middle of a bunch of complementary product lines providing application, network, endpoint, and data security. Of course they all pay lip service to heterogeneity and coopetition, but really they hate both. They want to sell you everything, with lock-in, and they are finally starting to provide arguments for doing it their way. Back in the real world, you cannot just forklift the entire installed base of security technologies you have implemented over the years. But that doesn’t mean you have to tell either your incumbent or their competitors. Use better product integration as leverage when renewing or expanding controls. And especially for more mature technologies, looking at an integrated solution from a Big IT/Security player may be a pretty good idea.


Don’t Bring BS to a Data Fight

Thanks to a heads-up from our Frozen Tundra correspondent, Jamie Arlen, I got to read this really awesome response by Elon Musk of Tesla refuting the findings of a NYT car reviewer, A Most Peculiar Test Drive:

“After a negative experience several years ago with Top Gear, a popular automotive show, where they pretended that our car ran out of energy and had to be pushed back to the garage, we always carefully data log media drives. While the vast majority of journalists are honest, some believe the facts shouldn’t get in the way of a salacious story. The logs show again that our Model S never had a chance with John Broder.”

Logs? Oh crap. You think the reviewer realized Tesla would be logging everything? Uh, probably not. Then Musk goes through all the negative claims and pretty much shows the reviewer to be either not very bright (to drive past a charging station when the car clearly said it needed a charge) or deliberately trying to prove his point, regardless of the facts.

I should probably just use Jamie’s words, as they are much better than mine. So courtesy of Jamie Arlen:

“It’s one of those William Gibson moments. You know, where ‘the future is here, it’s just not evenly distributed yet.’ As more ‘things in the world’ get smart and connected, Moore’s Law type interactions occur. The technology necessary to keep a Tesla car running and optimized requires significant monitoring and logging of all control systems, which has an unpleasant side effect for the reviewer. The kicker (for me) in all of this is the example that the NYT writer makes of himself: Sorry dude, the nerds have in fact inherited the earth – if you want to play a game with someone who excels in the world of high-performance cars and orbital launch systems simultaneously, you need to be at least as smart as your opponent. Mr. Broder – you’ve cast yourself as Vizzini, and yes, Elon does make a dashing Dread Pirate Roberts.”

Vizzini. Well played, Mr. Arlen. Well played. But Jamie’s point is right on the money – these sophisticated vehicle control systems may be intended to make sure the systems are running as they should, but clearly a lot can be done with the data after something happens. How about placing a car at the scene of a crime? Yeah, the possibilities are endless, but I’ll leave those discussions to Captain Privacy. I’m just happy data won over opinion in this case.

UPDATE: It looks like we will get to have a little he said/she said drama here, as Rebecca Greenfield tells Broder’s side of the story in this Atlantic Wire post. As you can imagine, the truth probably is somewhere in the middle.


RSA Conference Guide 2013: Endpoint Security

The more things change, the more they stay the same. Endpoint security remains predominantly focused on dealing with malware, and the bundling continues unabated. Now we increasingly see endpoint systems management capabilities integrated with endpoint protection, since it finally became clear that an unpatched or poorly configured device may be more of a problem than fighting off a malware attack. And as we discuss below, mobile device management (MDM) is next in the bundling parade. But first things first: advanced malware remains the topic of the day, and vendors will have a lot to say about it at RSAC 2013.

AV Adjunctivitus

Last year we talked about the Biggest AV Loser, and there is some truth to that. But it seems most companies have reconciled themselves to the fact that they still need an endpoint protection suite to get the compliance checkbox. Endpoint protection vendors, of course, haven’t given up, and continue to add incremental capabilities to deal with advanced attacks. But the innovation is happening outside endpoint protection. IP reputation is yesterday’s news. As we discussed in our Evolving Endpoint Malware Detection research last year, it’s no longer about what the malware file looks like, but all about what it does. We call this behavioral context, and we will see a few technologies addressing it at the RSA Conference. Some integrate at the kernel level to detect bad behavior, some replace key applications (such as the browser) to isolate activity, and others use very cool virtualization technology to keep everything separate. Regardless of how the primary technology works, these secondary bits provide a glimmer of hope that someday we might be able to stop advanced malware. Not that you can really stop it, but we need something better than trying to get a file signature for a polymorphic attack.

Also pay attention to proliferation analysis to deal with the increasing amount of VM-aware malware. Attackers know that all these network-based sandboxes (network-based malware detection) use virtual machines to explode the malware and determine whether it’s bad. So the malware does a quick check, and when it is executed in a VM it does nothing. Quite spiffy. A file that won’t trigger in the sandbox is likely to wreak havoc once it makes its way onto a real device. At that point you can flag the file as bad, but it might already be running rampant through your environment. It would be great to know where that file came from and where it’s been, with a list of devices that might be compromised. Yup, that’s what proliferation analysis does (a toy sketch appears at the end of this post), and it’s another adjunct we expect to become more popular over the next few years.

Mobile. Still management, not security

BYOD will be hot hot hot again at this year’s RSA Conference, as we discussed in Key Themes. But we don’t yet see much malware on these devices. Sure, if someone jailbreaks their device all bets are off. And Google still has a lot of work to do to provide a more structured app environment. But with mobile devices the real security problem is still management. It’s about making sure the configurations are solid, only authorized applications are loaded, and the device can be wiped if necessary. So you will see a lot of MDM (mobile device management) at the show. In fact, a handful of independent companies are growing like weeds, because any company with more than a dozen or so folks has a mobile management problem. But you will also see all the big endpoint security vendors talking about their MDM solutions.
Like full disk encryption a few years ago, MDM is being acquired and integrated into endpoint protection suites at a furious clip. Eventually you won’t need to buy a separate MDM solution – it will just be built in. But ‘eventually’ means years, not months. Current bundled endpoint/MDM solutions are less robust than standalone solutions, but as consolidation continues the gap will shrink, until MDM is eventually just a negotiating point in endpoint protection renewal discussions.

We will also see increasing containerization of corporate data. Pretty much all organizations have given up on trying to stop important data from making its way onto mobile devices, so they are putting the data in walled gardens instead. These containers can be wiped quickly and easily, and allow only approved applications to run within the container with access to the important data. Yes, it effectively dumbs down mobile devices, but most IT shops are willing to make that compromise rather than give up control over all the data.

The Increasingly Serious “AV Sucks” Perception Battle

We would be the last guys to say endpoint security suites provide adequate protection against modern threats. But statements that they provide no value aren’t true either. It all depends on the adversary, the attack vector, the monitoring infrastructure in place to react faster and better, and most importantly on complementary controls. Recently SYMC took a head shot when the New York Times threw them under the bus over the NYT breach. A few days later Bit9 realized that Karma is a Bit9h, when they apparently forgot to run their own software on some internal devices and were breached. I guess what they say about the shoemaker’s children is correct.

It will be interesting to see how much the endpoint protection behemoths continue their idiotic APT defense positioning. As we have said over and over, that kind of FUD may sell some product, but it is a short-sighted way to manage customer expectations. Customers will get hit, and then be pissed when they realize their endpoint protection vendor sold them a bill of goods. To be fair, endpoint protection folks have added a number of new capabilities to more effectively leverage the cloud, the breadth of their customer bases, and their research capabilities, and to improve detection – as discussed above. But that doesn’t really matter if a customer isn’t using the latest and greatest versions of the software, or if they don’t have sufficient additional controls in place. Nor will it convince customers who already believe endpoint tools are inherently weak. They can ask Microsoft about that – most folks
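To illustrate the proliferation analysis idea promised above: a toy Python sketch that indexes which devices have been seen with a given file hash, so that when a file is later judged malicious you can immediately list the devices that may already be compromised. The hash values, device names, and telemetry feed are all hypothetical.

```python
from collections import defaultdict

# hash -> set of devices seen with that file; in practice this index
# is fed continuously by endpoint and network telemetry
sightings = defaultdict(set)

def record_sighting(file_hash, device):
    sightings[file_hash].add(device)

def proliferation(file_hash):
    """Once a file is judged malicious, list everywhere it has been seen."""
    return sorted(sightings.get(file_hash, set()))

# Hypothetical telemetry collected before any verdict came in
record_sighting("9f2b7a", "laptop-042")
record_sighting("9f2b7a", "desktop-117")

# The sandbox later flags hash 9f2b7a as malicious:
print("possibly compromised devices:", proliferation("9f2b7a"))
```

The value is in collecting the sightings before you know the file is bad – the same lesson as full packet capture: you can’t retroactively ask questions of data you never kept.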


Incite 2/13/2013: Baby(sitter) on Board

The Boss and I don’t get out to see movies too often. At least not for the last 12 years or so. It was hard to justify paying a babysitter for two extra hours so we could go see a movie. Quick dinner? Sure. Party with friends? Absolutely. But a movie, not so much. We’d wait until Grandma came to visit, and then we’d do things like see movies and have date nights.

But I’m happy to say that’s changing. You see, XX1 is now 12, which means she can babysit for the twins. We sent her to a day-long class on babysitting, where she learned some dispute resolution skills, some minor first aid, and the importance of calling an adult quickly if something goes south. We let her go on her maiden voyage New Year’s Eve. We went to a party about 10 minutes from the house, so worst case we could get home quickly. But no worries – everything went well. Our next outing was a quick dinner with some friends very close to the house. Again, no incidents at all. We were ready to make the next jump. That’s right, time for movie night!

We have the typical discussions with XX1 about her job responsibilities. She is constantly negotiating for more pay (wonder where she got that?), but she is unbelievably responsible. We set a time when we want the twins in bed, and she sends us a text when they are in bed. The twins respect her authority when she’s in babysitting mode, and she takes it seriously. It’s pretty impressive. Best of all, the twins get excited when XX1 is babysitting. Maybe it’s because they can watch bad TV all night. Or bang away on their iTouches. But more likely it’s because they feel safe and can hang out and have a good time with their siblings.

For those of you (like me) who grew up in a constant state of battle with your siblings, it’s kind of novel. We usually have to set up an Aerobed over the weekend, so all three kids can pile into the same room for a sleepover. They enjoy spending time together. Go figure. Sure, it’s great to be able to go out and not worry about paying a babysitter some ungodly amount, which compounds the ungodly amount you need to pay to enjoy Hollywood’s finest nowadays. But it’s even better to know that our kids will only grow closer through the rest of their lives. As my brother says, “You can pick your friends, but you can’t pick your family!” I’m just glad my kids seem to be okay with the family they have.

–Mike

Photo credits: Bad babysitter originally uploaded by PungoM

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Network-based Threat Intelligence
  • Following the Trail of Bits
  • Understanding the Kill Chain

Understanding Identity Management for Cloud Services
  • Architecture and Design
  • Integration

Newly Published Papers
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance

Incite 4 U

We are all next: I may have been a little harsh in my post on the Bit9 hack, Karma is a Bit9h, but the key point is that all security vendors need to consider themselves high value targets. I wouldn’t be surprised if a lot more get compromised and (attempt to) cover it up.
There isn’t any schadenfreude here – I derive no pleasure from someone being hacked, no matter how snarky I seem sometimes. I also assume it is only a matter of time until I get hacked, so I try to avoid discussing these issues from a false position of superiority. Wendy Nather provides an excellent reminder that defense is damn hard, with too many variables for anyone to completely control. In her words: “So if you’re one of the ones scolding a breach victim, you’re just displaying your own ignorance of the reality of security in front of those who know better. Think about that for a while, before you’re tempted to pile on.” Amen to that. – RM

Swing and a miss: Managing database accounts to deny attackers easy access is a hassle – as pointed out by Paul Roberts in his post on Building and Maintaining Database Access Control Permissions. But the ‘headaches’ are not just default packages and public access – those issues are fairly easy to detect and fix before putting a database server into production. More serious are user permissions within enterprise applications that have thousands of users, each assigned multiple roles. In these cases finding an over-subscribed user is like finding the proverbial needle in a haystack (a toy illustration appears at the end of this post). The use of generic ‘service accounts’ shared by multiple users makes it much harder to detect misuse – and, if misuse is spotted, to figure out who the real perpetrator is. Perhaps the most difficult problem is segregation of database administrative duties, where common tasks should be split up, at the expense of making administrators’ jobs more complex, annoying, and time-consuming. Admins are the ones who set these roles up, and they don’t want to make their daily work harder. Validating good security requires someone with access and knowhow, and database operations are more difficult than database setup – which is why monitoring and periodic assessments are necessary to ensure security. – AL

First things first: Wim Remes wrote an interesting post about getting value from a SIEM investment, Your network may not be what it SIEMs. Wim’s point is that you can get value from a SIEM, even if the project is horribly delayed and over budget (as so many are), but without a few key things in place initially you would just be wasting your time. You need to know what’s important in your environment and
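Here is the toy illustration promised in the database-permissions item above: expanding user-role assignments into effective permissions and flagging users who exceed a baseline for their job function. The role names, permissions, and baseline are all invented; real enterprise applications have thousands of users and far messier role hierarchies, which is exactly the haystack problem.

```python
# Hypothetical role -> permission mapping pulled from an enterprise app
ROLE_PERMS = {
    "ap_clerk":   {"read_invoices", "create_invoices"},
    "ap_manager": {"read_invoices", "approve_invoices"},
    "dba":        {"alter_schema", "grant_roles"},
}
# Baseline: what each job function should actually need
BASELINE = {"accounts_payable": {"read_invoices", "create_invoices", "approve_invoices"}}

def effective_perms(roles):
    """Union of all permissions granted through a user's roles."""
    perms = set()
    for r in roles:
        perms |= ROLE_PERMS.get(r, set())
    return perms

def oversubscribed(user, roles, job):
    extra = effective_perms(roles) - BASELINE.get(job, set())
    if extra:
        print(f"{user} has permissions beyond the {job} baseline: {sorted(extra)}")

# A clerk who quietly accumulated a DBA role over the years:
oversubscribed("alice", ["ap_clerk", "dba"], "accounts_payable")
```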


Directly Asking the Security Data

We have long been fans of network forensics tools, which provide a deeper and more granular ability to analyze what’s happening on the network. But most of these network forensics tools are still beyond the reach (in terms of both resources and expertise) of the mass market at this point. Rocky D of Visible Risk tackles the question, “I’m collecting packets, so what now?” in his Getting Started with Network Forensics Tools post:

“With these tools we can now ask questions directly of the data and not be limited to or rely on pre-defined questions that are based on an inference of subsets of data. The blinders are off. To us, the tools themselves aren’t the value proposition – the data itself and the innovation in analytical techniques is the real benefit to the organization.”

It always gets back to the security data. Any filtered and/or normalized view of the data (or metadata, as the case may be) is inherently limited, because it’s hard to go back and ask the questions you didn’t know to ask at the beginning of the investigation or query. When investigating a security issue, you often don’t know what to ask ahead of time. But that pretty much breaks the model of SIEM (and most security, by the way), because you need to define the patterns you are looking for up front. Of course we know attackers are unpredictable by nature, so it is getting harder and harder to isolate attacks based on what we know attacks look like.

“When used properly, network forensic tools can fundamentally change your security organization from the broken alert-driven model into a more effective data-driven analytic model.”

It’s hard not to agree with this position, but the details remain squishy. Conceptually we buy this analytics-centric view of the world, where you pump a bunch of security data through a magic machine that finds patterns you didn’t know were there – the challenge is to interpret what those patterns really mean in the context of your problem. And that’s not something that will be automated any time soon, if ever. But unless you have the data the whole discussion is moot anyway. So start collecting packets now, and figure out what to do with them later.
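As a small illustration of “asking questions directly of the data”: once you have full packet capture, even a quick script can answer a question you didn’t know to ask at collection time. A minimal sketch using the open source Scapy library against a hypothetical capture file – a stand-in for the far more capable query interfaces of real network forensics tools:

```python
from collections import Counter
from scapy.all import rdpcap, IP, DNSQR

packets = rdpcap("egress_capture.pcap")  # hypothetical capture file

# A question you only thought to ask today: who are the top talkers?
talkers = Counter(p[IP].src for p in packets if p.haslayer(IP))
print(talkers.most_common(5))

# Another ad-hoc question: what domains were looked up?
lookups = Counter(
    p[DNSQR].qname.decode(errors="replace")
    for p in packets if p.haslayer(DNSQR)
)
print(lookups.most_common(5))
```

Neither question had to be defined before the packets were captured – that’s the whole point, and it’s what the alert-driven SIEM model can’t give you.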


Saving Them from Themselves

The early stages of the Internet felt a bit like the free love era, in that people could pretty much do what they wanted, even if it was bad for them. I remember having many conversations with telecom carriers about the issue of consumers doing stupid things, getting their devices pwned, and then wreaking havoc on other consumers on the same network. For years the carriers stuck their heads in the sand, basically offering endpoint protection suites for free and throwing bandwidth at the problem. But that seems to be changing. I know a few large-scale ISPs who put compromised devices in a penalty box, preventing them from doing much of anything until the device is fixed. This is an expensive proposition for an ISP. You, like me, probably end up doing a decent amount of tech support for less sophisticated family members, and you know how miserable it is to actually remediate a pwned machine.

But as operating systems have gotten much better at protecting themselves, attackers increasingly target applications. And that means attacking browsers (and other high-profile apps such as Adobe Reader and Java) where they are weakest: the plug-in architecture. So kudos to Mozilla, which has started blocking plug-ins by default.

It will now be up to the user to enable plug-ins such as Java, Adobe, and Silverlight, according to Mozilla director of security assurance Michael Coates, who announced the new functionality yesterday in a blog post. Mozilla’s Click to Play feature will be the tool for that: “Previously Firefox would automatically load any plugin requested by a website. Leveraging Click to Play, Firefox will only load plugins when a user takes the action of clicking to make a particular plugin play, or the user has previously configured Click To Play to always run plugins on the particular website,” he wrote.

Of course users will still be able to get around it (like the new Gatekeeper feature in Mac OS X), but they will need to make a specific decision to activate the plug-in. It’s a kind of default deny approach to plug-ins, which is a start. More importantly, it’s an indication that application software makers are willing to adversely affect the user experience to reduce attack surface. Which is good news from where I sit.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.