Network-based Malware Detection 2.0: The Network’s Place in the Malware Lifecycle

As we resume our Network-based Malware Detection (NBMD) 2.0 series, we need to dig into the malware detection/analysis lifecycle to provide some context on where network-based malware analysis fits in, and what an NBMD device needs to integrate with to protect against advanced threats. We have already exhaustively researched the malware analysis process. The process diagram below was built as part of Malware Analysis Quant. Looking at the process, NBMD handles the Analyze Malware Activity phase – including building the testbed, static analysis, various dynamic analysis tests, and finally packaging everything up into a malware profile. All these functions occur either on the device or in a cloud-based sandbox for analyzing malware files. That is why scalability is so important, as we discussed last time. You basically need to analyze every file that comes through, because you cannot wait for an employee’s device to be compromised before starting the analysis.

Some other aspects of this lifecycle bear mentioning:

  • Ingress analysis is not enough: Detecting and blocking malware on the perimeter is a central pillar of the strategy, but no NBMD capability can be 100% accurate and catch everything. You need other controls on endpoints, supplemented with aggressive egress filtering.
  • Intelligence drives accuracy: Malware and tactics evolve so quickly that on-device analysis techniques must evolve as well. This requires a significant and sustained investment in threat research and intelligence sharing.

Before we dig into these two points, some other relevant research provides additional context. The Securosis Data Breach Triangle shows a number of opportunities to interrupt a data breach. You can protect the data (very hard), detect and stop the exploit, or catch the data with egress filtering. Success at any one of these will stop a breach – but putting all your eggs in one basket is unwise, so work on all three. For specifics on detecting and stopping exploits, refer to our ongoing CISO’s Guide to Advanced Attackers – particularly Breaking the Kill Chain, which covers stopping an attack. Remember: even if a device is compromised, unless critical data is exfiltrated it’s not a breach. The best case is to detect the malware before it hurts anything – NBMD is a very interesting technology for this – but you also need to rely heavily on your incident response process to ensure you can contain the damage.

Ingress Accuracy

As with most detection activities, accuracy is critical. A false positive – incorrectly flagging a file as malware – disrupts work and wastes resources investigating a malware outbreak that never happened. You need to avoid these, so put a premium on accuracy. False negatives – missing malware and letting it through – are at least as bad. So how can you verify the accuracy of an NBMD device? There is no accepted detection accuracy benchmark, so you need to do some homework. Start by asking the vendor tough questions to understand their threat intelligence and threat research capabilities. Read their threat research reports and figure out whether they are on the leading edge of research, or just a fast follower using other companies’ research innovations. Malware research provides the data for malware analysis, whether on the device or in the cloud, so you need to understand the depth and breadth of a vendor’s research capability. Dig deep and understand how many researchers they have focused on malware analysis. Learn how they aggregate the millions of samples in the wild to isolate patterns, using fancy terms like “big data analytics”. Study how they turn that research into detection rules and on-device tests.

You will also want to understand how the vendor shares information with the broader security research community. No one company can do it all, so you want leadership and a serious investment in research, but you also need to understand how they collaborate with other groups and what alternative data sources they leverage for analysis. For particularly advanced malware samples, do they have a process for manual analysis? Be sensitive to research diversity: many NBMD devices rely on the same handful of threat intelligence services, which makes it very difficult to get the intelligence diversity needed to detect fast-moving advanced attacks. Make sure you check out lab tests of devices to compare accuracy. These tests are all flawed – it is barely possible, even in theory, to accurately model a real-world environment using live ammunition (malware), and conditions change immediately anyway – but they can be helpful for an apples-to-apples device comparison.

The Second Derivative

As part of a proof of concept, you may also want to route your ingress traffic through two or three of these devices in monitoring mode, to test relative accuracy and scalability on real traffic. That should give you a good indication of how well each device will perform for you. Finally, leverage “The 2nd Derivative Effect (2DE)” of malware analysis. When new malware is found, profiled, and determined to be bad, there is an opportunity to inoculate all the devices in use. This involves uploading the indicators, behaviors, and rules that identify and block it to a central repository, and then distributing that intelligence back out to all devices. The network effect in action: the more devices in the network, the more likely the malware will show up somewhere to be profiled, and the better your chance of being protected before it reaches you. Not always, but it is as good a plan as any. It sucks to be the first company infected – you miss the attack on its way in. But everyone else in the network benefits from your misfortune. This ongoing feedback loop requires extensive automation (with clear checks and balances to reduce bad updates) to accelerate distribution of new indicators to devices in the field.

Plan B (When You Are Wrong)

Inevitably you will be wrong sometimes, and malware will get through your perimeter. That means you will need to rely on the other security controls in your environment. When they fail you will want to make sure you don’t get popped by the same attack
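To make the 2DE feedback loop described above concrete, here is a minimal sketch (Python) of the mechanics: a device periodically pulls newly published indicators from a central repository and screens inbound files against the merged set. The feed URL and JSON response format are hypothetical assumptions for illustration – real NBMD systems distribute far richer indicators (behavioral rules, network patterns) than bare file hashes.

    import hashlib
    import json
    import urllib.request

    # Hypothetical central indicator repository -- the URL and the JSON
    # response format ({"sha256": ["<hash>", ...]}) are illustrative
    # assumptions, not any vendor's actual service.
    FEED_URL = "https://intel.example.com/indicators/latest.json"

    known_bad_hashes = set()

    def pull_indicator_updates():
        """The 2DE loop in miniature: fetch indicators published after
        another device in the network profiled new malware, and merge
        them into the local blocklist."""
        with urllib.request.urlopen(FEED_URL) as resp:
            feed = json.load(resp)
        known_bad_hashes.update(feed.get("sha256", []))

    def file_is_known_bad(path):
        """Screen an inbound file against the shared indicators."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in known_bad_hashes

The “checks and balances” mentioned above would sit between profiling and publication – a bad indicator pushed through this kind of loop propagates to every subscriber just as fast as a good one.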

Quick thoughts on the iOS and OS X security updates

I am in the airport lounge after attending the WWDC keynote, and here are some quick thoughts on what we saw today:

  • The biggest enhancement is iCloud Keychain. It doesn’t seem like a full replacement for 1Password and the like (yet), but Apple’s target is people who won’t buy 1Password. Once this is built in, from the way it appears designed, it should materially help common folks with password issues. As long as they buy into the Apple ecosystem, of course.
  • It will be very interesting to see how the activation lock feature works in the real world. Theft is rampant, and making these devices worthless will really put a dent in it, but activation locking is a tricky issue.
  • Per-tab processes in Safari. I am very curious about whether there is additional sandboxing (Safari already has some). My main concern these days is Flash, and that’s why I use Chrome. If either Adobe or Apple improves Flash sandboxing I will be very happy to switch back.
  • For enterprises, Apple’s focus appears to be on iOS and MDM/single sign-on. I will research the new changes more. Per-app VPNs also look quite nice, and might simplify some app wrapping that currently does this through alternate techniques.
  • iWork in the cloud could be interesting, and looks much better than Google apps – but collaboration, secure login, and sharing will be key. Many questions on this one, and I’m sure we will know more before it goes live.

I didn’t see much else. Mostly incremental, and I mainly plan to keep an eye on what happens in Safari, because it is the biggest point of potential weakness. Nothing so dramatic on the defensive side as Gatekeeper and the Java lockdowns of the past year, but integrated password management tackles another real-world, casual-user problem that hasn’t been cracked well yet.

Groupthink Kills Your Security Layers

As I continue working through my reading backlog I find interesting stuff that bears comment. When the folks over at NSS Labs attempted to poke holes in the concept of security layers, I got curious. Only 3% of 606 combinations of firewall, IPS, and endpoint protection (EPP) successfully blocked their full suite of attacks?

There is only limited breach prevention available: NSS looked at 606 unique combinations of security product pairs (IPS + NGFW, IPS + IPS, etc.) and only 19 combinations (3 percent) were able to successfully detect ALL exploits used in testing. This correlation of detection failures shows that attackers can easily bypass several layers of security using only a small set of exploits. Most organizations should assume they are already breached and pair preventative technologies with both breach detection and security information and event management (SIEM) solutions.

No kidding. It is not novel to say that exploits work in today’s environment. Instead of just guessing at the optimal combination of devices (which seems to be a value proposition NSS is pushing in the market now), what about getting a feel for the incremental effectiveness of just using a firewall, then layering in an IPS, and finally adding endpoint protection? Does IPS really make an incremental difference? That would be useful information – we already know it is very hard to block all exploits.

NSS’s analysis of why layering isn’t as effective as you might think is interesting: groupthink. Many of these products are driven by the same research engines and intelligence sources, so if a source misses, all its clients miss. Clearly a recipe for failure, so diversity is still important. Rats! Dan Geer and his monoculture stuff continue to bite us in the backside. But of course diversity adds management complexity – usually significant complexity – so you need to balance different vendors at different control layers against the administrative overhead of effectively managing everything. And a significant percentage of attacks succeed not because of innovative exploits (of the sort NSS tests), but because of operational failures: implementing the technology poorly, failing to keep platforms and products patched, and not enforcing secure configurations.

Photo credit: “groupthink” originally uploaded by khrawlings

A truism of security information sharing

From Share and share alike? Not Quite, by Mike Mimoso at Threatpost:

“With retail, the challenge is that most of the companies we share with are direct competitors,” Phillips said. “From a security perspective, you have to get over that and share because we’re all facing the same challenges. There’s no way any of us will win the war on our own.”

If sharing information on attacks provides your competitors a business advantage, you have serious issues unrelated to security.

Getting to Know Your Adversary

After a week of travel I am finally working through my reading list, and got around to RSnake’s awesome “Talk with a Black Hat” series. Check out Part 1, Part 2, and Part 3. He takes us behind the curtain – but instead of discussing impact, which your fraud and loss group can tell you about, he documents the tactics being used against us all the time.

At the beginning of Part 1, RSnake tackles the ethical issues of communicating with and learning from black hats. I never saw this as an issue, but if you did, just read his explanation and get over it:

I think it is incredibly important for security experts to have open dialogues with the blackhat community. It’s not at all dissimilar to police officers talking with drug dealers on a regular basis as part of their job: if you don’t know your adversary you are almost certainly doomed to failure.

Right. A few interesting tidbits from Part 1, including “The whole blackhat market has moved from manual spreading to fully automated software.” And this fellow’s motivation was pretty clear: “Money. I found it funny how watching tv and typing on my laptop would earn me a hard worker’s monthly wage in a few hours. [It was] too easy in fact.”

And the lowest-hanging fruit for an attack? Yup, pr0n sites. “Now to discuss my personal favourite: porn sites.” One reason why this is so easy: “The admins don’t check to see what the adverts redirect to. Upload an ad of a well-endowed girl typing on Facebook, someone clicks, it does a drive by download again. But this is where it’s different: if you want extra details (for extortion if they’re a business man) you can use SET to get the actual Facebook details which, again, can be used in social engineering.”

There is similarly awesome perspective on monetizing DDoS (which clearly means it is not going away anytime soon), and that was only in Part 1. Parts 2 and 3 are also great, but you should read them yourself to learn about your adversaries. And to leave you with some wisdom about mindsets:

Q: What kind of people tend to want to buy access to your botnet and/or what do you think they use it for?
A: Some people say governments use it, rivals in business. To be honest, I don’t care. If you pay you get a service. Simple.

Simple. Yup, very simple.

Photo credit: “Charles F Esolda” originally uploaded by angus mcdiarmid

Security Analytics with Big Data: New Events and New Approaches

So why are we looking at big data, and what problems can we expect it to solve that we couldn’t before? Most SIEM platforms struggle to keep up with emerging needs, for two reasons. The first is that threat data does not come neatly packaged from traditional sources such as syslog and netflow events. There are many different types of data, data feeds, documents, and communications protocols that contain diverse clues to data breaches or ongoing attacks. We see clear demand to analyze a broader data set, in hopes of detecting advanced attacks. The second issue is that many types of analysis, correlation, and enrichment are computationally demanding. Much like traditional multi-dimensional data analysis platforms, crunching the data takes horsepower. More data is being generated; add the new types of data we want, multiply by additional analyses, and you get a giant gap between what you need to do and what you can presently do.

Our last post considered what big data is and how NoSQL database architectures inherently address several of the SIEM pain points. In fact, the 3Vs (Volume, Velocity, & Variety) of big data coincide closely with three of the main problems faced by SIEM systems today: scalability, performance, and effectiveness. This is why big data is such an important advancement for SIEM. Volume and velocity problems are addressed by clustering systems to divide load across many commodity servers, and variety through the inherent flexibility of big data / NoSQL. But of course there is more to it.

Analysis: Looking at More

Two of the most serious problems with current SIEM solutions are that they struggle with the amount of data to be managed, and they cannot deal with the “data velocity” of near-real-time events. Additionally, they need to accept and parse new and diverse data types to support new types of analysis. There are many different types of event data, any of which might contain clues to security threats. Common data types include:

  • Human-readable data: There is a great deal of data which humans can process easily, but which is much more difficult for machines – including blog comments and Twitter feeds. Tweets, discussion fora, Facebook posts, and other types of social media are all valuable for threat intelligence. Some attacks are coordinated in fora, which means companies want to monitor these fora for warnings of possible or imminent attacks, and perhaps even details of the attacks. Some botnet command and control (C&C) communications occur through social media, so there is potential to detect infected machines through this traffic.
  • Telemetry feeds: Cell phone geolocation, lists of sites serving malware, mobile device IDs, HR feeds of employee status, and dozens of other real-time data feeds denote changes in status, behavior, and risk profiles. Some of these feeds are analyzed as the stream of events is captured, while others are collected and analyzed for new behaviors. There are many different use cases, but security practitioners – observing how effectively retail organizations predict customer buying behavior – are seeking the same insight into threats.
  • Financial data: We were surprised to learn how many customers use financial data purchased from third parties to help detect fraud. The use cases we heard centered on SIEM for external attacks against web services, but they were also analyzing financial and buying history to predict misuse and account compromise.
  • Contextual data: This is anything that makes other data more meaningful. Contextual data might indicate automated processes rather than human behavior – a too-fast series of web requests, for example, might indicate a bot rather than a human customer. Contextual data also includes risk scores generated by analysis of metadata, and detection of odd or inappropriate sequences of actions. Some is simply collected from a raw event source, while other data is derived through analysis. As we improve our understanding of where to look for attack and breach clues, we will leverage new sources of data and examine them in new ways. SIEM generates some contextual data today, but collecting a broader variety of data enables better analysis and enrichment.
  • Identity and Personas: Today many SIEMs link with directory services to identify users. The goal is to link a human user to their account name. With cloud services, mobile devices, distributed identity stores, identity certificates, and two-factor identity schemes, it has become much harder to link human beings to account activity. As authentication and authorization facilities become more complex, SIEM must connect to and analyze more and different identity stores and logs.
  • Network Data: Some of you are saying “What? I thought all SIEMs looked at network flow data!” Actually, some do but others don’t. Some collect and alert on specific known threats, but that is only a tiny portion of what passes down the wire. Cheap storage makes it feasible to store more network events and perform behavioral computation on general network trends, service usage, and other pre-computed aggregate views of network traffic. In the future we may be able to include all of this data.

Each of these examples demonstrates what will be possible in the short term. In the long term we may record any and all useful or interesting data. If we can link in data sets that provide different views or help us make better decisions, we will. We already collect many of these data types, but we have been missing the infrastructure to analyze them meaningfully.

Analysis: Doing It Better

One limitation of many SIEM platforms is their dependence on relational databases. Even if you strip away the relational constructs that limit insertion performance, they still rely on SQL with traditional query processors. The fundamental relational database architecture was designed and optimized for relational queries. Flexibility is severely limited by SQL – statements always include FROM and WHERE clauses, and we have a limited number of comparison operators for searching. At a high level we may have Java support, but the actual queries still devolve down to SQL statements. SQL may be a trusty
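As an aside, to make the contextual-data example above concrete – a too-fast series of web requests suggesting a bot rather than a human – here is a minimal sketch in Python. The window size and threshold are illustrative assumptions; at SIEM scale the same computation would run as a streaming job sharded across a cluster rather than in-process, but the derived flag is exactly the kind of contextual enrichment described above.

    from collections import defaultdict, deque
    import time

    # Illustrative thresholds -- assumptions, not tuned recommendations.
    WINDOW_SECONDS = 10        # sliding window length
    MAX_HUMAN_REQUESTS = 30    # more than this in one window looks scripted

    request_history = defaultdict(deque)   # client id -> recent request times

    def record_request(client_id, now=None):
        """Record one web request and return True when this client's
        request velocity exceeds what a human plausibly generates."""
        now = time.time() if now is None else now
        history = request_history[client_id]
        history.append(now)
        # Age out events older than the sliding window.
        while history and now - history[0] > WINDOW_SECONDS:
            history.popleft()
        return len(history) > MAX_HUMAN_REQUESTS

Note that this kind of derived flag is not an alert by itself – it becomes one more attribute attached to the event stream for correlation and enrichment.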

API Gateways: Security Enabling Innovation [New Series]

So why are we talking about this? Because APIs are becoming the de facto service interface – not only for cloud and mobile, but for just about every type of service. The need for security around these APIs is growing, which is why we have seen a rush of acquisitions to fill security product gaps. In what felt like a couple of weeks, Axway acquired Vordel, CA acquired Layer 7, and Intel acquired Mashery. The acquirers all stated these steps were to accommodate security requirements stemming from steady adoption of APIs and associated web services. Our goal for this paper is to help you understand the challenges of securing APIs and to evaluate technology alternatives, so you can make informed decisions about current trends in the market. We will start our discussion with what’s at stake, which should show why certain features are necessary.

API gateways have a grand and audacious goal: enablement. Getting developers the tools, data, and functionality they need to realize the mobile, social, cloud, and other use cases the enterprise wants to deliver. There is a tremendous amount of innovation in these spaces today, and the business goal is to get to market ASAP. At the same time, security is not a nice-to-have – it’s a hard requirement. After all, the value of mobile, social, and cloud applications is in mixing enterprise functionality inside and outside the enterprise. And riding along is an interesting mix of user personas: customers, employees, and corporate identities, all mingling together in the same pool. API gateways must implement real security policies and protocols to protect enterprise services, brands, and identity. This research paper will examine current requirements and technical trends in API security.

API gateways are not sexy. They do not generate headlines like cloud, mobile, and big data. But APIs are the convergence point for all these trends, and the crux of IT innovation today. We all know cloud services scale almost too well to be real, at a price that seems too good to be true – and APIs are part of what makes them so scalable and cheap. Of course open, API-driven, multi-tenant environments bring new risks along with their new potential. As Netflix security architect Jason Chan says, securing your app on the Amazon cloud is like rock climbing – Amazon gives you a rope and belays you, but you are on the rock face. You are the one at risk. How do you manage that risk? API gateways play a central role in limiting the cloud’s attack surface and centralizing policy enforcement.

Mobile apps pose similar risks in an entirely different technical environment. There is an endless amount of hype about iOS and Android security. But where are the breaches? On the server side. Why? Because attackers are pragmatic, and that’s where the data is. Mobile apps have vulnerabilities that attackers can go after one by one, but a breach of the server-side APIs exposes the whole enterprise enchilada. Say it with me in your best Taco Bell Chihuahua accent: the whole enchilada! Like cloud applications, API gateways need to reduce the enterprise’s risk by limiting attack surface. And mobile apps use web services differently than other enterprise applications: communications are mostly asynchronous, and the identity tokens are different too – expect to see less SAML and proprietary SSO, and more OAuth and OpenID Connect. API gateways address the challenges raised by these new protocols and interactions.
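To make “limiting attack surface” and “centralizing policy enforcement” concrete, here is a minimal sketch (Python) of the positive security model a gateway applies: only explicitly exposed paths pass, and every parameter must match a strict validation rule. The routes and rules are hypothetical examples for illustration, not any product’s actual configuration.

    import re
    from urllib.parse import urlparse, parse_qs

    # Hypothetical whitelist: the only paths exposed through the gateway,
    # each with strict positive validation rules for its parameters.
    ALLOWED_ROUTES = {
        "/v1/accounts": {"id": re.compile(r"^\d{1,12}$")},
        "/v1/orders": {"id": re.compile(r"^\d{1,12}$"),
                       "status": re.compile(r"^(open|closed)$")},
    }

    def gateway_allows(url):
        """Reject any request whose path is not explicitly exposed, or
        whose parameters fail validation -- whitelisting, not
        blacklisting, is what keeps the attack surface small."""
        parsed = urlparse(url)
        rules = ALLOWED_ROUTES.get(parsed.path)
        if rules is None:
            return False  # path not exposed through the gateway
        for name, values in parse_qs(parsed.query).items():
            rule = rules.get(name)
            if rule is None or not all(rule.match(v) for v in values):
                return False  # unknown parameter or malformed value
        return True

So gateway_allows("https://api.example.com/v1/accounts?id=42") passes, while the same path with an unexpected debug=true parameter is rejected before it ever reaches an enterprise service.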
APIs are an enabling technology, linking new and old applications together into a unified service model. But while cloud, mobile, and other innovations drive radical changes in the data center, one thing remains the same: the speed at which business wants to deploy new services. Fast. Faster! Yesterday, if possible. This makes developer enablement supremely important, and is why we need to weave security into the fabric of development – if it is not integrated at a fundamental level, security gets removed as an impediment to shipping. The royal road is whatever makes it easy for developers to understand how to build and deploy an app, grok the interfaces and data, and quickly provision themselves and their app users to log in – this is how IT shops are organizing teams, projects, and tech stacks. The DMZ has gone the way of the dodo. API gateways are about enabling developers to build cloud, mobile, and social apps on enterprise data, layered over existing IT systems. Third-party cloud services, mobile devices, and work-from-anywhere employees have destroyed (or at least completely circumvented) the corporate IT ‘perimeter’ – the ‘edge’ of the network has so many holes it no longer forms a meaningful boundary. This trend, fueled by the need to connect in-house and third-party services, is driving the new model. API gateways curate APIs, provision access to users and developers, and facilitate key management. For security this is the place to focus – to centralize policy enforcement, implement enterprise protocols and standards, and manage the attack surface.

This paper will explore the following API gateway concepts in detail. The content will be developed and posted to the Securosis blog for vetting by the developer and security community. As always, we welcome your feedback – both positive and negative. Our preliminary outline is:

  • Access Provisioning: We will discuss developer access provisioning, streamlining access to tools and server support, user and administrator provisioning, policy provisioning and management, and audit trails to figure out who did what.
  • Developer Tools: We will discuss how to maintain and manage exposed services, ways to catalogue services, client integration, build processes, and deployment support.
  • Key Management: This post will discuss creating keys, setting up a key management service, key and certificate verification, and the key management lifecycle (creation, revocation, rotation/updating).
  • Implementation: Here we get into the meat of this series. We will discuss exposing APIs and parameters, URL whitelisting, proper parameter parsing, and some deployment options that affect security.
  • Buyer’s Guide: We will wrap up this series with a brief buyer’s guide to help you understand the differences between implementations, as well as key considerations when establishing your evaluation priorities. We will also cover

Friday Summary: June 7, 2013

I haven’t been writing much over the past few weeks because I took a few weeks with the family back in Boulder. The plan was to work in the mornings, do fun mountain stuff in the afternoons with the kids, and catch up with friends in the evenings. But the trip turned into a bit of medical tourism when a couple of bugs nailed us on day one. For the record, I can officially state that microbrews do not seem to cure viruses. But the research continues…

It was actually great to get back home and catch up as best we could under the circumstances. My work suffered, but we managed to hit a major chunk of the to-do list. For the kids, I think the highlight was me waking up, noticing it was raining, and bundling the family up to the Continental Divide to chase snow. We bounced along an unpaved road in the rain, keeping one eye on the temperature and the other on our altitude, until the wet stuff turned into the white stuff. Remember, we live in Phoenix – when it started dumping right as we hit the trailhead, with enough accumulation for snowmen and angels, I was in Daddy heaven.

For me, aside from generally catching up with people (and setting a PR in the Bolder Boulder 10K), another highlight was grabbing lunch with some rescue friends and then hanging out in the new headquarters with the kids for a couple of hours. It has been a solid 7-8 years since I was on a call, but back at the Cage, surrounded by the gear I used to rely on and the vehicles I used to drive, it all came back. Surprisingly little has changed, and I was really hoping the pager would go off so I could hitch along on a call. Er… then again, I’m not sure you are allowed to respond with lights and sirens with kids in their car seats in the back.

There is an intensity to the rescue community that even the security community doesn’t quite match. Shared sweat and blood in risky conditions, as I wrote about in The Magazine. That doesn’t mean it’s all one big lovefest – there is no shortage of personal and professional drama – but the bonds formed are intense and long-lasting. And the toys? Oh, man, you can’t beat the toys. That part of my life is on hold for a while as I focus on the kids and the company, but it’s comforting to know that not only is it still there, it is still very familiar too.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s Dark Reading article on Database DoS.

Favorite Securosis Posts

  • David Mortman: New Google disclosure policy is quite good.
  • Adrian Lane: Mobile Security Breaches. Astute, concise analysis from Mogull.
  • Rich: Security Analytics with Big Data: New Events, New Approaches. Adrian is killing it with this series.

Other Securosis Posts

  • API Gateways: Security Enabling Innovation [New Series].
  • Matters Requiring Attention: 100 million or so.
  • Apple Expands Gatekeeper.
  • Incite 6/5/2013: Working in the House.
  • Oracle adopts Trustworthy Computing practices for Java.
  • A CISO needs to be a business person? No kidding…
  • Security Analytics with Big Data: Defining Big Data.
  • LinkedIn Rides the Two-Factor Train.
  • Security Surrender.
  • Finally! Lack of Security = Loss of Business.
  • Network-based Malware Detection 2.0: Scaling NBMD.
  • Friday Summary: May 31, 2013.
  • Evernote Business Edition Doubles up on Authentication.

Favorite Outside Posts

  • David Mortman: Data Skepticism.
  • Adrian Lane: NSA Collects Verizon Customer Calls. Interesting read, but not news. We covered this trend in 2008. The question was why the government gave immunity to telecoms for spying on us, and we now know: because they were doing it for the government. Willingly or under duress is the current question.
  • Rich: Why we need to stop cutting down security’s tall poppies. Refreshing perspective.

Research Reports and Presentations

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.
  • The Endpoint Security Management Buyer’s Guide.

Top News and Posts

  • Democratic Senator Defends Phone Spying, And Says It’s Been Going On For 7 Years.
  • Expert Finds XSS Flaws on Intel, HP, Sony, Fujifilm and Other Websites.
  • Whom the Gods Would Destroy, They First Give Real-time Analytics.
  • Apple Updates OS X, Safari.
  • Original Bitcoin Whitepaper.
  • Unrelenting AWS Growth. Not security related, but the most substantive cloud adoption numbers I have seen. Note that the X axis of that graph is logarithmic – not linear!
  • StillSecure acquired.
  • Microsoft, US feds disrupt Citadel botnet network.

Blog Comment of the Week

This week’s best comment goes to Andy, in response to LinkedIn Rides the Two-Factor Train.

This breaks the LinkedIn App for Windows phone. But who uses Windows phone, besides us neo-Luddites who refuse to buy into the Apple ecosystem?

Incite 6/5/2013: Working in the House

Once, years ago, I made the mistake of saying the Boss didn’t work. I got that statement shoved deep into my gullet, because she works harder than I do. She just works in the house. My job is relatively easy – I can work from anywhere, with clients I enjoy, doing stuff that I enjoy doing. Often it doesn’t feel like work at all. Compare that to the Boss, who has primary responsibility for the kids. That involves making sure they get their homework done, are learning properly, have the support they need, and participate in their activities. But that’s the comparatively easy stuff – and it’s not easy at all. She spends a lot more of her time managing the drama, which is ramping up significantly for XX1 as she and her friends enter the tween stage. She also takes very seriously her role of making sure the kids are well behaved, polite, and productive. And it shows. I’m biased, but my kids rarely do cringe-worthy stuff in public. I have a minor hand in this, but she drives the ship.

And why am I writing this now? No, I didn’t say anything stupid again and end up in the dog house. I just see how she’s handling her crunch time: getting the kids ready for camp, while making sure they see their friends before they head off for the summer, and working around a trip up North to see my Dad. Compared to crunch time, the school year is a walk in the park. For those of you who don’t understand the misery of preparing for sleepaway camp, the camp sends a list of a zillion things you have to get. Clothes, towels, sheets, sporting equipment, creature comforts… the list is endless, and everything needs to have your kid’s name in it – if you want it to come back, anyway. Our situation is complicated because we have to ship the stuff to PA. Not only does she need to get everything, but everything needs to fit into two duffel bags.

Over the years the intensity of crunch time has increased significantly. Four years ago she only had to deal with XX1 – that was relatively easy. Then XX1 and XX2 went to camp, but it was still manageable. Last year we had all three kids in camp, decided to take a trip to Barcelona a month before they were due to leave, and went to Orlando for the girls to dance. It was nuts. This year she is way ahead of the game. We are two weeks out, and pretty much everything is bought, labeled, and arranged. It’s really just a matter of packing the bags now. The whole operation ran like a well-oiled machine this year. Bravo!

I am the first to criticize when stuff doesn’t work well, and usually the last to give credit when things work efficiently – I have already moved on to the next thing. We don’t have a 360-degree review process, and we don’t pay bonuses at the end of the year in Chez Rothman. Working in our house is a thankless job. So it’s time to give credit where it’s due. But more importantly, she can now enjoy the next two weeks before the kids head off – without spending all her time buying, packing, and doing other stressful stuff. And I should also bank some karma points with the Boss to use the next time I do something stupid. Which should be in 3, 2, 1…

–Mike

Photo credit: “IT Task List” originally uploaded by Paul Gorbould

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Quick Wins with Website Protection Services
  • Deployment and Ongoing Management
  • Protecting the Website
  • Are Websites Still the Path of Least Resistance?

Network-based Malware Detection 2.0
  • Scaling NBMD
  • Evolving NBMD

Advanced Attackers
  • Take No Prisoners

Security Analytics with Big Data
  • Use Cases
  • Introduction

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

Your professionalism offends me… Our man in Ireland, Brian Honan, brings up a third rail of sorts regarding some kind of accreditation for security folks. He rightly points out that there is no snake oil defense. But it’s not clear whether he wants folks to go to charm school or to learn decent customer skills, so the bad apples don’t reflect badly on our industry. Really? Shack responds with a resounding no, but more because he’s worried about losing the individuality of the characters who do security. I don’t think we need yet another group to teach folks to wear long sleeves if they have tattoos. Believe me, if folks are worried about getting a professional security person, I’m sure one of the big accounting firms would be happy to charge them $300/hour for a n00b in a suit. And some of the best customers are the ones who have bought snake oil in the past. Presumably they learned something and know what questions to ask. – MR

BYOD in the real world: For the most part, the organizations I talk with these days are generally in favor of BYOD, with programs to allow at least some use of personally owned computing devices. Primarily they support mobile phones, but they are expanding to laptops and tablets more quickly than most people predicted. Network World has a nice, clear article with some examples of BYOD programs in real, large organizations. These are refreshingly practical, with a focus on basic management and a minimal footprint on the devices. We’re talking ActiveSync and passcode enforcement, not those crazy virtual/work/personal swapping modes some vendors promote. I had another discussion with some enterprise managers about BYOD today and they

Mobile Security Breaches

From an article based on ‘work’ by Check Point:

79% of businesses had a mobile security incident in the past year, in many cases incurring substantial costs, according to Check Point. The report found mobile security incidents cost over $100,000 for 42% of respondents, including 16% who put the cost at more than $500,000.

Bullshit.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.