Securosis

Research

Are CISOs finally ‘real’ executives?

Many CISOs I have worked with over the past 10 years have consistently complained that no one else in the executive suite understands them. They can’t get the right level of support. They face constant roadblocks. Basically, they’re perplexed that business people are actually more worried about business. My response has always been that they weren’t Pragmatic enough. Of course they can read the book. Maybe they even adopt the concepts, and some will still run into the same difficulties. Basically, business folks won’t get it – until they have to. And lately, given the high-profile breaches and the beginning of CEO witch hunts, senior executives can no longer avoid security. Not entirely, anyway. But some CISOs have broken through and become real executives. Folks other executives consult when making decisions. A person running a group that adds value to the organization. I know! That’s pretty cool. How do you get there? Yes, you should be Pragmatic. But you can also take some tips from Qualys’ CSO, Andrew Wild. He gave a pretty good interview to the Enterprisers Project about how CISOs can break through. Shockingly enough, it involves talking in business speak.

“The board level interest requires a risk-based approach, and infosec leaders must embrace this and move away from a security controls focused approach to information security. That’s not to say that security controls aren’t important, because they are, but, from the top down, the focus needs to be on risk management. A critical component of implementing a successful risk-based approach is building strong relationships with business units, approaching them in a consultative manner to offer assistance and guidance.” — Andrew Wild, CSO, Qualys

There are many other good tidbits in that interview. But remember that if you want to play in the C suite you had better understand your business and how security can make it better – whatever that means for you.
Photo credit: “CEO – Tiare – Board Meeting – Franklin Canyon” originally uploaded by tiarescott


Firestarter: China and Career Advancement

Mike’s at the Jersey Shore, Rich is in Boulder, and Adrian is… baking in Phoenix in between tree-killing monsoons. This week we kept it simple with two topics. First up, China’s accusations that iOS and iDevices are a security risk. Which they should know, since they are all built there. Second is a discussion on security careers. How to break in, and what hiring managers should really look for. The audio-only version is up too.


Leveraging TI in Incident Response/Management: Really Responding Faster

In the introduction to our Leveraging Threat Intelligence in Incident Response/Management series we described how the world has changed since we last documented our incident response process. Adversaries are getting better and using more advanced tactics. The difficulty is compounded by corporate data escaping our control into the cloud, and the proliferation of mobile devices. When we started talking about reacting faster back in early 2007, not many folks were talking about the futility of trying to block every attack. That is less of an issue now that the industry understands security is imperfect, and continues to shift resources to detection and response. But the problem becomes more acute as the interval between attack and exfiltration continues to decrease. The ultimate goal of any incident management process is to contain the damage of attacks. This requires you to investigate and find the root causes of attacks faster. The words are easy, but how? Where do you look? The possible attack paths are infinite. To really react faster you need to streamline your investigations and make the most of your resources. That starts with an understanding of what information would interest attackers. From there you can identify potential adversaries and gather threat intelligence to figure out their targets and tactics. With that information you can protect yourself, look for indicators of compromise via monitoring, and streamline your response when you (inevitably) miss.

Adversary Analysis

We suggest starting with adversary analysis because the attacks you will see vary greatly based on the attacker’s mission and assessment of the most likely (and easiest) way to compromise your environment. Evaluate the Mission: To start the process you need to learn what’s important in your environment, which leads you to identify interesting targets for attackers.
This usually breaks down into a few discrete categories, including intellectual property, protected customer data, and business operations information. Profile the Adversary: To defend yourself you need to know not only what adversaries are likely to look for, but also what kinds of tactics each type of attacker typically uses. So next figure out which categories of attacker you are likely to face. Categories include unsophisticated (uses widely available tools), organized crime, competitors, and state-sponsored. Each class has a different range of capabilities. Identify Likely Attack Scenarios: Based on the mission and the adversary’s general tactics, put your attacker hat on and figure out the path you would most likely take to achieve the mission. At this point the attack has already taken place (or is still in progress) and you are trying to assess and contain the damage. Hopefully investigating your proposed paths will prove or disprove your hypothesis. Keep in mind that you don’t need to be exactly right about the scenario. You need to make assumptions about what the attacker has done, and you will not predict their actions perfectly. The objective here is to get a head start on response, which means narrowing down the investigation by focusing on specific devices and attacks.

Gathering Threat Intelligence

Armed with context on likely adversaries we can move on to intelligence gathering. This entails learning everything we can about possible and likely adversaries, profiling probable behaviors, and determining which kinds of defenses and controls make sense to address higher-probability attacks. Be realistic about what you can gather yourself and what intel you may need to buy. Optimally you can devote some resources to gathering and processing intelligence on an ongoing basis based on the needs of your organization, but in the real world you may need to supplement your resources with external data sources.
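The adversary analysis above boils down to a lookup you can encode directly. Here is a minimal sketch: the category names come from the discussion above, but the tactic and target mappings are illustrative assumptions, not real threat intelligence.

```python
# Illustrative adversary profiles. The categories match the post; the
# tactics and likely targets are assumptions for demonstration only.
ADVERSARY_PROFILES = {
    "unsophisticated": {
        "tactics": ["commodity malware", "mass phishing"],
        "targets": ["protected customer data"],
    },
    "organized crime": {
        "tactics": ["banking trojans", "targeted phishing"],
        "targets": ["protected customer data"],
    },
    "competitor": {
        "tactics": ["spear phishing", "insider recruitment"],
        "targets": ["intellectual property", "business operations information"],
    },
    "state-sponsored": {
        "tactics": ["zero-day exploits", "long-dwell persistence"],
        "targets": ["intellectual property"],
    },
}

def likely_adversaries(asset_category: str) -> list:
    """Which adversary classes plausibly have a mission against this asset?"""
    return sorted(
        name for name, profile in ADVERSARY_PROFILES.items()
        if asset_category in profile["targets"]
    )
```

Feeding each of your asset categories through a table like this gives you a short list of adversaries – and therefore tactics – to prioritize when scoping an investigation.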
Threat Intelligence Indicators

Here is a high-level overview of the general kinds of threat intelligence you are likely to leverage to streamline your incident response/management.

Malware

Malware analysis is maturing rapidly; it is now possible to quickly and thoroughly understand exactly what a malicious code sample does, and define both technical and behavioral indicators to seek out within your environment, as described in gory detail in Malware Analysis Quant. More sophisticated malware analysis is required because classical AV blacklisting is no longer sufficient in the face of polymorphic malware and other attacker tactics to defeat file signatures. Instead you will identify indicators of what malware did to a device. Malware identification has shifted from what a file looks like to what it does. As part of your response/management process, you’ll need to identify the specific pieces of malware you’ve found on the compromised devices. You can do that via a web-based malware analysis service. You basically upload a hash of a malware file to the service – if it recognizes the malware (via a hash match), you get the analysis within minutes; if not you can then upload the whole file for a fresh analysis. These services run malware samples through proprietary sandbox environments and other analysis engines to figure out what malware does, build a detailed profile, and return a comprehensive report including specific behaviors and indicators you can search your environment for. Malware also provides additional clues. Can you tie the malware to a specific adversary? Or at least a category of adversaries? Do you see these kinds of activities during reconnaissance, exploitation, or exfiltration? That is a useful clue to how far the attack has progressed.

Reputation

Reputation data, since its emergence as a primary data source in the battle against spam, seems to have become a component of every security control.
Which makes sense because entities that behave badly are likely to continue doing so. The most common reputation data is based on IP addresses, offered as a dynamic list of known bad and/or suspicious addresses. As with malware analysis, identifying an adversary helps you look for associated tactics. Aside from IP addresses, pretty much everything within your environment can (and should) have a reputation. Devices, URLs, domains, and files, for starters. If you see traffic going to a site known to be controlled by a particular adversary, you can look for other devices communicating with that adversary.
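Both lookups described above – hashing a sample for a malware analysis service, and checking connections against an IP reputation list – can be sketched in a few lines. This is a hedged illustration: real services differ in endpoints, API keys, and report formats, so the “service” here is simulated with a local dictionary, and the flow records and IP addresses are fabricated.

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str) -> str:
    """Hash a sample in chunks, as you would before querying an analysis service."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def lookup_sample(digest: str, service_db: dict):
    """Stand-in for the hash lookup; None means 'upload the full file'."""
    return service_db.get(digest)

def devices_talking_to(bad_ips: set, flows) -> list:
    """flows: (source_device, destination_ip) pairs from network logs."""
    return sorted({src for src, dst in flows if dst in bad_ips})

# Demo with a stand-in "sample" file and fabricated flow records.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"not really malware")
    sample_path = tmp.name
digest = sha256_of_file(sample_path)
os.unlink(sample_path)

report = lookup_sample(digest, {digest: {"verdict": "known", "behaviors": ["persistence"]}})

flows = [("laptop-7", "203.0.113.9"), ("server-2", "198.51.100.4"), ("laptop-3", "203.0.113.9")]
suspects = devices_talking_to({"203.0.113.9"}, flows)  # other devices reaching adversary infrastructure
```

The point of the second function is the pivot the text describes: once one destination is tied to an adversary, you sweep your logs for every other device that touched it.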


It’s Just a Matter of Time

So a couple of weeks ago in the Incite (4th snippet) I gave Jamie Arlen huge kudos for being a soothsayer. At Black Hat 2011 Jamie presented an attack scenario targeting high-frequency trading networks, and Bloomberg recently reported that the attack actually hit a hedge fund. But the attack never happened. Yeah, it turns out the cyber expert at BAE Systems who identified the attack was allegedly presenting a scenario to the management team – not a real attack. The attack, she said, “was inaccurately presented as a client case study rather than as an illustrative example.” Those folks are spinning so fast, I’m getting dizzy. While laughing my butt off. But back to the point of Jamie’s research. The attack is plausible and feasible, so it’s just a matter of time before it does really happen, if it hasn’t already.

Photo credit: “Pants on Fire” originally uploaded by Mike Licht


Listen to Rich Talk, Win a … Ducati?

I have to admit, this is a bit of a first. I am participating in a cloud security webinar July 21st with Elastica, a cloud application security gateway firm (that’s the name I’m playing with for this category). It will be fewer slides and more discussion, and it isn’t about their product. This is a product category I have started getting a lot of questions on, even though there isn’t a standard name yet, and I will probably pop off a research paper on it this fall. But that isn’t the important part. Sometimes clients pony up an iPad or something if you sign up for a webinar. Heck, we’ve given out our fair share of Apple toys (and once a Chumby) to motivate survey participation. This time Elastica is, for real, giving away a Ducati Monster 696. No, I am not eligible to win. I thought it was a joke when they showed me the mockup of the contest page, but it is very real. You still have to pay delivery, title, insurance, customs, transportation, and registration fees. Needless to say, I feel a little pressure to deliver. (Good content – I don’t think they’d let me drive the Ducati to your house.)


Summary: Boulder

Well, I did it. I survived over 6 months of weekly travel (the reason I haven’t been writing much). Even the one where the client was worried I was going to collapse due to flu in the conference room, and the two trips that started with me vomiting at home the morning I had to head to the airport. Yup. Twice. But for every challenge, there is a reward, and I am enjoying mine right now. No, not the financial benefits (actually those don’t suck either), but I ‘won’ a month without travel back in my home town of Boulder. I am sure I have written about Boulder before. I moved here when I was 18 and stayed for 15+ years, until I met my wife and moved to Phoenix (to be closer to family because kids). Phoenix isn’t bad, but Boulder is home (I grew up in Jersey but the skiing and rock climbing there are marginal). My goal for this month is to NOT TRAVEL, spend time with the family, and work at a relaxed pace. So far, so good. Heavy travel is hard on kids, especially young kids, and they are really enjoying knowing that when I walk out the door for ‘work’ and hop on my bicycle, I will be back at the end of the day. Boulder has changed since I left in 2006, but I suspect I have changed more. Three kids will do that to you. But after I ignore the massive real estate prices, proliferation of snooty restaurants, and increase in number of sports cars (still outnumbered by Subarus), it’s hard to complain about my home town doing so well. One unexpected change is the massive proliferation of startups and the resulting tech communities. I lived and worked here during the dot com boom, and while Boulder did okay, what I see now is a whole new level. I can’t walk into a coffee shop or lunch spot without overhearing discussions on the merits of various Jenkins plugins or improving metrics for online marketing campaigns. The offices that stood vacant after the loss of Access Graphics are now full of… well… people 10-15 years younger than me. 
For an outdoor athlete with a penchant for entrepreneurship, it’s hard to find someplace better to take a month-long ‘vacation’. As I hit local meetups (including speaking at the AWS meetup on the 22nd) I am loving engaging with a supportive tech community. Which isn’t a comment on the security community, but a recognition that sometimes it is extremely valuable to engage with a group of innovation-embracing technical professionals who aren’t getting their (personal) asses kicked by criminal and government hackers by the minute. I have always thought security professionals need to spend time outside our community. One of the ways I staved off burnout in emergency services was to have friends who weren’t cops and paramedics – I learned to compartmentalize that part of my life. If you can, check out a local DevOps or AWS meetup. It’s fun, motivating, and they have better swag.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Mortman quoted in The 7 skills Ops pros need to succeed with DevOps.

Favorite Securosis Posts

Adrian Lane: Incite 7/9/2014: One dollar…. One of Mike’s best all year.
Rich: Increasing the Cost of Compromise. This is the strategy of Apple and Microsoft at the OS level, and it is paying off (despite common perception). Economics always wins. Well, except in politics.

Other Securosis Posts

Trends in Data Centric Security: Tools.
Open Source Development and Application Security Survey Analysis [New Paper].
Leveraging Threat Intelligence in Incident Response/Management.
Trends In Data Centric Security: Use Cases.
Incite 7/2/2014 – Relativity.
Updating the Endpoint Security Buyer’s Guide: Mobile Endpoint Security Management.
Firestarter: G Who Shall Not Be Named.

Favorite Outside Posts

Adrian Lane: Threat Modeling for Marketing Campaigns. Educational walkthrough of how Etsy examined fraud and what to do about it. Smart people over there…
Rich: Ideas to Keep in Mind When Designing User Interfaces.
I really enjoy user interface and experience design. Mostly because I enjoy using well-designed systems. This isn’t security specific, but it is absolutely worth a read… especially for product managers.

Research Reports and Presentations

Analysis of the 2014 Open Source Development and Application Security Survey.
Defending Against Network-based Distributed Denial of Service Attacks.
Reducing Attack Surface with Application Control.
Leveraging Threat Intelligence in Security Monitoring.
The Future of Security: The Trends and Technologies Transforming Security.
Security Analytics with Big Data.
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7.
Eliminate Surprises with Security Assurance and Testing.
What CISOs Need to Know about Cloud Computing.

Top News and Posts

Specially Crafted Packet DoS Attacks, Here We Go Again.
Vulnerabilities (fixed) in AngularJS.
DHS Releases Hundreds of Documents on Wrong Aurora Project. As my daughter would say, “Seriously?!?”
Microsoft Settles With No-IP Over Malware Takedown.
Hackers (from you know where) crack and track shipping information. A great example of a target that doesn’t realize its value.
Researchers Disarm Microsoft’s EMET.
Mysterious cyberattack compromises more than a thousand power plant systems. Noticing a trend here?


Trends in Data Centric Security: Tools

The three basic data centric security tools are tokenization, masking, and data element encryption. Now we will discuss what they are, how they work, and which security challenges they best serve.

Tokenization: You can think of tokenization like a subway or arcade token: it has no cash value but can be used to ride the train or play a game. In data centric security, a token is provided in lieu of sensitive data. The most common use case today is in credit card processing systems, as a substitute for credit card numbers. A token is basically just a random number – that’s it. The token can be made to look just like the original data type; in the case of credit cards the tokens are typically 16 digits long, they usually preserve the last four original numbers, and they can even be generated such that they pass the Luhn validation check. But a token is a random value, with no mathematical relationship to the original, and no value other than as a reference to the original in some other (more secure) database. Users may choose to maintain a “token database” which associates the original value with the token in case they need to look up the original at some point in the future, but this is optional. Tokenization has advanced far beyond simple value replacement, and is lately being applied to more advanced data types. These days tokens are not just for simple things like credit cards and Social Security numbers, but also for JSON & XML files and web pages. Some tokenization solutions replace data stored within databases, while others can work on data streams – such as replacing unique cell IDs embedded in cellphone tower data streams. This enables both simple and complex data to be tokenized, at rest or in motion – and tokens can look like anything you want. Very versatile and very secure – you can’t steal what’s not there! Tokenization is used to ensure absolute security by completely removing the original sensitive values from secured data.
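The token generation just described – a random 16-digit surrogate that keeps the last four digits and still passes the Luhn check – can be sketched in a few lines. This is purely illustrative, not any vendor’s algorithm; it relies on the fact that adjusting a single digit can always fix the Luhn checksum.

```python
import random

def luhn_valid(number: str) -> bool:
    """Standard Luhn check: double every second digit from the right."""
    digits = [int(d) for d in number]
    odd = digits[-1::-2]                      # rightmost digit, then every other
    even = digits[-2::-2]                     # digits to be doubled
    total = sum(odd) + sum(sum(divmod(2 * d, 10)) for d in even)
    return total % 10 == 0

def tokenize(pan: str) -> str:
    """Random 16-digit token: last four preserved, Luhn-valid, no math link to the original."""
    assert len(pan) == 16 and pan.isdigit()
    while True:
        body = [random.randint(0, 9) for _ in range(12)]
        for d0 in range(10):                  # one of these ten values always fixes the checksum
            body[0] = d0
            candidate = "".join(map(str, body)) + pan[-4:]
            if luhn_valid(candidate) and candidate != pan:
                return candidate
```

The token’s only link back to the real number is whatever entry you choose to keep in a separate vault; the digits themselves carry no mathematical relationship to the original.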
Random values cannot be reverse engineered back to the original data. For example, given a database where the primary key is a Social Security number, tokenization can generate unique random tokens which fit the receiving database. Some firms merely use the token as a placeholder and don’t need the original value. In fact some firms discard (or never receive) the original value – they don’t need it. Instead they use tokens simply because downstream applications might break without an SSN or compatible surrogate. Users who need to occasionally reference the original values use token vaults or equivalent technologies. Vaults are designed to allow only credentialed administrators access to the original sensitive values under controlled conditions, but a vault compromise would expose all the original values. Vaults are commonly used for PHI and financial data, as mentioned in the last post.

Masking: This is another very popular tool for protecting data elements while retaining the aggregate value of data sets. For example we might substitute an individual’s Social Security number with a random number (as in tokenization), or a name randomly selected from a phone book, but retain gender. We might replace a date of birth with a random value within X days of the original to effectively preserve age. This way the original (sensitive) value is removed entirely without destroying the value of the aggregate data set, to support later analysis. Masking is the principal method of creating useful new values without exposing the original. It is ideally suited for creating data sets which can be used for meaningful analysis without exposing the original data. This is important when you don’t have sufficient resources to secure every system within your enterprise, or don’t fully trust the environment where the data is being stored. Different masks can be applied to the same data fields, to produce different masked data for different use cases.
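A minimal sketch of the masking transforms just described: random name substitution, a surrogate SSN, a date of birth shifted within a window to preserve approximate age, and gender kept intact for aggregate analysis. The name pool and record layout are assumptions for illustration only.

```python
import random
from datetime import date, timedelta

SURNAME_POOL = ["Rivera", "Chen", "Okafor", "Silva", "Novak"]  # stand-in for a phone-book list

def mask_record(record: dict, dob_jitter_days: int = 30, rng=random) -> dict:
    """Mask identifying fields while preserving analytic value (gender, approximate age)."""
    shift = rng.randint(-dob_jitter_days, dob_jitter_days)
    return {
        "name": rng.choice(SURNAME_POOL),             # substituted, not revealed
        "ssn": "%09d" % rng.randrange(10**9),         # random surrogate, as in tokenization
        "gender": record["gender"],                   # preserved for aggregate analysis
        "dob": record["dob"] + timedelta(days=shift), # age preserved within +/- 30 days
    }

original = {"name": "Jane Doe", "ssn": "123456789", "gender": "F", "dob": date(1980, 5, 17)}
masked = mask_record(original)
```

Swapping in a different jitter window or name pool produces a different mask over the same fields – which is exactly the per-use-case flexibility the text describes.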
This flexibility exposes much of the value of the original with minimal risk. Masking is very commonly used with PHI, test data management, and NoSQL analytics databases. That said, there are potential downsides as well. Masking does not offer quite as strong security as tokenization or encryption (which we will discuss below). The masked data does in fact bear some relationship to the original – while individual fields are anonymized to some degree, preservation of specific attributes of a person’s health record (age, gender, zip code, race, DoB, etc.) may provide more than enough information to reverse engineer the masked data back to the original. Masking can be very secure, but that requires selection of good masking tools and application of a well-reasoned mask to achieve security goals while supporting the desired analytics.

Element/Data Field Encryption / Format Preserving Encryption (FPE): Encryption is the go-to security tool for the majority of IT and data security challenges we face today. Properly implemented, encryption produces obfuscated data that cannot be reversed into the original value without the encryption key. What’s more, encryption can be applied to any type of data, such as first and last names, or to entire data structures such as a file or database table. And encryption keys can be provided to select users, keeping data secret from those not entrusted with keys. But not all encryption solutions are suitable for a data centric security model. Most forms of encryption take human-readable data and transform it into binary format. This is a problem for applications which expect text strings, or databases which require properly formatted Social Security numbers. These binary values create unwanted side effects and often cause applications to crash. So most companies considering data centric security need an encryption cipher that preserves at least format, and often data type as well.
Typically these algorithms are applied to specific data fields (e.g.: name, Social Security number, or credit card number), and can be used on data at rest or applied to data streams as information moves from one place to the next. These encryption variants are commercially available, and provide
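To make the format-preserving idea concrete, here is a toy balanced Feistel network over digit strings – the general construction behind standardized FPE modes such as NIST’s FF1. This is strictly an illustration of the concept (no tweak input, unvetted round count) and should never stand in for a real FPE implementation: the output is the same length and type as the input, and only the key holder can reverse it.

```python
import hashlib
import hmac

def _round(key: bytes, i: int, half: str, width: int) -> int:
    """Keyed round function: HMAC-SHA256 reduced to a number of the right width."""
    digest = hmac.new(key, f"{i}:{half}".encode(), hashlib.sha256).hexdigest()
    return int(digest, 16) % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Toy format-preserving encryption of an even-length digit string."""
    assert len(digits) % 2 == 0 and digits.isdigit()
    w = len(digits) // 2
    L, R = digits[:w], digits[w:]
    for i in range(rounds):
        L, R = R, str((int(L) + _round(key, i, R, w)) % 10**w).zfill(w)
    return L + R

def fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Invert the Feistel rounds in reverse order."""
    w = len(digits) // 2
    L, R = digits[:w], digits[w:]
    for i in reversed(range(rounds)):
        L, R = str((int(R) - _round(key, i, L, w)) % 10**w).zfill(w), L
    return L + R
```

Because the ciphertext is still 16 digits, it drops into a column or data stream that expects a card number – which is the whole point of format preservation.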


Incite 7/9/2014: One dollar…

A few weeks ago I was complaining about travel and not being home – mostly because I’m on family vacations and doing work I enjoy. I acknowledged these are first world problems. I didn’t appreciate what that means. You lose touch with a lot of folks’ reality when you are in the maelstrom of your own crap. I’m too busy. The kids have too many activities. There are too many demands on my time.   That all stopped over the weekend. On the recommendation of a friend, I bought and watched Living on One Dollar. It’s a documentary about 4 US guys who went down to a small town in Guatemala and lived on one dollar a day. That was about the median income for the folks in that town. Seeing the living conditions. Seeing the struggle. It’s hard to live on that income. There is no margin for error. If you get sick you’re screwed because you don’t have money for drugs. You might not be able to afford to send your kids to school. If you are a day laborer and you don’t get work that day, you might not be able to feed your kids. If the roof is leaking, you might not have any money to fix it. But you know what I saw in that movie? Not despondency. Not fatalism, though I’m sure some folks probably feel that from time to time. I saw optimism. People in the town were taking out micro-loans to start their own businesses and then using the profits to go to school to better themselves. I saw kindness. One of the only people in the town with a regular salaried job gave money to another family that couldn’t afford medicine to help heal a sick mother. This was money he probably couldn’t spare. But he did anyway. I saw kids who want to learn a new language. They understand they had to work in the fields and might not be able to go to school every year, but they want to learn. They want to better themselves. They have the indomitable human spirit. Where many people would see pain and living conditions no one should have to suffer through, these folks saw optimism. 
Or the directors of the documentary showed that. They showed the impact of micro-finance. Basically it made me reconnect with gratitude. For where I was born. For the family I was born into. For the opportunities I have had. For the work I have put in to capitalize on those opportunities. Many of us won the birth lottery. We have opportunities that billions of other people in the world don’t have. So what are you going to do with it? I’m probably late to the bandwagon, but I’m going to start making micro-loans. I know lots of you have done that for years, and that’s great. I’ve been too wrapped up in my own crap. But it’s never too late to start, so that’s what I’m going to do. So watch the movie. And then decide what you can do to help. And then do it.

–Mike

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

June 30 – G Who Shall Not Be Named
June 17 – Apple and Privacy
May 19 – Wanted Posters and SleepyCon
May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
May 5 – There Is No SecDevOps
April 28 – The Verizon DBIR
April 14 – Three for Five
March 24 – The End of Full Disclosure
March 19 – An Irish Wake
March 11 – RSA Postmortem

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
Leveraging Threat Intelligence in Incident Response/Management: Introduction
Endpoint Security Management Buyer’s Guide (Update): Mobile Endpoint Security Management
Trends in Data Centric Security: Introduction; Use Cases
Open Source Development and Application Security Analysis: Development Trends; Application Security; Introduction
Understanding Role-based Access Control: Advanced Concepts; Introduction
NoSQL Security 2.0: Understanding NoSQL Platforms; Introduction

Newly Published Papers

Advanced Endpoint and Server Protection
Defending Against Network-based DDoS Attacks
Reducing Attack Surface with Application Control
Leveraging Threat Intelligence in Security Monitoring
The Future of Security
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7
Eliminating Surprises with Security Assurance and Testing
Not so much

Incite 4 U

Oh about that cyber-policy… It looks like folks are getting interested in cyber-insurance. At least in the UK. And it’s mainstream news now, given an article on Business Insider about the market. After the predictable Target breach reference, it had some interesting numbers on the growth of the cyber-insurance market – to a projected $2 billion-plus in 2014. So what are you buying? Beats me. Is it “insurance cover from hackers stealing customer data and cyber terrorists shutting down websites to demand a ransom”? I didn’t realize you could value your data and get reimbursed if it’s stolen. And how is this stuff priced? I have no idea. A professor offers a good assessment: “When it comes to cyber there are lots of risks and they keep changing, and you have a general absence of actuarial material. The question for the underwriter is how on earth do I cover this?” And how on earth do you collect on it? It


Open Source Development and Application Security Survey Analysis [New Paper]

We love data – especially when it tells us what people are doing about security. Which is why we were thrilled at the opportunity to provide a – dare I say open? – analysis of the 2014 Open Source Development and Application Security survey. And today we launch the complete research paper with our analysis of the results. Here are a couple of highlights: Yes, even after a widely-reported major vulnerability in an open source component used in millions of systems around the globe, confidence in open source security did not suffer. In fact, it ticked up. Ironic? Amazing? I was surprised and impressed. … and … 54% answered “Yes, we are concerned with open source vulnerabilities,” but roughly the same percentage of organizations do not have a policy governing open source vulnerabilities. We think this type of survey helps shed important light on how development teams perceive security issues and are addressing them. You can find the official survey results at http://www.sonatype.com/about/2014-open-source-software-development-survey. And our research paper is available for download, free as always: 2014 Open Source Development and Application Security Survey Analysis. Finally, we would like to thank Sonatype, both for giving us access to the survey results and for choosing to license this research to accompany their survey results! Without their interest and support for our work, we would not be able to provide you with research such as this.


Leveraging Threat Intelligence in Incident Response/Management

It’s hard to be a defender today. Adversaries continue to innovate, attacking software which is not under your control. These attacks move downstream as low-cost attack kits put weaponized exploits in the hands of less sophisticated adversaries, making them far more effective. But frequently attackers don’t even need to use innovative attacks, because a little reconnaissance and a reasonably crafted phishing message can effectively target and compromise your employees. The good news is that we find very few still clinging to the hope that all attacks can be stopped by deploying the latest shiny object coming from a VC-funded startup. Where does that leave us? Pretty much where we have been for years. It is still about reacting faster – the sooner you know about an attack, the sooner you can start managing it. In our IR fundamentals series and subsequent React Faster and Better paper, we mapped out a process for responding to these incidents completely and efficiently, utilizing tactics honed over decades in emergency response. But the world hasn’t stayed still over the past 3 years – not by a long shot. So let’s highlight a few things shifting the foundation under our (proverbial) feet. Better adversaries and more advanced tactics: Attackers continue to refine their tactics, progressing ever faster from attack to exfiltration. As we described in our Continuous Security Monitoring paper, attackers can be in and out with your data in minutes. That means if monitoring and assessment are not really continuous, you leave a window of exposure. This puts a premium on reacting faster. Out of control data: If you haven’t read our paper on The Future of Security, do that now. We’ll wait. The paper explains how the combination of cloud computing and mobility fundamentally disrupts the way technology services are provisioned and delivered.
They will have a broad and permanent impact on security, most obviously in that you lose most control over your data, because it can reside pretty much anywhere. So how can you manage incidents when you aren’t sure where the data is, and you may not have seen the attacks before? That could be the topic of the next Mission Impossible movie. Kidding aside, the techniques security professionals can use have evolved as well, thanks to the magic of Moore’s Law. Networks are faster, but we can now capture that traffic when necessary. Computers and devices are more powerful, but now we can collect detailed telemetry on them to thoroughly understand what happens to them. Most importantly, with our increasing focus on forensics, most folks no longer need to argue so hard that security data collection and analysis are critical to effectively responding to and managing incidents.

More Data

As mentioned above, our technology to monitor infrastructure and analyze what’s going on has evolved quickly. Full network packet capture: New technologies have emerged that can capture multi-Gbps network traffic and index it in near real time for analysis. This provides much higher fidelity data for understanding what attackers might have done. Rather than trying to interpret log events and configuration changes, you can replay the attack and see exactly what happened and what was lost. This provides the kind of evidence essential for quickly identifying the root cause of an attack, as well as the basis for a formal investigation. Endpoint activity monitoring: We introduced this concept in our Endpoint Security Buyer’s Guide and fleshed it out in Advanced Endpoint and Server Protection. This approach enables you to collect detailed telemetry from endpoint devices, so you see every action on the device, including what software was executed and which changes were made – to the device and all its files.
This granular activity history enables you to search for attack patterns (indicators of compromise) at any time. So even if you don’t know activity is malicious when it takes place, you can identify it later – as long as you keep the data.

A ton of data: The good news is that, between network packets and endpoint telemetry, you have much more data to analyze. The bad news is that you need technology that can actually analyze it. So we hear a lot about “big data” for security monitoring these days. Regardless of what the industry hype machine calls it, you need technologies that let you index, search through, and find patterns within the data – even when you don’t know exactly what you’re looking for. Fortunately other industries – like retail – have been analyzing data for unseen and unknown patterns for years, and many of their analytical techniques are now being applied to security.

As a defender it is tough to keep up with attackers, but many of these new technologies help fill the gaps. Technology is no longer the biggest issue in detecting, responding to, and managing threats and attacks. The biggest problem is now the lack of skilled security professionals to do the work.

In Search of… Responders

It seems like every conversation we have with CISOs or other senior security professionals these days turns at some point to finding staff to handle attacks. Open positions stay open for extended periods. These organizations need to be creative to find promising staffers and invest in training them, even though those people often soon move on to a higher-paid consulting job or another firm. If you are in this position, you aren’t unique. Even the incident response specialist shops are resource constrained. There just aren’t enough people to meet demand. The security industry needs to address this on multiple fronts:

Education: Continued investment in training people in core skills is required.
More importantly, these folks need opportunities and resources to learn on the job – which is really the only way to keep up with modern attackers anyway.

Automation: The tools need to continue evolving, to make response more efficient and accessible to less sophisticated staff. We are not talking about dumbing down the process, but about making it easier and more intuitive, so less skilled folks can handle more of the response themselves.
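The retrospective search idea described above – collect detailed endpoint telemetry now, then match newly published indicators of compromise against it later – can be sketched in a few lines. This is a minimal illustration, not any product’s API: the event schema, the hash values, and the indicator lists are all hypothetical, invented for this example.

```python
from dataclasses import dataclass

@dataclass
class EndpointEvent:
    timestamp: str
    host: str
    process: str
    file_hash: str  # hash of the executed binary (fabricated values below)

# Telemetry recorded *before* anyone knew whether these events mattered.
telemetry = [
    EndpointEvent("2014-06-01T09:14:02", "hr-laptop-12", "winword.exe", "a3f19c01"),
    EndpointEvent("2014-06-01T09:14:05", "hr-laptop-12", "dropper.exe", "9bc2aa07"),
    EndpointEvent("2014-06-02T11:30:44", "fin-desk-03", "excel.exe", "77d04e55"),
]

# Indicators of compromise published weeks after collection (hypothetical).
ioc_hashes = {"9bc2aa07"}
ioc_process_names = {"dropper.exe"}

def retro_search(events, hashes, names):
    """Return past events matching any indicator – a possible earlier compromise."""
    return [e for e in events if e.file_hash in hashes or e.process in names]

hits = retro_search(telemetry, ioc_hashes, ioc_process_names)
for e in hits:
    print(f"{e.timestamp} {e.host} ran {e.process} ({e.file_hash})")
```

The point of the sketch is the ordering: the match happens long after collection, which is exactly why the value of this approach depends on how long you retain the data.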


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.