Securosis

Research

How to Use the 2013 Verizon Data Breach Investigations Report

A few hours after this post goes live, the Verizon Enterprise risk team will release their 2013 Data Breach Investigations Report. This is a watershed year for the report, as they are now up to 19 contributing organizations including law enforcement agencies, multiple computer emergency response teams (CERTs), and even potential competitors. The report covers 47,000 incidents, among which there were 621 confirmed data disclosures. This is the best data set since the start of the report, so it provides the best insight into what is going on out there. We were fortunate enough to get a preview of the report and permission to post this a few hours before the report is released. In the next 24-72 hours you will see a ton of articles; as analysts we aren’t here to make a story or nab a headline, but to help you get your job done. We offer a very brief overview of the interesting things we saw in the report, but our main focus for this post is to save you a little time in using the results to improve security in your own organization.

The best part this year is that the data reflects a more balanced demographic than in the past, and the Verizon risk team did a great job of breaking things out so you can focus on the pieces that matter to you. The report does an excellent job of showing how different demographics face different security risks, from different attackers, using different attack techniques. Instead of a bunch of numbers jumbled together, you can focus on the incidents most likely to affect your organization based on your size and industry. You probably knew that already, but now you have numbers to back you up.

But first: If you are an information security professional, you must read this report. Don’t make decisions based on news articles, this post, or any other secondary analysis. It’s a quick read, and well worth your time, even if you only skim it. Got it? There is a ton of good analysis in the report, and no outside summaries will cover the important things you need for making your own risk decisions. Not even ours. We could easily write a longer analysis of the DBIR than the DBIR itself.

Key context

Before we get any deeper, Verizon made two laudable decisions when compiling the report that might cause some hand wringing among those who don’t understand why:

  • They almost completely removed references to lost record counts, such as the number of credit card numbers lost. The report is much more diverse this year, and record counts (which are never particularly useful in breach analysis) were just being misused and misunderstood. Only 15% of confirmed incidents had anything close to a measurable lost records count, so it made no sense to mention counts.
  • The report focuses on the 621 confirmed data loss incidents, not the 47,000 total incidents. Another great decision – most organizations have different definitions of ‘incident’, which made data normalization a nightmare. This is the Data Breach Investigations Report, not an analysis of every infected desktop on your network.

These two great decisions make the report much more focused and useful for making risk decisions.

A third piece of context is usually lost in much of the press coverage: When the DBIR says something like “password misuse was involved in an incident”, it means it was one of multiple factors in the incident – not necessarily the root cause.
Later in the report they tie in the first attack in the chain of attacks used, but you can’t read “76% of network intrusions exploited weak or stolen credentials” as “76% of incidents were the result of weak or stolen credentials”. Attacks use chains of techniques, and credentials are only one factor. Context really is king, because your goal is to break the attack chain at the most efficient and cost-effective point.

The last piece of context is an understanding of what happens when 19 organizations participate. Some use VERIS (the open incident recording methodology published by Verizon) and others use their own frameworks. The Verizon risk team converts between methodologies as needed, and usually excludes data if there isn’t enough to cover the core needed to merge the data sets. This means they sometimes have more or less detail on incidents, and they are clear about this in the report. There is no way to completely avoid survey bias in a sample set like this – incidents must be detected to be reported, and a third party response team or law enforcement must be engaged for Verizon to get the data. This is why, for example, lost and stolen devices are practically nonexistent in this report. You don’t call Verizon or Deloitte for a forensics investigation when a salesperson loses a laptop. Then again, we know of approximately zero cases where a lost device resulted in fraud. Lost devices definitely incur costs for loss reporting and customer notification, but we can’t find any ties to fraud.

There is one choice we disagree with, and one area we hope they will drop, but they probably have to keep:

  • The DBIR includes many incidents of ATM skimming and other physical attacks that don’t involve network intrusion. These are less useful to the infosec audience, and we believe the banking community already has these numbers from other places. Tampering with ATMs in order to install skimmers is the vast majority of the ‘Physical’ threat action, which represents 35% of the breaches in the DBIR. ATM skimming attacks are still data breaches, but the security controls to mitigate them are managed outside information security in most financial institutions.
  • Year-over-year trends are nearly worthless now, due to the variety of contributors. It is a very different sample set from last year, the year before, or previous years. Perhaps if they filtered out only Verizon incidents, they could offer more useful trends. But people love these trend charts, despite the big changes in the sample set.

For the most part this doesn’t negatively affect the data too much, but


Security Analytics with Big Data [New Series]

Big Data is being touted as a ‘transformative’ technology for security event analysis – promised to detect threats in the ever-increasing volume of event data generated from in-house, mobile, and cloud-based services. But a combination of PR hype, vendor positioning, and customer questions has pushed it to the top of my research agenda. Many customers are asking “Wait, don’t I already have SIEM for event analysis?” Yes, you do. And SIEM was designed and built to solve the same problems – but 7-8 years ago – and it is failing to keep up with current problems. It’s not just that we’re trying to scale up to a much larger set of data, but we also need to react to events an order of magnitude faster than before. Still more troubling is that we are collecting multiple types of data, each requiring new and different analysis techniques to detect advanced attacks. Oh, and while all that slows down SIEM and log management systems, you are under the gun to identify attacks faster than before.

This trifecta of issues limits the usefulness of SIEM and Log Management – and makes customers cranky. Many SIEM platforms can’t scale to the quantity of data they need to manage. Some are incapable of even storing basic data as fast as it comes in – forget about storing and analyzing non-standard data types. ‘Real-time’ analysis is commonly cited as a SIEM feature, but after collection, storage, normalization, correlation, and enrichment, you are lucky to access new events within an hour – much less within a minute. The good news is that big data, correctly deployed, can solve these issues.

In this paper we will examine how big data addresses scalability and performance, improves analysis, can accommodate multiple data types, and can be leveraged within existing environments. Our goal is to help users differentiate reality from wishful thinking, and to provide enough information to make informed purchasing decisions. To do this we need to demystify big data and contrast how it differs from traditional data management systems. We will offer a clear and unique definition of big data and explain how it helps overcome current technical limitations. We will offer a pragmatic way for customers to leverage big data, enabling them to select a solution strategically. We will highlight the limitations of SIEM and Log Management, key areas of customer dissatisfaction, and areas where big data excels in comparison. We will also discuss some changes required for big data analysis and data management, as well as the change in mindset necessary to take full advantage.

This is not all theory and speculation – big data is currently being employed to detect security threats, address new requirements for IT security, and even help gauge the effectiveness of other security investments. Big data natively addresses ever-increasing event volume and the rate at which we need to examine new events. There is no question that it holds promise for security intelligence, both in the numerous ways it can parse information and through its native capabilities to sift proverbial needles from monstrous haystacks. Cloud and mobile architectures force us to reexamine how we manage security data, and to scale across broader sets of systems and events – neither of which mesh with the structured data repositories on which most organizations rely. But most IT and security practitioners do not yet fully understand big data or how to employ it, so they are unable to weed through all the hype, FUD, and hyperbole.
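To make the "split the work, then aggregate" idea concrete, here is a minimal, hypothetical sketch of the pattern big data platforms parallelize for event analysis: mappers parse raw events independently (and therefore across many nodes), and reducers aggregate whatever the mappers emit. The log format, the parsing, and the failed-login use case are assumptions for illustration – this is not an excerpt from the paper or any particular product.

```python
# Minimal map/reduce-style sketch: counting failed logins per source IP.
# This illustrates the analysis pattern big data platforms parallelize,
# not a production Hadoop job; the log format below is hypothetical.
from collections import Counter
from itertools import chain

def map_phase(log_line):
    # Emit (source_ip, 1) for each failed-login event; ignore everything else.
    if "Failed password" in log_line:
        source_ip = log_line.rsplit("from", 1)[-1].split()[0]
        yield (source_ip, 1)
    # Other event types would be handled by additional mappers.

def reduce_phase(pairs):
    # Aggregate the counts emitted by all mappers.
    totals = Counter()
    for key, value in pairs:
        totals[key] += value
    return totals

if __name__ == "__main__":
    sample_logs = [
        "Apr 18 10:01:02 host sshd[411]: Failed password for root from 203.0.113.7 port 4242 ssh2",
        "Apr 18 10:01:05 host sshd[411]: Failed password for admin from 203.0.113.7 port 4243 ssh2",
        "Apr 18 10:02:17 host sshd[412]: Accepted password for alice from 198.51.100.23 port 5000 ssh2",
    ]
    # On a cluster the map phase runs in parallel across data nodes and the
    # reduce phase merges the partial results; here we simply chain them.
    mapped = chain.from_iterable(map_phase(line) for line in sample_logs)
    print(reduce_phase(mapped).most_common(5))
```

The point of the split is that a framework like MapReduce can spread the parsing step across a cluster while the aggregation logic stays unchanged – which is where the scale and velocity gains come from.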
To take full advantage, however, requires both a deeper understanding of the technology and a subtle shift in mindset, to enable informed decisions on incorporating big data into existing IT systems – perhaps by shifting to newer big data platforms. This research paper will highlight several areas:

  • Use Cases: We will discuss issues customers cite with performance and scalability, particularly for security event analysis. We will discuss in detail how SIEM, Log Management, and event-centric systems struggle under new requirements for data velocity and data management, and why existing technologies aren’t cutting it. We will also discuss the inflexibility of pre-big-data analysis, alerting, and reporting – and how they demand a new approach to security and forensics, as we struggle to keep pace with the evolution of IT.
  • New Events and Approaches: This post will explain why we need to consider additional data types that go beyond events. Existing technologies struggle to meet emerging needs because threat data does not conform to traditional syslog and netflow event types. There is a clear trend toward broader data analysis to detect advanced attacks and better understand risks.
  • What is Big Data and how does it work? This post will offer a basic definition of big data, along with a discussion of the native capabilities that make big data different from traditional analysis tools. We will discuss how features like HDFS, MapReduce, Hive, and Pig work together to address issues of scale, velocity, performance, and multiple data types.
  • The promise of big data: We will explain why big data is viewed as a disruptive technology for security analytics. We will show how big data solutions mitigate problems and change security and event analysis. We will discuss how big data platforms handle collecting and parsing event data, and cover different queries and reports that support new threat analyses.
  • How big data changes security platforms: This post will discuss how to supplement existing systems – through standalone instances, partial integration of big data with existing systems, systems that natively leverage big data infrastructure, or fully integrated systems that run atop NoSQL structures. We will also discuss operational changes to SIEM usage, including the growing importance of data scientists to security.
  • Integration roadmap and planning: In this section we will address the common concerns, limitations, and realities of merging big data into your IT systems. Specifically, we will discuss integration and deployment issues, platform selection (diversity of platforms and data), policy and report development, data privacy and sharing, and big data platform security basics.

Our next post will cover use cases, the key areas where SIEM needs to improve,


The CISO’s Guide to Advanced Attackers: Mining for Indicators

The key to dealing with advanced attackers is not closing off every window of vulnerability. As we have discussed throughout this series, advanced attackers will figure out a way to gain a foothold in your environment. Actually they will find multiple ways into your environment. So if you hope for any semblance of success, your goal cannot be to stop them – instead you need to work on shortening the window between compromise and detection. We have called that Reacting Faster and Better for years. 5 years to be exact, but who’s counting? The general concept is that you want to monitor your environment, gathering key security information that can either identify typical attack patterns as they are happening (yes, a SIEM-like capability) or, more likely, be searched for indicators identified via intelligence activities.

Collecting All the Security Data

We say “all the security data” a bit tongue-in-cheek, but not too much. We have been saying Monitor Everything almost as long as we have been talking about Reacting Faster, because if you fail to collect data you won’t have an opportunity to get it later. Unfortunately most organizations don’t realize their security data collection leaves huge gaps until the high-priced forensics folks let you know they can’t truly isolate the attack, or the perpetrator, or the malware, or much of anything, because you just don’t have the data. Most folks only need to learn that lesson once. So the first order of business is to lay down a collection infrastructure to store all your security data. The good news is that you have likely been collecting security data for quite some time, and your existing investment and infrastructure should be directly useful for dealing with advanced attackers. This means your existing log management system may be useful after all. But perhaps not – you might have tools that aren’t at all suited to helping you find advanced attackers in your midst. One step at a time – now let’s delve into the data you need to collect.

  • Network Security Devices: Your firewalls and IPS devices generate huge logs of what’s blocked, what’s not, and which rules are effective. You will receive intelligence that typically involves port/protocol/destination combinations, or application identifiers for next-generation firewalls, which can identify potential attack traffic.
  • Configuration Data: One key area to mine for indicators is the configuration data from your devices. It enables you to look for very specific files and/or configurations that have been identified as indicators of compromise.
  • Identity: Similarly, information about logins, authentication failures, and other identity-related data is useful for matching against attack profiles from third-party threat intelligence providers.
  • NetFlow: This is another data type commonly used in SIEM environments; it provides information on protocols, sources, and destinations for network traffic as it traverses devices. NetFlow records are similar to firewall logs but far smaller, making them more useful for high-speed networks. Flows can identify lateral movement by attackers, as well as large exfiltration file transfers.
  • Network Packet Capture: The next frontier for security data collection is actually to capture all network traffic on key segments. Forensics folks have been doing this for years during investigations, but proactive continuous full packet capture – for the inevitable incident responses which haven’t even started yet – is still an early market. For more detail on how full packet capture impacts security operations, check out our Network Security Analytics research.
  • Application/Database Logs: Application and database logs are generally less relevant, unless they come from standard applications or components likely to be specifically targeted by attackers. But you might be able to discover unusual application and/or database transactions – which might represent bulk data removal, injection attempts, or efforts to attack your critical data.
  • Vulnerability Scans: This is another information source with limited value, detailing which devices are vulnerable to specific attacks. They help eliminate devices from your search criteria to streamline search activities.

Of course this isn’t an exhaustive list, and you are likely already capturing much of this data. That’s a good thing, but capturing and analyzing data within the context of a compliance audit is fundamentally different than trying to detect advanced attacker activity. We are sticking to the CISO view for this series, so we won’t dig into the technical nuances of the collection infrastructure. But it must be built on a strong analytical foundation which provides a threat-centric view of the world, rather than one focused on compliance reporting. More advanced organizations may already have a Security Operations Center (SOC) leveraging a SIEM platform for more security-oriented correlation and forensics to pinpoint and investigate attacks. That’s a start, but you will likely require some kind of Big Data thing, which should be clear after we discuss what we need this detection platform to do.

Attack Patterns FTW

As much as we have talked about the futility of blocking every advanced attack, that doesn’t mean we shouldn’t learn from both the past and the misfortune of others. We spent time early in this process sizing up the adversary, for some insight into what is likely to be attacked, and perhaps even how. That enables you to look for those attack patterns within your security data – the promise of SIEM technology for years. The ultimate disconnect with SIEM was the hard truth that you needed to know what you were looking for. Far too many vendors forgot to mention that little requirement when selling you a bill of goods. Perhaps they expected attackers to post their plans on Facebook or something? But once you do the work to model the likely attacks on your key information, and then enumerate those attack patterns in your tool, you can get tremendous value. Just don’t expect it to be fully automated. The best case is that you receive an alert about a very likely attack because it’s something you were looking for. But the quickest way to get killed is to plan for the best case. So we also need to ensure we are ready for the worst case: advanced attackers using attacks you haven’t seen before, in ways you don’t expect. That’s when
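As a purely illustrative sketch – the record fields, thresholds, and indicator list are assumptions, not anything from this series or a specific product – this is the kind of simple indicator sweep that becomes possible once flow data and threat intelligence sit in the same collection infrastructure:

```python
# Hypothetical sketch: sweeping collected NetFlow-style records for indicators.
# Record layout, threshold, and indicator list are illustrative assumptions,
# not a product feature or a specific intelligence feed.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_out: int

# Indicators might arrive from a threat intelligence feed or your own analysis.
KNOWN_BAD_DESTINATIONS = {"203.0.113.99", "198.51.100.250"}
EXFIL_BYTES_THRESHOLD = 50_000_000  # flag unusually large outbound transfers

def find_suspicious_flows(flows):
    for flow in flows:
        if flow.dst_ip in KNOWN_BAD_DESTINATIONS:
            yield ("known-bad destination", flow)
        elif flow.bytes_out > EXFIL_BYTES_THRESHOLD:
            yield ("possible bulk exfiltration", flow)

if __name__ == "__main__":
    sample = [
        FlowRecord("10.1.1.5", "203.0.113.99", 8443, 1200),
        FlowRecord("10.1.1.9", "192.0.2.10", 21, 75_000_000),
        FlowRecord("10.1.1.7", "198.51.100.4", 443, 900),
    ]
    for reason, flow in find_suspicious_flows(sample):
        print(f"{reason}: {flow.src_ip} -> {flow.dst_ip}:{flow.dst_port} ({flow.bytes_out} bytes)")
```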


Token Vaults and Token Storage Tradeoffs

Use of tokenization continues to expand as customers look to simplify PCI-DSS compliance. With this increased adoption comes a lot of vendor positioning and puffery, as vendors attempt to differentiate their products in an increasingly competitive market. Unfortunately this competitive positioning often causes confusion among buyers, which is why I have spent the last couple of mornings answering questions on FPE vs. tokenization, and the difference between a token vault and a database. Lately most questions center on differentiating tokenization data vaults, with the expected confusion caused by vendor hyperbole. In this post I will define a token vault and shed some light on their pros and cons. My goal is to help you determine, as a consumer, whether vaults are something to consider when selecting a tokenization solution.

A token vault is where a tokenization system stores issued tokens and the credit card numbers they represent. The vault typically contains other information, but for this discussion just think of the token vault as a long list of CC#/token pairs. A newer type of solution, called ‘stateless’ or ‘vault-less’ tokenization, is now available. These systems use derived tokens, which can be recalculated from some secret value, so they do not need to be stored in a database.

Recent press hype claims that token vaults are bad and you should stay away from them. The primary argument is “you don’t want a relational database as a token vault” – or more specifically, “an Oracle database makes a slow and expensive token vault, and customers don’t want that”. Not so fast! The issue is not clear-cut. It’s not that token vaults are good or bad; of course there are tradeoffs. Token vaults are fine for many types of customers, but not suitable for others. There are three issues at the heart of this debate: cost, scale, and performance. Let’s take a closer look at each of them.

Cost: If you are going to use an Oracle, IBM DB2, or Microsoft SQL Server database for your token vault, you will need a license for the database. And token vaults must be redundant, so you will need at least a couple of licenses. If you want to ensure that your tokenization system can handle large bursts of transactions – such as holiday shopping periods – you will need hefty servers. Databases are priced based on server capacity, so these licenses can get very expensive. That said, many customers running in-house tokenization systems already have database site licenses, so for many customers this is not an issue.

Scale: If you have data processing sites where token servers are dispersed across remote data centers that cannot guarantee highly reliable communications, synchronization of token vaults is a serious issue. You need to ensure that credit cards are not misused, that you have transactional consistency across all locations, and that no token is issued twice. With ‘vault-less’ tokenization synchronization is a non-issue. If consistency across a scaled tokenization deployment is critical, derived tokens are incredibly attractive. But some non-derived token systems with token vaults get around this issue by pre-allocating token sequences; this ensures tokens are unique, and synchronization latency is not a concern. This is a critical advantage for very large credit card processors and merchants, but not a universal requirement.
Performance: Some token server designs require a check inside the token vault prior to completing every transaction, in order to avoid duplicate credit card numbers or tokens. This is especially true when a single token is used to represent multiple transactions or merchants (multi-use tokens). Unfortunately early tokenization solutions generally had poor database architectures: they did not provide efficient mechanisms for indexing token/CC# pairs for quick lookup. This is not a flaw in the databases themselves – it was a mistake made by token vault designers as they laid out their data! As the number of tokens climbs into the tens or hundreds of millions, lookup operations can become unacceptably slow. Many customers have poor impressions of token vaults because their early implementations got this wrong. So very wrong. Today lookup speed is often not a problem – even for large databases – but customers need to verify that any given solution meets their requirements during peak loads.

For some customers a ‘vault-less’ tokenization solution is superior across all three axes. For other customers, with a deep understanding of relational databases, security, performance, and scalability are just part of daily operations management. No vendor can credibly claim that databases or token vaults are universally the wrong choice, just as nobody can claim that any non-relational solution is always the right choice. The decision comes down to the customer’s environment and IT operations. I am willing to bet that the vendors of these solutions will have some additional comments, so as always the comments section is open to anyone who wants to contribute.
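To make the vault vs. ‘vault-less’ distinction concrete, here is a deliberately simplified sketch – my own illustration, not any vendor’s design. The vaulted version stores a random token/PAN pair that must be persisted and synchronized; the derived version recomputes the token from a secret, so there is nothing to store. Note that real derived-token products use a reversible keyed transform so the original number can be recovered; the HMAC below only stands in for “recalculated from some secret value”.

```python
# Illustrative sketch of the two approaches discussed above; formats and key
# handling are simplified assumptions, not any vendor's implementation.
import hashlib
import hmac
import secrets

class VaultTokenizer:
    """Random tokens, with each token/PAN pair stored in a vault (here, a dict)."""
    def __init__(self):
        self.vault = {}  # token -> PAN; in production this is a replicated database

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)
        self.vault[token] = pan      # the lookup table must be stored and synchronized
        return token

    def detokenize(self, token: str) -> str:
        return self.vault[token]     # recovering the PAN requires a vault lookup

class DerivedTokenizer:
    """'Vault-less' style: the token is recomputed from a secret, so nothing is stored."""
    def __init__(self, secret: bytes):
        self.secret = secret

    def tokenize(self, pan: str) -> str:
        return hmac.new(self.secret, pan.encode(), hashlib.sha256).hexdigest()[:16]

if __name__ == "__main__":
    pan = "4111111111111111"

    vaulted = VaultTokenizer()
    token = vaulted.tokenize(pan)
    print(token, vaulted.detokenize(token))   # detokenization needs the vault

    derived = DerivedTokenizer(secret=b"demo-only-secret")
    print(derived.tokenize(pan))              # same input + secret always yields the same token
```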


The CISO’s Guide to Advanced Attackers: Intelligence, the Crystal Ball of Security

As discussed in our first post in the CISO’s Guide to Advanced Attackers, the first step is to determine what kind of attack would have the greatest impact on your environment (most likely on your mission), so you can infer which kinds of adversaries you are likely to face. Armed with context on likely adversaries, we can move into the intelligence gathering phase. This involves learning everything we can about possible and likely adversaries, profiling probable behaviors, and determining which kinds of defenses and controls make sense to address the higher probabilities. As we mentioned when wrapping up the last post, at the end of the day these are all just educated guesses. That’s why we keep using the word likely. But these guesses can be very useful for a head start on detecting advanced attacks. When you are racing the clock with an adversary in your environment, that head start can make the difference in whether key data is exfiltrated.

Master the Basics

But first there is something we neglected in the introductory post: the importance of having a strong set of security controls in place at the start of the process. Dealing with advanced attackers is not for unsophisticated or immature security organizations. The first order of business is to pick the low-hanging fruit, and ensure you aren’t making it easy for attackers. What does that mean? You need to master the basics and have good security practices implemented. We will not go into detail here – you can check out our research library for chapter and verse on security practices. Before you can address advanced attacks, you need to have already hardened key devices, implemented a strong hygiene (patch and configuration management) program, and properly segmented your network to make it difficult for attackers to get at important data. We can laugh about the futility of traditional endpoint protection, but you still need some measure of protection on key devices with access to sensitive data. For the rest of this series we will assume (and yes, we know the hazards of assuming anything) that you are ready to deal with an advanced attacker – meaning you have a relatively mature security program in place with proper control sets. If you can’t make that kind of statement, go do that now, and resume reading this paper once you’re done.

Profiling the Adversary

For better or worse, the industry seems to believe that intelligence = “threat intelligence.” And the many organizations not doing much to shorten the detection cycle for advanced attacks can get away with this generalization. But threat intelligence is a subset of intelligence – to really understand your adversaries you need to go deeper than learning the indicators of compromise found in their last attack. That means you will want to learn what they do, how they do it, where they live, what they like to do, where they were trained, the tools they use, the attacks they have undertaken, the nuances of their attack code, and their motives. Yes, that is a big list, and not many organizations are in a position to gather this kind of real intelligence on adversaries. You can check out some of the publicly available information in the APT1 report, which provides unprecedented detail about these apparently state-sponsored Chinese hackers, to get a feel for the depth of intelligence needed to seriously combat advanced attackers.
In light of the reality of limited resources and even more limited intelligence expertise, you are likely to buy this kind of intelligence, or get it from buddies who have more resources and expertise. You can gather a lot of intelligence by asking the right questions within your information sharing community, or by talking to researchers at your strategic information security vendors. Depending on how the intelligence is packaged, you may pay for it, or get the ability to interact with their security researchers as part of your product/service agreement. The kind of adversary intelligence you need goes well beyond what’s published in the quarterly threat reports from all the security vendors. They tend to give away their least interesting data as bait, but they are very likely to have much more interesting data which they use for their own work – you just have to ask, and possibly subscribe, to get access. When we talk about how advanced attackers impact the security process at the end of this series, we will discuss how to integrate this type of adversary intelligence into your security program.

Threat Intelligence Indicators

Now that we have defined the intelligence terminology, we can get into the stuff that will directly impact your security activity: the threat intelligence that has become such a hot topic in security circles. We have recently researched this topic extensively, so we will highlight a bunch of it here, but we also recommend you read our papers on Building an Early Warning System, Network-based Threat Intelligence, and Email-based Threat Intelligence for a much deeper look at the specific data sources and indicators you will be looking for. But let’s start with a high-level overview of the general kinds of threat intelligence you are likely to leverage in your efforts to deal with advanced attackers.

Malware

Malware analysis is maturing rapidly, and it is becoming common to quickly and thoroughly understand exactly what a malicious code sample does and to define behavioral indicators you can search for within your environment. We described this in gory detail in Malware Analysis Quant. For now suffice it to say that you aren’t looking for a specific file – that would just take us back to AV blacklists – instead you will seek indicators of what a file did to a device. Remember, it is no longer about what malware looks like – it is now about what it does. Fortunately a number of parties offer information services that provide data on specific pieces of malware. You can get an analysis based on a hash of a malware file, or upload a file that hasn’t been seen before. The services run malware samples through a sandbox to figure out what they do, profile them, and
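As a simple illustration of the hash-based lookup described above – with the feed format, file names, and indicator values all hypothetical – this is roughly what checking a sample against a list of known-bad hashes looks like before deciding whether to submit it for sandbox analysis:

```python
# Hypothetical sketch: hashing a sample and checking it against indicators pulled
# from a malware intelligence feed. Feed format and indicator values are assumptions.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_indicators(feed_path: Path) -> set:
    # Assume one lowercase SHA-256 hash per line, as many feeds provide.
    return {line.strip().lower() for line in feed_path.read_text().splitlines() if line.strip()}

def check_sample(sample_path: Path, indicators: set) -> str:
    file_hash = sha256_of(sample_path)
    if file_hash in indicators:
        return f"{sample_path.name}: known malware ({file_hash})"
    # An unknown hash is where a sandbox submission would come in.
    return f"{sample_path.name}: not in feed; submit for behavioral analysis ({file_hash})"

if __name__ == "__main__":
    # In practice you would call load_indicators() on a real feed file; this dummy
    # set (the SHA-256 of an empty file) just keeps the example self-contained.
    indicators = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
    print(check_sample(Path(__file__), indicators))
```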


Run faster or you’ll catch privacy

One of the things that smacked me upside the head at a recent IANS Forum, where I run the CISO track, is the clear merging of the security and privacy functions under the purview of one executive. Of the 15 or so CISOs in the room, at least half also had responsibility for privacy. And many of them got this new responsibility as part of a recent reorganization. So once again, be careful what you wish for. It was a lot more fun to be able to rail at the wacky privacy folks working for the CFO or General Counsel, wasn’t it? Not so much now that it’s your problem.

To be fair, this evolution is logical – you cannot really separate out the two if you accept that it’s all about protecting customers. Not only do you have to keep customer data private, but you could make the case that protecting intellectual property ensures you can deliver value to those customers. Malcolm Harkins, CISO (and now CPO) of Intel, appeared on a podcast to explain why his organization recently gave him responsibility for the privacy function as well:

Intel has added privacy to the portfolio of its top information security executive, Malcolm Harkins, who says too many information security professionals are “color blind or tone deaf” to privacy, wrongly thinking strong data protection provides privacy safeguards.

Most security types didn’t want to deal with the policies and other squishy things privacy folks must deal with. It was easier to focus on technology and leave the softer stuff to other folks. We don’t have that choice any more, and if you’re at the CISO level and still largely focused on technology, you’re doing it wrong. But if you thought responsibility for privacy wasn’t bad enough, a few CISOs are now taking on responsibility for management of building access systems as well (as part of physical security), as they are increasingly integrated with existing IAM systems. The fun never ends…

Photo credit: “Privacy” originally uploaded by PropagandaTimes


Intel Buys Mashery, or Why You Need to Pay Attention to API Security

Intel acquired API management firm Mashery today. readwrite enterprise posted a very nice write-up on how Mashery fits into the greater Intel strategy:

Intel is in the midst of a shift away from just selling chips to selling software and services. This change, while little-noticed, has been long in the making. Intel bought McAfee for $7.7 billion in 2010, putting it into the security-software business. In 2005, Intel bought a smaller company, Sarvega, which specialized in XML gateways. (XML, or extensible markup language, is a broad descriptor of a file format commonly used in APIs; an XML gateway transports files to make APIs possible.) Ideally, Intel might sell the chips inside the servers running the software programs that communicate via these APIs, too. (It has a substantial business selling such chips.) But what’s more important is the notion that Intel has a product offering that speaks to innovative startups, not just struggling PC manufacturers.

With the shift in the market from SOAP to REST over the last several years, and the explosion of APIs for just about everything, especially cloud and web services, tools like Mashery help both with the transformation and with gluing all the bits together. Because you can decide which bits of the API to expose and how, Mashery is a much more services-oriented way to manage which features – and what data – are exposed to different groups of users. It is an application-centric view of security, with API management as the key piece. Stated another way, Intel is moving away from the firewall and SSL security model we are all familiar with.

Many in the security space don’t see Intel as a player, despite its acquisition of McAfee. But Intel has been quietly developing products for tokenization, identity services, and security gateways for some time. Couple that with API security, and you start to get a clear picture of where Intel is headed – which is distinctly different than what McAfee offers for endpoints and back offices today.
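To illustrate the “decide which bits of the API to expose” idea in the most stripped-down way possible – the field names, consumer groups, and policy below are invented for the example and have nothing to do with Mashery’s actual product – field-level exposure control amounts to filtering a backend response against a per-consumer policy:

```python
# Hypothetical illustration of API management as exposure control: deciding which
# fields of a backend response are visible to which consumer group.
BACKEND_RECORD = {
    "account_id": "A-1001",
    "email": "user@example.com",
    "credit_limit": 15000,
    "ssn": "xxx-xx-1234",
}

# Policy: each API consumer group sees only the fields it is entitled to.
EXPOSURE_POLICY = {
    "public_partner": {"account_id"},
    "internal_app": {"account_id", "email", "credit_limit"},
}

def filter_response(record: dict, consumer_group: str) -> dict:
    # Unknown groups get nothing; known groups get only their allowed fields.
    allowed = EXPOSURE_POLICY.get(consumer_group, set())
    return {field: value for field, value in record.items() if field in allowed}

if __name__ == "__main__":
    print(filter_response(BACKEND_RECORD, "public_partner"))
    print(filter_response(BACKEND_RECORD, "internal_app"))
```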


On password hashing and how to respond to security flaws

I have been learning a lot lately about password hashing, since we realized our own site used an inadequate mechanism (SHA-256). I am also a major fan of 1Password for password generation and management. So I held my breath while reading how to use Hashcat on 1Password data:

The reason for the high speed is what I think this might be a design flaw. Here is why: But if you take a close look now you see these both mechanisms do not match in combination. To find out if the masterkey is correct, all we need is to match the padding, so all we need to satisfy the CBC is the previous 16 byte of data of the 1040 byte block. This 16 byte data is provided in the keychain! In other words, there is no need to calculate the IV at all.

I have an insanely long random master password, so this isn’t a risk for me (it sucks to type on my iPhone), but it’s darn creative and interesting. The folks at AgileBits posted a great response in the comments. Rather than denying the issue, they discussed the risk around it and how they already have an alternative, because they recognized issues with their implementation:

I could plead that we were in reasonably good company in making that kind of error, but as I’ve since learned, research in academic cryptography had been telling people not to use unauthenticated encryption for more than a decade. This is why today we aren’t just looking at the kinds of attacks that seem practical, but we are also paying attention to security theorems.

In other words, they owned up and didn’t deny it, which is what we should all do. For more details, read this deeper response on the AgileBits site. It’s worth it for a sense of these password hashing issues, which are something all security pros need to start absorbing.
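For anyone wondering what “inadequate mechanism” means in practice, here is a minimal sketch of the direction to move in: a per-user random salt plus a deliberately slow key derivation function, rather than a single unsalted SHA-256 pass. The iteration count and storage details are illustrative only, and purpose-built schemes (bcrypt, scrypt, and friends) are generally preferable to rolling your own.

```python
# Minimal sketch of salted, slow password hashing using only the standard library.
# Iteration count and storage format are illustrative; tune to your own hardware.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # the point is to make offline guessing expensive

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both alongside the user record

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison

if __name__ == "__main__":
    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("guess", salt, digest))                         # False
```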


Safari enables per-site Java blocking

I missed this during all my travels, but the team at Intego posted a great overview:

Meanwhile, Apple also released Safari 6.0.4 for Mountain Lion and Lion, as well as Safari 5.1.9 for Snow Leopard. The new versions of Safari give users more granular control over which sites may run Java applets. If Java is enabled, the next time a site containing a Java applet is visited, the user will be asked whether or not to allow the applet to load, with buttons labeled Block and Allow:

Your options are always allow, always block, or prompt. I still highly recommend disabling Java entirely in all browsers, but some of you will need it and this is a good option without having to muck with plugins.


No news is just plain good: Friday Summary, April 18, 2013

I know the exact moment I stopped watching local news. It was somewhere around 10-15 years ago. A toddler had died after being left locked in a car on a hot day. I wasn’t actually watching the news, but one of the screamers for the upcoming broadcast came on during a commercial break for whatever I was watching. A serious looking female reporter, in news voice, mentioned the death and how hot cars could get in the Colorado sun. Then she threw a big outdoor thermometer in a car, slammed the door, and reminded me to watch the news at 10 to see the results. I threw up a little bit, I think.

I don’t remember the exact moment I gave up on cable news, but it was sometime within the past year or two. I have a TV in my office I use for background noise; one of those little things you do when you have been working at home for a decade or so. I used to keep it on MSNBC but the bias finally went too over the top for me. Fox is out of the question, and I was trying out CNN. That lasted for less than an hour before I realized that Fox is for the right, MSNBC for the left, and CNN for the stupid. It was nothing other than sensational exploitative drivel.

As an emergency responder I know that what we see on the news at night rarely correlates to actual events. I have been on everything from national incidents to smaller events that still attracted the local press. Even responders and commanders don’t always have the full picture – never mind a reporter hovering at the fringe. Once I was on the body recovery of a 14-year-old who died after falling off a cliff while taking a picture. I showed up on the third day of the search, right around when one of our senior members finally located him due to the green gloss of a disposable camera. He used a secondary radio channel to report his location and what he had found, because we know the press scans all the emergency frequencies. I was quietly sent up and we didn’t stop the rest of the search, to provide a little decorum. Around the time the very small group of us arrived at the scene, the press finally figured it out. The next thing I knew there was a helicopter headed our way to get video. Of a dead kid. Who had been in the Colorado sun, outdoors, for 3 days. I used my metallic emergency blanket to cover him and protect his family.

Years later I was on another call to recover the body of a suicide in one of the most popular mountain parks in Boulder. Gunshot to the head. When we got to the scene one of the police investigators mentioned that we needed to watch what we said, because the local station had a new boom mike designed to pick up our conversations at a distance. I never saw it, so maybe it wasn’t true.

I don’t watch local news. I don’t watch cable news. Even this week I avoid it. They both survive only on exploitation and emotional manipulation. I do occasionally watch the old-school national news shows, where they still behave like journalists. I read. A lot. Sources with as little bias as I can find. According to the Guardian, research shows the news is bad for you. Right now I find it hard to disagree.

On to the Summary:

Favorite Securosis Posts

  • Adrian Lane: Run faster or you’ll catch privacy. Managing privacy in large firms is its own private hell. Hello, EU privacy laws!
  • Mike Rothman: Sorry for Security Rocking. LMFAO applied to security FTW. And evidently I slighted our contributor Gal, who believes he’s up to provide the definitive Security LMFAO version. Name that tune, brother!
  • Rich: The CISO’s Guide to Advanced Attackers. I am jealous I’m not writing this one.
  • David Mortman: Run faster or you’ll catch privacy.

Other Securosis Posts

  • Intel Buys Mashery, or Why You Need to Pay Attention to API Security.
  • On password hashing and how to respond to security flaws.
  • Safari enables per-site Java blocking.
  • Incite 4/17/2013: Tipping the balance between good and evil.
  • Why you still need security groups with host firewalls.
  • Is it murder if the victim is already dead?
  • Unused security intelligence is, well… dumb.

Favorite Outside Posts

  • Adrian Lane: Agilebits 1Password support and Design Flaw? Good discussion of the flaw and a good response from AgileBits. Now… patch, please!
  • Mike Rothman: Patton Oswalt on the Boston Marathon Attack. I linked to this in the Incite but it’s worth mentioning again. Great context about taking a long-term view, even when the wounds are fresh.
  • David Mortman: NIST: It’s Time To Abandon Control Frameworks As We Know Them.
  • Rich: EmergentChaos on the 1Password design flaw issue. Don’t just read the post – read the first comment. The guys at AgileBits show yet again why I trust them.

Research Papers

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.

Top News and Posts

  • ColdFusion hack used to steal hosting provider’s customer data. Wait, people still use Cold Fusion? (Rich – I used to totally rock CF, back in the day!)
  • Oracle Patches 42 Java Flaws.
  • House approves cybersecurity overhaul in bipartisan vote.
  • Cloudscaling licenses Juniper virty networking for new OpenStack distro.
  • Microsoft deploys 2-factor to all services.
  • Obama threatens to veto CISPA. Get your popcorn.
  • Update: DARPA Cyber Chief Peiter “Mudge” Zatko Heads To Google. Google does so many great security things, but their views on privacy kill their usefulness to me.

Blog Comment of the Week

This week’s best comment goes to fatbloke, in response


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.