Securosis

Research

Friday Summary: July 18, 2014, Rip Van Winkle edition

I have been talking about data centric security all week, so you might figure that’s what I will talk about in this week’s summary. Wrong. That’s because I’m having a Rip Van Winkle moment. I just got a snapshot of where we have been over the last six years, and I now see pretty clearly where we are going. It is because I have not done much coding over the last six years; now that I am playing around again I realize not just that everything has changed, but also why. It’s not just that every single tool I was comfortable with – code management, testing, IDE, bug tracking, etc. – has been chucked into the dustbin; it’s that most assumptions about how to work have been turned on their heads. Server uptime used to be the measure of reliability – now I regularly kill servers to ensure reliability. I used to worry that Java was too slow, so I would code C – now I use JRuby to speed things up. I used to slow down code releases so QA could complete test sweeps – now I speed up the dev cycle so testing can happen faster. I used to configure servers and applications after I launched them – now I do it beforehand. Developers should never push code to production – now developers push code to production as frequently as possible. Patching destabilizes production code – now we patch as fast as possible. We’ll fix it after we ship – now infrastructure and efficiency take precedence over features and functions. Detailed design specs gave way to task cards; design for success gave way to “fail faster” and constant refactoring. My friends are gone, my dog’s dead, and much of what I knew is no longer correct. Rip Van Winkle. It’s like that. Step away for a couple years and all your points of reference have changed – but it’s a wonderful thing! Every process control assumption has been trampled on – for good reason: those assumptions proved wrong. Things you relied on are totally irrelevant because they have been replaced by something better.
Moore’s Law predicts that compute power effectively doubles every two years while costs remain static. I think development is moving even faster. Ten years ago some firms I worked with released code once a year – now it’s 20 times a day. I know nothing all over again … and that’s great! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian and Mort talk Big Data with George V Hulme.
Mort quoted in Communicating at the speed of DevOps.

Favorite Securosis Posts

Mike Rothman: The Security Pro’s Guide to Cloud File Storage and Collaboration: Introduction. I’m looking forward to this series from Rich because there is a lot of noise and lots of competitors in the cloud-based storage game. Lots of hyperbole too, in terms of what an enterprise needs.
Adrian Lane: Firestarter: China and Career Advancement. Lots of people looking to get into security and lots looking to hire. But HR is an impediment, so both sides need to think up creative ways to find talent.

Other Securosis Posts

Trends in Data Centric Security: Deployment Models.
The Security Pro’s Guide to Cloud File Storage and Collaboration: Introduction.
Incite 7/16/2014: Surprises.
Are CISOs finally ‘real’ executives?
Firestarter: China and Career Advancement.
Leveraging TI in Incident Response/Management: Really Responding Faster.
It’s Just a Matter of Time.
Listen to Rich Talk, Win a … Ducati?
Summary: Boulder.

Favorite Outside Posts

Mike Rothman: Is better possible? Another great one by Godin. “If you accept the results you’ve gotten before, if you hold on to them tightly, then you never have to face the fear of the void, of losing what you’ve got, of trading in your success for your failure.” Yes, we all can get better. You just have to accept the fear that you’ll fail.
Gunnar: Apple and IBM Team Up to Push iOS in the Enterprise. My mobile security talk two years back was “From the iPhone in your pocket to the Mainframe” – now the best-in-class front ends meet the best-in-class back ends. Or what I call iBM. The IBM and Apple match was a bright strategy by Ginni Rometty and Tim Cook, but it might have been drafted by David Ricardo, who formalized comparative advantage: a trade where both sides gain.
Adrian Lane: Server Lifetime as SDLC Metric. And people say cloud is not that different … but isn’t it funny how many strongly held IT beliefs are exactly reversed in cloud services.
David Mortman: Oracle’s Data Redaction is Broken.

Research Reports and Presentations

Analysis of the 2014 Open Source Development and Application Security Survey.
Defending Against Network-based Distributed Denial of Service Attacks.
Reducing Attack Surface with Application Control.
Leveraging Threat Intelligence in Security Monitoring.
The Future of Security: The Trends and Technologies Transforming Security.
Security Analytics with Big Data.
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7.
Eliminate Surprises with Security Assurance and Testing.
What CISOs Need to Know about Cloud Computing.

Top News and Posts

Oracle fixes 113 security vulnerabilities, 20 just in Java.
Google’s Project Zero.
Specially Crafted Packet DoS Attacks, Here We Go Again.
SCOTUS’s new Rummaging Doctrine.

Blog Comment of the Week

This week’s best comment goes to Jeff, in response to Leveraging TI in Incident Response/Management. “Sorry if this goes a little bit off topic, but I believe this relates back to responding faster (and continuous security monitoring that Securosis has championed), but would like to get your thoughts on the best place/recommended infrastructure designs to terminate, decrypt, and inspect SSL traffic to/from a network so all relevant security tools – IPS/IDS, WAFs, proxies, security gateways, etc. – can inspect the traffic to ensure a complete picture of what’s entering/leaving the network to allow for quick/faster responses to threats. Thx, Jeff”


Trends in Data Centric Security: Deployment Models

So far we have talked about the need for data centric security, what that means, and which tools fit the model. Now it is time to paint a more specific picture of how to implement and deploy data centric security, so here are some concrete examples of how the tools are deployed to support a data centric model.

Gateways

A gateway is typically an appliance that sits inline with traffic and applies security as data passes. Data packets are inspected near line speed, and sensitive data is replaced or obfuscated before packets are passed on. Gateways are commonly used by enterprises before data is moved off-premise, such as up to the cloud or to another third-party service provider. The gateway sits inside the corporate firewall, at the ‘edge’ of the infrastructure, discovering and filtering out sensitive data. For example, some firms encrypt data before it is moved into cloud storage for backups. Others filter web-based transactions inline, replacing credit card data with tokens without disrupting the web server or commerce applications. Gateways offer high-performance substitution for data in motion, but they must be able to parse the data stream to encrypt, tokenize, or mask sensitive data. Another gateway deployment model puts appliances in front of “big data” (NoSQL databases such as Hadoop), replacing data before insertion into the cluster. But support for high “input velocity” is a key advantage of big data platforms, so to avoid crippling performance at the security bottleneck, gateways must perform data replacement while keeping up with the big data platform’s ingestion rate. It is not uncommon to see a cluster of appliances feeding a single NoSQL repository, or even hundreds of cloud servers spun up on demand, to mask or tokenize data. These services must secure data very quickly, so they do not provide deep analysis. Gateways may even need to be told the location of sensitive data within the stream to support substitution.
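The inline substitution a gateway performs can be sketched in a few lines of Python. This is a hedged illustration only, not any vendor’s implementation; the record layout, the field name, and the side vault are all assumptions for the example:

```python
import secrets

# Toy inline gateway: swap a sensitive field for a random token as records
# stream through, keeping a (token -> original) vault on the secure side.
vault = {}

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)   # random surrogate, no relation to the original
    vault[token] = value           # optional: retain for later lookup
    return token

def gateway(records, sensitive_field="card_number"):
    # Generator, so records flow through one at a time at stream speed.
    for record in records:
        cleaned = dict(record)
        cleaned[sensitive_field] = tokenize(cleaned[sensitive_field])
        yield cleaned

stream = [{"card_number": "4111111111111111", "amount": "19.99"}]
protected = list(gateway(stream))
```

Note that the field name is handed to the gateway here, mirroring the point above: a real appliance must either parse the wire format to discover sensitive data or be told where it sits in the stream.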
Hub and Spoke

ETL (Extract, Transform, and Load) has been around almost as long as relational databases. It describes a process for extracting data from one database, masking it to remove sensitive data, then loading the desensitized data into another database. Over the last several years we have seen a huge resurgence of ETL, as firms look to populate test databases with non-sensitive data that still provides a reliable testbed for quality assurance efforts. A masking or tokenization ‘hub’ orchestrates data movement and implements security. Modeled on test data management systems, modern systems alter health care data and PII (Personally Identifiable Information) to support use in multiple locations with inconsistent or inadequate security. The hub-and-spoke model is typically used to create multiple data sets, rather than to secure streams of data, and encryption and tokenization are the most common methods of protection. Encryption enables trusted users to decrypt the data as needed, while masking supports analytics without providing the real (sensitive) data. The graphic above shows ETL in its most basic form, but the old platforms have evolved into much more sophisticated data management systems. They can now discover data stored in files and databases, merge multiple sources to create new data sets, apply different masks for different audiences, and relocate the results – as files, as JSON streams, or even inserted into a data repository. It is a form of data orchestration, moving information automatically according to policy. Plummeting compute and storage costs have made it feasible to produce and propagate multiple data sets to various audiences.

Reverse Proxy

As with the gateways described above, in the reverse-proxy model an appliance – whether virtual or physical – is inserted inline into the data flow. But reverse proxies are used specifically between users and a database.
Offering much more than simple positional substitution, proxies can alter what they return to users based on the recipient and the specifics of their request. They work by intercepting and masking query results on the fly, transparently substituting masked results for the user. For example, if a user queries too many credit card numbers, or if a query originates from an unapproved location, the returned data might be redacted. The proxy effectively provides intelligent, dynamic data masking. The proxy may be an application running on the database platform, or an appliance deployed inline between users and data to force all communications through the proxy. The huge advantage of proxies is that they enable data protection without needing to alter the database — they avoid additional programming and quality assurance validation processes. This model is appropriate for PII/PHI data, when data can be managed from a central location but external users may need access. Some firms have implemented tokenization this way, but masking and redaction are more common. The principal use case is to protect data dynamically, based on user identity and the request itself.

Other Options

Many of you have used data centric security before, and use it today, so it is worth mentioning two security platforms in wide use today which don’t quite fit our use cases. Data Loss Prevention (DLP) and Digital Rights Management (DRM) are forms of DCS which have each been in use for over a decade. Data Loss Prevention systems are designed to detect sensitive data and ensure data usage complies with security policy – on the network, on the desktop, and in storage repositories. Digital Rights Management embeds ownership and usage rules into the data, with security policy (primarily read and write access) enforced by the applications that use the data. DLP protects at the infrastructure layer, and DRM at the application layer. Both use encryption to protect data.
Both allow users to view and edit data depending on security policies. DLP can be effectively deployed in existing IT environments, helping organizations gain control over data that is already in use. DRM typically needs to be built into applications, with security controls (e.g., encryption and ownership rights) applied to data as it is created. These platforms are designed to expose data (making it available


Trends in Data Centric Security: Tools

The three basic data centric security tools are tokenization, masking, and data element encryption. Now we will discuss what they are, how they work, and which security challenges they best address.

Tokenization: You can think of tokenization like a subway or arcade token: it has no cash value, but can be used to ride the train or play a game. In data centric security, a token is provided in lieu of sensitive data. The most common use case today is in credit card processing systems, as a substitute for credit card numbers. A token is basically just a random number – that’s it. The token can be made to look just like the original data type; in the case of credit cards the tokens are typically 16 digits long, usually preserve the last four original digits, and can even be generated such that they pass the Luhn validation check. But it is a random value, with no mathematical relationship to the original, and no value other than as a reference to the original in some other (more secure) database. Users may choose to maintain a “token database” which associates the original value with the token, in case they need to look up the original at some point in the future, but this is optional. Tokenization has advanced far beyond simple value replacement, and is lately being applied to more advanced data types. These days tokens are not just for simple things like credit cards and Social Security numbers, but also for JSON and XML files and web pages. Some tokenization solutions replace data stored within databases, while others work on data streams – such as replacing unique cell IDs embedded in cellphone tower data streams. This enables both simple and complex data to be tokenized, at rest or in motion – and tokens can look like anything you want. Very versatile and very secure – you can’t steal what’s not there! Tokenization ensures security by completely removing the original sensitive values from the secured data.
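To make the Luhn point concrete, here is a minimal sketch of generating a random 16-digit token that preserves the last four digits and passes the Luhn check. The retry-until-valid approach is a naive illustration, not any particular product’s algorithm:

```python
import secrets

def luhn_checksum(number: str) -> int:
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10  # 0 means the number passes the Luhn check

def make_token(pan: str) -> str:
    # Keep the real last four digits, randomize the first twelve, and retry
    # until the whole 16-digit token validates (about 10 tries on average).
    while True:
        token = "".join(secrets.choice("0123456789") for _ in range(12)) + pan[-4:]
        if luhn_checksum(token) == 0:
            return token

token = make_token("4111111111111111")
```

The resulting token slots into any field that validates card numbers, yet carries no mathematical relationship to the original value.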
Random values cannot be reverse engineered back to the original data. For example, given a database where the primary key is a Social Security number, tokenization can generate unique random tokens which fit in the receiving database. Some firms merely use the token as a placeholder and don’t need the original value – in fact some discard (or never receive) it. They use tokens simply because downstream applications might break without an SSN or a compatible surrogate. Users who occasionally need to reference the original values use token vaults or equivalent technologies. Vaults are designed to allow only credentialed administrators access to the original sensitive values under controlled conditions, but a vault compromise would expose all the original values. Vaults are commonly used for PHI and financial data, as mentioned in the last post.

Masking: This is another very popular tool for protecting data elements while retaining the aggregate value of a data set. For example, we might substitute an individual’s Social Security number with a random number (as in tokenization), or replace their name with one randomly selected from a phone book, but retain gender. We might replace a date of birth with a random value within X days of the original to effectively preserve age. This way the original (sensitive) value is removed entirely without randomizing the value of the aggregate data set, which supports later analysis. Masking is the principal method of creating useful new values without exposing the original. It is ideally suited for creating data sets which can be used for meaningful analysis without exposing the original data. This is important when you lack sufficient resources to secure every system within your enterprise, or do not fully trust the environment where the data is stored. Different masks can be applied to the same data fields, to produce different masked data for different use cases.
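A minimal masking pass over a record might look like the following sketch. The field names and the surrogate name list are hypothetical, and a production mask would be driven by policy rather than hard-coded:

```python
import random
from datetime import date, timedelta

SURROGATE_NAMES = ["Alex Smith", "Sam Jones", "Pat Brown"]  # stand-in "phone book"

def mask_record(record, days_jitter=30, rng=None):
    rng = rng or random.Random()
    masked = dict(record)
    masked["name"] = rng.choice(SURROGATE_NAMES)  # random surrogate name
    masked["ssn"] = "".join(rng.choice("0123456789") for _ in range(9))  # random SSN
    # Shift date of birth within +/- days_jitter days: age survives for
    # analysis, but the exact birth date does not.
    masked["dob"] = record["dob"] + timedelta(days=rng.randint(-days_jitter, days_jitter))
    return masked  # gender, zip, etc. pass through untouched

patient = {"name": "Jane Doe", "ssn": "123456789",
           "dob": date(1970, 5, 17), "gender": "F", "zip": "85004"}
masked = mask_record(patient, rng=random.Random(7))
```

The untouched fields (gender, zip) are exactly what makes the masked set useful for analysis, and also, as discussed below, what leaves a residual re-identification risk.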
This flexibility exposes much of the value of the original data with minimal risk. Masking is very commonly used with PHI, test data management, and NoSQL analytics databases. That said, there are potential downsides as well. Masking does not offer quite as strong security as tokenization or encryption (which we will discuss below). The masked data does in fact bear some relationship to the original – while individual fields are anonymized to some degree, preservation of specific attributes of a person’s health record (age, gender, zip code, race, DoB, etc.) may provide more than enough information to reverse engineer the masked data back to the original. Masking can be very secure, but that requires selecting good masking tools and applying a well-reasoned mask to achieve security goals while supporting the desired analytics.

Element/Data Field Encryption / Format Preserving Encryption (FPE): Encryption is the go-to security tool for the majority of IT and data security challenges we face today. Properly implemented, encryption produces obfuscated data that cannot be reversed into the original value without the encryption key. What’s more, encryption can be applied to any type of data, such as first and last names, or to entire data structures such as a file or database table. And encryption keys can be provided to select users, keeping data secret from those not entrusted with keys. But not all encryption solutions are suitable for a data centric security model. Most forms of encryption transform human-readable data into binary format. This is a problem for applications which expect text strings, or databases which require properly formatted Social Security numbers. These binary values create unwanted side effects and often cause applications to crash. So most companies considering data centric security need an encryption cipher that preserves at least format, and often data type as well.
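To illustrate what “format preserving” means, here is a toy digit-preserving cipher: it derives a keyed digit stream and adds it modulo 10, so a 9-digit SSN encrypts to another 9-digit string and decrypts with the same key. This is strictly an illustration of the format property; it is not FF1/FF3 or any standardized FPE mode, and reusing a tweak across records would make it insecure:

```python
import hashlib
import hmac

def _digit_stream(key: bytes, tweak: bytes, n: int):
    # Derive a pseudorandom digit stream from the key and a per-record tweak.
    # (The b % 10 extraction is slightly biased; fine for illustration.)
    digits, counter = [], 0
    while len(digits) < n:
        block = hmac.new(key, tweak + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        digits.extend(b % 10 for b in block)
        counter += 1
    return digits[:n]

def encrypt_digits(key: bytes, tweak: bytes, plaintext: str) -> str:
    ks = _digit_stream(key, tweak, len(plaintext))
    return "".join(str((int(c) + k) % 10) for c, k in zip(plaintext, ks))

def decrypt_digits(key: bytes, tweak: bytes, ciphertext: str) -> str:
    ks = _digit_stream(key, tweak, len(ciphertext))
    return "".join(str((int(c) - k) % 10) for c, k in zip(ciphertext, ks))

key, tweak = b"demo-key", b"record-42"
ct = encrypt_digits(key, tweak, "123456789")
```

The point of the sketch: the ciphertext is still nine digits, so a schema or application expecting an SSN-shaped value keeps working, unlike raw binary ciphertext. Production systems use vetted constructions (e.g., NIST-approved FPE modes) rather than anything like this.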
Typically these algorithms are applied to specific data fields (e.g., name, Social Security number, or credit card number), and can be used on data at rest or applied to data streams as information moves from one place to the next. These encryption variants are commercially available, and provide


Open Source Development and Application Security Survey Analysis [New Paper]

We love data – especially when it tells us what people are doing about security. Which is why we were thrilled at the opportunity to provide a – dare I say open? – analysis of the 2014 Open Source Development and Application Security survey. And today we launch the complete research paper with our analysis of the results. Here are a couple highlights: Yes, after a widely-reported major vulnerability in an open source component used in millions of systems around the globe, confidence in open source security did not suffer. In fact, it ticked up. Ironic? Amazing? I was surprised and impressed. … and … 54% answered “Yes, we are concerned with open source vulnerabilities,” but roughly the same percentage of organizations do not have a policy governing open source vulnerabilities. We think this type of survey helps shed important light on how development teams perceive security issues and how they are addressing them. You can find the official survey results at http://www.sonatype.com/about/2014-open-source-software-development-survey. And our research paper is available for download, free as always: 2014 Open Source Development and Application Security Survey Analysis. Finally, we would like to thank Sonatype, both for giving us access to the survey results and for choosing to license this research work to accompany their survey results! Without their interest and support for our work, we would not be able to provide you with research such as this.


Friday Summary: Legal wrangling edition

This week’s intro has nothing to do with security – just a warning in case that matters to you. I’m betting most people spent their spare time this week watching the World Cup. Or perhaps “sick time”, given the apparent national epidemic that suddenly cleared up by Friday. I am not really a ‘football’ fan, but there were some amazing matches, and I remain baffled at how a player thought he could get away with biting another player during a match. And then flop and cry that he hurt his mouth! Speechless! But being perverse, I spent most of my spare time this week following a couple court cases. Yes, legal battles. I’m weird that way. The most interesting was O’Bannon v. NCAA, up in Oakland, California. I am following it because this case has strong potential to completely change college athletics. If you haven’t been paying attention, the essence is that players cannot make money from marketing their own images, but colleges can. For example, a player might be ‘virtualized’ in an EA video game, and the college paid $10M, but the player cannot receive any financial compensation. The NCAA has drawn a line in the sand, stating that players must receive less than the actual federal rate for the cost of college attendance. But what gets me is that the NCAA president believes that if a player is in a photo with a product, and receives money from the company, then s/he is being exploited. If s/he is in the same photo, and does not receive money, then s/he is not being exploited. Their uniforms can have corporate logos, and that company can pay the coach to make players advertise its products. The players can be forced to appear in front of banners with corporate logos, and even be forced to drink water from bottles with corporate logos, but none of that would be exploitation! Not on the NCAA’s watch. Thank goodness the president of the NCAA is there to protect students from these corporate pirates! Here’s a $1.6 million salary for your virtuous hard work, Mark!
I joked with a friend recently that I honestly don’t know how we played college football in the 50s, 60s, and 70s without the millions and millions of dollars now being funneled into these programs. How could we have possibly played the game without all this money? I had not seen a game in years, and attended a local college game last fall; I was horrified that one team’s logo and image were completely subsumed by the local corporate sponsors – notably a local Indian casino. Appalled. The casino’s logo was displayed after each touchdown. The audience just clapped as the sponsoring casino paid for fireworks, and who doesn’t love fireworks? As a previous president stated about the NCAA, ‘amateurism’ plays to the participants, not the enterprise. At Texas the football program pays for the entire athletic department, including $5.3M for the head football coach, and still hands back $9M a year to the school. I’m told the University of Alabama grossed well over $100M in one year from its football program’s various revenue sources. Serious. Freaking. Money. From the various reports I am reading, it does not look good for the NCAA. I am not a betting man, but if pushed I would wager on the plaintiffs’ side. And at some time in the future, after the appeals, suddenly the students who support this multi-billion dollar industry will get a big piece of the pie. I was rooting for Aereo. Really rooting for Aereo, but they lost their case against the broadcasters. Shot down by the Supreme Court verdict earlier this week. And honestly it’s hard to fault the verdict – give it a read. This is a huge win for broadcasters and cable carriers, and a serious loss for viewers. When it comes down to it, Aereo was re-broadcasting others’ content and making a profit from it. We at Securosis are not keen on it when content aggregation sites routinely bundle our posts and sell advertising around them, either.
Still, why the hell can’t the broadcasters make this work and provide the content in a way users want? The broadcasting rules and contracts really need to change to allow some innovation, or viewers will ultimately go somewhere else to get what they want. As a consumer I am miffed that something provided over the air, for free, can’t be sent to me if I want to watch it (if you have ever lived just out of sight of a broadcast tower, where you got crappy reception, you know exactly what I am talking about). Or put it on your DVR. Or whatever private use you want to make of it – the customers you broadcast to might actually want to watch the content at some convenient place and time. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian’s Webcast on the Open Source Development and Application Security Survey.

Favorite Securosis Posts

Adrian Lane: Open Source Development Analysis: Development Trends.
Mike Rothman: Knucklehead-Employee.com. Yeah, it’s mine. But it’s still too damn funny. And I got to bust out the memegenerator. So it’s a win all around.

Other Securosis Posts

Incite 6/25/2014: June Daze.
Trends in Data Centric Security [New Series].
Open Source Development Analysis: Application Security.
Firestarter: Apple and Privacy.

Favorite Outside Posts

Adrian Lane: BoringSSL. This is not the introduction of BoringSSL, but the authors took a no-BS, tired-of-waiting-for-politics-to-get-this-crap-fixed approach, without calling out OpenSSL. Bravo.
Dave Lewis: The Akamai State of the Internet Report.
James Arlen: Deloitte’s Global Defense Outlook 2014.
Mike Rothman: Asymmetry of People’s Time in Security Incidents. Lenny Z does a good job of explaining why poor incident handling/management can make it much more expensive to clean up an attack than it is for the attacker. Be prepared, and change the economics. Unfortunately automated attacks now offer so much leverage that you probably cannot achieve parity. But don’t exacerbate the situation.
Research Reports and Presentations Defending Against Network-based Distributed Denial of


Trends in Data Centric Security [New Series]

It’s all about the data. The need of many different audiences to derive value from data is driving several disruptive trends in IT. The question that naturally follows is “How do you maintain control over data regardless of where it moves?” If you want to make data useful, by using it in as many places as you can, but you cannot guarantee those places are secure, what can you do? Today we launch a new series on Data Centric Security. We are responding to customer inquiries about what to do when moving data to locations they do not completely trust. The majority of these inquiries are motivated by “big data” usage as firms move data into NoSQL clusters. The gist is that we don’t know how to secure these environments, we don’t really trust them, and we don’t want a repeat of data leakage or compliance violations. Here at Securosis we have blogged about NoSQL security for some time, but the specifics of customer interest came as a surprise. They were not asking “How do I secure Hadoop?” but instead “How do I protect data?” with specific interest in tokenization and masking. An increasing number of firms are asking about data security for cloud environments and HIPAA compliance – again, more focused on data rather than system controls. This is what Data Centric Security (DCS) does: embed security controls into the data, rather than into applications or supporting infrastructure. The challenge is to implement security controls that do not render the data inert. Put another way, they want to derive value from data without leaving it exposed. Sure, we can encrypt everything, but you cannot analyze encrypted data. To decrypt within the environment means distributing keys and encryption capabilities, implementing identity management, and ensuring the compute platform itself is trustworthy. And that last is impossible when we cannot guarantee the security of the platform. Data Centric Security provides security even when the systems processing data cannot be fully trusted.
We can both propagate and use data to derive business value while still maintaining a degree of privacy and security. Sounds like a fantasy, but it’s real. Of course there are challenges, which I will detail later in this series. For now, understand that you need to actively select the right security measure for each specific use case. This makes data centric security a form of data management, and requires you to apply security policies, transform the data, and orchestrate distribution. This is not intended to be an exhaustive research effort, but an executive summary of data centric security approaches for a couple emerging use cases. This series will cover:

Use Cases: I will outline the top three use cases driving inquiries into data centric security, and the specific challenges they present.
Data Centric Technologies: We will examine a handful of technologies that support data centric security. We will explore tokenization, masking, and data element/format preserving encryption, as well as some other tricks.
Data Centric Security Integration: We will discuss how to incorporate DCS into data management operations and deploy these technologies. This is a combination of tools and process, but where you begin your journey affects what you need to do.

Our next post will cover DCS use cases.


Open Source Development Analysis: Development Trends

For the final installment of our analysis of the 2014 Open Source Development and Application Security Survey, we will focus on open source development trends. Our topic is less security per se, and more how developers use open source, how it is managed, and how it is perceived in the enterprise.

Are open source components more trustworthy than commercial software?

An unambiguous question in the survey asked, “Do you believe software assembled with open source is as secure as commercial off-the-shelf (COTS)?” Under 9% said that software assembled with open source is less secure, with over 35% stating they believe open source is more secure than COTS. Even more interesting: among participants who responded before Heartbleed, 34.83% believed applications assembled using open source components were more secure than COTS. After Heartbleed: 36.06%. Yes, after a major vulnerability in an open source component used in millions of systems around the globe, confidence in open source security did not suffer. In fact it ticked up a point. Ironic? Amazing? All I can say is I am surprised. What people believe is not necessarily fact, and we can’t really perform a quantitative head-to-head comparison between applications assembled with open source components and COTS security to verify this belief. But the survey respondents deal with open source and commercial software on a daily basis – they are qualified to offer a professional opinion. The net result is that for every person who felt COTS was more secure, four felt that open source was more secure. In any sort of popular vote that qualifies as a landslide.

Banning components

“Has your company ever banned the use of an open source component, library or project?” The majority of respondents, some 78%, said “No”. Still, I have singled this question out as a development practice issue – something I hear organizations talk about more and more. Software organizations ban components for a number of reasons.
Licensing terms might be egregious. Or they might simply no longer trust a component’s reliability or security. For example, virtually all released Struts components have severe security exploits, described by critical CVE warnings. Poorly written code has reliability and security issues – the two tend to go hand in hand. You can verify this by looking at bug tracking reports: issues clump together around one or two problematic pieces of software. Banning a module is often politically messy because it can be difficult to find or build a suitable replacement, but it is an effective, focused way to improve security and reliability. Post-Snowden we have seen increased discussion around trust, and whether or not to use certain libraries because of potential subversion by the NSA. This is more a risk perception issue than a tangible one such as licensing, but nonetheless a topic of discussion. Regardless of your motivation, banning modules is an option to consider for critical – or suspect – elements of your stack.

Open source policies

Open source policies were a major focus area for the survey, and the question “Does your company have an open source policy?” was the lead-in for several policy related questions. 47% of respondents said they have a policy. When asked, “What are the top three challenges with your open source policy?” the top three responses were: it does not deal with security vulnerabilities (39%), there is little enforcement so workarounds are common (41%), and what is expected is not clear (35%). This raises the question: what goes into an open source policy? The answer dovetails nicely with an earlier survey question: “When selecting components, what characteristics would be most helpful to you?” The characteristics you select on are what a policy codifies. Most companies have a licensing component to their policies, meaning which types of open source licenses are permitted.
And most specify versioning and quality controls, such as no beta software. More often than not we also see policies around security – for example, that components with critical vulnerabilities must be patched or avoided altogether. Beyond those items the contents of open source policies are wide open, and they vary widely in how prescriptive they are – meaning how tightly they define ‘how’ and ‘what’.

“Who in your organization is primarily responsible for open source policy / governance?” While the bulk of responsibility fell on development managers (34%) and IT architects (24%), much of it landed outside development. Legal, risk, and executive teams are unlikely to craft policies which development can implement easily, so development needs to either take ownership of policies or work with outside groups to define feasible goals and the easiest route to them.

We could spend many pages on policies, but the underlying issue is simple: policies are supposed to make your life easier. If they don’t, you need to work on the policies. Yes, I know those of you who deal with regulatory compliance in your daily jobs scoff at this, but it’s true. Policies are supposed to help you avoid large problems or failures down the road which cost serious time and resources to fix. Here is the simple dividing line: policies written without regard for how they will be implemented, or without a clear path to making open source use easier and better, are likely to be bypassed. Just like development processes, policies take work to optimize.

Once again, you can find the final results of the survey here.
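The kinds of checks discussed above – banned components, license allowlists, and version or quality rules – lend themselves to automation in a build pipeline. Here is a minimal sketch of what such a policy gate might look like; the component record format, the banned list, and the license allowlist are all invented for illustration, not taken from any real policy:

```python
# Illustrative open source policy gate. The record format, banned list,
# and allowlist below are hypothetical -- a real gate would pull
# dependency metadata from the build tool and policy data from wherever
# the organization maintains it.

ALLOWED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause", "EPL-1.0"}
PRERELEASE_MARKERS = ("alpha", "beta", "rc", "snapshot")
BANNED_COMPONENTS = {
    # (group, artifact): reason for the ban -- illustrative entries only
    ("org.apache.struts", "struts2-core"): "critical CVEs across released versions",
}

def policy_violations(component):
    """component: dict with 'group', 'artifact', 'version', 'license' keys."""
    name = f"{component['group']}:{component['artifact']}:{component['version']}"
    issues = []
    reason = BANNED_COMPONENTS.get((component["group"], component["artifact"]))
    if reason:
        issues.append(f"{name}: banned component ({reason})")
    if component["license"] not in ALLOWED_LICENSES:
        issues.append(f"{name}: license '{component['license']}' not on allowlist")
    version = component["version"].lower()
    if any(marker in version for marker in PRERELEASE_MARKERS):
        issues.append(f"{name}: pre-release version not permitted")
    return issues

# Example run against a small, made-up dependency list.
dependencies = [
    {"group": "org.apache.struts", "artifact": "struts2-core",
     "version": "2.3.16", "license": "Apache-2.0"},
    {"group": "com.example", "artifact": "libfoo",
     "version": "1.0-beta2", "license": "GPL-3.0"},
    {"group": "com.google.guava", "artifact": "guava",
     "version": "17.0", "license": "Apache-2.0"},
]
for dep in dependencies:
    for issue in policy_violations(dep):
        print(issue)
```

A gate like this is only as good as the data feeding it, which is why the enforcement and vulnerability-handling complaints in the survey matter: wiring CVE and repository metadata into the check is the hard part, not the check itself.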


Open Source Development Analysis: Application Security

Continuing our analysis of the 2014 Open Source Development and Application Security Survey, we can now discuss results, as the final version has just been released. Today’s post focuses on the application security facets of the data. Several questions in the survey focused on security practices within open source development, including vulnerability tracking and who is responsible for security. I will dive into the results in detail, sharing my perspective on where things are getting better, which results surprised me, and where I believe improvements and attention are still needed. Here we go…

Who’s talking?

When analyzing a survey I always start with this question. It frames many of the survey’s answers, and understanding who is responding helps illuminate the perspective expressed on the issues and challenges discussed. When asked “What is your role in the organization?”, the respondents were largely developers, at 42.78% of those surveyed. Considering that most architects, DevOps types, and build managers perform some development tasks, it is safe to say that over 50% of respondents have their hands on open source components and projects. A full 79% (including development managers) are in a position to understand the nuances of open source development, judge security, and reflect on policy issues.

Is open source important?

The short answer is “Hell yes, it’s important!” The (Maven) Central Repository – the largest source of open source components for developers – handled thirteen billion download requests last year. That’s more than a billion – with a ‘B’ – every month. This statistic gives you some idea of the scale of open source component use in assembling software applications today. What’s more, the Sonatype data shows open source component usage on the rise: growing 62% in 2013 over 2012, and more than doubling since 2011.
Responses to “What percentage of a typical application in your organization is comprised of open source components?” showed that at least 75% of organizations rely on open source components in their development practices. While ‘0-20%’ was an option, I am willing to bet few were really at ‘zero’, because those people would be highly unlikely to participate in this survey. So I believe the number with some involvement (including 1-20%) is closer to 100%. The survey also looked at use of open source components across verticals, capturing responses from most major industries including banks, insurance, technology/ISV, and government. Open source component usage is not relegated to a few target industries – it is widespread. The survey also asked “How many developers are in your organization?”, to which almost 500 participants answered 1,000 or more. Small firms don’t have 1,000 developers, so at least 15% of responses came from large enterprises. That is a strong showing, given that only a few years ago large enterprises did not trust open source and generally refused to officially endorse its use on corporate systems. And with nearly 700 responses from organizations with 26-100 developers, the survey reflects a good balance of organizational sizes. Adoption continues to climb because open source has proven its worth – in terms of both quality and getting software built more quickly when you don’t try to build everything from scratch. More software than ever leverages contributions from the open source community, and widespread adoption makes open source software incredibly important.

Are developers worried about security?

Questions around software security were a theme of this year’s survey, which is why the name changed from years past to the “Open Source Development and Application Security Survey”.
A central question was “Are open source vulnerabilities a top concern in your position?”, to which 54.16% answered “Yes, we are concerned with open source vulnerabilities.” Concern among more than half of respondents is a good sign – security is seldom part of a product design specification, and has only recently become part of the design and testing phases of development. Viewed another way, 10 years ago that number was about zero, so this is a dramatic change in awareness. Outside development, security practitioners get annoyed that only about 50% responded “Yes” to this question. They zealously believe that when it comes to software development, everyone from the most senior software architect to the new guy in IT needs to treat security practices as a priority. As we have seen in breaches over the last decade, failure only takes one weak link. Lending support to the argument that software development has a long way to go on security, 47.29% of respondents said “Developers know it (Security) is important, but they don’t have time to spend on it.” The sentiment “I’m interested in security and the organization is not” is very common across development organizations. Most developers know security is an open issue, but fixing it typically does not make its way up the list of priorities while there are important features to build – at least not until there is a problem. Developers’ growing interest in security practices is a good sign; allocation of resources and prioritization remains an issue.

What are they doing about it?

This year’s results offer a mixed impression of what development organizations are actually doing about security.
For example, one set of responses showed that developers (40.63%) are responsible for “tracking and resolving newly discovered vulnerabilities in components included in their production applications.” From a developer’s perspective this result looks legitimate, and the 2014 Verizon Data Breach Investigations Report makes clear that the application stack is where the main security issues are being exploited. But application security buying behavior does not jibe with patterns across the rest of the security industry. Understanding that the survey participants were mostly developers with an open source perspective, this number is still surprising, because the vast majority of security expenditures go to network and endpoint security devices. Security, including application security, is generally bolted on rather than fixed from within. Jeremiah Grossman, Gunnar Peterson, and others have all discussed the ineffectiveness of gearing security toward the network rather than applications. And the WhiteHat Website Security Statistics report shows a long-term cost benefit from fixing problems within applications, but what we


2014 Open Source Development Webcast this Wednesday

A quick reminder: Brian Fox and I will be doing a webcast this Wednesday (June 18th) on the results of the 2014 Open Source Development and Application Security Survey. We have decided to divide the survey into a half dozen or so focus areas and discuss the results of each. We have different backgrounds in software development, so we feel an open discussion is the best way to offer perspective on the results. Brian has been a developer and worked with the open source community for well over a decade, and I have worked with open source since the late ’90s and managed secure code development for about as long. The downside is that we were both created with the verbose option enabled, but we will be sure to leave time for comments at the end. Register for the webcast to listen in live. Talk to you Wednesday!


Friday Summary: June 13, 2014

As Rich said in last week’s Summary, the blog will be quiet this summer because we are busier than we have ever been before. The good news is that new research and Securosis offerings are usually the result. But that does not stop us from feeling guilty about our lack of blogging. With that, I leave you with a couple thoughts from my world this week, on a Friday the 13th:

Picture an older Formula 1 car. Blindingly fast. The pinnacle of design in its day. Maybe Stirling Moss or Ayrton Senna drove it to victory. It’s still beautiful and fast, but you can’t race it today because it’s not competitive. It can’t be. In some cases the rules of F1 change so great technologies can no longer be used (e.g., ground effects). In other cases the technologies are no longer state-of-the-art. You cannot – and should not – retrofit an old chassis. So what do you do with the car? It seems a shame to relegate an F1 car to the dustbin, but you can’t compete with it any longer. As part of our day jobs we get asked to review products and make suggestions on the viability of platforms going forward. Sometimes product managers want us to vet their roadmaps; sometimes we are asked to support a due diligence effort. Whatever the case, we occasionally find a great old product that simply cannot compete or be retrofitted to be competitive. Don’t get emotionally attached because it was the S#!$ in its day, and best not to think too much about sunk costs – just go back to the drawing board. A fresh start is your only answer.

There are many alternatives to passwords. Some outfits, like nospronos.com, have recycled an old idea to do away with user passwords on their World Cup web site. Users each get a ‘secret’ URL to their own page, and a public URL to share with friends. Sound familiar? Public key crypto is the gist here: the user gets a private account, the web site does not need to store or manage user passwords, and users can still share content with friends. Good idea?
In the short term it’s great, because by the time users lose or leak their secret URLs the World Cup will be over. It’s fragile, but likely just secure enough to work just long enough. Well, it would have been, if they had only remembered to issue HTTPS URLs. Oh well… sans passwords and sans privacy means sans security. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mortman quoted in “The 7 skills Ops pros need to succeed with DevOps”.
  • Adrian quoted on the Gazzang acquisition by Cloudera.
  • Adrian to present on the Open Source and Application Security Survey next week.

Favorite Securosis Posts

  • David Mortman: Open Source Development Analysis. Actually – we all selected the Open Source Development Analysis this week. Wonder why?

Other Securosis Posts

  • Take our IT practices survey and win cool stuff (and free data).
  • Incite 6/11/2014: Dizney.
  • Summary: Summer.
  • Cloudera acquires Gazzang.

Favorite Outside Posts

  • Rich: After Heartbleed, We’re Overreacting to Bugs That Aren’t a Big Deal. Our job isn’t to fix everything, but to manage risk and fix things in the right order.
  • Mike Rothman: The CSO’s Failure to Lead. Is the inability to execute on a security program the fault of the CISO? Maybe. Maybe not. Causation is not correlation.
  • Dave Lewis: ISS’s View on Target Directors Is a Signal on Cybersecurity. If you are keeping score at home, we have a number of firsts: CIO dismissal, credit rating downgrade, CEO dismissal, and boardroom shakeup. That is a lot of firsts – this is a Sputnik moment for security.
  • David Mortman: Here’s How You Pick the “Unpickable” Bike Lock.
  • Adrian Lane: 10 things: #1 Parameterize Database Queries. Jim Bird is going through the OWASP Top 10, explaining the whys and hows of protecting against these issues. It’s a series, and this is the first post.

Research Reports and Presentations

  • Defending Against Network-based Distributed Denial of Service Attacks.
  • Reducing Attack Surface with Application Control.
  • Leveraging Threat Intelligence in Security Monitoring.
  • The Future of Security: The Trends and Technologies Transforming Security.
  • Security Analytics with Big Data.
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7.
  • Eliminate Surprises with Security Assurance and Testing.
  • What CISOs Need to Know about Cloud Computing.
  • Defending Against Application Denial of Service Attacks.

Top News and Posts

  • GameOver Zeus botnet disrupted by FBI, Microsoft.
  • 14-Year Olds Hack ATM With Default Password.
  • More Fun with EMV.
  • Adobe, Microsoft Push Critical Security Fixes.
  • iOS 8 to stymie trackers and marketers with MAC address randomization.
  • ‘NSA-proof’ Protonet server crowdfunds $1m in under 90 minutes.
  • SSL/TLS MITM vulnerability CVE-2014-0224.
  • TweetDeck wasn’t actually hacked, and everyone was silly.
  • Can You Track Me Now?

Blog Comment of the Week

This week’s best comment goes to Marco Tietz, in response to the Open Source Development and Application Security Analysis:

That is pretty cool, looking forward to your coverage. As I’m working with Sonatype right now on a couple of things, I can confirm that they know what they are doing and that they are in fact in a pretty unique position to provide insights into open source usage and possible security implications.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.