Securosis

Research

Firestarter: G Who Shall Not Be Named

As they fight to keep the Firestarter running through Google outages, vacations, and client travel, our dynamic trio returns once again. This week they discuss some of the latest news from a particular conference held out in Washington DC last week, which Mike stopped by (well, the lobby bar) and Rich used to help run. The audio-only version is up too.

Share:
Read Post

Updating the Endpoint Security Buyer’s Guide: Mobile Endpoint Security Management

In a rather uncommon occurrence, we are updating one of our papers within a year of publication. As shown by our recent deep dive into Advanced Endpoint and Server Protection, endpoint security is evolving pretty quickly. As mentioned in the latest version of our Endpoint Security Buyer’s Guide, mobile devices are just additional endpoints that need to be managed like any other device. But it has become clear that we need to dig a bit deeper into securing mobile endpoints, so we will. But the change requires a bit of context. We have said for years that management is the first problem users solve when introducing a new technology. Security comes only after management issues are under control. That has certainly been true of mobile devices, as evidenced by the rapid growth, maturity, and consolidation of Mobile Device Management (MDM) technologies. But you cannot really separate management from protection in the mobile endpoint context, as demonstrated by the fact that security features appeared very early among MDM offerings. Mobile devices are inherently better protected from malware attacks due to more modern mobile operating system architectures; so hygiene – including patching, configuration, and determining which applications can run on devices – becomes their key security requirement. This means there is leverage to gain by integrating mobile devices into the device management stack (where applicable) to enforce consistent policy regardless of device, ownership (for BYOD), or location. This has driven significant consolidation of mobile management companies into broader IT management players. In this update of the Endpoint Security Buyer’s Guide we will dig into mobile endpoint security management, defining more specifically what needs to be managed and protected. But most of all, we will focus on the leverage to be gained by managing these capabilities as part of your endpoint security management strategy. 
Defining Endpoints

One of the key points we made early in the Endpoint Security Buyer’s Guide is that the definition of endpoint needs to be more inclusive. From a security standpoint, if a device can run applications, access corporate data stores, and store corporate data locally, it is an endpoint and needs to be managed and protected. Smartphones and tablets clearly fit this bill, along with traditional PCs. Organizationally, management of all these devices may not fall within a single operations group. That company-specific decision reflects business realities, particularly at large-scale enterprises with thousands of employees and huge IT shops which can afford specialist teams by device. In many smaller companies (the mid-market) we see these operational functions consolidated. But who does the work is less important than what is done to protect mobile endpoints – consistently and efficiently.

Managing Endpoint Device Security

Hygiene tends to be the main focus for managing mobile endpoint security, so here is a list of what that means in the mobile endpoint context:

Enrollment: New devices show up, so registering each device and assigning it proper entitlements begins the process. This is typically handled via a self-service capability so users can register their devices and accept the organization’s policies (especially for employee-owned devices) without waiting for help desk intervention. Of course you cannot assume everyone gaining access will register their devices (especially attackers), so you will want some kind of passive discovery capability to identify unmanaged devices as well.

Asset management: Next after enrollment comes the need to understand and track device configuration and security posture, which is really an asset management function. There may be other similar capabilities in use within the organization (such as a CMDB), in which case integration and interoperability with those systems is a requirement.

OS configuration: Configuration of mobile endpoints should be based on policies defined by groups and roles within the organization. These policies typically control many device aspects – including password strength, geolocation, activation lock, and device encryption. OS vendors offer robust and mature APIs to enable this capability, so most platforms offer similar capabilities. Technology selection largely comes down to the leverage of managing policies within a consistent user experience across all devices.

Patching: Software updates are critical to device security, so ensuring that mobile endpoints are patched in a timely fashion is another key aspect of mobile endpoint security. For mobile devices you will want to be sure you can update devices over the air, as they are often beyond reach of the corporate network, connecting to it only infrequently.

Connectivity: An organization may want to actively control which networks devices use, especially because many public WiFi hotspots are simply insecure. So you will want the ability to specify and enforce policies for which networks devices can use, whether connections require a VPN to backhaul traffic through a central gateway, and whether to use a mobile VPN service to minimize the risk of man-in-the-middle and side-jacking attacks and snooping.

Identity/group roles and policies: This capability involves integrating the mobile endpoint security management policy engine with Active Directory or another authoritative identity store. This leverages existing users and groups – managed elsewhere in the organization – to set MDM policies.

As you build your mobile endpoint security management strategy, keep in mind that different operating systems offer different hooks and management capabilities. Mature PC operating systems offer one level of management maturity; mobile operating systems are maturing rapidly but don’t offer as much.
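The group-based policy model described above can be made concrete with a small sketch. This is purely illustrative – the group names, policy fields, and merge rules are hypothetical, not any particular MDM product's API – but it shows the core idea: pull a user's groups from the identity store, then merge group policies over a baseline so the strictest setting always wins.

```python
# Hypothetical sketch of group-driven MDM policy resolution.
# BASELINE and GROUP_POLICIES are illustrative stand-ins for policies
# that would normally live in the MDM platform and identity store.

BASELINE = {"min_passcode_len": 6, "encryption": True, "activation_lock": False}

GROUP_POLICIES = {
    "Finance":    {"min_passcode_len": 8, "activation_lock": True},
    "Executives": {"min_passcode_len": 10},
}

def effective_policy(groups):
    """Merge group policies over the baseline; the strictest value wins."""
    policy = dict(BASELINE)
    for group in groups:
        for key, value in GROUP_POLICIES.get(group, {}).items():
            if isinstance(value, bool):
                policy[key] = policy[key] or value      # any group requiring it wins
            else:
                policy[key] = max(policy[key], value)   # strictest numeric setting wins
    return policy

print(effective_policy(["Finance", "Executives"]))
# {'min_passcode_len': 10, 'encryption': True, 'activation_lock': True}
```

A user in both groups gets the 10-character passcode requirement from Executives and the activation lock requirement from Finance – the "consistent policy regardless of device or ownership" idea, expressed as a merge.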
So to provide a consistent experience and protection across devices you might need to reduce protection to the lowest common denominator of your least capable platform. Alternatively you can choose to support only certain functions on certain devices. For example, PCs need to access corporate data (and SaaS applications) over the corporate VPN, so they are easier to compromise and present more risk, whereas more limited mobile devices, with better inherent protection, might be fine with less restrictive policies. This granularity can be established via policies within the endpoint security management platform. Over time MDM platforms will be able to compensate for limitations of underlying operating systems to provide stronger protection as their capabilities mature.

Managing Applications

The improved security architectures of mobile operating systems have required attackers to increasingly


Friday Summary: Legal wrangling edition

This week’s intro has nothing to do with security – just a warning in case that matters to you. I’m betting most people spent their spare time this week watching the World Cup. Or perhaps “sick time”, given the apparent national epidemic that suddenly cleared up by Friday. I am not really a ‘football’ fan, but there were some amazing matches and I remain baffled at how a player thought he could get away with biting another player during a match. And then flop and cry that he hurt his mouth! Speechless! But being perverse, I spent most of my spare time this week following a couple court cases. Yes, legal battles. I’m weird that way. The most interesting was O’Bannon v. NCAA up in Oakland, California. I am following it because this case has strong potential to completely change college athletics. If you haven’t been paying attention, the essence is that players cannot make money from marketing their own images, but colleges can. For example, a player might be ‘virtualized’ in an EA video game, and the college paid $10M, but the player cannot receive any financial compensation. The NCAA has drawn a line in the sand, and stated that players must receive less than the actual federal rate for the cost of college attendance. But what gets me is that the NCAA president believes that if a player is in a photo with a product, and receives money from the company, then s/he is being exploited. If s/he is in the same photo, and does not receive money, then s/he is not being exploited. Their uniforms can have corporate logos, and that company can pay the coach to make players advertise their products. The players can be forced to appear in front of banners with corporate logos, and even be forced to drink water from bottles with their corporate logos, but none of that would be exploitation! Not on the NCAA’s watch. Thank goodness the president of the NCAA is there to protect students from these corporate pirates! Here’s a $1.6 million salary for your virtuous hard work, Mark!
I joked with a friend recently that I honestly don’t know how we played college football in the 50s, 60s, and 70s without the millions and millions of dollars now being funneled into these programs. How could we have possibly played the game without all this money? I had not seen a game in years, and attended a local college game last fall; I was horrified that one team’s logo and image were completely subsumed by the local corporate sponsors – notably a local Indian casino. Appalled. The casino’s logo was displayed after each touchdown. The audience just clapped as the sponsoring casino paid for fireworks, and who doesn’t love fireworks? As a previous president stated about the NCAA, ‘amateurism’ plays to the participants, not the enterprise. At Texas the football program pays for the entire athletic department, including $5.3M for the head football coach, and still hands back $9M a year to the school. I’m told the University of Alabama grossed well over $100M in one year from its football program’s various revenue sources. Serious. Freaking. Money. From the various reports I am reading, it does not look good for the NCAA. I am not a betting man, but if pushed I would wager on the plaintiff’s side. And at some time in the future, after the appeals, suddenly the students who support this multi-billion dollar industry will get a big piece of the pie. I was rooting for Aereo. Really rooting for Aereo, but they lost their case against the broadcasters. Shot down by the Supreme Court verdict earlier this week. And honestly it’s hard to fault the verdict – give it a read. This is a huge win for broadcasters and cable carriers, and a serious loss for viewers. When it comes down to it Aereo is re-broadcasting others’ content and making a profit off it. We are not keen at Securosis when content aggregation sites routinely bundle our posts and sell advertising around it either. 
Still, why the hell can’t the broadcasters make this work and provide the content in a way users want? The broadcasting rules and contracts really need to change to allow some innovation, or viewers will ultimately go somewhere else to get what they want. As a consumer I am miffed that something provided over the air, for free, can’t be sent to me if I want to watch it (if you have ever lived just out of sight of a broadcast tower where you got crappy reception, you know exactly what I am talking about). Or put it on your DVR. Or whatever private use you want to make of it – the customers you broadcast it to might actually want to watch the content at some convenient place and time. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian’s Webcast on the Open Source Development and Application Security Survey.

Favorite Securosis Posts

Adrian Lane: Open Source Development Analysis: Development Trends.
Mike Rothman: Knucklehead-Employee.com. Yeah, it’s mine. But it’s still too damn funny. And I got to bust out the memegenerator. So it’s a win all around.

Other Securosis Posts

Incite 6/25/2014: June Daze.
Trends in Data Centric Security [New Series].
Open Source Development Analysis: Application Security.
Firestarter: Apple and Privacy.

Favorite Outside Posts

Adrian Lane: BoringSSL. This is not about the introduction of BoringSSL itself, but the author’s no-BS, tired-of-waiting-for-politics, get-this-crap-fixed approach – without even calling out OpenSSL. Bravo.
Dave Lewis: The Akamai State of the Internet Report.
James Arlen: Deloitte’s Global Defense Outlook 2014.
Mike Rothman: Asymmetry of People’s Time in Security Incidents. Lenny Z does a good job of explaining why poor incident handling/management can make it much more expensive to clean up an attack than it is for the attacker. Be prepared, and change the economics. Unfortunately automated attacks now offer so much leverage that you probably cannot achieve parity. But don’t exacerbate the situation.
Research Reports and Presentations

Defending Against Network-based Distributed Denial of


Knucklehead-Employee.com

You have to love it when your employees take some initiative and aggressively take it to the competitor that is cleaning your clock. They spend their time working the product, refining the messaging, and getting your mojo back in the market, right? Or you can just buy a domain like competitorFAIL.com and post some sophomoric insults at the competition. I’m pretty sure that favorably impacts the sales cycle, though it may more favorably impact the employees’ self-esteem. You might think this is a joke, but it’s not. Some HP ArcSight folks figured that if they couldn’t compete in the market, they might as well just insult Splunk, and that would help. They bought splunkfail.com and posted some zingers like this one on the Tweeter. “Splunk is a security company #AprilFoolsDay.” (April 1, 2014 @splunkfail). Seriously. This really happened. ROFL. Literally – I actually rolled on the floor laughing. The folks at Starbucks were not amused. Neither were the Splunk folks, and they (rightfully) complained to HP’s Ethics officer, who promptly dealt with the situation, resulting in those employees pulling down the site and giving the domain to Splunk. Though HP did claim no responsibility for the rogue employees. Maybe they will accept responsibility for providing an endless stream of LOLs for the rest of us.


Incite 6/25/2014: June Daze

I’m not sure why I ever think I’ll get anything done in June. I do try. I convince myself this year will be different. I look at the calendar and figure I’ll be able to squeeze in some writing. I’m always optimistic that I will be able to crank through it because there is stuff to get done. And then at the end of June I just shrug and say to myself, “Yup, another June gone and not much got done.” That’s not really true. I did a lot of travel. I took some great vacations with the family. I had great meetings with clients. But from a deliverables standpoint, not much got done at all. I shouldn’t be hard on myself because I have been at home a grand total of 30 hours for the entire month thus far. Seriously, 30 hours. Yes, I understand these are first world problems. I mentioned that the girls danced at Disney; then it was off to the west coast for a client meeting. Then I flew across the pond for a couple days in London for the Eskenzi PR CISO forum. For the first time (shocking!), I got to tour around London and it was great. What a cool city! Duh. As I mentioned in Solo Exploration, I’ve made a point to explore cities I visit when possible, and equipped with my trusty mini-backpack I set out to see London. And I did. I saw shows. I checked out the sights with the rest of the tourists. I took selfies (since evidently that’s what all the kids do today). I met up with some friends of friends (non-work related) and former colleagues who I don’t get to see enough. It was great. But right when I got home, it was a frantic couple hours of packing to get ready for the annual beach trip with my in-laws. Yup, told you this was a first world problem. I did work a bit at the beach, but that was mostly to make sure I didn’t drown when I resurfaced today. I also had some calls to do since I wasn’t able to do them earlier in the month, and given that I commit to family time by noon, there wasn’t a lot of time to write. There never is in June.
Then last Sunday we dropped the kids off for their 6+ weeks of camp, and I spent another couple days meeting friends and clients in DC around a certain other analyst firm’s annual security conference. So by the time we packed up the van and headed back to ATL yesterday, I had basically been gone the entire month. Now I have a few days in ATL to dig out, and then it’s another quick trip next week. Yes, this is the life I chose. Yes, I really enjoy the work. And yes, I’m in a daze and it won’t slow down until the middle of July. Then I’ll get to bang through the backlog and start work on summer projects. I could make myself crazy about what’s not getting done, or I can take a step back and remember things are great. I choose the latter, so I’ll get done what I can and smile about it. I will be sure to be a bit more realistic about what will get done next June. Until I’m not.

–Mike

Photo credit: “Daze” originally uploaded by Clifford Horn

The fine folks at the RSA Conference posted the talk Jennifer Minella and I gave on mindfulness at the conference this year. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

June 17 – Apple and Privacy
May 19 – Wanted Posters and SleepyCon
May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
May 5 – There Is No SecDevOps
April 28 – The Verizon DBIR
April 14 – Three for Five
March 24 – The End of Full Disclosure
March 19 – An Irish Wake
March 11 – RSA Postmortem
Feb 21 – Happy Hour – RSA 2014

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory.
And you can get all our research papers too.

Trends in Data Centric Security: Introduction
Open Source Development and Application Security Analysis: Development Trends; Application Security; Introduction
Understanding Role-based Access Control: Advanced Concepts; Introduction
NoSQL Security 2.0: Understanding NoSQL Platforms; Introduction

Newly Published Papers

Advanced Endpoint and Server Protection
Defending Against Network-based DDoS Attacks
Reducing Attack Surface with Application Control
Leveraging Threat Intelligence in Security Monitoring
The Future of Security
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7
Eliminating Surprises with Security Assurance and Testing

Incite 4 U

Problem fixed. Now clean up your mess. Yes, some 300k sites have yet to patch the OpenSSL ‘Heartbleed’ vulnerability, but a more troubling issue is that residual leaked data will cause ongoing problems, as Robert Hansen illustrated in The Ghost of Information Disclosure Past. Many vulnerable sites had credentials scraped, and while they asked their users to reset their passwords, they did not force resets. Attackers have now accumulated credentials which can provide fun and mayhem for anyone with 5 Bitcoins. The Heartbleed cleanup is messy, and in cases where (potentially) all user passwords could be compromised, it is best to “nuke from orbit” and require resets for all registered users. No one said it was easy, right? – AL

You too can be a security person: There is no doubting the skills shortage in security. We routinely talk to folks who have open positions for 6-12 months, and they are significantly compromising on the skills & capabilities of candidates.
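The “nuke from orbit” approach mentioned above – forcing resets for every registered user rather than politely asking – is mechanically simple. Here is a toy sketch using an in-memory SQLite database; the table and column names are hypothetical, but the two-step idea (flag every account, kill every live session) is the point.

```python
# Illustrative sketch of a forced credential reset after a leak like
# Heartbleed. Schema is hypothetical; any real deployment would also
# rotate certificates and notify users.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, must_reset INTEGER DEFAULT 0);
    CREATE TABLE sessions (token TEXT, user_id INTEGER);
    INSERT INTO users (email) VALUES ('a@example.com'), ('b@example.com');
    INSERT INTO sessions VALUES ('tok1', 1), ('tok2', 2);
""")

# Flag every account, then invalidate every live session, so stolen
# passwords and cookies stop working immediately.
db.execute("UPDATE users SET must_reset = 1")
db.execute("DELETE FROM sessions")
db.commit()

flagged = db.execute("SELECT COUNT(*) FROM users WHERE must_reset = 1").fetchone()[0]
live = db.execute("SELECT COUNT(*) FROM sessions").fetchone()[0]
print(flagged, live)  # 2 0
```

The asymmetry is exactly the one the Incite describes: the reset itself is two statements, while the cleanup cost lands on the users and support desk – which is why so many sites only “asked nicely”.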


Trends in Data Centric Security [New Series]

It’s all about the data. The need of many different audiences to derive value from data is driving several disruptive trends in IT. The question that naturally follows is “How do you maintain control over data regardless of where it moves?” If you want to make data useful, by using it in as many places as you can, but you cannot guarantee those places are secure, what can you do? Today we launch a new series on Data Centric Security. We are responding to customer inquiries about what to do when moving data to locations they do not completely trust. The majority of these inquiries are motivated by “big data” usage as firms move data into NoSQL clusters. The gist is that we don’t know how to secure these environments, we don’t really trust them, and we don’t want a repeat of data leakage or compliance violations. Here at Securosis we have blogged about NoSQL security for some time, but the specifics of customer interest came as a surprise. They were not asking “How do I secure Hadoop?” but instead “How do I protect data?” with specific interest in tokenization and masking. An increasing number of firms are asking about data security for cloud environments and HIPAA compliance – again, more focused on data rather than system controls. This is what Data Centric Security (DCS) does: embed security controls into the data, rather than into applications or supporting infrastructure. The challenge is to implement security controls that do not render the data inert. Put another way, they want to derive value from data without leaving it exposed. Sure, we can encrypt everything, but you cannot analyze encrypted data. To decrypt within the environment means distributing keys and encryption capabilities, implementing identity management, and ensuring the compute platform itself is trustworthy. And that last is impossible when we cannot guarantee the security of the platform. Data Centric Security provides security even when the systems processing data cannot be fully trusted.
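Tokenization, one of the techniques customers keep asking about, illustrates the data centric idea well: the real value lives only in a protected vault, and everything downstream sees a random surrogate that reveals nothing even if the downstream system is compromised. A toy in-memory sketch (an assumption for illustration, not a production design, which would need a persistent, access-controlled vault):

```python
# Minimal tokenization sketch: random surrogates with a lookup vault.
# An in-memory dict stands in for the protected token vault.
import secrets

class TokenVault:
    def __init__(self):
        self._forward = {}   # real value -> token
        self._reverse = {}   # token -> real value

    def tokenize(self, value):
        if value in self._forward:          # same input, same token
            return self._forward[value]
        token = secrets.token_hex(8)        # random: preserves nothing of the input
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token):
        return self._reverse[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
assert t != "4111-1111-1111-1111"
assert vault.detokenize(t) == "4111-1111-1111-1111"
assert vault.tokenize("4111-1111-1111-1111") == t   # stable per value
```

Because the same input always maps to the same token, analytics like counts, joins, and group-bys still work on tokenized data – value is preserved while the sensitive original never leaves the vault.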
We can both propagate and use data to derive business value while still maintaining a degree of privacy and security. Sounds like a fantasy, but it’s real. But of course there are challenges, which I will detail later in this series. For now, understand that you need to actively select the right security measure for the specific use case. This makes data centric security a form of data management, and requires you to apply security policies, transform the data, and orchestrate distribution. This is not intended to be an exhaustive research effort, but an executive summary of data centric security approaches for a couple emerging use cases. This series will cover:

Use Cases: I will outline the top three use cases driving inquiries into data centric security, and the specific challenges they present.
Data Centric Technologies: We will examine a handful of technologies that support data centric security. We will explore tokenization, masking, and data element/format preserving encryption, as well as some other tricks.
Data Centric Security Integration: We will discuss how to incorporate DCS into data management operations and deploy these technologies. This is a combination of tools and process, but where you begin your journey affects what you need to do.

Our next post will cover DCS use cases.


Open Source Development Analysis: Development Trends

For the final installment of our analysis of the 2014 Open Source Development and Application Security Survey, we will focus on open source development trends. Our topic is less security per se, and more how developers use open source, how it is managed, and how it is perceived in the enterprise.

Are open source components more trustworthy than commercial software?

An unambiguous question in the survey asked, “Do you believe software assembled with open source is as secure as commercial off-the-shelf (COTS)?” Under 9% said that software assembled with open source is less secure, while over 35% stated they believed open source is more secure than COTS. Even more interesting: among participants who responded before Heartbleed, 34.83% believed applications assembled using open source components were more secure than COTS. After Heartbleed: 36.06%. Yes, after a major vulnerability in an open source component used in millions of systems around the globe, confidence in open source security did not suffer. In fact it ticked up a point. Ironic? Amazing? All I can say is I am surprised. What people believe is not necessarily fact. And we can’t really perform a quantitative head-to-head comparison between applications assembled with open source components and COTS security to verify this belief. But the survey respondents deal with open source and commercial software on a daily basis – they are qualified to offer a professional opinion. The net result is that for every person who felt COTS was more secure, four felt that open source was more secure. In any popular vote that qualifies as a landslide.

Banning components

“Has your company ever banned the use of an open source component, library, or project?” The majority of respondents, some 78%, said “No”. Still, I have singled this question out as a development practice issue – something I hear organizations talk about more and more. Software organizations ban components for a number of reasons.
Licensing terms might be egregious. Or they might simply no longer trust a component’s reliability or security. For example, virtually all released Struts components have severe security exploits, described by critical CVE warnings. Poorly written code has reliability and security issues; the two tend to go hand in hand. You can verify this by looking at bug tracking reports: you will see issues clump together around one or two problematic pieces of software. Banning a module is often politically messy because it can be difficult to find or build a suitable replacement. But it is an effective, focused way to improve security and reliability. Post-Snowden we have seen increased discussion around trust, and whether or not to use certain libraries because of potential subversion by the NSA. This is more a risk perception issue than a tangible one like licensing, but nonetheless a topic of discussion. Regardless of your motivation, banning modules is an option to consider for critical – or suspect – elements of your stack.

Open source policies

Open source policies were a major focus area for the survey, and the question “Does your company have an open source policy?” was the lead-in for several policy related questions. 47% of respondents said they have a policy. When asked, “What are the top three challenges with your open source policy?” the top three responses were: 41% stated there is little enforcement so workarounds are common, 39% believed it does not deal with security vulnerabilities, and 35% said what is expected is not clear. This raises the question: what goes into an open source policy? The answer dovetails nicely with an earlier survey question: “When selecting components, what characteristics would be most helpful to you?” That is how you decide. Most companies have a licensing component to their policies, meaning which types of open source licenses are permitted.
And most specify versioning and quality controls, such as no beta software. More often than not we see policies around security – for example, requiring that components with critical vulnerabilities be patched or avoided altogether. After those items, the contents of open source policies are wide open. They vary widely in how prescriptive they are – meaning how tightly they define ‘how’ and ‘what’. “Who in your organization is primarily responsible for open source policy / governance?” While the bulk of responsibility fell on development managers (34%) and IT architects (24%), much of it landed outside development. Legal, risk, and executive teams are unlikely to craft policies which development can implement easily. So development needs to either take ownership of policies, or work with outside groups to define feasible goals and the easiest route to them. We could spend many pages on policies, but the underlying issue is simple: policies are supposed to make your life easier. If they don’t, you need to work on the policies. Yes, I know those of you who deal with regulatory compliance in your daily jobs scoff at this, but it’s true. Policies are supposed to help avoid large problems or failures down the road which cost serious time and resources to fix. Here is the simple dividing line: policies written without regard for how they will be implemented, or a clear path to make open source use easier and better, are likely to be bypassed. Just like development processes, policies take work to optimize. Once again, you can find the final results of the survey here.
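A banned-component policy is one of the few policy items that can be enforced mechanically rather than bypassed. A minimal sketch of such a check, run before a build is allowed to proceed; the dependency names, ban list, and reasons here are hypothetical examples, not a real tool's data:

```python
# Hypothetical build-time gate: compare declared dependencies against a
# ban list maintained by the policy owner. Names and reasons are
# illustrative only.

BANNED = {
    "struts2-core": "history of critical CVEs",
    "some-gpl-lib": "license not permitted by policy",
}

def check_dependencies(deps):
    """Return (component, reason) pairs for any dependency on the ban list."""
    return [(d, BANNED[d]) for d in deps if d in BANNED]

violations = check_dependencies(["commons-lang3", "struts2-core", "guava"])
for component, reason in violations:
    print(f"BLOCKED: {component} ({reason})")
# BLOCKED: struts2-core (history of critical CVEs)
```

Wiring a check like this into CI is what turns a policy document into the “little enforcement” gap closing: workarounds stop being the path of least resistance once the build fails on a banned component.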


Open Source Development Analysis: Application Security

Continuing our analysis of the 2014 Open Source Development and Application Security Survey, we can now discuss results, as the final version has just been released. Today’s post focuses on the application security facets of the data. Several questions in the survey focused on security practices within open source development, including vulnerability tracking and who is responsible for security. I will dive into the results in detail, sharing my perspective on where things are getting better, which results surprised me, and where I believe improvements and attention are still needed. Here we go…

Who’s talking?

When analyzing a survey I always start with this question. It frames many of the survey’s answers. Understanding who is responding also helps illuminate the perspective expressed on the issues and challenges discussed. When asked “What is your role in the organization?” the respondents were largely developers, at 42.78% of those surveyed. Considering that most architects, DevOps types, and build managers perform some development tasks, it is safe to say that over 50% of respondents have their hands on open source components and projects. A full 79% (including development managers) are in a position to understand the nuances of open source development, judge security, and reflect on policy issues.

Is open source important?

The short answer is “Hell yes, it’s important!” The (Maven) Central Repository – the largest source of open source components for developers – handled thirteen billion download requests last year. That’s more than a billion – with a ‘B’ – every month. This statistic gives you some idea of the scale at which open source components are used to assemble software applications today. What’s more, the Sonatype data shows open source component usage on the rise, growing 62% in 2013 over 2012, and more than doubling since 2011.
Responses to “What percentage of a typical application in your organization is comprised of open source components?” show that at least 75% of organizations rely on open source components in their development practices. While ‘0-20%’ was an option, I am willing to bet few were really at ‘zero’, because those people would be highly unlikely to participate in this survey. So I believe the number with some involvement (including 1-20%) is closer to 100%. The survey looked at use of open source components across verticals; it captured responses from most major industries, including banks, insurance, technology/ISVs, and government. Open source component usage is not relegated to a few target industries – it is widespread. The survey also asked “How many developers are in your organization?”, to which almost 500 participants answered 1,000 or more. Small firms don’t have 1,000 developers, so at least 15% of responses were from large enterprises. That is a strong showing, given that only a few years ago large enterprises did not trust open source and generally refused to officially endorse its use on corporate systems. And with nearly 700 responses from organizations with 26-100 developers, the survey reflects a good balance of organizational sizes. Adoption continues to climb because open source has proven its worth – in terms of both quality and getting software built more quickly when you don’t try to build everything from scratch. More software than ever leverages contributions from the open source community, and widespread adoption makes open source software incredibly important.

Are developers worried about security?

Questions around software security were a theme of this year’s audit, which is why the name changed from years past to “Open Source Development and Application Security Survey”.
A central question was “Are open source vulnerabilities a top concern in your position?”, to which 54.16% answered “Yes, we are concerned with open source vulnerabilities.” Concern among more than half of respondents is a good sign – security is seldom part of a product design specification, and has only recently become part of the design and testing phases of development. Viewed another way, 10 years ago that number would have been about zero, so we are seeing a dramatic change in awareness. Outside development, security practitioners get annoyed that only about 50% responded “Yes” to this question. They zealously believe that when it comes to software development, everyone from the most senior software architect to the new guy in IT needs to consider security practices a priority. As we have seen in breaches over the last decade, it only takes one weak link to fail. Lending support to the argument that software development has a long way to go when it comes to security, 47.29% of respondents said “Developers know it (security) is important, but they don’t have time to spend on it.” The sentiment “I’m interested in security but the organization is not” is very common across development organizations. Most developers know security is an open issue, but fixing it typically does not make its way up the list of priorities while there are important features to build – at least not until there is a problem. Developers’ growing interest in security practices is a good sign; allocation of resources and prioritization remain issues. What are they doing about it? This year’s results offer a mixed impression of what development organizations are actually doing about security.
For example, one set of responses showed that developers (40.63%) are responsible for “tracking and resolving newly discovered vulnerabilities in components included in their production applications.” From a developer’s perspective this result looks legitimate. And the 2014 Verizon Data Breach Investigations Report makes clear that the application stack is where the main security issues are being exploited. But application security buying behavior does not jibe with patterns across the rest of the security industry. Understanding that the survey participants were mostly developers with an open source perspective, this number is still surprising, because the vast majority of security expenditures are for network and endpoint security devices. Security, including application security, is generally bolted on rather than fixed from within. Jeremiah Grossman, Gunnar Peterson, and others have all discussed the ineffectiveness of gearing security toward the network rather than applications. And the Whitehat Website Security Statistics report shows a long-term cost benefit from fixing problems within applications, but what we


Firestarter: Apple and Privacy

Mike is out on a beach this week sunning himself (don’t think too hard about that) so Rich and Adrian join up to talk about some interesting developments in Apple privacy, and how Apple may be using them to gain some competitive advantage. The audio-only version is up too.


2014 Open Source Development Webcast this Wednesday

A quick reminder: Brian Fox and I will be doing a webcast this Wednesday (June 18th) on the results of the 2014 Open Source Development and Application Security Survey. We have decided to divide the survey into a half dozen or so focus areas and discuss the results. We have different backgrounds in software development, so we feel an open discussion is the best way to offer perspective on the results. Brian has been a developer and has worked with the open source community for well over a decade, and I have worked with open source since the late ’90s and managed secure code development for about as long. The downside is that we were both created with the verbose option enabled, but we will be sure to leave time for comments at the end. Register for the webcast to listen in live. Talk to you Wednesday!


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.