
Ticker Symbol: HACK

I think the financial equivalent of jumping the shark is Wall Street creating an ETF based on your theme. If so, cybersecurity has arrived. The ISE Cyber Security Index provides a benchmark for investors interested in tracking companies actively involved in providing technology and services designed to protect data, networks, hardware, software, and other cyber-dependent devices from unauthorized access and attacks. The index includes twenty-nine constituent companies, including VASCO Data Security International Inc. (ticker: VDSI), Palo Alto Networks Inc. (ticker: PANW), Symantec Corp. (ticker: SYMC), Juniper Networks Inc. (ticker: JNPR), FireEye Inc. (ticker: FEYE), and Splunk Inc. (ticker: SPLK).

Before you invest your life savings in ETFs, listen to Vanguard founder Jack Bogle: “The ETF is like the famous Purdey shotgun that’s made over in England. It’s great for big game hunting, and it’s great for suicide.”

Two interesting things to look at in an ETF are fees and weighting. The fees on this puppy look to be 0.75% – outlandishly high. For comparison, Vanguard’s Dividend Growth ETF carries a 0.1% fee. It is true that fees run higher on foreign ETFs (to cover access to foreign markets), but I do not know why HACK should have such a high fee – the shares it lists are liquid and widely traded, and foreign issues alone do not seem to dictate such a lavish expense ratio. As of October 30, 2014, the Underlying Index had 30 constituents, 6 of which were foreign companies, and the three largest stocks and their weightings in the Underlying Index were VASCO Data Security International, Inc. (8.57%), Imperva, Inc. (6.08%), and Palo Alto Networks, Inc. (5.49%).

I cannot tell exactly how the fund is weighted, but if it follows the ISE weighting, investors will wind up almost 10% into Vasco. The largest members of the index, per ISE, are:

  • Vasco: 9.17%
  • Imperva: 7.57%
  • Qualys: 5.48%
  • Palo Alto: 5.35%
  • Splunk: 5.18%
  • Infoblox: 5.04%

That is nearly 40% in the top six holdings – pretty concentrated. The old-school way to index is to weight by market capitalization, but that has been shown to be imperfect because size alone does not determine quality. The preferred weighting for the last few years (since Rob Arnott’s work) has been by value, which bases the percentage of each holding on value metrics like P/E. There is considerable evidence that this works much better than market cap. But we still have a problem: many tech companies, especially new ones, have no earnings! Without earnings, value metrics are a non-starter.

From reverse engineering the index membership, it looks like they are using Price/Sales for weighting. For example: Vasco has a Price/Sales ratio of 6.1, while Palo Alto has a P/S ratio of 13.5. Vasco gets about twice the weighting of Palo Alto because it is about twice as cheap on a Price/Sales basis. This is probably not the best way to do it, but it may be the best available way: market cap is flawed, and value metrics would miss all the upstarts. The weightings appear roughly right per Price/Sales, but I could not get the numbers to work precisely. It is possible they are using an additional weighting factor such as relative strength.

Needless to say, this is all in the spirit of “As the Infosec Industry Turns…” and not financial advice of any kind. This is not a recommendation to buy, sell, or hold any of the issues mentioned. In the meantime remember the fees, and this from Jack Bogle: “Performance comes and goes but cost goes on forever.”

HACK SEC filing
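As a back-of-the-envelope check on that claim, here is a minimal sketch of inverse Price/Sales weighting using only the two ratios quoted above. This is my reconstruction for illustration – not the fund’s published methodology:

```python
# Back-of-the-envelope check of inverse Price/Sales weighting,
# using the two ratios quoted above. The fund's actual methodology
# is not public, so treat this as an illustration only.
price_to_sales = {
    "VDSI": 6.1,   # Vasco
    "PANW": 13.5,  # Palo Alto Networks
}

# Cheaper on P/S => larger weight, so weight by the inverse ratio.
inverse = {ticker: 1.0 / ps for ticker, ps in price_to_sales.items()}
total = sum(inverse.values())

for ticker, inv in inverse.items():
    print(f"{ticker}: {inv / total:.1%}")
# Prints roughly VDSI: 68.9%, PANW: 31.1% -- about 2:1, consistent
# with Vasco carrying about twice Palo Alto's index weight.
```

Spread across all thirty constituents (plus whatever secondary factor they apply), this kind of scheme would produce the roughly 2:1 Vasco-to-Palo-Alto relationship seen in the index.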


Incite 11/12/2014: Focus

Interruption is death for a writer. At least it is for me. I need to get into a flow state, where I’m locked in and banging words out. With my travel schedule and the number of calls I make even when not traveling, finding enough space to get into flow has been challenging. Very challenging. And it gets frustrating. Very frustrating. There is always some shiny object to pay attention to. A press release here. A tweet fight there. Working the agenda for a trip two weeks from now. Or something else that would qualify as ‘work’, but not work.

Then achiever’s anxiety kicks in. The blog posts get pushed back day after day, and conflict with projects that need to get started. I have things to do, but they don’t seem to get done. Not the writing stuff anyway. It’s a focus thing. More accurately, a lack-of-focus thing. Most of the time I indulge my need to read NFL stories or do some ‘research’. Or even just to think big thoughts for a little while. But at some point I need to write. That is a big part of the business, and stuff needs to get done.

So I am searching for new ways to do that. I shut down email. That helps a bit. I don’t answer the phone and don’t check Twitter. That helps too. Maybe I will try a new writing app that basically shuts down all the other apps. Maybe that will help ease the crush of the overwhelming to-do list. Of course my logical mind knows you just start writing. That I need to stop with the excuses and just write. I know the first draft is going to be crap, especially if it’s not flowing. I know the inbound emails can wait a few hours. I know my Twitter timeline will be there after the post is live on the site. Yet my logical mind loses, as I just stare at the screen for a few more minutes. Then check email and Twitter. Again. Oy.

Then I go into my pipeline tracker and start running numbers on the impact of not writing on my wallet. That helps. Until it doesn’t. We have had a good year, so the monkey brain wonders whether it would really be so bad to sandbag some of the projects and get 2015 off to a roaring start. But I still need to write. Then at some point, I just write. The excuses fall away. The words start to flow, and even make some sense. I get laser focused on the research that needs to get done, and it gets done. The blog fills up with stuff, and balance is restored to my universe. And I resign myself to just carrying around my iPad when I really need to write, because it’s harder to multi-task on that platform. I’ll get there. It’ll just take a little focus.

–Mike

Photo credit: “Focus” originally uploaded by Michael Dales

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can watch it on YouTube. Take the hour – your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
  • October 27 – It’s All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can’t Handle the Gartner
  • July 22 – Hacker Summer Camp
  • July 14 – China and Career Advancement
  • June 30 – G Who Shall Not Be Named
  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon
  • May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • Network Security Gateway Evolution: Introduction
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC: Emerging SOC Use Cases; Introduction
  • Building an Enterprise Application Security Program: Recommendations; Security Gaps; Use Cases; Introduction
  • Security and Privacy on the Encrypted Network: The Future is Encrypted

Newly Published Papers

  • Secure Agile Development
  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Security Pro’s Guide to Cloud File Storage and Collaboration
  • The 2015 Endpoint and Mobile Security Buyer’s Guide
  • Open Source Development and Application Security Analysis
  • Advanced Endpoint and Server Protection
  • The Future of Security

Incite 4 U

Master of the Obvious: Cloud Edition: On my way to the re:Invent conference I read the subhead of a FUD-tastic eWeek article, IT Losing the Battle for Security in the Cloud: “More than two-thirds of respondents to a Ponemon Institute survey say it’s more difficult to protect sensitive data in the cloud using conventional security practices.” Um. This is news? The cloud is different! So if you want to secure it you need to do so differently. The survey really shows that most folks have no idea what they are talking about, which is expected in the early adoption phase of any technology. It is not necessarily harder to protect resources in the cloud. I just laugh and then cry a bit, as I realize the amount of education required for folks to understand how to do things in the cloud. I guess that is an opportunity for guys like us, so I won’t cry too long… – MR

Here we go again: There are a half dozen tokenization working groups proposing standards, by my count. Each has vagueness baked into its published specification – many intentionally,


Building an Enterprise Application Security Program: Recommendations

Our goal for this series is not to cover the breadth and depth of an entire enterprise application security program – most of you have that covered already. Instead it is to identify the critical gaps at most firms and offer recommendations for closing them. We have covered use cases and pointed out gaps; now it’s time to offer recommendations for addressing the deficiencies. You will notice that many of the gaps noted in the previous section are byproducts of either a) attackers exposing soft spots in security, or b) innovation with the cloud, mobile, and analytics changing the boundaries of what is possible.

Core Program Elements

Identity and Access Management: Identity and authorization mapping form your critical first line of defense for application security. SAP, Oracle, and other enterprise application vendors offer identity tools to link to directory services, help with single sign-on, and map authorizations – key to ensuring users only get data they legitimately need. Segregation of duties is a huge part of access control programs, and your vendor likely covers most of your needs from within the platform. But there is an over-reliance on these basic services, and while many firms have stepped up to integrate multiple identity stores with federated identity, attackers have shown most enterprises still need to improve in some areas.

Passwords: Passwords are simply not very good as a security control, and password rotation has never been proven to increase security; it turns out to be IT overhead for compliance’s sake. Phishing has proven effective for landing malware on users’ machines, enabling subsequent compromises, so we recommend two-factor authentication – at least for all administrative access. Two-factor authentication is commonly available and can be integrated out-of-band to greatly increase the security of privileged accounts.

Mobile: Protecting your users running your PCs on your network behind your firewalls is simply old news. Mobile devices are a modern – and prevalent – interface to enterprise applications. Most users don’t wait for your IT department to make policy or supply devices – they buy their own and start using them immediately. It is important to treat mobile as an essential extension of the traditional enterprise landscape. These ‘new’ devices demand special consideration: how to deploy identity outside your network, how to de-provision users who have left, and whether you need to quarantine data or apps on mobile devices. Cloud or ‘edge’ identity services, with token-based (typically SAML or OpenID) identity and mobile application management, should be part of your enterprise application security strategy.

Configuration and Vulnerability Management: When we discussed why enterprise applications are different, we made special mention of several deficiencies in assessment products – particularly their limited ability to collect necessary information and their lack of in-depth policies. But assessment is still one of the most powerful tools at your disposal, and generally the mechanism for validating 65% of security and compliance policies. It automates hundreds of repetitive, time-consuming, and highly technical system checks. We know it sounds cliché, but this really does save compliance and security teams time and money. These tools come with the most common security and compliance policies embedded to reduce custom development, and most provide a mechanism for non-technical stakeholders to obtain the technical data they need for reporting.
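To make “automated system checks” concrete, here is a minimal sketch of the sort of policy test an assessment tool runs hundreds of at a time. The configuration keys and policy thresholds are hypothetical examples, not any vendor’s rule set:

```python
# Minimal sketch of an automated configuration check -- the kind of
# repetitive, technical test assessment tools run hundreds of at a
# time. The config format and policy thresholds are hypothetical.
config = {
    "password_min_length": 6,
    "audit_logging": False,
    "default_accounts_disabled": True,
}

policy = [
    ("password_min_length", lambda v: v >= 12, "minimum password length >= 12"),
    ("audit_logging", lambda v: v is True, "audit logging enabled"),
    ("default_accounts_disabled", lambda v: v is True, "default accounts disabled"),
]

# Evaluate each policy rule against the collected configuration.
for key, check, description in policy:
    status = "PASS" if check(config.get(key)) else "FAIL"
    print(f"{status}: {description}")
```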
You probably have something in place already, but there is a good chance it misses a great deal of what tools designed specifically for your application could acquire. We recommend making sure your product can obtain data both inside and outside the application, with a good selection of policies specific to your key applications. A handful of generic application policies is a strong indicator that you have the wrong tool.

Data Encryption: Most enterprise applications were designed and built with some data encryption capability. Either the application embeds its own encryption library and key management system, or it leverages the underlying database encryption engine to encrypt specific columns – or entire schemas – of data. Historically there have been several problems with this model. Many firms discovered that despite encrypting the data, database indices and transaction logs contained and leaked unencrypted information. Additionally, encrypted data is stored in binary format, making it very difficult to calculate or report across. Finally, encryption has created performance and latency issues. The upshot is that many firms either turned encryption off entirely or removed it from temporary tables to improve performance. Fortunately there is an option which offers most of the security benefits without the downsides: transparent data encryption. It works underneath the application or database layer to encrypt data before it is stored on disk (see the sketch at the end of this post). It is faster than column encryption, transparent (so no application layer changes are required), and avoids the risk of accidentally leaking data. Backups are still protected, and you are assured that IT administrators cannot read data off disk. We recommend considering products from several application/database vendors and some third-party encryption vendors.

Firewalls & Network Security: If you are running enterprise applications, you have firewalls and intrusion detection systems in place. Likely you also have next-generation firewalls, web application firewalls, and/or data loss prevention systems protecting your applications. Because these investments are already paid for and in place, they tend to be the default answer to any application security question. The law of the instrument states that if all you have is a hammer, everything looks like a nail. The problem is that these platforms are not optimal for enterprise application security, but they are nonetheless considered essential because every current security job falls to them. Unfortunately they do a poor job with application security: most were designed to detect and address network misuse, and they do not understand how enterprise applications work. Worse, as we shift ever more toward virtualization and the cloud, physical networks go away, making them less useful in general. But the real issue is that a product which was not designed to understand the application cannot effectively monitor its use or function. We recommend looking at
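As promised, here is a toy illustration of the transparent model described above – encryption below the application layer, so callers read and write plaintext. It is a minimal sketch using Python’s cryptography package; the local key is purely illustrative, since real transparent data encryption lives in the database or storage layer and depends on external key management:

```python
# Minimal sketch of encryption beneath the application layer:
# callers read and write plaintext while data on disk stays
# encrypted. Uses the 'cryptography' package. The locally generated
# key is for illustration only -- real TDE depends on external key
# management, which this sketch deliberately ignores.
from cryptography.fernet import Fernet

class EncryptedStore:
    def __init__(self, path: str, key: bytes):
        self._path = path
        self._fernet = Fernet(key)

    def write(self, plaintext: bytes) -> None:
        # Encrypt just before the data hits disk.
        with open(self._path, "wb") as f:
            f.write(self._fernet.encrypt(plaintext))

    def read(self) -> bytes:
        # Decrypt on the way back up; callers never see ciphertext.
        with open(self._path, "rb") as f:
            return self._fernet.decrypt(f.read())

key = Fernet.generate_key()          # in practice: fetch from a key manager
store = EncryptedStore("orders.dat", key)
store.write(b"order=1234,amount=99.00")
print(store.read())                  # transparent to the caller
```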


Changing Pricing (for the first time ever)

This is a corporate news post, so skip it if all you want is our usual snarky security analysis.

For the first time since starting Securosis we are increasing our prices. Yes, it has been over seven years without any change in pricing for our services. The new prices are only a modest bump, and also streamlined to remove the uncertainty of travel expenses on engagements. Call it ego, but we think we are a heck of a bargain. This only affects speaking/strategy days and retainers. Papers, Securosis Project Accelerator workshops, and one-off projects aren’t changing.

Strategy day pricing stays the same at $6,000, but we are adding $1,000 for travel expenses and will no longer bill travel separately (a total of $7,000 for a strategy day or speaking engagement which involves travel). Webcasts stay the same, at $5,000, if we don’t need to travel. Our retainer rates are increasing slightly, around $2-3K each, with $2,000 also added to our Platinum plan to cover travel for the two included strategy days:

  • $10K for Silver.
  • $15K for Gold.
  • $25K for Platinum.

The new pricing goes into effect immediately for all new clients and renewals. As a reminder, for our papers we offer licenses, not sponsorship, so nothing has changed there. Securosis Project Accelerators (our focused end-user workshops for SaaS providers, enterprise cloud security, security management, network security, and database/big data security) are still $10,000. We do have some other workshops in the… works for next year, so if you are interested in another topic just ask. If you have any other questions, just go ahead and email. Service levels remain the same. You can only blame yourselves for keeping us so darn busy.


Monitoring the Hybrid Cloud: Emerging SOC Use Cases

In the introduction to our series on Monitoring the Hybrid Cloud we went through the disruptive forces which are increasingly complicating security monitoring, including the accelerating move to cloud computing and expanding access via mobile devices. These new models require much greater automation and leave you significantly less visibility and control over the physical layer of the technology stack. So you need to think about monitoring a bit differently. That starts with getting a handle on the nuances of monitoring depending on where applications run, so we will discuss monitoring both IaaS (Infrastructure as a Service) and SaaS (Software as a Service). Not that we discriminate against PaaS (Platform as a Service), but it is close enough to IaaS that the same concepts apply. We will also talk about private clouds, because odds are you haven’t been able to unplug your data center, so you need an end-to-end view of the infrastructure you use – both technology you control (in your data center) and stuff you don’t (in the cloud).

Monitoring IaaS

The biggest and most obvious challenge in monitoring Infrastructure as a Service is the difference in visibility, because you don’t control the physical stack. You are largely restricted to the logs provided by your cloud service provider. We see pretty good progress in the depth and granularity available from these cloud log feeds, but you still get much less detail than from devices in your data center. You also cannot tap the network to capture actual traffic (packet capture). IaaS vendors offer abstracted networking, so many networking features you have come to rely on aren’t available. Depending on the maturity of your security program and incident response process, you may not be doing much packet capture in your environment now, but either way it is no longer an option in the cloud. We will go into more detail later in this series, but one workaround is to run all traffic through a cloud-based choke point for collection. In essence you perform a ‘man-in-the-middle’ attack on your own network traffic to regain a semblance of the visibility you have inside your own data center, but that sacrifices much of the architectural flexibility drawing you to the cloud in the first place.

You also need to figure out both where to aggregate collected logs (from the cloud service and from specific instances) and where to analyze them. These decisions hinge on a number of factors, including where the technology stacks run, the kinds of analysis to perform, and what expertise is available on staff. We will tackle specific architectural options in our next post.

Monitoring SaaS

If monitoring IaaS offers a ‘foggy’ view compared to what you see in your own data center, Software as a Service is ‘dark’. You see what the SaaS provider shows you, and that’s it. You have access to neither the infrastructure running your application, nor the data stores that house your data. So what can you do? You can take solace in the fact that many larger SaaS vendors are starting to get the message from angry enterprise clients, and are providing an activity feed you can pull into your security monitoring environment. It won’t provide visibility into the technology stack, but you will be able to track what your employees are doing within the service – including administrative changes, record modifications, and login history.
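What pulling such a feed looks like varies by provider, but the pattern is consistent. Here is a minimal sketch, with a hypothetical endpoint, token, and event schema standing in for any particular vendor’s API:

```python
# Minimal sketch of pulling a SaaS activity feed into your own
# monitoring pipeline. The endpoint, token, and field names are
# hypothetical -- every provider exposes its own feed and schema.
import json
import urllib.request

FEED_URL = "https://api.example-saas.com/v1/activity"  # hypothetical
TOKEN = "service-specific-api-credential"              # hypothetical

def fetch_events(since: str) -> list:
    """Fetch activity events recorded after the given timestamp."""
    req = urllib.request.Request(
        f"{FEED_URL}?since={since}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["events"]

# Poll on a schedule and forward administrative changes, record
# modifications, and logins to the collector feeding your SIEM.
for event in fetch_events(since="2014-11-01T00:00:00Z"):
    print(event["user"], event["action"], event["timestamp"])
```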
Keep in mind that you will need to figure out thresholds and actions to alert on, most likely by taking a baseline of activity and then looking for anomalies (a minimal sketch of this appears at the end of this post). There are no out-of-the-box rules for monitoring SaaS. And as with IaaS, you need to figure out the best place to aggregate and analyze the data.

Monitoring a Private Cloud

Private clouds virtualize your existing infrastructure in your own data center, so you get full visibility, right? Not exactly. You will be able to tap the network within the data center for additional visibility. But without proper access and instrumentation within your private cloud, you cannot see what is happening within the virtualized environment. As with IaaS, you can route network traffic within your private cloud through an inspection point, but again that would reduce flexibility substantially. The good news is that many existing security monitoring platforms are rapidly adding virtual collection points which run in a variety of private cloud environments. We will address alternatives to extend your existing monitoring environment later in this series.

SLAs are your friend

As we teach in the CCSK (Certificate of Cloud Security Knowledge) course, you really don’t have much leverage to demand access to logs, events, or other telemetry in a cloud environment. So you will want to exercise whatever leverage you have during the procurement process, documenting specific logs, access, etc. in your agreements. You will find that some cloud providers (the smaller ones) are much more willing to be contractually flexible than the cloud gorillas. So you will need to decide whether the standard level of logging from the big guys is sufficient for the analysis you need. The key is that once you sign an agreement, what you get is what you get. You will be able to weigh in on product roadmaps and make feature requests, but you know how that goes.

CloudSOC

If a large fraction of your technology assets have moved into the cloud, there is a final use case to consider: moving the collection, analysis, and presentation functions of your monitoring environment into the cloud as well. It may not make much sense to aggregate data from cloud-based resources and then move it all to your on-premises environment for analysis. More to the point, it is cheaper and faster to keep logs and event data in low-cost cloud storage for future audits and forensic analysis. So you need to weigh the cost and latency of moving data to your in-house monitoring system against running monitoring and analytics in the cloud, in light of the varying pricing models for cloud-based
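As promised above, here is a minimal sketch of the baseline-then-anomaly approach: learn each user’s normal activity level, then flag large deviations. The counts and threshold are illustrative – real systems baseline many more dimensions:

```python
# Minimal sketch of baseline-then-anomaly detection: learn a
# per-user mean and standard deviation for daily event counts,
# then flag the latest day if it deviates beyond a threshold.
# The counts and threshold here are purely illustrative.
from statistics import mean, stdev

history = {  # daily login counts per user (illustrative)
    "alice": [3, 4, 2, 5, 3, 4, 3],
    "bob":   [1, 0, 2, 1, 1, 1, 48],  # last day looks anomalous
}

THRESHOLD = 3.0  # standard deviations from baseline

for user, counts in history.items():
    baseline, spread = mean(counts[:-1]), stdev(counts[:-1])
    latest = counts[-1]
    if spread and abs(latest - baseline) / spread > THRESHOLD:
        print(f"ALERT: {user} had {latest} events vs baseline {baseline:.1f}")
```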


Leveraging Threat Intelligence in Incident Response/Management [Final Paper]

We continue to investigate the practical use of Threat Intelligence (TI) within your security program. After tackling how to Leverage Threat Intel in Security Monitoring, we now turn our attention to incident response and management. In this paper we go deep into how your existing incident response and management processes can (and should) integrate adversary analysis and other threat intelligence sources to help narrow the scope of your investigations. We have also put together a snappy process map depicting how IR/M looks when you factor in external data.

To really respond faster you need to streamline investigations and make the most of your resources. That starts with an understanding of what information would interest attackers. From there you can identify potential adversaries and gather threat intelligence to anticipate their targets and tactics. With that information you can protect yourself, monitor for indicators of compromise (a minimal sketch of IOC matching appears at the end of this post), and streamline your response when an attack is (inevitably) successful. You will have incidents. If you can respond to them faster and more effectively, that’s a good thing, right? Integrating threat intel into the IR process is one way to do that.

We’d like to thank Cisco and Bit9 + Carbon Black for licensing the content in this paper. We are grateful that our clients see the value of supporting objective research to educate the industry. Without forward-looking organizations you would be on your own… or paying up to get behind the paywall of big research.

Check out the paper’s landing page, or download it directly: Leveraging Threat Intelligence in Incident Response/Management (PDF).
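As promised, here is a minimal sketch of matching log entries against indicators of compromise pulled from a threat intel feed. The indicators and log lines are made up for illustration:

```python
# Minimal sketch of using threat intelligence during response:
# match log entries against indicators of compromise (IOCs) from
# a feed to narrow the investigation. Indicators and log lines
# are made up for illustration (TEST-NET addresses, .invalid TLD).
iocs = {
    "ips": {"203.0.113.7", "198.51.100.23"},
    "domains": {"bad-example.invalid"},
}

log_lines = [
    "2014-11-12T10:02:11 outbound conn 10.1.1.5 -> 203.0.113.7:443",
    "2014-11-12T10:02:15 dns query good-example.com",
    "2014-11-12T10:03:02 dns query bad-example.invalid",
]

# Flag any line containing a known-bad IP or domain.
for line in log_lines:
    hits = [ioc for ioc in iocs["ips"] | iocs["domains"] if ioc in line]
    if hits:
        print(f"MATCH {hits}: {line}")
```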


New Research Paper: Secure Agile Development

Security teams are tightly focused on bringing security to applications, and on meeting compliance requirements in the delivery of applications and services. On the other hand, job #1 for software developers is to deliver code faster and more efficiently, with security a distant second. Security professionals and developers often share responsibility for security, but finding the best way to embed security into the software development lifecycle (SDLC) is not an easy challenge.

Agile frameworks have become the new foundation for code development, with an internal focus on ruthlessly rooting out tools and techniques that don’t fit this type of development. This means secure development practices, just like every other facet of development, must fit within the Agile framework – not the other way around. This paper offers an outline for security folks to understand development teams’ priorities and methodologies, and practical ways to work together within the Agile methodology. Here is an excerpt:

Over the past 15 years, the way we develop software has changed completely. Development processes evolved from Waterfall, to rapid development, to extreme programming, to Agile, to Agile with Scrum, to our current darling: DevOps. Each evolutionary step was taken to build better software by improving the software building process. And each step embraced changes in tools, languages, and systems to encourage increasingly agile processes, while discouraging slower and more cumbersome ones. The fast flux of development evolution gradually deprecated everything that impeded agility… including security. Agile has had an uneasy relationship with security, because the facets that promoted better software development in general broke existing techniques for building security into code. Agile frameworks are the new foundation for code development, with an internal focus on ruthlessly rooting out tools and techniques that don’t fit the model. So secure development practices, just like every other facet of development, must fit within the Agile framework – not the other way around.

We are also proud that Veracode has asked to license this content; without support like this we could not bring you this quality research free of charge and without registration. As with all our research, if you have questions or comments we encourage you to comment on the blog so open discussion can help the community. For a copy of the research download the PDF, or get a copy from our research library page on Secure Agile Development.


Summary: Comic Book Guy

Rich here.

I only consistently read comic books for a relatively short period of my life. I always enjoyed them as a kid but didn’t really collect them until sometime around high school. Before that I didn’t have the money to buy them month to month. I kept up a little in college, but I probably had less free capital as a freshman than in elementary school. Gas money and cheap dates add up crazy fast.

Much to my surprise, at the ripe old age of forty-something, I find myself back in the world of comics. It all started thanks to my kids and Netflix. Netflix has quite the back catalog of animated shows, including my all-time favorite, Spider-Man and His Amazing Friends. You know: Iceman and Firestar. I really loved that show as a kid, and from age three to four it was my middle daughter’s absolute favorite. Better yet, my kids also found Super Hero Squad, a weird and wonderful stylized comedy take on Marvel comics that ran for two seasons. It was one of those rare shows loaded with jokes targeting adults while also appealing to kids. It hooked both my girls, who then moved on to the more serious Avengers Assemble, which covered a bunch of the major comics events – including Secret Invasion, which ran as a season-long story arc.

My girls love all the comics characters and stories. Mostly Marvel, which is what I know, but you can’t really avoid DC. Especially Wonder Woman. Their favorite race is the Super Hero Run, where we all dress in costumes and run a 5K (I run; they ride in the Helicarrier, which civilians call a “jog stroller”). When it comes to ComiCon, my oldest will gut me with a Barbie if I don’t take her.

Then there are the movies. The kids are too young to see them all (mostly just Avengers), but I am stunned that the biggest movies today are all expressions of my childhood dreams. Good comic book movies? With plot lines that extend a decade or more? And make a metric ton of cash? Yes, decades. In case you hadn’t heard, Disney/Marvel announced their lineup through 2019: 2-3 films per year, with interlocking television shows on ABC and Netflix, all leading to a two-film version of the Infinity Wars. My daughter wasn’t born when Iron Man came out, and she will be 10 when the final Avengers (announced so far) is released.

Which is why I am back on the comics. Because I am Dad, and while I may screw up everything else, I will sure as hell make sure I can explain who the Skrulls are, and why Thanos wants the Infinity Gems. I am even learning more about the Flash and, please forgive me, Aquaman. There are few things as awesome as sharing what you love with your kids, and them sharing it right back. I didn’t force this on my kids – they discovered comics on their own, and I merely encouraged their exploration. The exact same thing is happening with Star Wars, and in a year I will get to take my kids to see the first new film with Luke, Leia, and Han since I was a kid. My oldest will even be the same age I was when my father took me to Star Wars for the first time.

No, those aren’t tears. I have allergies.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich in SC Magazine on Apple Security.
  • Adrian will be discussing Enterprise App Security on the 19th.
  • Webcast with Intel/Mashery November 18th on Data Centric Security.

Favorite Securosis Posts

  • Mike Rothman: Friday Summary: Halloween. Adrian and Emily get (yet) another dog. 😉
  • Rich: We are still low on posts, so I will leave it at that and tell you to read all of them this week 🙂

Other Securosis Posts

  • Building an Enterprise Application Security Program: Security Gaps.
  • Incite 11/5/2014: Be Like Water.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC [New Series].

Favorite Outside Posts

  • Mike Rothman: Don’t Get Old. I like a lot of the stuff Daniel Miessler writes. I don’t like the term ‘old’ in this case because that implies age. I think he is talking more about being ‘stuck’, which isn’t really a matter of age.
  • Rich: How an Agile Development Process Fits into the Security User Story. This is something I continue to struggle with as I dig deeper into Agile and DevOps. There is definitely room for more research into how to integrate security into user stories, and tying that to threat modeling. Maybe a project I should take up over the holidays.
  • Adrian Lane: Facebook, Google, and the Rise of Open Source Security Software. It’s interesting that Facebook is building this in-house. And contributing to the open source community. But remember they bought PrivateCore last year too. So the focus on examining in-memory processes and protecting memory indicates their feelings on security. Oh, and Rich is quoted in this too!

Research Reports and Presentations

  • Trends in Data Centric Security White Paper.
  • Leveraging Threat Intelligence in Incident Response/Management.
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • The Security Pro’s Guide to Cloud File Storage and Collaboration.
  • The 2015 Endpoint and Mobile Security Buyer’s Guide.
  • Analysis of the 2014 Open Source Development and Application Security Survey.
  • Defending Against Network-based Distributed Denial of Service Attacks.
  • Reducing Attack Surface with Application Control.
  • Leveraging Threat Intelligence in Security Monitoring.
  • The Future of Security: The Trends and Technologies Transforming Security.

Top News and Posts

  • FBI and Homeland Security shut down Silk Road 2, arrest alleged operator.
  • Apple comments on ‘Wirelurker’ malware, infected apps already blocked.
  • Accuvant and FishNet Security merging. That’s one BIG security VAR/services company.
  • NSA Director Says Agency Shares Vast Majority of Bugs it Finds. They have said a lot of things lately – hopefully this one is true.


Building an Enterprise Application Security Program: Security Gaps

This post discusses the common security domains of enterprise applications, areas where generalized security tools lack the depth to address application- and database-specific issues, and some advice on how to fill the gaps. But first I want to announce that Onapsis has asked to license the content of this research series. As always, we are pleased when people like what we write well enough to get behind our work and encourage our Totally Transparent Research style. With that, on with today’s post!

Enterprise applications typically address a specific business function: supply chain management, customer relationship management, inventory management, general ledger, business performance management, and so on. They may support thousands of users and tie into many other application platforms, but they are specialized applications of very high complexity. It takes years of study to understand the nuances of these systems: the functional components that comprise an application, how they are configured, and what a transaction looks like to that application. Security tools often specialize too, focusing on a specific type of analysis – such as malware detection – and applying it in particular scenarios such as network flow data, log files, or binary files. They are generally designed to address threats across IT infrastructure at large; very few move up the (OSI) stack to look at generic presentation or application layer threats. And fewer still have enough knowledge of specific application functions to understand a complex platform like Oracle’s PeopleSoft or SAP’s ERP systems. Security vendors pay lip service to understanding the application layer, but their competence typically ends at the network service port. Generic events and configuration data outside applications may be covered; internals generally are not. Let’s dig into specific examples:

Understanding Application Usage

The biggest gap, and the most pressing need, is that most monitoring systems do not understand enterprise applications. To continuously monitor enterprise applications you need to collect the appropriate data and then make sense of it. This is a huge problem because data collection points vary by application, and each platform speaks a slightly different ‘language’. For example, platforms like SAP speak in codes. To monitor SAP you need to understand SAP operation codes such as T-codes – and there are a lot of different codes. You also need to know where to collect these requests, because application and database log files generally do not provide the necessary information. As another example, most Oracle applications rely heavily on stored procedures to efficiently process data within the database. Monitoring tools may see a procedure name and a set of variables in the user request, but unless you know what operation that procedure performs, you have no idea what is happening. Again, you need to monitor the connection between the application platform and the database, because audit logs do not provide a complete picture of events; then you need to figure out what the query, code, or procedure request means. Vendors who claim “deep packet inspection” for application security skirt understanding how the application actually works. Many use metadata (including time of day, user, application, and geolocation) collected from the network, possibly in conjunction with something like an SAP code, to evaluate user requests.
They essentially monitor daily traffic to develop an understanding of ‘normal’, then attempt to detect fraud or inappropriate access without understanding the task being requested. This is certainly helpful for compliance and change management use cases, but not particularly effective for fraud or misuse detection. And it tends to generate false positive alerts. Products designed to monitor applications and databases actually understand their targeted application, and provide much more precise detection and enforcement. Building application-specific monitoring tools is difficult and specialized work. But when you understand the application request you can focus your analysis on specific actions – order entry, for example – where insider fraud is most prevalent. This speeds up detection, lessens the burden of data collection, and makes security operations teams’ jobs easier (see the sketch at the end of this post).

Application Composition

Throughout this research we use the term ‘database’ a lot. Databases provide the core storage, search, and data management features for applications. Every enterprise application relies on a database of some sort. In fact databases are complex applications themselves. To address enterprise application security and compliance you must address the issues and requirements of both the database and the application platforms.

Application Deployments

We seldom see two instances of the same application deployed the same way. They are tailored to each company’s needs, with configuration and user provisioning to support specific requirements. This complicates configuration and vulnerability scanning considerably. What’s more, application and database assessment scans are very different from typical OS and network assessments, requiring different evaluation criteria. The differences lie both in how information is collected and in the depth and breadth of the rule set. All assessment products examine software revision levels, but generic assessment tools stop at listing vulnerabilities and known issues, based exclusively on software versions. Understanding an application’s real issues requires a deeper look. For example, test and sample applications often introduce back doors into applications, which attackers then exploit. Software revision level cannot tell you what risks are posed by vulnerable modules; only a thorough analysis of a full software manifest can do that. Separation of duties between application, database, and IT administrators cannot be determined by scanning a network port or even hooking into LDAP – it requires interrogation of applications and persistent data storage. Network configuration deficiencies, weak passwords, and public accounts are all easily spotted by traditional scanners – provided they have a suitable policy to check – but scanners do not discover data ownership rights, user roles, whether auditing is enabled, unsafe file access rights, or dozens of other well-known issues. Data collection is the other major difference. Most assessment scanners offer a basic network port scanner – for cases where agents are inappropriate – to interrogate the application. This provides a quick, non-invasive way to discover basic patch information. Application assessment scanners look for application-specific settings, both on disk
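As promised, here is a minimal sketch of monitoring that actually understands transaction codes: map each code to its business meaning and alert when a user invokes one outside their role. The codes, roles, and events are illustrative stand-ins rather than a real SAP integration:

```python
# Minimal sketch of application-aware monitoring: map transaction
# codes to business meaning and alert when a user invokes a code
# outside their role. Codes, roles, and the event format are
# illustrative stand-ins, not a real SAP integration.
SENSITIVE = {
    "SU01": "user administration",  # illustrative T-code mapping
    "F110": "payment run",
}

ROLE_ALLOWED = {
    "clerk":   set(),
    "finance": {"F110"},
    "basis":   {"SU01"},
}

events = [  # (user, role, transaction code) -- illustrative
    ("jdoe",   "clerk",   "F110"),
    ("asmith", "finance", "F110"),
]

# Alert on sensitive codes invoked outside the user's role.
for user, role, tcode in events:
    if tcode in SENSITIVE and tcode not in ROLE_ALLOWED.get(role, set()):
        print(f"ALERT: {user} ({role}) ran {tcode}: {SENSITIVE[tcode]}")
```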


Incite 11/5/2014: Be Like Water

You want it, and you want it now. So do I. Whatever it is. We live in an age of instant gratification. You don’t need to wait for the mailman to deliver letters – you get them via email. If you can’t wait the 2 days for Amazon Prime shipping, you order it online and pick it up at one of the few remaining brick-and-mortar stores. Record stores? Ha! Book stores? Double ha!! We live in the download age. You want it, you buy it (or not), and you download it. You have it within seconds.

But what happens when you don’t get what you want, or (egads!) when you have to wait? You are disappointed. We all are. We get locked into that thing. It’s the only outcome we can see. Maybe it’s a thing, maybe it’s an activity. Maybe it’s a reaction from someone, or more money, or a promotion. It could be anything, but you want it, and you get pissy when you don’t get it – now!

The problem comes down to attachment. Disappointment happens when you don’t get the desired outcome in the timeframe you want. Disappointment leads to unhappiness, which leads to sickness, and so it goes. I have made a concerted effort to stop attaching myself to specific outcomes. Sure, there are goals I have and things I want to achieve. But I no longer give myself a hard time when I don’t attain them. I don’t consider myself a failure when things don’t go exactly as I plan. At least I try not to…

But I was struggling to find an analogy for this philosophy, until earlier this week. I was in a discussion in a private Facebook group, and I figured out how to frame the concept in a way I can easily remember and rely on when my mind starts running amok. I think many of us fall into the trap of seeing a desirable outcome and getting attached to it. I know I do. I’m trying to flow like water. Water doesn’t care where it ends up. It goes along the path that provides the least resistance at any given time. It’s not that we don’t need resistance from time to time to grow – rather that we need to be flexible enough to adapt to the reality of the moment.

Be like water. Water takes the shape of whatever vessel it’s in. Water flows. Water has no predetermined goal and can change form as needed. And as waves crash, they show the awesome power of harnessed water. The analogy also works for me because I like being by the water, and the sound of water calms me. But I am not the only one who likes the water – Bruce Lee figured this out way before me, and talked about it in this classic interview.

Maybe the concept works for you, and maybe it doesn’t. It’s fine either way for me – I’m not attached to a particular outcome…

–Mike

Photo credit: “The soothing sound of flowing water” originally uploaded by Ib Aarmo

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can watch it on YouTube. Take the hour – your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
  • October 27 – It’s All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can’t Handle the Gartner
  • July 22 – Hacker Summer Camp
  • July 14 – China and Career Advancement
  • June 30 – G Who Shall Not Be Named
  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon
  • May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC: Introduction
  • Building an Enterprise Application Security Program: Introduction; Use Cases
  • Security and Privacy on the Encrypted Network: The Future is Encrypted
  • Secure Agile Development: Deployment Pipelines and DevOps; Building a Security Tool Chain; Process Adjustments; Working with Development; Agile and Agile Trends; Introduction

Newly Published Papers

  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Security Pro’s Guide to Cloud File Storage and Collaboration
  • The 2015 Endpoint and Mobile Security Buyer’s Guide
  • Open Source Development and Application Security Analysis
  • Advanced Endpoint and Server Protection
  • Defending Against Network-based DDoS Attacks
  • The Future of Security

Incite 4 U

Shiny attack maps for everyone: I have to hand it to Bob Rudis and Alex Pinto for lampooning vendors’ attack maps. They have released an open source attack map called IPew, which allows you to build your own shiny map to impress your friends and family. As they describe it, ‘IPew is an open source “live attack map” simulation built with D3 (Datamaps) that puts global cyberwar just a URL or git clone away for anyone wanting to display these good-for-only-eye-candy maps on your site.’ Humor aside, visualization is a key skill, and playing around with their tool may provide ideas for how you can present data in a more compelling way within your own shop. So it’s not all fun and games – but if you do need some time to decompress, set IPew to show the Internet having a bad day… War Games FTW. – MR

Not for what you think: Occasionally we need to call BS on a post, and Antone Gonsalves on Fraudster Protection


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.