
Summary: News… and pulling an AMI from Packer and Jenkins

Rich here. Before I get into tech content, a quick personal note. I just signed up for my first charity athletic event, and will be riding 250 miles in 3 days to support challenged athletes. I've covered the event costs, so all donations go right to the cause. Click here if you are interested in supporting the Challenged Athletes Foundation (and my first attempt at fundraising since I sold lightbulbs for the Boy Scouts. Seriously. Lightbulbs. Really crappy ones which burned out in months, making it very embarrassing to ever hit that neighborhood again. Then again, that probably prepared me for a career in security sales).

Publishing continues to be a little off here at Securosis as we all wrestle with summer vacations and work trips. That said, instead of the Tool of the Week I'm going with a Solution of the Week this time, because I ran into what I think is a common technical issue I couldn't find covered well anyplace else. With that, let's jump right in…

Top Posts for the Week

This is the single most important news item of the week. Microsoft won their long-standing case against the Department of Justice (they should form a club with Apple). This means US-based cloud providers operating overseas do not need to break local laws for US government data requests. For now. Huge Win: Court Says Microsoft Does Not Need To Respond To US Warrant For Overseas Data

Azure continues to push on encryption: Always Encrypted now generally available in Azure SQL Database

Datadog was breached. It looks like they made smart decisions on password hashing, but they reset everything just to be safe. Here's one take: Every (Data)dog Has its Day

Jerry Gamblin runs a nifty script when he launches cloud servers to lock them down quickly. You could take his script and easily convert it for images, or for the configuration automation tool of your choice. My First 10 Seconds On A Server – JGamblin.com

This isn't security specific, but it's some good history on the origins of AWS. How AWS came to be

Solution of the Week

As I was building the deployment pipeline lab for our cloud security training at Black Hat, I ran into a small integration issue that I was surprised I could not find documented anyplace else. So I consider it my civic duty to document it here.

The core problem arises when you use Jenkins and Packer to build Amazon Machine Images (AMIs). I previously wrote about Jenkins and Packer. The flow is that you make a code (or other) change, which triggers Jenkins to start a new build, which uses Packer to create the image. The problem is that there is no built-in way to pull the image ID out of Packer/Jenkins and pass it on to the next step in your process.

Here is what I came up with. This won't make much sense unless you actually use these tools, but keep it as a reference in case you ever go down this path. I assume you already have Jenkins and Packer working together.

When Packer runs, it outputs the image ID to the console, but that output isn't made available as a variable you can access in any way. Jenkins is also weird about how you create variables to pass on to other build steps. This process pulls the image ID from the stored console output, stores it in a file in the workspace, then allows you to trigger other builds and pass the image ID as a parameter.

Install the following additional plugins in Jenkins: the Post-Build Script plugin and the Parameterized Trigger plugin. Get your API token for Jenkins by clicking on your name > Configure. Make sure your job cleans the workspace before each build (it's an environment option). Then create a post-build task and choose "Execute a set of scripts". Adjust the following code, replacing the username and password with your API credentials, then paste it into the "Execute Shell" field. (This was for a throwaway training instance I've already terminated, so these embedded credentials are worthless. Give me a little credit please.)

wget --auth-no-challenge --user= --password= http://127.0.0.1:8080/job/Website/lastBuild/consoleText
export IMAGE_ID=$(grep -P -o -m 1 '(?<=AMI:\s)ami-.{8}' consoleText)
echo IMAGE_ID=$IMAGE_ID >> params.txt

The wget calls the Jenkins API, which returns the console text, which includes the image ID (which we grep out). Jenkins can run builds on slave nodes, but the console text is stored on the master, which is why it isn't directly accessible some other way. The image ID is now in the params.txt file in the workspace, so any other post-build steps can access it. If you want to pass it to another job, you can use the Parameterized Trigger plugin to pass the file. In our training class we add other AWS-specific information to that file to run automated deployment using some code I wrote for rolling updates.

This isn't hard, and I saw blog posts saying "pull it from the console text", but without any specifics on how to access the text or what to do with the ID afterwards so you can use it in other post-build steps or jobs. In our case we do a bunch more, including launching an instance from the image for testing with Gauntlt, and then the rolling update itself if all tests pass.
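To make that hand-off concrete, here is a minimal sketch of what a downstream job's shell step might look like once the Parameterized Trigger plugin injects IMAGE_ID from params.txt. This is not the actual class code – the instance type, subnet ID, and cleanup logic are placeholder assumptions, and it presumes the AWS CLI is installed and credentialed on the build node:

#!/bin/bash
# Downstream Jenkins job: smoke-test a freshly built AMI.
# IMAGE_ID arrives as a build parameter via the Parameterized Trigger plugin.
set -euo pipefail

# Launch a throwaway instance from the new image.
# (t2.micro and subnet-12345678 are placeholders -- substitute your own.)
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id "$IMAGE_ID" \
  --instance-type t2.micro \
  --subnet-id subnet-12345678 \
  --query 'Instances[0].InstanceId' \
  --output text)

# Block until the instance is running before pointing tests at it.
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

# Pull its private IP for the test harness (Gauntlt or similar).
TEST_IP=$(aws ec2 describe-instances \
  --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PrivateIpAddress' \
  --output text)
echo "Testing image $IMAGE_ID on $TEST_IP"

# ... run security and functional tests here, then clean up ...
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"

If the tests pass, the same pattern extends naturally to the rolling update step: the image ID rides along in params.txt, and each downstream job treats it as just another build parameter.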
Securosis Blog Posts This Week

Managed Security Monitoring: Selecting a Service Provider
Building a Threat Intelligence Program [New Paper]
Incite 6/29/16: Gone Fishin' (Proverbially)
Managed Security Monitoring: Use Cases

Other Securosis News and Quotes

I was quoted at Threatpost on the Datadog breach: Datadog Forces Password Reset Following Breach

Training and Events

We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning:

Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Managed Security Monitoring: Selecting a Service Provider

Based on the discussion in our first post, you have decided to move toward a managed security monitoring service. Awesome! That was the easy part. Now you need to figure out what kind of deployment model makes sense, and then do the hard work of actually selecting the best service provider for you. That's an important distinction to get straight up front. Vendor selection is about your organization. We know it can be easier to just go with a brand name. Or a name in the right quadrant to pacify senior management. Or the cheapest option. But none of those might be the best choice for your requirements. So the selection process requires an open mind and doing the work. You may end up with the brand name. Or the cheapest one. But at least you'll know you got the best fit.

Deployment Options

The deployment decision really comes down to two questions:

Who owns the security monitoring platform? Who buys the monitoring platform? Is it provided as part of a service, or do you have to buy it up front? Who is in charge of maintenance? Who pays for upgrades? What about scaling up? Are you looking at jumping onto a multi-tenant monitoring platform owned by your service provider?

Where is the SOC, and who staffs it? The other key question concerns operation of the security monitoring platform. Is the central repository and console on your premises? Does it run in your service provider's data center? Does it run in the cloud? Who fields the staff, especially if some part of the platform will run at your site?

Here are the options, which depend on how you answered the questions above:

Traditional: The customer buys and operates the security monitoring platform. Alternatively the provider might buy the platform and charge the customer monthly, but that doesn't affect operations. Either way the monitoring platform runs on the customer premises, staffed by the customer. This is not managed security monitoring.

Hybrid: The customer owns the monitoring platform, which resides on the customer's premises, but the service provider manages it. The provider handles alerts and is responsible for maintenance and uptime of the system.

Outsourced: The service provider owns the platform that resides on the customer's premises. Similar to the hybrid model, the provider staffs the SOC and assumes responsibility for operation and maintenance.

Single-tenant: The service provider runs the monitoring platform in their SOC (or the cloud), but each customer gets its own instance, and there is no commingling of security data.

Multi-tenant: The service provider has a purpose-built system to support many clients within the same environment, running in their SOC or the cloud. The assumption is that application security controls are built into the system to ensure customer data is accessible only to authorized users, but that's definitely something to check as part of your due diligence on the provider's architecture.

Selecting Your Provider

We could probably write a book about selecting (and managing) a security monitoring service provider, and perhaps someday we will. But for now here are a few things to think about:

Scale: You want a provider who can support you now and scale with you later. Having many customers roughly your size, as well as a technology architecture capable of supporting your plans, should be among your first selection criteria.

Viability: Similarly important is your prospective provider's financial stability.
Given the time and burden of migration, and the importance of security monitoring, having a provider go belly up would put you in a precarious situation. Many managed security monitoring leaders are now part of giant technology companies, so this isn't much of an issue any more. But if you are working with a smaller player, make sure you are familiar with their financials.

Technology architecture: Does the provider use their own home-grown technology platform to deliver the service? Is it a commercial offering they customized to meet their needs as a provider – perhaps adding capabilities such as multi-tenancy? Did they design their own collection device, and does it support all your security/network/server/database/application requirements? Where do they analyze and triage alerts? Is it all within their own system, or do they run a commercial monitoring platform? How many SOCs do they have, and how do they replicate data between sites? Understand exactly how their technology works so you can assess whether it fits your particular use cases and scalability requirements.

Staff expertise: It's not easy to find and retain talented security analysts, so be sure to vet the background of the folks the provider will use to handle your account. Obviously you can't vet them all, but understand the key qualifications of the analyst team – things like years of experience, years with the provider, certifications, ongoing training requirements, etc. Also make sure to dig into their hiring and training regimens – over time they will need to hire new analysts and quickly get them productive, to deal with industry growth and the inevitable attrition.

Industry specialization: Does this provider have many clients in your industry? This is important because there are many commonalities in both traffic dynamics and attack types within an industry, and you should leverage the provider's familiarity. Given the maturity of most managed security offerings, it is reasonable to expect a provider to have a couple dozen similar customers in your industry.

Research capabilities: One reason to consider a managed service is to take advantage of resources you couldn't afford yourself, which a provider can amortize across their customers. Security research and the resulting threat intelligence are good examples. Many providers have full-time research teams investigating emerging attacks, profiling them, and keeping their collection devices up to date. Get a feel for how large and capable a provider's research team is, how their services leverage its research, and how you can interact with the research team to get the answers you need.

Customization: A service provider delivers a reasonably standard service – leveraging


Building a Threat Intelligence Program [New Paper]

Threat Intelligence has made a significant difference in how organizations focus resources on their most significant risks. Yet far too many organizations continue to focus on very tactical use cases for external threat data. These help, but they underutilize the intelligence's capabilities and potential. The time has come to advance threat intelligence into a broader and more structured TI program, to ensure systematic, consistent, and repeatable value. A program must account for ongoing changes in attack indicators and keep up with evolution in adversaries' tactics.

Our Building a Threat Intelligence Program paper offers guidance for designing a program and systematically leveraging threat intelligence. This paper is all about turning tactical use cases into a strategic TI capability, to enable your organization to detect attacks faster.

We would like to thank our awesome licensees, Anomali, Digital Shadows, and BrightPoint Security, for supporting our Totally Transparent Research. It enables us to think objectively about how to leverage new technology in systematic programs to make your security consistent and reproducible. You can get the paper in our research library.


Incite 6/29/16: Gone Fishin’ (Proverbially)

It was a great Incite. I wrote it on the flight to Europe for the second leg of my summer vacation. I said magical stuff. Such depth and perspective, I even amazed myself. When I got to the hotel in Florence and went to post the Incite on the blog, it was gone. That's right: G. O. N. E. And it's not going to return. I was sore for a second. But I looked at Mira (she's the new love I mentioned in a recent Incite) and smiled. I walked outside our hotel and saw the masses gathered to check out the awe-inspiring Duomo. It was hard to be upset, surrounded by such beauty. It took 3 days to get our luggage after Delta screwed up a rebooking because our flight across the pond was delayed, which did make us upset. But losing an Incite? Meh. I was on vacation, so worrying about work just wasn't on the itinerary.

Over the years, I usually took some time off during the summer when the kids were at camp. A couple days here and there. But I would work a little each day, convincing myself I needed to stay current, or that I didn't want things to pile up and bury me upon my return. It was nonsense. I was scared to miss something. Maybe I'd miss out on a project or a speaking gig. It turns out I can unplug, and no one dies. I know that because I'm on my way back after an incredible week in Florence and Tuscany, and then a short stopover in Amsterdam to check out the city before re-entering life. I didn't really miss anything. Though I didn't totally unplug either. I checked email. I even responded to a few. But only things that were very critical and took less than 5 minutes.

Even better, my summer vacation isn't over. It started with a trip to the Jersey shore with the kids. We visited Dad and celebrated Father's Day with him. That was a great trip, especially since Mira was able to join us for the weekend. Then it was off to Europe. And the final leg will be another family trip for the July 4th holiday. All told, I will be away from the day-to-day grind for close to 3 weeks. I highly recommend a longer break to regain sanity.

I understand that's not really feasible for a lot of people. Fortunately getting space to recharge doesn't require you to check out for 3 weeks. It could be a long weekend without your device. It could just be a few extra date nights with a significant other. It could be getting to a house project that just never seems to get done. It's about breaking out of routine, using the change to spur growth and excitement when you return. So gone fishin' is really a metaphor for breaking out of your daily routine to do something different. Though I will take it literally over the July 4 holiday. There will be fishing. There will be beer. And it will be awesome. For those of you in the US, have a safe and fun July 4. For those of you elsewhere, watch the news – there are always a few Darwin Awards given out when you mix a lot of beer with fireworks.

–Mike

Photo credit: "Gone Fishing" from Jocelyn Kinghorn

Security is changing. So is Securosis. Check out Rich's post on how we are evolving our business.

We've published this year's Securosis Guide to the RSA Conference. It's our take on the key themes of this year's conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube.
Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

May 31 – Where to Start?
May 2 – What the hell is a cloud anyway?
Mar 16 – The Rugged vs. SecDevOps Smackdown
Feb 17 – RSA Conference – The Good, Bad and Ugly
Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
Nov 16 – The Blame Game
Nov 3 – Get Your Marshmallows
Oct 19 – re:Invent Yourself (or else)
Aug 12 – Karma
July 13 – Living with the OPM Hack
May 26 – We Don't Know Sh–. You Don't Know Sh–
May 4 – RSAC wrap-up. Same as it ever was.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Managed Security Monitoring – Use Cases
Evolving Encryption Key Management Best Practices – Use Cases; Part 2; Introduction
Incident Response in the Cloud Age – In Action; Addressing the Skills Gap; More Data, No Data, or Both?; Shifting Foundations
Understanding and Selecting RASP – Buyer's Guide; Integration; Use Cases; Technology Overview; Introduction
Maximizing WAF Value – Management; Deployment; Introduction

Recently Published Papers

Shining a Light on Shadow Devices
Building Resilient Cloud Network Architectures
Building a Vendor (IT) Risk Management Program
SIEM Kung Fu
Securing Hadoop
Threat Detection Evolution
Building Security into DevOps
Pragmatic Security for Cloud and Hybrid Networks
Applied Threat Intelligence
Endpoint Defense: Essential Practices
Best Practices for AWS Security
The Future of Security

Incite 4 U

More equals less? Huh? Security folks are trained that 'more' is rarely a good


Managed Security Monitoring: Use Cases

Many security professionals feel the deck is stacked against them. Adversaries continue to improve their techniques, aided by plentiful malware kits and botnet infrastructures. Continued digitization at pretty much every enterprise means everything of interest is on some system somewhere. Don't forget the double whammy of mobile and cloud, which democratizes access without geographic boundaries and takes away the one bastion of control: the traditional data center. Are we having fun yet?

Of course the news isn't all bad – security has become very high profile. Getting attention and resources can sometimes be a little too easy – life was simpler when we toiled away in obscurity, bemoaning that senior management didn't understand or care about security. That's clearly not the case today, as you get ready to present the security strategy to the board of directors. Again. And after that's done you get to meet with the HR team trying to fill your open positions. Again.

In terms of the fundamentals of a strong security program, we have always believed in the importance of security monitoring to shorten the window between compromise and detection of compromise. As we posted in our recent SIEM Kung Fu paper: Security monitoring needs to be a core, fundamental aspect of every security program.

There are a lot of different concepts of what security monitoring actually is. It certainly starts with log aggregation and SIEM, although many organizations are looking to leverage advanced security analytics (either built into their SIEM or using third-party technology) to provide better and faster detection. But that's not what we want to tackle in this new series, titled Managed Security Monitoring. It's not a question of whether to do security monitoring, but of the most effective way to monitor resources. Given the challenges of finding and retaining staff, the increasingly distributed nature of the data and systems that need to be monitored, and the rapid march of technology, it's worth considering whether a managed security monitoring service makes sense for your organization. The fact is that, under the right circumstances, a managed service presents an interesting alternative to racking and stacking another set of SIEM appliances. We will go through drivers, use cases, and deployment architectures for those considering managed services. And we will provide cautions for areas where a service offering might not meet expectations.

As always, our business model depends on forward-looking companies who understand the value of objective research. We'd like to thank IBM Security Systems for agreeing to potentially license this paper once completed. We'll publish the research using our Totally Transparent Research methodology, which ensures our work is done in an open and accessible manner.

Drivers for Managed Security Monitoring

We have no illusions about the amount of effort required to get a security monitoring platform up and running, or what it takes to keep one current and useful, given the rapid adaptation of attackers and the automated attack tools in use today. Many organizations feel stuck in a purgatory of sorts: reacting without sufficient visibility, yet without the time to invest in gaining that much-needed visibility into threats. A suboptimal situation, and often the initial trigger for discussion of managed services. Let's be a bit more specific about situations where managed security monitoring is worth a look.
Lack of internal expertise: Even having people to throw at security monitoring may not be enough. They need to be the right people – with expertise in triaging alerts, validating exploits, closing simple issues, and knowing when to pull the alarm and escalate to the incident response team. Reviewing events, setting up policies, and managing the system all take skills that come with training and time on the security monitoring product. Clearly this is not a skill set you can just pick up anywhere – finding and keeping talented people is hard – so if you don't have sufficient expertise internally, that's a good reason to check out a service-based alternative.

Scalability of the existing technology platform: You might have a decent platform, but perhaps it can't scale to what you need for real-time analysis, or has limitations in capturing network traffic or other voluminous telemetry. And for organizations still using a first-generation SIEM with a relational database backend (yes, they are still out there), you face a significant and costly upgrade to scale the system. With a managed service offering, scale is not an issue – any sizable provider is handling billions of events per day, and scalability of the technology isn't your problem – so long as the provider hits your SLAs.

Predictable costs: To be the master of the obvious, the more data you put into a monitoring system, the more storage you'll need. The more sites you want to monitor and the deeper you want visibility into your network, the more sensors you need. Scaling up a security monitoring environment can become costly. One advantage of managed offerings is predictable costs. You know what you're monitoring and what it costs. You don't have variable staff costs, nor do you have out-of-cycle capital expenses to deal with new applications that need monitoring.

Technology risk transference: You have been burned before by vendors promising the world without delivering much of anything. That's why you are considering alternatives. A managed monitoring service enables you to focus on the functionality you need, instead of trying to determine which product can meet your needs. Ultimately you only need to be concerned with the application and the user experience – all that other stuff is the provider's problem. Selecting a provider effectively becomes an insurance policy to minimize your technology investment risk. Similarly, if you are worried about your ops team's ability to keep a broad security monitoring platform up and running, you can transfer operational risk to the provider, who assumes responsibility for uptime and performance – so long as your SLAs are structured properly.

Geographically dispersed small sites: Managed services also interest organizations needing to support many small locations without much local technical expertise.


Summary: Modifying rsyslog to Add Cloud Instance Metadata

Rich here.

Quick note: I basically wrote an entire technical post for the Tool of the Week, so feel free to skip down if that's why you're reading.

Ah, summer. As someone who works at home and has children, I'm learning the pains of summer break. Sure, it's a wonderful time without homework fights and after-school activities, but it also means all 5 of us are in the house nearly every day. It's a bit distracting. I mean, do you have any idea how to tell a 3-year-old you cannot ditch work to play Disney Infinity on the Xbox? Me neither, which explains my productivity slowdown.

I've actually been pretty busy at 'real work', mostly building content for our new Advanced Cloud Security course (it's sold out, but we still have room in our Hands-On class), plus a bunch of recent cloud security assessments for various clients. I have been seeing some interesting consistencies, and will try to write those up after I get these other projects knocked off. People are definitely getting a better handle on the cloud, but they still tend to make similar mistakes. With that, let's jump right in…

Top Posts for the Week

I haven't read this entire paper because I hate registering for things, but the public bits look interesting: DevOpsSec: Securing software through continuous delivery

Yeah, I'm on a logging kick. Here's SANS on Docker container logging: Docker Containers Logging

The folks at Signal Sciences are running a great series on Rugged DevOps. They're starting to build out a reference model, and this first post hits some key areas to get you thinking: A Reference Model for DevOps

I hesitated a bit on whether to include this post from Evident.io because it's a bit promotional, but it has some good content on changes you need to deal with around assessment in cloud computing. Keep in mind our Trinity platform (which we have in the labs) might potentially compete with Evident (but maybe not), so I'm also pretty biased on this content. Cloud Security: This Isn't Your Father's Datacenter

While the bulk of the market is AWS, Google and Microsoft are pushing hard and innovating like mad. The upside is that any cloud provider who focuses on the enterprise needs to meet certain security baselines. Diane Greene wants to put the enterprise front and center in Google Cloud strategy

Tool of the Week

I'm going to detour a bit and focus on something all you admin types are very familiar with: rsyslog. Yes, this is the default system logger for a big chunk of the Linux world, something most of us don't think much about. But as I build out a cloud logging infrastructure I found I needed to dig into it to make some adjustments, so here is a trick to insert critical Amazon metadata into your logs (usable on other platforms, but I can only show so many examples).

Various syslog-compatible tools generate standard log files and allow you to ship them off to a remote collector. That's the core of a lot of performance and security monitoring. By default, log lines look something like this:

Jun 24 00:21:27 ip-172-31-40-72 sudo: ec2-user : TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=/bin/cat secure

That's a line from the security log of a Linux instance. See a problem? The entry includes the host name (internal IP address) of the instance, but in the cloud a host name or IP address isn't nearly as canonical as in traditional infrastructure. Both can be quite ephemeral, especially if you use auto scale groups and the like. Ideally you capture the instance ID or equivalent on other platforms, and perhaps also other metadata such as the internal or external IP address currently associated with the instance.

Fortunately it isn't hard to fix this up. The first step is to capture the metadata you want. In AWS just visit:

http://169.254.169.254/latest/meta-data/

to get it all, or use something like:

curl http://169.254.169.254/latest/meta-data/instance-id

to get the instance ID. Then you have a couple options. One is to change the host name to be the instance ID. Another is to append it to entries by changing the rsyslog configuration (/etc/rsyslog.conf on CentOS systems), as below, to add a %INSTANCEID% environment variable to the hostname. (Yes, this means you need to set INSTANCEID as an environment variable, and I haven't tested this because I need to post the Summary before I finish, so you might need a little more text manipulation to make it work… but this should be close.)

template(name="forwardFormat" type="string"
    string="<%PRI%>%TIMESTAMP:::date-rfc3339% %INSTANCEID%-%HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
)

There are obviously a ton of ways you could slice this, and you need to add it to your server build configurations to make it work (using Ansible/Chef/Puppet/Packer/whatever). But the key is to capture and embed the instance ID and whatever other metadata you need. If you don't care about strict syslog compatibility, you have more options. The nice thing about this approach is that it captures all messages from all the system sources you normally log, without modifying individual message formats.

If you use something like the native Amazon/Azure/Google instance logging tools, you don't need to bother with any of this. Those tools tend to capture the relevant metadata for you (e.g., Amazon's CloudWatch Logs agent, Azure's Log Analyzer, or Google's StackDriver). Check the documentation to make sure you configure them correctly. But many clients want to leverage existing log management, so this is one way to get the essential data.
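If you want to experiment with this, here is a minimal, untested sketch of wiring it together at boot. Rather than relying on an environment variable inside rsyslog, it fetches the instance ID from the metadata service and writes it into the template as literal text. The config file path and template name are assumptions – adapt them to your own setup:

#!/bin/bash
# Sketch: stamp the EC2 instance ID into rsyslog's forwarding template.
# Run once at boot (cloud-init, rc.local, or your configuration management tool).

# Grab the instance ID from the metadata service.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Write a forwarding template with the instance ID baked in as literal text,
# sidestepping the question of environment variables in rsyslog templates.
# (/etc/rsyslog.d/00-forward.conf is an assumed location.)
cat > /etc/rsyslog.d/00-forward.conf <<EOF
template(name="forwardFormat" type="string"
    string="<%PRI%>%TIMESTAMP:::date-rfc3339% ${INSTANCE_ID}-%HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
)
EOF

# Restart rsyslog so the new template takes effect.
service rsyslog restart

You would still need to reference forwardFormat in whatever forwarding action ships your logs off the instance, but this keeps the metadata capture in one place at boot time.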
Securosis Blog Posts this Week

Shining a Light on Shadow Devices [New Paper]
Understanding and Selecting RASP: Buyers Guide
Getting the SWIFT Boot

Other Securosis News and Quotes

Another quiet week…

Training and Events

We are running two classes at Black Hat USA. Early bird pricing ends in a month – just a warning:

Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Shining a Light on Shadow Devices [New Paper]

Visible devices are only some of the network-connected devices in your environment. There are hundreds, quite possibly thousands, of other devices you don't know about on your network. You don't scan them periodically, and you have no idea of their security posture. Each one can be attacked, and might provide an adversary with an opportunity to gain presence in your environment. Your attack surface is much larger than you thought. In our Shining a Light on Shadow Devices paper, we discuss how attacks on these devices can become an issue on your network, along with some tactics to


Getting the SWIFT Boot

As long as I have been in security and following the markets, I have observed that no one says security is unimportant. Not out loud, anyway. But their actions usually show a different view. Maybe there is a little more funding. Maybe somewhat better visibility at the board level. But mostly security gets a lot of lip service. In other words, security doesn't matter. Until it does.

The international interbank payment system called SWIFT has been successfully hit multiple times by hackers, and a few other attempts have been foiled. Now they are going to start turning the screws on member banks, because SWIFT has finally realized they can be very secure themselves but still get pwned. It doesn't help when the New York Federal Reserve gets caught up in a ruse due to lax security at a bank in Bangladesh. So now the lip service is becoming threats: member banks will have their access to SWIFT revoked if they don't maintain a sufficient security posture.

Ah, more words. Will this be like the words uttered every time someone asks if security is important? Or will there be actual action behind them? That action needs to include specific guidance on what sufficient security actually looks like. This is especially important for banks in emerging countries, which may not have a good idea of where to start. And yes, those organizations are out there. The action also needs to involve some level of third-party assessment. Self-assessment doesn't cut it.

I think SWIFT can take a page from the Payment Card Industry. The initial PCI-DSS, and the resulting work to get laggards over a (low) security bar, did help. It's not an ongoing sustainable answer, because at some point the assessments became a joke, and the controls required by the standard have predictably failed to keep pace with attacks. But security at a lot of these emerging banks is a dumpster fire, and the folks who work with them know where the weakest links are. Actions speak much louder than words, so watch for actions.

Photo credit: "Boots" originally uploaded by Rob Pongsajapan


Understanding and Selecting RASP: Buyers Guide

Before we jump into today's post, we want to thank Immunio for expressing interest in licensing this content. This type of support enables us to bring quality research to you free of charge. If you are interested in licensing this Securosis research as well, please let us know. And we want to thank all of you who have been commenting throughout this series – we have received many good comments and questions. We have in fact edited most of the posts to integrate your feedback, and added new sections to address your questions. This research is certainly better for it!


Building Resilient Cloud Network Architectures [New Paper]

Building Resilient Cloud Network Architectures builds on our Pragmatic Security for Cloud and Hybrid Networks research, focusing on cloud-native network architectures that provide security and availability which are either impossible or economically infeasible in traditional data centers. We would like to thank Resilient Systems, an IBM Company, for licensing the content in this paper. We built the paper using our Totally Transparent Research model, leveraging what we've learned building cloud applications over the past 4 years. You can get the paper from the landing page in our research library.

