Securosis

Research

3 Envelopes

I really enjoyed Thom Langford’s recent post Three Envelopes, One CISO, on the old parable about preparing three envelopes to defer blame for bad things – until you cannot shift it, when you take the bullet. In the CISO’s case it is likely to be a breach. So first blame your predecessor, though I have found that only works for about 6 months. If you get that long a honeymoon, then by the time you have been in the seat for 6 months it is your problem. For the second breach, blame your team. Of course this is limiting – you need them to work for you, but it’s a question of survival at this point, right? When the third breach comes around, you prepare 3 new envelopes, because you are done. Though most folks only get one breach now – especially if they bungle the response. But that’s not Thom’s point, nor is it mine. He brings the discussion back around to the recent Sony breach. Everyone seems to want to draw and quarter a CISO for all sorts of ills. It may be well-deserved, but the rush to judgement doesn’t really help anything, does it? Especially now that it seems to have been a highly sophisticated attack, which Mandiant called ‘unprecedented’. So did the CISOs do themselves any favors? Probably not. But as Thom says, we seem to want to chop down the CISO as soon as something goes wrong, rather than seeing it in the context of the business overall. Let’s wait and see what actually happened before declaring his Career Is So Over, and also appreciate that security breaches are not always the result of poor information security, but often simply a risk taken by the business that didn’t pay off. And with that I open the second envelope Rich gave me when I started at Securosis… Photo credit: “tiny envelope set: radioactive flora” originally uploaded by Angela


Monitoring the Hybrid Cloud: Migration Planning

We will wrap up this series with a migration path to monitoring the hybrid cloud. Whether you choose to monitor the cloud services you consume, or go all the way and create your own SOC in the cloud, these steps will get you there. Let’s dive in.

Phase 1: Deploy Collectors

The first phase is to collect and aggregate the data. You need to decide how to deploy event collectors – including agents, ‘edge’ proxies, and reverse proxies – to gather information from cloud resources. Your goal is to gather events as quickly and easily as possible, so start with what you know. That basically means leveraging the capabilities of your current security solution(s) to get these new events into the existing system. The complexity is not around understanding these new data sources – flow data and syslog output are well understood. The challenge comes in adapting collection methods designed for on-premises services to a cloud model. If an agent or collector works with your cloud provider’s environment, either to consume cloud vendor logs or those created by your own cloud-based servers, you are in luck. If not, you will likely find yourself rerouting traffic to and/or from the cloud into a network proxy to capture events. Depending on the type of cloud service (such as SaaS or IaaS) you will have various means to access event data (such as logs and API connectivity), as outlined in our solution architectures post. We suggest collecting data directly from the cloud provider whenever possible, because much of that data is unavailable from instances or applications running inside the cloud. Monitoring agents can be deployed in IaaS or private cloud environments, where you control the full stack. But in other cloud models, particularly PaaS and SaaS, agents are generally not viable. There you need to rely on proxies that can collect data from all types of cloud deployments, provided you can route traffic through their data-gathering choke points.
It is decidedly suboptimal to insert choke points in your cloud network, but it may be necessary. Finally, you might instead be able to use remote API calls from an on-premise collector to pull events directly from your cloud provider. Not all cloud providers offer this access, and if they do you will likely need to code something yourself from their API documentation. Once you understand what is available you can figure out whether your source provides sufficiently granular data. Each cloud provider/vendor API, and each event log, offers a slightly different set of events in a slightly different format. Be prepared to go back to the future – you may need to build a collector based on sample data from your provider, because not all of the cloud vendors/providers offer logs in syslog or a similarly convenient format. Also look for feed filter options to screen out events you are not interested in – cloud services are excellent at flooding systems with (irrelevant) data. Our monitoring philosophy hasn’t changed: collect as much data as possible. Get everything the cloud vendor provides as the basis for security monitoring. Then fill in the deficiencies with agents, proxy filters, and cloud monitoring services as needed. This is a very new capability, so you will likely need to build API interface layers to your cloud service providers. Finally, keep in mind that using proxies and/or forcing cloud traffic through appliances at the ‘edge’ of your cloud is likely to require re-architecting both on-premise and cloud networks to funnel traffic in and out of your collection point. This also requires that disconnected devices (phones/tablets and laptops not on the corporate network) be configured to send traffic through the choke points / gateways, and cloud services must be configured to reject any direct access which bypasses these portals. If an inspection point can be bypassed it cannot effectively monitor security.
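A pull-based collector of the sort described above might look like the following sketch. The endpoint path, cursor field, and event types here are hypothetical – every provider’s API differs – and the HTTPS fetch is injected so you can wrap whatever authentication scheme your provider requires:

```python
def pull_events(fetch, token, since, drop=frozenset(), max_pages=10):
    """Pull events from a (hypothetical) provider REST API, following a
    'next' cursor for pagination and screening out noisy event types.

    fetch(path, token) performs the authenticated HTTPS GET and returns
    the decoded JSON body -- in production it would wrap urllib/requests
    plus your provider's credential handling.
    """
    events, cursor = [], None
    for _ in range(max_pages):
        path = f"/v1/events?since={since}" + (f"&cursor={cursor}" if cursor else "")
        page = fetch(path, token)
        # Feed filtering: drop event types we don't care about at the edge.
        events.extend(e for e in page["events"] if e.get("type") not in drop)
        cursor = page.get("next")
        if not cursor:
            break
    return events
```

Injecting `fetch` also makes the collector testable without network access, which matters when you are iterating against sample data from a provider rather than a live feed.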
Now that you have figured out your strategy and deployed basic collectors, it is time to integrate these new data sources into the monitoring environment.

Phase 2: Integrate and Monitor Cloud-based Resources

To integrate these cloud-based event sources into the monitoring solution you need to decide which deployment model will best fit your needs. If you already have an on-premise SOC platform and supporting infrastructure it may make sense to simply feed the events into your existing SIEM, malware detection, or other monitoring systems. But a few considerations might change your decision.

Capacity: Ensure the existing system can handle your anticipated event volume. SaaS and PaaS environments can be noisy, so expect a significant uptick in event volume, and account for the additional storage and processing overhead.

Push vs. Pull: Log Management and SIEM systems can collect events as remote systems and agents push them. The collector grabs the events, possibly performing some event preprocessing, and forwards the stream to the main aggregation point. But what if you cannot run a remote agent to push the data to you? Most cloud events must be pulled from the cloud service via an active API request. While pull requests are secured over HTTPS/TLS or even VPN connections, this doesn’t happen magically – a program or script must initiate the transfer. Additionally, the program (or script) must supply credentials or identity tokens to the cloud service. You need to know whether your current system is capable of initiating the pull request, and whether it can securely manage the remote API service credentials necessary to collect data.

Data Retention: Cloud services require network access, so you need to plan for when your connection is down – especially given the frequency of DoS attacks and network service outages. Make sure you understand the impact if you cannot collect remote events for a time.
If the connection goes down, how long can relevant security data be retained or buffered? You don’t want to lose that data. The good news is that many PaaS and IaaS platforms provide easy mechanisms to archive event feeds to long-term storage, to avoid event data loss, but
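The retention concern above is often handled with a local spool. Here is a minimal sketch of the spool-and-drain approach (at-least-once delivery – events may be replayed after a partial drain – and a real implementation would also bound the spool’s size and age):

```python
import json
import os

class SpoolingForwarder:
    """Forward events upstream; if the connection is down, spool them to
    local disk so nothing is lost, then drain the spool on reconnect.
    `send` is any callable that raises ConnectionError on network failure.
    Delivery is at-least-once: a partial drain can replay events."""

    def __init__(self, send, spool_path="events.spool"):
        self.send = send
        self.spool_path = spool_path

    def forward(self, event):
        try:
            self._drain()                    # replay anything buffered first
            self.send(event)
        except ConnectionError:
            with open(self.spool_path, "a") as f:
                f.write(json.dumps(event) + "\n")

    def _drain(self):
        if not os.path.exists(self.spool_path):
            return
        with open(self.spool_path) as f:
            buffered = [json.loads(line) for line in f]
        for ev in buffered:
            self.send(ev)                    # raises if still down; spool kept
        os.remove(self.spool_path)
```

The same pattern applies whether the upstream is an on-premise SIEM or a CloudSOC ingestion endpoint.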


Security Best Practices for Amazon Web Services

This is a short series on where to start with AWS security. We plan to release it as a concise white paper soon. It doesn’t cover everything, but is designed to kickstart and prioritize your cloud security program on Amazon. We do plan to write a much deeper paper next year, but we received several requests for something covering the fundamentals, so here you go…

Building on a Secure Foundation

Amazon Web Services is one of the most secure public cloud platforms available, with deep datacenter security and many user-accessible security features. Building your own secure services on AWS requires properly using what AWS offers, and adding additional controls to fill the gaps. Amazon’s datacenter security is extensive – better than many organizations achieve for their in-house datacenters. Do your homework, but unless you have special requirements you can feel comfortable with their physical, network, server, and services security. AWS datacenters currently hold over a dozen security and compliance certifications, including SOC 1/2/3, PCI-DSS, HIPAA, FedRAMP, ISO 27001, and ISO 9001. Never forget that you are still responsible for everything you deploy on top of AWS, and for properly configuring AWS security features. AWS is fundamentally different from even a classical-style virtual datacenter, and understanding these differences is key to effective cloud security. This paper covers the foundational best practices to get you started and help focus your efforts, but these are just the beginning of comprehensive cloud security.

Defend the Management Plane

One of the biggest risks in cloud computing is an attacker gaining access to the cloud management plane: the web interface and APIs used to configure and control your cloud. Fail to lock down this access and you might as well just hand over your datacenter to the bad guys.
Fortunately Amazon provides an extensive suite of capabilities to protect the management plane at multiple levels, including both preventative and monitoring controls. Unfortunately the best way to integrate these into existing security operations isn’t always clear, and it can be difficult to identify any gaps. Here are our start-to-finish recommendations.

Control access and compartmentalize

The most important step is to enable Multifactor Authentication (MFA) for your root account. For root accounts we recommend a hardware token, physically secured in a known location that key administrators can access in an emergency. Also configure your Security Challenge Questions with random answers which aren’t specific to any individual. Write down the answers and store them in a secure but accessible location. Then create separate administrator accounts using Amazon’s Identity and Access Management (IAM) for super-admins, and turn on MFA for each of those accounts. These are the admin accounts you will use day to day, saving your root account for emergencies. Create separate AWS accounts for development, testing, production, and other cases where you need separation of duties, then tie the accounts together using Amazon’s consolidated billing. This is a very common best practice. Locking down your root account means you always keep control of your AWS management, even if an administrator account is compromised. Using MFA on all administrator accounts means you won’t be compromised even if an attacker manages to steal a password. Using different AWS accounts for different environments and projects compartmentalizes risks while supporting cross-account access when necessary. Amazon’s IAM policies are incredibly granular, down to individual API calls. They also support basic logic, such as tying a policy to resources with a particular tag.
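For example, a single policy document can combine an MFA guardrail with tag-scoped permissions. A minimal sketch (the tag key/value are placeholders; `aws:MultiFactorAuthPresent` and `ec2:ResourceTag` are standard AWS condition keys, but validate against current IAM documentation before relying on this):

```python
import json

# Two statements: allow instance start/stop only on resources tagged
# Project=Alpha, and deny everything when the session lacks MFA.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAlphaInstances",
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/Project": "Alpha"}},
        },
        {
            "Sid": "DenyWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
    ],
}

print(json.dumps(policy, indent=2))
```

An explicit Deny always wins over an Allow in IAM evaluation, which is what makes the MFA statement an effective backstop.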
It can get complicated quickly, so aside from ‘super-admin’ accounts there are several other IAM best practices:

- Use the concept of least privilege and assign different credentials based on job role or function. Even if someone needs full administrative access sometimes, that shouldn’t be what they use day to day.
- Use IAM Roles when connecting instances and other AWS components together. This establishes temporary credentials which AWS rotates automatically. Also use roles for cross-account access; this allows a user or service in one AWS account to access resources in another, without having to create another account, and ties access to policies.
- Apply object-level restrictions using IAM policies with tags. Tag objects, and the assigned IAM policies are enforced automatically.
- For administrative functions, use different accounts and credentials for each AWS region and service.
- If you have a user directory, you can integrate it with AWS using SAML 2.0 for single sign-on. But be careful: this is most suitable for accounts that don’t need deep access to AWS resources, because you lose the ability to compartmentalize access using different accounts and credentials.
- Never embed Access Keys and Secret Keys in application code. Use IAM Roles, the Security Token Service, and other tools to eliminate static credentials. Many attackers now scan the Internet for credentials embedded in applications and virtual images, or even posted on code-sharing sites.

These are only a starting point, focused on root and key administrator accounts. Tying them to multifactor authentication is your best defense against most management plane attacks.

Monitor activity

Amazon provides three tools to monitor management activity within AWS. Enable all of them:

- CloudTrail logs all management (API) activity on AWS services, including Amazon’s own connections to your assets. Where available it provides complete transparency for both your organization’s and Amazon’s access.
- CloudWatch monitors the performance and utilization of your AWS assets, and ties tightly into billing. Set billing alarms to detect unusually high levels of activity. You can also send system logs to CloudWatch, but this isn’t recommended as a security control.
- Config is a new service that discovers services and configurations, and tracks changes over time. It is a much cleaner way to track configuration activity than CloudTrail.

CloudTrail and Config don’t cover all regions and services, so understand where the gaps are. As of this writing Config is still in preview, with minimal coverage, but both services expand their capabilities regularly. These features provide important data feeds, but most organizations use additional tools for overall collection and analysis, including log management and SIEM. As a
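To make the CloudTrail feed concrete, here is a minimal triage sketch. The field names (`eventName`, `eventTime`, `userIdentity`, `sourceIPAddress`) match CloudTrail’s record schema; the list of sensitive API calls is our own illustration and should be tuned for your environment:

```python
# API calls worth alerting on; illustrative -- extend for your environment.
SENSITIVE = {"DeleteTrail", "StopLogging", "CreateUser", "PutUserPolicy"}

def triage(record):
    """Return an alert string for a CloudTrail record that touches a
    sensitive API call, or None for everything else."""
    if record.get("eventName") not in SENSITIVE:
        return None
    who = record.get("userIdentity", {}).get("arn", "unknown")
    return (f"{record.get('eventTime')} {who} called "
            f"{record['eventName']} from {record.get('sourceIPAddress')}")
```

Calls like `StopLogging` and `DeleteTrail` deserve special attention because they are exactly what an attacker uses to blind your monitoring.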


Summary: 88 Seconds

Rich here. I don’t remember actually seeing Star Wars in the movie theater. I was six years old in 1977, and while I cannot remember the feelings of walking along the sticky theater floor, finding a seat I probably had to kneel on to see the screen from, and watching as the lights dimmed and John Williams assaulted my ears, I do remember standing with my father outside. In a line that stretched around the building. My lone image of this transformative day is of waiting near the back doors, my father beside me, wondering just what the big deal was. Memories of the film itself come from the television in the living room of my childhood home. Not from years later, when VCRs invaded suburbia and VHS vs. Beta made the evening news, but that year. 1977. When I watched my very own copy of Star Wars on a three-quarter-inch professional video deck connected to our TV. My father was recently shut out of a business he co-founded when his partner, who owned the majority share, decided to take everything. The company was contracting to place video decks on long-haul merchant ships and provide first-run movies to entertain the crews. The business fell apart after my dad left, and all he walked away with (so far as I know – he died when I was in high school) was that video player and three sets of tapes (each tape only held an hour). A documentary on the US Bicentennial celebration we attended as a family in NYC, the Wizard of Oz, and Star Wars. Imagine being the only kid in your neighborhood – heck, possibly the entire state – with a copy of Star Wars at home in 1977 or 1978 (it’s possible I got the tape in 78, but I’m pretty sure it was 77). Tapes of higher quality than VHS or Beta; not that it mattered with our TV. I watched Star Wars hundreds of times over the next few years. I watched it so many times that, to this day, I still start to get up to swap tapes every time the Millennium Falcon is pulled into the Death Star by the tractor beam. 
And, as has happened to so many others over the past 37 years, the film, and its sequels, didn’t merely influence my life, it defined it in many ways. It is hard to know how anything truly affects you in the long term. But I have to assume the philosophies of the fictional Jedi [Ed: Not entirely fictional. Wish fulfillment FTW!] pointed me in certain directions. To martial arts, public service, the study of Japanese history, an obsession with space and science, an attraction to women who kick ass, and a moral framework that prizes self-sacrifice and the protection of others. To bombing recklessly down a Pikes Peak hiking trail on my mountain bike, laughing hysterically as I dodged the trees like I was on a speeder bike. (I was working rescue – it was totally legit!). So the day after Thanksgiving I fired up my Apple TV, went to the Trailers app, and shed a few tears over the next 88 seconds. More tears than I expected. I never thought I would live to see a new Star Wars. A new story – not merely backstory with an inevitable ending. With the actors of my youth, playing the same characters. Written by the writer of Empire, and directed by the guy who saved Star Trek?!? And I certainly never thought I would be standing in line in a theater next December, holding the hand of my daughter, who will be the same age I was when it all started in 1977. (And her younger sister, but probably not the boy – he won’t even be 3 yet). I realize I have been geeking out a lot lately here in the Summary, but for good reason. These are the tools I used to define myself as I built my identity. Perhaps not the same tools you used, and not the only tools, but certainly some of the most influential. I no longer need to look back on them nostalgically. I don’t need to relive my youth. I can once again make them part of my future, and perhaps drag my own children along with me. It’s gonna be a hell of a year. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Nada.
No one loves us anymore.

Favorite Securosis Posts

Mike Rothman: Monitoring the Hybrid Cloud: Solution Architectures. These concepts will become a lot more important in 2015 as the lack of visibility in cloud-land becomes a higher profile issue.
Rich: Winding Down. Like Mike, I’m cramming, but also blocking some time to relax and refocus for the coming year. I can’t really say much, but it’s going to be a wild one.

Other Securosis Posts

Security Best Practices for Amazon Web Services
Monitoring the Hybrid Cloud: Technical Considerations
Firestarter: Numbness
Securing Enterprise Applications [New White Paper]

Favorite Outside Posts

Adrian Lane: Dog Follows Athletes. Not security but a great story.
Mike Rothman: Fixed vs. Growth: The Two Basic Mindsets that Shape Our Lives. A very interesting article about how you view the world. There is no single right answer, but understanding your mindset enables you to make decisions that work better for you.
Rich: The Sony Hack Is A Watershed Moment – Especially If North Korea Is Involved. Not really. Saudi Aramco was the watershed moment. The one that sent shock waves through government and the energy industry. But nothing grabs the headlines like Hollywood. Just imagine if they posted naked pictures of Seth Rogen and James Franco!

Research Reports and Presentations

Securing Enterprise Applications
Secure Agile Development
Trends in Data Centric Security White Paper
Leveraging Threat Intelligence in Incident Response/Management
Pragmatic WAF Management: Giving Web Apps a Fighting Chance
The Security Pro’s Guide to Cloud File Storage and Collaboration
The 2015 Endpoint and Mobile Security Buyer’s Guide

Analysis


Incite 12/3/2014: Winding Down

As I sit in yet another hotel, banging out yet another Incite, overlooking yet another city that isn’t home, this is a good time to look back on 2014, because this is my last scheduled trip of the year. It has been an interesting year. At this point the highs this year feel higher, and the lows lower. There were periods when I felt sick from the whiplash of ups and downs. That’s how life is sometimes. Of course my mindfulness practice helps me handle the turbulence with grace, and likely without much external indication of the inner gyrations. But in 5 years how will I look back on 2014? I have no idea. I have tried not to worry about things like the far future. At that point XX1 will be leaving for college, the twins will be driving, and I’ll probably have the same amount of gray hair. Sure, I will plan. But I won’t worry. I have been around long enough to know that my plans aren’t worth firing the synapses to devise them. In fact I don’t even write ‘plans’ down any more. It is now December, when most of us start to wind down the year, turning our attention to the next. We are no different at Securosis. For the next couple weeks we will push to close out projects that have to get done in 2014, and start working with folks on Q1 activities. Maybe we will even get to take some time off over the holidays. Of course vacation has a rather different meaning when you work for yourself and really enjoy what you do. But I will slow down a bit. My plan is to push through my handful of due writing projects over the next 2 weeks or so. I will continue to work through my strategy engagements. Then I will really start thinking about what 2015 looks like. I admit the slightly slower pace has given me the opportunity to be thankful for everything – certainly those higher highs, but also the lower lows. It’s all part of the experience. I can let it make me crazy, or I can accept the bumps as part of the process.
I guess all we can do each year is try to grow from every experience and learn from the stuff that doesn’t go well. For better and worse, I learned a lot this year. So I am happy as I write this, although I know happiness is fleeting – so I’ll enjoy the feeling while I can. And then I will get back to living in the moment – there really isn’t anything else. –Mike

Photo credit: “wind-up dog” originally uploaded by istolethetv

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can check it out on YouTube – take an hour to watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

November 25 – Numbness
October 27 – It’s All in the Cloud
October 6 – Hulk Bash
September 16 – Apple Pay
August 18 – You Can’t Handle the Gartner
July 22 – Hacker Summer Camp
July 14 – China and Career Advancement
June 30 – G Who Shall Not Be Named
June 17 – Apple and Privacy
May 19 – Wanted Posters and SleepyCon
May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
Network Security Gateway Evolution
Introduction

Monitoring the Hybrid Cloud: Evolving to the CloudSOC
Technical Considerations
Solution Architectures
Emerging SOC Use Cases
Introduction

Security and Privacy on the Encrypted Network
The Future is Encrypted

Newly Published Papers

Securing Enterprise Applications
Secure Agile Development
Trends in Data Centric Security
Leveraging Threat Intelligence in Incident Response/Management
The Security Pro’s Guide to Cloud File Storage and Collaboration
The 2015 Endpoint and Mobile Security Buyer’s Guide
Open Source Development and Application Security Analysis
Advanced Endpoint and Server Protection
The Future of Security

Incite 4 U

CISO in the clink… I love this headline: Can a CISO serve jail time? Duh, of course they can. If they deal meth out of the data center, they can certainly go to jail. Oh, can they be held accountable for breaches and negligence within their organization? Predictably, the answer is: it depends. If you are clearly negligent then all bets are off. But if you act in the best interests of the organization as you see them… it is hard to see how a CISO could be successfully prosecuted. That said, there is a chance, so you need to consult a lawyer before taking the job to understand where your liability begins and ends (based on your agreement), and then you can make an informed decision on whether to take the job. Or at least build some additional protection into your agreement. – MR

Productivity Killer: Sometimes we need a reminder that security isn’t all about data breaches and DDoS. Sometimes something far, far worse happens. Just ask Sony Pictures. Last week employees showed up to work to find their entire infrastructure compromised and offline. Yep, down to some black hat hax0rs graphic taking over everyone’s computer screens, just like in… er… the movies. I don’t find any humor in this.
Despite what Sony is doing to the Spider-Man franchise, they are just a company with people trying to get their jobs done, make a little scratch, and build products


Monitoring the Hybrid Cloud: Technical Considerations

New platforms for hybrid cloud monitoring bring both new capabilities and new challenges. We have already discussed some differences between monitoring the different cloud models, and some of the different deployment options available. This post will dive into some technical considerations for these new hybrid platforms, highlighting potential benefits and issues for data security, privacy, scalability, security analytics, and data governance. As cool as a ‘CloudSOC’ sounds, there are technical nuances which need to be factored into your decision and selection processes. There are also data privacy issues, because some types of information fall under compliance and jurisdictional regimes. Cloud computing and service providers can provide an opportunity to control infrastructure costs more effectively, but service model costs are calculated differently than for on-premise systems, so you need to understand the computing and storage characteristics of the SOC platform in detail to understand where you are spending money. Let’s jump into some key areas where you need to focus.

Data Security

As soon as event data is moved out of one cloud, such as Salesforce, into another, you need to consider the sensitivity of the data, which forces a decision on how to handle security. Using SSL or similar technology to secure the data in motion is the easy part – what to do with the data at rest, once it reaches the CloudSOC, is far more challenging. You can get some hints from folks who have already grappled with this question: security monitoring providers. These services either build their own private clouds to accommodate and protect client data, or leverage yet another IaaS or PaaS cloud to provide the infrastructure to store the data. Many of you will find the financial and scalability advantages of storing cloud data in a cloud service more attractive than moving all that collected data back to an on-premise system.
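One common pattern for the data-at-rest problem is to strip or tokenize sensitive fields before events ever leave your control, so the CloudSOC stores only opaque tokens. A minimal sketch (the field list is illustrative, and keyed HMAC tokens stand in for whatever tokenization scheme you actually adopt):

```python
import hashlib
import hmac

PII_FIELDS = {"username", "email", "src_ip"}   # illustrative field list

def tokenize(event: dict, key: bytes) -> dict:
    """Replace sensitive fields with deterministic keyed tokens before the
    event is shipped to the CloudSOC. Deterministic tokens preserve
    correlation (the same user always maps to the same token) without
    exposing the underlying value."""
    out = dict(event)
    for field in PII_FIELDS & event.keys():
        digest = hmac.new(key, str(event[field]).encode(), hashlib.sha256)
        out[field] = "tok_" + digest.hexdigest()[:16]
    return out
```

Because the tokens are deterministic per key, analytics in the CloudSOC can still group and correlate events by user or source without ever seeing the raw values; only whoever holds the key (and a lookup table, if reversal is required) can map tokens back.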
Regardless of whether you build your own CloudSOC or use a managed service, a key part of your security strategy will be the Service Level Agreements (SLAs) you establish with your providers. These agreements specify the security controls implemented by the provider, and if something is not specified in that agreement the provider has no obligation to provide it. An SLA is a good place to start, but be wary of unspecified areas – those are where gaps are most likely to emerge. Begin by comparing what the provider does with what you do internally today. We recommend you ask questions and get clear answers on every topic you don’t understand, because once you execute the agreement you have no further leverage to negotiate. And if you are running your own, make sure you carefully plan out your cloud security model to take advantage of what your IaaS provider offers. You may decide some data is too sensitive to be stored in the cloud without obfuscation (encryption) or removal (typically redaction, tokenization, or masking).

Data Privacy and Jurisdiction

Over and above basic data security for logs and event data, some countries have strict laws about how Personally Identifiable Information (PII) may be collected and stored, and some even require that PII not leave its country of origin – even encrypted. If you do business in these countries your team likely already understands the regulations today, but for a hybrid SOC deployment you also need to understand the locations of your primary and backup cloud data centers, and their regional laws as well. This can be incredibly confusing – particularly when data protection laws conflict between countries. Once you understand the requirements and where your cloud (including CloudSOC) providers are located, you can effectively determine which security controls you need.
Once again data encryption addresses many legal requirements, and data masking and tokenization services can remove sensitive data without breaking your applications or impairing security analytics. The key is to know where the data will be stored, to figure out the right mix of controls.

Automation and Scalability

If you have ever used Dropbox, Salesforce, or Google Docs, you know how easy it is to store data in the cloud. When you move beyond SaaS to PaaS and IaaS, you will find it is just as easy to spin up whole clusters of new applications and servers with a few clicks. Security monitoring – deploying collectors and setting up proxies for traffic filtering – likewise benefits from the cloud’s ease of use and agility. You can automate the deployment of collectors, agents, or other services; or agents can be embedded in the start-up process for new instances or technology stacks. Verification and discovery of services running in your cloud can be performed with a single API call. Automation is a hallmark of the cloud, so you can script pretty much anything you need. But getting started with basic collection is a long way from getting a CloudSOC into production. As you move to a production environment you will be constructing and refining initialization and configuration scripts to launch services, and defining templates which dictate when collectors or analytics instances are spun up or shut down via the magic of autoscaling. You will be writing custom code to call cloud APIs to collect events, and writing event filters if the API does not offer suitable options. It is basically back to the future, hearkening back to the early days of SIEM, when you spent as much time writing and tuning collectors as analyzing data. Archiving is also something you’ll need to define and implement. The cloud offers very granular control of which data gets moved from short-term to long-term storage, and when.
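Embedding the agent in the start-up process can be as simple as templating the instance’s user data so every autoscaled instance reports in automatically. A minimal sketch (the agent package name, config path, and collector endpoint are all hypothetical placeholders):

```python
# Hypothetical collector endpoint -- substitute your own ingestion URL.
COLLECTOR_URL = "https://collector.example.com/ingest"

# cloud-init 'cloud-config' user data: install the (hypothetical)
# monitoring agent, write its config, and start it on first boot.
USER_DATA_TEMPLATE = """#cloud-config
packages:
  - monitoring-agent
write_files:
  - path: /etc/monitoring-agent/agent.conf
    content: |
      endpoint = {endpoint}
      environment = {env}
runcmd:
  - systemctl enable --now monitoring-agent
"""

def render_user_data(env: str) -> str:
    """Render per-environment user data to attach to a launch template."""
    return USER_DATA_TEMPLATE.format(endpoint=COLLECTOR_URL, env=env)
```

Attach the rendered string to your launch configuration or template, and autoscaling takes care of the rest: instances that appear at 3 a.m. are monitored from their first boot, with no manual enrollment step.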
In the long run cloud models offer huge benefits for automation and on-demand scalability, but there are short-term set-up and tuning costs to get a CloudSOC working the way you need. A managed CloudSOC service will do much of this for you, at additional cost. Other Considerations Management Plane: The management plane for cloud services is a double-edged sword; IT admins now have the power to automate


Monitoring the Hybrid Cloud: Solution Architectures

The good old days: monitoring employees on company-owned PCs, accessing the company data center across corporate networks. You knew where everything was, and who was using it. And the company owned it all, so you could pretty much dictate where and how you performed security monitoring. With cloud and mobile? Not so much. To take advantage of cloud computing you will need to embrace new approaches to collecting event data if you hope to continue security monitoring. The sources, and the information they contain, are different. Equally important – although initially more subtle – is how to deploy monitoring services. Deployment architectures are critical to deploying and scaling any Security Operations Center: they define how you manage security monitoring infrastructure and what event data you can capture. Furthermore, how you deploy the SOC platform impacts performance and data management. There are a variety of different architectures, intended to meet the use cases outlined in our last post. So now we can focus on alternative ways to deploy collectors in the cloud, and the possibility of using a cloud security gateway as a monitoring point. Then we will take a look at the basic cloud deployment models for a SOC architected to monitor the hybrid cloud, focusing on how to manage pools of event data coming from distributed environments – both inside and outside the organization.

Data collection strategies

API: Automated, elastic, and self-service are all intrinsic characteristics of cloud computing. Most cloud service providers offer a management dashboard for convenience (and unsophisticated users), but advanced cloud features are typically exposed only via scripts and programs. Application Programming Interfaces (APIs) are the primary interfaces to cloud services; they are essential for configuring a cloud environment, configuring and activating monitoring, and gathering data.
These APIs can be called from any program or service, running either on-premise or within a cloud environment. So APIs are the cloud equivalent to platform agents, providing many of the same capabilities in the cloud where a ‘platform’ becomes a virtualized abstraction and a traditional agent wouldn’t really work. API calls return data in a variety of ways, including the familiar syslog format, JSON files, and even various formats specific to different cloud providers. Regardless, aggregating data returned by API calls is a new key source of information for monitoring hybrid clouds. Cloud Gateways: Hybrid cloud monitoring often hinges on a gateway – typically an appliance deployed at the ‘edge’ of the network to collect events. Leveraging the existing infrastructure for data management and SOC interfaces, this approach requires all cloud usage to first be authenticated to the cloud gateway as a choke point; after inspection, traffic is passed on to the appropriate cloud service. The resulting events are then passed to event collection services, comparable to on-premise infrastructure. This enables tight integration with existing security operations and monitoring platforms, and the initial authentication allows all resource requests to be tied to specific user credentials. Cloud 2 Cloud: A newer option is to have one cloud service – in this case a monitoring service – act as a proxy to another cloud service; tapping into user requests and parsing out relevant data, metadata, and application calls. Similarly to using a managed service for email security, traffic passes through a cloud provider to parse incoming requests before they are forwarded to internal or cloud applications. This model can incorporate mobile devices and events – which otherwise never touch on-premise networks – by passing their traffic through an inspection point before they reach cloud service providers such as Salesforce and Microsoft Azure. 
This enables the SOC to provide real-time event analysis and alert on policy violations, with collected events forwarded to the SOC (either on-premise or in the cloud) for storage. In some cases by proxying traffic these services can also add additional security – such as checks against on-premise identity stores, to ensure employees are still employed before granting access to cloud resources. App Telemetry: Like cloud providers, mobile carriers, mobile OS providers, and handset manufacturers don’t provide much in the way of logging capabilities. Mobile platforms are intended to be secured from outsiders and not leak information between apps. But we are beginning to see mobile apps developed specifically for corporate use, as well as company-specific mobile app containers on devices, which send basic telemetry back to the corporate customer to provide visibility into device activity. Some telemetry feeds include basic data about the device, such as jailbreak detection, while others append user ‘fingerprints’ to authorize requests for remote application access. These capabilities are compiled into individual mobile apps or embedded into app containers which protect corporate apps and data. This capability is very new, and will eventually help to detect fraud and misuse on mobile endpoints. Agents: You are highly unlikely to deploy agentry in SaaS or PaaS clouds; but there are cases where agents have an important role to play in hybrid clouds, private clouds, and Infrastructure as a Service (IaaS) clouds – generally when you control the infrastructure. Because network architecture is virtualized in most clouds, agents offer a way to collect events and configuration information when traditional visibility and taps are unavailable. Agents also call out to cloud APIs to check application deployment. 
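Several of the collection strategies above (API pulls, gateways, agents) ultimately reduce to the same plumbing: fetch a JSON event record from the provider and normalize it into a line your existing collector already parses. A minimal sketch in Python follows; the event fields and the syslog-style layout are illustrative, not any specific provider's actual schema:

```python
import json
from datetime import datetime, timezone

# Sketch: normalize a JSON audit event (as returned by a cloud
# provider's API) into a syslog-style line that an existing
# on-premise collector can parse. Field names are illustrative.

def to_syslog(event):
    ts = datetime.fromtimestamp(event["time"], tz=timezone.utc)
    return "{} cloud-api {}: user={} action={} src={}".format(
        ts.strftime("%b %d %H:%M:%S"),
        event["service"],
        event["user"],
        event["action"],
        event["source_ip"],
    )

raw = ('{"time": 1416528000, "service": "iam", "user": "adrian",'
       ' "action": "CreateAccessKey", "source_ip": "203.0.113.7"}')

print(to_syslog(json.loads(raw)))
# Nov 21 00:00:00 cloud-api iam: user=adrian action=CreateAccessKey src=203.0.113.7
```

This is the "custom code to call cloud APIs" mentioned earlier in miniature: the real work in production is pagination, filtering, and keeping timestamps and identities consistent across providers.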
Supplementary Services: Cloud SOCs often rely on third-party intelligence feeds to correlate hostile acts or actors attacking other customers, helping you identify and block attempts to abuse your systems. These are almost always cloud-based services that provide intelligence, malware analysis, or policies, based on analysis of data from a broad range of sites, to detect unwanted behavior patterns. This type of threat intelligence supplements hybrid SOCs and helps organizations detect potential attacks faster, but it is not itself a SOC platform. You can refer to our other threat intelligence papers to dig deeper into this topic. (link to threat intel research)

Deployment Strategies

The following are all common ways to deploy event collectors, monitoring systems, and operations centers to support security monitoring: On-premise: We will forgo


Firestarter: Numbness

SLmageddon V12. Polar Vortices. Ebola. APT123. We live in an era when every week it seems some massive new vulnerability, exploit, or attack is going to take down society. This week Rich, Mike, and Adrian tackle the endless progression of bad news, and how to maintain focus when everyone wants you to save the children. As a side note, if you haven’t seen or read about #feministhackerbarbie on Twitter… oh my, you need to. The audio-only version is up too.


Securing Enterprise Applications [New White Paper]

Securing enterprise applications is hard work. These are complex platforms, with lots of features and interfaces, reliant on database support, and often deployed across multiple machines. They leverage both code provided by the vendor and hundreds – if not thousands – of supporting code modules produced specifically for the customer’s needs. This makes every environment a bit different, and acceptable application behavior unique to every company. This is problematic because during our research we found that most organizations rely on security tools which work on the network fringes, around applications. These tools cannot see inside an application to fully understand its configuration and feature set, nor do they understand application-layer communication. This approach is efficient because a generic tool can see a wide variety of threats, but it misses subtle misuse and the most serious misconfigurations. We decided to discuss some of our findings. But constructing an entire application security program for enterprise applications would require 100 pages of research, and still fail to provide complete coverage. Many firms have had enterprise applications from Oracle and SAP deployed for a decade or more, so we decided to focus on areas where the security problems have changed, or where tools and technology have superseded approaches that were perfectly acceptable just a couple years ago. This research paper spotlights these problem areas and offers specific suggestions for how to close the security gaps. Here is an excerpt: Supply chain management, customer relationship management, enterprise resource management, business analytics, and financial transaction management are all multi-billion dollar application platforms unto themselves. Every enterprise depends upon them to orchestrate core business functions, and spends tens of millions of dollars on software and support.
We are beyond explaining why enterprise applications need security to protect these investments – it is well established that insiders and persistent adversaries target these applications. Companies invest heavily in these applications, the hardware to run them, and the teams to keep them up and running. They perform extensive risk analysis on their business implications and the costs of downtime. And in many cases their security investments are a byproduct of these risk profiles. Application security spending trends in the 1-2% range of total application investment, so we cannot say large enterprises don’t take security seriously – they spend millions and hire dedicated staff to protect these platforms. That said, their investments are not always optimal – enterprises may bet on solutions with limited effectiveness, without a complete understanding of the available options. It is time for a fresh look. In this research paper, Building an Enterprise Application Security Program, we take a focused look at the major facets of an enterprise application security program, and make practical suggestions for improving the efficiency and effectiveness of your security program. Our goal is to discuss specific security and compliance use cases for large enterprise applications, highlight gaps, and explain some application-specific tools to address these issues. This is not an exhaustive examination of enterprise application security controls, but rather a spotlight on common deficiencies in the core pillars of security controls and products. We would like to thank Onapsis for licensing this research. They reached out to us on this topic and asked to back this effort, which we are very happy about, because support like this enables us to keep doing what we do. You can download a copy of the research in our research library or download it directly: Securing Enterprise Applications. As always, if you have questions or comments, please drop us a line!


Friday Summary: November 21, 2014

Thus ends the busiest four weeks I have had since joining Securosis. A few conferences – AWS Re:Invent was awesome – a few client on-site days, meeting with some end customers, and about a half dozen webcasts, have together left me gasping for air. We all need a little R&R here and the holidays are approaching, so Firestarters and blog posts will be a bit sporadic. Technically it is still Friday, so here goes today’s (slightly late) summary. I am ignorant of a lot of things, and I thought this one was odd enough that I would ask more knowledgeable people in the community for assistance in explaining how this works. The story starts like this: A few months ago the new Lamborghini Huracan was introduced. Being a bit of a car weenie I went to the web site – http://huracan.lamborghini.com – in a Safari browser to see some pictures of the new car. Nice! I wish I could afford one – not that I would drive it much. I would probably just stare at it in the garage. Regardless, I had never been to the Lamborghini web site before. So I was a little surprised the next morning when I opened up a new copy of Firefox, which was trying to make a request to http://media.lamborghini.com. WTF? As I started to dig into this, I saw it was a repeating pattern. I visited http://www.theabsolutesound.com, and when I opened my newly installed Aviator browser, it tried to connect to http://media.theabsolutesound.com. Again, I had never been to that site in the Aviator browser, but recently visited it from FF. Amazon Web Services, Tech Target, and a dozen or so requests to connect to media.sitename.com or files.sitename.com popped up. But the capper was a few weeks later, when my computer tried to send the same request to media.theabsolutesound.com from an email client! That is malware behavior, likely adware! So is this behavior part of an evercookie Flash/Java exploit through persistent data?
I had Java disabled and Flash set to prompt before launch, so I thought a successful cross-browser attack via those persistence methods was unlikely. Of course it is entirely possible that I missed something. Anyway, if you know about this and would care to explain it – or have a link – I would appreciate an education on current techniques for browser/user tracking. I am clearly missing something. As a side note, as I pasted the huracan.lamborghini.com link into my text editor to write this post, an Apple services daemon tried to send a packet to gs-loc.apple.com with that URL in it. Monitor much? If you don’t already run an outbound firewall like Little Snitch, I highly recommend it. It is a great way to learn who sends what where and completely block lots of tracking nonsense. Puppy names. Everybody does it: before you get a new puppy you discuss puppy names. Some people even buy a book, looking for that perfect cute name to give their snugly little cherub. They fail to understand their mistake until after the puppy is in their home. They name the puppy from the perspective of pre-puppy normal life. Let me save you some trouble and provide some good puppy names for you, ones more appropriate for the post-puppy honeymoon: “Outside!” – the winner by a landslide. “Drop-It!” “Stinky!” “No, no, no!” “Bad!” “Not again!” “Stop!” “OWW, NO!” “Little bastard” “Come here!” “Droptheshoe!” “AAhhhhrrrr” “F&%#” or the swear word of your choice. Trust me on this – the puppy is going to think one of these is their name anyway, so starting from this list saves you time. My gift to you. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Adrian on Secure Agile Development, via SANS Adrian on Pragmatic WAF Management Securosis Posts Ticker Symbol: HACK. Incite 11/12/2014: Focus. Building an Enterprise Application Security Program: Recommendations. Changing Pricing (for the first time ever). Monitoring the Hybrid Cloud: Emerging SOC Use Cases.
Favorite Outside Posts Mike Rothman: Open Whisper Systems partners with WhatsApp to provide end-to-end encryption. The future will be encrypted. Even WhatsApp! Much to the chagrin of the NSA… Rich: Secure Agile Development. Think like a Developer. Maybe I have been spending too much time coding lately, but I love this concept. Needless to say we have lately been spending a lot of time on this area. Adrian Lane: Experimental Videogame Consoles That Let You Make One Move a Day. In a world of instant gratification, getting back to a slow pace is refreshing and awesome. Research Reports and Presentations Securing Enterprise Applications. Secure Agile Development. Trends in Data Centric Security White Paper. Leveraging Threat Intelligence in Incident Response/Management. Pragmatic WAF Management: Giving Web Apps a Fighting Chance. The Security Pro’s Guide to Cloud File Storage and Collaboration. The 2015 Endpoint and Mobile Security Buyer’s Guide. Analysis of the 2014 Open Source Development and Application Security Survey. Defending Against Network-based Distributed Denial of Service Attacks. Reducing Attack Surface with Application Control. Top News and Posts Microsoft patches critical bug that affects every Windows version since 95 Google Removes SSLv3 Fallback Support From Chrome Nasty Security Bug Fixed in Android Lollipop 5.0 Amazon Web Services releases key management service U.S. Marshals Using Fake, Airplane-based Cell Towers Facebook’s ‘Privacy Basics’ Is A Privacy Guide You May Actually Want To Read Hiding Executable Javascript in Images That Pass Validation UPnP Devices Used in DDoS Attacks


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.