Security and Privacy on the Encrypted Network: Use Cases

In the first post of this series on Security and Privacy on the Encrypted Network, we argued that organizations need to encrypt more traffic. Unfortunately the inability to see and inspect encrypted traffic impairs the ability to enforce security controls/policies and meet compliance mandates. So let's dig into how to strategically decrypt traffic in order to address a few key use cases – including enforcing security policies and monitoring for security and compliance. We also need to factor in the HR and privacy issues associated with decrypting traffic – you don't want to end up on the wrong side of a works council protesting your network security approach.

What to Decrypt

The first step in gaining visibility into the encrypted network is to set policies for when traffic will be decrypted and for how long. These decisions depend more on organizational culture than anything else, so you need to figure out what will work for your company. As security guys we favor more decryption than less, because that enables more comprehensive inspection… and therefore stronger monitoring and enforcement. But this is a company-specific choice.

Several factors influence decryption policies, most obviously the applications themselves. Let's briefly cover the main applications you are most likely to decrypt:

  • Webmail: Employees think they are doing your organization a favor by working at all hours of the day. But this always-on workforce requires use of personal devices, and may decide (however misguidedly) that it's easiest to send work documents to personal machines via personal email accounts. What could go wrong? And of course there are more malicious uses for webmail in a corporate environment. Endpoint DLP agents should catch this behavior, but if you don't have them deployed you should be inspecting outbound webmail traffic. The complication is that most webmail is now encrypted, so you need to decrypt sessions to inspect the traffic.
  • Web browsing: Similarly, social media sites and other web properties host user-generated content that may be protected or sensitive, so you need to ensure you can enforce policies on web application traffic as well. Many apps use SSL/TLS by default, so you will need to decrypt to enforce acceptable use policies and protect data.
  • SaaS apps: Business functions are increasingly migrating to Software as a Service (SaaS), so it is important to inspect SaaS traffic. You may want to enforce tighter content policies on SaaS apps, but first you need to decrypt their traffic for inspection and enforcement.
  • Custom apps: Similarly, your custom web apps (and partner web apps) require scrutiny, given the likelihood that they will use sensitive data. As with SaaS apps, you will want to enforce granular policies for these apps, which requires decryption.

To net it out: if an application has access to protected or critical data, you should decrypt and inspect its traffic. Within each application defined above, secondary attributes may demand or preclude decryption. For example, certain web apps/sites should be whitelisted because they handle private employee data, such as consumer healthcare and financial sites. Another policy trigger will be individual employees and groups. Maybe you don't want to decrypt traffic from the legal team, because it is likely protected and sensitive. And of course there are the folks who require exceptions. Like the CEO, who gets to do whatever he/she wants and may approve an exception for their own traffic.
There will be other exceptions (we guarantee it), so make sure your policies include the ability to selectively decrypt and enforce policies. For example, one app may need to always be inspected (regardless of user) based on the sensitivity of the data it can access. Likewise, perhaps one set of users won't have their traffic inspected at all. You should have the flexibility to decrypt traffic based on applications and users/groups, to accurately map policies to business processes and requirements. Regardless of the use case for decryption, you will want to be flexible about what gets decrypted, for whom, and when.

Where to Decrypt?

Now that you know what to decrypt, you need to determine the best place to do it. This decision hinges on the type of traffic (ingress vs. egress), which applications need to be inspected, and which devices you need to send data to for monitoring and/or enforcement.

  • Firewall: Firewalls frequently take on the decryption role because they are inline for both egress and ingress, and already enforce policies – especially as they evolve toward application-aware Next Generation Firewalls (NGFW). Unfortunately decryption is computationally demanding, which creates scaling issues even for larger and more powerful firewalls.
  • IPS: IPS is an inspection technology, so an inability to inspect encrypted traffic is a serious limitation. To address this, some organizations decrypt on their IPS devices. The IPS function is computationally demanding, so these devices tend to have more horsepower, which helps with decryption. But as with firewalls, scalability can be an issue.
  • Web filter: Due to their role, web filters need to decrypt traffic. They tend to be a bit underpowered compared to other devices in the DMZ, so unless there is minimal encrypted traffic they can run out of gas quickly.
  • Dedicated SSL decryption device: For organizations with a lot of encrypted traffic (which is becoming more common), a few dedicated decryption devices are available. They specialize in decrypting traffic without disrupting employees, offer flexibility in routing decrypted traffic to active controls (FW, IPS, web filter, etc.) or monitoring, and then re-encrypt traffic as it continues out to the Internet. We will get into the specifics of selecting and deploying these devices in our next post.
  • Cloud-based offerings: As Security as a Service (SECaaS) offerings mature, organizations have the option to decrypt in the cloud, removing their responsibility for scalability. On the other hand, this requires potentially sensitive data to be decrypted and inspected in the cloud, which may be a cultural or regulatory challenge.

Keep in mind that these devices are typically deployed at your network perimeter, so you remain blind to attackers encrypting internal reconnaissance traffic, or traffic moving laterally between internal systems.
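To make the policy discussion concrete, here is a minimal sketch (in Python, purely illustrative – real firewalls and decryption appliances express these rules in their own policy languages) of selective decryption driven by application category and user group. The category and group names are assumptions, not recommendations:

```python
# Illustrative selective decryption policy. Categories and groups are
# hypothetical -- map them to your own URL filtering categories and
# directory groups.

DECRYPT_CATEGORIES = {"webmail", "social-media", "saas", "custom-app"}
BYPASS_CATEGORIES = {"healthcare", "personal-finance"}   # privacy whitelist
BYPASS_GROUPS = {"legal", "executive-exception"}         # HR/works council carve-outs

def should_decrypt(app_category, user_groups):
    """Return True if the session should be decrypted and inspected."""
    if app_category in BYPASS_CATEGORIES:       # employee privacy wins
        return False
    if user_groups & BYPASS_GROUPS:             # exempted users/groups
        return False
    return app_category in DECRYPT_CATEGORIES   # decrypt only what policy names

# Webmail from a sales user gets decrypted; the same site for legal does not.
assert should_decrypt("webmail", {"sales"}) is True
assert should_decrypt("webmail", {"legal"}) is False
```

The point of the sketch is precedence: privacy bypasses are evaluated before inspection rules, so exceptions hold even when an application would otherwise be decrypted.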


Summary: Nantucket

Rich here.

There once was a boy from Securosis,
Who had an enormous… to do list.
With papers to write…
And much coding in sight…
It's time to bag out and just post this.

Okay, not my best work, but the day got away from me after spending all week out in the DC area teaching cloud security for Black Hat. Thanks to a plane change I didn't have WiFi on the way home, and unexpectedly lost a day of work. Next week will likely be our last Firestarter, Summary, and Incite for the year. We will still have some posts after that, then kick back into high gear come January. 2014 was our most insane year yet, with some of the best work of our careers (okay, mine, but I think Mike and Adrian are also pretty pleased). 2015 is already looking to give '14 a run for the money. And when you run your own small business, "run for the money" is a most excellent problem to have. Unless it involves cops. That gets awkward.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Another quiet week. We promise to return to our media whoring soon.

Favorite Securosis Posts

  • Mike Rothman: Summary: 88 Seconds. Rich + tears. I'd need to see that to believe it. But I get it. Very emotional to share such huge parts of your own childhood with your children.
  • Rich: 3 Envelopes.

Other Securosis Posts

  • Security and Privacy on the Encrypted Network: Use Cases.
  • Incite 12/10/2014: Troll off the old block.
  • Monitoring the Hybrid Cloud: Migration Planning.

Favorite Outside Posts

  • Mike Rothman: Sagan's Baloney Detection Kit. As an analyst, I make a living deciphering other folks' baloney. Carl Sagan wrote a lot about balancing skepticism with openness, and this post on brainpickings.org is a great summary. Though I will say sometimes I choose to believe in stuff that can't be proven. So your baloney may be my belief system, and we shouldn't judge either way.
  • Rich: Analyzing Ponemon Cost of Data Breach. Jay Jacobs is a true data analyst. The kind of person who deeply understands numbers and models. He basically rips the Ponemon cost of a breach number to shreds. Ponemon can do good work, but that number has always been clearly flawed, and Jay clearly illustrates why. Using numbers.

Research Reports and Presentations

  • Securing Enterprise Applications.
  • Secure Agile Development.
  • Trends in Data Centric Security White Paper.
  • Leveraging Threat Intelligence in Incident Response/Management.
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • The Security Pro's Guide to Cloud File Storage and Collaboration.
  • The 2015 Endpoint and Mobile Security Buyer's Guide.
  • Analysis of the 2014 Open Source Development and Application Security Survey.
  • Defending Against Network-based Distributed Denial of Service Attacks.
  • Reducing Attack Surface with Application Control.

Top News and Posts

Due to all the lost time this week I'm a bit low on stories, but here are some of the bigger ones.

  • Iran hacked the Sands Hotel earlier this year, causing over $40 million in damage.
  • Tripwire acquired by Belden. Didn't see that one coming. $710M.
  • Adobe Patches Flash Player Vulnerability Under Attack.
  • Treasury Dept: Tor a Big Source of Bank Fraud. No surprise, and that's one Tor vector that should be blocked.

Blog Comment of the Week

This week's best comment goes to Ke, in response to My $500 Cloud Security Screwup.

This is happening to me… Somehow the credential file was committed in git, which is strange because it is in the .gitignore file. I saw the email from AWS and deleted the key in 30 minutes and I found my account restricted at that time.
One day after, however, I found a $1k bill in my account. It is also odd that I did not receive the alert email even though I enabled an alert. I am a student and I cannot afford this money 🙁


Incite 12/10/2014: Troll off the old block

Every so often the kids do something that makes me smile. Evidently the Boss and I are doing something right, and they are learning from our examples. I am constantly amused by the huge personality XX2 has, especially when performing. She's the drama queen, but in a good way… most of the time. The Boy is all-in on football and pretty much all sports – which of course makes me ecstatic. He is constantly asking me questions about players I've never heard of (thanks Madden Mobile!); he even stays up on Thursday, Sunday, and Monday nights listening to the prime-time game on the iPod's radio in his room. We had no idea until he told me about a play that happened well after he was supposed to be sleeping. But he 'fessed up and told us what he was doing, and that kind of honesty was great to see.

And then there is XX1, who is in raging teenager mode. She knows everything and isn't interested in learning from the experience of those around her. Very much like I was as a teenager. Compared to some of her friends she is a dream – but she's still a teenager. Aside from her independence kick, she has developed a sense of humor that frequently cracks me up.

We all like music in the house. And as an old guy I just don't understand the rubbish the kids listen to nowadays. Twice a year I have to spend a bunch of time buying music for each of them. So I figured we'd try Spotify and see if that would let all of us have individual playlists and keep costs at a manageable level. I set up a shared account and we all started setting up our lists. It was working great. Until I was writing earlier this week, jamming to some new Foo Fighters (Sonic Highways FTW), and all of a sudden the playlist switched to something called Dominique by the Singing Nun. Then Spotify went berserk and cycled through some hardcore rap and dance. I had no idea what was going on. Maybe my phone got possessed or something. Then it clicked – XX1 was returning the favor for all the times I have trolled her over the years.

Yup, XX1 hijacked my playlist and was playing things she knew aren't anywhere near my taste. I sent her a text and she confessed to the prank. Instead of being upset I was very proud. Evidently you can't live with a prankster and not have some of that rub off. Now I have to start planning my revenge. But for the moment I will just enjoy the fact that my 14-year-old daughter still cares enough to troll me. I know soon enough getting any kind of attention will be a challenge.

–Mike

Photo credit: "Caution Troll Ahead" originally uploaded by sboneham

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can check it out on YouTube. Take an hour and watch it – your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast, The Firestarter? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail despite Adrian's best efforts to keep us on track.
  • November 25 – Numbness
  • October 27 – It's All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can't Handle the Gartner
  • July 22 – Hacker Summer Camp
  • July 14 – China and Career Advancement
  • June 30 – G Who Shall Not Be Named
  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon
  • May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • Network Security Gateway Evolution: Introduction
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC: Migration Planning; Technical Considerations; Solution Architectures; Emerging SOC Use Cases; Introduction
  • Security and Privacy on the Encrypted Network: The Future is Encrypted

Newly Published Papers

  • Securing Enterprise Applications
  • Secure Agile Development
  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Security Pro's Guide to Cloud File Storage and Collaboration
  • The 2015 Endpoint and Mobile Security Buyer's Guide
  • Open Source Development and Application Security Analysis
  • Advanced Endpoint and Server Protection
  • The Future of Security

Incite 4 U

Flowing downhill: Breaches are ugly. Losing credit card numbers, in particular, can be costly. But beyond the PCI fines, the banks are always lurking in the background. When Target lost 40 million credit cards, and the banks needed to rotate card numbers and reissue, it isn't like Target paid for that. And the card brands most certainly will never pay for that. No, they sit there, collect PCI fines (despite Target passing their assessment), and keep the cash. The banks were left holding the bag, and they are sure as hell going to try to get their costs covered. A group of banks just got court approval to move forward with a lawsuit to recover their damages from Target. They are seeking class action status. If the old TJX hack is any indication, they will get it and receive some level of compensation. Resolving all the costs of a breach like this plays out over years, and odds are we will have no idea of the true costs for at least 5.

Cloud security "grows up"? It's funny when the hype machine wants to push something faster than it is ready to go. Shimmy argued that Cloud security grows up,


3 Envelopes

I really enjoyed Thom Langford's recent post Three Envelopes, One CISO, on the old parable about preparing three envelopes to defer blame for bad things – until you cannot shift it any longer, and you take the bullet. In the CISO's case it is likely to be a breach. So first blame your predecessor – though I have found that only works for about 6 months. If you get that long a honeymoon, by the time you have been in the seat 6 months it is your problem. For the second breach, blame your team. Of course this is limiting – you need them to work for you, but it's a question of survival at this point, right? When the third breach comes around, you prepare 3 new envelopes, because you are done. Though most folks only get one breach now – especially if they bungle the response.

But that's not Thom's point, nor is it mine. He brings the discussion back around to the recent Sony breach. Everyone seems to want to draw and quarter a CISO for all sorts of ills. It may be well deserved, but the rush to judgment doesn't really help anything, does it? Especially now that it seems to have been a highly sophisticated attack, which Mandiant called 'unprecedented'. So did the CISOs do themselves any favors? Probably not. But as Thom says:

We seem to want to chop down the CISO as soon as something goes wrong, rather than seeing it in the context of the business overall. Let's wait and see what actually happened before declaring his Career Is So Over, and also appreciate that security breaches are not always the result of poor information security, but often simply a risk taken by the business that didn't pay off.

And with that I open the second envelope Rich gave me when I started at Securosis…

Photo credit: "tiny envelope set: radioactive flora" originally uploaded by Angela


Monitoring the Hybrid Cloud: Migration Planning

We will wrap up this series with a migration path to monitoring the hybrid cloud. Whether you choose to monitor the cloud services you consume, or go all the way and create your own SOC in the cloud, these steps will get you there. Let's dive in.

Phase 1: Deploy Collectors

The first phase is to collect and aggregate the data. You need to decide how to deploy event collectors – including agents, 'edge' proxies, and reverse proxies – to gather information from cloud resources. Your goal is to gather events as quickly and easily as possible, so start with what you know. That basically means leveraging the capabilities of your current security solution(s) to get these new events into the existing system. The complexity is not in understanding these new data sources – flow data and syslog output are well understood. The challenge comes in adapting collection methods designed for on-premise services to a cloud model. If an agent or collector works with your cloud provider's environment, either to consume cloud vendor logs or those created by your own cloud-based servers, you are in luck. If not, you will likely find yourself rerouting traffic to and/or from the cloud through a network proxy to capture events.

Depending on the type of cloud service (such as SaaS or IaaS) you will have various means to access event data (such as logs and API connectivity), as outlined in our solution architectures post. We suggest collecting data directly from the cloud provider whenever possible, because much of that data is unavailable from instances or applications running inside the cloud. Monitoring agents can be deployed in IaaS or private cloud environments, where you control the full stack. But in other cloud models, particularly PaaS and SaaS, agents are generally not viable. There you need to rely on proxies that can collect data from all types of cloud deployments, provided you can route traffic through their data-gathering choke points. It is decidedly suboptimal to insert choke points in your cloud network, but it may be necessary. Finally, you might instead be able to use remote API calls from an on-premise collector to pull events directly from your cloud provider. Not all cloud providers offer this access, and if they do you will likely need to code something yourself from their API documentation.

Once you understand what is available you can figure out whether your source provides sufficiently granular data. Each cloud provider/vendor API, and each event log, offers a slightly different set of events in a slightly different format. Be prepared to go back to the future – you may need to build a collector based on sample data from your provider, because not all cloud vendors/providers offer logs in syslog or a similarly convenient format. Also look for feed filter options to screen out events you are not interested in – cloud services are excellent at flooding systems with (irrelevant) data.

Our monitoring philosophy hasn't changed: collect as much data as possible. Get everything the cloud vendor provides as the basis for security monitoring, then fill in the deficiencies with agents, proxy filters, and cloud monitoring services as needed. This is a very new capability, so you will likely need to build API interface layers to your cloud service providers.
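Since most of this glue code ends up being custom anyway, here is a minimal sketch of what a pull-based collector might look like, assuming a hypothetical provider REST endpoint. The URL, authentication scheme, and JSON fields are all illustrative – every provider's API and event format will differ:

```python
# Illustrative pull collector for a hypothetical cloud provider event API.
import json
import time

import requests

API_URL = "https://api.example-cloud.com/v1/events"  # hypothetical endpoint
API_TOKEN = "..."  # fetch from a secrets store; never hard-code credentials


def forward_to_siem(event):
    # Placeholder: normalize to syslog/CEF and ship to your on-premise collector.
    print(json.dumps(event))


def pull_events(since):
    """Pull all events newer than `since`, following pagination."""
    headers = {"Authorization": "Bearer " + API_TOKEN}
    params = {"start_time": since, "page_size": 500}
    while True:
        resp = requests.get(API_URL, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        for event in body.get("events", []):
            forward_to_siem(event)
        token = body.get("next_page_token")
        if not token:            # provider drained; stop until the next poll
            break
        params["page_token"] = token


if __name__ == "__main__":
    while True:
        pull_events("2014-12-01T00:00:00Z")
        time.sleep(300)          # poll every 5 minutes; buffer locally on failure
```

Note that the collector initiates the connection and holds the API credentials – exactly the push vs. pull distinction discussed below, and why credential management for these scripts deserves as much care as any service account.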
Finally, keep in mind that using proxies and/or forcing cloud traffic through appliances at the 'edge' of your cloud is likely to require re-architecting both on-premise and cloud networks to funnel traffic in and out of your collection point. This also requires that disconnected devices (phones/tablets and laptops not on the corporate network) be configured to send traffic through the choke points/gateways, and that cloud services be configured to reject any direct access which bypasses these portals. If an inspection point can be bypassed, it cannot effectively monitor security.

Now that you have figured out your strategy and deployed basic collectors, it is time to integrate these new data sources into the monitoring environment.

Phase 2: Integrate and Monitor Cloud-based Resources

To integrate these cloud-based event sources into the monitoring solution you need to decide which deployment model best fits your needs. If you already have an on-premise SOC platform and supporting infrastructure, it may make sense to simply feed the events into your existing SIEM, malware detection, or other monitoring systems. But a few considerations might change your decision:

  • Capacity: Ensure the existing system can handle your anticipated event volume. SaaS and PaaS environments can be noisy, so expect a significant uptick in event volume, and account for the additional storage and processing overhead.
  • Push vs. Pull: Log Management and SIEM systems can collect events as remote systems and agents push events to them. The collector grabs the events, possibly performs some preprocessing, and forwards the stream to the main aggregation point. But what if you cannot run a remote agent to push the data to you? Most cloud events must be pulled from the cloud service via an active API request. While pull requests are secured via HTTPS, SSL, or even VPN connections, this doesn't happen magically – a program or script must initiate the transfer, and it must supply credentials or identity tokens to the cloud service. You need to know whether your current system can initiate the pull request, and whether it can securely manage the remote API service credentials necessary to collect data.
  • Data Retention: Cloud services require network access, so you need to plan for when your connection is down – especially given the frequency of DoS attacks and network service outages. Make sure you understand the impact if you cannot collect remote events for a time. If the connection goes down, how long can relevant security data be retained or buffered? You don't want to lose that data. The good news is that many PaaS and IaaS platforms provide easy mechanisms to archive event feeds to long-term storage, to avoid event data loss, but


Security Best Practices for Amazon Web Services

This is a short series on where to start with AWS security. We plan to release it as a concise white paper soon. It doesn't cover everything, but it is designed to kickstart and prioritize your cloud security program on Amazon. We do plan to write a much deeper paper next year, but we received several requests for something covering the fundamentals, so here you go…

Building on a Secure Foundation

Amazon Web Services is one of the most secure public cloud platforms available, with deep datacenter security and many user-accessible security features. Building your own secure services on AWS requires properly using what AWS offers, and adding additional controls to fill the gaps. Amazon's datacenter security is extensive – better than many organizations achieve for their in-house datacenters. Do your homework, but unless you have special requirements you can feel comfortable with their physical, network, server, and services security. AWS datacenters currently hold over a dozen security and compliance certifications, including SOC 1/2/3, PCI-DSS, HIPAA, FedRAMP, ISO 27001, and ISO 9001.

Never forget that you are still responsible for everything you deploy on top of AWS, and for properly configuring AWS security features. AWS is fundamentally different from even a classical-style virtual datacenter, and understanding these differences is key to effective cloud security. This paper covers the foundational best practices to get you started and help focus your efforts, but they are just the beginning of comprehensive cloud security.

Defend the Management Plane

One of the biggest risks in cloud computing is an attacker gaining access to the cloud management plane: the web interface and APIs used to configure and control your cloud. Fail to lock down this access and you might as well just hand over your datacenter to the bad guys. Fortunately Amazon provides an extensive suite of capabilities to protect the management plane at multiple levels, including both preventative and monitoring controls. Unfortunately the best way to integrate these into existing security operations isn't always clear, and it can be difficult to identify any gaps. Here are our start-to-finish recommendations.

Control access and compartmentalize

The most important step is to enable Multifactor Authentication (MFA) for your root account. For root accounts we recommend a hardware token which is physically secured in a known location which key administrators can access in case of emergency. Also configure your Security Challenge Questions with random answers which aren't specific to any individual. Write down the answers and store them in a secure but accessible location.

Then create separate administrator accounts using Amazon's Identity and Access Management (IAM) for super-admins, and turn on MFA for each of those accounts. These are the admin accounts you will use from day to day, saving your root account for emergencies. Create separate AWS accounts for development, testing, production, and other cases where you need separation of duties, then tie the accounts together using Amazon's consolidated billing. This is a very common best practice.

Locking down your root account means you always keep control of your AWS management, even if an administrator account is compromised. Using MFA on all administrator accounts means you won't be compromised even if an attacker manages to steal a password.
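To show what enforcing this looks like programmatically, here is a sketch using boto3 (the AWS SDK for Python) that attaches a deny-everything-without-MFA guardrail to a group of administrator accounts. The deny-without-MFA policy pattern is a common AWS idiom; the policy and group names here are hypothetical:

```python
# Illustrative guardrail: deny all API actions for sessions lacking MFA.
import json

import boto3

iam = boto3.client("iam")

deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        # Leave MFA management open so users can still enroll a device.
        "NotAction": ["iam:ListMFADevices", "iam:EnableMFADevice"],
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

resp = iam.create_policy(
    PolicyName="DenyWithoutMFA",              # hypothetical policy name
    PolicyDocument=json.dumps(deny_without_mfa),
)

# Attach to the group holding your day-to-day administrator accounts.
iam.attach_group_policy(
    GroupName="super-admins",                 # hypothetical group name
    PolicyArn=resp["Policy"]["Arn"],
)
```

Because the statement is a Deny, it overrides any Allow the administrators otherwise carry – an attacker with a stolen password but no token gets nothing useful.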
Using different AWS accounts for different environments and projects compartmentalizes risks while supporting cross-account access when necessary.

Amazon's IAM policies are incredibly granular, down to individual API calls. They also support basic logic, such as tying a policy to resources with a particular tag. It can get complicated quickly, so aside from 'super-admin' accounts there are several other IAM best practices:

  • Use the concept of least privilege and assign different credentials based on job role or function. Even if someone needs full administrative access sometimes, that shouldn't be what they use day to day.
  • Use IAM Roles when connecting instances and other AWS components together. This establishes temporary credentials which AWS rotates automatically.
  • Also use roles for cross-account access. This allows a user or service in one AWS account to access resources in another, without having to create another account, and ties access to policies.
  • Apply object-level restrictions using IAM policies with tags. Tag objects, and the assigned IAM policies are enforced automatically.
  • For administrative functions, use different accounts and credentials for each AWS region and service.
  • If you have a user directory you can integrate it with AWS using SAML 2.0 for single sign-on. But be careful: this is most suitable for accounts that don't need deep access to AWS resources, because you lose the ability to compartmentalize access using different accounts and credentials.
  • Never embed Access Keys and Secret Keys in application code. Use IAM Roles, the Security Token Service, and other tools to eliminate static credentials. Many attackers now scan the Internet for credentials embedded in applications, virtual images, and even posted on code-sharing sites.

These are only a starting point, focused on root and key administrator accounts. Tying them to multifactor authentication is your best defense against most management plane attacks.

Monitor activity

Amazon provides three tools to monitor management activity within AWS. Enable all of them:

  • CloudTrail logs all management (API) activity on AWS services, including Amazon's own connections to your assets. Where available it provides complete transparency for both your organization's and Amazon's access.
  • CloudWatch monitors the performance and utilization of your AWS assets, and ties tightly into billing. Set billing alarms to detect unusually high levels of activity. You can also send system logs to CloudWatch, but this isn't recommended as a security control.
  • Config is a new service that discovers services and configurations, and tracks changes over time. It is a much cleaner way to track configuration activity than CloudTrail.

CloudTrail and Config don't cover all regions and services, so understand where the gaps are. As of this writing Config is still in preview, with minimal coverage, but both services expand their capabilities regularly. These features provide important data feeds, but most organizations use additional tools for overall collection and analysis, including log management and SIEM. As a


Summary: 88 Seconds

Rich here.

I don't remember actually seeing Star Wars in the movie theater. I was six years old in 1977, and while I cannot remember the feelings of walking along the sticky theater floor, finding a seat I probably had to kneel on to see the screen, and watching as the lights dimmed and John Williams assaulted my ears, I do remember standing with my father outside. In a line that stretched around the building. My lone image of this transformative day is of waiting near the back doors, my father beside me, wondering just what the big deal was.

Memories of the film itself come from the television in the living room of my childhood home. Not from years later, when VCRs invaded suburbia and VHS vs. Beta made the evening news, but that year. 1977. When I watched my very own copy of Star Wars on a three-quarter-inch professional video deck connected to our TV. My father was recently shut out of a business he co-founded when his partner, who owned the majority share, decided to take everything. The company was contracting to place video decks on long-haul merchant ships and provide first-run movies to entertain the crews. The business fell apart after my dad left, and all he walked away with (so far as I know – he died when I was in high school) was that video player and three sets of tapes (each tape only held an hour): a documentary on the US Bicentennial celebration we attended as a family in NYC, the Wizard of Oz, and Star Wars.

Imagine being the only kid in your neighborhood – heck, possibly the entire state – with a copy of Star Wars at home in 1977 or 1978 (it's possible I got the tape in '78, but I'm pretty sure it was '77). Tapes of higher quality than VHS or Beta; not that it mattered with our TV. I watched Star Wars hundreds of times over the next few years. I watched it so many times that, to this day, I still start to get up to swap tapes every time the Millennium Falcon is pulled into the Death Star by the tractor beam. And, as has happened to so many others over the past 37 years, the film and its sequels didn't merely influence my life; they defined it in many ways.

It is hard to know how anything truly affects you in the long term. But I have to assume the philosophies of the fictional Jedi [Ed: Not entirely fictional. Wish fulfillment FTW!] pointed me in certain directions. To martial arts, public service, the study of Japanese history, an obsession with space and science, an attraction to women who kick ass, and a moral framework that prizes self-sacrifice and the protection of others. To bombing recklessly down a Pikes Peak hiking trail on my mountain bike, laughing hysterically as I dodged the trees like I was on a speeder bike. (I was working rescue – it was totally legit!)

So the day after Thanksgiving I fired up my Apple TV, went to the Trailers app, and shed a few tears over the next 88 seconds. More tears than I expected. I never thought I would live to see a new Star Wars. A new story – not merely backstory with an inevitable ending. With the actors of my youth, playing the same characters. Written by the writer of Empire, and directed by the guy who saved Star Trek?!? And I certainly never thought I would be standing in line in a theater next December, holding the hand of my daughter, who will be the same age I was when it all started in 1977. (And her younger sister, but probably not the boy – he won't even be 3 yet.)

I realize I have been geeking out a lot lately here in the Summary, but for good reason.
These are the tools I used to define myself as I built my identity. Perhaps not the same tools you used, and not the only tools, but certainly some of the most influential. I no longer need to look back on them nostalgically. I don't need to relive my youth. I can once again make them part of my future, and perhaps drag my own children along with me. It's gonna be a hell of a year.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Nada. No one loves us anymore.

Favorite Securosis Posts

  • Mike Rothman: Monitoring the Hybrid Cloud: Solution Architectures. These concepts will become a lot more important in 2015, as the lack of visibility in cloud-land becomes a higher-profile issue.
  • Rich: Winding Down. Like Mike, I'm cramming, but also blocking some time to relax and refocus for the coming year. I can't really say much, but it's going to be a wild one.

Other Securosis Posts

  • Security Best Practices for Amazon Web Services.
  • Monitoring the Hybrid Cloud: Technical Considerations.
  • Firestarter: Numbness.
  • Securing Enterprise Applications [New White Paper].

Favorite Outside Posts

  • Adrian Lane: Dog Follows Athletes. Not security, but a great story.
  • Mike Rothman: Fixed vs. Growth: The Two Basic Mindsets that Shape Our Lives. A very interesting article about how you view the world. There is no single right answer, but understanding your mindset enables you to make decisions that work better for you.
  • Rich: The Sony Hack Is A Watershed Moment – Especially If North Korea Is Involved. Not really. Saudi Aramco was the watershed moment. The one that sent shock waves through government and the energy industry. But nothing grabs the headlines like Hollywood. Just imagine if they posted naked pictures of Seth Rogen and James Franco!

Research Reports and Presentations

  • Securing Enterprise Applications.
  • Secure Agile Development.
  • Trends in Data Centric Security White Paper.
  • Leveraging Threat Intelligence in Incident Response/Management.
  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • The Security Pro's Guide to Cloud File Storage and Collaboration.
  • The 2015 Endpoint and Mobile Security Buyer's Guide.
  • Analysis


Incite 12/3/2014: Winding Down

As I sit in yet another hotel, banging out yet another Incite, overlooking yet another city that isn't home, this is a good time to look back on 2014, because this is my last scheduled trip of the year. It has been an interesting year. At this point the highs feel higher, and the lows lower. There were periods when I felt sick from the whiplash of ups and downs. That's how life is sometimes. Of course my mindfulness practice helps me handle the turbulence with grace, and likely without much external indication of the inner gyrations.

But in 5 years how will I look back on 2014? I have no idea. I try not to worry about things like the far future. At that point XX1 will be leaving for college, the twins will be driving, and I'll probably have the same amount of gray hair. Sure, I will plan. But I won't worry. I have been around long enough to know my plans aren't worth firing the synapses to devise them. In fact I don't even write 'plans' down any more.

It is now December, when most of us start to wind down the year, turning our attention to the next. We are no different at Securosis. For the next couple weeks we will push to close out projects that have to get done in 2014 and start working with folks on Q1 activities. Maybe we will even get to take some time off over the holidays. Of course vacation has a rather different meaning when you work for yourself and really enjoy what you do. But I will slow down a bit. My plan is to push through my handful of due writing projects over the next 2 weeks or so. I will continue to work through my strategy engagements. Then I will really start thinking about what 2015 looks like.

The slightly slower pace has given me the opportunity to be thankful for everything. Certainly those higher highs, but also the lower lows. It's all part of the experience. I can let it make me crazy, or I can accept the bumps as part of the process. I guess all we can do each year is try to grow from every experience and learn from the stuff that doesn't go well. For better and worse, I learned a lot this year. So I am happy as I write this, although I know happiness is fleeting – so I'll enjoy the feeling while I can. And then I will get back to living in the moment – there really isn't anything else.

–Mike

Photo credit: "wind-up dog" originally uploaded by istolethetv

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can check it out on YouTube. Take an hour and watch it – your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • November 25 – Numbness
  • October 27 – It's All in the Cloud
  • October 6 – Hulk Bash
  • September 16 – Apple Pay
  • August 18 – You Can't Handle the Gartner
  • July 22 – Hacker Summer Camp
  • July 14 – China and Career Advancement
  • June 30 – G Who Shall Not Be Named
  • June 17 – Apple and Privacy
  • May 19 – Wanted Posters and SleepyCon
  • May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
  • Network Security Gateway Evolution: Introduction
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC: Technical Considerations; Solution Architectures; Emerging SOC Use Cases; Introduction
  • Security and Privacy on the Encrypted Network: The Future is Encrypted

Newly Published Papers

  • Securing Enterprise Applications
  • Secure Agile Development
  • Trends in Data Centric Security
  • Leveraging Threat Intelligence in Incident Response/Management
  • The Security Pro's Guide to Cloud File Storage and Collaboration
  • The 2015 Endpoint and Mobile Security Buyer's Guide
  • Open Source Development and Application Security Analysis
  • Advanced Endpoint and Server Protection
  • The Future of Security

Incite 4 U

CISO in the clink… I love this headline: Can a CISO serve jail time? Duh, of course they can. If they deal meth out of the data center, they can certainly go to jail. Oh, can they be held accountable for breaches and negligence within their organization? Predictably, the answer is: it depends. If you are clearly negligent then all bets are off. But if you act in the best interests of the organization as you see them… it is hard to see how a CISO could be successfully prosecuted. That said, there is a chance, so you should consult a lawyer before taking the job to understand where your liability begins and ends (based on your agreement). Then you can make an informed decision on whether to take the job, or at least build some additional protection into your agreement. – MR

Productivity Killer: Sometimes we need a reminder that security isn't all about data breaches and DDoS. Sometimes something far, far worse happens. Just ask Sony Pictures. Last week employees showed up to work to find their entire infrastructure compromised and offline. Yep, down to some black hat hax0rs graphic taking over everyone's computer screens, just like in… er… the movies. I don't find any humor in this. Despite what Sony is doing to the Spider-Man franchise, they are just a company with people trying to get their jobs done, make a little scratch, and build products


Monitoring the Hybrid Cloud: Technical Considerations

New platforms for hybrid cloud monitoring bring both new capabilities and new challenges. We have already discussed some differences between monitoring the different cloud models, and some of the deployment options available. This post dives into technical considerations for these new hybrid platforms, highlighting potential benefits and issues for data security, privacy, scalability, security analytics, and data governance. As cool as a 'CloudSOC' sounds, there are technical nuances which need to be factored into your decision and selection processes. There are also data privacy issues, because some types of information fall under compliance and jurisdictional regimes. Cloud computing and service providers can provide an opportunity to control infrastructure costs more effectively, but service model costs are calculated differently than for on-premise systems, so you need to understand the computing and storage characteristics of the SOC platform in detail to understand where you are spending money. Let's jump into some key areas where you need to focus.

Data Security

As soon as event data is moved out of one cloud – say, Salesforce – into another, you need to consider the sensitivity of the data, which forces a decision on how to handle security. Using SSL or similar technology to secure the data in motion is the easy part – what to do with the data at rest, once it reaches the CloudSOC, is far more challenging. You can get some hints from folks who have already grappled with this question: security monitoring providers. These services either build their own private clouds to accommodate and protect client data, or leverage yet another IaaS or PaaS cloud to provide the infrastructure to store the data. Many of you will find the financial and scalability advantages of storing cloud data in a cloud service more attractive than moving all that collected data back to an on-premise system.

Regardless of whether you build your own CloudSOC or use a managed service, a key part of your security strategy will be the Service Level Agreements (SLAs) you establish with your providers. These agreements specify the security controls implemented by the provider, and if something is not specified in the agreement the provider has no obligation to provide it. An SLA is a good place to start, but be wary of unspecified areas – those are where gaps are most likely to emerge. Start by comparing what the provider does against what you do internally today. We recommend you ask questions and get clear answers on every topic you don't understand, because once you execute the agreement you have no further leverage to negotiate. And if you are running your own, make sure you carefully plan out your cloud security model to take advantage of what your IaaS provider offers. You may decide some data is too sensitive to be stored in the cloud without obfuscation (encryption) or removal (typically redaction, tokenization, or masking).

Data Privacy and Jurisdiction

Over and above basic data security for logs and event data, some countries have strict laws about how Personally Identifiable Information (PII) may be collected and stored, and some even require that PII not leave its country of origin – even encrypted. If you do business in these countries your team likely already understands the regulations, but for a hybrid SOC deployment you also need to understand the locations of your primary and backup cloud data centers, and their regional laws as well.
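To illustrate the removal option, here is a minimal sketch of field-level tokenization applied to events before they leave their jurisdiction for the CloudSOC. The field names and the HMAC-based scheme are illustrative – production deployments typically use a dedicated tokenization or masking service with a protected vault:

```python
# Illustrative PII scrubbing before events are shipped to a CloudSOC.
import hashlib
import hmac

TOKEN_KEY = b"..."   # keep this key inside the jurisdiction (HSM/secrets store)
PII_FIELDS = {"username", "email", "source_ip"}   # fields your lawyers flagged


def tokenize(value):
    """Deterministic token: the same input always yields the same token,
    so per-user correlation and counting still work downstream."""
    return hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]


def scrub(event):
    """Replace PII fields with tokens; pass everything else through untouched."""
    return {k: tokenize(v) if k in PII_FIELDS and isinstance(v, str) else v
            for k, v in event.items()}


event = {"username": "jdoe", "source_ip": "10.1.2.3", "action": "login_failed"}
print(scrub(event))   # analytics can still count failures per (tokenized) user
```

The deterministic mapping is the point: the CloudSOC never sees raw identities, but it can still correlate events by user or source, which is what keeps tokenization from impairing security analytics.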
This can be incredibly confusing – particularly when data protection laws conflict between countries. Once you understand the requirements and where your cloud (including CloudSOC) providers are located, you can effectively determine which security controls you need. Once again, data encryption addresses many legal requirements, and data masking and tokenization services can remove sensitive data without breaking your applications or impairing security analytics. The key is to know where the data will be stored, to figure out the right mix of controls.

Automation and Scalability

If you have ever used Dropbox, Salesforce, or Google Docs, you know how easy it is to store data in the cloud. When you move beyond SaaS to PaaS and IaaS, you will find it is just as easy to spin up whole clusters of new applications and servers with a few clicks. Security monitoring, deploying collectors, and setting up proxies for traffic filtering all likewise benefit from the cloud's ease of use and agility. You can automate the deployment of collectors, agents, or other services, or embed agents in the start-up process for new instances or technology stacks. Verification and discovery of services running in your cloud can be performed with a single API call. Automation is a hallmark of the cloud, so you can script pretty much anything you need.

But getting started with basic collection is a long way from getting a CloudSOC into production. As you move to a production environment you will be constructing and refining initialization and configuration scripts to launch services, and defining templates which dictate when collectors or analytics instances are spun up or shut down via the magic of autoscaling. You will be writing custom code to call cloud APIs to collect events, and writing event filters if the API does not offer suitable options. It is basically back to the future – the early days of SIEM, when you spent as much time writing and tuning collectors as analyzing data. Archiving is also something you'll need to define and implement. The cloud offers very granular control of which data gets moved from short-term to long-term storage, and when. In the long run cloud models offer huge benefits for automation and on-demand scalability, but there are short-term setup and tuning costs to get a CloudSOC working the way you need. A managed CloudSOC service will do much of this for you, at additional cost.

Other Considerations

Management Plane: The management plane for cloud services is a double-edged sword; IT admins now have the power to automate


Monitoring the Hybrid Cloud: Solution Architectures

The good old days: monitoring employees on company-owned PCs, accessing the company data center across corporate networks. You knew where everything was, and who was using it. And the company owned it all, so you could pretty much dictate where and how you performed security monitoring. With cloud and mobile? Not so much. To take advantage of cloud computing you will need to embrace new approaches to collecting event data if you hope to continue security monitoring. The sources, and the information they contain, are different. Equally important – although initially more subtle – is how to deploy monitoring services. Deployment architectures are critical to deploying and scaling any Security Operations Center, defining how you manage security monitoring infrastructure and what event data you can capture. Furthermore, how you deploy the SOC platform impacts performance and data management. There are a variety of different architectures, intended to meet the use cases outlined in our last post. So now we can focus on alternative ways to deploy collectors in the cloud, and the possibility of using a cloud security gateway as a monitoring point. Then we will look at the basic cloud deployment models for a SOC architected to monitor the hybrid cloud, focusing on how to manage pools of event data coming from distributed environments – both inside and outside the organization.

Data collection strategies

  • API: Automated, elastic, and self-service are all intrinsic characteristics of cloud computing. Most cloud service providers offer a management dashboard for convenience (and unsophisticated users), but advanced cloud features are typically exposed only via scripts and programs. Application Programming Interfaces (APIs) are the primary interfaces to cloud services; they are essential for configuring a cloud environment, configuring and activating monitoring, and gathering data. These APIs can be called from any program or service, running either on-premise or within a cloud environment. So APIs are the cloud equivalent of platform agents, providing many of the same capabilities in the cloud, where a 'platform' becomes a virtualized abstraction and a traditional agent wouldn't really work. API calls return data in a variety of ways, including the familiar syslog format, JSON files, and even formats specific to different cloud providers. Regardless, aggregating data returned by API calls is a key new source of information for monitoring hybrid clouds.
  • Cloud Gateways: Hybrid cloud monitoring often hinges on a gateway – typically an appliance deployed at the 'edge' of the network to collect events. Leveraging the existing infrastructure for data management and SOC interfaces, this approach requires all cloud usage to first authenticate to the cloud gateway as a choke point; after inspection, traffic is passed on to the appropriate cloud service. The resulting events are then passed to event collection services, comparable to on-premise infrastructure. This enables tight integration with existing security operations and monitoring platforms, and the initial authentication allows all resource requests to be tied to specific user credentials.
  • Cloud 2 Cloud: A newer option is to have one cloud service – in this case a monitoring service – act as a proxy to another cloud service, tapping into user requests and parsing out relevant data, metadata, and application calls. Similar to using a managed service for email security, traffic passes through a cloud provider to parse incoming requests before they are forwarded to internal or cloud applications. This model can incorporate mobile devices and events – which otherwise never touch on-premise networks – by passing their traffic through an inspection point before it reaches cloud service providers such as Salesforce and Microsoft Azure. This enables the SOC to provide real-time event analysis and alert on policy violations, with collected events forwarded to the SOC (either on-premise or in the cloud) for storage. In some cases, by proxying traffic these services can also add additional security – such as checks against on-premise identity stores, to ensure employees are still employed before granting access to cloud resources.
  • App Telemetry: Like cloud providers, mobile carriers, mobile OS providers, and handset manufacturers don't provide much in the way of logging capabilities. Mobile platforms are intended to be secured from outsiders and not leak information between apps. But we are beginning to see mobile apps developed specifically for corporate use, as well as company-specific mobile app containers on devices, which send basic telemetry back to the corporate customer to provide visibility into device activity. Some telemetry feeds include basic data about the device, such as jailbreak detection, while others append user 'fingerprints' to authorize requests for remote application access. These capabilities are compiled into individual mobile apps or embedded into app containers which protect corporate apps and data. This capability is very new, and will eventually help detect fraud and misuse on mobile endpoints.
  • Agents: You are highly unlikely to deploy agentry in SaaS or PaaS clouds, but there are cases where agents have an important role to play in hybrid clouds, private clouds, and Infrastructure as a Service (IaaS) clouds – generally when you control the infrastructure. Because network architecture is virtualized in most clouds, agents offer a way to collect events and configuration information when traditional visibility and taps are unavailable. Agents can also call out to cloud APIs to check application deployment.
  • Supplementary Services: Cloud SOCs often rely on third-party intelligence feeds to correlate hostile acts or actors attacking other customers, helping you identify and block attempts to abuse your systems. These are almost always cloud-based services that provide intelligence, malware analysis, or policies based on analysis of data from a broad range of sites, in order to detect unwanted behavior patterns. This type of threat intelligence supplements hybrid SOCs and helps organizations detect potential attacks faster, but it is not itself a SOC platform. You can refer to our other threat intelligence papers to dig deeper into this topic.

Deployment Strategies

The following are all common ways to deploy event collectors, monitoring systems, and operations centers to support security monitoring:

On-premise: We will forgo


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.