As we begin our series on Multi-cloud logging, we start with reasons some traditional logging approaches don't work. I don't like to start on a negative note, but we need to point out some challenges and pitfalls which often beset firms on their first migration to the cloud. That, and it helps frame our other recommendations later in this series. Let's take a look at some common issues by category.

Tooling

Scale & Performance: Most log management and SIEM platforms were designed and first sold before anyone had heard of clouds, Kafka, or containers. They were architected for "hub-and-spoke" deployments on flat networks, when "scalability" meant running on a bigger server. This is important because the infrastructure we now monitor is agile, designed to auto-scale up when we need processing power and back down to reduce costs. The ability to scale up, down, and out is essential to the cloud, but often missing from older logging products, which require manual setup and lack full API enablement and auto-scale capability.

Data Sources: We mentioned in our introduction that some common network log sources are unavailable in the cloud. Conversely, because automation and orchestration of cloud resources happen via API calls, API logs become an important source. Data formats for these new log sources may change, as do the indicators used to group events or users within logs. For example, servers in auto-scale groups may share a common IP address, and functions and other "serverless" infrastructure are ephemeral, making it impossible to differentiate one instance from the next this way. So your tools need to ingest new types of logs, faster, and adapt their threat detection methods to each source.

Identity: Understanding who did what requires understanding identity. An identity may be a person, service, or device. Regardless, the need to map it, and perhaps correlate it across sources, becomes even more important in hybrid and multi-cloud environments.

Volume: When SIEM first began making the rounds, there were only so many security tools, and they were pumping out only so many logs. Between new security niches and new regulations, the array of log sources sending unprecedented amounts of logs to collect and analyze grows every year. Moving from traditional AV to EPP, for example, brings a huge increase in log volume. Add in EDR logs and you're really into some serious volumes. On the server side, moving from network and server logs to add application-layer and container logs brings a non-trivial increase as well. There are only so many tools designed to handle modern event rates (X billion events per day) and volumes (Y terabytes per day) without buckling under the load, and more importantly, there are only so many people who know how to deploy and operate them in production. While storage is plentiful and cheap in the cloud, you still need to get those logs from various on-premise and cloud sources into the desired storage, perhaps across IaaS, PaaS, and SaaS. If you think that's easy, call your SaaS vendor and ask how to export all your logs from their cloud into your preferred log store (S3/ADLS/GCS/etc.). That old Silicon Valley saw, "But does it scale?", is funny, but it really applies in some cases.
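As a concrete, and purely illustrative, sketch of what "getting those logs to the desired storage" can look like on the IaaS side, the Python snippet below batches JSON log events, compresses them, and writes them to object storage (Amazon S3 via boto3). The bucket name, key prefix, and event shape are hypothetical placeholders; a production pipeline would also need retries, partitioning, and access controls.

    # Minimal sketch, assuming S3 as the central log store.
    # Bucket, prefix, and event structure are hypothetical.
    import gzip
    import json
    import time

    import boto3

    s3 = boto3.client("s3")

    BUCKET = "example-central-log-archive"   # hypothetical bucket name
    PREFIX = "app-logs"                      # hypothetical key prefix

    def ship_batch(events):
        """Compress a batch of JSON log events and write it as one S3 object."""
        if not events:
            return
        body = gzip.compress(
            "\n".join(json.dumps(e) for e in events).encode("utf-8")
        )
        key = f"{PREFIX}/{time.strftime('%Y/%m/%d')}/{int(time.time())}.json.gz"
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)

    # Usage: accumulate events from a collector, then flush periodically.
    ship_batch([{"ts": time.time(), "source": "web-01", "msg": "example event"}])

The design choice to batch and compress before writing matters at cloud volumes: per-object request costs and cross-network transfer add up quickly when events are shipped one at a time.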
Bandwidth: While we're on the topic of ridiculous volumes, let's discuss bandwidth. Network bandwidth and transport-layer security between on-premise and cloud, and between clouds, are non-trivial. There are financial costs, as well as engineering and operational considerations. If you don't believe me, ask your AWS or Azure salesperson how to move, say, 10 terabytes a day between those two. In some cases the architecture only allows a certain amount of bandwidth for log movement and transport, so consider this when planning migrations and add-ons.

Structure

Multi-account Multi-cloud Architectures: Cloud security facilitates things like micro-segmentation, multi-account strategies, closing down all unnecessary network access, and even running different workloads in different cloud environments. This sort of segmentation makes it much more difficult for attackers to pivot if they gain a foothold. It also means you will need to consider which cloud-native logs are available, what you need to supplement with other tooling, and how you will stitch all these sources together. Expecting to dump all your events into a syslog-style service and let it percolate back on-premise is unrealistic. You need new architectures for log capture, filtering, and analysis. Storage is the easy part.

Monitoring "up the Stack": As cloud providers manage infrastructure, and possibly applications as well, your threat detection focus must shift from networks to applications. This is partly because you lack visibility into network operations, but also because cloud network deployments are generally more secure, prompting attackers to shift focus. Even if you're used to monitoring the application layer from a security perspective, for example with a big WAF in front of your on-premise servers, do you know whether your vendor has a viable cloud offering? If you're lucky enough to have one that works in both places, and you can deploy it in the cloud as well, answer this (before you initiate the project): where will those logs go, and how will you get them there?

Storage vs. Ingestion: Data storage in cloud services, especially object storage, is so cheap it is practically free. And long-term data archival cloud services offer huge cost advantages over older on-premise solutions. In essence we are encouraged to store more. But while storage is cheap, it's not always cheap to ingest more data into the cloud, because some logging and analytics services charge based on volume (gigabytes) and event rates (number of events) ingested into the tool/service/platform. Examples include Splunk, Azure Event Hubs, AWS Kinesis, and Google Stackdriver. Many log sources for the cloud are verbose, in both the number of events and the amount of data generated by each. So you will need to architect your solution to be economically efficient, as well as negotiate with your vendors over ingestion of noisy sources such as DNS and proxies. A rough pre-ingestion filter is sketched at the end of this section.

A brief side note on "closed" logging pipelines: Some vendors want to own your logging pipeline on top of your analytics toolset. This may
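To illustrate the Storage vs. Ingestion point above, here is a minimal, hypothetical sketch of a pre-ingestion filter: it drops the noisiest DNS lookups and samples the rest before anything reaches a metered analytics service, so you only pay to ingest what you actually intend to analyze. The event shape, the noisy-domain list, the sample rate, and forward_to_analytics() are assumptions for illustration, not any particular vendor's API.

    import random

    # Hypothetical high-volume, low-value domains we never want to pay to ingest.
    NOISY_DOMAINS = {"healthcheck.internal.example", "telemetry.example.com"}

    DNS_SAMPLE_RATE = 0.10  # keep roughly 10% of remaining DNS events (assumed rate)

    def should_ingest(event: dict) -> bool:
        """Decide whether a log event is worth sending to a metered analytics service."""
        if event.get("source") != "dns":
            return True  # only DNS is filtered in this sketch
        if event.get("query") in NOISY_DOMAINS:
            return False  # drop known-noisy lookups entirely
        return random.random() < DNS_SAMPLE_RATE  # sample everything else

    def forward_to_analytics(event: dict) -> None:
        # Placeholder for your real shipper (Splunk HEC, Event Hubs, Kinesis, etc.).
        print("would ingest:", event)

    # Usage: run every collected event through the filter before it costs you money.
    for evt in [
        {"source": "dns", "query": "healthcheck.internal.example"},
        {"source": "dns", "query": "example.org"},
        {"source": "cloudtrail", "action": "CreateUser"},
    ]:
        if should_ingest(evt):
            forward_to_analytics(evt)

Whether dropping or sampling a source is acceptable is a risk decision, not just a cost one, so any filter like this should be agreed with the security team and documented alongside the pipeline.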