
Understanding COVID, ARDS, and Mechanical Ventilation

April 7 Update: Some research has emerged since I posted this suggesting that COVID-related ARDS is not typical ARDS. Here’s the medical reference for providers, but it’s very early evidence so far and we should keep an eye on it: COVID-19 Does Not Lead to a “Typical” ARDS. This was further validated by an article in MedScape that previews some emerging peer-reviewed research. Thus while my explanations of ARDS and ventilators are accurate, the ties to COVID-19 are not, and new treatment protocols are emerging.

Although this is a security blog, this post has absolutely nothing to do with security. No parallels from medicine, no mindset lessons, just some straight-up biology. As many readers know, I am a licensed Paramedic. I first certified in the early 1990s, dropped down to EMT for a while, and bumped back up to full medic two years ago. Recently I became interested in flight and critical care and completed an online critical care and flight medic course from the great team at FlightBridgeED. Paramedics don’t normally work with ventilators – it is an add-on skill specific to flight and critical care (ICU) transports. I’m a neophyte to ventilator management, with online and book training but no practice, but I understand the principles, and thanks to molecular biology back in college I have a decent understanding of cellular processes.

COVID-19 dominates all our lives now, and rightfully so. Ventilators are now a national concern, and one the technology community is racing to help with. Because of my background I’ve found myself answering a lot of questions on COVID-19, ARDS, and ventilators. While I’m a neophyte at running vents, I’m pretty decent at translating complex technical subjects for non-experts. Here’s my attempt to help everyone understand things a bit better.

The TL;DR is that COVID-19 damages the lungs, which for some people triggers the body to overreact with too much inflammation. That inflammation leaks extra fluid into the lungs, which interferes with gas exchange, so oxygen can’t as easily get into the bloodstream. You don’t actually stop breathing, so we use ventilators to change pressure and oxygen levels, in an attempt to diffuse more oxygen through this barrier without causing more damage by overinflating the lungs.

We start with respiration

Before we get into COVID and ventilators we need to understand a little anatomy and physiology. Cells need oxygen to convert fuel into energy. Respiration is the process of getting oxygen into cells and removing waste products – predominantly CO2. We get oxygen from our environment and release CO2 through ventilation: air moving in and out of our lungs. Those gases are moved around in our blood, and the actual gas exchange occurs in super-small capillaries which basically wrap around our cells. The process of getting blood to tissues is called perfusion. This is all just technical terminology to say: our lungs take in oxygen and release carbon dioxide, we move the gases around using our circulatory system, and we exchange gases in and out of cells through super-small capillaries. Pure oxygen is a toxin, and CO2 dissolved in blood is an acid, so our bodies have all sorts of mechanisms to keep things running. Everything works thanks to diffusion and a few gas laws (Graham’s, Henry’s, and Dalton’s are the main ones).

Our lungs have branches that end in leaves called alveoli. Alveoli are pretty wild – they have super-thin walls to allow gases to pass through, and are surrounded by capillaries to transfer gases into and out of our blood. They look like clumps of bubbles, because they maximize surface area to facilitate the greatest amount of gas exchange in the smallest amount of space. Healthy alveoli are covered in a thin liquid called surfactant, which keeps them lubricated so they can open, close, and slide around each other as we breathe. Want to know one reason smokers and vapers have bad lungs? All those extra chemicals muck up surfactant, thicken cell walls, and cause other damage. In smokers a bunch of the alveoli clump together, losing surface area, in a process called atelectasis (remember that word).

Our bodies try to keep things in balance, and have a bunch of tools to nudge things in different directions. The important bit for our discussion today is that ventilation is managed through how much we breathe in with a given breath (tidal volume) and how many times a minute we breathe (respiratory rate). This combination is our minute ventilation, which is normally about 6-8 liters per minute. It is linked to our circulation (cardiac output), which is around 5 liters per minute at rest. The amount of oxygen delivered to our cells is a function of our cardiac output and the amount of oxygen in our blood. We need good gas exchange with our environment, good gas exchange into our bloodstream, and good gas exchange into our cells. COVID-19 screws up the gas exchange in our lungs, and everything falls apart from there.

Acute Respiratory Distress Syndrome

ARDS is basically your body’s immune system gone haywire. It starts with lung damage – which can come from an infection, trauma, or even a metabolic problem. (One of the big issues with ventilators is that, with the wrong settings, they can actually cause ARDS.) That damage triggers an inflammatory response. A key aspect of inflammation is various chemical mediators altering cell walls, especially in those capillaries – which then start leaking fluid. In the lungs this causes a nasty cascade:

  • Fluid leaks from the capillaries and forms a barrier/buffer of liquid between the alveoli and the capillaries, separating them. This reduces gas exchange.
  • Fluid leaks into the alveoli themselves, further inhibiting gas exchange.
  • The cells are damaged by all this inflammation, triggering another, stronger immune response. Your body is now in a vicious cycle, making things worse by trying to make them better.
  • This liquid and a bunch of the inflammatory chemicals dilute the surfactant and damage the alveolar walls, causing atelectasis.

In later stages of ARDS your


Mastering the Journey—Building Network Manageability and Security for your Path

This is the third post in our series, “Network Operations and Security Professionals’ Guide to Managing Public Cloud Journeys”, which we will release as a white paper after we complete the draft and have some time for public feedback. You might want to start with our first and second posts. Special thanks to Gigamon for licensing. As always, the content is being developed completely independently using our Totally Transparent Research methodology.

Learning cloud adoption patterns doesn’t just help us identify key problems and risks – we can use them to guide operational decisions to address the issues they consistently raise. This research focuses on managing networks and network security, but the patterns include broad security and operational implications which cover all facets of your cloud journey. Governance issues aside, we find that networking is typically one of the first areas of focus for organizations, so it’s a good target for our first focused research. (For the curious, IAM and compliance are two other top areas organizations focus on, and struggle with, early in the process.)

Recommendations for a Safe and Smooth Journey

Developer Led

Mark sighed with relief and satisfaction as he validated that the VPN certs had propagated, and approved the ticket for the firewall rule change. The security group was already in good shape, and they had managed to avoid adding any kind of direct connect to the AWS account for the formerly-rogue project. He pulled up their new cloud assessment dashboard, and all the critical issues were clear. It would still take the IAM team and the project’s developers a few months to scale down unneeded privileges, but… not his problem. The federated identity portal was already hooked up, and he would get real-time alerts on any security group changes. “Now onto the next one,” he mumbled, after he glanced at his queue and lost his short-lived satisfaction. “Hey, stop complaining!” remarked Sarah. “We should be clear after this backlog, now that accounting is watching the credit cards for cloud charges; just run the assessment and see what we have before you start complaining.”

Having your entire organization dragged into the cloud thanks to the efforts of a single team is disconcerting, but not unmanageable. The following steps will help you both wrangle the errant project under control and build a base for moving forward. This was the first adoption pattern we started to encounter a decade ago as cloud started growing, so there are plenty of lessons to pull from. Based on our experiences, a few principles really help manage the situation:

  • Remember that to fit this pattern you should be new either to the cloud in general, or to this cloud platform specifically. These are not recommendations for unsanctioned projects covered by your existing experience and footprint.
  • Don’t be antagonistic. Yes, the team probably knew better and shouldn’t have done it… but your goal now is corrective action, not punitive. Your goal is to reduce urgent risks while developing a plan to bring the errant project into the fold.
  • Don’t simply apply your existing policies and tooling from other environments to this one. You need tooling and processes appropriate for this cloud provider.
  • In our experience, despite the initial angst, these projects are excellent opportunities to learn your initial lessons on this platform, and to start building out for a larger supported program. If you keep one eye on immediate risks and the other on long-term benefits, everything should be fine.

The following recommendations go a long way towards reducing risks and increasing your chances of success. But before the bullet points we have one overarching recommendation: as you gain control over the unapproved project, use it to learn the particulars of this cloud provider and build out your core cloud management capabilities. When you assess, set yourself up to support your next ten assessments. When you enable monitoring and visibility, do so in a way which supports your next projects. Wherever possible build a core service rather than a one-off.

Step one is to figure out what you are dealing with:

  • How many environments are involved? How many accounts, subscriptions, or projects?
  • How are the environments structured? This involves mapping out the application, the PaaS services offered by the provider (such as load balancers and serverless capabilities), the IAM, the network(s), and the data storage.
  • How are the services configured?
  • How are the networks structured and connected? The Software Defined Networks (SDN) used by all the major cloud platforms only look the same on the surface – under the hood they are quite different.
  • And, most importantly, where does this project touch other enterprise resources and data?!? This is essential for understanding your exposure. Are there unknown VPN connections? Did someone peer through an existing dedicated network pipe? Is the project talking to an internal database over the Internet? We’ve seen all these and more.

Then prioritize your biggest risks:

  • Internet exposures are common and one of the first things to lock down. We commonly see resources such as administrative servers and jump boxes exposed to the Internet at large. In nearly every single assessment we find at least one instance or container with port 22 exposed to the world. The quick fix is to lock them down to your known IP address ranges (see the sketch after this list). Cloud providers’ security groups simply drop traffic which doesn’t meet the rules, so they are an extremely effective security control and a better first step than trying to push everything through an on-premises firewall or virtual appliance.
  • Identity and Access Management is the next big piece to focus on. This research is focused more on networking, so we won’t spend much time on it here. But when developers build out environments they almost always over-privilege access for themselves and application components. They also tend to use static credentials, because unsanctioned projects are unlikely to integrate with your federated identity management. Sweep out static credentials, enable federation, and turn
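To make that Internet exposure fix concrete, here is a minimal sketch, assuming Python with boto3 and AWS credentials; the security group ID and CIDR ranges are hypothetical placeholders, and in practice you would run a change like this through your normal change process rather than ad hoc.

```python
# Hedged sketch: restrict world-open SSH (port 22) in an AWS security group
# to known IP ranges. The group ID and CIDRs are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

GROUP_ID = "sg-0123456789abcdef0"                      # hypothetical security group
KNOWN_RANGES = ["203.0.113.0/24", "198.51.100.0/24"]   # example office/VPN CIDRs

# Remove the world-open SSH rule, if present.
ec2.revoke_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Re-allow SSH only from the known ranges.
ec2.authorize_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": cidr, "Description": "Known range"} for cidr in KNOWN_RANGES],
    }],
)
```

The same two calls can be wrapped in a loop over every security group an assessment flags, which is one way to turn a one-off cleanup into the kind of reusable core service recommended above.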


Defining the Journey—the Four Cloud Adoption Patterns

This is the second post in our series, “Network Operations and Security Professionals’ Guide to Managing Public Cloud Journeys”, which we will release as a white paper after we complete the draft and have some time for public feedback. You might want to start with our first post. Special thanks to Gigamon for licensing. As always, the content is being developed completely independently using our Totally Transparent Research methodology.

Understanding Cloud Adoption Patterns

Cloud adoption patterns represent the most common ways organizations move from traditional operations into cloud computing. They contain the hard lessons learned by those who went before. While every journey is distinct, hands-on projects and research have shown us a broad range of consistent experiences, which organizations can use to better manage their own projects. The patterns won’t tell you exactly which architectures and controls to put in place, but they can serve as a great resource to point you in the right general direction and help guide your decisions. Another way to think of cloud adoption patterns is as embodying the aggregate experiences of hundreds of organizations. To go back to our analogy of hiking up a mountain, it never hurts to ask the people who have already finished the trip what to look out for.

Characteristics of Cloud Adoption Patterns

We will get into more descriptive detail as we walk through each pattern, but we find this grid useful to define the key characteristics.

| Characteristic | Developer Led | Data Center Transformation | Snap Migration | Native New Build |
|---|---|---|---|---|
| Size | Medium/Large | Large | Medium/Large | All (project-only for mid-large) |
| Vertical | All (minus financial and government) | All, including financial and government | Variable | All |
| Speed | Fast then slow | Slow (2-3 years or more) | 18-24 months | Fast as DevOps |
| Risk | High | Low(er) | High | Variable |
| Security | Late | Early | Trailing | Mid to late |
| Network Ops | Late | Early | Early to mid | Late (developers manage) |
| Tooling | New + old when forced | Culturally influenced; old + new | Panic (a lot of old) | New, unless culturally forced to old |
| Budget Owner | Project based/no one | IT, Ops, Security | IT or poorly defined | Project-based, some security for shared services |

  • Size: The most common organization sizes. For example, developer-led projects are rarely seen in small startups, which can skip directly to native new builds, but are common in large companies.
  • Vertical: We see these patterns across all verticals, but in highly-regulated ones like financial services and government, certain patterns are less common due to tighter internal controls and compliance requirements.
  • Speed: The overall velocity of the project, which often varies over the project lifetime. We’ll jump into this more later, but an example is developer-led, where initial setup and deployment are very fast, but wrangling in central security and operational control can take years.
  • Risk: An aggregate of risk to the organization and of project failure. For example, in a snap migration everything tends to move faster than security and operations can keep up with, which creates a high chance of configuration error.
  • Security: When security is engaged and starts influencing the project.
  • Network Ops: When network operations becomes engaged and starts influencing the project. While security folks are used to being late to the party, since developers can build their own networks with a few API calls, this is often a new and unpleasant experience for networking professionals.
  • Tooling: The kind of tooling used to support the project. “New” means new, cloud-native tools; “old” means the tools you already run in your data centers.
  • Budget Owner: Someone has to pay at some point. This is important because it represents potential impact on your budget, but also indicates who tends to have the most control over a project.

Characteristics of Cloud Adoption Patterns

In this section we describe what the patterns look like and identify some key risks. In the next section we will offer top-line recommendations to improve your chances of success. One last point before we jump into the patterns themselves: while they focus on the overall experiences of an organization, patterns also apply at the project level, and an organization may experience multiple patterns at the same time. For example, it isn’t unusual for a company with a “new to cloud” policy to also migrate existing resources over time as a long-term project. This places them in both the data center transformation and native new build patterns.

Developer Led

Mark was eating a mediocre lunch at his desk when a new “priority” ticket dropped into his network ops queue. “Huh, we haven’t heard from that team in a while… weird.” He set the microwaved leftovers to the side and clicked open the request…

Request for firewall rule change: Allow port 3306 from IP 52.11.33.xxx/32. Mission critical timeline.

“What the ?!? That’s not one of our IPs,” Mark thought as he ran a lookup. “amazonaws.com? You have GOT to be kidding me. We shouldn’t have anything up there.” Mark fired off emails to his manager and the person who sent the ticket, but he had a bad feeling he was about to get dragged into the kind of mess that would seriously ruin his plans for the next few months.

Developer-led projects are when a developer or team builds something in a cloud on their own, and central IT is then forced to support it. We sometimes call this “developer tethering”, because these often unsanctioned and/or uncoordinated projects anchor an organization to a cloud provider, and drag the rest of the organization in after them. These projects aren’t always against policy – this pattern is also common in mergers and acquisitions. And it isn’t necessarily a first step into the cloud overall – it can also be a project which pulls an enterprise into a new cloud provider, rather than their existing preferred cloud provider. This creates a series of tough issues. To meet the definition of this pattern we assume you can’t just shut the project down, but actually need to support it. The project has been developed and deployed without the input of security or


Your Cloud Journey is Unique, but Not Unknown

This is the first post in a new series, our “Network Operations and Security Professionals’ Guide to Managing Public Cloud Journeys”, which we will release as a white paper after we complete the draft and have some time for public feedback. Special thanks to Gigamon for licensing. As always, the content is being developed completely independently using our Totally Transparent Research methodology.

Cloud computing is different, disruptive, and transformative. It has no patience for traditional practices or existing architectures. The cloud requires change, and there is a growing body of documentation on end states you should strive for, but a lack of guidance on how to get there. Cloud computing may be a journey, but it’s one with many paths to what is often an all-too-nebulous destination. Although every individual enterprise has different goals, needs, and capabilities for its cloud transition, our experience and research have identified a series of fairly consistent patterns.

You can think of moving to cloud as a mountain with a single peak, with everyone starting from the same trailhead. But this simplistic view, which all too often underlies conference presentations and tech articles, fails to capture the unique opportunities and challenges you will face. At the other extreme, we can think of the journey as a mountain range with innumerable peaks, starting points, and paths… and a distinct lack of accurate maps. That is the view which tends to end with hands thrown up in the air, expressions of impossibility, and analysis paralysis.

Our research and experience guide us between those extremes. Instead of a single constrained path which doesn’t reflect individual needs, or totally individualized paths which require you to build everything and learn every lesson from scratch, we see a smaller set of common options, with consistent characteristics and experiences. Think of it as starting from a few trailheads, landing on a few peaks, and only dealing with a handful of well-marked trails. These won’t cover every option, but they are a surprisingly useful way to structure your journey, move up the hill more gracefully, and avoid falling off some REALLY sharp cliff edges.

Introducing Cloud Adoption Patterns

Cloud adoption patterns represent a consolidated set of cloud adoption journeys, compiled through discussions with hundreds of enterprises and dozens of hands-on projects. Less concrete than specific cloud controls, they are a more general way of predicting and understanding the problems organizations face when moving to cloud, based on starting point and destination. These patterns have different implications across functional teams, and are especially useful for network operations and network security, because they tend to fairly accurately predict many architectural implications, which then map directly to management processes.

For example, there are huge differences between a brand-new startup or cloud project without any existing resources, a major data center migration, and a smaller migration of key applications. Even a straight-up lift and shift migration is extremely different if it’s a one-off, versus a smaller project, versus being wrapped up in a massive data center move with a hard cutoff deadline (often thanks to a hosting contract which is not being renewed). Each case migrates an existing application stack into the cloud, but the different scope and time constraints dramatically affect the actual migration process.

We’ll cover them in more detail in our next post, but the four patterns we have identified are:

  • Developer led: A development team starts building something in the cloud outside normal processes, and then pulls the rest of the organization in behind them.
  • Data center transformation: An operations-led process defined by an organization planning a methodical migration out of existing data centers and into the cloud, sometimes over a decade or more.
  • Snap migration: An enterprise is forced out of some or all of its data centers on a short timeline, due to contract renewals or other business drivers.
  • Native new build: The organization plans to build one or more new applications completely in the cloud using native technologies.

You likely noticed we didn’t mention some common terms like “refactor” and “new to cloud”. Those are important concepts, but we consider them options along the journey rather than what defines it. Our four patterns are about the drivers for your cloud migration and your desired end state.

Using Cloud Adoption Patterns

The adoption patterns offer a framework for thinking about your upcoming (or in-process) journey, and help identify both strategies for success and potential failure points. They aren’t prescriptive like the Cloud Security Maturity Model or the Cloud Controls Matrix; they won’t tell you exactly which controls to implement, but they are more helpful when choosing a path, defining priorities, mapping architectures, and adjusting processes. Going back to our mountain-climbing analogy, the cloud adoption patterns point you down the right path and help you decide which gear to take, but it’s still up to you to load your pack, know how to use the gear, plan your stops, and remember your sunscreen.

These patterns represent a set of characteristics we consistently see, based on how organizations move to cloud. Any individual organization might experience multiple patterns across different projects. For example, a single project might behave more like a startup, even while you concurrently run a larger data center migration. Our next post will detail the patterns and their defining characteristics. You can use it to determine your overall organizational journey, as well as to plan out individual projects with their own characteristics. To help you internalize these patterns, we will offer fictional examples based on real experiences and projects. Once you know which path you are on, our final sections will include top-line recommendations for network operations and security, and tie back to our examples to show how they play out in real life. We will also highlight the most common pitfalls and their potential consequences. This research should help you better understand which approaches will work best for your project in your organization. We are focusing this first round on networking, but future work will build on this basis to


Firestarter: Multicloud Deployment Structures and Blast Radius

In this, our second Firestarter on multicloud deployments, we start digging into the technological differences between the cloud providers, beginning with how to organize your account(s). Each provider uses different terminology, but all support similar hierarchies. From the overlay of AWS Organizations to the org-chart-from-the-start of an Azure tenant, we dig into the details and make specific recommendations. We also discuss the inherent security barriers and cover a wee bit of IAM. Watch or listen:
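The episode itself is audio/video, but as a rough illustration of the hierarchy concept it covers, here is a minimal sketch, assuming Python with boto3 and credentials in an AWS Organizations management account, that walks the organizational units and accounts under the root (pagination is ignored for brevity).

```python
# Hedged sketch: print the account hierarchy in AWS Organizations.
# Assumes credentials for the management account; illustrative only.
import boto3

org = boto3.client("organizations")

def walk(parent_id, depth=0):
    """Recursively print organizational units and accounts under a parent."""
    for ou in org.list_organizational_units_for_parent(ParentId=parent_id)["OrganizationalUnits"]:
        print("  " * depth + f"OU: {ou['Name']}")
        walk(ou["Id"], depth + 1)
    for acct in org.list_accounts_for_parent(ParentId=parent_id)["Accounts"]:
        print("  " * depth + f"Account: {acct['Name']} ({acct['Id']})")

root_id = org.list_roots()["Roots"][0]["Id"]
walk(root_id)
```

Azure tenants and GCP organizations expose similar tree structures through their own APIs, which is the point of the episode: the terminology differs, but the hierarchy is the common concept.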


DisruptOps: Breaking Attacker Kill Chains in AWS: IAM Roles

Over the past year I’ve seen a huge uptick in interest for concrete advice on handling security incidents inside the cloud, with cloud-native techniques. As organizations move their production workloads to the cloud, it doesn’t take long for security professionals to realize that the fundamentals, while conceptually similar, are quite different in practice. One of those core concepts is the kill chain, a term first coined by Lockheed Martin to describe the attacker’s process. Break any link and you break the attack, so this maps well to combining defense in depth with the active components of incident response. Read the full post at DisruptOps.
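The full details are in the DisruptOps post; purely as an illustration of breaking one link with cloud-native tools, here is a minimal sketch, assuming Python with boto3, of a well-known AWS pattern for revoking a role’s existing sessions: attach an inline policy that denies everything for credentials issued before “now”. The role and policy names are hypothetical placeholders.

```python
# Hedged sketch: "revoke" active sessions of a potentially compromised IAM role
# by denying all actions for credentials issued before the current time.
# Role and policy names are hypothetical placeholders.
import json
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")

ROLE_NAME = "SuspectAppRole"   # hypothetical role under investigation
cutoff = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

revoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"DateLessThan": {"aws:TokenIssueTime": cutoff}},
    }],
}

# Existing sessions keep their (stolen) temporary credentials, but this inline
# policy makes every API call made with those older credentials fail authorization.
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="RevokeOlderSessions",
    PolicyDocument=json.dumps(revoke_policy),
)
```

Because an attacker keeps whatever temporary credentials they already stole, you cannot delete them directly; denying everything issued before the cutoff cuts that link in the chain while legitimate workloads simply pick up fresh credentials.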


Firestarter: So you want to multicloud?

This is the first in a series of Firestarters covering multicloud. Using more than one IaaS cloud service provider is, well, a bit of a nightmare. That is widely recognized by anyone with hands-on cloud experience, but it doesn’t mean reality always matches our desires. From executives worried about lock-in to M&A activity, we are finding that most organizations are being pulled into multicloud deployments. In this first episode we lay out the top-level problems and recommend some strategies for approaching them. Watch or listen:


What We Know about the Capital One Data Breach

I’m not a fan of dissecting complex data breaches when we don’t have any information. In this case we know more than usual, thanks to the details in the complaint filed by the FBI. I want to be very clear that this post isn’t to blame anyone, and we have only the most basic information on what happened. The only person we know is worthy of blame here is the attacker.

As many people know, Capital One makes heavy use of Amazon Web Services. We know AWS was involved in the attack because the federal complaint specifically mentions S3. But this wasn’t a public S3 bucket. Again, all from the filed complaint:

  • The attacker discovered a server (likely an instance – it had an IAM role) with a misconfigured firewall. It presumably had a software vulnerability or was vulnerable due to a credential exposure.
  • The attacker compromised the server and extracted its IAM role credentials. These ephemeral credentials allow AWS API calls. Role credentials are rotated automatically by AWS, and are much more secure than static credentials – but with persistent access you can obviously keep pulling updated credentials as needed.
  • Those credentials (an IAM role with ‘WAF’ in the title) allowed listing S3 buckets and read access to at least some of them. This is how the attacker exfiltrated the files.
  • Some buckets (maybe even all) were apparently encrypted, and a lot of the data within those files (which included credit card applications) was encrypted or tokenized. But the impact was still severe.
  • The attacker exfiltrated the data and then discussed it in Slack and on social media.
  • Someone in contact with the attacker saw that information, including attack details in GitHub, and reported it to Capital One through their reporting program.
  • Capital One immediately involved the FBI and very quickly closed the misconfigurations. They also began their own investigation. They were able to determine exactly what happened very quickly, likely through CloudTrail logs, which contained the commands issued by that IAM role from that server (and which are very easy to find). They could then trace back the associated IP addresses. There are many other details on how they found the attacker in the complaint, and it looks like Capital One did quite a bit of the investigation themselves.

So: misconfigured firewall (Security Group?) > compromised instance > IAM role credential extraction > bucket enumeration > data exfiltration. Followed by a rapid response and public notification. As a side note, it looks like the attacker may have been a former AWS employee, but nothing indicates that was a factor in the breach.

People will say the cloud failed here, but we saw breaches like this long before the cloud was a thing. Containment and investigation seem to have actually run far faster than would have been possible on traditional infrastructure. For example, Capital One didn’t need to worry about the attacker turning off local logging – CloudTrail captures everything that touches AWS APIs. Normally we hear about these incidents months or years later, but in this case we went from breach to arrest and disclosure in around two weeks. I hope that someday Capital One will be able to talk about the details publicly, so the rest of us can learn.

No matter how good you are, mistakes happen. The hardest problem in security is solving simple problems at scale. Because simple doesn’t scale, and what we do is damn hard to get right every single time.
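As a rough illustration of the kind of CloudTrail query such an investigation relies on, here is a minimal sketch, assuming Python with boto3; the role session name is a hypothetical placeholder, not anything from the actual complaint.

```python
# Hedged sketch: query CloudTrail for recent API calls made by a specific
# role session, and print the event name, source IP, and time for each.
# The session/user name below is a hypothetical placeholder.
import json

import boto3

cloudtrail = boto3.client("cloudtrail")

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{
        "AttributeKey": "Username",
        "AttributeValue": "suspect-role-session",   # hypothetical session name
    }],
)

for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        print(event["EventName"], detail.get("sourceIPAddress"), event["EventTime"])
```

Every returned event carries the source IP address and full request parameters, which is why tracing activity back from a known IAM role is so much easier than reconstructing logs an attacker may have tampered with on a traditional server.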


DisruptOps: Build Your Own Multi-Cloud Security Monitoring in 30 Minutes or Less with StreamAlert

One of the most difficult problems in cloud security is building comprehensive multi-account/multi-cloud security monitoring and alerting. I’d say maybe 1 out of 10 organizations I assess or work with have something effective in place when I first show up. That’s why I added a major monitoring lab based on Airbnb’s StreamAlert project to the Securosis Advanced Cloud Security and Applied DevSecOps training class (we still have some spots available for our Black Hat 2019 class). Read the full post at DisruptOps.


Apple Flexes Its Privacy Muscles

Apple events follow a very consistent pattern, which rarely changes beyond the details of the content. This consistency has gradually become its own language. Attend enough events and you start to pick up the deliberate undertones Apple wants to communicate but not express directly. They are the facial expressions and body language beneath the words of the slides, demos, and videos.

Five years ago I walked out of the WWDC keynote with a feeling that those undertones were screaming a momentous shift in Apple’s direction: that privacy was emerging as a foundational principle for the company. I wrote up my thoughts at Macworld, laying out my interpretation of Apple’s privacy principles. Privacy had been growing in importance at Apple for years before that, but that WWDC keynote was the first time they so clearly articulated that privacy not only mattered, but was being built into foundational technologies.

This year I sat in the WWDC keynote, reading the undertones, and realized that Apple is upping its privacy game to levels never before seen from a major technology company. Beyond improving privacy in its own products, the company is starting to use its market strength to push privacy throughout the tendrils that touch the Apple ecosystem. Regardless of motivation – whether altruism, the personal principles of Apple executives, or simply shrewd business strategy – Apple’s stance on privacy is historic and unique in the annals of consumer technology. The real question now isn’t whether they can succeed at a technical level, but whether Apple’s privacy push can withstand the upcoming onslaught from governments, regulators, the courts, and competitors. Apple has clearly stated that they consider privacy a fundamental human right. Yet history is strewn with the remains of well-intentioned champions of such rights.

How privacy at Apple changed at WWDC19

When discussing these shifts in strategy, at Apple or any other technology firm, it’s important to keep in mind that the changes typically start years before outsiders can see them, and are more gradual than we perceive. Apple’s privacy extension efforts started at least a couple of years before WWDC14, when Apple first began requiring privacy protections to participate in HomeKit and HealthKit.

The most important privacy push from WWDC19 is Sign In with Apple, which offers benefits to both consumers and developers. In WWDC sessions it became clear that Apple is using a carrot-and-stick approach with developers: the stick is that App Review will require support for Apple’s new service in apps which use competing offerings from Google and Facebook, but in exchange developers gain Apple’s high security and fraud prevention. Apple IDs are vetted by Apple and secured with two-factor authentication, and Apple provides developers with the digital equivalent of a thumbs-up or thumbs-down on whether a sign-in request is coming from a real human being. Apple uses the same mechanisms to secure iCloud, iTunes, and App Store purchases, so that signal should be a strong indicator. Apple also emphasized that this privacy extends to developers themselves: it isn’t Apple’s business to know how developers engage with users inside their apps. Apple serves as an authentication provider and collects no telemetry on user activity. This isn’t to say that Google and Facebook abuse their authentication services; Google denies this accusation and offers features to detect suspicious activity. Facebook, on the other hand, famously abused phone numbers supplied for two-factor authentication, as well as a wide variety of other user data.

The difference between Sign In with Apple and previous privacy requirements in the iOS and Mac ecosystems is that this feature extends Apple’s privacy reach beyond its own walled garden. Previous requirements, from HomeKit to data usage limitations on apps in the App Store, really only applied to apps on Apple devices. This is technically true for Sign In with Apple as well, but practically speaking the implications extend much further. When developers add Apple as an authentication provider on iOS, they also need to add it on other platforms if they expect customers to ever use anything other than Apple devices – either that, or support a horrible user experience (which, I hate to say, we will likely see plenty of). Once you create your account with an Apple ID, there are considerable technical complexities to supporting non-Apple login credentials for that account. So providers will likely support Sign In with Apple across their platforms, extending Apple’s privacy reach beyond its own.

Beyond sign-in

Privacy permeated WWDC19, in both presentations and new features, but two more features stand out as examples of Apple extending its privacy reach: a major update to Intelligent Tracking Prevention for web advertising, and HomeKit Secure Video. Privacy preserving ad click attribution is a surprisingly ambitious effort to drive privacy into the ugly user and advertising tracking market, and HomeKit Secure Video offers a new privacy-respecting foundation for video security firms which want to be feature-competitive without the mess of building (and securing) their own back-end cloud services.

Intelligent Tracking Prevention is a Safari feature to reduce the ability of services to track users across websites. The idea is that you can and should be able to enable cookies for one trusted site, without having additional trackers monitor you as you browse to other sites. Cross-site tracking is endemic to the web, with typical sites embedding dozens of trackers. This is largely to support advertising and answer a key marketing question: did an ad lead you to visit a target site and buy something? Effective tracking prevention is an existential risk to online advertisements and the sites which rely on them for income, but this is almost completely the fault of overly intrusive companies. Intelligent Tracking Prevention (combined with other browser privacy and security features) is the stick, and privacy preserving ad click attribution is the corresponding carrot: it promises to enable advertisers to track conversion rates without violating user privacy. An upcoming feature of Safari, and a proposed web standard, it has browsers remember ad clicks for seven days. If


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.