High Availability in Cloud Foundry

This topic describes the components used to ensure high availability in Cloud Foundry, vertical and horizontal scaling, and the infrastructure required to support scaling component VMs for high availability.

Components of a High Availability Deployment

This section describes the system components needed to ensure high availability.

Availability Zones

During product updates and platform upgrades, the VMs in a deployment restart in succession, rendering them temporarily unavailable. During outages, VMs go down in a less orderly way. Spreading components across Availability Zones (AZs) and scaling them to a sufficient level of redundancy maintains high availability during both upgrades and outages and can ensure zero downtime.

Deploying Cloud Foundry across three or more AZs and assigning multiple component instances to different AZ locations lets a deployment operate uninterrupted when entire AZs become unavailable. Cloud Foundry maintains its availability as long as a majority of the AZs remain accessible. For example, a three-AZ deployment stays up when one entire AZ goes down, and a five-AZ deployment can withstand an outage of up to two AZs with no impact on uptime.
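How AZs are defined and assigned depends on your IaaS and deployment tooling. As a hedged sketch for a BOSH-managed deployment, AZs are typically declared in the cloud config and referenced from instance groups in the deployment manifest; the AZ names, zone names, and instance group shown here are illustrative placeholders.

```yaml
# Cloud config (illustrative): map logical AZ names to IaaS availability zones.
azs:
- name: z1
  cloud_properties: {availability_zone: us-east-1a}
- name: z2
  cloud_properties: {availability_zone: us-east-1b}
- name: z3
  cloud_properties: {availability_zone: us-east-1c}

# Deployment manifest (illustrative): spread an instance group across all three AZs.
instance_groups:
- name: router
  instances: 3
  azs: [z1, z2, z3]
```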

External Load Balancers

Production environments should use a highly available, customer-provided load balancing solution that does the following:

  • Provides load balancing to each of the Cloud Foundry Router IP addresses
  • Supports SSL termination with wildcard DNS location
  • Adds appropriate X-Forwarded-For and X-Forwarded-Proto HTTP headers to incoming requests
  • (Optional) Supports WebSockets

If you are deploying in lab and test environments, the use-haproxy.yml ops file enables HAProxy for your foundation.

Blob Storage

For storing blobs (large binary files), the best approach for high availability is to use external storage such as Amazon S3 or an S3-compatible service.
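The exact blobstore configuration depends on the Cloud Controller release you deploy. The following is only a rough, hedged sketch of fog-style S3 properties; the property names, bucket names, and credential variables are illustrative, so verify them against the documentation for your release.

```yaml
# Illustrative only: Cloud Controller properties pointing the blobstore at S3.
# Property and bucket names are placeholders and vary by release version.
cc:
  buildpacks:
    blobstore_type: fog
    buildpack_directory_key: example-cf-buildpacks
    fog_connection:
      provider: AWS
      region: us-east-1
      aws_access_key_id: ((blobstore_access_key_id))
      aws_secret_access_key: ((blobstore_secret_access_key))
  droplets:
    blobstore_type: fog
    droplet_directory_key: example-cf-droplets
    fog_connection:
      provider: AWS
      region: us-east-1
      aws_access_key_id: ((blobstore_access_key_id))
      aws_secret_access_key: ((blobstore_secret_access_key))
  # packages and resource_pool are configured the same way
```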

If you store blobs internally using WebDAV or NFS, these components run as single instances and you cannot scale them. For these deployments, use the high availability features of your IaaS to immediately recover your WebDAV or NFS server VM if it fails.

The singleton Collector and Compilation components do not affect platform availability.

Vertical and Horizontal Scaling for High Availability

You can scale platform capacity vertically by adding memory and disk, or horizontally by adding more VMs running instances of Cloud Foundry components. The nature of the applications you host on Cloud Foundry should determine whether you should scale vertically or horizontally.

Scale cf

For more information about scaling applications and maintaining app uptime, see Scaling an Application Using cf scale and Using Blue-Green Deployment to Reduce Downtime and Risk.

Scale Vertically

Scaling vertically means adding memory and disk to your component VMs.

To scale vertically, ensure that you allocate and maintain enough of the following:

  • Free space on Diego cell VMs so that apps expected to deploy can successfully stage and run.
  • Disk space and memory in your deployment so that, if one host VM goes down, all app instances can be placed on the remaining host VMs.
  • Free capacity to handle an entire AZ going down if you deploy across multiple AZs.
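How you add memory and disk depends on your IaaS and tooling. As a hedged sketch for a BOSH-managed deployment, you might define a larger VM type in the cloud config and point the Diego cell instance group at it; the type name, instance size, and disk size below are placeholders.

```yaml
# Cloud config (illustrative): a larger VM type for Diego cells.
vm_types:
- name: diego-cell-large
  cloud_properties:
    instance_type: r5.2xlarge        # AWS example; other IaaSes use different keys
    ephemeral_disk: {size: 102400}   # size in MB

# Deployment manifest (illustrative): move the cell instance group to the larger type.
instance_groups:
- name: diego-cell
  vm_type: diego-cell-large
```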

Scale Horizontally

Scaling horizontally means increasing the number of VM instances dedicated to running a functional component of the system.

You can horizontally scale most Cloud Foundry components to multiple instances to achieve the redundancy required for high availability.

You should also distribute the instances of any component that you scale to multiple instances across different AZs to minimize downtime during ongoing operation, product updates, and platform upgrades. If you use more than three AZs, ensure that you use an odd number of AZs.

The following table provides recommended instance counts for a high-availability deployment. You can decrease the footprint of your deployment by specifying fewer instances and combining multiple components onto a single VM.

Component | Total Instances | Notes
Diego Cell | ≥ 2 | The optimal balance between CPU/memory sizing and instance count depends on the performance characteristics of the apps that run on Diego cells. Scaling vertically with larger Diego cells makes for larger points of failure, and more apps go down when a cell fails. On the other hand, scaling horizontally decreases the speed at which the system rebalances apps. Rebalancing 100 cells takes longer and demands more processing overhead than rebalancing 20 cells.
Diego Brain | ≥ 2 | One per AZ, or two if only one AZ.
Diego BBS | ≥ 2 | One per AZ, or two if only one AZ.
PostgreSQL Server | 0 or 1 | 0 if Postgres database is external.
MySQL Proxy | ≥ 2 |
NATS Server | ≥ 2 | You might run a single NATS instance if you lack the resources to deploy two stable NATS servers. Components using NATS are resilient to message failures and the BOSH resurrector recovers the NATS VM quickly if it becomes non-responsive.
Cloud Controller API | ≥ 2 | Scale the Cloud Controller to accommodate the number of requests to the API and the number of apps in the system.
Cloud Controller Worker | ≥ 2 | Scale the Cloud Controller to accommodate the number of asynchronous requests to the API and background jobs.
Router | ≥ 2 | Scale the router to accommodate the number of incoming requests. Additional instances increase available bandwidth. In general, this load is much less than the load on host VMs.
UAA | ≥ 2 |
Doppler Server | ≥ 2 | Deploying additional Doppler servers splits traffic across them. For high availability, use at least two per Availability Zone.
Loggregator TC | ≥ 2 | Deploying additional Loggregator Traffic Controllers allows you to direct traffic to them in a round-robin manner. For high availability, use at least two per Availability Zone.
etcd | ≥ 3 | Set this to an odd number equal to or one greater than the number of AZs you have, in order to maintain quorum. Distribute the instances evenly across the AZs, at least one instance per AZ.
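In a BOSH deployment manifest, these counts map to the instances field of each instance group. The sketch below scales a few of the components from the table across three AZs; the instance group names follow cf-deployment conventions and may differ in your deployment, and the counts are examples only.

```yaml
# Illustrative instance counts, spread across three AZs as recommended above.
instance_groups:
- name: router
  instances: 3
  azs: [z1, z2, z3]
- name: diego-cell
  instances: 6
  azs: [z1, z2, z3]
- name: diego-api      # runs the Diego BBS
  instances: 3
  azs: [z1, z2, z3]
- name: nats
  instances: 2
  azs: [z1, z2]
```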

Configure Support for High Availability Components

This section describes the surrounding infrastructure required to support scaling component VMs for high availability.

Setting max_in_flight values

For each component, the variable max_in_flight limits how many instances of that component are restarted simultaneously during updates or upgrades. You set max_in_flight in the manifest as a system-wide value, plus any component-specific overrides. Values for max_in_flight can be any integer between 1 and 100.
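In a BOSH manifest, the system-wide value lives in the top-level update block, and individual instance groups can override it. The following hedged sketch pairs a conservative global setting with a quorum-safe override for the BBS; the values and instance group name are examples only.

```yaml
# Illustrative update settings: a system-wide max_in_flight plus a per-component override.
update:
  canaries: 1
  max_in_flight: 2                  # system-wide default
  canary_watch_time: 30000-300000
  update_watch_time: 30000-300000

instance_groups:
- name: diego-api                   # quorum-based (runs the BBS): restart one instance at a time
  instances: 3
  update:
    max_in_flight: 1
```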

To ensure zero downtime during updates, set max_in_flight for each component to a number low enough to prevent overburdening the component instances left running. Here are some guidelines:

  • For host VMs, the closer their resource usage is to 100%, the lower you should set max_in_flight, to allow non-migrating cells to pick up the work of cells stopping and restarting for migration. If resource usage is already close to 100%, scale up your host VMs before any updates.
  • For quorum-based components like etcd and Diego BBS, set max_in_flight to 1.
  • For other components, set max_in_flight to the number of instances that you can afford to have down at any one time. This depends on your capacity planning. With higher redundancy, you can make the number high so that updates run faster. But if your components are at high utilization, you should keep the number low.

Never set max_in_flight to a value greater than or equal to the number of instances you have running for a component.

Resource Pools

To configure your resource pools according to the requirements of your deployment, see Building a Manifest in the Cloud Foundry BOSH documentation.
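As a hedged example of the legacy manifest style that Building a Manifest describes, a resource pool ties a component to a stemcell, a network, and IaaS-specific sizing; newer BOSH manifests express the same idea with vm_types in the cloud config, as shown earlier. The pool name, stemcell, and sizing below are placeholders.

```yaml
# Illustrative legacy resource_pools block; stemcell, sizing, and pool name are placeholders.
resource_pools:
- name: router
  network: cf-private
  size: 4                           # keep the pool at least as large as the scaled instance count
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  cloud_properties:
    instance_type: m4.large         # AWS example; IaaS-specific
```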

Each IaaS has different ways of limiting resource consumption for scaling VMs. Consult with your IaaS administrator to ensure additional VMs and related resources, like IPs and storage, will be available when scaling.

Databases

For database services deployed outside Cloud Foundry, plan to leverage your infrastructure’s high availability features and to configure backup and restore where possible.

Note: Data services may have single points of failure depending on their configuration.
