Cloud Foundry Components
Cloud Foundry components include a self-service application execution engine, an automation engine for application deployment and lifecycle management, and a scriptable command line interface (CLI), as well as integration with development tools to ease deployment processes. Cloud Foundry has an open architecture that includes a buildpack mechanism for adding frameworks, an application services interface, and a cloud provider interface.
Refer to the descriptions below for more information about Cloud Foundry components. Some descriptions include links to more detailed documentation.
The router routes incoming traffic to the appropriate component, either a Cloud Controller component or a hosted application running on a Diego Cell.
The router periodically queries the Diego Bulletin Board System (BBS) to determine which cells and containers each application currently runs on. Using this information, the router recomputes new routing tables based on the IP addresses of each cell virtual machine (VM) and the host-side port numbers for the cell’s containers.
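As a rough sketch of that recomputation, assuming simplified, hypothetical types (the real Gorouter’s data structures are more involved), the routing table maps each route URI to the cell-IP-and-host-port pairs of its backends:

```go
package main

import "fmt"

// Hypothetical, simplified view of what the router learns from the BBS:
// for each app instance, the IP of the cell VM and the host-side port
// that maps into the instance's container.
type Endpoint struct {
	CellIP   string
	HostPort int
}

// recomputeRoutes rebuilds the routing table: each route (URI) maps to
// the current set of backend addresses for that app.
func recomputeRoutes(instances map[string][]Endpoint) map[string][]string {
	table := make(map[string][]string)
	for route, eps := range instances {
		for _, ep := range eps {
			table[route] = append(table[route], fmt.Sprintf("%s:%d", ep.CellIP, ep.HostPort))
		}
	}
	return table
}

func main() {
	table := recomputeRoutes(map[string][]Endpoint{
		"myapp.example.com": {{"10.10.1.5", 61001}, {"10.10.1.7", 61004}},
	})
	fmt.Println(table["myapp.example.com"]) // [10.10.1.5:61001 10.10.1.7:61004]
}
```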
The OAuth2 server (the UAA) and Login Server work together to provide identity management.
The Cloud Controller (CC) directs the deployment of applications. When a developer pushes an application to Cloud Foundry, she is targeting the Cloud Controller. The Cloud Controller then directs the Diego Brain through the CC-Bridge to coordinate individual Diego cells to stage and run applications.
In pre-Diego architecture, the Cloud Controller’s Droplet Execution Agent (DEA) performed these app lifecycle tasks.
To keep applications available, cloud deployments must constantly monitor their states and reconcile them with their expected states, starting and stopping processes as required. In pre-Diego architecture, the Health Manager (HM9000) performed this function. The nsync, Converger, and Cell Reps use a more distributed approach.
These three components work along a spectrum of representations that extends between the user and the application containers. At one end, the user sets how each application should be scaled. At the other end, instances of that application running on widely distributed VMs may crash or become unavailable.
The nsync, Converger, and Cell Rep components work together along this spectrum to keep apps running as they should. The nsync watches for changes in application scaling instructions from the Cloud Controller and writes them into DesiredLRP structures in Diego’s internal BBS database. Inside each cell, the Cell Rep watches Garden for the state and health of the application instances running in the cell’s containers, and updates the corresponding ActualLRP values in the shared BBS as they change locally. At the center of the process, the Diego Brain’s Converger component monitors the ActualLRP values and launches or kills application instances as needed to reconcile ActualLRP counts with the scaling instructions recorded in the DesiredLRP structures.
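A minimal sketch of that reconciliation logic, with illustrative stand-ins for the BBS records (the struct and field names below are hypothetical, not Diego’s actual schema):

```go
package main

import "fmt"

// Illustrative stand-ins for Diego's BBS records (field names hypothetical).
type DesiredLRP struct {
	ProcessGUID string
	Instances   int // how many instances the user asked for
}

type ActualLRP struct {
	ProcessGUID string
	Index       int
	State       string // e.g. "RUNNING" or "CRASHED"
}

// converge compares desired and actual state and returns the number of
// instances to start (positive) or stop (negative) per process.
func converge(desired []DesiredLRP, actual []ActualLRP) map[string]int {
	running := make(map[string]int)
	for _, a := range actual {
		if a.State == "RUNNING" {
			running[a.ProcessGUID]++
		}
	}
	deltas := make(map[string]int)
	for _, d := range desired {
		if delta := d.Instances - running[d.ProcessGUID]; delta != 0 {
			deltas[d.ProcessGUID] = delta
		}
	}
	return deltas
}

func main() {
	deltas := converge(
		[]DesiredLRP{{ProcessGUID: "web-app", Instances: 3}},
		[]ActualLRP{
			{ProcessGUID: "web-app", Index: 0, State: "RUNNING"},
			{ProcessGUID: "web-app", Index: 1, State: "CRASHED"},
		},
	)
	fmt.Println(deltas) // map[web-app:2] -> start two more instances
}
```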
The blob store is a repository for large binary files, which GitHub cannot easily manage because GitHub is designed for code. Blob store binaries include:
- Application code packages
- Buildpacks
- Droplets
Each application VM has a Diego Cell that executes application start and stop actions locally, manages the VM’s containers, and reports app status and other data to the BBS and Loggregator.
In pre-Diego CF architecture, the DEA node performed the task of managing the applications and containers on a VM.
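A toy sketch of that local reporting loop, with Garden and the BBS stubbed out (all names here are hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative view of what a cell rep reports for each local instance.
type ContainerStatus struct {
	InstanceGUID string
	State        string // e.g. "RUNNING" or "CRASHED"
}

func pollGarden() []ContainerStatus {
	// Stub: a real rep would query the Garden API on this cell.
	return []ContainerStatus{{InstanceGUID: "web-app-0", State: "RUNNING"}}
}

func reportToBBS(s ContainerStatus) {
	// Stub: a real rep would update the matching ActualLRP in the BBS.
	fmt.Printf("ActualLRP %s -> %s\n", s.InstanceGUID, s.State)
}

func main() {
	for i := 0; i < 3; i++ { // a real rep loops indefinitely
		for _, s := range pollGarden() {
			reportToBBS(s)
		}
		time.Sleep(1 * time.Second)
	}
}
```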
Applications typically depend on services such as databases or third-party SaaS providers. When a developer provisions and binds a service to an application, the service broker for that service is responsible for providing the service instance.
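From the Cloud Controller’s side, a broker is just an HTTP server: provisioning arrives as a PUT to a /v2/service_instances/... endpoint of the broker API. The sketch below shows only that one endpoint; everything inside the handler, including the provisionDatabase helper, is a made-up stand-in, and real brokers also implement catalog, bind, unbind, and deprovision endpoints.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

// A minimal sketch of a service broker's provisioning endpoint, the
// call the Cloud Controller makes when a developer runs `cf create-service`.
func main() {
	http.HandleFunc("/v2/service_instances/", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPut {
			w.WriteHeader(http.StatusMethodNotAllowed)
			return
		}
		instanceID := strings.TrimPrefix(r.URL.Path, "/v2/service_instances/")
		// A real broker would create the backing resource here, e.g.
		// provision a database and store credentials for later binding.
		provisionDatabase(instanceID) // hypothetical helper
		w.WriteHeader(http.StatusCreated)
		json.NewEncoder(w).Encode(map[string]string{})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

func provisionDatabase(instanceID string) {
	// Stub: stands in for whatever backing service the broker manages.
}
```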
Cloud Foundry component VMs communicate with each other internally through HTTP and HTTPS protocols, sharing temporary messages and data stored in two locations:
A Consul server stores longer-lived control data, such as component IP addresses and distributed locks that prevent components from duplicating actions.
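A sketch of how a component instance might take such a distributed lock, using the hashicorp/consul/api Go client; the lock key is made up, and CF components use their own lock paths and helper libraries:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// A session ties the lock to this process; if the session expires,
	// Consul releases the lock so another instance can take over.
	sessionID, _, err := client.Session().Create(&api.SessionEntry{
		TTL:      "15s",
		Behavior: api.SessionBehaviorDelete,
	}, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Acquire the lock by writing a key bound to our session. Only one
	// holder can own the key at a time, which prevents duplicated actions.
	acquired, _, err := client.KV().Acquire(&api.KVPair{
		Key:     "locks/my-component", // hypothetical lock key
		Value:   []byte("instance-1"),
		Session: sessionID,
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("lock acquired:", acquired)
}
```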
Diego’s Bulletin Board System (BBS) stores more frequently updated and disposable data such as cell and application status, unallocated work, and heartbeat messages. The BBS is currently implemented in etcd.
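To illustrate why that data is disposable: a heartbeat key can be attached to a lease so it vanishes automatically if its writer stops renewing it. The sketch below uses the etcd v3 Go client; the key and value are made up, and Diego’s actual BBS schema is internal to Diego.

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// Grant a 10-second lease; a live component would keep renewing it.
	lease, err := cli.Grant(ctx, 10)
	if err != nil {
		log.Fatal(err)
	}

	// Store a heartbeat under the lease: if the writer dies, the key
	// expires and readers see the instance as gone.
	if _, err := cli.Put(ctx, "cells/cell-7/heartbeat", "alive",
		clientv3.WithLease(lease.ID)); err != nil {
		log.Fatal(err)
	}
}
```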
The route-emitter component uses the NATS protocol to broadcast the latest routing tables to the routers. In pre-Diego CF architecture, the NATS Message Bus carried all internal component communications.
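A sketch of such a broadcast using the nats.go client: the router historically listens on the router.register subject for route registrations. Treat the exact message shape below as illustrative rather than a complete specification.

```go
package main

import (
	"encoding/json"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Announce where one app instance can be reached.
	msg, err := json.Marshal(map[string]interface{}{
		"host": "10.10.1.5",                   // cell VM IP
		"port": 61001,                         // host-side container port
		"uris": []string{"myapp.example.com"}, // routes for the app
	})
	if err != nil {
		log.Fatal(err)
	}
	if err := nc.Publish("router.register", msg); err != nil {
		log.Fatal(err)
	}
}
```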
The metrics collector gathers metrics and statistics from the components. Operators can use this information to monitor a Cloud Foundry deployment.
The Loggregator (log aggregator) system streams application logs to developers.