# Service Mesh Architecture

This topic describes the routing flow and architecture of the service mesh data plane and control plane in Cloud Foundry.
The ingress service mesh data plane is a parallel routing path for ingress traffic to apps on Cloud Foundry Application Runtime. It is deployed alongside the existing Cloud Foundry routing tier and manages Istio routes for applications.

A route is managed by Istio if it is associated with an Istio-managed domain. These domains are specified in the deployment manifest. The following steps describe how an Istio-managed route is created:
- A new route is added to CAPI and mapped to one or more applications.
- The route and mapping are sent to Copilot.
- Copilot exposes that configuration in a format Pilot can understand, and Pilot polls for it.
- Pilot distributes the configuration to the ingress Envoys.
Once the route is configured, an ingress request is routed as follows:

- The request hits your load balancer.
- The load balancer directs the request to one of your ingress Envoys (on the istio-router VM).
- The ingress Envoy then chooses which app container to send the request to.
- The app container has an iptables rule that DNATs the request to its local Envoy sidecar.
- The Envoy sidecar passes the request along to the application.
Service mesh container-to-container networking allows applications to communicate internally via a sidecar Envoy. This enables more feature-rich internal routing, including client-side load balancing, timeouts, and retries. The following steps describe how an internal route is configured:
- A new internal route is added to CAPI and mapped to one or more apps. This route gets an associated VIP.
- CAPI tells Copilot the route information for the app.
- BBS sends the app location to Copilot.
- Copilot tells the Pilot on each Diego cell the route information for the app.
- The Pilot on each Diego cell configures every sidecar Envoy on its cell.
Once the internal route is configured, a request is routed as follows:

- AppA makes an HTTP request to an internal service mesh domain for AppB.
- The internal service mesh domain is resolved to a VIP.
- AppA sends traffic to the VIP which gets rerouted to the sidecar Envoy.
- The sidecar Envoy picks an instance associated with the VIP based on default load balancing settings and sends traffic to that instance of AppB.
- Network policy is enforced on the Diego cell for AppB. If successful, traffic continues to AppB.
  a. If the request takes too long to connect to AppB, it times out.
  b. On a 500 error, the Envoy sidecar retries up to 3 times.
The following table lists each component in the service mesh architecture and describes its function.
|Component|Function|
|---|---|
|CAPI|Cloud Controller receives API requests from the cf CLI and stores information about routes. It distributes this route information to Copilot.|
|BBS|BBS sends information about apps across all Diego cells to Copilot.|
|Copilot|Copilot acts as an interface between Cloud Foundry routes and Istio configuration types. It sends configuration to Pilot through the Mesh Configuration Protocol (MCP).|
|Pilot|Pilot is an Istio component that can accept configuration from multiple sources simultaneously and distribute it intelligently across ingress and sidecar Envoys. There is one Pilot on the Istio Control VM and one on every Diego cell.|
|Envoy|Envoy is a lightweight proxy designed for microservices. It routes traffic based on configuration it receives from Pilot and emits in-depth metrics about that traffic. There is one edge Envoy for ingress traffic and a sidecar Envoy in every app container.|
|Load Balancer|The load balancer is a reverse proxy, provided by the IaaS or a physical machine, that distributes network traffic across the ingress Envoys while presenting a single public endpoint. This is not the same load balancer used by Gorouter.|
|istio-release|A BOSH release that deploys Istio-related components and configures any existing components to use them.|