Customizing the Cloud Foundry Deployment Manifest for AWS

This topic describes how to create the Cloud Foundry deployment manifest for Amazon Web Services (AWS). Before creating a manifest, you must have already set up an environment for Cloud Foundry on AWS and deployed BOSH on AWS.

To create a Cloud Foundry manifest, you must perform the following steps:

  1. Use the BOSH CLI to retrieve your BOSH Director UUID, which you use to customize your manifest stub.
  2. Create a manifest stub in YAML format. See the example manifest stub for AWS below, and follow the editing instructions to customize it for your deployment.
  3. Use a script to combine the manifest stub with other configuration files in the cf-release repository to generate your deployment manifest.

Note: AWS defaults to using Fog with AWS credentials for the Cloud Controller blobstore. For alternative blobstore configurations, see the Cloud Controller Blobstore Configuration topic.

Step 1: Retrieve Your BOSH Director UUID

To perform these procedures, you must have installed the BOSH CLI.

  1. Use the bosh target command with the address of your BOSH Director to connect to the BOSH Director. Log in with the default user name and password, admin and admin, or use the username and password that you set when you installed BOSH.
    $ bosh target https://bosh.my-domain.example.com
    Target set to `bosh'
    Your username: admin
    Enter password: *****
    Logged in as 'admin'
    
  2. Use the bosh status --uuid command to view information about your BOSH deployment. Record the UUID of the BOSH Director. You use the UUID when customizing the Cloud Foundry deployment manifest stub.
    $ bosh status --uuid
    abcdef12-3456-7890-abcd-ef1234567890
    

Step 2: Create Your Manifest Stub

Review the example manifest stub for AWS, and then follow the editing instructions to customize it for your deployment.

Cloud Foundry Deployment Manifest Stub for AWS

---
meta:
  environment: ENVIRONMENT

director_uuid: DIRECTOR_UUID

networks:
- name: cf1
  subnets:
    - range: 10.10.16.0/20
      reserved:
        - 10.10.16.2 - 10.10.16.9
      static:
        - 10.10.16.10 - 10.10.16.255
      gateway: 10.10.16.1
      dns:
        - 10.10.0.2
      cloud_properties:
        security_groups:
          - cf
        subnet: (( properties.template_only.aws.subnet_ids.cf1 ))
- name: cf2
  subnets:
    - range: 10.10.80.0/20
      reserved:
        - 10.10.80.2 - 10.10.80.9
      static:
        - 10.10.80.10 - 10.10.80.255
      gateway: 10.10.80.1
      dns:
        - 10.10.0.2
      cloud_properties:
        security_groups:
          - cf
        subnet: (( properties.template_only.aws.subnet_ids.cf2 ))

properties:
  template_only:
    aws:
      access_key_id: AWS_ACCESS_KEY
      secret_access_key: AWS_SECRET_ACCESS_KEY
      availability_zone: ZONE_1
      availability_zone2: ZONE_2
      subnet_ids:
        cf1: SUBNET_ID_1
        cf2: SUBNET_ID_2

  system_domain: SYSTEM_DOMAIN
  system_domain_organization: SYSTEM_DOMAIN_ORGANIZATION
  app_domains:
   - APP_DOMAIN

  ssl:
    skip_cert_verify: true

  cc:
    staging_upload_user: STAGING_UPLOAD_USER
    staging_upload_password: STAGING_UPLOAD_PASSWORD
    bulk_api_password: BULK_API_PASSWORD
    db_encryption_key: CCDB_ENCRYPTION_KEY
    mutual_tls:
      ca_cert: CC_MUTUAL_TLS_CA_CERT
      public_cert: CC_MUTUAL_TLS_PUBLIC_CERT
      private_key: CC_MUTUAL_TLS_PRIVATE_KEY
  ccdb:
    db_scheme: CCDB_SCHEME
    roles:
      - tag: admin
        name: CCDB_USER_NAME
        password: CCDB_PASSWORD
    databases:
      - tag: cc
        name: ccdb
    address: CCDB_ADDRESS
    port: CCDB_PORT
  consul:
    encrypt_keys:
      - CONSUL_ENCRYPT_KEY
    ca_cert: CONSUL_CA_CERT
    server_cert: CONSUL_SERVER_CERT
    server_key: CONSUL_SERVER_KEY
    agent_cert: CONSUL_AGENT_CERT
    agent_key: CONSUL_AGENT_KEY
  etcd:
    require_ssl: true
    ca_cert: ETCD_CA_CERT
    client_cert: ETCD_CLIENT_CERT
    client_key: ETCD_CLIENT_KEY
    peer_ca_cert: ETCD_PEER_CA_CERT
    peer_cert: ETCD_PEER_CERT
    peer_key: ETCD_PEER_KEY
    server_cert: ETCD_SERVER_CERT
    server_key: ETCD_SERVER_KEY
  login:
    saml:
      serviceProviderKey: SERVICE_PROVIDER_PRIVATE_KEY
  loggregator:
    tls:
      ca_cert: LOGGREGATOR_CA_CERT
      doppler:
        cert: LOGGREGATOR_DOPPLER_CERT
        key: LOGGREGATOR_DOPPLER_KEY
      trafficcontroller:
        cert: LOGGREGATOR_TRAFFICCONTROLLER_CERT
        key: LOGGREGATOR_TRAFFICCONTROLLER_KEY
      metron:
        cert: LOGGREGATOR_METRON_CERT
        key: LOGGREGATOR_METRON_KEY
      syslogdrainbinder:
        cert: LOGGREGATOR_SYSLOGDRAINBINDER_CERT
        key: LOGGREGATOR_SYSLOGDRAINBINDER_KEY
  loggregator_endpoint:
    shared_secret: LOGGREGATOR_ENDPOINT_SHARED_SECRET
  nats:
    user: NATS_USER
    password: NATS_PASSWORD
  router:
    status:
      user: ROUTER_USER
      password: ROUTER_PASSWORD
  uaa:
    admin:
      client_secret: ADMIN_SECRET
    ca_cert: UAA_CA_CERT
    cc:
      client_secret: CC_CLIENT_SECRET
    clients:
      cc_routing:
        secret: CC_ROUTING_SECRET
      cloud_controller_username_lookup:
        secret: CLOUD_CONTROLLER_USERNAME_LOOKUP_SECRET
      doppler:
        secret: DOPPLER_SECRET
      gorouter:
        secret: GOROUTER_SECRET
      tcp_emitter:
        secret: TCP-EMITTER-SECRET
      tcp_router:
        secret: TCP-ROUTER-SECRET
      login:
        secret: LOGIN_CLIENT_SECRET
      notifications:
        secret: NOTIFICATIONS_CLIENT_SECRET
      cc-service-dashboards:
        secret: CC_SERVICE_DASHBOARDS_SECRET
    jwt:
      verification_key: JWT_VERIFICATION_KEY
      signing_key: JWT_SIGNING_KEY
    scim:
      users:
      - name: admin
        password: ADMIN_PASSWORD
        groups:
        - scim.write
        - scim.read
        - openid
        - cloud_controller.admin
        - doppler.firehose
    sslCertificate: UAA_SERVER_CERT
    sslPrivateKey: UAA_SERVER_KEY
  uaadb:
    db_scheme: UAADB_SCHEME
    roles:
      - tag: admin
        name: UAADB_USER_NAME
        password: UAADB_USER_PASSWORD
    databases:
      - tag: uaa
        name: uaadb
    address: UAADB_ADDRESS
    port: UAADB_PORT
  hm9000:
    server_key: HM9000_SERVER_KEY
    server_cert: HM9000_SERVER_CERT
    client_key: HM9000_CLIENT_KEY
    client_cert: HM9000_CLIENT_CERT
    ca_cert: HM9000_CA_CERT

Editing Instructions


meta:
  environment: ENVIRONMENT
    
Replace ENVIRONMENT with an arbitrary name describing your environment, for example aws-prod.

director_uuid: DIRECTOR_UUID
    
Replace DIRECTOR_UUID with the BOSH Director UUID. Run the BOSH CLI command bosh status --uuid to view the BOSH Director UUID.
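If you keep your stub in a file named cf-stub.yml, this substitution can be scripted. The sketch below uses the example UUID from Step 1 and assumes GNU sed (on BSD/macOS sed, use `sed -i ''` instead of `sed -i`); in practice, capture the UUID with `uuid=$(bosh status --uuid)`:

```shell
# Substitute the Director UUID into the stub (example UUID from Step 1).
uuid="abcdef12-3456-7890-abcd-ef1234567890"
printf 'director_uuid: DIRECTOR_UUID\n' > cf-stub.yml   # stand-in stub line
sed -i "s/DIRECTOR_UUID/${uuid}/" cf-stub.yml
cat cf-stub.yml
```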

networks:
- name: cf1
  subnets:
    - range: 10.10.16.0/20
      reserved:
        - 10.10.16.2 - 10.10.16.9
      static:
        - 10.10.16.10 - 10.10.16.255
      gateway: 10.10.16.1
      dns:
        - 10.10.0.2
      cloud_properties:
        security_groups:
          - cf
        subnet: (( properties.template_only.aws.subnet_ids.cf1 ))
- name: cf2
  subnets:
    - range: 10.10.80.0/20
      reserved:
        - 10.10.80.2 - 10.10.80.9
      static:
        - 10.10.80.10 - 10.10.80.255
      gateway: 10.10.80.1
      dns:
        - 10.10.0.2
      cloud_properties:
        security_groups:
          - cf
        subnet: (( properties.template_only.aws.subnet_ids.cf2 ))
      
If you used bosh aws create to create your AWS environment, change the IP addresses here if needed to fit the subnet range defined in your `bosh.yml` file.

This example assumes you have two subnets in your AWS VPC with CIDRs 10.10.16.0/20 and 10.10.80.0/20, respectively. Update the values for range, reserved, static, and gateway accordingly if the CIDRs for your subnets are different.

This example also assumes that you have a security group cf suitable for your Cloud Foundry VMs. Change this to the name of your security group if necessary.

properties:
  template_only:
    aws:
      access_key_id: AWS_ACCESS_KEY
      secret_access_key: AWS_SECRET_ACCESS_KEY
      availability_zone: ZONE_1
      availability_zone2: ZONE_2
      subnet_ids:
        cf1: SUBNET_ID_1
        cf2: SUBNET_ID_2
      
Replace AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY with AWS credentials to allow Cloud Controller to manage assets in the S3 buckets you have prepared for this deployment.

Replace ZONE_1 and ZONE_2 with two EC2 Availability Zones that you want to distribute your deployment across.

Replace SUBNET_ID_1 and SUBNET_ID_2 with the VPC subnet IDs corresponding to the subnets configured in the networks section above.

If you used bosh aws create, the key and zone values are in the bosh_environment file that you originally bootstrapped from. The subnet IDs correspond to the subnets `cf1` and `cf2`.

  system_domain: SYSTEM_DOMAIN
  system_domain_organization: SYSTEM_DOMAIN_ORGANIZATION
  app_domains:
   - APP_DOMAIN
      
Replace SYSTEM_DOMAIN with the domain to be used for all system components. For instance, the Cloud Controller API will be reachable through api.SYSTEM_DOMAIN. Replace APP_DOMAIN with the domain you want associated with applications pushed to your Cloud Foundry installation. For instance, if you push my-app, it will be available through my-app.APP_DOMAIN.

We recommend that you provide separate values for SYSTEM_DOMAIN and APP_DOMAIN, for example sys.cloud-09.cf-app.com and apps.cloud-09.cf-app.com. For simple deployments, you can use the same domain for both, for example cloud-09.cf-app.com. However, you cannot have one domain property extend the other, for example you cannot have your SYSTEM_DOMAIN set to cloud-09.cf-app.com and your APP_DOMAIN set to apps.cloud-09.cf-app.com.

If you used bosh aws create, concatenate the BOSH_VPC_SUBDOMAIN and BOSH_VPC_DOMAIN values from the bosh_environment file, and use the concatenated string for both SYSTEM_DOMAIN and APP_DOMAIN.

Choose a name for the SYSTEM_DOMAIN_ORGANIZATION. This organization will be created and configured to own the SYSTEM_DOMAIN.
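For example, a stub fragment using separate system and app domains might look like the following. The domains reuse the sample values above; the organization name system is an arbitrary example:

```yaml
  system_domain: sys.cloud-09.cf-app.com
  system_domain_organization: system
  app_domains:
   - apps.cloud-09.cf-app.com
```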

  ssl:
    skip_cert_verify: true
    
Set skip_cert_verify to true to skip SSL certificate verification. If you want to use SSL certificates to secure traffic into your deployment, see the Securing Traffic into Cloud Foundry topic.

  cc:
    staging_upload_user: STAGING_UPLOAD_USER
    staging_upload_password: STAGING_UPLOAD_PASSWORD
    bulk_api_password: BULK_API_PASSWORD
    db_encryption_key: CCDB_ENCRYPTION_KEY
    
The Cloud Controller API endpoint requires basic authentication. Replace STAGING_UPLOAD_USER and STAGING_UPLOAD_PASSWORD with a username and password of your choosing.

Replace BULK_API_PASSWORD with a password of your choosing. Health Manager uses this password to access the Cloud Controller bulk API.

Replace CCDB_ENCRYPTION_KEY with a secure key that you generate to encrypt sensitive values in the Cloud Controller database. You can use any random string. For example, run the following command from a command line to generate a 32-character random string: LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 ; echo
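The same command can generate a distinct value for each credential placeholder. A minimal sketch, using the placeholder names from this stub:

```shell
# Generate one 32-character random string per secret placeholder.
gen_secret() {
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32
}
for name in STAGING_UPLOAD_PASSWORD BULK_API_PASSWORD CCDB_ENCRYPTION_KEY; do
  printf '%s=%s\n' "$name" "$(gen_secret)"
done
```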

  ccdb:
    db_scheme: CCDB_SCHEME
    roles:
      - tag: admin
        name: CCDB_USER_NAME
        password: CCDB_PASSWORD
    databases:
      - tag: cc
        name: ccdb
    address: CCDB_ADDRESS
    port: CCDB_PORT
    
This section of the stub defines how the Cloud Controller connects to its database. The values depend on how you deployed your database.

If you used bosh aws create, find the necessary values in the generated aws_rds_bosh_receipt.yml file. The database is an Amazon RDS instance, and the CCDB_* values in the stub must match the scheme, username, password, address, and port defined by AWS.

If you deployed your database without bosh aws create, such as by using the postgres job in cf-release, you must set the CCDB_* values to match the configuration of the database node. If you are using PostgreSQL, your database must have the required extensions available for Cloud Foundry: uuid-ossp, pgcrypto, and citext. The db_scheme for a PostgreSQL database is postgresql, not postgres.
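For a self-managed PostgreSQL database, the required extensions can be enabled with a short SQL script. This sketch writes the SQL to a file; apply it with psql against your CCDB, for example `psql -h CCDB_ADDRESS -p CCDB_PORT -U CCDB_USER_NAME -d ccdb -f ccdb-extensions.sql` (substituting your real connection values):

```shell
# Write the SQL that makes the required extensions available on the CCDB.
cat > ccdb-extensions.sql <<'SQL'
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE EXTENSION IF NOT EXISTS citext;
SQL
```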

  consul:
    encrypt_keys:
      - CONSUL_ENCRYPT_KEY
    ca_cert: CONSUL_CA_CERT
    server_cert: CONSUL_SERVER_CERT
    server_key: CONSUL_SERVER_KEY
    agent_cert: CONSUL_AGENT_CERT
    agent_key: CONSUL_AGENT_KEY
      
See the Security Configuration for Consul topic.

  login:
    saml:
      serviceProviderKey: SERVICE_PROVIDER_PRIVATE_KEY
    
Generate a PEM-encoded RSA key pair by running the command openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout key.pem -out cert.pem. This command creates cert.pem, which contains your public certificate, and key.pem, which contains your private key. Replace SERVICE_PROVIDER_PRIVATE_KEY with the full private key, including the BEGIN and END delimiter lines.
For RSA keys, you only need to configure the private key.
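The key-pair command from above as a runnable sketch; the -subj flag suppresses the interactive prompts, and the CN value is an arbitrary placeholder:

```shell
# Generate a self-signed certificate and PEM-encoded RSA private key.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=saml-example"
openssl rsa -in key.pem -check -noout   # sanity-check the private key
```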

  loggregator:
    tls:
      ca_cert: LOGGREGATOR_CA_CERT
      doppler:
        cert: LOGGREGATOR_DOPPLER_CERT
        key: LOGGREGATOR_DOPPLER_KEY
      trafficcontroller:
        cert: LOGGREGATOR_TRAFFICCONTROLLER_CERT
        key: LOGGREGATOR_TRAFFICCONTROLLER_KEY
      metron:
        cert: LOGGREGATOR_METRON_CERT
        key: LOGGREGATOR_METRON_KEY
      syslogdrainbinder:
        cert: LOGGREGATOR_SYSLOGDRAINBINDER_CERT
        key: LOGGREGATOR_SYSLOGDRAINBINDER_KEY
    
To generate the certificates and keys for the Loggregator components, you need:
  • The original CA certificate and key used to sign the keypairs for TLS between the Cloud Controller and the Diego BBS
  • The generate-loggregator-certs script from the cf-release repo
Generate the certificates and keys for Loggregator using the generate-loggregator-certs script as follows:
$ generate-loggregator-certs CA_CERT CA_KEY
Where CA_CERT is the path and filename for the original CA certificate and CA_KEY is the path and filename for the corresponding key.
For example,
$ ./scripts/generate-loggregator-certs cf-ca.cert cf-ca.key
This script outputs a directory named loggregator-certs that contains a set of files with the certificates and keys you need for Loggregator.
In the stub, replace each placeholder with the contents of the corresponding file:
  • LOGGREGATOR_CA_CERT: loggregator-ca.crt
  • LOGGREGATOR_DOPPLER_CERT: doppler.crt
  • LOGGREGATOR_DOPPLER_KEY: doppler.key
  • LOGGREGATOR_TRAFFICCONTROLLER_CERT: trafficcontroller.crt
  • LOGGREGATOR_TRAFFICCONTROLLER_KEY: trafficcontroller.key
  • LOGGREGATOR_METRON_CERT: metron.crt
  • LOGGREGATOR_METRON_KEY: metron.key
  • LOGGREGATOR_SYSLOGDRAINBINDER_CERT: syslogdrainbinder.crt
  • LOGGREGATOR_SYSLOGDRAINBINDER_KEY: syslogdrainbinder.key

  loggregator_endpoint:
    shared_secret: LOGGREGATOR_ENDPOINT_SHARED_SECRET
    
Generate a string secret and replace LOGGREGATOR_ENDPOINT_SHARED_SECRET.

  nats:
    user: NATS_USER
    password: NATS_PASSWORD
      
Replace NATS_USER and NATS_PASSWORD with a username and secure password of your choosing. Cloud Foundry components use these credentials to communicate with each other over the NATS message bus.

  router:
    status:
      user: ROUTER_USER
      password: ROUTER_PASSWORD
      
Replace ROUTER_USER and ROUTER_PASSWORD with a username and secure password of your choosing.

  uaa:
    admin:
      client_secret: ADMIN_SECRET
    cc:
      client_secret: CC_CLIENT_SECRET
    clients:
      cc-service-dashboards:
        secret: CC_SERVICE_DASHBOARDS_SECRET
      cc_routing:
        secret: CC_ROUTING_SECRET
      cloud_controller_username_lookup:
        secret: CLOUD_CONTROLLER_USERNAME_LOOKUP_SECRET
      doppler:
        secret: DOPPLER_SECRET
      gorouter:
        secret: GOROUTER_SECRET
      tcp_emitter:
        secret: TCP-EMITTER-SECRET
      tcp_router:
        secret: TCP-ROUTER-SECRET
      login:
        secret: LOGIN_CLIENT_SECRET
      notifications:
        secret: NOTIFICATIONS_CLIENT_SECRET
    
Replace the values for all secret keys with secure secrets that you generate.

    jwt:
      verification_key: JWT_VERIFICATION_KEY
      signing_key: JWT_SIGNING_KEY
    
Generate a PEM-encoded RSA key pair, and replace JWT_SIGNING_KEY with the private key and JWT_VERIFICATION_KEY with the corresponding public key. For example, run openssl genrsa -out jwt-key.pem 2048 to create the private key, then run openssl rsa -in jwt-key.pem -pubout > jwt-key.pem.pub to extract the public key. This creates jwt-key.pem, which contains your private key, and jwt-key.pem.pub, which contains your public key.
Copy in the full keys, including the BEGIN and END delimiter lines.

    scim:
      users:
      - name: admin
        password: ADMIN_PASSWORD
        groups:
          - scim.write
          - scim.read
          - openid
          - cloud_controller.admin
          - doppler.firehose
    
Generate a secure password and replace ADMIN_PASSWORD with that value to set the password for the Admin user of your Cloud Foundry installation.

  uaadb:
    db_scheme: UAADB_SCHEME
    roles:
      - tag: admin
        name: UAADB_USER_NAME
        password: UAADB_USER_PASSWORD
    databases:
      - tag: uaa
        name: uaadb
    address: UAADB_ADDRESS
    port: UAADB_PORT
      
This section of the stub defines how the UAA connects to its database. The values depend on how you deployed your database.

If you used bosh aws create, find the necessary values in the generated aws_rds_bosh_receipt.yml file. The database is an Amazon RDS instance, and the UAADB_* values in the stub must match the scheme, username, password, address, and port defined by AWS.

If you deployed your database without bosh aws create, such as by using the postgres job in cf-release, you must set the UAADB_* values to match the configuration of the database node. If you are using PostgreSQL, your database must have the required extensions available for Cloud Foundry: uuid-ossp, pgcrypto, and citext. The db_scheme for a PostgreSQL database is postgresql, not postgres.

  hm9000:
    server_key: HM9000_SERVER_KEY
    server_cert: HM9000_SERVER_CERT
    client_key: HM9000_CLIENT_KEY
    client_cert: HM9000_CLIENT_CERT
    ca_cert: HM9000_CA_CERT
    
Generate SSL certificates for HM9000 and replace these values. You can run the scripts/generate-hm9000-certs script in the cf-release repository to generate self-signed certificates.

Note: You can configure blacklists of IP address ranges to prevent future apps deployed to your Cloud Foundry installation from attempting to drain syslogs to internal Cloud Foundry components. See the Log Drain Blacklist Configuration topic for more information.

Step 3: Generate Your Manifest

To generate a deployment manifest, perform the following steps:

  1. Clone the cf-release GitHub repository. Use git clone to copy the latest Cloud Foundry configuration files onto your computer.

    $ git clone https://github.com/cloudfoundry/cf-release.git
    
  2. From the cf-release directory, run the update script to fetch all the submodules.

    $ cd cf-release
    $ ./scripts/update
    

    Note: Ensure that you have the most up-to-date version of the Cloud Foundry code and all required submodules.

  3. Install spiff, a command line tool for generating deployment manifests.

  4. Run the following command from the cf-release directory to create a deployment manifest named cf-deployment.yml:

    $ ./scripts/generate_deployment_manifest IAAS PATH-TO-MANIFEST-STUB > cf-deployment.yml
    

    • Replace IAAS with aws, openstack, or vsphere. Use vsphere for vSphere, vCloud Air, and vCloud Director.
    • Replace PATH-TO-MANIFEST-STUB with the location of your cf-stub.yml file.

    Note: The generate_deployment_manifest script can accept multiple stub files. For example, the following command passes two stub files to the script:
    ./scripts/generate_deployment_manifest vsphere cf-stub.yml cf-consul.yml > cf-deployment.yml

  5. Use the bosh deployment command to set your deployment to the generated manifest:

    $ bosh deployment cf-deployment.yml
    

Now you are ready to deploy Cloud Foundry. See the Deploying Cloud Foundry topic for instructions.
