Deploying Cloud Foundry on AWS with BOSH AWS Bootstrap
Cloud Foundry tools simplify the process of deploying a Cloud Foundry instance to a variety of platforms, including Amazon Web Services (AWS). The following document guides you through using BOSH and the cf Command Line Interface (CLI) to deploy Cloud Foundry to Amazon Web Services.
Select a DNS domain name for your Cloud Foundry instance. For example, if you select the domain name cloud.example.com, Cloud Foundry deploys each of your applications as APP-NAME.cloud.example.com.
Create an AWS Route 53 Hosted Zone for your domain on the AWS Route 53 control panel. The control panel displays a delegation set, which is a list of addresses to which you must delegate DNS authority for your domain. For example, if you selected the domain name cloud.example.com, each address in the delegation set should become an NS record in the DNS server for example.com.
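Continuing the example, the DNS server for example.com would gain one NS record per name server in the delegation set. A sketch of what those records might look like in BIND zone-file syntax (the awsdns host names below are illustrative; copy the actual names from the delegation set Route 53 displays for your hosted zone):

```
; In the example.com zone: delegate cloud.example.com to the Route 53 name servers
cloud.example.com.    172800  IN  NS  ns-123.awsdns-15.com.
cloud.example.com.    172800  IN  NS  ns-456.awsdns-22.net.
cloud.example.com.    172800  IN  NS  ns-789.awsdns-33.org.
cloud.example.com.    172800  IN  NS  ns-1011.awsdns-44.co.uk.
```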
Ruby 1.9.3 and git (1.8 or later) are prerequisites for the following steps.
After you install Ruby and git, install the bundler gem:
$ gem install bundler
Create a deployments directory with a sub-directory for your deployment.
$ mkdir deployments
$ cd deployments
$ mkdir cf-example
In the sub-directory, create a file named Gemfile with the following contents:
source 'https://rubygems.org'
ruby "1.9.3"
gem "bosh_cli_plugin_aws"
Run bundle install to install the gems you specified in the Gemfile:
$ bundle install
Create a file named bosh_environment and add the following contents, replacing the values in each line to match your configuration:
export BOSH_VPC_DOMAIN=example.com
export BOSH_VPC_SUBDOMAIN=my-subdomain
export BOSH_AWS_ACCESS_KEY_ID=AWS_ACCESS_KEY_ID
export BOSH_AWS_SECRET_ACCESS_KEY=AWS_SECRET_ACCESS_KEY
export BOSH_AWS_REGION=my-aws-region
export BOSH_VPC_PRIMARY_AZ=us-east-1a # see note below
export BOSH_VPC_SECONDARY_AZ=us-east-1d # see note below
Note: The values you add for BOSH_VPC_DOMAIN and BOSH_VPC_SUBDOMAIN must correspond to the DNS domain name you set up when configuring Route 53.
Run source bosh_environment to set the environment variables required for deploying to AWS:
$ source bosh_environment
Note: The following steps only support deployment to the us-east-1 region of AWS.
To deploy MicroBOSH, review this guide.
Choose an availability zone that is listed as “operating normally” in the Health Status section of the AWS Console for your region.
Run bosh aws create to create a VPC Internet Gateway, VPC subnets, three RDS databases, and a NAT VM for Cloud Foundry subnet routing. This command generates two receipt files, aws_vpc_receipt.yml and aws_rds_receipt.yml, that you use when deploying Cloud Foundry.
$ bosh aws create
Executing migration CreateKeyPairs
allocating 1 KeyPair(s)
Executing migration CreateVpc
. . .
details in S3 receipt: aws_rds_receipt and file: aws_rds_receipt.yml
Executing migration CreateS3
creating bucket xxxx-bosh-blobstore
creating bucket xxxx-bosh-artifacts
Note: RDS database creation may take 20 or more minutes.
Deploy MicroBOSH from the workspace directory, using the bosh aws bootstrap micro command:
$ bosh aws bootstrap micro
WARNING! Your target has been changed to `https://10.10.0.6:25555'!
Deployment set to '/Users/pivotal/cf/deployments/micro/micro_bosh.yml'
Deploying new micro BOSH instance `micro/micro_bosh.yml' to `https://10.10.0.6:25555' (type 'yes' to continue): yes
Deploy MicroBOSH using existing stemcell (00:00:00)
. . .
Deployed `micro/micro_bosh.yml' to `https://10.10.0.6:25555', took 00:04:57 to complete
The bosh aws bootstrap command prompts you for a user name and password. Create a user name and password to use later when accessing your MicroBOSH installation.
After MicroBOSH has deployed successfully, you can check its status:
$ bosh status
Updating director data... done

Config
           ~/.bosh_config

Director
  Name     micro-xxxx
  URL      https://x.x.x.x:25555
  Version  1.5.0.pre.xxx (release:xxxxx bosh:xxxxx)
  User     admin
  UUID     xxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
  CPI      aws
  dns      enabled (domain_name: microbosh)
  compiled_package_cache disabled

Deployment
  Manifest ~/cf/deployments/micro/micro_bosh.yml
Use the bosh public stemcells command to view a list of available public stemcells:
$ bosh public stemcells
+---------------------------------------------+
| Name                                        |
+---------------------------------------------+
| bosh-stemcell-1657-aws-xen-ubuntu.tgz       |
| bosh-stemcell-1657-aws-xen-centos.tgz       |
| light-bosh-stemcell-1657-aws-xen-ubuntu.tgz |
| light-bosh-stemcell-1657-aws-xen-centos.tgz |
| bosh-stemcell-1657-openstack-kvm-ubuntu.tgz |
| bosh-stemcell-1657-vsphere-esxi-ubuntu.tgz  |
| bosh-stemcell-1657-vsphere-esxi-centos.tgz  |
+---------------------------------------------+
To download use `bosh download public stemcell <stemcell_name>'. For full url use --full.
Use the bosh download public stemcell command to download the latest stemcell:
$ bosh download public stemcell bosh-stemcell-1657-aws-xen-ubuntu.tgz
Use the bosh upload stemcell command to upload the stemcell to the BOSH Director:
$ bosh upload stemcell ./bosh-stemcell-1657-aws-xen-ubuntu.tgz
Create a manifest stub file named cf-stub.yml. You will use Spiff to merge this file with Cloud Foundry templates to generate a BOSH manifest for your Cloud Foundry deployment.
Example stub file:
---
name: cf-example
director_uuid: 80be9b46-435f-41db-96a4-453f8d59f53c

releases:
- name: cf
  version: latest

networks:
- name: cf1
  type: manual
  subnets:
  - range: 10.10.16.0/20
    name: default_unused
    reserved:
    - 10.10.16.2 - 10.10.16.9
    static:
    - 10.10.16.10 - 10.10.16.253
    gateway: 10.10.16.1
    dns:
    - 10.10.0.2
    cloud_properties:
      security_groups:
      - cf
      subnet: (( properties.template_only.aws.subnet_ids.cf1 ))
- name: cf2
  type: manual
  subnets:
  - range: 10.10.80.0/20
    name: default_unused
    reserved:
    - 10.10.80.2 - 10.10.80.9
    static:
    - 10.10.80.10 - 10.10.80.253
    gateway: 10.10.80.1
    dns:
    - 10.10.0.2
    cloud_properties:
      security_groups:
      - cf
      subnet: (( properties.template_only.aws.subnet_ids.cf2 ))

properties:
  template_only:
    aws:
      access_key_id: PLACEHOLDER-ACCESS-KEY-ID
      secret_access_key: PLACEHOLDER-SECRET-KEY
      availability_zone: us-east-1a # Change this if you'd like to
      availability_zone2: us-east-1b # Change this if you'd like to
      subnet_ids:
        cf1: PLACEHOLDER-SUBNET-FOR-AZ1
        cf2: PLACEHOLDER-SUBNET-FOR-AZ2

  domain: PLACEHOLDER-DOMAIN

  nats:
    user: PLACEHOLDER-NATS-USER
    password: PLACEHOLDER-NATS-PASSWORD

  cc:
    db_encryption_key: PLACEHOLDER-CC-DB-ENCRYPTION-KEY
    bulk_api_password: PLACEHOLDER-BULK-API-PASSWORD
    staging_upload_password: PLACEHOLDER-STAGING-UPLOAD-PASSWORD
    staging_upload_user: PLACEHOLDER-STAGING-UPLOAD-USER

  uaa:
    scim:
      users:
      - admin|the_admin_pw|scim.write,scim.read,openid,cloud_controller.admin # change if you like
      - services|the_services_pw|scim.write,scim.read,openid,cloud_controller.admin # change if you like
    admin:
      client_secret: PLACEHOLDER-UAA-ADMIN-CLIENT-SECRET
    jwt:
      signing_key: PLACEHOLDER-UAA-JWT-SIGNING-KEY # Use the YAML "|" character to format multiline RSA key data
      verification_key: PLACEHOLDER-UAA-JWT-VERIFICATION-KEY # Use the YAML "|" character to format multiline RSA key data
    clients:
      login:
        secret: PLACEHOLDER-UAA-CLIENTS-LOGIN-SECRET
      developer_console:
        secret: PLACEHOLDER-UAA-CLIENTS-DEVELOPER-CONSOLE-SECRET
      app-direct:
        secret: PLACEHOLDER-UAA-CLIENTS-APP-DIRECT-SECRET
      support-services:
        secret: PLACEHOLDER-UAA-CLIENTS-SUPPORT-SERVICES-SECRET
      servicesmgmt:
        secret: PLACEHOLDER-UAA-CLIENTS-SERVICESMGMT-SECRET
      space-mail:
        secret: PLACEHOLDER-UAA-CLIENTS-SPACE-MAIL-SECRET
      notifications:
        secret: PLACEHOLDER-UAA-CLIENTS-NOTIFICATION-SECRET
    batch:
      username: PLACEHOLDER-UAA-BATCH-USERNAME
      password: PLACEHOLDER-UAA-BATCH-PASSWORD
    cc:
      client_secret: PLACEHOLDER-UAA-CC-CLIENT-SECRET

  uaadb: PLACEHOLDER_UAADB_PROPERTIES
  ccdb: PLACEHOLDER_CCDB_PROPERTIES

  router:
    status:
      user: PLACEHOLDER-ROUTER-STATUS-USER
      password: PLACEHOLDER-ROUTER-STATUS-PASSWORD

  dea_next:
    disk_mb: 400001
    memory_mb: 6656

  loggregator_endpoint:
    shared_secret: PLACEHOLDER-LOGGREGATOR-SECRET

  ssl:
    skip_cert_verify: false
Replace placeholders with the appropriate data:
PLACEHOLDER-DIRECTOR-UUID - the BOSH Director UUID. You can get it by running bundle exec bosh status.
PLACEHOLDER-ACCESS-KEY-ID and PLACEHOLDER-SECRET-KEY - the AWS access key ID and secret key. You can use the same ones you used in the bosh_environment file above, or generate new ones.
PLACEHOLDER-SUBNET-FOR-AZ1 and PLACEHOLDER-SUBNET-FOR-AZ2 - look in ~/deployments/cf-example/aws_vpc_receipt.yml (this file was generated when you ran bosh aws create). The subnet IDs are listed under 'subnets' as 'cf1' and 'cf2'.
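For orientation, the relevant portion of aws_vpc_receipt.yml looks roughly like the sketch below. The surrounding keys and the subnet IDs are illustrative, not copied from a real receipt; only the subnets/cf1/cf2 entries are what you need:

```yaml
# Illustrative excerpt of aws_vpc_receipt.yml (structure assumed)
vpc:
  id: vpc-xxxxxxxx
  subnets:
    cf1: subnet-aaaaaaaa   # use for PLACEHOLDER-SUBNET-FOR-AZ1
    cf2: subnet-bbbbbbbb   # use for PLACEHOLDER-SUBNET-FOR-AZ2
```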
PLACEHOLDER-DOMAIN - the same domain you used in the bosh_environment file above (the hosted domain created in Route 53).
PLACEHOLDER_UAADB_PROPERTIES and PLACEHOLDER_CCDB_PROPERTIES - copy these from the aws_rds_receipt.yml file that bosh aws create generated.
Generate secure keys for the remaining PLACEHOLDER password and secret values.
Generate an RSA key pair for the UAA JWT signing and verification keys (PLACEHOLDER-UAA-JWT-SIGNING-KEY and PLACEHOLDER-UAA-JWT-VERIFICATION-KEY).
You can also replace the user names and passwords listed under uaa:scim:users.
If you are using self-signed SSL certificates, set ssl:skip_cert_verify to true.
$ git clone https://github.com/cloudfoundry/cf-release.git
Run the update helper script to update the cf-release submodules:
$ cd cf-release
$ ./update
Run the following Spiff command from the cf-release directory to create a deployment manifest named cf-deployment.yml:
./generate_deployment_manifest INFRASTRUCTURE MANIFEST-STUB > cf-deployment.yml
Replace INFRASTRUCTURE with your target infrastructure, such as aws or warden, and replace MANIFEST-STUB with the name and location of your cf-stub.yml file. For example:
$ ./generate_deployment_manifest aws cf-stub.yml > cf-deployment.yml
Use bosh target to target the BOSH Director:
$ bosh target
Current target is https://x.x.x.x:25555 (micro-xxxxxx)
Set your deployment to the generated manifest:
$ bosh deployment cf-deployment.yml
Run bosh create release to create a Cloud Foundry release. This command prompts you for a development release name:
$ bosh create release
Run bosh upload release to upload the generated release to the BOSH Director:
$ bosh upload release
Deploy the uploaded Cloud Foundry release:
$ bosh deploy
Note: bosh deploy can take 2-3 hours to complete.
Use curl to test the API endpoint of your Cloud Foundry installation:
$ curl api.subdomain.domain/info
If curl succeeds, it returns JSON-formatted information. If curl does not succeed, check your networking and make sure your domain has an NS record for your subdomain.
You should be able to target your Cloud Foundry installation with the cf Command Line Interface (CLI) and log in as an administrator.
The user name, admin, and the password, fakepassword, are specified in the deployment manifest under uaa:scim:users.
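For example, to set your own admin password before deploying, edit the scim users block in your manifest stub. The password below is illustrative; each entry is a pipe-delimited user name, password, and list of scopes:

```yaml
uaa:
  scim:
    users:
    - admin|your-new-admin-password|scim.write,scim.read,openid,cloud_controller.admin
```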
For more information about managing organizations, spaces, users, and applications, refer to the cf topic.
If you make changes to your manifest, run bosh deploy to update your Cloud Foundry deployment with these changes.
If you make changes to cf-release, run bosh create release && bosh upload release && bosh deploy to update your Cloud Foundry deployment with these changes.
If you want your Cloud Foundry to be able to provision services, you must deploy a services release. Refer to the services documentation.
You also might be interested in the community-managed services release.
Run bosh aws destroy to destroy your AWS environment.
WARNING: The bosh aws destroy command destroys everything in your AWS account, including all S3 buckets and all instances. Do not use this command unless you want to lose everything in your AWS account, including objects and files unrelated to your Cloud Foundry deployment.
$ bosh aws destroy
Remove any YAML artifacts:
$ rm -f *.yml