Adding Volume Services to Your Deployment

This topic describes how Cloud Foundry (CF) operators can deploy volume services.

Overview

A volume service gives apps access to a remote filesystem, such as NFS. To provide a volume service for CF developers to use with their apps, you must deploy a driver and broker pair. For current versions of CF deployed with cf-deployment, deploying brokers and drivers is typically accomplished using operations files, as outlined below in Example: Deploy NFS Volume Service to CF.

Additional Information

For more information about volume services and the drivers and brokers available to CF, see the following:

Note: For test purposes, you can deploy the Local Volume Release if your CF deployment runs a single Diego Cell. This is not intended for production deployments.

Contact

If you have any questions, you can contact the team that develops volume services for CF on the #persi channel in the Cloud Foundry (Open Source) Slack organization.

Example: Deploy NFS Volume Service to CF

The following procedure provides an example of how to deploy the NFS broker and corresponding driver to an existing CF deployment.

Prerequisites

This procedure requires the following:

Redeploy Cloud Foundry with NFS Enabled

  1. Clone the cf-deployment repository from Git, if you do not already have it:

    $ cd ~/workspace
    $ git clone https://github.com/cloudfoundry/cf-deployment.git
    $ cd ~/workspace/cf-deployment

  2. Redeploy your cf-deployment while including the NFS ops file:

    $ bosh -e my-env -d cf deploy cf.yml -v deployment-vars.yml \
        -o operations/enable-nfs-volume-service.yml

    Note: The above bosh deploy command is an example, but your deployment command should match the one you used to deploy CF initially, with the addition of a -o operations/enable-nfs-volume-service.yml option.

  3. Run the nfsbrokerpush errand to deploy the NFS service broker application:

    $ bosh -e my-env -d cf run-errand nfsbrokerpush

Your CF deployment now has a running service broker and volume drivers and is ready to mount NFS volumes.

Grant Access to the NFS Broker

Grant access to the broker's service:

$ cf enable-service-access nfs

CF developers can now create NFS service instances and bind them to their apps, as outlined in the Using an External File System (Volume Services) topic.
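For reference, a typical developer workflow against the enabled service looks like the following sketch. The share address, service instance name, and app name are placeholders; the `Existing` plan name and the `uid`/`gid`/`mount` bind parameters follow the standard NFS volume service usage.

```shell
# Create an NFS service instance that points at an existing export.
# nfsserver.example.com is a placeholder for your NFS server.
cf create-service nfs Existing my-nfs-volume \
    -c '{"share": "nfsserver.example.com/export/vol1"}'

# Bind the instance to an app, supplying the UID and GID to mount as,
# and optionally the path where the share appears in the container.
cf bind-service my-app my-nfs-volume \
    -c '{"uid": "1000", "gid": "1000", "mount": "/var/vol1"}'

# Restage so the app picks up the new mount.
cf restage my-app
```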

(Optional) LDAP Support

For better NFS security, configure your deployment to connect to an external LDAP server. Configuring an LDAP server enables the NFS volume driver to:

  • Ensure that the application developer has valid credentials (according to the LDAP server) to use an account.
  • Translate user credentials into a valid UID and GID for that user.

The principal benefit of this feature is that it secures the NFS volume service: an application developer can no longer bind to an NFS share using an arbitrary UID and potentially gain access to sensitive data stored by another user or application. Once LDAP support is enabled, the regular UID and GID parameters are disabled, and application developers must provide valid credentials for any user they wish to use on the NFS server.
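Concretely, with LDAP enabled the bind step above changes shape: instead of passing `uid` and `gid`, the developer supplies credentials that the LDAP server can validate. The instance, app, and account names below are placeholders.

```shell
# With LDAP enabled, uid/gid bind parameters are rejected; bind with
# credentials for an account known to the LDAP server instead.
cf bind-service my-app my-nfs-volume \
    -c '{"username": "user1", "password": "example-password"}'
```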

Changes to your LDAP server

It is not generally necessary to make adjustments to your LDAP server to enable integration, but you will need the following:

  • Your LDAP server must be reachable through the network from the Diego Cell VMs on the port you will use to connect (normally 389 or 636).
  • You should provision a service account on the LDAP server that has read-only access to user records. This account is used by nfsv3driver to look up usernames and convert them to UIDs. In Windows Server 2008 or later, this can be accomplished by creating a new user and adding it to the Read-only Domain Controllers group.
  • Your LDAP schema must contain uidNumber and gidNumber fields for the user accounts used by NFS services. These fields are used to establish the correct UID for a named user.

Changes to your Cloud Foundry deployment

Include the enable-nfs-ldap operations file in your deployment to turn on LDAP authentication. You will need to provide the following variables in a variables file or with the -v option on the BOSH command line:

  • nfs-ldap-service-user: LDAP service account user name
  • nfs-ldap-service-password: LDAP service account password
  • nfs-ldap-host: LDAP server host name or IP address
  • nfs-ldap-port: LDAP server port
  • nfs-ldap-proto: LDAP server protocol (tcp or udp)
  • nfs-ldap-fqdn: FQDN of the LDAP subtree to search when looking up user UIDs
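Putting this together, a redeploy with LDAP authentication might look like the sketch below. The ops file path, host names, and DNs are examples for illustration; check your cf-deployment checkout for the exact operations file location, and keep the real service password in a secrets store rather than a plain file.

```shell
# Example variables file; every value here is a placeholder.
cat > ldap-vars.yml <<'EOF'
nfs-ldap-service-user: cn=svc-nfs,ou=ServiceAccounts,dc=example,dc=com
nfs-ldap-service-password: example-password
nfs-ldap-host: ldap.example.com
nfs-ldap-port: 389
nfs-ldap-proto: tcp
nfs-ldap-fqdn: ou=Users,dc=example,dc=com
EOF

# Redeploy with both the NFS service and the LDAP ops files,
# loading the variables with -l.
bosh -e my-env -d cf deploy cf.yml -v deployment-vars.yml \
    -o operations/enable-nfs-volume-service.yml \
    -o operations/enable-nfs-ldap.yml \
    -l ldap-vars.yml
```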

(Optional) Deploying the Test Servers

The NFS volume service includes two test servers: a test NFS server that provides NFS shares, and a test LDAP server that provides sample UID resolution when the LDAP feature is enabled.

NFS Test Server

To deploy the NFS test server, include the enable-nfs-test-server.yml operations file. This creates a separate VM with NFS exports you can use to experiment with volume mounts.

Note: By default, the NFS test server expects that your CF deployment is deployed to a 10.x.x.x subnet. If you are deploying to a subnet that is not 10.x.x.x (e.g. 192.168.x.x), you must override the `export_cidr` property.
Edit the operations file, and replace this line:
nfstestserver: {}
with something like this:
nfstestserver: {export_cidr: 192.168.0.0/16}

LDAP Test Server

To deploy the LDAP test server, include the enable-nfs-test-ldapserver.yml operations file. This installs an LDAP server onto the VM created for the NFS test server.

The deployed LDAP server is preconfigured with a single user account with username uid1000 and password secret. When queried, this test user resolves to UID 1000 and GID 1000.

When using the LDAP test server with your Cloud Foundry deployment, you can use the following values for required variables to connect to it:

  • nfs-ldap-service-user: cn=admin,dc=domain,dc=com
  • nfs-ldap-service-password: secret
  • nfs-ldap-host: nfstestldapserver.service.cf.internal
  • nfs-ldap-port: 389
  • nfs-ldap-proto: tcp
  • nfs-ldap-fqdn: ou=Users,dc=domain,dc=com
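For experimentation, the test-server ops files and the LDAP variables above can be combined into a single redeploy, sketched below. The exact paths of the test-server ops files within cf-deployment are an assumption here; verify them against your checkout before running.

```shell
# Redeploy with NFS, LDAP support, and both test servers, wiring in the
# test LDAP server's preconfigured values from the list above.
# (Ops file paths under operations/ may differ by cf-deployment version.)
bosh -e my-env -d cf deploy cf.yml -v deployment-vars.yml \
    -o operations/enable-nfs-volume-service.yml \
    -o operations/enable-nfs-ldap.yml \
    -o operations/test/enable-nfs-test-server.yml \
    -o operations/test/enable-nfs-test-ldapserver.yml \
    -v nfs-ldap-service-user='cn=admin,dc=domain,dc=com' \
    -v nfs-ldap-service-password=secret \
    -v nfs-ldap-host=nfstestldapserver.service.cf.internal \
    -v nfs-ldap-port=389 \
    -v nfs-ldap-proto=tcp \
    -v nfs-ldap-fqdn='ou=Users,dc=domain,dc=com'
```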

Example 2: Deploy SMB Volume Service to CF

The following procedure provides an example of how to deploy the SMB broker and corresponding driver to an existing CF deployment.

Prerequisites

This procedure requires the following:

Redeploy CF with SMB Enabled

  1. Clone the cf-deployment repository from Git, if you do not already have it:

    $ cd ~/workspace
    $ git clone https://github.com/cloudfoundry/cf-deployment.git
    $ cd ~/workspace/cf-deployment

  2. Redeploy your cf-deployment while including the SMB ops file:

    $ bosh -e my-env -d cf deploy cf.yml -v deployment-vars.yml \
        -o operations/experimental/enable-smb-volume-service.yml

    Note: The above bosh deploy command is an example, but your deployment command should match the one you used to deploy CF initially, with the addition of a -o operations/experimental/enable-smb-volume-service.yml option.

  3. Run the smbbrokerpush errand to deploy the SMB service broker application:

    $ bosh -e my-env -d cf run-errand smbbrokerpush

Your CF deployment now has a running service broker and volume drivers and is ready to mount existing SMB shares.

Deploying the SMB Test Server

To deploy the SMB test server, you can fetch the operations file from the persi-ci GitHub repository and include that operation with a -o flag. This creates a separate VM with SMB shares you can use to experiment with volume mounts.

Note: By default, the SMB test server expects that your CF deployment is deployed to a 10.x.x.x subnet. If you are deploying to a subnet that is not 10.x.x.x (e.g. 192.168.x.x), you must override the `export_cidr` property.
Edit the operations file, and add a line in the properties section:
export_cidr: 192.168.0.0/16
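The fetch-and-include step might look like the following sketch. The file name and path within the persi-ci repository are illustrative assumptions; browse the repository to find the actual operations file before running this.

```shell
# Download the SMB test server ops file from the persi-ci repo.
# (URL path and file name below are hypothetical -- check the repo.)
curl -fSL -o enable-smb-test-server.yml \
    https://raw.githubusercontent.com/cloudfoundry/persi-ci/master/operations/enable-smb-test-server.yml

# Include it alongside the SMB volume service ops file.
bosh -e my-env -d cf deploy cf.yml -v deployment-vars.yml \
    -o operations/experimental/enable-smb-volume-service.yml \
    -o enable-smb-test-server.yml
```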

Grant Access to the SMB Broker

Grant access to the broker's service:

$ cf enable-service-access smb

CF developers can now create SMB service instances and bind them to their apps, as outlined in the Using an External File System (Volume Services) topic.
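As with NFS, the developer-side workflow is a create/bind/restage sequence, sketched below. The share path, instance name, app name, and credentials are placeholders; the `Existing` plan and the `username`/`password`/`mount` bind parameters follow the standard SMB volume service usage.

```shell
# Create an SMB service instance that points at an existing share.
# //smbserver.example.com/share is a placeholder for your SMB server.
cf create-service smb Existing my-smb-volume \
    -c '{"share": "//smbserver.example.com/share"}'

# Bind with the credentials needed to mount the share, plus an
# optional container mount path.
cf bind-service my-app my-smb-volume \
    -c '{"username": "user1", "password": "example-password", "mount": "/var/smb"}'

# Restage so the app picks up the new mount.
cf restage my-app
```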
