Using Hardware Security Modules with CredHub

This topic describes how to configure AWS CloudHSM devices to work with CredHub.

Note: If you use a Luna SafeNet HSM, rather than AWS, skip over the device allocation portion and start by initializing and configuring your HSMs.

If you store critical data in CredHub, configuring at least two Hardware Security Modules (HSMs) replicates your keys and provides redundancy and security in the event of an HSM failure. With a single HSM, device failure renders your CredHub data inaccessible.

Preparation Checklist

At the end of this topic, you will have collected or created the following resources:

  1. Encryption Key Name
  2. HSM Certificate
  3. Partition name and password
  4. Client certificate and private key
  5. Partition serial numbers

Create New AWS CloudHSMs

AWS Environment Prerequisites

Note: For high availability (HA), use at least two HSM instances. AWS documentation recommends that you also use a subnet for a publicly available Control Instance, but for this product that is unnecessary. CredHub acts as a Control Instance.

  • A Virtual Private Cloud (VPC)
  • One private subnet per HSM instance, each in its own Availability Zone (AZ)
  • An IAM role for the HSM with a policy equivalent to the AWSCloudHSMRole policy
  • A security group that allows traffic from the CredHub security group on ports 22 (SSH) and 1792 (HSM)
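
The required ingress rules can be added with the AWS CLI. This is a sketch; the security group IDs shown are placeholders for your own values:

```shell
# Allow SSH (22) and HSM (1792) traffic from the CredHub security group.
# HSM-SECURITY-GROUP-ID and CREDHUB-SECURITY-GROUP-ID are placeholders.
aws ec2 authorize-security-group-ingress \
    --group-id HSM-SECURITY-GROUP-ID \
    --protocol tcp --port 22 \
    --source-group CREDHUB-SECURITY-GROUP-ID

aws ec2 authorize-security-group-ingress \
    --group-id HSM-SECURITY-GROUP-ID \
    --protocol tcp --port 1792 \
    --source-group CREDHUB-SECURITY-GROUP-ID
```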

Create New Devices

  1. Install the cloudhsm CLI.
  2. Create SSH keypairs for all planned HSMs.
    $ ssh-keygen -b 4096 -t rsa -N YOUR-PASSWORD -f path/to/ssh-key.pem
    
  3. Create the cloudhsm.conf file with the following text, replacing each VALUE with the correct value:
    
      aws_access_key_id=VALUE
      aws_secret_access_key=VALUE
      aws_region=VALUE
      

    For more information about the configuration file for cloudhsm CLI, read Amazon’s documentation.
  4. Run the following command to create each HSM and place it in the appropriate subnet:
    $ cloudhsm create-hsm \
        --conf_file path/to/cloudhsm.conf \
        --subnet-id SUBNET-ID \
        --ssh-public-key-file path/to/ssh-key.pem.pub \
        --iam-role-arn IAM-HSM-ROLE-ARN
    
  5. Assign the security group to each HSM. Start by getting the Elastic Network Interface ID (EniId) of the HSM.
    $ cloudhsm describe-hsm -H HSM-ARN -r AWS-REGION
    
    Then edit the network interface to assign the security group:
    $ aws ec2 modify-network-interface-attribute \
        --network-interface-id ENI-ID \
        --groups SECURITY-GROUP-ID
    

Initialize and Configure New HSMs

Complete the following steps for each HSM, regardless of whether they are Luna HSMs or AWS CloudHSMs.

SSH onto the HSM

  1. Get the HSM IP.
    $ cloudhsm describe-hsm -H HSM-ARN -r AWS-REGION
  2. SSH onto the HSM.
    $ ssh -i path/to/ssh-key.pem manager@HSM-IP

Initialize and Set Policies

  1. Initialize the HSM and create an Administrator password. Initialize all HSMs into the same cloning domain to guarantee HA.
    lunash:> hsm init -label LABEL
  2. Log into the HSM using the password you just created.
    lunash:> hsm login
  3. Ensure that only FIPS algorithms are enabled.
    lunash:> hsm changePolicy -policy 12 -value 0
  4. Run hsm showPolicies to confirm that Allow cloning and Allow network replication policy values are set to On on the HSM.

    If these values are not set to On, change them by running the following command:
    lunash:> hsm changePolicy -policy POLICY-CODE -value 1
  5. Validate that the SO can reset partition PIN policy is set correctly. If it is set to Off, consecutive failed login attempts permanently erase the partition once the failure count hits the configured threshold. If it is set to On, the partition locks once the threshold is met; an HSM Admin must unlock the partition, but no data is lost.

    To set the policy to On, run the following command:
    lunash:> hsm changePolicy -policy 15 -value 1

Retrieve HSM Certificate

Fetch the certificate from the HSM. CredHub uses this certificate to validate the identity of the HSM when connecting to it.

$ scp -i path/to/ssh-key.pem \
    manager@HSM-IP:server.pem \
    HSM-IP.pem
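
You can optionally inspect the fetched certificate before using it. This is a generic OpenSSL check; the filename matches the scp command above:

```shell
# Show the subject and expiration date of the fetched HSM certificate.
openssl x509 -in HSM-IP.pem -noout -subject -enddate
```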

Create an HSM Partition

  1. Create a partition to hold the encryption keys. The partition password must be the same for all partitions in the HA partition group. The cloning domain must be the same as earlier.
    lunash:> partition create -partition PARTITION-NAME -domain CLONING-DOMAIN
  2. Record the partition serial number labeled Partition SN.
    lunash:> partition show -partition PARTITION-NAME

Create and Register HSM Clients

Clients that communicate with the HSM must provide a client certificate to establish a client-authenticated session. You must set up each client’s certificate on the HSM and assign access rights for each partition they access.

  1. Create a certificate for the client.
    $ openssl req \
      -x509   \
      -newkey rsa:4096 \
      -days   NUMBER-OF-DAYS \
      -sha256 \
      -nodes  \
      -subj   "/CN=CLIENT-HOSTNAME-OR-IP" \
      -keyout CLIENT-HOSTNAME-OR-IPKey.pem \
      -out    CLIENT-HOSTNAME-OR-IP.pem
    
  2. Copy the client certificate to each HSM.
    $ scp -i path/to/ssh-key.pem \
        CLIENT-HOSTNAME-OR-IP.pem \
        manager@HSM-IP:CLIENT-HOSTNAME-OR-IP.pem
    

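Optionally, before copying the certificate, confirm that the generated certificate and private key match. This is a generic OpenSSL check, not specific to the HSM; the filenames follow the openssl command above:

```shell
# Extract the public key from both the certificate and the private key;
# the two outputs must be identical if the pair matches.
openssl x509 -in CLIENT-HOSTNAME-OR-IP.pem -noout -pubkey > cert-pub.pem
openssl pkey -in CLIENT-HOSTNAME-OR-IPKey.pem -pubout > key-pub.pem
diff cert-pub.pem key-pub.pem && echo "certificate and key match"
```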
Register HSM Client Host and Partitions

  1. Create a client. The client hostname is the hostname of the planned CredHub instances.
    lunash:> client register -client CLIENT-NAME -hostname CLIENT-HOSTNAME
    If you plan to run only one CredHub instance, you can register the client with the planned CredHub IP instead.
    lunash:> client register -client CLIENT-NAME -ip CLIENT-IP
    
  2. Assign the partition created in the previous section to the client.
    lunash:> client assignPartition -client CLIENT-NAME -partition PARTITION-NAME
    

Encryption Keys on the HSM

Set which key is used for encryption operations by defining the encryption key name in the deployment manifest. If a key with that name already exists on the HSM, CredHub uses it for encryption operations. If it does not exist, CredHub creates it automatically in the referenced partition.

When you generate a new key, review the list of keys on each HSM to validate that key replication is occurring. If new keys do not propagate among the HSMs, an HSM failure could leave you unable to decrypt your data.

To review stored keys on a partition:

lunash:> partition showContents -partition PARTITION-NAME
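
To spot replication gaps early, you can compare the key list across HSMs. The following is a sketch, assuming lunash accepts a command passed directly over SSH; the HSM addresses and key path are placeholders:

```shell
# Fetch the partition contents from each HSM and diff the two listings.
# HSM-IP-1 and HSM-IP-2 are placeholders for your HSM addresses.
ssh -i path/to/ssh-key.pem manager@HSM-IP-1 \
    "partition showContents -partition PARTITION-NAME" > hsm1-keys.txt
ssh -i path/to/ssh-key.pem manager@HSM-IP-2 \
    "partition showContents -partition PARTITION-NAME" > hsm2-keys.txt
diff hsm1-keys.txt hsm2-keys.txt
```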

Ready for Deployment

To review, you have collected and created the following resources:

  1. Encryption Key Name
  2. HSM Certificate
  3. Partition name and password
  4. Client certificate and private key
  5. Partition serial numbers

Enter these in the manifest as shown below.


credhub: 
  properties: 
    encryption:
      keys:
        - provider_name: primary
          encryption_key_name: ENCRYPTION-KEY-NAME
          active: true
      providers:
        - name: primary
          type: hsm
          partition: PARTITION-NAME
          partition_password: PARTITION-PASSWORD
          client_certificate: CLIENT-CERTIFICATE
          client_key: CLIENT-PRIVATE-KEY
          servers: 
          - host: 10.0.0.1
            port: 1792
            certificate: HSM-CERTIFICATE
            partition_serial_number: PARTITION-SERIAL-NUMBER
          - host: 10.0.0.10
            port: 1792
            certificate: HSM-CERTIFICATE
            partition_serial_number: PARTITION-SERIAL-NUMBER

Renew or Rotate a Client Certificate

The generated client certificate has a fixed expiration date, after which the HSM no longer accepts it. You can rotate or renew this certificate at any time by following the steps below.

  1. Generate a new certificate for the client.
    $ openssl req \
      -x509   \
      -newkey rsa:4096 \
      -days   NUMBER-OF-DAYS \
      -sha256 \
      -nodes  \
      -subj   "/CN=CLIENT-HOSTNAME-OR-IP" \
      -keyout CLIENT-HOSTNAME-OR-IPKey.pem \
      -out    CLIENT-HOSTNAME-OR-IP.pem
    
  2. Copy the client certificate to each HSM.
    $ scp -i path/to/ssh-key.pem \
        CLIENT-HOSTNAME-OR-IP.pem \
        manager@HSM-IP:CLIENT-HOSTNAME-OR-IP.pem
    
  3. (Optional) Review the client’s partition assignments.
    lunash:> client show -client CLIENT-NAME
    
  4. Remove the existing client.

    Note: All partition assignments will be deleted.

    lunash:> client delete -client CLIENT-NAME
    
  5. Re-register the client.
    lunash:> client register -client CLIENT-NAME -ip CLIENT-IP
    
  6. Re-assign partition assignments.
    lunash:> client assignPartition -client CLIENT-NAME -partition PARTITION-NAME
    
  7. (Optional) Validate the new certificate fingerprint.
    lunash:> client fingerprint -client CLIENT-NAME
    
    If you need to, you can compare the fingerprint to your locally stored certificate:
    $ openssl x509 -in CLIENT-HOSTNAME-OR-IP.pem -outform DER | md5sum