Running the Cluster Manager in a Private VPC

If you have configured the Cluster Manager with the AWS Auto Scaling service and the Cluster Manager fails, AWS automatically launches a new instance of the Cluster Manager in the same subnet and assigns it a new IP address. If the Cluster Manager is located in a private subnet of your VPC, but the transfer nodes are located in a public subnet, the transfer nodes are unable to automatically retrieve the IP address of the new Cluster Manager instance.

In order to support the autoscaling feature, you must configure the Cluster Manager AMI to launch with a static IP address. You can use a custom script to set a static IP for the Cluster Manager. The custom script can be used to add either an internal IP or an Elastic IP. We provide an example script for both scenarios below.

The following instructions make use of the ability to run a custom script from the JSON user data before any firstboot scripts when launching an instance of the Cluster Manager AMI. There are three steps to configuring this instance:

  1. Create a new IAM policy to grant the Cluster Manager permission to run the custom scripts.
  2. Attach the new policy to the Cluster Manager IAM role (atc-manager).
  3. Add the custom script into the user data on the Configure Instance page and launch the instance.

Creating the Private VPC IAM Policy

Create the new IAM policy to grant the Cluster Manager permission to run the custom scripts.

  1. From the AWS console, go to Security & Identity > Identity & Access Management and select Policies from the Details sidebar.
  2. Click Create Policy. Select the Create Your Own Policy option.
  3. Name the new policy atc-ec2-private-vpc-policy.
  4. Enter the following policy into the Policy Document field.
        "Version": "2012-10-17", 
        "Statement": { 
            "Effect": "Allow",
            "Action": [
        "Resource": "*"
  5. Click Validate Policy to check for formatting issues. The policy must be well-formed JSON text.
  6. Click Create Policy.
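As a quick sanity check outside the console, you can validate the policy document locally before pasting it in; a minimal sketch using jq (the file name atc-policy.json is illustrative, and the Action list shown is an assumption based on the AWS calls the custom scripts make):

```shell
# Write the policy to a local file. The file name and the Action list are
# illustrative assumptions, not part of the AWS console flow.
cat > atc-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": [
            "ec2:AssignPrivateIpAddresses",
            "ec2:AssociateAddress"
        ],
        "Resource": "*"
    }
}
EOF
# jq exits non-zero on malformed JSON, so the message prints only if the
# policy is well-formed.
jq empty atc-policy.json && echo "policy is well-formed JSON"
```

This catches the same class of errors (missing braces, trailing commas) that Validate Policy reports in the console.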

Attaching the Private VPC Policy

Attach atc-ec2-private-vpc-policy to the atc-manager IAM role.

  1. From the AWS Console, go to Security & Identity > Identity & Access Management and select Roles from the Details sidebar.
  2. Click the atc-manager role and click Attach Policy.
  3. Select atc-ec2-private-vpc-policy and click Attach Policy.
    At this point, the atc-manager role has permission to run the custom scripts.

Using Custom Scripts to Configure the IP

Before continuing, make sure you have created the RDS database to use with the cluster. You need your RDS endpoint URL to finish this configuration. For more information, see Creating the RDS Database.

After attaching the policy to the atc-manager IAM role, finalize and launch the Cluster Manager with a custom script added to the user data. To review how to launch a cluster, see Launching the ATC Manager AMI. The following steps assume you are on the Configure Instance Details page of the launch process.

  1. Insert the custom script into the instance user data after the -----SCRIPT----- separator. Make sure the separator has exactly five dashes on each side.
    Use the following example script to automatically assign a specific internal IP address to your Cluster Manager. Replace your_rds_endpoint_url and your_ip_address with their actual values.
    Note: The IP address you choose must be in the same subnet as your cluster. Check that no one is already using that IP address.
        "restore": true, 
        "statestore_backup_period": "1m",
        "database": {
            "host": "your_rds_endpoint_url",
            "port": 3306,
            "user": "root",
            "password": "secret"
    # Attach a secondary private IP address to eth0 on AWS EC2. 
    # the IP to use
    curl="curl -sS"
    region=$($curl/dynamic/instance-identity/document/ | jq --raw-output '.region')
    while read -r mac; do
      if [ "$device_number" -eq "0" ]; then
        aws ec2 assign-private-ip-addresses --region "$region" --network-interface-id "$eni_id" --private-ip-addresses $ip
    done <<< "$macs"
    echo "DEVICE=eth0:1
    NETMASK=" > /etc/sysconfig/network-scripts/ifcfg-eth0:1
    ifup eth0:1
    # set the ip in the cluster manager configuration
    echo "$(jq --arg ip "$ip" '.private_ip |= $ip' /opt/aspera/atcm/etc/atc-api.conf)" > /opt/aspera/atcm/etc/atc-api.conf
  2. Finish configuring and launch the instance.
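The Elastic IP scenario mentioned earlier follows the same pattern: instead of assigning a secondary private IP, the script associates a pre-allocated Elastic IP with the instance. A hedged sketch of that variant (the placeholder region, instance ID, and allocation ID are illustrative assumptions, and the final aws command is only echoed so the sketch can run outside EC2):

```shell
#!/bin/bash
# Sketch of the Elastic IP variant of the custom script. Replace
# your_allocation_id with the allocation ID of an Elastic IP you have
# already allocated in the cluster's region.
allocation_id=your_allocation_id
# On a live instance, the region and instance ID come from the metadata service:
#   curl="curl -sS http://169.254.169.254/latest"
#   region=$($curl/dynamic/instance-identity/document/ | jq --raw-output '.region')
#   instance_id=$($curl/meta-data/instance-id)
region=us-east-1                  # placeholder so the sketch runs anywhere
instance_id=i-0123456789abcdef0   # placeholder so the sketch runs anywhere
cmd="aws ec2 associate-address --region $region --instance-id $instance_id --allocation-id $allocation_id"
echo "$cmd"   # on a real instance, run the aws command instead of echoing it
```

This variant requires the ec2:AssociateAddress permission, which is why the IAM policy created above must cover more than ec2:AssignPrivateIpAddresses.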