Upgrading the ATC Manager

To upgrade your ATC Manager, your deployment must meet the following prerequisites:
  • Your ATC Manager instance is running in AWS.
  • Your ATC Manager instance is connected to an RDS database.
  • Your ATC Manager instance uses an internal ELB to communicate with the cluster nodes, and at least one cluster is connected to the ATC Manager through the internal ELB.
  1. Review the ATC Manager release notes.
    Review the release notes for the versions that were released since your current version. In particular, the Breaking Changes section highlights changes that may require you to adjust your workflow, configuration, or usage.
  2. Confirm that you have received the new ATC Manager image from Aspera.
    You can find the AMI ID in the ATC Manager release notes.
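
    If you want to confirm that the AMI is visible to your AWS account before proceeding, one option is to look it up with the AWS CLI. The region and AMI ID below are placeholders; substitute the AMI ID from the release notes.

    # Confirm the new ATC Manager AMI is visible to your AWS account.
    aws ec2 describe-images --region "your_region" --image-ids "ami-xxxxxxxxxxxxxxxxx"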
  3. Stop the existing ATC Manager instance.
    In the AWS console, stop the existing ATC Manager instance, but do not delete it. If you run into problems with the upgrade, you can start this instance again and use it while you resolve the issue.
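
    If you prefer the command line, the same step can be done with the AWS CLI. The region and instance ID are placeholders for your actual values.

    # Stop, but do not terminate, the current ATC Manager instance.
    aws ec2 stop-instances --region "your_region" --instance-ids "i-current-atcm-instance-id"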
  4. Launch a new ATC Manager instance and connect it to the existing ELB and RDS database.
    Follow the instructions in the Aspera Transfer Cluster Manager Admin Guide for AWS: Launching the ATC Manager AMI to create a new ATC Manager instance. Include the ATC Manager version in the instance name to avoid confusion.

    Use the following custom script to connect the new instance to your RDS database and ELB. Make sure "restore" is set to true, and replace your_rds_endpoint_url, your_elb_name, db_username, and db_password with their actual values.

    {
        "restore": true, 
        "statestore_backup_period": "1m",
        "database": {
            "host": "your_rds_endpoint_url",
            "port": 3306,
            "user": "db_username",
            "password": "db_password"
        }
    }
    -----SCRIPT-----
    #!/bin/bash

    # assign elastic load balancer
    elb="your_elb_name"

    # Query the EC2 instance metadata service for this instance's region and ID.
    curl="curl -sS http://169.254.169.254/2014-11-05/"
    region=$($curl/dynamic/instance-identity/document/ | jq --raw-output '.region')
    instance_id=$($curl/meta-data/instance-id)

    # Register this instance with the internal ELB.
    aws elb register-instances-with-load-balancer --region="$region" --load-balancer-name "$elb" --instances "$instance_id"

    # Look up the ELB's DNS name and write it into the private_ip field of the ATC Manager API configuration.
    elb_dnsname="$(aws elb describe-load-balancers --region="$region" | jq --arg elb "$elb" --raw-output '.LoadBalancerDescriptions[] | select(.LoadBalancerName == $elb) | .DNSName')"
    echo "$(jq --arg elb_dnsname "$elb_dnsname" '.private_ip |= $elb_dnsname' /opt/aspera/atcm/etc/atc-api.conf)" > /opt/aspera/atcm/etc/atc-api.conf
    Note: To use this script, your cluster manager must have the ELB IAM policy.
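
    The script above calls the Classic Load Balancer actions RegisterInstancesWithLoadBalancer and DescribeLoadBalancers. If you need to confirm what the ELB IAM policy must allow, a minimal sketch granting only those actions could look like the following; the resource scope and structure here are illustrative, not the exact Aspera-provided policy.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                    "elasticloadbalancing:DescribeLoadBalancers"
                ],
                "Resource": "*"
            }
        ]
    }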
  5. Verify that the existing cluster can connect to the new ATC Manager.
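    One quick check from the command line, assuming the AWS CLI is configured, is to confirm that the new instance reports InService behind the internal ELB. This does not replace checking cluster status in the ATC Manager console, but it confirms the ELB can reach the new instance. The region is a placeholder.

    # The new ATC Manager instance should appear with State "InService".
    aws elb describe-instance-health --region "your_region" --load-balancer-name "your_elb_name"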
  6. Delete the old ATC Manager instance.
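    If you prefer the command line over the console, the stopped instance from step 3 can be terminated with the AWS CLI once you are satisfied with the new ATC Manager. The region and instance ID are placeholders.

    # Terminate (permanently delete) the old, stopped ATC Manager instance.
    aws ec2 terminate-instances --region "your_region" --instance-ids "i-old-atcm-instance-id"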
  7. Upgrade the cluster nodes.
    For instructions, see Upgrading Cluster Nodes.