Upgrading the ATC Manager
This procedure assumes the following:
- Your ATC Manager instance is running in AWS.
- Your ATC Manager instance is connected to an RDS database.
- Your ATC Manager instance uses an internal ELB to communicate with the cluster nodes, and at least one cluster is connected to the ATC Manager through the internal ELB.
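The ELB prerequisite can be sanity-checked from any machine with the AWS CLI configured. A minimal sketch, with placeholder ELB name and region; the command is echoed for illustration, so remove the echo to actually query the load balancer:

```shell
# Placeholders: substitute your internal ELB name and region.
elb="your_elb_name"
region="us-east-1"
cmd="aws elb describe-load-balancers --region $region --load-balancer-names $elb"
# Echoed for illustration; run the command itself to confirm the ELB exists
# and to note its DNS name for later verification.
echo "$cmd"
```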
- Review the ATC Manager release notes.
Review the release notes for all versions released since your current version. In particular, the Breaking Changes section highlights changes that may require you to adjust your workflow, configuration, or usage.
- Confirm that you have received the new ATC Manager image from Aspera.
You can find the AMI ID in the ATC Manager release notes.
- Prevent new transfers to and from the cluster from starting by hiding the cluster from DNS.
In the ATC Manager UI, go to Clusters. For the cluster that you want to hide, click Actions > Hide Cluster Through DNS. Allow all active transfers to complete; the cluster is drained when all cluster nodes are idle (see Monitoring Cluster Nodes). Once transfers are complete, move on to the next step.
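The drain step above can be sketched as a polling loop. Here `node_active_sessions` is a hypothetical stand-in for however you observe node activity (for example, the Monitoring Cluster Nodes page or your own monitoring); replace its body with a real check before using this:

```shell
#!/bin/bash
# Poll until every cluster node is idle before proceeding with the upgrade.
node_active_sessions() {
  echo 0   # hypothetical stub: pretend no transfers are active
}

until [ "$(node_active_sessions)" -eq 0 ]; do
  echo "transfers still active; checking again in 30s"
  sleep 30
done
echo "all cluster nodes idle; safe to proceed"
```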
- Stop the existing ATC Manager instance.
In the AWS console, stop the existing ATC Manager instance but do not delete it. If you run into problems during the upgrade, you can start this instance again and use it while you resolve the issue.
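The same step can be done from the AWS CLI instead of the console. A sketch with a hypothetical instance ID; the command is echoed for illustration, so remove the echo to actually stop the instance:

```shell
# Hypothetical instance ID; substitute the ID of your current ATC Manager.
instance_id="i-0123456789abcdef0"
# Stop, but do not terminate, so you can roll back if the upgrade fails.
cmd="aws ec2 stop-instances --instance-ids $instance_id"
echo "$cmd"
```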
- Launch a new ATC Manager instance and connect it to the existing ELB and RDS database.
Follow the instructions in the Aspera Transfer Cluster Manager Admin Guide for AWS: Launching the ATC Manager AMI to create a new ATC Manager instance. Include the ATC Manager version in the instance name to avoid confusion.
Use the following custom configuration to connect the instance to your RDS database, making sure "restore" is set to true and replacing your_rds_endpoint_url with the actual endpoint:

{
  "restore": true,
  "statestore_backup_period": "1m",
  "database": {
    "host": "your_rds_endpoint_url",
    "port": 3306,
    "user": "db_username",
    "password": "db_password"
  }
}

Use the following script to connect the instance to your ELB, replacing your_elb_name with the actual load balancer name:

#!/bin/bash
# Register this instance with the internal elastic load balancer.
elb="your_elb_name"
curl="curl -sS http://169.254.169.254/2014-11-05"
region=$($curl/dynamic/instance-identity/document | jq --raw-output '.region')
instance_id=$($curl/meta-data/instance-id)
aws elb register-instances-with-load-balancer --region="$region" --load-balancer-name "$elb" --instances "$instance_id"
# Look up the ELB DNS name and write it into the ATC Manager configuration
# so that cluster nodes reach the manager through the load balancer.
elb_dnsname="$(aws elb describe-load-balancers --region="$region" | jq --arg elb "$elb" --raw-output '.LoadBalancerDescriptions[] | select(.LoadBalancerName == $elb) | .DNSName')"
echo "$(jq --arg elb_dnsname "$elb_dnsname" '.private_ip |= $elb_dnsname' /opt/aspera/atcm/etc/atc-api.conf)" > /opt/aspera/atcm/etc/atc-api.conf

Note: In order to use this script, your cluster manager instance needs the ELB IAM policy.
- Verify that the existing cluster can connect to the new ATC Manager.
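The jq step at the end of the ELB script rewrites the `private_ip` field of atc-api.conf to the ELB DNS name. The effect can be illustrated offline with a made-up sample file and DNS name (both placeholders):

```shell
# Sample stand-in for /opt/aspera/atcm/etc/atc-api.conf; contents are made up.
conf=/tmp/atc-api.conf.sample
printf '{ "private_ip": "10.0.0.5", "port": 9090 }\n' > "$conf"
elb_dnsname="internal-my-elb-123.us-east-1.elb.amazonaws.com"
# Same jq update-assignment as in the script: replace private_ip in place.
echo "$(jq --arg elb_dnsname "$elb_dnsname" '.private_ip |= $elb_dnsname' "$conf")" > "$conf"
cat "$conf"
```

After this runs, the sample file's `private_ip` holds the ELB DNS name instead of the original address.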
- Allow new transfers to and from the cluster by exposing the cluster through DNS.
In the ATC Manager UI, go to Clusters. Then click Actions > Expose Cluster Through DNS.
- Delete the old instance.
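Once the new ATC Manager is verified working, the stopped instance can be terminated from the AWS CLI. A sketch with a hypothetical instance ID; the command is echoed for illustration, so remove the echo to actually terminate the instance:

```shell
# Hypothetical instance ID of the old, stopped ATC Manager.
# Terminating is irreversible; only do this after verifying the new manager.
instance_id="i-0123456789abcdef0"
cmd="aws ec2 terminate-instances --instance-ids $instance_id"
echo "$cmd"
```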
- Upgrade the cluster nodes.
For instructions, see Upgrading Cluster Nodes.