ACM Setup

  1. If you have not already done so, set hostnames on both nodes in the following file:
    /etc/sysconfig/network
    Below is an example of how to set a hostname on the first node:
    NETWORKING=yes
    HOSTNAME=orchestrator1
    GATEWAY=10.20.104.1
  2. To allow the hostname changes to take effect, reboot both nodes:
    $ shutdown -r now
  3. To configure the ACM log file as /var/log/aspera.log, managed by the rsyslog service, open /etc/rsyslog.conf in an editor on each node:
    $ vi /etc/rsyslog.conf
  4. Add these lines at the end of the file:
    # Aspera Logging
    local2.* -/var/log/aspera.log
  5. Replace the cron.none occurrence with the following:
    cron.none;local2.none
  6. Replace the /var/log/messages occurrence with the following:
    -/var/log/messages
  7. Restart rsyslog with the following:
    $ /etc/init.d/rsyslog restart 
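The edits in steps 4 through 6 can also be applied with sed instead of manual editing. Below is a minimal sketch; it works on a scratch copy under /tmp containing a sample catch-all rule, so the file name and contents are stand-ins for the real /etc/rsyslog.conf.

```shell
#!/bin/sh
# Scratch copy with a sample catch-all rule (stand-in for /etc/rsyslog.conf).
CONF=/tmp/rsyslog.conf.sample
cat > "$CONF" <<'EOF'
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
EOF

# Step 5: exclude local2 messages from the catch-all rule.
sed -i 's/cron\.none/cron.none;local2.none/' "$CONF"
# Step 6: make writes to /var/log/messages asynchronous (leading "-").
sed -i 's|[[:space:]]/var/log/messages|    -/var/log/messages|' "$CONF"
# Step 4: route local2 to the Aspera log, also asynchronously.
cat >> "$CONF" <<'EOF'
# Aspera Logging
local2.* -/var/log/aspera.log
EOF

cat "$CONF"
```

After verifying the result on the copy, the same sed expressions can be applied to /etc/rsyslog.conf before restarting rsyslog.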
  8. Create the following directory if it does not already exist. It is the mount point for the ACM files on the shared storage.
    /mnt/shared/orchestrator/acm_files
  9. On one node only, copy the ACM package to a temporary location, for example /tmp.
    The ACM package file name has the following format:
    acm_orchestrator-version_number.tar.gz
    Extract the files with the following commands:
    $ cp /tmp/acm_orchestrator-version_number.tar.gz /mnt/shared/orchestrator/acm_files
    $ cd /mnt/shared/orchestrator/acm_files
    $ tar xvfz acm_orchestrator-version_number.tar.gz
  10. Create a symbolic link on both nodes.
    $ ln -s /mnt/shared/orchestrator/acm_files /opt/aspera/acm
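Steps 8 and 10 can be made safe to re-run on either node. A minimal sketch, using a scratch root under /tmp as a stand-in for the real /mnt/shared and /opt/aspera paths:

```shell
#!/bin/sh
set -e
# Scratch root standing in for the real filesystem layout (illustration only).
ROOT=/tmp/acm_demo
ACM_FILES="$ROOT/mnt/shared/orchestrator/acm_files"
LINK="$ROOT/opt/aspera/acm"

# Step 8: -p creates the directory only if it is missing, so re-runs are harmless.
mkdir -p "$ACM_FILES" "$(dirname "$LINK")"

# Step 10: -sfn replaces an existing symlink instead of failing on a second run.
ln -sfn "$ACM_FILES" "$LINK"
readlink "$LINK"   # prints /tmp/acm_demo/mnt/shared/orchestrator/acm_files
```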
  11. Verify that database.yml exists in the following directory:
    /opt/aspera/var/config/orchestrator/
    If database.yml is not there, copy it from the following directory into the one above:
    /opt/aspera/orchestrator/config/
  12. Copy database.yml into /opt/aspera/acm/config as follows:
    $ cp /opt/aspera/var/config/orchestrator/database.yml /opt/aspera/acm/config/
  13. Make sure that the MySQL user and password defined in database.yml match the MYSQL_USER and MYSQL_PWD values defined in the following files:
    /opt/aspera/acm/bin/acm4orchestrator
    /opt/aspera/acm/bin/acmctl
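The consistency check in step 13 can be scripted. Below is a minimal sketch, assuming database.yml carries `username:` and `password:` keys and the ACM scripts assign MYSQL_USER and MYSQL_PWD as shell variables; it runs against stand-in files in /tmp rather than the real paths, since the exact layout of the installed files may differ.

```shell
#!/bin/sh
# Stand-in files; the real ones live under /opt/aspera (illustration only).
cat > /tmp/database.yml <<'EOF'
production:
  username: orchestrator
  password: secret
EOF
cat > /tmp/acm4orchestrator <<'EOF'
MYSQL_USER=orchestrator
MYSQL_PWD=secret
EOF

YML_USER=$(awk '$1 == "username:" {print $2; exit}' /tmp/database.yml)
YML_PWD=$(awk '$1 == "password:" {print $2; exit}' /tmp/database.yml)
ACM_USER=$(sed -n 's/^MYSQL_USER=//p' /tmp/acm4orchestrator)
ACM_PWD=$(sed -n 's/^MYSQL_PWD=//p' /tmp/acm4orchestrator)

if [ "$YML_USER" = "$ACM_USER" ] && [ "$YML_PWD" = "$ACM_PWD" ]; then
    echo "credentials match"
else
    echo "MISMATCH: database.yml ($YML_USER) vs ACM script ($ACM_USER)" >&2
    exit 1
fi
```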
  14. To install ACM in the crontab on both nodes so that it gets launched every minute, run the following command:
    $ crontab -e
  15. In the editor, add the following line, substituting the correct value for node_IP_address:
    * * * * *   /opt/aspera/acm/bin/acm4orchestrator  node_IP_address  >  /dev/null 2>&1
    For example:
    * * * * * /opt/aspera/acm/bin/acm4orchestrator 10.20.104.10 > /dev/null 2>&1
    If you don't know your IP address, obtain it with the following command:
    $ ip addr | grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet 10.20.104.10/24 brd 10.20.104.255 scope global eth0
    In the example above, 10.20.104.10 is the correct IP address.
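The address can also be extracted programmatically instead of read off by eye. A minimal sketch that keeps the first non-loopback `inet` line; the sample text below reproduces the output above, and on a live node you would pipe `ip addr` directly into the same awk filter:

```shell
#!/bin/sh
# Sample "ip addr" output; on a live node, replace this with: ip addr
SAMPLE='inet 127.0.0.1/8 scope host lo
inet 10.20.104.10/24 brd 10.20.104.255 scope global eth0'

# Keep the first address not on the loopback interface and strip the /prefix.
NODE_IP=$(printf '%s\n' "$SAMPLE" \
    | awk '/inet / && $NF != "lo" {sub(/\/.*/, "", $2); print $2; exit}')
echo "$NODE_IP"   # prints 10.20.104.10
```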
  16. (Optional) Pass the shared partition's device number on the acm4orchestrator command line to confirm that the shared partition is mounted on Aspera Orchestrator.
    1. Obtain the partition (device) number of the ACM mount point:
      $ stat -c "%d" path_to_acm_mount
      For example:
      $ stat -c "%d" /mnt/acm/
      21
    2. Run the following command:
      $ crontab -e
    3. Enter the following, using the device number obtained above:
      * * * * *   /opt/aspera/acm/bin/acm4orchestrator  node_IP_address  shared_partition_device_number  >  /dev/null 2>&1
      For example:
      * * * * *   /opt/aspera/acm/bin/acm4orchestrator 10.20.104.10  21  >  /dev/null 2>&1
      Note: The partition number might change if the mount point is renamed or if the mount is unmounted and remounted. Therefore, when these operations are performed, compare the original partition number with the one issued in the command above and update the crontab entry accordingly.
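The comparison described in the note can be scripted as a quick drift check. A minimal sketch; /tmp stands in for the real ACM mount point, and the recorded value defaults to the live one so the example is self-contained:

```shell
#!/bin/sh
# Compare the device number recorded in the crontab entry with the live one.
# Arguments (both optional, illustration only): mount point, recorded number.
MOUNT=${1:-/tmp}
RECORDED=${2:-$(stat -c "%d" "$MOUNT")}   # default: current value, i.e. no drift

CURRENT=$(stat -c "%d" "$MOUNT")
if [ "$CURRENT" -eq "$RECORDED" ]; then
    echo "OK: device number $CURRENT still matches the crontab entry"
else
    echo "DRIFT: crontab records $RECORDED but $MOUNT is now on $CURRENT" >&2
    exit 1
fi
```

Running it after every unmount/remount or rename of the mount point tells you whether the crontab entry needs updating.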