Setting up Aspera Cluster Manager

For overview information about Aspera Cluster Manager (ACM), see Aspera Cluster Manager (Overview).

Installing ACM Software on the Shared Storage

  1. Obtain the latest ACM software from Aspera and place it in the shared /mnt/shared/orchestrator/acm_files/ directory.
  2. Extract the ACM software on the dedicated shared volume by running the following command, as root:
    $ cd /mnt/shared/orchestrator/acm_files/
    $ tar xzvf acm4orchestrator-0-4-version.tar.gz
    Note: You only need to run this step on one node; the /mnt/shared/orchestrator/acm_files/ directory is shared by both Orchestrator servers. The extraction creates a directory called acm inside /mnt/shared/orchestrator/acm_files/.
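    Because the volume is shared, you can confirm the extraction from either node, for example:
      $ ls -ld /mnt/shared/orchestrator/acm_files/acm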
  3. Create a symbolic link by executing the following commands on both Orchestrator servers:
    $ cd /opt/aspera
    $ ln -s /mnt/shared/orchestrator/acm_files/acm ./acm
  4. Copy the file database.yml from the following directory:
    /opt/aspera/var/config/orchestrator/
    to the following directory:
    /opt/aspera/orchestrator/config/
  5. Confirm that the MySQL user and password defined in database.yml match the MYSQL_USER and MYSQL_PWD values as defined in the following files:
    /opt/aspera/acm/bin/acm4orchestrator
    /opt/aspera/acm/bin/acmctl
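    For example, you can display the values side by side with grep (a minimal sketch; in a Rails-style database.yml the keys are typically username and password):
      $ grep -E "username|password" /opt/aspera/orchestrator/config/database.yml
      $ grep -E "MYSQL_USER|MYSQL_PWD" /opt/aspera/acm/bin/acm4orchestrator /opt/aspera/acm/bin/acmctl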
  6. To configure ACM to log to /var/log/aspera.log, managed by the rsyslog service, modify /etc/rsyslog.conf on each node with the following steps (a sketch of the resulting lines follows these steps).
    1. Add these lines at the end of the file (note that # Aspera Logging is text, not a command):
      # Aspera Logging
      local2.* -/var/log/aspera.log
    2. Replace all cron.none occurrences with the following:
      cron.none;local2.none
    3. Replace the occurrence of /var/log/messages with -/var/log/messages (the leading hyphen tells rsyslog not to sync the file after every write).
    4. Restart rsyslog:
      $ /etc/init.d/rsyslog restart
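    After these edits, the relevant lines of /etc/rsyslog.conf look like the following (a sketch based on a default Red Hat-style layout; your existing selectors may differ):
      *.info;mail.none;authpriv.none;cron.none;local2.none    -/var/log/messages
      # Aspera Logging
      local2.*    -/var/log/aspera.log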
  7. Set up your system to rotate the logs.

    You may find that your log file, /var/log/aspera.log, grows too quickly. In that case, there are several ways to rotate Aspera logs:

    • Option A: Add /var/log/aspera.log to the following file:
      /etc/logrotate.d/syslog
    • Option B: Create an entry for aspera.log in the following file:
      /etc/logrotate.conf
    • Option C: Create a separate configuration file for aspera.log in the following directory:
      /etc/logrotate.d/

    Option A will rotate your logs with the system logs (usually once a week, compressed, and saving the last 10 logs). However, on some servers, there is so much traffic that the logs need to be rotated more often than once a week; in that case, use Option B or C.

    Option A: Add /var/log/aspera.log to the entries in /etc/logrotate.d/syslog, as follows:

    /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron /var/log/aspera.log
    {
        sharedscripts
        postrotate
            /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
            /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
        endscript
    }

    Option B: Edit /etc/logrotate.conf by adding the configuration after the line # system-specific logs may also be configured here.

    The following example compresses and rotates 10 logs whenever /var/log/aspera.log reaches 100MB. After log rotation is complete, it will run whatever scripts are specified by postrotate ... endscript.

    /var/log/aspera.log {
        rotate 10
        size 100M
        create 664 root
        postrotate
            /usr/bin/killall -HUP syslogd
        endscript
        compress
    }

    The following example compresses and rotates 10 logs once daily. Instead of moving the original log file and creating a new one, the copytruncate option tells logrotate to first copy the original log file, then truncate it to zero bytes.

    /var/log/aspera.log {
        daily
        rotate 10
        copytruncate
        compress
    }

    Option C: Create a separate /etc/logrotate.d/aspera configuration file containing the same information as Option B, as shown in the sketch below.
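    For example, the finished file might look like the following (a sketch mirroring the daily configuration from Option B):

    $ cat /etc/logrotate.d/aspera
    /var/log/aspera.log {
        daily
        rotate 10
        copytruncate
        compress
    }

    You can dry-run a new configuration without rotating anything by using logrotate's debug flag: logrotate -d /etc/logrotate.d/aspera.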

    If you find that A4 logs are being overwritten before long transfers of many files are complete, you can increase the log size. For more information, see Logs Overwritten Before Transfer Completes.

  8. Now that you have completed this procedure, return to the remaining steps in Setting up Aspera Cluster Manager.

Turning Off MySQL, Apache, and Orchestrator Services

Turn off the MySQL, Apache, and Orchestrator services with the following chkconfig commands so that they no longer start automatically at boot; ACM starts them as needed on each node:
$ chkconfig aspera_mysqld off
$ chkconfig aspera_httpd off
$ chkconfig AsperaOrchestrator off
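
To confirm that the services are no longer set to start at boot, you can list their runlevel settings (a quick check; output format varies by distribution):
$ chkconfig --list | grep -E "aspera|Orchestrator"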

Running an ACM Sanity Check

The acmctl command provides an option for checking that the configuration required by the acm4orchestrator script is in place. Make sure that each server passes the sanity check.

  1. Run the acmctl command with the -s option on both nodes to verify the basic ACM prerequisites.

    Note that the status of the Checking if an entry for ACM seems to exist in the crontab check displays as KO, not OK, in the example below, because the user has not yet created the crontab entry that runs the ACM software on each server (that entry is created in the next section).

    $ /opt/aspera/acm/bin/acmctl -s

    ACM sanity check
    ----------------
    Checking if an entry for ACM seems to exist in the crontab     KO
    Checking that the orchestrator master service is disabled in chkconfig     OK
    Checking that SE Linux mode is not set to enforcing     OK
    
  2. Correct any check that does not pass (except for the crontab check, which is addressed in the next section).

Configuring the crontab Entry to Run ACM

Configure ACM services in crontab on both nodes so that the acm4orchestrator script is launched every minute.

Use the crontab -e command to configure your entry as follows.
$ crontab -e
* * * * * /opt/aspera/acm/bin/acm4orchestrator ip_address > /dev/null 2>&1
One parameter must be entered in crontab: the IP address of the host where the script is running. This parameter is passed to the acm4orchestrator script.
In the example below, the IP address is 10.0.71.21.
$ crontab -e
* * * * * /opt/aspera/acm/bin/acm4orchestrator 10.0.71.21 > /dev/null 2>&1

Once configured in crontab, the acm4orchestrator script runs every minute to determine which node is active and to start the required Orchestrator services on each node according to its current role (active or passive).
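
To confirm the entry is in place on each node and that ACM is running, you can list the crontab and watch the ACM log (assuming the rsyslog configuration from the installation procedure):
$ crontab -l | grep acm4orchestrator
$ tail -f /var/log/aspera.log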

Obtaining the crontab Parameter Values

To list the IP addresses available on a system, run the following command:
$ ip addr | grep "inet"
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
inet 10.0.75.21/16 brd 10.0.255.255 scope global eth0
To determine the correct value to use for the device number, run the following command:
$ stat -c "%d" /mnt/shared/orchestrator/acm_files/
20
/mnt/shared/orchestrator/acm_files/ is a placeholder for your shared storage mount point that contains ACM files.
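
If you want to check the value as seen from the other node without logging in to it separately, you can run the same command over ssh (a sketch; orchestrator2 is a placeholder hostname):
$ ssh orchestrator2 stat -c "%d" /mnt/shared/orchestrator/acm_files/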

Identifying the Status of ACM on Each Orchestrator Server

The following command can be used to identify which Orchestrator server is active and which is passive:
$ /opt/aspera/acm/bin/acmctl -i
For more information about using this command, see Disabling and Re-enabling ACM on One Node.

Connecting to Orchestrator with the VIP

If the services are running properly, you can now connect to the Orchestrator application using the virtual IP address (VIP) assigned to the ACM cluster.

If the load balancer is correctly configured, you should now be able to connect to the Orchestrator web application using a URL that points to the VIP.
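
For a quick check from the command line before using a browser, you can request the page headers through the VIP (a sketch; replace vip_address with your virtual IP address and adjust the scheme and port to match your deployment):
$ curl -kI https://vip_address/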