Installing and Configuring Operating Systems

Note: All commands are run as root.

Install a Supported Operating System

The admin user's first task is to install a supported operating system on both nodes that will host the Orchestrator application. See General Requirements for details on supported operating systems.

Update your Environment

If desired, update some or all packages on each system.
# yum -y update

Check Network Settings and Names

Confirm that your network settings are correctly configured and that each host has a unique host name properly configured within the name resolution mechanism you use (DNS, hosts file, and so on). Each host must be able to resolve its own name, as well as the name of the other node.

Run the following command on both nodes. The output should be the host name you expect for that node in your environment.
# hostname
haorchestrator1.mydomain.com
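
To confirm that each node can also resolve the other node's name, you can query the resolver directly. A minimal check, assuming the example host names haorchestrator1.mydomain.com and haorchestrator2.mydomain.com (the addresses shown are placeholders):
# getent hosts haorchestrator1.mydomain.com
10.0.75.21      haorchestrator1.mydomain.com
# getent hosts haorchestrator2.mydomain.com
10.0.75.22      haorchestrator2.mydomain.com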

Configure Local Firewalls

No traffic filter should be put in place between the two nodes. If your nodes are located behind a corporate firewall (and thus appropriately protected), you should disable the Linux firewall components. Use chkconfig to prevent the firewall from becoming active when the system is rebooted.
Note: If you do not disable the firewall, be sure to configure it to open the necessary ports for Aspera. See TCP and UDP Ports Used in Orchestrator HA Environments for a list of ports used by the Orchestrator HA environment.
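
For example, on systems that use the iptables service (typical of RHEL/CentOS releases that still use chkconfig), the following commands stop the firewall and prevent it from starting at boot; adapt them if your distribution uses a different firewall service:
# service iptables stop
# chkconfig iptables off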

Disable SELinux on All Servers

SELinux must be disabled or set to permissive in the /etc/selinux/config file on each Orchestrator server system. You can confirm the current status of SELinux by running the sestatus command.
# sestatus
SELinux status: disabled
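
If sestatus reports that SELinux is enforcing, one approach is shown below: setenforce 0 switches to permissive mode immediately, and the sed command edits /etc/selinux/config so that SELinux stays disabled after a reboot:
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config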

Create User Accounts and Groups on Each Orchestrator Server

Manually create the mysql and orchestrator user accounts and groups on each Orchestrator server before installing any Aspera packages.
Note: It is critical to ensure that the UID and GID for the mysql and orchestrator user accounts are consistent across all orchestrator servers.

Additionally, ensure that the permissions defined on the NFS server are appropriate for the shared directories (in other words, consistent with what has already been defined on the shared directories).

Use the following commands on each node to create the required users and groups. In this example, the UID and GID for the Aspera Orchestrator user are 776, and the UID and GID for the Aspera MySQL user are 778. The actual values you enter should match the values used for the NFS-exported directories defined on the NFS server. You must confirm that the UID and GID values for each user and group are the same on both Orchestrator server systems. Verify their values in the /etc/passwd and /etc/group files of each system (and configure ID mapping as described in the note below).
# groupadd -g 776 orchestrator && useradd -c "Aspera Orchestrator user" \
  -d /home/orchestrator -g orchestrator -m -s /bin/aspshell -r -u 776 orchestrator

# groupadd -g 778 mysql && useradd -c "Aspera Mysql" \
  -d /home/mysql -g mysql -m -s /bin/false -u 778 mysql
The UIDs and GIDs used on the Orchestrator servers for orchestrator and mysql users must match the UIDs and GIDs associated with the shared directories on the NFS server.
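
A quick way to confirm these values on each node is the id command; the output shown is an example that assumes the UIDs and GIDs used above:
# id orchestrator
uid=776(orchestrator) gid=776(orchestrator) groups=776(orchestrator)
# id mysql
uid=778(mysql) gid=778(mysql) groups=778(mysql)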
Note: NFSv4 uses ID mapping to enforce shared directory ownership; it must be configured on the NFS server and on each NFS client in a way that avoids access problems for Orchestrator and ACM. Set the Domain variable in the /etc/idmapd.conf file to localdomain on the NFSv4 server and on both Aspera Orchestrator server systems (in other words, on every system that participates in the Orchestrator environment).
The following link provides a discussion of configuring the idmap service:
http://www.softpanorama.org/Net/Application_layer/NFS/Troubleshooting/nfsv4_mounts_files_as_nobody.shtml
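
As a sketch, the relevant setting in /etc/idmapd.conf looks like the following on the NFSv4 server and on both Orchestrator servers (restart the ID mapping service, for example rpcidmapd on RHEL/CentOS 6, after changing it):
[General]
Domain = localdomain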

Mount Remote File Systems on Each Orchestrator Server

Orchestrator servers in HA environments must be configured with shared storage. There are three shared directories that need to be available to each Orchestrator server.
Note: The following shared storage names are used as mount points in this document, but you may use any mount point names you prefer.
Mount Point     Usage                                           Owner                        Permissions
/mysql_data     Stores shared MySQL data files                  nobody.root                  drwx------
/orchestrator   Stores Orchestrator upload and download files   orchestrator.orchestrator    drwx------
/acm_files      Stores shared ACM files                         nobody.nobody                drwx------

Configuring the /etc/fstab file will cause the shared directories to be automatically mounted when the system reboots.

The following entries in the /etc/fstab file indicate that the shared directories (/home/mysql_data, /home/orchestrator, and /home/acm_files) are exported from the NFS server at IP address 10.0.75.10. These shared directories are mounted on the corresponding local mount points on each Orchestrator server (/mysql_data, /orchestrator, /acm_files). The nfs entry indicates the type of file system being shared, and the remaining options are typical parameters for mounting file systems in an Orchestrator HA environment.

10.0.75.10:/home/mysql_data     /mysql_data     nfs    rw,sync,hard,intr    0 0
10.0.75.10:/home/orchestrator   /orchestrator   nfs    rw,sync,hard,intr    0 0
10.0.75.10:/home/acm_files      /acm_files      nfs    rw,sync,hard,intr    0 0

Once you have configured the /etc/fstab file, make sure the mount points have been created on both Orchestrator servers, and confirm each directory’s ownership and permissions.
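
As an illustration, using the example mount point names above, the following commands create the mount points, mount everything defined in /etc/fstab, and display the resulting ownership and permissions:
# mkdir -p /mysql_data /orchestrator /acm_files
# mount -a
# ls -ld /mysql_data /orchestrator /acm_files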

Install ACM Software on the Shared Storage

Obtain the latest ACM software from Aspera and place it in the shared /acm_files directory.

Extract the ACM software on the dedicated shared volume by running the following command:
# cd /acm_files
# tar xzvf /root/acmversionorchestrator-version.tar.gz
Note: You only need to perform this task on one node, because the /acm_files directory is shared by both Orchestrator servers. This extraction creates a directory called acm in the /acm_files directory. This acm directory is referenced in a later step.
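
To confirm the extraction (assuming the archive unpacks into an acm directory as described above), list the new directory from either node:
# ls -ld /acm_files/acm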