Installing and Configuring Operating Systems

Note: All commands in this section are run as root.

Installing a Supported Operating System

The admin user's first task is to install a supported operating system on both nodes that will host the Orchestrator application. See General Requirements for details on supported operating systems.

Updating Your Environment

If desired, update some or all packages on each system.
# yum -y update

Checking Network Settings and Names

Confirm that your network settings are correctly configured and that each host has a unique host name that is properly configured within the name resolution mechanism you use (DNS, hosts file, and so on). Each host must be able to resolve its own name, as well as the name of the other node.

Run the following command on both nodes.
# hostname
haorchestratornode_id.my_domain.com
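As a quick sanity check, the following sketch confirms that this host can resolve both node names; ha-node1 and ha-node2 are placeholder names, so substitute the actual names of your two nodes.

```shell
# Check that this host can resolve both node names (placeholders below).
for node in ha-node1.my_domain.com ha-node2.my_domain.com; do
    if getent hosts "$node" > /dev/null; then
        echo "OK: $node resolves"
    else
        echo "ERROR: $node does not resolve" >&2
    fi
done
```

Run the same loop on both nodes; every name must resolve on every node.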

Configuring Local Firewalls

Do not place a traffic filter between the two nodes. If your nodes are located behind a corporate firewall (and thus appropriately protected), disable the Linux firewall components. The chkconfig command prevents the firewall from becoming active when the system is rebooted.
# service iptables stop
iptables: Flushing firewall rules:                         [ OK ]
iptables: Setting chains to policy ACCEPT: filter          [ OK ]
iptables: Unloading modules:                               [ OK ]

# service ip6tables stop
ip6tables: Flushing firewall rules:                        [ OK ]
ip6tables: Setting chains to policy ACCEPT: filter         [ OK ]
ip6tables: Unloading modules:                              [ OK ]

# chkconfig iptables off
# chkconfig ip6tables off
If you are using CentOS 7 or RHEL 7, run two additional commands to stop and disable firewalld:
# systemctl stop firewalld
# systemctl disable firewalld
Note: If you do not disable the firewall, configure it to open the necessary ports for Aspera. See TCP and UDP Ports Used in Orchestrator High Availability Environments for a list of ports used by the Orchestrator HA environment.
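For example, with firewalld, ports can be opened permanently and the runtime configuration reloaded. The port numbers below are illustrative only; take the authoritative list from the referenced ports section.

```
# firewall-cmd --permanent --add-port=33001/tcp
# firewall-cmd --permanent --add-port=33001/udp
# firewall-cmd --reload
```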

Disabling SELinux on All Servers

SELinux must be disabled or set to permissive in the /etc/selinux/config file on each Orchestrator server. You can confirm the current SELinux status by running the sestatus command.
# sestatus
SELinux status: disabled
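One way to make the change persistent and apply it immediately, assuming the stock /etc/selinux/config layout, is the following sketch (use SELINUX=disabled instead of permissive if your policy requires SELinux fully off):

```shell
# Rewrite the SELINUX= line in the persistent configuration file.
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# Apply to the running system; a reboot is not required for permissive mode.
setenforce 0
# Verify the new status.
sestatus
```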

Creating User Accounts and Groups on Each Orchestrator Server (For Non-Root User)

The installation of the Aspera Common Components automatically creates a mysql user.

Note: It is critical to ensure that the UID and GID of the mysql user account are consistent across all Orchestrator servers.

Ensure that the permissions defined on an NFS server are appropriate for the shared directories (in other words, consistent with what has already been defined on the shared directories).
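As a minimal check, the numeric IDs can be printed on each node and compared; the groupmod/usermod commands in the comment are examples only, and the ID value shown there is an assumption, not a required value.

```shell
# Print the mysql account's UID and GID so the values can be compared
# across nodes. If they differ, realign them before proceeding, e.g.:
#   groupmod -g 27 mysql && usermod -u 27 -g 27 mysql   # 27 is an example ID
if id mysql > /dev/null 2>&1; then
    printf 'mysql uid=%s gid=%s\n' "$(id -u mysql)" "$(id -g mysql)"
else
    echo "mysql user does not exist yet on this node"
fi
```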

Note: NFS v4 uses ID mapping to enforce shared-directory ownership; it must be configured on the NFS server and each NFS client in a way that avoids access problems with Orchestrator and ACM.
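For example, NFSv4 ID mapping relies on a common domain name in /etc/idmapd.conf; the value must be identical on the NFS server and on every client (my_domain.com is a placeholder):

```
[General]
Domain = my_domain.com
```

Restart the ID-mapping service after changing this file.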

Mounting Remote File Systems on Each Orchestrator Server

Orchestrator servers in HA environments must be configured with shared storage. There are three shared directories that need to be available to each Orchestrator server.
Note: Ownership and permissions (such as mysql:mysql) can be set only on the NFS server side.
The following shared storage names are used as mount points in this document, but you may use any mount point names you prefer:
Mount Point                                        Usage                                           Owner           Permissions
/mnt/shared/orchestrator/mysql_data                Stores shared MySQL data files                  mysql:mysql     drwx------
/mnt/shared/orchestrator/orchestrator_var_data     Stores Orchestrator upload and download files   nobody:nobody   drwx------
/mnt/shared/orchestrator/acm_data/                 Stores shared ACM files                         nobody:nobody   drwx------

Note: Replace /mnt/shared/orchestrator/mysql_data with the mount point you chose for the MySQL data.
  1. Configure the /etc/fstab file.
    When this file is configured, the shared directories are automatically mounted when the system reboots.
    In the following example /etc/fstab file, the shared directories (/mnt/shared/orchestrator/mysql_data, /mnt/shared/orchestrator/orchestrator_var_data, and /mnt/shared/orchestrator/acm_data/) are shared from an NFS server with the IP address 10.0.75.10 (the entries in your file do not all need to come from a single server). The example below contains options typically used when mounting file systems for the Orchestrator high availability environment. For details on how to adjust the entries to your storage vendor's requirements, consult the man page for the file (man fstab).
    10.0.75.10:/home/mysql_data            /mnt/shared/orchestrator/mysql_data              nfs4  rw,sync,hard,intr       0 0
    10.0.75.10:/home/orchestrator          /mnt/shared/orchestrator/orchestrator_var_data   nfs4  rw,sync,hard,intr       0 0
    10.0.75.10:/home/mnt/shared/acm_data/  /mnt/shared/orchestrator/acm_data/               nfs4  rw,noac,sync,hard,intr  0 0
    These shared directories are mounted to the corresponding local directories on each Orchestrator server (/mnt/shared/orchestrator/mysql_data, /mnt/shared/orchestrator/orchestrator_var_data, and /mnt/shared/orchestrator/acm_data/).
    Explanation of Configuration Options

    Option  Details
    nfs4    The type of file system being shared (in this case, nfs4).
    noac    Disables attribute caching. This is crucial for the /mnt/shared/orchestrator/acm_data/ directory because attribute caching can break ACM. (Attribute caching may be useful on the other directories because it speeds up access.)
    hard    Permits unlimited retries of requests to the NFS server; note that it can cause the system to hang if the server becomes unreachable. The alternative soft setting is sometimes avoided because it can cause data corruption.
    For a description of all available options, see the man page for the Network File System (man nfs).
  2. Once you have configured the /etc/fstab file, make sure the mount points have been created on both Orchestrator servers, and confirm each directory’s ownership and permissions.
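For example, using the mount points from this document:

```
# mkdir -p /mnt/shared/orchestrator/mysql_data /mnt/shared/orchestrator/orchestrator_var_data /mnt/shared/orchestrator/acm_data
# mount -a
# ls -ld /mnt/shared/orchestrator/*
```

The ls output should show mysql:mysql on mysql_data and nobody:nobody on the other two directories, each with drwx------ permissions.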