Installing and Configuring Operating Systems

Note: All commands in this section are run as root.

Installing a Supported Operating System

The admin user's first task is to install a supported operating system on both nodes that will host the Orchestrator application. See General Requirements for details on supported operating systems.

Updating Your Environment

If desired, update some or all packages on each system.
# yum -y update

Checking Network Settings and Names

Confirm that your network settings are correctly configured and that each host has a unique host name properly configured within the name resolution mechanism you use (DNS, hosts file, and so on). Each host must be able to resolve its own name, as well as the name of the other node.

Run the following command on both nodes.
# hostname
haorchestratornode_id.my_domain.com
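
To verify name resolution in both directions, query each node's name from both hosts. The node names below are placeholders for your actual fully qualified host names; each command should return the correct IP address and name for that node.
# getent hosts haorchestratornode1.my_domain.com
# getent hosts haorchestratornode2.my_domain.com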

Configuring Local Firewalls

Do not place a traffic filter between the two nodes. If your nodes are located behind a corporate firewall (and thus appropriately protected), disable the Linux firewall components. The chkconfig command prevents the firewall from becoming active when the system is rebooted.
# service iptables stop
iptables: Flushing firewall rules:                              [  OK  ]
iptables: Setting chains to policy ACCEPT: filter               [  OK  ]
iptables: Unloading modules:                                    [  OK  ]

# service ip6tables stop
ip6tables: Flushing firewall rules:                             [  OK  ]
ip6tables: Setting chains to policy ACCEPT: filter              [  OK  ]
ip6tables: Unloading modules:                                   [  OK  ]

# chkconfig iptables off
# chkconfig ip6tables off
Note: If the firewall is not disabled, configure the firewall to open the necessary ports for Aspera. See TCP and UDP Ports Used in Orchestrator High Availability Environments for a list of ports used by the Orchestrator HA environment.
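If you leave the firewall enabled, rules similar to the following open a port for Aspera traffic. TCP and UDP 33001 are shown only as illustrative examples; substitute the ports listed in TCP and UDP Ports Used in Orchestrator High Availability Environments.
# iptables -I INPUT -p tcp --dport 33001 -j ACCEPT
# iptables -I INPUT -p udp --dport 33001 -j ACCEPT
# service iptables save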

Disabling SELinux on All Servers

SELinux must be disabled or set to permissive in the /etc/selinux/config file on each Orchestrator server system. You can confirm the current SELinux status by running the sestatus command.
# sestatus
SELinux status: disabled
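
To change the setting, edit /etc/selinux/config so that it contains SELINUX=disabled (or SELINUX=permissive) and reboot the system. For example:
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# reboot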

Creating User Accounts and Groups on Each Orchestrator Server (For Non-Root User)

The installation of the Aspera Common Components automatically creates a mysql user.

Note: It is critical to ensure that the UID and GID for the mysql user account are consistent across all Orchestrator servers.
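
You can confirm this by running the id command on each node and comparing the uid and gid values in the output.
# id mysql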

Ensure that the ownership and permissions defined on the NFS server are appropriate for the shared directories (that is, consistent with what has already been defined on the shared directories).

Note: NFS version 4 uses ID mapping to enforce shared directory ownership; it must be configured on the NFS server and on each NFS client in a way that avoids access problems with Orchestrator and ACM.
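
For example, the NFSv4 ID-mapping domain must be identical on the NFS server and on both Orchestrator nodes. The domain shown below is a placeholder; set it to your own domain in /etc/idmapd.conf on each system.
Domain = my_domain.com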

Mounting Remote File Systems on Each Orchestrator Server

Orchestrator servers in HA environments must be configured with shared storage. There are three shared directories that need to be available to each Orchestrator server.
Note: Ownership and permissions (such as mysql:mysql) may be set only on the NFS server side; a server-side example follows the table below.
The following shared storage names are used as mount points in this document, but you may use any mount point names you prefer:
Mount Point              Usage                                          Owner           Permissions
/mnt/shared/mysql_data   Stores shared MySQL data files                 mysql:mysql     drwx------
/orchestrator            Stores Orchestrator upload and download files  nobody:nobody   drwx------
/mnt/shared/acm_data/    Stores shared ACM files                        nobody:nobody   drwx------
Note: Replace /mnt/shared/mysql_data with the mount point you chose for the MySQL data.
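The following sketch shows the corresponding commands on the NFS server, assuming it exports /home/mysql_data, /home/orchestrator, and /home/mnt/shared/acm_data/ as in the /etc/fstab example below; adjust the paths to match your exports.
# chown mysql:mysql /home/mysql_data
# chown nobody:nobody /home/orchestrator /home/mnt/shared/acm_data
# chmod 700 /home/mysql_data /home/orchestrator /home/mnt/shared/acm_data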
  1. Configure the /etc/fstab file.
    This action causes the shared directories to be automatically mounted when the system reboots.
    The following entries in the /etc/fstab file indicate that the shared directories (/home/mysql_data, /home/orchestrator, and /home/mnt/shared/acm_data/) are shared from the NFS server with an IP address of 10.0.75.10. These shared directories are mounted to their corresponding local directories on each Orchestrator server (/mnt/shared/mysql_data, /orchestrator, and /mnt/shared/acm_data/). The nfs4 entry indicates the type of file system being shared, and the remaining options define typical parameters used when mounting file systems in an Orchestrator HA environment.
    10.0.75.10:/home/mysql_data              /mnt/shared/mysql_data    nfs4  rw,sync,hard,intr       0 0
    10.0.75.10:/home/orchestrator            /orchestrator             nfs4  rw,sync,hard,intr       0 0
    10.0.75.10:/home/mnt/shared/acm_data/    /mnt/shared/acm_data/     nfs4  rw,noac,sync,hard,intr  0 0
  2. Once you have configured the /etc/fstab file, make sure the mount points have been created on both Orchestrator servers, and confirm each directory’s ownership and permissions.
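    For example (a sketch, assuming the mount points listed in the table above):
    # mkdir -p /mnt/shared/mysql_data /orchestrator /mnt/shared/acm_data
    # mount -a
    # ls -ld /mnt/shared/mysql_data /orchestrator /mnt/shared/acm_data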