Installing and Configuring the HA Environment
Install two standalone IBM Aspera Shares servers and join them together into an HA environment.
This guide assumes that Shares is installed on two servers, each with IBM Aspera High Speed Transfer Server software installed and configured. The High Speed Transfer Server on each system behaves like any other transfer node in the Shares environment.
Before You Start
- Review the System Requirements.
-
Check your network settings and names.
Confirm that your network settings are correctly configured and that each host has a unique hostname properly configured within the name resolution mechanism you use (DNS, hosts file, and so on). Each host must be able to resolve its own name as well as the name of the other node. Run the following command on both nodes; the resulting output should make sense in your environment.
# hostname
hashares1.mydomain.com
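To confirm that name resolution works in both directions, you can also look up each node's name from both systems. The hostnames below follow the example naming used in this guide (hashares2.mydomain.com is assumed here for the second node); substitute your own.
# getent hosts hashares1.mydomain.com
# getent hosts hashares2.mydomain.com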
Securing Your System
-
Disable local firewalls.
No traffic filter should be put in place between the two nodes. If your nodes are located behind a corporate firewall (and thus appropriately protected), you should disable the Linux firewall components. Use chkconfig to prevent the firewall from becoming active when the system is rebooted.
# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
# service ip6tables stop
ip6tables: Flushing firewall rules: [ OK ]
ip6tables: Setting chains to policy ACCEPT: filter [ OK ]
ip6tables: Unloading modules: [ OK ]
# chkconfig iptables off
# chkconfig ip6tables off
Note: If the firewall is not disabled, make sure to configure the firewall to open the necessary ports for Aspera. See TCP and UDP Ports Used in HA Environments for a list of ports used by the Shares HA environment.
-
Disable SELinux.
SELinux must be disabled or set to permissive in the /etc/selinux/config file on each High Speed Transfer Server and each Shares server system. You can confirm the SELinux current status by running the sestatus command.
# sestatus
SELinux status: disabled
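If sestatus reports enforcing, one common way to disable SELinux (assuming a standard /etc/selinux/config layout) is shown below; setenforce 0 switches the running system to permissive immediately, while the configuration change takes full effect after a reboot.
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# setenforce 0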
-
Configure SSH security on each High Speed Transfer Server.
See the Securing your SSH Server section in the IBM Aspera Shares Admin Guide for additional information and guidance.
Make sure that public/private key authentication is enabled on each server. Look for the following line in the /etc/ssh/sshd_config file and verify that it is uncommented:
PubkeyAuthentication yes
If you have modified the sshd_config file, restart the sshd service:
# service sshd restart
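To double-check the effective setting, you can ask sshd to print its running configuration (supported by recent OpenSSH versions); it should report pubkeyauthentication yes.
# sshd -T | grep -i pubkeyauthentication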
Configure Shares Servers
-
Create user accounts and groups on each Shares server.
The mysql and shares user accounts and groups must be created manually on both systems before installing any Aspera packages so that the UID and GID are consistent across the HA environment.
Note: It is critical that the UID and GID for the mysql and shares user accounts are the same on all Shares servers.
You can use the following commands on each node to create the required users and groups:
# groupadd -g 777 shares && useradd -c "Aspera Shares" -d /home/shares -g shares -m -s /bin/aspshell -r -u 777 shares
# groupadd -g 778 mysql && useradd -c "Aspera Mysql" -d /home/mysql -g mysql -m -s /bin/false -u 778 mysql
The UID and GID do not have to be 777 and 778; you can use any available values. Just make sure you use the same values on both systems.
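To verify that the UID and GID values match, run the following on both nodes and compare the output:
# id shares
# id mysql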
-
Mount remote file systems on each Shares server.
Shares servers in HA environments must be configured with shared storage. Three shared directories must be available to each Shares server.
The following are example mount points. Yours may be different.
Example Mount Point | Usage | Owner | Permissions | Notes
/mysql_data | Used to store the MySQL data files | nobody.root | drwx------ |
/shares | Used to store uploaded files | shares.shares | drwx------ |
/acm_files | Used to store the common ACM files | nobody.nobody | drwx------ | If using NFS, use the noac flag
-
Configure the /etc/fstab file to automatically mount the
directories when the system reboots.
10.0.75.10:/export/mysql_data /mysql_data nfs4 rw,sync,hard,intr 0 0
10.0.75.10:/export/shares /shares nfs4 rw,sync,hard,intr 0 0
10.0.75.10:/export/acm_files /acm_files nfs4 rw,sync,hard,intr,noac 0 0
The above entries in the /etc/fstab file indicate that the shared directories (/export/mysql_data, /export/shares, and /export/acm_files) are exported from the NFS server with IP address 10.0.75.10. These shared directories are mounted to their corresponding local directories on each Shares server (/mysql_data, /shares, /acm_files). The "nfs4" entry indicates the type of file system being shared, and the remaining options are typical parameters used when mounting file systems in a Shares HA environment.
Note: NFS version 4 is required for the Shares HA environment. If your version of Linux does not support NFS4, upgrade your server to support NFS version 4.
Once you have configured the /etc/fstab file, make sure the mount points have been created on both Shares servers, and confirm each directory's ownership and permissions.
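As a quick check, you can create the mount points if they do not exist yet, mount everything defined in /etc/fstab, and review ownership and permissions; the paths below are the example mount points used in this guide.
# mkdir -p /mysql_data /shares /acm_files
# mount -a
# ls -ld /mysql_data /shares /acm_files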
Install ACM
- Download ACM here: ACM Package
-
Extract it to the dedicated shared volume by running the following command:
# cd acm_files_mount_point
# tar xzvf /path/to/acm_package.tar.gz
Note: You only need to perform this task from one node as the acm_files_mount_point directory is shared by both Shares servers.
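Because the volume is shared, you can confirm from the other node that the extracted files are visible, for example (using the /acm_files example mount point):
# ls -l /acm_files/acm/bin/acm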
Install Aspera Software
-
Install the HST Server package if you haven't already:
# rpm -Uvh aspera-entsrv-version.rpm
-
On each server, install a valid license by copying the license keys into the
/opt/aspera/etc/aspera-license file.
Note: You must have separate license keys for each server.
-
Configure the shares user account for each HST server.
Add the shares system user to the /opt/aspera/etc/aspera.conf file.
-
Set the docroot to /shares:
# asconfigurator -x "set_user_data;user_name,shares;absolute,/shares"
-
Set up token authorization:
# asconfigurator -x "set_user_data;user_name,shares;authorization_transfer_in_value,token"
# asconfigurator -x "set_user_data;user_name,shares;authorization_transfer_out_value,token"
# asconfigurator -x "set_user_data;user_name,shares;token_encryption_key,encryption_key"
Confirm that the entries on each server are identical. In particular, confirm that the token_encryption_key value and the shares user's docroot (/shares) are the same on each transfer server.
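One way to compare the resulting settings on both transfer servers is to print the shares user's effective configuration with the asuserdata utility shipped with HST Server, then check that the docroot and token values match on each node.
# /opt/aspera/bin/asuserdata -u shares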
-
Configure Node API user accounts on each server.
Run the following command to create a Node API user account associated with the shares transfer user (system user) account:
# /opt/aspera/bin/asnodeadmin -a -u nodeadmin -x shares -p password
-
On each server, verify that the nodeadmin Node API user account has been
created and is associated with the shares transfer user by running the
following command:
# /opt/aspera/bin/asnodeadmin -l
-
Install the IBM Aspera Connect Browser Plug-In key.
-
If the .ssh folder does not already exist in the system user's home directory,
run the following command to create the folder:
# mkdir -p /home/shares/.ssh
-
If the authorized_keys file does not already exist, add the
aspera_id_dsa.pub public key to the file by running the
following command:
# cat /opt/aspera/var/aspera_id_dsa.pub >> /home/shares/.ssh/authorized_keys
-
Transfer the .ssh folder and authorized_keys file ownership to the system user by
running the following commands:
# chown -R shares:shares /home/shares/.ssh
# chmod 600 /home/shares/.ssh/authorized_keys
# chmod 700 /home/shares
# chmod 700 /home/shares/.ssh
Note: The system defined /home/shares as the shares system user's home directory when the user account was created. This is the proper location for the authorized_keys file. Shares uses the user's home directory to locate the .ssh/authorized_keys file, but actual file transfers made by the shares transfer user account are directed to the shares docroot directory (/shares) set in the aspera.conf file.
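A quick way to confirm the ownership and permissions described above:
# ls -ld /home/shares /home/shares/.ssh
# ls -l /home/shares/.ssh/authorized_keys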
Share Resources Between Nodes
With Shares running properly on each server, the next step is to configure the HA environment by integrating the nodes with each other. Integrating the nodes into the HA environment involves configuring the MySQL services and implementing the ACM software on each server.
This process involves using one system to configure the database for the aspera account, placing the MySQL data files from that server into the shared directory, then configuring each of the servers to use the shared database, and finally configuring each Shares server to use a special database.yml file provided by the ACM software.
- Choose a node to be the primary node.
-
Find and note the password for the aspera MySQL database user:
# cat /opt/aspera/shares/u/shares/config/database.yml
production:
  database: shares
  username: "aspera"
  password: "nqH5R5GhQoDyWj0DPEHvshltiGVOmD5z"
  host: "10.0.90.16"
  port: 4406
  adapter: mysql2
  encoding: utf8
  reconnect: false
  pool: 5
...
-
Retrieve the current root password for MySQL.
Retrieve the MySQL root account password from the /opt/aspera/shares/.my.cnf file on the primary server:
# cat /opt/aspera/shares/.my.cnf
[client]
user = root
password = RAAp2jRGIdfUoTBL3ttr
host = localhost
port = 4406
-
On the primary node, log in to MySQL as root using the password value you retrieved from the .my.cnf file:
# /opt/aspera/shares/bin/mysql -uroot -hlocalhost -ppassword
Note: There is no space between the -p option and the password value.
For example:
# /opt/aspera/shares/bin/mysql -uroot -hlocalhost -pRAAp2jRGIdfUoTBL3ttr
-
Grant access privileges to the user aspera with the password from the
database.yml file:
mysql> grant all privileges on *.* to 'aspera'@'primary_node_ip_address' identified by 'password' ;
mysql> grant all privileges on *.* to 'aspera'@'other_node_ip_address' identified by 'password' ;
For example:
mysql> grant all privileges on *.* to 'aspera'@'10.0.115.100' identified by 'nqH5R5GhQoDyWj0DPEHvshltiGVOmD5z' ;
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on *.* to 'aspera'@'10.0.115.101' identified by 'nqH5R5GhQoDyWj0DPEHvshltiGVOmD5z' ;
Query OK, 0 rows affected (0.00 sec)
Note: Include the quote marks exactly as shown (for example, 'aspera'@'10.0.115.100' and 'password') and make sure to include the final semicolon (;), separated from 'password' by a space.
-
Exit the MySQL environment.
mysql> quit
-
Verify the changes have been implemented by testing the ability to log into MySQL
using the aspera account and the IP address of the system where you ran the
mysql command.
Test the ability to log in:
# /opt/aspera/shares/bin/mysql -uaspera -hprimary_node_ip_address -ppassword
If you are able to get into the MySQL environment, the changes were successfully implemented.
Note: Attempting to log in using the address of the other server will fail at this point. This is resolved by sharing the MySQL database between both systems.
-
Stop and disable Shares services on each Shares server.
# service aspera-shares stop
# chkconfig aspera-shares off
-
Confirm that the aspera-shares services are stopped before proceeding. Proceeding
while the services are running may corrupt the MySQL database.
# service aspera-shares status
Checking status of aspera-shares ...
Status is stopped
-
Pick one node and copy the following files into the same directory on the other node, preserving the same owner and permissions (a copy sketch follows the list below).
- /opt/aspera/shares/u/shares/config/aspera/secret.rb
- /opt/aspera/shares/u/shares/config/initializers/secret_token.rb
- /opt/aspera/shares/u/stats-collector/etc/keystore.jks
- /opt/aspera/shares/u/stats-collector/etc/persistence.xml
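A minimal copy sketch, assuming root SSH access between the nodes is allowed, rsync is installed, and othernode stands in for the second Shares server's hostname (rsync -a preserves owner, group, and permissions when run as root):
# rsync -a /opt/aspera/shares/u/shares/config/aspera/secret.rb root@othernode:/opt/aspera/shares/u/shares/config/aspera/
# rsync -a /opt/aspera/shares/u/shares/config/initializers/secret_token.rb root@othernode:/opt/aspera/shares/u/shares/config/initializers/
# rsync -a /opt/aspera/shares/u/stats-collector/etc/keystore.jks root@othernode:/opt/aspera/shares/u/stats-collector/etc/
# rsync -a /opt/aspera/shares/u/stats-collector/etc/persistence.xml root@othernode:/opt/aspera/shares/u/stats-collector/etc/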
The following instructions refer to the example mount points below:
- Shared MySQL directory: /mysql_data
- Shared Shares files directory: /shares
- Shared ACM files directory: /acm_files
-
Move the MySQL data files onto the shared volume.
-
Back up the MySQL data, create a symlink to the mount point, and change the owner and group.
# cd /opt/aspera/shares/var
# mv mysql ./mysql_bak
# ln -s /mysql_data ./mysql
# chown -h nobody.root ./mysql
-
Check the permissions.
# ls -lah /opt/aspera/shares/var
drwxr-xr-x 7 root   root 4096 Dec 19 10:01 log
lrwxrwxrwx 1 nobody root   11 Dec 19 15:14 mysql -> /mysql_data
drwx------ 5 nobody root 4096 Dec 19 15:12 mysql_bak
...
-
On the first node, copy the database files into the shared volume:
# cp -Rp /opt/aspera/shares/var/mysql_bak/* /opt/aspera/shares/var/mysql
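After the copy completes, you can confirm that the data files on the shared volume kept their original ownership and permissions:
# ls -la /opt/aspera/shares/var/mysql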
Install and Configure ACM
-
Create the following symbolic links on both nodes:
# ln -s /acm_files/acm /opt/aspera/acm
# cd /opt/aspera/shares/u/shares/config
# mv database.yml database.yml.orig
# ln -s /opt/aspera/acm/config/database.yml database.yml
# chown -h nobody.nobody database.yml
-
You may need to edit the acm file (/opt/aspera/acm/bin/acm) to set correct values for these variables:
MYSQLPW="mysql_password"
SYSLOG_FACILITY=local2
LOG_TO_FILE=0
LOG_TO_SYSLOG=1
CHECK_DEVICE_ID=1
Note: The mysql_password is the password you configured when you granted the nodes remote access to the MySQL database.
Note: The CHECK_DEVICE_ID variable defines whether ACM should verify the device ID of the storage volume where ACM is located. Because that device ID can change upon reboot with NFS volumes, you may want to set this variable to 0 to disable the verification; a failed verification could prevent ACM and Shares from running correctly.
-
Install ACM in the crontab on both nodes so that the system launches ACM
every minute.
Two parameters are passed to the acm command. The first parameter is the local IP address of the host. You can use the following command to list the IP addresses available on a system:
# ip addr | grep "inet"
The second parameter is the device number of the partition where the ACM files are stored. You can determine the correct value by using this command:
# stat -c "%d" /acm_files/acm
Edit the crontab and add the entry that launches ACM every minute:
# crontab -e
* * * * * /opt/aspera/acm/bin/acm local_ip_address device_number > /dev/null 2>&1
For example:
* * * * * /opt/aspera/acm/bin/acm 10.0.0.0 21 > /dev/null 2>&1
Once installed in the crontab, ACM starts running, elects an active node, and starts the services on each node according to its current status: active or passive.
-
Create a job to back up the Shares database with the acmctl command.
Aspera recommends regularly backing up the database. In the example crontab below, ACM performs a backup every day at 3:30 AM. Choose the interval depending on your requirements.
# crontab -e
* * * * * /opt/aspera/acm/bin/acm 10.0.71.21 20 > /dev/null 2>&1
30 3 * * * /opt/aspera/acm/bin/acmctl -b > /dev/null 2>&1
-
Create a job to reset asctl logs.
Each time the system launches ACM, ACM writes to the asctl logs. Since the asctl logs are not rotated, they can start to cause performance issues if the files grow too large. In the example crontab below, the system resets the asctl logs every Sunday at 3:45 AM. Choose the interval depending on your requirements.
# crontab -e
* * * * * /opt/aspera/acm/bin/acm 10.0.71.21 20 > /dev/null 2>&1
30 3 * * * /opt/aspera/acm/bin/acmctl -b > /dev/null 2>&1
45 3 * * 7 echo -n "" > /opt/aspera/common/asctl/log/asctl.log > /dev/null 2>&1
-
Run the acmctl command with the -s option on both nodes in order to verify
some basic ACM prerequisites:
# /opt/aspera/acm/bin/acmctl -s
ACM sanity check
----------------
Checking if the database.yml symbolic link exists  OK
Checking if the database.yml symbolic link points to the right location  OK
Checking if an entry for ACM seems to exist in the crontab  OK
Checking that all the Shares services are disabled in chkconfig  OK
Checking that SE Linux mode is not set to enforcing  OK
-
If the verification looks good, start ACM on all the nodes at once, using the
acmctl command with the -E option:
# /opt/aspera/acm/bin/acmctl -E
ACM is enabled globally
If the services are running properly and the load balancer is correctly configured, you should now be able to connect to the Shares web application using the URL pointing to the VIP.
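As a basic end-to-end check, you can request the Shares login page through the load balancer from a workstation; shares.example.com below is a placeholder for your VIP's DNS name.
# curl -kI https://shares.example.com/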