Configuring Multi-Session Transfers

The Aspera transfer server products Enterprise Server and Connect Server can achieve significant performance improvements on multi-node and multi-core systems by using multi-session transfers (also known as parallel transfers or multi-part transfers).

To enable multi-session transfers, run ascp with the option -C nid:ncount, where nid is the node (session) ID and ncount is the total number of nodes or cores. Each session (or invocation) must use a unique ID, from 1 through ncount, and must be assigned its own UDP port.

You can also allow individual files to be split between multiple sessions by adding the companion option --multi-session-threshold=threshold. The threshold value specifies, in bytes, the smallest file that can be split: files equal to or greater than the threshold are split, while files smaller than the threshold are not.

A default value for the threshold can be specified in the aspera.conf file by setting <multi-session_threshold_default> in the <default> section. Setting it to 0 (zero) indicates that files should not be split. The command-line setting overrides the aspera.conf setting. If the client's aspera.conf does not specify a default value for the threshold, then the default is taken from the server's aspera.conf (if specified).
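As an illustration, a client-side default of 5 MB could be set as follows. This is a fragment only; the 5242880 value is an example, not a required setting:

```xml
<CONF version="2">
    <default>
        <!-- Split files of 5 MB (5242880 bytes) or larger; 0 disables splitting -->
        <multi-session_threshold_default>5242880</multi-session_threshold_default>
    </default>
</CONF>
```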

If neither --multi-session-threshold nor <multi-session_threshold_default> is specified, then no files are split.
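The precedence rules above can be summarized in a short sketch. The function and parameter names here are illustrative, not Aspera internals:

```python
# Illustrative sketch of how the effective multi-session threshold is
# resolved (hypothetical helper, not actual Aspera code).
# Precedence: --multi-session-threshold on the command line, then the
# client's aspera.conf default, then the server's aspera.conf default.
# No value anywhere (or a value of 0) means files are never split.

def effective_threshold(cli_value=None, client_conf=None, server_conf=None):
    """Return the threshold in bytes, or 0 when no splitting applies."""
    for value in (cli_value, client_conf, server_conf):
        if value is not None:
            return value
    return 0  # nothing specified anywhere: no file splitting

# The command-line setting wins over both configuration files:
print(effective_threshold(cli_value=5242880, client_conf=1048576))  # 5242880
# With nothing set, the result is 0 (no splitting):
print(effective_threshold())  # 0
```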

Using Multi-Session to Transfer Between Nodes

The following example shows a multi-session transfer on a dual-core system; the two sessions together can transfer at up to 200 Mbps (100 Mbps each). Each command uses a different UDP port, and each is run from a different terminal window. Because no multi-session threshold is specified on the command line or in aspera.conf, no file splitting occurs.

  ascp -C 1:2 -O 33001 -l 100m /dir01
  ascp -C 2:2 -O 33002 -l 100m /dir01

Assuming there are multiple files in dir01, ascp distributes the files between the two sessions to achieve the most efficient aggregate throughput. If there is only one file in dir01, only one of the commands actually transfers it.
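The distribution described above can be sketched as follows. This greedy least-loaded assignment is only an illustration of the idea, not Aspera's actual scheduling algorithm:

```python
# Hypothetical sketch: without file splitting, whole files are spread
# across the parallel sessions, e.g. by always assigning the next
# (largest) file to the least-loaded session.

def distribute(file_sizes, ncount):
    """Assign whole files (by size) to ncount sessions."""
    sessions = [[] for _ in range(ncount)]
    loads = [0] * ncount
    for size in sorted(file_sizes, reverse=True):
        i = loads.index(min(loads))   # least-loaded session so far
        sessions[i].append(size)
        loads[i] += size
    return sessions

# Two sessions (-C 1:2 / -C 2:2) sharing four files:
print(distribute([700, 400, 300, 200], 2))  # [[700, 200], [400, 300]]
# A single file goes to only one session:
print(distribute([800], 2))                 # [[800], []]
```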

In the following example, the multi-session threshold is added to the command line from the example above. This enables file splitting and specifies the threshold size, which is the minimum-size file that can be split.

  ascp -C 1:2 -O 33001 -l 100m --multi-session-threshold=5242880 /dir01
  ascp -C 2:2 -O 33002 -l 100m --multi-session-threshold=5242880 /dir01

In this case, if there are multiple files in dir01, files smaller than 5 MB are distributed whole between the two sessions, while files 5 MB or larger are split across them to further level the distribution. If there is only one file in dir01, it is split if it is 5 MB or larger; otherwise, the entire file is transferred by only one of the commands.
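As an illustration of how splitting might divide a file among sessions, consider the following sketch. The byte-range math here is hypothetical, not ascp's internal algorithm; session IDs follow the -C nid:ncount convention:

```python
# Hedged sketch: a file at or above the threshold is divided into one
# byte range per session; a smaller file is sent whole by one session.

THRESHOLD = 5242880  # bytes, as in --multi-session-threshold=5242880

def session_range(file_size, nid, ncount, threshold=THRESHOLD):
    """Byte range [start, end) handled by session nid of ncount."""
    if file_size < threshold:
        # Too small to split: one session sends the whole file.
        return (0, file_size) if nid == 1 else None
    part = file_size // ncount
    start = (nid - 1) * part
    end = file_size if nid == ncount else nid * part
    return (start, end)

# A 10 MB file split across -C 1:2 and -C 2:2:
print(session_range(10485760, 1, 2))  # (0, 5242880)
print(session_range(10485760, 2, 2))  # (5242880, 10485760)
# A 1 MB file is below the threshold and is not split:
print(session_range(1048576, 2, 2))   # None
```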

Using Multi-Session to Transfer to an Aspera Transfer Cluster

Note: For cloud transfers, file-splitting is currently supported for AWS S3 only.

For transfers to cloud storage, file splitting in multi-session transfers works differently than in regular (non-cloud) multi-session transfers. For cloud transfers, files are sent in chunks, and the chunk size is specified by <chunk_size> in aspera.conf:

    . . .

For cloud storage, file splitting must respect a minimum split size, which for cloud storage is a part. The part size must be set to the same value as the ascp chunk size so that each ascp session delivers full parts. However, a file that would otherwise be split (because it is larger than the multi-session threshold) is not split if it is smaller than the chunk/part size. Set the chunk size and part size as follows:
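The rule above can be sketched as follows. This is illustrative logic, not Aspera source code; the constants are the example values used in this section:

```python
# Sketch of the cloud-storage splitting rule: a file is split only if
# it is at or above the multi-session threshold AND at least one
# chunk/part in size, so that every session delivers whole parts.

CHUNK_SIZE = 67108864   # 64 MB, matching <chunk_size> in aspera.conf
THRESHOLD = 5242880     # 5 MB multi-session threshold

def will_split(file_size, threshold=THRESHOLD, chunk_size=CHUNK_SIZE):
    return file_size >= threshold and file_size >= chunk_size

print(will_split(10485760))   # False: above threshold, below one part
print(will_split(134217728))  # True: 128 MB covers two full 64 MB parts
```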

  1. In aspera.conf set the chunk size to some value greater than 5 MB; for example:
    <chunk_size>67108864</chunk_size>   <!-- 64 MB -->
  2. In /opt/aspera/etc/trapd/
    • Set the upload part size (default 64 MB) to the same value as the chunk size.
    • Use a ONE_TO_ONE gathering policy:
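As a sketch only, the corresponding trapd settings might look like the following. The property keys shown are assumptions based on typical Aspera trapd S3 configurations; verify the file name and exact keys against your installation:

```properties
# Example values only; confirm property names in your Aspera release.
aspera.transfer.upload.part-size=64MB
aspera.transfer.gathering-policy=ONE_TO_ONE
```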

The following example uploads an 80 GB file into AWS S3 by means of a multi-session transfer to an Aspera Transfer Cluster. The setup for this example consists of one 10 Gbps system and a 20-node Aspera Transfer Cluster (ATC).

Configuring the Aspera Transfer Cluster

  1. Log into the Aspera Transfer Cluster Manager (ATCM) and check that the following exists in the cluster's Transfer Configuration:

    To show the cluster's transfer configuration, select the cluster (in this case, jmoore-se-demo-cluster), click the Action button, and select Edit Transfer Configuration:

  2. From the Action drop-down menu, select Edit Auto Scaling Policy. Configure the cluster for at least 20 static nodes by setting Max Nodes and Min Available Nodes to 20 as shown below. Also ensure that Max Start Frequency Count is greater than or equal to the values for Max Nodes and Min Available Nodes.

Configuring the Aspera Client Transfer System

  1. Configure Aspera Enterprise Server or Connect Server as in the following example aspera.conf file:
    <?xml version='1.0' encoding='UTF-8'?>
    <CONF version="2">
  2. Create a JSON transfer request file, ms-send-80g.json, containing the following:
        "transfer": {
            "remote_host": "",
            "remote_user": "xfer",
            "token": "Basic QVVrZ3VobUNsdjBsdjNoYXAxWnk6dXI0VGZXNW5rNldBVW1zSm5FRzFVZWFvUXFTRUtLd3JmanhvNEZIQnFZT2U=",
            "target_rate_kbps": 700000,
            "multipart": 75,
            "paths": [
                    "source": "/80g-file"
            "ssh_port": 33001,
            "fasp_port": 33001,
            "direction": "send",
            "overwrite" : "always",
            "cookie": "multi-session upload"
  3. Initiate the transfer through the Node API with an HTTP POST of the JSON transfer request using a curl command as follows:
    curl -k -v -X POST -d @ms-send-80g.json https://ak_data:aspera@localhost:9092/transfers 
  4. Monitor transfer progress, bandwidth utilization, and the distribution of transfers for each cluster node.
    On UNIX/Linux systems, you can view bandwidth utilization from a terminal by running nload on the client system with the following command:
    nload -u g -o 10000000 ens1f0
    The nload report below shows bandwidth utilization at 9+ Gbps:

    In the ATCM UI, selecting Monitor Nodes from the Action drop-down menu shows the transfer distribution and utilization for each of the 20 nodes in the cluster: