Transfers with IBM Aspera On Demand and Cloud-Based HST Servers

Transfers to Aspera on Demand and cloud-based HST Servers require authorization credentials for the object storage, but are otherwise the same as transfers to an on-premises HST Server.

Provide object storage credentials in one of the following ways:

  • Specify the storage password or secret key in the transfer user's docroot. (Preferred method)
  • Set the storage password or secret key as an environment variable.
  • Specify the storage password or secret key in the command line.

With Docroot Configured: Authenticate in the Docroot

If your transfer user account has a docroot set that includes credentials, or if credentials are configured in the .properties file, ascp transfers to and from Alibaba Cloud, Amazon S3, IBM COS - S3, Google Cloud Storage, Akamai, SoftLayer, Azure, and HDFS are the same as regular ascp transfers.

For instructions on configuring a docroot for these types of storage, see IBM Aspera High-Speed Transfer Server Admin Guide (Linux): Docroot Path Formatting for Cloud, Object, and HDFS Storage.
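For example, an S3 docroot that embeds the storage credentials can be set for a transfer user with asconfigurator. This is a sketch only: the user name, bucket, access ID, and secret key are placeholders, and the exact docroot format for each storage type is described in the guide referenced above.

# asconfigurator -x "set_user_data;user_name,username;absolute,s3://access_id:secret_key@s3.amazonaws.com/my_bucket"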

For command syntax examples, see Ascp General Examples. You are prompted for the transfer user's password when you run an ascp command unless you set the ASPERA_SCP_PASS environment variable or use SSH key authorization.
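For example, assuming a docroot is already configured and using placeholder names and paths, you can export ASPERA_SCP_PASS so that ascp does not prompt for the transfer user's password:

# export ASPERA_SCP_PASS=transfer_user_password
# ascp --mode=send --user=username --host=server_address bigfile.txt /uploads

Alternatively, authenticate with an SSH private key by passing it to ascp with the -i option:

# ascp -i /path/to/private_key --mode=send --user=username --host=server_address bigfile.txt /uploads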

With No Docroot Configured: Authenticate with Environment Variables

Note: The ASPERA_DEST_PASS variable is not applicable to Google Cloud Storage or Amazon S3 using IAM roles.

Set an environment variable (ASPERA_DEST_PASS) with the storage password or secret key:

# export ASPERA_DEST_PASS=secret_key

With ASPERA_DEST_PASS and ASPERA_SCP_PASS set, run ascp with the syntax listed in the table for transfers with no docroot configured, except that you do not need to include the storage password or secret key in the destination path and you are not prompted for the Aspera password when you run ascp.
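For example, for an upload to Amazon S3 (a minimal sketch only: the user, host, access ID, and bucket are placeholders, and it assumes the access ID still appears in the destination URI while the secret key is taken from ASPERA_DEST_PASS):

# export ASPERA_SCP_PASS=transfer_user_password
# export ASPERA_DEST_PASS=secret_key
# ascp --mode=send --user=username --host=server_address bigfile.txt s3://access_id@s3.amazonaws.com/my_bucket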

With No Docroot Configured: Authenticate in the Command Line

If you do not have a docroot configured and do not set an environment variable (described previously), authenticate in the command line. In the following examples, the storage password or secret key are included as part of the destination path. You are prompted for the transfer user's password upon running ascp unless you set the ASPERA_SCP_PASS environment variable or use SSH key authorization.

ascp Syntax and Examples by Storage Platform

Alibaba Cloud

Aspera recommends running ascp transfers with Alibaba Cloud with a docroot configured.

Amazon S3

  • If you are using IAM roles, you do not need to specify the access ID or secret key for your S3 storage.

Upload syntax:
# ascp options --mode=send --user=username --host=s3_server_addr source_files s3://access_id:secret_key@s3.amazonaws.com/my_bucket

Upload example:

# ascp --mode=send --user=bear --host=s3.asperasoft.com bigfile.txt s3://1K3C18FBWF9902:GEyU...AqXuxtTVHWtc@s3.amazonaws.com/demos2014

Download syntax:

# ascp options --mode=recv --user=username --host=s3_server_addr s3://access_id:secret_key@s3.amazonaws.com/my_bucket/my_source_path destination_path

Download example:

# ascp --mode=recv --user=bear --host=s3.asperasoft.com s3://1K3C18FBWF9902:GEyU...AqXuxtTVHWtc@s3.amazonaws.com/demos2014/bigfile.txt /tmp/
Azure

These examples are for Azure blob storage. For Azure Files, use the syntax azure-files://storage_account:storage_access_key@file.core.windows.net/share (see the sketch after the download example below). Aspera recommends running ascp transfers with Azure Data Lake Storage with a docroot configured.

Upload syntax:

# ascp options --mode=send --user=username --host=server_address source_files azu://storage_account:storage_access_key@blob.core.windows.net/path_to_blob

Upload example:

# ascp --mode=send --user=AS037d8eda429737d6 --host=dev920350144d2.azure.asperaondemand.com bigfile.txt azu://astransfer:zNfMtU...nBTkhB@blob.core.windows.net/abc  

Download syntax:

# ascp options --mode=recv --user=username --host=server azu://storage_account:storage_access_key@blob.core.windows.net/path_to_blob/source_file destination_path

Download example:

# ascp --mode=recv --user=AS037d8eda429737d6 --host=dev920350144d2.azure.asperaondemand.com azu://astransfer:zNfMtU...nBTkhB@blob.core.windows.net/abc /downloads
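As a sketch of the Azure Files variant noted above (storage account, access key, share, user, and host are placeholders), an upload combines the same ascp options with the azure-files:// scheme:

# ascp --mode=send --user=username --host=server_address bigfile.txt azure-files://storage_account:storage_access_key@file.core.windows.net/share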
Google Cloud Storage

Note: The examples below require that the VMI running the Aspera server is a Google Compute instance.

Upload syntax:

# ascp options --mode=send --user=username --host=server_address source_files gs:///my_bucket/my_path

Upload example:

# ascp --mode=send --user=bear --host=10.0.0.5 bigfile.txt gs:///2017_transfers/data  

Download syntax:

# ascp options --mode=recv --user=username --host=server gs:///my_bucket/my_path/source_file destination_path

Download example:

# ascp --mode=recv --user=bear --host=10.0.0.5 gs:///2017_transfers/data/bigfile.txt /data
HDFS

Aspera recommends running ascp transfers with HDFS with a docroot configured.

IBM COS - S3

Upload syntax:
# ascp options --mode=send --user=username --host=server_address source_files s3://access_id:secret_key@accessor_endpoint/vault_name

Upload example:

# ascp --mode=send --user=bear --host=s3.asperasoft.com bigfile.txt s3://3ITI3OIUFEH233:KrcEW...AIuwQ@38.123.76.24/demo2017

Download syntax:

# ascp options --mode=recv --user=username --host=server_address s3://access_id:secret_key@accessor_endpoint/vault_name/source_files destination_path

Download example:

# ascp --mode=recv --user=bear --host=s3.asperasoft.com s3://3ITI3OIUFEH233:KrcEW...AIuwQ@38.123.76.24/demo2017/bigfile.txt /tmp/