Troubleshooting
If you are experiencing performance or responsiveness issues on the transfer cluster, review the following configurations:
The recommended node instance size is m4.xlarge or larger. For maximum throughput of up to 3 Gbps, use a c3.8xlarge instance.
Set the following in the <trunks> section:
<trunks>
    <trunk>
        <id>70</id>
        <capacity>
            <schedule format="ranges">3000000</schedule>
        </capacity>
        <on>true</on>
    </trunk>
    <trunk>
        <id>80</id>
        <capacity>
            <schedule format="ranges">3000000</schedule>
        </capacity>
        <on>true</on>
    </trunk>
</trunks>
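As an illustrative sanity check (not part of the Aspera tooling), the trunk fragment above can be parsed with Python's standard library before restarting services; interpreting the `<schedule>` capacity as Kbps is an assumption consistent with the 3 Gbps throughput target:

```python
import xml.etree.ElementTree as ET

# The <trunks> fragment from above (values as recommended in this section).
TRUNKS_XML = """
<trunks>
    <trunk>
        <id>70</id>
        <capacity><schedule format="ranges">3000000</schedule></capacity>
        <on>true</on>
    </trunk>
    <trunk>
        <id>80</id>
        <capacity><schedule format="ranges">3000000</schedule></capacity>
        <on>true</on>
    </trunk>
</trunks>
"""

root = ET.fromstring(TRUNKS_XML)  # raises ParseError if not well-formed
for trunk in root.findall("trunk"):
    tid = trunk.findtext("id")
    cap = trunk.findtext("capacity/schedule")
    on = trunk.findtext("on")
    # 3,000,000 Kbps = 3 Gbps, matching the c3.8xlarge throughput target.
    assert cap == "3000000" and on == "true", f"trunk {tid} misconfigured"
    print(f"trunk {tid}: capacity={cap} Kbps, enabled={on}")
```

The same parse step catches an unclosed tag or a typo in the fragment before it is merged into the live configuration.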
Set the following in the <transfer> section:
<transfer>
    ...
    <in>
        <bandwidth>
            <aggregate>
                <trunk_id>70</trunk_id>
            </aggregate>
            <flow>
                <min_rate>
                    <lock>true</lock>
                    <cap>0</cap>
                </min_rate>
                <policy>
                    <default>fair</default>
                    <allowed>fair</allowed>
                </policy>
                <target_rate>
                    <default>700000</default>
                    <cap>3000000000</cap>
                </target_rate>
            </flow>
        </bandwidth>
    </in>
    <out>
        <bandwidth>
            <aggregate>
                <trunk_id>80</trunk_id>
            </aggregate>
            <flow>
                <min_rate>
                    <lock>true</lock>
                    <cap>0</cap>
                </min_rate>
                <policy>
                    <default>fair</default>
                    <allowed>fair</allowed>
                </policy>
                <target_rate>
                    <default>700000</default>
                    <cap>3000000000</cap>
                </target_rate>
            </flow>
        </bandwidth>
    </out>
</transfer>
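The `<trunk_id>` values under `<transfer>` must match `<id>` values defined in `<trunks>` (70 for inbound, 80 for outbound). A small stdlib-Python sketch of that cross-check, over minimal excerpts of the two sections (illustrative only, not an Aspera utility):

```python
import xml.etree.ElementTree as ET

# Minimal excerpts of the two sections above: the trunk definitions and the
# <transfer> bandwidth aggregates that reference them by id.
CONF = """
<CONF>
  <trunks>
    <trunk><id>70</id><on>true</on></trunk>
    <trunk><id>80</id><on>true</on></trunk>
  </trunks>
  <transfer>
    <in><bandwidth><aggregate><trunk_id>70</trunk_id></aggregate></bandwidth></in>
    <out><bandwidth><aggregate><trunk_id>80</trunk_id></aggregate></bandwidth></out>
  </transfer>
</CONF>
"""

root = ET.fromstring(CONF)
defined = {t.findtext("id") for t in root.findall("trunks/trunk")}
referenced = {e.text for e in root.iter("trunk_id")}
# Every trunk_id used under <transfer> must have a matching <trunk> definition.
missing = referenced - defined
assert not missing, f"undefined trunk ids referenced: {missing}"
print("trunk references OK:", sorted(referenced))
```

A `<trunk_id>` that points at a trunk with no definition would leave that direction without the intended bandwidth cap, so this mismatch is worth ruling out first.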
Add the <max_mem> section in <scalekv> (may be called <scaledb> in older versions):
<scalekv>
    <sstore>
        <type>redis</type>
        <host>state_store_host</host>
        <port>state_store_port</port>
    </sstore>
    <baseport>43001</baseport>
    <max_mem>20000000000</max_mem> <!-- scalekv_max_mem: in bytes -->
</scalekv>
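Since `<max_mem>` is specified in bytes, it is easy to misplace a zero. A quick stdlib-Python check of the fragment above confirms the value works out to 20 GB (decimal); the host and port shown are placeholders:

```python
import xml.etree.ElementTree as ET

# The <scalekv> fragment from above; state_store_host/port are placeholders.
SCALEKV_XML = """
<scalekv>
    <sstore>
        <type>redis</type>
        <host>state_store_host</host>
        <port>state_store_port</port>
    </sstore>
    <baseport>43001</baseport>
    <max_mem>20000000000</max_mem>
</scalekv>
"""

root = ET.fromstring(SCALEKV_XML)
max_mem = int(root.findtext("max_mem"))
# max_mem is in bytes: 20,000,000,000 B = 20 GB (decimal).
print(f"scalekv max_mem = {max_mem / 1e9:g} GB, store = {root.findtext('sstore/type')}")
```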
Set the Aspera trapd memory via firstboot. This is recommended when using c3.8xlarge images, to prevent trapd from consuming more memory than required.
# sh -c "sed -i 's/#system.buffer-pool.memory=-1/system.buffer-pool.memory=30GB/' /opt/aspera/etc/trapd/trap.properties"
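The `sed` command above uncomments the buffer-pool setting in `trap.properties` and pins it to 30GB. A minimal Python illustration of the same substitution applied to that line (for clarity only; on the node itself, run the `sed` command):

```python
# The substitution the sed command performs on trap.properties:
# uncomment the buffer-pool line and set it to 30GB.
before = "#system.buffer-pool.memory=-1"
after = before.replace("#system.buffer-pool.memory=-1",
                       "system.buffer-pool.memory=30GB")
print(after)  # -> system.buffer-pool.memory=30GB
```

After running the real command, the edited line in /opt/aspera/etc/trapd/trap.properties should read exactly as the output above.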