Partitioning Mongrel Processes between Faspex and Cargo

Partition Mongrel processes between Faspex web UI requests and IBM Aspera Cargo API requests to address performance issues.

When a large number of Cargo clients are attached to a Faspex cluster, the performance of the Faspex web UI can suffer because UI requests contend with Cargo clients polling the API for the same pool of Mongrel processes.

To avoid this, tune the Apache configuration so that two separate pools of Mongrel processes serve the Faspex web interface and the API.

Note: The examples in the instructions below demonstrate Mongrel partitioning for 15 Mongrel processes: 10 for Cargo and 5 for Faspex.
  1. Run the following asctl command to set the total number of Mongrel processes:
    # asctl faspex:mongrel_count number_of_mongrels
    For this example, the total is 15:
    # asctl faspex:mongrel_count 15
    Note: Running the mongrel_count command overwrites the faspex.apache.linux.conf configuration file and removes any modifications to it, including the changes described in the following steps.
    When prompted, choose not to restart Apache and Faspex yet.
  2. Open the Faspex Apache configuration file /opt/aspera/faspex/config/faspex.apache.linux.conf in a text editor and make the following changes:
    1. Add an empty <Proxy balancer://faspex_cargo_cluster> section above the existing <Proxy balancer://faspex_cluster> section.
      For example:
      ...
      #Proxy balancer section (create one for each ruby app cluster)
      <Proxy balancer://faspex_cargo_cluster>
      </Proxy>
      <Proxy balancer://faspex_cluster>
        BalancerMember http://127.0.0.1:3000
        BalancerMember http://127.0.0.1:3001
        BalancerMember http://127.0.0.1:3002
        BalancerMember http://127.0.0.1:3003
        BalancerMember http://127.0.0.1:3004
        BalancerMember http://127.0.0.1:3005
        BalancerMember http://127.0.0.1:3006
        BalancerMember http://127.0.0.1:3007
        BalancerMember http://127.0.0.1:3008
        BalancerMember http://127.0.0.1:3009
        BalancerMember http://127.0.0.1:3010
        BalancerMember http://127.0.0.1:3011
        BalancerMember http://127.0.0.1:3012
        BalancerMember http://127.0.0.1:3013
        BalancerMember http://127.0.0.1:3014
      </Proxy>
      ...
    2. Distribute the BalancerMember entries under <Proxy balancer://faspex_cluster> between the two sections. In this example, the first 10 entries (ports 3000-3009) move to <Proxy balancer://faspex_cargo_cluster>, with the Cargo feed path /aspera/faspex/inbox.atom appended to each, and the remaining 5 entries (ports 3010-3014) stay in <Proxy balancer://faspex_cluster> for the Faspex UI.
      For example:
      ...
      #Proxy balancer section (create one for each ruby app cluster)
      <Proxy balancer://faspex_cargo_cluster>
        BalancerMember http://127.0.0.1:3000/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3001/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3002/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3003/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3004/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3005/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3006/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3007/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3008/aspera/faspex/inbox.atom
        BalancerMember http://127.0.0.1:3009/aspera/faspex/inbox.atom
      </Proxy>
      <Proxy balancer://faspex_cluster>
        BalancerMember http://127.0.0.1:3010
        BalancerMember http://127.0.0.1:3011
        BalancerMember http://127.0.0.1:3012
        BalancerMember http://127.0.0.1:3013
        BalancerMember http://127.0.0.1:3014
      </Proxy>
      ...
    3. Add ProxyPass /aspera/faspex/inbox.atom balancer://faspex_cargo_cluster to the proxy request section, placing it before the existing ProxyPass /aspera/faspex line. Apache applies ProxyPass directives in the order they appear, so the more specific path must come first.
      ...
      # send the proxy request
      ProxyPass /aspera/faspex/inbox.atom balancer://faspex_cargo_cluster
      ProxyPass /aspera/faspex balancer://faspex_cluster 
      ...
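The hand-edits in step 2 follow a simple pattern, so they can be sketched as a small generator. This is an illustrative script, not an Aspera tool; it assumes, as in the examples above, that the Mongrel processes listen on consecutive ports starting at 3000, and the function name and defaults are hypothetical.

```python
# Sketch: generate the two <Proxy balancer> sections for a given
# cargo/faspex split of the Mongrel pool. Ports are assumed to be
# consecutive starting at base_port, matching the examples above.

def balancer_sections(total=15, cargo=10, base_port=3000):
    cargo_ports = range(base_port, base_port + cargo)          # e.g. 3000-3009
    faspex_ports = range(base_port + cargo, base_port + total) # e.g. 3010-3014
    lines = ["#Proxy balancer section (create one for each ruby app cluster)",
             "<Proxy balancer://faspex_cargo_cluster>"]
    # Cargo members point at the Cargo feed path.
    lines += [f"  BalancerMember http://127.0.0.1:{p}/aspera/faspex/inbox.atom"
              for p in cargo_ports]
    lines += ["</Proxy>", "<Proxy balancer://faspex_cluster>"]
    # Remaining members serve the Faspex web UI.
    lines += [f"  BalancerMember http://127.0.0.1:{p}" for p in faspex_ports]
    lines += ["</Proxy>"]
    return "\n".join(lines)

print(balancer_sections())
```

Adjusting the total and cargo arguments reproduces the section bodies for other splits; the output still needs to be pasted into faspex.apache.linux.conf by hand.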
  3. Restart the Apache and Faspex services:
    # asctl apache:restart
    # asctl faspex:restart
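Before (or after) restarting, you can sanity-check the edited configuration. The sketch below is an illustrative, naive line-based check, not part of Faspex: it counts the BalancerMember entries in each balancer section and confirms that the Cargo ProxyPass line precedes the general one, since Apache applies ProxyPass directives in order.

```python
# Sketch: sanity-check a faspex.apache.linux.conf for the partitioning above.
# Naive line-based parsing; sufficient for the simple sections shown here.
import re

def check_partition(conf_text):
    members = {}   # balancer cluster name -> number of BalancerMember lines
    current = None
    for raw in conf_text.splitlines():
        line = raw.strip()
        m = re.match(r"<Proxy balancer://(\w+)>", line)
        if m:
            current = m.group(1)
            members[current] = 0
        elif line == "</Proxy>":
            current = None
        elif line.startswith("BalancerMember") and current:
            members[current] += 1
    # Apache matches ProxyPass directives in order: the specific Cargo
    # path must appear before the general /aspera/faspex path.
    passes = [l.strip() for l in conf_text.splitlines()
              if l.strip().startswith("ProxyPass ")]
    cargo_first = bool(passes) and passes[0].startswith(
        "ProxyPass /aspera/faspex/inbox.atom")
    return members, cargo_first

# Minimal inline sample; in practice pass the contents of
# /opt/aspera/faspex/config/faspex.apache.linux.conf instead.
sample = """\
<Proxy balancer://faspex_cargo_cluster>
  BalancerMember http://127.0.0.1:3000/aspera/faspex/inbox.atom
</Proxy>
<Proxy balancer://faspex_cluster>
  BalancerMember http://127.0.0.1:3001
</Proxy>
ProxyPass /aspera/faspex/inbox.atom balancer://faspex_cargo_cluster
ProxyPass /aspera/faspex balancer://faspex_cluster
"""
print(check_partition(sample))
```

For the 15-process example in this topic, a correct file would report 10 members in faspex_cargo_cluster, 5 in faspex_cluster, and cargo_first as True.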