Resource Groups based Dynamic Cluster Sizing in Presto

Qubole has introduced dynamic sizing of Presto clusters based on resource groups. Users are assigned to Presto resource groups, and each resource group has a configurable limit on the maximum number of nodes up to which it can independently scale the cluster. The maximum cluster size is calculated dynamically from the active resource groups and their scaling limits.

For information about autoscaling in Presto, see Autoscaling in Presto Clusters.

If there is no limit on how far a single user can autoscale the cluster, then:

  • A single user can autoscale a cluster to its maximum cluster size

    • The maximum cluster size is often configured with overall user concurrency in mind

  • Admins create multiple clusters with different maximum sizes for different groups of users, and as a result:

    • Managing multiple clusters becomes an exhausting task because configuration changes, such as network settings and new Presto features, must be applied to every cluster, followed by a rolling deploy.

    • Cost increases because each cluster runs at a minimum size, so the minimum nodes of every cluster are paid for even when the cluster is idle.

Dynamic sizing of clusters resolves the issues mentioned above. The feature is supported in Presto 0.208 and later versions.

New Resource Group Property for User Scaling Limit

maxNodeLimit: It denotes the maximum number of nodes this group can request from the cluster. It can be specified as an absolute number of nodes (for example, 10) or as a percentage (for example, 10%) of the cluster's maximum size. It defaults to 20% and is an optional configuration.
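For illustration, the two forms could look as follows in a rootGroups entry; the group names adhoc and etl here are hypothetical:

"rootGroups": [
  { "name": "adhoc", "maxNodeLimit": "30%" },
  { "name": "etl", "maxNodeLimit": "4" }
]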

Configuration Properties for Dynamic Scaling Limits

These are the configuration properties for dynamic scaling limits that you can set under etc/resource-groups.properties.

  • resource-groups.user-scaling-limits-enabled: It is a boolean value that enables resource groups based autoscaling. It defaults to false.

  • resource-groups.active-user-buffer-period: It is the time period for which a resource group is considered active after its last query finishes. You must specify it as a duration (for example, 5s). It defaults to 10 minutes (10m).
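For example, to enable the feature and treat users as active for 5 minutes after their last query finishes (an illustrative value, not a recommendation), the entries in etc/resource-groups.properties would be:

resource-groups.user-scaling-limits-enabled=true
resource-groups.active-user-buffer-period=5m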

With the dynamic scaling limits feature, an active user can scale the cluster up to only a certain limit. The maximum cluster size is then derived from the currently active users. When users from multiple resource groups are active, the maximum number of nodes that the cluster can autoscale to is the sum of the individual maximum node limits, as the sketch below illustrates. The maximum cluster size never exceeds the Maximum Worker Nodes value in the cluster settings.
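The following short Python sketch illustrates this derivation. It is not Qubole or Presto code; the function names and inputs are assumptions made for the example:

# Illustrative sketch of deriving the effective maximum cluster size
# from the active resource groups. Not Qubole/Presto source code.

def parse_limit(limit, max_worker_nodes):
    """Resolve a maxNodeLimit value ('10' or '10%') to an absolute node count."""
    if isinstance(limit, str) and limit.endswith("%"):
        return max_worker_nodes * int(limit[:-1]) // 100
    return int(limit)

def effective_max_size(active_group_limits, max_worker_nodes):
    """Sum the limits of active groups, capped at Maximum Worker Nodes."""
    requested = sum(parse_limit(l, max_worker_nodes) for l in active_group_limits)
    return min(requested, max_worker_nodes)

# Cluster with Maximum Worker Nodes = 10; each active user's group allows 20%.
print(effective_max_size(["20%"] * 2, 10))  # 4  -> 2 nodes each from 2 users
print(effective_max_size(["20%"] * 6, 10))  # 10 -> combined 12, capped at 10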

Enable User Limits on Autoscaling through Resource Groups

To enable user limits on autoscaling through resource groups at the account level, create a ticket with Qubole Support.

To enable user limits on autoscaling through resource groups at the cluster level, add the following in the Presto cluster Overrides:

resource-groups.properties:
resource-groups.configuration-manager=file
resource-groups.config-file=etc/default_resource_groups.json
resource-groups.user-scaling-limits-enabled=true

Once the feature is enabled, the resource groups defined in the default_resource_groups.json file are used. The default resource groups JSON file is:

{
  "rootGroups": [
    {
      "name": "${USER}",
      "maxNodeLimit": "20%"
    }
  ],
  "selectors": [
    {
      "group": "${USER}"
    }
  ],
  "cpuQuotaPeriod": "1h"
}

Analyzing the default JSON File

According to the default JSON file, every new user is assigned to a new resource group. For example, user-1 is assigned to a resource group named user-1, generated by expanding the resource group template ${USER}; each such group has a maximum node limit of 20%.
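For instance, once user-1 runs a query, the template effectively expands to a per-user group equivalent to the following (an illustrative expansion, not literal file contents):

{
  "name": "user-1",
  "maxNodeLimit": "20%"
}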

Consider a cluster with Maximum Worker Nodes = 10 (from the cluster settings).

Note

For additional examples, see Resource Groups Autoscaling Examples.

In such a cluster configuration, the impact of the number of active users on the maximum possible cluster size is captured in the following table (where U1 = User 1, U2 = User 2, and so on).

Active Users              Possible Maximum Size   Description
U1                        2                       20% of 10 nodes
U1, U2                    4                       2 nodes each from U1 and U2
U1, U2, U3                6                       2 nodes each from U1, U2, and U3
U1, U2, U3, U4, U5, U6    10                      The combined potential size is 12, but it is limited by the maximum cluster size of 10

Recommendations for Consolidating Multiple Clusters

These are the recommendations:

  • Admins should apply this configuration in steps; that is, an admin should consolidate a few clusters at a time instead of configuring everything at once.

  • Admins should start by replicating the multi-cluster setup in resource groups, as illustrated in the example above and sketched in the JSON sample after this list. This is the least disruptive option for end users.

  • An admin must select a larger coordinator node for the consolidated cluster, as it handles the load that was previously spread across the coordinator nodes of the multi-cluster setup.
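As a starting point for such a replication, a consolidated cluster's resource groups file could mirror the former per-team clusters. The group names, user regexes, and limits below are hypothetical:

{
  "rootGroups": [
    { "name": "analysts", "maxNodeLimit": "30%" },
    { "name": "etl", "maxNodeLimit": "50%" }
  ],
  "selectors": [
    { "user": "analyst-.*", "group": "analysts" },
    { "user": "etl-.*", "group": "etl" }
  ],
  "cpuQuotaPeriod": "1h"
}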