Edit a Cluster Configuration
- PUT /api/v1.3/clusters/<cluster Id> or <cluster label>
Edit one or more attributes of an existing cluster using this API.
Most attribute changes take effect when the cluster is restarted. However, label changes take effect immediately, and new commands on those labels run on the relabeled cluster.
QDS supports defining account-level default cluster tags through the UI and plans to provide API support shortly. For more information, see Adding Account and User level Default Cluster Tags (AWS).
Note
You can now enable HiveServer2 through a Hadoop 2 request API call with additional settings as described in engine_config for Enabling HiveServer2 on a Hadoop 2 (Hive) Cluster. For details on configuring multi-instance HS2 through REST API, see Choosing Multi-instance as an option for running HiveServer2 on Hadoop (Hive) Clusters.
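For illustration, a request that enables HiveServer2 through the edit API might look like the following sketch. The thrift port shown is only an assumption; the exact hive_settings fields for your QDS version are described in the pages referenced above (the is_hs2 and hs2_thrift_port keys also appear in the sample responses later in this page).
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "engine_config": {
        "type": "hadoop2",
        "hive_settings": {
          "is_hs2": true,
          "hs2_thrift_port": 10003
        }
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>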
Required Role
The following users can make this API call:
Users who belong to the system-admin group.
Users who belong to a group associated with a role that allows editing a cluster. See Managing Groups and Managing Roles for more information.
Parameters
Note
Parameters marked in bold below are mandatory. Others are optional and have default values. For more information on parameters supported by an Airflow cluster, see Create a New Airflow Cluster. Presto is not currently supported on all Cloud platforms; see QDS Components: Supported Versions and Cloud Platforms.
Parameter |
Description |
---|---|
label |
A list of labels that identify the cluster. At least one label must be provided when creating a cluster. |
presto_version |
It is mandatory and only applicable to a Presto cluster. The supported values are:
|
spark_version |
It is mandatory and only applicable to a Spark cluster. The supported values are: |
zeppelin_interpreter_mode |
This parameter is only applicable to the Spark cluster. The default mode is |
ec2_settings |
Amazon EC2 settings. The default values are used if the settings are not configured. |
node_configuration |
Cluster node instance types and other settings. |
hadoop_settings |
Hadoop cluster settings, including the configuration that enables Spark on the cluster. |
security_settings |
Instance security settings. |
presto_settings |
Presto cluster settings. |
spark_settings |
Spark cluster settings. |
datadog_settings |
Datadog cloud monitoring settings. Qubole supports the Datadog cloud monitoring service on Hadoop 2 (Hive) clusters. |
disallow_cluster_termination |
Prevents auto-termination of the cluster after a prolonged period of disuse. The default value is false. |
enable_ganglia_monitoring |
Enable Ganglia monitoring for the cluster. The default value is false. |
node_bootstrap_file |
A file that gets executed on every node of the cluster at boot time. You can use this to customize your cluster nodes by setting up environment variables, installing required packages, and so on. The default value is node_bootstrap.sh. |
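As a quick illustration, a minimal edit request that touches only the top-level parameters above might look like this sketch (the cluster ID and label value are placeholders):
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "label": ["hadoop2-prod"],
      "disallow_cluster_termination": true,
      "node_bootstrap_file": "node_bootstrap.sh"
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>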
ec2_settings
Parameter |
Description |
---|---|
compute_access_key |
The EC2 Access Key (Note: This field is not visible to non-admin users.) |
compute_secret_key |
The EC2 Secret Key (Note: This field is not visible to non-admin users.) |
aws_region |
The AWS region to create the cluster in.
The default value is us-east-1. |
aws_preferred_availability_zone |
The preferred availability zone (AZ) in which the cluster must be created. The default value is Any. |
vpc_id |
The ID of the Virtual Private Cloud (VPC) in which the cluster is created.
In this VPC, the |
subnet_id |
The ID of the subnet in which the cluster is created. The subnet must belong to the above VPC, and it can be a public or private subnet. Qubole supports multiple subnets; specify them as a comma-separated list, for example <subnet-ID1>,<subnet-ID2> (see the example of editing the EC2 settings below). |
master_elastic_ip |
It is the Elastic IP address for attaching to the cluster coordinator. For more information, see this documentation. |
bastion_node_public_dns |
Specify the Bastion host public DNS name if a private subnet is provided for the cluster in a VPC. Do not specify this value for a public subnet. |
bastion_node_port |
It is the port of the Bastion node. The default value is 22. You can specify a non-default port if you want to access the cluster that is in a VPC with a private subnet. |
bastion_node_user |
It is the Bastion node user, which is ec2-user by default. You can specify a non-default user using this option. |
role_instance_profile |
It is a user-defined IAM Role name that you can use in a dual-IAM role configuration. This Role overrides the account-level IAM Role; only you (and not even Qubole) can access this IAM Role, so it provides additional security. For more information, see Creating Dual IAM Roles for your Account. |
use_account_compute_creds |
Set it to |
instance_tenancy |
QDS provides instance tenancy at the cluster level, and only in a VPC. The available tenancy options are: |
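For example, a sketch of an edit that changes only the EC2 settings could look like this; the region, zone, and IDs below are placeholders (the same subnet format appears in the example of editing the EC2 settings later in this page):
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "ec2_settings": {
        "aws_region": "us-east-1",
        "aws_preferred_availability_zone": "Any",
        "vpc_id": "<vpc-id>",
        "subnet_id": "<subnet-ID1>,<subnet-ID2>",
        "use_account_compute_creds": true
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>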
node_configuration
Note
For parameters associated with resizing a cluster, see Resize a Cluster.
Parameter |
Description |
---|---|
master_instance_type |
The instance type to use for a cluster coordinator node. The default value is |
slave_instance_type |
The instance type to use for cluster worker nodes. The default value is |
Qubole supports configuring heterogeneous nodes in Hadoop 2 and Spark clusters, which means that worker nodes can be of different instance types. For more information, see heterogeneous_instance_config and An Overview of Heterogeneous Nodes in Clusters. |
|
initial_nodes |
The number of nodes to start the cluster with. Note: You can push the number of nodes into a running cluster as described in Resize a Cluster through API. You can also push the number of nodes through the UI as described in How to Push Configuration Changes to a Cluster. |
max_nodes |
The maximum number of nodes up to which the cluster can be autoscaled. The default value is |
slave_request_type |
The request type for the autoscaled worker instances. Note: The feature to set Spot Blocks as autoscaling nodes, even when the coordinator node and minimum worker nodes are On-Demand nodes, is available as a beta and is only applicable to Hadoop 2 (Hive) clusters. Create a ticket with Qubole Support to enable it on the account. For more information, see Configuring Spot Blocks. |
The purchase options for autoscaling worker Spot instances; these are not applicable to the minimum number of nodes, which is |
|
Purchases both coordinator node(s) and worker node(s) as Spot Instances only. The bid price is given using
the stable_spot_instance_settings. The coordinator node and minimum worker node request type depends on whether or not the
|
|
Spot Blocks are Spot instances that run continuously for a finite duration (1 to 6 hours). They are 30 to 45 percent cheaper than On-Demand instances based on the requested duration. For more information, see spot_block_settings. QDS ensures that Spot blocks are acquired at a price lower than On-Demand nodes. It also ensures that autoscaled nodes are acquired for the remaining duration of the cluster. For example, if the duration of a Spot block cluster is 5 hours and there is a need to autoscale at the 2nd hour, Spot blocks are acquired for 3 hours. |
|
fallback_to_ondemand |
Fallback to on-demand nodes if spot nodes could not be obtained when adding nodes during autoscaling. It is valid only if
worker request type is |
ebs_volume_type |
The default EBS volume type is standard (Magnetic). Note: For recommendations on using EBS volumes, see AWS EBS Volumes. |
ebs_volume_size |
The default EBS volume size is 100 GB for Magnetic/General Purpose SSD volume types and 500 GB for the Throughput Optimized HDD/Cold HDD volume types. The supported value range is 100 GB/500 GB to 16 TB; the minimum and maximum volume sizes vary for each EBS volume type. Note: For recommendations on using EBS volumes, see AWS EBS Volumes. |
ebs_volume_count |
The number of EBS volumes to attach to each cluster instance. The default value is 0. |
Hadoop 2 and Spark clusters that use EBS volumes can now dynamically expand the storage capacity. This relies on Logical Volume Management (LVM): the EBS volumes attached to an instance are combined into a volume group, and a logical volume is created on that volume group. When the logical volume is approaching full capacity, additional EBS volumes are attached to the instance and added to the logical volume, and the file system is resized to accommodate the additional capacity. This is not enabled by default. Storage-capacity upscaling in Hadoop 2/Spark clusters using EBS volumes also supports upscaling based on the rate of increase of used capacity. Note: For the required EC2 permissions, see Sample Policy for EBS Upscaling. Here is an example:
"node_configuration" : {
"ebs_upscaling_config": {
"max_ebs_volume_count":5,
"percent_free_space_threshold":20.0,
"absolute_free_space_threshold":100,
"sampling_interval":40,
"sampling_window":8
}
}
See ebs_upscaling_config for information on the configuration options. |
|
custom_ec2_tags |
It is an optional parameter whose value contains a set of <tag>:<value> pairs to be applied to the AWS instances created for the cluster and to the EBS volumes attached to those instances. Specify it as a JSON object, for example, {"project": "webportal", "owner": "john@example.com"}. You can set a custom EC2 tag if you want the instances of a cluster to get that tag on AWS; the custom tags are also applied to the Qubole-created security groups (if any). Tags and values must be alphanumeric and can contain only these special characters: + (plus sign), . (period), - (hyphen), @ (at sign), = (equal sign), / (forward slash), : (colon), and _ (underscore). The tags Qubole and alias are reserved for use by Qubole (see Qubole Cluster EC2 Tags (AWS)), and tags beginning with aws- are reserved for use by Amazon. Qubole supports defining user-level EC2 tags. For more information, see Adding Account and User level Default Cluster Tags (AWS). |
idle_cluster_timeout |
The default cluster timeout is 2 hours. Optionally, you can configure it in the range of 0-6 hours; the only supported unit of time is the hour. If the timeout is set at the account level, it applies to all clusters within that account; however, you can override the timeout at the cluster level. The timeout takes effect after all queries on the cluster have completed, and Qubole terminates a cluster on an hour boundary. For example, when |
idle_cluster_timeout_in_secs |
After enabling the aggressive downscaling feature on the QDS account, the Cluster Idle Timeout can be configured in seconds. Note: This feature is only available on request. Contact the account team to enable this feature on the QDS account. |
node_base_cooldown_period |
With the aggressive downscaling feature enabled on the QDS account, this is the cool down period, set in minutes, for On-Demand nodes on a Hadoop 2 or a Spark cluster. The default value is 10 minutes. For more information, see Understanding Aggressive Downscaling in Clusters (AWS). Note: This feature is only available on request. Contact the account team to enable this feature on the QDS account. You must not set the Cool Down Period to a value lower than |
With the aggressive downscaling feature enabled on the QDS account, this is the cool down period, set in minutes, for cluster nodes on a Presto cluster. Note: This feature is only available on request. Contact the account team to enable this feature on the QDS account. |
|
node_spot_cooldown_period |
With the aggressive downscaling feature enabled on the QDS account, this is the cool down period, set in minutes, for Spot nodes on a Hadoop 2 or a Spark cluster. The default value is 15 minutes. For more information, see Understanding Aggressive Downscaling in Clusters (AWS). It is not applicable to Presto clusters. Note: This feature is only available on request. Contact the account team to enable this feature on the QDS account. You must not set the Cool Down Period to a value lower than |
root_volume_size |
Use this parameter to configure the root volume of cluster instances. The supported range for the root volume size is
|
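To illustrate the worker-scaling parameters above, here is a sketch of a node_configuration edit that uses Spot workers with a fallback to On-Demand. The instance types, counts, and EBS values are only examples (they echo values seen in the sample responses later in this page):
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "node_configuration": {
        "master_instance_type": "m3.xlarge",
        "slave_instance_type": "m3.xlarge",
        "initial_nodes": 2,
        "max_nodes": 10,
        "slave_request_type": "spot",
        "fallback_to_ondemand": true,
        "ebs_volume_count": 1,
        "ebs_volume_size": 100
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>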
Coordinator and Minimum Number of Nodes in a Cluster
To add the Coordinator and Minimum Number of Nodes in a cluster, you can use Stable Spot Instances, Spot Blocks, or On-Demand nodes. You can set the cluster composition by using one of these configuration types (see the example request below):
- OnDemand: It is the default value. This applies to On-Demand nodes.
- stable_spot_instance_settings: This applies to Spot Instances. For example, stable_spot_instance_settings: {maximum_bid_price_percentage: "", timeout_for_request: ""}.
- spot_block_settings: This applies to Spot Blocks. For example, spot_block_settings: {duration: ""}.
Cluster Composition Settings (AWS) describes how to configure the Coordinator and Minimum Number of Nodes through the Clusters UI page.
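For example, a sketch of a composition that keeps the coordinator and minimum worker nodes on Stable Spot Instances could look like this; the bid percentage and timeout values are placeholders:
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "node_configuration": {
        "stable_spot_instance_settings": {
          "maximum_bid_price_percentage": 100,
          "timeout_for_request": 10
        }
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>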
heterogeneous_instance_config
See An Overview of Heterogeneous Nodes in Clusters for more information.
Parameter |
Description |
---|---|
memory |
To configure the heterogeneous cluster, you must provide a whitelisted set of instance types with their weights, as shown in this example:
"node_configuration":{
"heterogeneous_instance_config":{
"memory": {
[
{"instance_type": "m4.4xlarge", "weight": 1.0},
{"instance_type": "m4.2xlarge", "weight": 0.5},
{"instance_type": "m4.xlarge", "weight": 0.25}
]
}
}
}
The following points about the instance types hold for a heterogeneous cluster:
|
ebs_upscaling_config
Note
For the required EC2 permissions, see Sample Policy for EBS Upscaling.
Parameter |
Description |
---|---|
max_ebs_volume_count |
The maximum number of EBS volumes that can be attached to an instance. It must be greater than the value of ebs_volume_count. |
percent_free_space_threshold |
The percentage of free space on the logical volume as a whole at which addition of disks must be attempted. The default value is 25%, which means new disks are added when the EBS volume is (greater than or equal to) 75% full. |
absolute_free_space_threshold |
The absolute free capacity of the EBS volume above which upscaling does not occur. The percentage threshold changes as the size of the logical volume increases. For example, if you start with a threshold of 15% and a single disk of 100 GB, the disk would upscale when it has less than 15 GB of free capacity. On addition of another disk, the total capacity of the logical volume becomes 200 GB and it would upscale when the free capacity falls below 30 GB. If you would prefer to upscale only when the free capacity is below a fixed value, use this parameter. |
sampling_interval |
It is the frequency at which the capacity of the logical volume is sampled. Its default value is 30 seconds. |
sampling_window |
It is the number of sampling intervals over which the rate of increase in used capacity is calculated. The logical volume is upscaled if, based on the current rate, it is estimated to get full in (sampling_interval + 600) seconds (the additional 600 seconds is because the addition of a new EBS volume to a heavily loaded volume group has been observed to take up to 600 seconds). Here is an example of how the free space threshold decreases with respect to the Sample Window and Sample Interval, assuming the default value of |
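As a rough worked example of the rate-based check described above (the numbers are purely illustrative and assume a sampling_interval of 30 seconds): if the logical volume loses about 1 GB of free space per 30-second sample, the estimated fill rate is 2 GB per minute; the volume is considered at risk of filling within 30 + 600 = 630 seconds, so upscaling would be triggered once free space drops below roughly 2 GB/minute x 10.5 minutes = 21 GB.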
spot_instance_settings
Parameter |
Description |
---|---|
timeout_for_request |
The timeout for a Spot Instance request in minutes. The default value is |
maximum_spot_instance_percentage |
The maximum percentage of instances that may be purchased from the AWS Spot market. The default value is |
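A sketch of spot_instance_settings within a node_configuration edit; the values are only examples and mirror those seen in the sample responses below:
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "node_configuration": {
        "slave_request_type": "spot",
        "spot_instance_settings": {
          "maximum_bid_price_percentage": 100,
          "timeout_for_request": 10,
          "maximum_spot_instance_percentage": 50
        }
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>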
stable_spot_instance_settings
Use this parameter to set coordinator and minimum number of nodes in a cluster. For more information, see Coordinator and Minimum Number of Nodes in a Cluster.
Parameter |
Description |
---|---|
timeout_for_request |
The timeout for a Spot Instance request in minutes. The default value is |
spot_block_settings
Use this parameter to set the coordinator node and minimum number of nodes as described in Coordinator and Minimum Number of Nodes in a Cluster, and worker nodes.
Parameter |
Description |
---|---|
duration |
Set the duration in minutes. The accepted value range is 60-360 minutes and the duration must be a multiple of 60. It is set in node_configuration. Spot Blocks are more stable than Spot nodes because they are not susceptible to being taken away during the specified duration. However, these nodes are terminated once the duration for which they were requested is complete. For more details, see AWS Spot Blocks. An example of a Spot Block setting is given below.
"node_configuration": {"spot_block_settings": {"duration":120} }
|
hadoop_settings
Parameter |
Description |
---|---|
use_hadoop2 |
Set this parameter value to true to run Hadoop 2 on the cluster. |
use_spark |
This is a mandatory setting for a Spark cluster. Its value must be true. |
custom_config |
The custom Hadoop configuration overrides. The default value is blank. |
fairscheduler_settings |
The fair scheduler configuration options. |
use_qubole_placement_policy |
Use Qubole Block Placement policy for clusters with spot nodes. |
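A sketch of a hadoop_settings edit follows. The custom_config override shown is only an example; it mirrors the FairScheduler override that appears in the sample response further below:
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "hadoop_settings": {
        "use_hadoop2": true,
        "use_qubole_placement_policy": true,
        "custom_config": "yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler"
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>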
fairscheduler_settings
Parameter |
Description |
---|---|
fairscheduler_config_xml |
XML string with custom configuration parameters for the fair scheduler. The default value is blank. |
default_pool |
It is the default Fair Scheduler Queue if the queue is not submitted during job submission. |
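For example, a minimal sketch that sets a default pool and a FairScheduler allocation; the XML content and pool name are purely illustrative:
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "hadoop_settings": {
        "fairscheduler_settings": {
          "fairscheduler_config_xml": "<allocations><pool name=\"etl\"><weight>2.0</weight></pool></allocations>",
          "default_pool": "etl"
        }
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>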
security_settings
It is now possible to enhance the security of a cluster by authorizing Qubole to generate a unique SSH key every time a cluster is started. This feature is not enabled by default; create a ticket with Qubole Support to enable it. Once this feature is enabled, Qubole starts using the unique SSH key to interact with the cluster. For clusters running in private subnets, enabling this feature generates a unique SSH key for the Qubole account; this SSH key must be authorized on the Bastion host.
Parameter |
Description |
---|---|
encrypted_ephemerals |
Qubole allows encrypting ephemeral drives on the instances. Create a ticket with Qubole Support to enable the block device encryption. |
ssh_public_key |
SSH key to use to login to the instances. The default value is none. (Note: This parameter is not visible to non-admin users.) The SSH key must be in the OpenSSH format and not in the PEM/PKCS format. |
persistent_security_group |
This option overrides the account-level security group settings. By default, this option is not set and inherits the account-level persistent security group, if any. Use this option if you want to give additional access permissions to cluster nodes. Qubole only uses the security group name for validation, so do not provide the security group's ID. You must provide a persistent security group when you configure outbound communication from cluster nodes to pass through an Internet proxy server. |
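A sketch of a security_settings edit; the key and security group name below are placeholders:
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "security_settings": {
        "ssh_public_key": "ssh-rsa AAAA<your-public-key> user@example.com",
        "persistent_security_group": "<your-security-group-name>"
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>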
presto_settings
Parameter |
Description |
---|---|
enable_presto |
Enable Presto on the cluster. |
custom_config |
Specify the custom Presto configuration overrides. The default value is blank. |
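A minimal sketch of a presto_settings edit follows. The custom_config value is only an assumption for illustration; consult the Presto configuration documentation for the exact override syntax and the properties you actually need:
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "presto_settings": {
        "enable_presto": true,
        "custom_config": "config.properties:\nquery.max-memory-per-node=2GB"
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>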
spark_settings
Parameter |
Description |
---|---|
custom_config |
Specify the custom Spark configuration overrides. The default value is blank. |
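Similarly, a minimal sketch of a spark_settings edit; the override shown is only an illustrative Spark property, and the exact override format is described in the Spark cluster configuration documentation:
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "spark_settings": {
        "custom_config": "spark.executor.memory 4g"
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>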
datadog_settings
Note
This feature is enabled on Hadoop 2 (Hive), Presto, and Spark clusters. Once you set the Datadog settings, Ganglia monitoring gets automatically enabled. Although the Ganglia monitoring is enabled, its link may not be visible in the cluster’s UI resources list.
Parameter |
Description |
---|---|
datadog_api_token |
Specify the Datadog API token to use the Datadog monitoring service. The default value is NULL. |
datadog_app_token |
Specify the Datadog APP token to use the Datadog monitoring service. The default value is NULL. |
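A sketch of a datadog_settings edit using placeholder tokens:
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
      "datadog_settings": {
        "datadog_api_token": "<your-datadog-api-token>",
        "datadog_app_token": "<your-datadog-app-token>"
      }
    }' \
https://api.qubole.com/api/v1.3/clusters/<cluster id>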
Response
The response contains a JSON object representing the edited cluster. All the attributes mentioned here are returned (except when otherwise specified or redundant).
Examples
Goal
Modify some attributes of the cluster with a cluster ID, 116.
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
"label":["default"],
"node_configuration": {
"initial_nodes": 2,
"slave_request_type": "ondemand",
"slave_instance_type": "c3.xlarge",
"max_nodes": 12,
"master_instance_type": "c3.large"
},
"enable_ganglia_monitoring": true
}' \
https://api.qubole.com/api/v1.3/clusters/116
Note
The above syntax uses https://api.qubole.com as the endpoint. Qubole provides other endpoints to access QDS that are described in Supported Qubole Endpoints on Different Cloud Providers.
Response
{
"security_settings":{
"encrypted_ephemerals":false
},
"enable_ganglia_monitoring":true,
"label":[
"my_cluster"
],
"ec2_settings":{
"compute_validated":false,
"compute_secret_key":"<your_ec2_compute_secret_key>",
"aws_region":"us-west-2",
"vpc_id":null,
"aws_preferred_availability_zone":"Any",
"compute_access_key":"<your_ec2_compute_access_key>",
"subnet_id":null
},
"node_bootstrap_file":"node_bootstrap.sh",
"hadoop_settings":{
"use_hadoop2":false,
"custom_config":null,
"fairscheduler_settings":{
"default_pool":null
}
},
"disallow_cluster_termination":false,
"presto_settings":{
"enable_presto":false,
"custom_config":null
},
"id":116,
"state":"DOWN",
"node_configuration":{
"max_nodes":12,
"master_instance_type":"c3.large",
"slave_instance_type":"c3.xlarge",
"use_stable_spot_nodes":false,
"slave_request_type":"ondemand",
"initial_nodes":2,
"spot_instance_settings":{
"maximum_bid_price_percentage":"100.0",
"timeout_for_request":10,
"maximum_spot_instance_percentage":60
}
}
}
Example to edit an Airflow Cluster
For more information on parameters supported by an Airflow cluster, see Create a New Airflow Cluster.
curl -X PUT -H "X-AUTH-TOKEN:$X_AUTH_TOKEN" -H "Content-Type:application/json" -H "Accept: application/json" \
-d '{
"label": ["airflow"],
"node_configuration": {
"master_instance_type": "m1.large",
"custom_ec2_tags": {}
},
"engine_config": {
"type": "airflow",
"overrides": "core.parallelism=16\ncore.dag_concurrency=34"
}
}' \
https://api.qubole.com/api/v1.3/clusters/50335
Sample Response
{
    "state": "DOWN",
    "id": 50335,
    "spark_version": "1.6.1",
    "label": ["airflow"],
    "disallow_cluster_termination": true,
    "force_tunnel": false,
    "enable_ganglia_monitoring": true,
    "node_bootstrap_file": "node_bootstrap.sh",
    "ec2_settings":
    {
        "aws_preferred_availability_zone": "Any",
        "aws_region": "us-east-1",
        "compute_validated": true,
        "vpc_id": null,
        "subnet_id": null,
        "bastion_node_public_dns": null,
        "compute_secret_key": "",
        "compute_access_key": "AKIAIDR6RL**********",
        "use_account_compute_creds": true
    },
    "hadoop_settings":
    {
        "use_spark": false,
        "custom_config": null,
        "use_hadoop2": false,
        "use_qubole_placement_policy": false,
        "fairscheduler_settings":
        {
            "default_pool": null
        }
    },
    "node_configuration":
    {
        "master_instance_type": "m1.large",
        "slave_instance_type": "",
        "initial_nodes": 1,
        "max_nodes": 1,
        "slave_request_type": "ondemand",
        "cluster_name": "qa_qbol_acc2930_cl50335"
    },
    "security_settings":
    {
        "encrypted_ephemerals": false
    },
    "presto_settings":
    {
        "enable_presto": false,
        "custom_config": null
    },
    "spark_settings":
    {
        "custom_config": ""
    },
    "errors": [],
    "datadog_settings":
    {
        "datadog_api_token": "",
        "datadog_app_token": ""
    },
    "spark_s3_package_name": null,
    "zeppelin_s3_package_name": null,
    "engine_config":
    {
        "type": "airflow",
        "dbtap_id": 9670,
        "fernet_key": "<your-fernet-key>",
        "overrides": "core.parallelism=16\ncore.dag_concurrency=34"
    }
}
Example to add EBS configuration in a Hadoop 2 cluster
Here is an example of adding EBS configuration in a Hadoop 2 cluster with 3001 as its cluster ID.
curl -i -X PUT -H "X-AUTH-TOKEN:$token" -H "Content-Type: application/json" -H "Accept: application/json" \
-d '{"node_configuration": {"ebs_upscaling_config": {"max_ebs_volume_count":5, "percent_free_space_threshold": 20,
"absolute_free_space_threshold": 100 } }}' \
"https://api.qubole.com/api/v1.3/clusters/3001"
Example to edit the EC2 settings
Here is an example of editing the EC2 settings of a cluster.
curl -i -X PUT -H "X-AUTH-TOKEN:<AUTH_TOKEN>" -H "Content-Type: application/json" -H "Accept: application/json" \
-d '{"ec2_settings": {"vpc_id": "<vpc-id>", "subnet_id": "<subnet-ID1>,<subnet-ID2>"}}' \
"https://api.qubole.com/api/v1.3/clusters/34456"
Here is the sample response.
{ "state":"DOWN","id":34456,
"spark_version":null,
"presto_version":null,
"label":["hadoop2","default"],
"disallow_cluster_termination":false,
"force_tunnel":true,
"enable_ganglia_monitoring":false,
"node_bootstrap_file":"timeline_server.sh",
"tunnel_server_ip":null,
"ec2_settings":
{"aws_preferred_availability_zone":"Any",
"aws_region":"us-east-1",
"compute_validated":true,
"vpc_id":"vpc-9b1166e3",
"subnet_id":"subnet-<ID1>,subnet-<ID2>",
"bastion_node_public_dns":null,
"bastion_node_port":null,
"bastion_node_user":null,
"master_elastic_ip":null,
"instance_tenancy":null,
"compute_secret_key":"",
"compute_access_key":"<compute-access-key>",
"use_account_compute_creds":true},
"hadoop_settings":
{"use_hbase":false,
"use_spark":false,
"custom_config":"yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler",
"use_hadoop2":true,
"use_qubole_placement_policy":true,
"is_ha":null,
"enable_rubix":false,
"node_bootstrap_timeout":0,
"fairscheduler_settings":{"default_pool":null}
},
"node_configuration":
{"master_instance_type":"m3.xlarge",
"slave_instance_type":"m3.xlarge",
"initial_nodes":2,
"max_nodes":4,
"idle_cluster_timeout":null,
"node_base_cooldown_period":null,
"node_spot_cooldown_period":null,
"child_hs2_cluster_id":null,
"parent_cluster_id":null,
"slave_request_type":"spot",
"fallback_to_ondemand":false,
"spot_instance_settings":{"maximum_bid_price_percentage":100.0,
"timeout_for_request":10,
"maximum_spot_instance_percentage":50},
"cluster_name":"qbol_acc7753_cl34456"
},
"security_settings":{"encrypted_ephemerals":false},
"presto_settings":
{"enable_presto":false,
"custom_config":null},
"spark_settings":
{"custom_config":null},"errors":[],
"datadog_settings":{"datadog_api_token":null,"datadog_app_token":null},
"spark_s3_package_name":null,
"zeppelin_s3_package_name":null,
"engine_config":
{"type":"hadoop2",
"hive_settings":
{"is_hs2":false,
"hive_version":"2.1.1",
"pig_version":"0.11",
"pig_execution_engine":"mr",
"overrides":null,
"is_metadata_cache_enabled":false,
"execution_engine":null,
"hs2_thrift_port":null}
},
"zeppelin_interpreter_mode":null}