Configuring Buffer Capacity
- HADTWO-2000: A Hadoop 2 (Hive) or Spark cluster can be configured to have a specific buffer
capacity. Set yarn.cluster_start.buffer_nodes.count to the number of nodes to be used for the buffer and pass it as
a Hadoop override for the cluster. You can also let QDS maintain the buffer capacity by setting
yarn.autoscaling.buffer_nodes.count.is_dynamic=true as a Hadoop override.
Disabled | Cluster Restart Required
This buffer capacity will remain free for the lifetime of the cluster, except when the cluster reaches or exceeds its configured maximum number of nodes. The advantage of configuring buffer capacity is that a new command can run immediately without needing to wait for the cluster to upscale. For more information, see the documentation.
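As a sketch, the two settings described above would be passed as Hadoop override lines in the cluster configuration; the node count of 2 here is only an illustrative value:

```
# Reserve a fixed buffer of 2 nodes (example value)
yarn.cluster_start.buffer_nodes.count=2

# Or let QDS manage the buffer size dynamically
yarn.autoscaling.buffer_nodes.count.is_dynamic=true
```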
- HADTWO-1942: Fixes a problem that caused Hadoop jobs to fail because of file permission checks when the job was submitted. QDS now skips file permission checks in the job submission phase if the staging directory of a MapReduce job is in an Oracle filesystem.
- HADTWO-2094: The oci-hdfs-connector has been upgraded to version 126.96.36.199.
- HADTWO-2204: Fixes a NullPointerException caused by an attempt to update the scheduler node resource after the resource has already been removed as a result of an asynchronous event.