Changelog for api.qubole.com¶
|Date and time of release||Version||Change type||Change|
|23rd April, 2019 (8:45 AM PST)||55.0.49||New feature||ACM-4765: Qubole now supports m5ad and r5ad family instance types in US East (N. Virginia), US West (Oregon), US East (Ohio), and Asia Pacific (Singapore) AWS regions.|
|22nd April, 2019 (07:25 AM PST)||55.0.48||Bug fix||AN-2056: Uploading objects or files through the Qubole UI is now supported for all AWS regions. KMS-enabled buckets are also supported.|
|02nd April, 2019 (03:57 PM PST)||55.0.37||Enhancement||
PRES-1350: Qubole supports configuring the required number of worker nodes during autoscaling. It is a cluster-level configuration.
PRES-2397: Qubole supports escaping newline characters.
PRES-2417: Presto clusters do not terminate while actively running Presto notebook paragraphs. The enhancement is not available by default. Create a ticket with Qubole Support to use it.
PRES-2474: The optimization to speed up queries on
PRES-1373: Fixed an issue where the retry operation did not work on failed Presto queries; retries now work. As part of the fix, retries are configurable for Presto queries that run on the QDS platform.
Note that configuring retries performs a blind retry of the Presto query, which may lead to data corruption for queries other than Insert Overwrite Directory (IOD) queries.
PRES-2306: Fixed a bug where the Usage page on the QDS UI did not show the Presto query’s bytes read statistics.
PRES-2493: Fixed a data loss issue in the Hive connector when writing bucketed, sorted tables in Qubole Presto 0.208.
PRES-2567: Fixed a bug in the user-overridden IAM Role feature where stale credentials were being used in a long-running cluster.
PRES-2571: Fixed a concurrency-related issue in autoscaling on Qubole Presto 0.193 and 0.208 due to which the cluster’s size could permanently exceed its maximum size.
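The retry caveat above (PRES-1373) can be sketched in a few lines: a blind retry duplicates data for append-style inserts, while an overwrite-style (IOD) write is idempotent and retries safely. The helpers below are illustrative only, not Qubole APIs.

```python
# Sketch: why blind retries are safe only for idempotent (overwrite) writes.
def run_with_retry(op, table, rows, attempts=2):
    for _ in range(attempts):
        op(table, rows)  # pretend the first attempt succeeded server-side,
                         # but the client saw a transient failure and retried

def insert_append(table, rows):      # non-IOD: each retry appends again
    table.extend(rows)

def insert_overwrite(table, rows):   # IOD-style: each retry rewrites the same data
    table[:] = rows

t1, t2 = [], []
run_with_retry(insert_append, t1, [1, 2])     # duplicated rows: [1, 2, 1, 2]
run_with_retry(insert_overwrite, t2, [1, 2])  # correct result:  [1, 2]
```

This is why the note above restricts safe blind retries to IOD queries.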
|01st April, 2019 (03:20 AM PST)||55.0.36||Bug fix||
HIVE-3740: The Hive metastore API call has been replaced with a less expensive API call to resolve false alerts that were triggered on Datadog.
HIVE-4040: It is a fix for an issue in
HIVE-4053: It is a fix for an issue where the
HIVE-4280: Hive jobs were failing with a MetaException that included a timeout message. To resolve this issue, the maximum value of the SSH configuration parameters has been increased on QDS servers.
QTEZ-335: It resolves an issue where the Tez Offline UI was inaccessible when
|25th March, 2019 (03:34 AM PST)||55.0.25||Bug fix||TOOLS-1139: The previous Docker version had security vulnerabilities that could allow malicious containers to gain root-level
privileges on the host. To resolve these vulnerabilities, the latest patch version of
|13th March, 2019 (08:37 AM PST)||55.0.22||Enhancement||SPAR-3005: For the offline Spark clusters, only the event log files that are less than 400 MB are processed in the offline Spark History Server (SHS). This prevents high CPU utilization on the internal servers due to SHS.|
|Bug fix||SPAR-3371: Dynamic filtering now works for join keys of type Int, String, Long, and Short.|
|13th March, 2019 (08:37 AM PST)||55.0.22||Bug fix||
ZEP-3247: Zeppelin failed to start due to the
ZEP-3252: On a cluster restart, the latest dashboards are displayed after the scheduled runs.
ZEP-3230: The Zeppelin server on Java 7 clusters failed to start because some ciphers had been disabled for security. The required ciphers are now added at startup, so the Zeppelin server starts successfully.
ZEP-3243: Fixed an issue where a notebook with the ERROR status on a cluster failed to associate back to that cluster even after switching to another cluster.
ZEP-3234: Notebooks failed to load in case of higher latencies when fetching notebook permissions. This issue is fixed.
|11th March, 2019 (8:55 AM PST)||55.0.17||New feature||EAM-1632: A new API is introduced to retrieve the Hive table partitions and locations. For more information, see View Table Partitions and Location.|
|18th February, 2019 (12:12 AM PST)||55.0.11||Enhancement||QHIVE-4065: Qubole tmp tables can be dropped faster if you enable the
|5th February, 2019 (4:35 AM PST)||55.0.5||R55 Upgrade (Phase 1 frontend)|
|4th February, 2019 (1:25 PM PST)||54.0.64||Bug fix||AD-2051: Fixed an issue where sample data for Hive tables on the Explore page took a long time to appear due to a misconfiguration.|
|28th January, 2019 (9:58 AM PST)||54.0.59||Bug fix||QHIVE-3878: Fixed the issue when concurrently running the
|15th January, 2019 (9:04 AM PST)||54.0.53||Bug fix||SCHED-321: The scheduler now calculates the Next Materialized Time (NMT)/start time from the current time (used as the base time) and the Cron expression passed; any start time in the Cron expression is not honored. This resolves an issue where the first instance of the scheduler did not run at the scheduled time.|
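The NMT calculation described in SCHED-321 can be sketched for the simplest case, a fixed daily cron such as `30 9 * * *`. The `next_materialized_time` helper below is hypothetical, for illustration only; it is not the Qubole scheduler's code.

```python
from datetime import datetime, timedelta

def next_materialized_time(base, hour, minute):
    """Next fire time for a simple daily cron (e.g. '30 9 * * *'),
    computed from the base (current) time, not from a requested start time."""
    candidate = base.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= base:        # today's slot already passed: roll to tomorrow
        candidate += timedelta(days=1)
    return candidate

# Base time 10:00 is past the 09:30 slot, so the NMT rolls to the next day.
print(next_materialized_time(datetime(2019, 1, 15, 10, 0), 9, 30))
# 2019-01-16 09:30:00
```

Full cron expressions (ranges, steps, day-of-week fields) need a real cron parser; this only shows the base-time semantics.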
|11th January, 2019 (1:00 PM PST)||54.0.52||Bug fix||AN-1407: While filtering commands on the Analyze page, you must now provide the date range (start/end date) when searching for cluster labels.|
|7th January, 2019 (1:30 AM PST)||54.0.46||Enhancement||AD-1834: The list of email IDs configured in Account Settings will now receive notifications related to releases, account configuration, and feature changes.|
|19th December, 2018 (12:57 PM PST)||54.0.44||Bug fix||SPAR-3207: The Spark shuffle cleanup feature is no longer supported in Spark 2.2.1; it continues to be supported in Spark 2.3.1 and later versions.|
|19th December, 2018 (12:47 PM PST)||54.0.44||Enhancement||
ACM-3833: Qubole now supports saving the node bootstrap on SSE-KMS encrypted buckets.
ACM-3891: As part of a specific cluster’s governance, the master node is not counted in the node time chart; that is, the chart shows only worker nodes.
|Bug fix||ACM-3847: Resolved an issue where a cluster did not start from the Notebooks UI when global permissions were restrictive, even though the object-level ACL permissions for the same cluster allowed it.|
|17th December, 2018 (9:22 AM PST)||54.0.41||Bug fix||ZEP-3109: Fixed an issue that caused notebooks to freeze frequently due to large amounts of content, such as a large number of clusters, users, and schemas.|
|13th December, 2018 (1:39 PM PST)||54.0.40||Bug fix||RUB-102: Fixed an issue where unnecessary HTTP GET calls to Ganglia for fetching cluster metrics overloaded the master node.|
|11th December, 2018 (12:57 AM PST)||54.0.39||Enhancement||
ACM-3436: QDS now supports c5n, m5a, and r5a instances.
ACM-3545: Qubole now provides a feature to prevent the cluster from starting whenever the master node’s Elastic IP address fails. This feature is not enabled by default. Contact Qubole Support to enable this feature.
|6th December, 2018 (5:29 AM PST)||54.0.38||Bug fix||EAM-1502: Fixed an issue in the Automatic Statistics Collection framework due to which the fresh statistics were not collected for a few tables.|
|28th November, 2018 (6:19 PM PST)||54.0.33||Enhancement||PRES-2380: Presto notebooks are GA and Presto 0.208 (beta) version is available for use.|
PRES-2338: Fixed a bug in Presto 0.180 and 0.193 to honor the fallback to the On-Demand configuration in case of Spot loss.
PRES-2268: Fixed a bug in Presto 0.193 that caused SELECT queries with ROW type output to fail.
PRES-2269: Fixed a bug in Presto 0.193 due to which queries on Hive views from Presto were failing.
|26th November, 2018 (1:20 AM PST)||54.0.30||Enhancement||
SPAR-3151: Qubole now supports the latest Apache Spark 2.4.0 version. It is displayed as 2.4 latest (2.4.0) in the Spark Version field of the Create New Cluster page on the QDS UI.
|26th November, 2018 (1:20 AM PST)||54.0.30||Bug fix||ACM-3893: Resolved a command failure caused by an S3 exception by setting the correct credentials in environment variables.|
|21st November, 2018 (1:29 AM PST)||54.0.26||Bug fix||
AD-1752: Bug fix to restrict the Service user from logging in via Gmail or SAML.
AD-1763: Bug fix to enable updating the feature status in an internal table.
|19th November, 2018 (7:23 AM PST)||54.0.23||Bug fix||ACM-3857: Fixed the incorrect volatile node count calculation in clusters with heterogeneous instance configuration enabled.|
|15th November, 2018 (12:19 PM PST)||54.0.22||Enhancement||TOOLS-906: Given the recent addition of new packages and package upgrades to clusters, increased the node root volume capacity from 45GB to 60GB as a defensive measure.|
|15th November, 2018 (3:05 AM PST)||54.0.21||Bug fix||EAM-1476: Fixed an error in the hive table_properties API so that it returns the correct response code.|
|14th November, 2018 (7:04 PM PST)||54.0.20||Bug fix||SPAR-3160: Fixed the issue where role-based Spark clusters using the Native Amazon S3 FS did not get the Node Bootstrap correctly while bringing up the cluster.|
|14th November, 2018 (6:56 PM PST)||54.0.17||Bug fix||
AD-1739: Fixed the configuration error that caused our systems to incorrectly determine the status of customer payment plans.
AD-1742: Fixed the error that caused
|13th November, 2018 (1:35 AM PST)||54.0.15||Major release|