Changelog for api.qubole.com¶
The following table lists the changelog of the api.qubole.com environment.
|Date and time of release||Version||Change type||Change|
|16th Apr, 2021 (11:59 CST)||59.5||Bug fix||
SPAR-4848: Fixed problems with reading Hive ACID table data through the DataFrame API when running Python in Analyze.
SPAR-4857: Fixed problems with the Spark Snowflake jar by updating spark-snowflake_2.11-2.5.1-spark_2.4.3.jar with the missing SnowflakeUtils.
HIVE-5721: Fixed Hive queries failing on AL2 by assigning permissions to the yarn user and overriding Hadoop configurations.
JUPY-630: Fixed the error in the branch details of the GitLab server by configuring the bastion with the account's public SSH key.
|19th Mar, 2021 (11:59 CST)||59.4||Bug fix||
ACM-7740: Fixed the Permission denied error in the EBS directory by adding disk space above 100 GB.
EAM-2867: Fixed Oracle connection problems on Airflow clusters.
TOOLS-2880: Fixed the error that occurred while connecting to Qubole via SQLAlchemy by installing JDBC driver 3.0.3.
|29th Jan, 2021 (00:00 CST)||59.3||Bug fix||
PRES-3159: Fixed credentials visibility issues in Presto Ranger Clusters.
RUB-55: The BKS server monitoring no longer shows a timeout error.
MOJ-1481, MOJ-1482, and MOJ-942: Resolved the issue with missing Cost Explorer and QCU data. The workflow in the scheduler needs to be updated or fixed according to each environment.
ACM-7738: Fixed Autoscaling problems and cluster node termination in Neustar clusters by enabling Heterogeneous Cluster Configuration or enabling the Sync API Service.
|5th Oct, 2020 (11:29 AM PST)||59.0.128||Bug fix||
ACM-7641: Fixed the intermittent issue of abnormal cluster termination of clusters that do not use a bastion node.
ACM-7653: Fixed the intermittent issue in cleaning up run-away cloud resources of a cluster.
|13th Aug, 2020 (2:39 AM PST)||59.0.109||Enhancement||
QHIVE-5539: Hive logs on cluster nodes are now rotated daily.
PRES-3435: The QueryTracker link is now available in the Workbench/Analyze UI’s Logs tab for queries run through the third-generation drivers.
PRES-3722: Optimization is added to push null filters to table scans by inferring them from the JOIN criteria of equi-joins and
semi-joins in Presto version 317 and later. You can enable it through
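The idea behind this optimization (not Presto's implementation) can be sketched mechanically: a row with NULL on either side of an equi-join equality can never match, so `IS NOT NULL` predicates can be inferred and pushed down to each table scan.

```python
def inferred_null_filters(equi_join_clauses):
    """Given equi-join clauses such as ("a.x", "b.y"), return the
    IS NOT NULL predicates that can be pushed to each table scan.
    Illustrative sketch only, not Presto's optimizer code."""
    filters = set()
    for left, right in equi_join_clauses:
        # NULL never satisfies equality, so both sides can be filtered early.
        filters.add(f"{left} IS NOT NULL")
        filters.add(f"{right} IS NOT NULL")
    return sorted(filters)

print(inferred_null_filters([("orders.cust_id", "cust.id")]))
```

Pushing these filters into the scans reduces the rows flowing into the join without changing the query result.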
PRES-3748: Presto query retries for memory exceeded exceptions are triggered in a graded manner. Qubole retries the failed query in three steps. The first two steps occur on a cluster size that lies between the minimum and the maximum cluster size. The last step occurs at the maximum cluster size. To know more, see Graded Presto Query Retries. This enhancement is part of Gradual Rollout.
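A minimal sketch of what "graded" retries mean, assuming hypothetical step fractions (the actual sizing logic is internal to Qubole):

```python
def graded_retry_sizes(min_size, max_size):
    """Three retry cluster sizes: two intermediate steps between the
    minimum and maximum, then the maximum. The 1/3 and 2/3 fractions
    are assumptions for illustration, not Qubole's real step sizes."""
    span = max_size - min_size
    return [min_size + span // 3, min_size + (2 * span) // 3, max_size]

print(graded_retry_sizes(2, 10))  # [4, 7, 10]
```

Growing the cluster gradually avoids jumping straight to the maximum size for queries that only need slightly more memory.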
PRES-3761: The Presto Mongo Connector now supports querying Cosmos DB using Mongo APIs.
PRES-3788: You can now add a comma-separated list of endpoints and pass them as cluster override values of
ACM-7249: Fixed the
HADTWO-2743: Fixed a race condition in YARN autoscaling due to which nodes were moving directly from the
QHIVE-5542: Fixed a data corruption issue in Hive version 2.3 that occurred when vectorized execution is enabled and timestamp
data type is used as
PRES-3632: Fixed the
PRES-3787: Fixed the Ranger access control for Presto views in Presto version 0.208.
PRES-3790: Fixed the issue where queries failed when there was no space before or after a single-line comment in Presto queries.
RUB-239: Fixed the issue in RubiX that sometimes caused query failures around the cache data invalidation.
|12th Aug, 2020 (12:09 AM PST)||59.0.107||Enhancement||ACM-6401: The option to run HiveServer2 as an additional cluster with the associated Hadoop (Hive) cluster, or to run it on the node, affects only new clusters and clones of a Hadoop (Hive) cluster that has HiveServer2 as an additional cluster. If Qubole disables this feature, you cannot create or clone HiveServer2 as an additional cluster. An existing HiveServer2 that acts as an additional cluster continues to work as expected.|
|4th Aug, 2020 (02:04 AM PST)||59.0.105||Enhancement||QUEST-238: Qubole Pipelines Service introduces a new API to create pipelines with custom Jar files (BYOJ) or custom code (BYOC). When creating a pipeline with the new API, you can also set notifications. For more information about this API, see Create a Pipeline.|
EAM-2349: The Qubole Airflow cluster now has CI/CD support for Git repositories. Users can select Git as the repository and specify the repository location and branch to deploy code on the Airflow cluster. The supported Git platforms are GitHub, GitLab, Bitbucket, and self-hosted.
SPAR-3398: Spark on Qubole supports the latest Apache Spark 3.0.0 version. It is displayed as 3.0 latest in the Spark Version field of the Create New Cluster page on QDS UI.
For more information, see Spark 3.0 Features.
|30th Jul, 2020 (3:38 AM PST)||59.0.100||Enhancement||
QTEZ-522: Backported OSS YARN-3978 to create a configurable property for saving the YARN container info in the ATS summary store. By default, Qubole does not store this info as Tez UI does not access it. This default behavior reduces the ATS summary store’s storage size and helps the Tez UI scale better.
QTEZ-525: Storing of the YARN container info in the ATS summary store is disabled as Tez UI does not access this info. This reduces the ATS summary store’s storage size and helps the Tez UI scale better.
QTEZ-526: Improved the performance of parsing timeline data from HDFS into the summary store in ATS v1.5, which has significantly improved the responsiveness of the Tez UI. Qubole recommends SSD-based instances for greater performance gains. This improvement fixes the ATS v1.5 out-of-memory issues in Hadoop 3 clusters.
QHIVE-5200: Fixed the issue where a partition was pruned incorrectly when a Hive query had JOIN and complex predicate with
QHIVE-5250: Qubole Hive with direct writes used to create
QHIVE-5277: With Vectorization enabled,
QHIVE-5321: Fixed the issue where
QHIVE-5505: Backported the bug fix HIVE-17804 from Hive version 3.1.1 to 2.3.
QHIVE-5512: Fixed the issue when statistics stale status was not updated properly in table properties while running
QHIVE-5559: Hive jobs failed with
QPIG-114: Fixed the issue where a Pig log file did not rotate daily.
QTEZ-524: Fixed the issue where a cluster did not downscale due to the stale autoscaling requests from Tez Application Master.
QTEZ-544: Fixed the issue where a Tez job hung due to container priority inversion. The related open-source Jira is TEZ-3491.
|29th Jul, 2020 (05:23 AM PST)||59.0.99||Bug fix||JUPY-929: Dependencies that were installed through the Environments page were not accessible for the scheduled and API runs of Python notebooks. This issue is fixed.|
|15th Jul, 2020 (06:26 AM PST)||59.0.89||Bug fix||ACM-7286: Fixed the issue where the cluster idle termination may remain stuck in the terminating state for a long time if commands are submitted to start the cluster at the same time.|
|14th Jul, 2020 (06:27 AM PST)||59.0.88||Bug fix||
ACM-7280: Fixed the issue where new clusters created after the R59 release did not get auto terminated.
EAM-2334: This bug fix ensures the versions of packages installed during the cluster startup are pinned. This improves package installation by reducing the cluster start time and avoids intermittent package installation failure during the cluster startup (Cluster Restart Required).
|13th July, 2020 (09:34 AM PST)||59.0.86||Enhancement||ZEP-3286: Users can now attribute the cost metrics to each paragraph executed from Zeppelin notebooks. Contact Qubole support to enable this feature.|
ZEP-4130: Notebook commands were failing when the status was
ZEP-4590: Interpreter settings were getting lost because of
ZEP-4642: Notebook rendering was delayed due to extra web socket calls made for each paragraph to fetch editor settings. This issue is fixed.
ZEP-4752: Dashboards that were built on a notebook, which run another notebook using
ZEP-4789: The paragraph status was not getting updated after the web socket reconnect. This issue is fixed.
|7th July, 2020 (06:57 AM PST)||59.0.81||Bug fix||
HADTWO-2659: Fixed an issue where some of the Spark executor logs were not aggregated to S3 for Spark streaming applications.
HADTWO-2671: Fixed an issue where the NodeManager process was leaking connections.
HADTWO-2687: Fixed the issue where decommissioned YARN nodes were not removed from the cluster.
QHIVE-5487: Fixed the issue where the Tez ApplicationMaster was reattempted in session mode when DAG recovery is disabled. These are the related open-source Jira issues:
|24th Jun, 2020 (00:40 AM PST)||59.0.66||Enhancement||
SPAR-4354 and SPAR-3183: For Pipelines, Prometheus and Grafana Structured Streaming Dashboard display the following metrics for streaming queries:
For Spark clusters, users must set the following Spark configuration
SPAR-4418: Spark 2.4-latest is now the default Spark version. When the users create a new Spark cluster, the Spark version is set to Spark 2.4-latest by default.
SPAR-4409: For a Hive ACID table, users can perform any reads and writes in the table only if the Hive ACID Datasource jar is specified in the application’s class path.
SPAR-3685: Users can view the Pyspark program errors in the Logs tab of the Analyze UI. These errors were displayed in the Results tab earlier. This enhancement is supported on Spark 2.4-latest and later versions. Contact Qubole Support to enable this enhancement.
SPAR-4531: Structured Streaming metrics were not displayed in the Grafana dashboard when the pipeline name was of some particular pattern. This issue is fixed.
SPAR-4503: Stateful structured streaming applications failed to restart because of missing state store delta files in S3. This issue occurred because the connection to the Amazon S3 server had remained idle for at least 20 seconds and was closed subsequently. This issue is fixed and is available for Spark version 2.3-latest and later versions.
SPAR-4421: Redeploying a streaming pipeline for a stateful structured streaming application failed with
Additionally, the issue with the number of retries on AWS throttling error is fixed. This fix is supported on Spark 2.3-latest and later versions.
SPAR-3659: Parquet metadata caching was not working if the parquet file contained a column with PLAIN_DICTIONARY encoding set. This issue is fixed.
|19th Jun, 2020 (11:09 AM PST)||59.0.63||Bug fix||
PRES-3660: Fixed the Presto query failure with
PRES-3701: Fixed the connection leak in
PRES-3708: Fixed the possible deadlock between Hive
|17th Jun, 2020 (07:33 AM PST)||59.0.61||Bug fix||PRES-3694: Fixed an issue to allow overriding of the
|15th Jun, 2020 (11:28 PM PST)||59.0.59||Bug fix||
PRES-3543: Fixed the issue where the aggregation node in case of a UNION query (when union sources are
PRES-3588: Fixed issues related to updating table statistics, which has improved the performance of that operation. In addition, a new configuration property,
PRES-3602: Fixed the issue in reading the TEXT file collection delimiter configured in the Hive versions (earlier to Hive 3.0) in Presto version 317.
PRES-3604: Fixed the Ranger access control for Presto views that had earlier failed.
PRES-3618: The Presto catalog configuration for external data sources that skipped validation in Qubole was previously not added to the cluster. This issue is fixed, and such configuration is now added to the cluster.
PRES-3641: Fixed the failure in planning for spatial JOINs with dynamic filtering enabled.
PRES-3662: Fixed the issue where pushing configuration to a cluster corrupted the Presto configuration and failed the Presto server restart.
PRES-3672: Fixed query failures that occurred as too many partitions’ metadata were requested from the metastore in Presto versions 0.208 and 317.
PRES-3673: Fixed the issue where the Presto cluster start failed when
PRES-3677: Fixed the issue where the default location (DefLoc) was picked as the DB location of non-default schemas in the Presto version 317. The correct behavior is that DefLoc should be the DB location of only the default schema.
|5th Jun, 2020 (06:17 AM PST)||59.0.51||Bug fix||
SPAR-4486: In R59 when HMS 2.1.1 was used, the metastore queries were failing due to a missing dependency in the Spark’s HMS classpath. This issue is fixed for Spark version 2.2 and later versions.
JUPY-782: The Spark job progress bar was freezing in rare cases in Jupyter notebooks. This issue is fixed.
JUPY-864: Jupyter notebooks in offline mode were not rendering when the cluster start permission on a cluster with
|31st May, 2020 (11:38 PM PST)||59.0.48||Bug fix||
HADTWO-2632: Fixed an issue where some of the NodeManager’s local directories were being marked as unhealthy.
HADTWO-2633: Fixed an issue where the DFS status links were not working for Spark clusters.
|27th May, 2020 (01:59 AM PST)||59.0.44||Major release||R59 Upgrade. This major release does not directly impact your account. Track the R59 major release deployment schedule for your pod through Qubole Technical Communications and/or https://status.qubole.com/ to know when this major release (R59) impacts your Qubole account.|
|10th Jun, 2020 (08:55 AM PST)||58.0.131||Bug fix||PRES-3671: As part of the R59 release, calls to the Presto coordinator are authenticated when SSL is enabled. But because of a bug in the idle cluster timeout flow, authentication information is not sent to the Presto coordinator. This issue is fixed.|
|21st May, 2020 (08:38 AM PST)||58.0.126||Enhancement||EAM-2299: Users now have an option on the Scheduler page to specify a time interval for running the scheduler. To enable this, the following two new fields are added: Interval: the numerical value of the time interval corresponding to the file path (as entered in the File Location field). Increment: the time unit for the value specified in the Interval field. Both fields are mandatory if the user selects Cron expression as the Frequency.|
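For illustration, a client-side sketch of composing such a schedule request; the field names (`frequency`, `interval`, `increment`) and the allowed time units are hypothetical stand-ins, not the documented Qubole Scheduler API parameters:

```python
def cron_schedule_payload(cron, interval, increment):
    """Sketch of a schedule request using the new Interval/Increment
    fields. All field names and allowed units are assumptions made for
    illustration, not the real API schema."""
    if interval <= 0:
        raise ValueError("Interval must be a positive number")
    allowed = {"minutes", "hours", "days", "weeks", "months"}
    if increment not in allowed:
        raise ValueError(f"Increment must be one of {sorted(allowed)}")
    return {"frequency": cron, "interval": interval, "increment": increment}

print(cron_schedule_payload("0 */2 * * *", 2, "hours"))
```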
|6th May, 2020 (05:25 AM PST)||58.0.113||Bug fix||
ACM-6923: This is a bug fix to avoid the duplication of cluster nodes, which remained in the
ZEP-4446: Notebooks were getting lost when the
ZEP-4531: The Notebook content was getting lost or reverted after the Websocket reconnect. This issue is fixed.
|4th May, 2020 (11:15 PM PST)||58.0.109||Bug fix||JUPY-694: Spark session was terminated even when there were active tasks. This issue is fixed.|
|28th Apr, 2020 (01:18 AM PST)||58.0.106||Bug fix||
HADTWO-2545: Qubole has backported YARN-9984 to Hadoop 3. It fixes the
issue where ResourceManager got restarted due to
HADTWO-2548: The NameNode used to go into the safe mode frequently as Ganglia consumed a lot of disk space in Hadoop 3. To fix this, Qubole has disabled container level metrics by default in Hadoop 3.
QHIVE-5265: Fixed the issue of clearing the logging context in HiveServer2, which led to a leak of operations log file descriptors. The related open-source Hive Jira: HIVE-22733.
QHIVE-5318: Fixed the issue in a Hive ACID compaction job that caused data loss in the event of a job failure.
QTEZ-242: Fixed the issue where Tez autoscaling in a running Tez job did not occur due to lack of resources. The fix ensures
that when the Number Of Pending Tasks is 0, then the
QTEZ-513: When the ApplicationMaster (AM) on which a query is running shuts down abruptly, Tez has an inbuilt mechanism to recover to the preexisting state: the AM restarts in the Tez DAG recovery mode, and the state of the AM that was shut down is recovered in the new AM. Due to an open-source software bug that occurs while scheduling tasks in the Tez DAG recovery mode, the query gets stuck. So, Qubole has disabled Tez DAG recovery by default.
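If DAG recovery needs to be turned back on, it can be overridden in the cluster's Hadoop configuration. `tez.dag.recovery.enabled` is the standard Apache Tez property; whether Qubole honors an override for it here is an assumption:

```
# Hadoop configuration override (illustrative):
tez.dag.recovery.enabled=true
```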
|24th Apr, 2020 (06:03 AM PST)||58.0.105||Bug fix||AN-2722: Workbench now displays the complete cluster label as a tooltip when you hover over it.|
|22nd Apr, 2020 (03:09 AM PST)||58.0.99||Bug fix||
PRES-3315: Fixed the parsing error that occurred when queries contained block comments.
PRES-3466: Fixed the issue in the node removal API that did not work in SSL-enabled Presto clusters.
PRES-3558: Fixed the issue that occurred while reading ORC data generated by the minor compaction of Hive ACID tables.
|16th Apr, 2020 (04:40 AM PST)||58.0.94||Enhancement||ACM-5019: Qubole lets you configure the coordinator instance type in a multi-instance HiveServer2 cluster.|
|Bug fix||ACM-6797: Fixed the bug where a specific cluster's settings history was displayed incorrectly in the UI.|
|13th Apr, 2020 (05:29 AM PST)||58.0.91||Bug fix||
PRES-3322: Fixed the bug in the Hive strict mode that incorrectly blocked certain queries on partitioned columns with
PRES-3362: In Presto version 317, a Presto query with a Create table statement failed when the external location was not a directory. This issue is now fixed.
PRES-3442: The issue where Presto queries on ORC tables failed has been resolved. Qubole has now increased the default ORC decompression buffer from 4 MB to 16 MB. It has also introduced a new session property to set decompression buffer to any size.
Session property on Presto 0.208:
Session property on Presto 317:
PRES-3497: Presto queries failed as the user didn’t have write permissions to /media/ephemeral0. This issue is resolved now.
PRES-3510: Fixed a bug in parsing and splitting queries where inline comments had quotes.
PRES-3512: The issue where Presto queries failed while processing empty temp $folder directories is resolved now.
PRES-3513: Fixed the
PRES-3517: The issue where a Presto query failed with
PRES-3522: Fixed the
PRES-3523: Fixed the
PRES-3524: The issue where a Presto query on tables with nested storage directories returned empty results is resolved now.
PRES-3533: Parquet binary statistics generated before PARQUET-251 was fixed were corrupted. As a fix, Presto now ignores such statistics. This fix is backported to Presto 317.
|9th Apr, 2020 (01:58 AM PST)||58.0.88||Enhancement||QHIVE-5049: Qubole has removed
|3rd Apr, 2020 (12:56 AM PST)||58.0.84||Enhancement||QUEST-608: Quest, a Data Engineering product offered by Qubole, is renamed Qubole Pipelines Service. The Quest UI is now called the Pipelines UI.|
|31st Mar, 2020 (09:08 AM PST)||58.0.81||Bug fix||HADTWO-2144: A Hive-3.1.1 (beta) cluster works only with the s3a filesystem. If a Hive-3.1.1 (beta) cluster did not start up, then check with Qubole Support if the s3a filesystem is enabled on the account.|
|22nd Mar, 2020 (11:45 PM PST)||58.0.71||Enhancement||
PRES-3360: Qubole has added a Datadog alert to detect runaway splits occupying execution slots for more than 10 minutes.
PRES-3404: Qubole has improved utilization of dynamic filters on worker nodes and reduced load on coordinator when dynamic filtering is enabled.
PRES-3429: QDS Presto version 317 is generally available now.
PRES-3469: Qubole has backported OS fixes to improve performance of inequality JOINs that involve
PRES-2481: Qubole has increased the default value of
PRES-3108: Presto queries failed with denied authorization to Hive metastore. To fix this issue, Qubole has added
a new configuration property called
PRES-3403: The issue where a Presto query with the
|19th Mar, 2020 (04:27 AM PST)||58.0.70||Enhancement||AN-1430: When the number of columns in the Results pane is greater than 30, only the first 30 columns are rendered. You can use the Column drop-down list to select any (other) 30 columns to view. Additionally, the entire result set is available for download. Beta|
|Bug fix||AN-2510: While pinning custom buckets in Workbench, you can now pick a different region to list bucket contents. Beta|
|17th Mar, 2020 (03:18 AM PST)||58.0.68||Bug fix||JUPY-672: The Spark application status was not displayed in the Jupyter notebook panel menu. This issue is fixed.|
|12th Mar, 2020 (11:22 AM PST)||58.0.64||Enhancement||ZEP-4327: Precaching for Table Explorer (Datasets) is now available in Zeppelin notebooks. Via Support.|
ZEP-4336: Paragraphs failed with the
ZEP-4321: Table Explorer was consuming a high amount of memory when loading a large number of tables, making the Notebooks page unresponsive. This issue is fixed.
ZEP-4290: Notebook command failures with the
ZEP-4401: The paragraph run time in the Notebooks UI was displayed incorrectly. This issue is fixed.
ZEP-4378: HTML tags were not rendered under markdown paragraph. This issue is fixed.
ZEP-4310: Jobs were failing due to a high logging rate in the data-driven log filtering tool. This issue is fixed by adding an async processing queue for the tool.
|10th Mar, 2020 (07:56 AM PST)||58.0.61||Enhancement||
ACM-6564: Added the m5a, m5ad, r5a and r5ad instance types for all the supported regions.
ACM-6585: Added the c5.12xlarge and c5.24xlarge instance types for all the supported regions.
JUPY-667: In the isolated mode, the Livy session name is now unique to facilitate multiple parallel executions of a notebook.
JUPY-654: Jupyter notebook commands were being marked as failed although the notebooks ran successfully. This issue is fixed.
JUPY-659: The create, update, and rename APIs of Jupyter notebooks were not handling a few special characters in name validation. This issue is fixed.
JUPY-665: Jupyter notebook execution failed when more than 10 notebooks were run from the scheduler in parallel. This issue is fixed.
|5th Mar, 2020 (05:56 AM PST)||58.0.59||New feature||QUEST-579: Quest is now available as a BETA feature for all the user accounts. Beta|
QUEST-350: Users can now see the command logs of the test run in the Events tab. Details such as started, queued, running, cancelled or stopped are displayed. Additionally, an event is also displayed when the connection to Kafka source is not established.
QUEST-502: Users have an option to add the timestamp column in their data frame for Kafka and Kinesis as sources.
QUEST-492: When creating assisted pipelines, users do not have to specify a name of the connection when adding source and sink. The Name your connection field is removed from the Source and Sink section of the Quest UI.
QUEST-500: Pipelines can be in one of the following defined states:
QUEST-501: Users can now delete an operator sequentially starting from the last operator added in Assisted pipeline mode from the Quest UI and by using the REST API.
For more information about the REST API, see Delete Pipeline Operator.
QUEST-512: Users can now edit the pipelines that are in the running state. If the user edits a running pipeline that was created by using the assisted mode, the running pipeline is opened for edit in the BYOC mode. After editing the pipeline, the user must re-deploy the pipeline for the changes to take effect.
Users can discard all the edits or changes made after the pipeline was started by using the Discard option in the UI.
|Bug fix||QUEST-600: The name of a cloned pipeline is now appended with the sibling count of the parent pipeline.|
|4th Mar, 2020 (01:12 PM PST)||58.0.58||Bug fix||
ACM-6397: The issue in which fetching engine configuration caused command failures is resolved now.
ACM-6515: Fixed a bug where a local node bootstrap file was deleted and hence it did not get executed.
ACM-6531: The issue where clusters with version R56 got terminated due to health check failure is resolved now.
QHIVE-5118: You can use the Beeline client to escape control characters in results with a HiveServer2 (HS2) cluster. For more information on the Beeline command shell, see the Apache documentation, HiveServer 2 Clients: Beeline - Command Line Shell.
When this feature is enabled, SELECT queries with the LIMIT clause less than or equal to 999 do not launch a Hadoop job on the cluster. You can get this feature enabled by contacting Qubole Support. Via Support
QHIVE-5194: Fixed the issue that caused query results to be displayed in the ASCII format when the Hive table data was in the Parquet format.
|25th Feb, 2020 (07:19 AM PST)||58.0.54||Bug fix||HADTWO-2367: Qubole has backported HDFS-11499 to fix the issue in HDFS where DFS clients could not close a file as its last block did not have sufficient number of replicas. For more details, see HDFS-11499 and HDFS-11486.|
|24th Feb, 2020 (06:20 AM PST)||58.0.52||Bug fix||
JUPY-621: The Spark progress widget in Jupyter Notebooks displayed a harmless exception sometimes. This issue is fixed.
JUPY-636: Python kernels were not running with the environment attached to the cluster. This issue is fixed.
|18th Feb, 2020 (01:07 AM PST)||58.0.49||Bug fix||
PRES-3200: Presto did not support Hash in Ranger with Presto 0.208. It now supports Hash in Ranger with Presto 0.208.
PRES-3242: Parquet binary statistics generated before PARQUET-251 were corrupted. To resolve this, Presto ignores Parquet binary statistics generated before PARQUET-251 was fixed.
PRES-3371: The issue where the Presto QueryInfo for a query was not retrievable is resolved now.
PRES-3421: Presto has improved CBO logic to prefer broadcast join in some queries for better performance.
|13th Feb, 2020 (07:09 AM PST)||58.0.46||Bug fix||ZEP-4330: Clusters start-up time had increased for the accounts with proxy settings as packages were taking a long time to load. This issue is fixed.|
|11th Feb, 2020 (01:50 AM PST)||58.0.45||Bug fix||ZEP-4322: The changes in notebook paragraphs were not saved and the paragraphs were reverting to the previous state. This issue is fixed.|
|9th Feb, 2020 (07:54 AM PST)||58.0.43||Bug fix||ACM-6456: To avoid unavailability of or interruptions in spot nodes, set the required IAM permissions
as defined in Additional Permissions. This ensures better fulfillment and resiliency of spot requests.
Qubole has restored the fallback to use
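An illustrative IAM policy fragment with spot-related EC2 actions; this is a sketch only, and the authoritative permission list is the Additional Permissions documentation referenced above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RequestSpotInstances",
        "ec2:CancelSpotInstanceRequests",
        "ec2:DescribeSpotInstanceRequests",
        "ec2:DescribeSpotPriceHistory"
      ],
      "Resource": "*"
    }
  ]
}
```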
|7th Feb, 2020 (12:33 PM PST)||58.0.42||Bug fix||HADTWO-2413: Inline credentials were missing from output’s results when a user queried an s3 path (with inline credentials) through s3a filesystem. This issue has been fixed now.|
|6th Feb, 2020 (12:37 PM PST)||58.0.41||Bug fix||PRES-3419: Updating and pushing configuration operations failed for non-Presto clusters that were on R57 and earlier versions due to validation of the Presto version. This issue is resolved as non-Presto clusters do not validate the Presto versions now.|
|5th Feb, 2020 (5:41 PM PST)||58.0.40||Bug fix||PRES-3412: Fixed a bug in the Ranger plugin that occurred while using Ranger user groups with Presto 317 (beta).|
|3rd Feb, 2020 (11:13 PM PST)||58.0.38||Major release||R58 Upgrade (Phase 1 frontend)|
|30th Jan, 2020 (03:53 AM PST)||57.0.111||Bug fix||
AD-3111: Access to the get_creds API is now restricted to only the credentials of the default bucket.
PRES-3395: Fixed a bug that caused some users to not see logs for their Presto queries that failed fast (for example, due to syntax errors).
|20th Jan, 2020 (06:28 AM PST)||57.0.106||Enhancement||AN-2201: Workbench displays columns in the same order as that of the describe <table>. Clicking on Name sorts them in the ascending order. Beta|
|Bug fix||ZEP-4275: Downloading dependencies in notebooks failed because maven stopped support for http. This issue is fixed and now all the references to maven use https.|
|16th Jan, 2020 (03:17 AM PST)||57.0.103||Enhancement||
ACM-5876: Qubole now supports
ACM-6032: Qubole now supports
ACM-6165: Qubole now supports
QHIVE-5051: Fixed the issue where reading column statistics failed for a column of date type in a partitioned table. Qubole has backported the open-source fix, HIVE-20098, to all Hive versions.
QTEZ-497: Fixed the issue which caused very large Tez DAG submission to fail with
|10th Jan, 2020 (03:46 AM PST)||57.0.100||Enhancement||EAM-1801: Bastion support for the Snowflake data store is introduced so that users can use bastion nodes to allow IPs and ensure more security (Disabled, Via Support, Cluster Restart Required).|
|19th Dec, 2019 (02:59 AM PST)||57.0.98||Enhancement||
AD-471: When you allow an IP address, you can now add a description as required. The description can contain a maximum of 255 characters. It is supported through both the UI and the API. For more information, see Listing Allowed IP Addresses and Add an Allowed IP Address.
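A client-side sketch of the documented constraints (the IP address must parse, the description is capped at 255 characters); the returned dict shape is illustrative, not the real API schema:

```python
import ipaddress

MAX_DESCRIPTION_LEN = 255  # limit stated in this changelog entry

def validate_allowed_ip(address, description):
    """Validate an allow-list entry before submitting it (illustrative)."""
    # Raises ValueError for a malformed IP address or CIDR block.
    ipaddress.ip_network(address, strict=False)
    if len(description) > MAX_DESCRIPTION_LEN:
        raise ValueError("description exceeds 255 characters")
    return {"ip": address, "description": description}

print(validate_allowed_ip("203.0.113.0/24", "office VPN"))
```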
AD-645: Qubole has introduced a new monthly Qubole Compute Unit (QCU) API and it has deprecated the old QCUH and Monthly Usage API. For more information, see View your Qubole Compute Unit (QCU) Usage.
PRES-3234: Presto 317 (beta) now supports AWS Glue metastore.
PRES-3157: Qubole has fixed the
PRES-3240: The optimization to finish join tasks early if their probe side is empty, has been removed as it deadlocks the query execution if new nodes join the cluster while the query is executing.
PRES-3282: Presto has added support for Lambdas in
|18th Dec, 2019 (11:15 PM PST)||57.0.94||Enhancement||
AN-2078: While selecting a cluster for a Hive, Presto, or Spark command, you can view its memory and CPU usage and its Hive metastore connectivity. Reviewing this information helps you make an informed decision on which cluster to choose for submitting commands. Beta
AN-1584: Workbench displays clusters sorted by Up, Pending, Terminating, and Down. Within each set, cluster labels are sorted alphabetically. Beta
AN-2348: Workbench now supports the read-only view for Spark notebook commands. You can use these to view logs, results, and resource links for the selected command. Beta
AN-2275: The command composer can now be resized for Spark and Shell commands. Beta
|Bug fix||AN-2261: Fixed an issue that sometimes caused the command preview text to not display properly if a keyword search was performed. Beta|
|2nd Dec, 2019 (12:35 AM PST)||57.0.81||Bug fix||ACM-6109: The issue in which the cluster start failed due to the cluster settings’ file upload failure to SSE-KMS enabled buckets has been resolved.|
|26th Nov, 2019 (11:40 PM PST)||57.0.80||Bug fix||ACM-6044: The issue in which using
|22nd Nov, 2019 (07:44 AM PST)||57.0.79||Bug fix||ACM-5991: The issue in which a cluster containing spot nodes in its composition returned
|21st Nov, 2019 (05:04 PM PST)||57.0.77||Enhancement||
AD-2805: You can now validate an account using AWS simulation. This helps in identifying any missing permissions.
ZEP-493: Bitbucket is integrated with notebooks. You can use Bitbucket to manage versions of your notebooks. Learn more.
|20th Nov, 2019 (02:21 AM PST)||57.0.72||Enhancement||
AIR-404: Users can now provide a custom on_failure_callback for QuboleCheckOperator, just like QuboleOperator. It allows users to define a custom callback when the QuboleCheckOperator fails (Cluster Restart Required).
QUEST-385: Users must create Spark Streaming clusters to run Spark structured streaming pipelines at scale with the Quest features.
QUEST-443: Users can use the new Filter option in the Pipelines view to filter the Pipelines based on their state.
QUEST-435, QUEST-378, QUEST-377, QUEST-403: Users can now edit, clone, archive, and delete pipelines depending on the State of the pipelines.
QUEST-406: Users can now modify the name of the pipeline when the pipeline is in the Draft state.
QUEST-405: Users can add Amazon S3 as a source when creating streaming pipelines.
QUEST-413, QUEST-454: Users can view the health of the streaming pipelines in the left pane of the Pipelines view.
QUEST-334: Users can fetch schemas from the Kafka cluster. For more information, see https://github.com/confluentinc/schema-registry.
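The Confluent Schema Registry exposes schemas over a REST API; `GET /subjects/{subject}/versions/latest` is its documented endpoint for the latest registered schema. A small sketch of building that URL (the registry host and subject name are examples):

```python
def latest_schema_url(registry_base, subject):
    """Endpoint for the latest registered schema of a subject, per the
    Confluent Schema Registry REST API."""
    return f"{registry_base.rstrip('/')}/subjects/{subject}/versions/latest"

print(latest_schema_url("http://registry:8081", "orders-value"))
# http://registry:8081/subjects/orders-value/versions/latest
```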
|New feature||PRES-3070: Presto version 317 (beta) is now available on the Presto cluster.|
PRES-2990: Improved efficiency of dynamic partition pruning by preventing listing and creation of Hive splits from partitions, which are pruned at runtime.
PRES-3112: Qubole has introduced a feature to enable dynamic partition pruning on Hive tables at account level. This feature is part of Gradual Rollout.
PRES-3051: Fixed the
PRES-3113: Improved accounting of queued work for calculation of optimal size in autoscaling.
PRES-3131: Qubole has added
PRES-3163: Fixed a bug that could cause some additional nodes to be added to the cluster for a short duration during spot loss.
PRES-3187: Fixed a bug that caused the
|14th Nov, 2019 (11:40 AM PST)||57.0.64||Enhancement||ACM-5958: Qubole has improved the loading of the cluster UI page to resolve slow page loads.|
ZEP-4081: Notebooks were failing due to jar conflicts with Spark. Thrift version used by Zeppelin is upgraded to 0.9.3 to fix this issue.
ZEP-4119: Python environment for the Notebooks was not getting configured properly. This issue is fixed now.
|24th Oct, 2019 (4:45 AM PST)||57.0.50||Bug fix||ACM-5798: Fixed warning notifications that showed a timestamp as the timeout value. To resolve this, Qubole provides a configuration to control the warning notifications for a given cluster. To know more, see Configuring Query Runtime Settings.|
|21st Oct, 2019 (8:42 PM PST)||57.0.48||Bug fix||
ACM-5937: Fixed a bug where spot rebalancing was triggered incorrectly, which could add 10% more nodes than the maximum cluster size configuration.
HADTWO-2185: The issue where jobs were stuck after switching to the s3a filesystem has been resolved.
|10th Oct, 2019 (03:46 PM PST)||57.0.40||Bug fix||ACM-5866: Fixed the issue where a cluster could not be started due to lack of disk space. Qubole has increased the root disk size from 72 GB to 90 GB for all cluster types, which increases the cost by 1.8 USD per cluster node per month.|
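The stated cost increase checks out as back-of-the-envelope arithmetic, assuming general-purpose EBS at roughly 0.10 USD per GB-month (an assumed rate; actual pricing varies by region and volume type):

```python
# Root disk grew from 72 GB to 90 GB per cluster node.
extra_gb = 90 - 72
# Assumed gp2-style EBS rate in USD per GB-month (illustrative).
price_per_gb_month = 0.10
monthly_cost_increase = round(extra_gb * price_per_gb_month, 2)
print(monthly_cost_increase)  # 1.8, matching the 1.8 USD noted above
```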
|6th Oct, 2019 (11:13 PM PST)||57.0.35||Major release||R57 Upgrade (Phase 1 frontend)|