Automatically Retrying Failed Presto Queries

PRES-1370: Qubole has added a new feature to automatically retry failed queries (when possible) as new nodes are added to the cluster during autoscaling. This can help queries succeed after a Spot node loss or during upscaling, as described in Presto autoscaling. This feature is disabled by default, and changing the cluster-level setting requires a cluster restart. You can also set it as a session property.

For more information, see the query retry mechanism.
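At the session level, the toggle would be set with a standard Presto ``SET SESSION`` statement. The property name below is hypothetical, chosen only for illustration; the query retry documentation linked above gives the actual name.

```sql
-- Enable automatic retry for queries in this session.
-- "retry_failed_queries" is a placeholder property name, not confirmed
-- by this release note; consult the query retry documentation.
SET SESSION retry_failed_queries = true;
```

Setting the property per session lets you opt individual workloads in or out without restarting the cluster.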

Presto Notebooks are Generally Available

PRES-1996: Presto Notebooks are generally available now with these changes:

  • Qubole has implemented a native Presto interpreter that shows a detailed progress percentage in the notebook UI.

  • Concurrency is now supported in notebooks. You can set the maximum concurrency through the zeppelin.presto.maxConcurrency property when creating the interpreter. The default value is 10.
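In the interpreter settings, the concurrency property mentioned above would be configured roughly as follows; the value 20 is only an example (the default is 10).

```properties
# Zeppelin Presto interpreter properties (illustrative values)
# Caps the number of notebook paragraphs that run concurrently
# against the cluster; defaults to 10 if unset.
zeppelin.presto.maxConcurrency=20
```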

Presto 0.208 is Supported

PRES-2169: Presto 0.208 is now the latest supported version. It supports file-based authentication, FastPath, and Dynamic Filtering. This version is in beta, and enabling it requires a cluster restart.
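As a rough sketch of what file-based authentication involves in Presto, the coordinator is pointed at a password file through an authenticator configuration like the one below. The property names and file paths here follow the open-source Presto convention and may differ in the Qubole distribution; treat them as an assumption and check the Presto 0.208 documentation for the exact settings.

```properties
# etc/password-authenticator.properties (illustrative; names and paths
# may differ in Qubole's Presto 0.208 distribution)
password-authenticator.name=file
file.password-file=/etc/presto/password.db
```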

Enhancements

  • PRES-1431: Presto now supports recommissioning decommissioned nodes back to the RUNNING state if a request to add nodes arrives before those nodes are removed from the cluster.

  • PRES-2087: Qubole now allows you to gracefully decommission problematic worker nodes (if any) during autoscaling in a Presto cluster. For more information, see Decommissioning a Worker Node.

Bug Fixes

  • PRES-2004: QDS now shows the new query tracker for clusters within a VPC. Previously, the old query tracker UI was displayed for such clusters.

  • PRES-2026: Queries with complex predicates on tables with many partitions could get stuck in the PLANNING stage. Query planning has been improved so that such queries execute faster.

  • PRES-2150: Resolved an issue in which a query failed intermittently with the Future should be done error while updating memory counters.

  • PRES-2232: Resolved failures of queries on the system.jdbc schema with PacketTooBigException when one of the Hive schemas contained a very large number of tables.