Presto

Changes in Presto Versions

  • PRES-2684: Presto 0.208 is now generally available. Cluster Restart Required
  • PRES-2598: Presto version 0.157 is deprecated. Presto 0.193 is now the default version.

Enhancements

  • PRES-2029: QDS intelligently chooses among a variety of JOIN types within a single query to improve the query’s performance.
  • PRES-2060: Adds Presto support for ABFS.
  • PRES-2267: Changes to the memory pool configuration for Presto 0.208:
    • The JVM heap size for worker nodes has been increased from 70% to 80% of the instance memory.
    • The default value of query.max-memory-per-node and query.max-total-memory-per-node is 30% of the JVM heap size.
    • The default value of memory.heap-headroom-per-node is 20% of the JVM heap size. This results in 30% of the heap for the reserved pool, 20% heap headroom for untracked memory allocations, and the remaining 50% of the heap for the general pool. (A worked example of these defaults follows this list.)
  • PRES-2397: QDS supports escaping newline (\n) and carriage return (\r) characters in data so that it is parsed correctly on the QDS UI. This enhancement is not enabled by default and is supported only with Presto 0.193 and later versions. Via Support
  • PRES-2417: Presto clusters do not terminate while actively running Presto notebook paragraphs. Via Support
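
A worked example of the PRES-2267 memory defaults, assuming a hypothetical worker with 100 GB of instance memory (QDS derives these values automatically; the figures are shown only to make the percentages concrete):

  JVM heap                        = 80% of 100 GB = 80 GB
  query.max-memory-per-node       = 30% of heap   = 24 GB
  query.max-total-memory-per-node = 30% of heap   = 24 GB  (reserved pool)
  memory.heap-headroom-per-node   = 20% of heap   = 16 GB  (untracked allocations)
  general pool                    = 80 GB - 24 GB - 16 GB = 40 GB  (remaining 50% of heap)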

Bug Fixes

  • PRES-1924: Fixes connection timeout issues between the Ruby client and the Presto coordinator, which resulted in the error: Connection refused - connect(2).
  • PRES-1968: Fixes an issue in Presto queries that caused the error: nesting of 101 is too deep.
  • PRES-2723: Fixes the problem that caused the error: Number of stages in the query (<n>) exceeds the allowed maximum (100). As of Presto 0.208, the maximum number of stages allowed in a query is configurable through the query.max-stage-count property, which defaults to 400 in QDS Presto. You can increase this value as needed (see the example at the end of this list).
  • PRES-2637: Fixes an issue that prevented RubiX from being enabled from the QDS UI.
  • PRES-2443: Fixes an issue where a Presto command that required starting a cluster failed during execution, even though the cluster came up.
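
If queries on Presto 0.208 still exceed the stage limit, query.max-stage-count can be raised through the cluster's Presto configuration override; a minimal sketch (the value 500 is only an illustration, not a recommendation):

  query.max-stage-count=500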