Questions about QDS Clusters
3. In whose account are clusters launched?
QDS launches clusters in your Cloud account, using your storage and compute credentials.
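Because the cluster nodes run in your own Cloud account, you can inspect them with your Cloud provider's standard tooling. The following minimal sketch, assuming an AWS account, the boto3 SDK, and the us-east-1 region (substitute the region your clusters use), lists running EC2 instances along with their tags so you can identify which ones belong to your QDS clusters. It is only an illustration of looking at your own account; the code is not part of QDS, and no specific Qubole tag names are assumed.

# Minimal sketch: list running EC2 instances in your own AWS account.
# Since QDS launches clusters in your account, cluster nodes appear here
# alongside any other instances you run; their tags can help you tell
# them apart (exact tag names are not assumed here).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # use your clusters' region

paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            print(instance["InstanceId"], instance["InstanceType"], tags)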