Compose a Shell Command

Use the command composer on Workbench to compose a shell command.

Note

Hadoop 2 and Spark clusters support shell commands. See Mapping of Cluster and Command Types for more information. Some cloud platforms do not support all cluster types.

Qubole does not recommend running a Spark application as a Bash command under the Shell command options, because automatic changes, such as an increase in the Application Coordinator memory based on the driver memory and the availability of debug options, do not happen. Those automatic changes occur only when you run a Spark application through the Command Line option. The sketch below illustrates the discouraged pattern.
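
The following is a minimal sketch of the pattern the note warns against: submitting a Spark application from a Shell command. The class name, JAR path, and memory setting are hypothetical.

    # Discouraged: when this runs as a Shell command, Qubole does not
    # automatically raise the Application Coordinator memory to match
    # --driver-memory, and debug options are unavailable. Use the Spark
    # command type's Command Line option instead.
    spark-submit \
      --class com.example.MyApp \
      --driver-memory 4g \
      s3://example-bucket/jars/my-app.jar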

Perform the following steps to compose a shell command:

  1. Navigate to Workbench and click + New Collection.

  2. Select Shell from the command type drop-down list.

  3. Choose the cluster on which you want to run the command. View the health metrics of a cluster before you decide to use it.

  4. Shell Script is selected by default in the drop-down list (upper-right corner of the screen). Enter your shell command in the text field.

    or

    To run a stored script, select Script Location from the drop-down list, then specify the cloud storage path that contains the shell command file. (Sketches of both variants follow these steps.)

  5. Add macro details (as needed).

  6. If you are using a stored script, specify any script parameters in the text field.

  7. In the Optional List of Files text field, optionally list files (separated by commas) to be copied from cloud storage to the working directory where the command runs (see the first sketch after these steps).

  8. In the Optional List of Archives text field, optionally list archive files (separated by commas) to be uncompressed in the working directory where the command runs.

  9. Click Run to execute the command.
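
The following is a minimal sketch of an inline shell command, tying together steps 4, 7, and 8. All file names are hypothetical: it assumes data.csv was listed in the Optional List of Files field and tools.tar.gz, containing run_analysis.sh at its top level, was listed in the Optional List of Archives field.

    # Listed files are copied into the working directory, and listed
    # archives are uncompressed there, before the command runs.
    ls -l                           # confirm data.csv and the extracted archive are present
    head -n 5 data.csv              # inspect the copied file
    bash run_analysis.sh data.csv   # run a script extracted from tools.tar.gz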
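
Similarly, here is a sketch of a stored script for the Script Location variant (steps 4 and 6). The cloud storage path is hypothetical, and it assumes that values entered in the script parameters field reach the script as positional arguments.

    #!/bin/bash
    # Stored at, for example, s3://example-bucket/scripts/report.sh and
    # referenced through Script Location. With "2024-01-01 us-east" in
    # the script parameters field, $1 and $2 hold those two values.
    REPORT_DATE="$1"
    REGION="$2"
    echo "Generating report for ${REGION} on ${REPORT_DATE}"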

Monitor the progress of your job using the Status and Logs panes; a switch lets you toggle between the two. The Status pane also displays useful debugging information if the command does not succeed. For more information on how to download command results and logs, see Get Results.
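
If you prefer to fetch logs and results outside the UI, the following is a rough sketch using the qds-sdk-py command-line tool (qds.py); the subcommand names, the QDS_API_TOKEN variable, and the command ID are assumptions based on that SDK rather than part of this page.

    # Assumed qds-sdk-py CLI usage; 123456 is a placeholder command ID.
    export QDS_API_TOKEN='<your-api-token>'
    qds.py shellcmd getlog 123456      # download the command's logs
    qds.py shellcmd getresult 123456   # download the command's results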