Clone a Schedule on Microsoft Azure
- POST /api/v1.2/scheduler/(SchedulerID)/duplicate
Use this API to clone an existing schedule by providing a new schedule name.
Required Role
The following users can make this API call:
Users who belong to the system-user or system-admin group.
Users who belong to a group associated with a role that allows cloning a schedule. See Managing Groups and Managing Roles for more information.
Parameters
Note
Parameters marked in bold below are mandatory. Others are optional and have default values.
Parameter | Description
---|---
command_type | A valid command type supported by Qubole. For example, HiveCommand, HadoopCommand, PigCommand.
command | JSON object describing the command. Refer to the Command API for more details. Sub fields can use macros. Refer to the Qubole Scheduler for more details.
start_time | Start datetime for the schedule.
end_time | End datetime for the schedule.
frequency | Set this option or cron_expression, but not both. It is an integer that denotes how often the schedule runs, measured in the unit given by time_unit.
time_unit | Denotes the time unit for the frequency parameter, for example, minutes, hours, or days. The default value is days.
cron_expression | Set this option or frequency, but not both. It is a standard cron expression that defines when the schedule runs.
name | A user-defined name for a schedule. If a name is not specified, the system-generated schedule ID is set as the name. While cloning an existing schedule, you must change the name.
label | Specify a cluster label that identifies the cluster on which the schedule API call must be run.
macros | Expressions to evaluate macros. Macros can be used in parameterized commands. Refer to the Macros in Scheduler page for more details.
no_catch_up | Set this parameter to true to skip missed schedule instances instead of catching them up. The default value is false.
time_zone | Timezone of the start and end time of the schedule. The scheduler understands ZoneInfo identifiers, for example, Asia/Kolkata. For a list of identifiers, see column 3 in the List of tz database time zones. The default value is UTC.
command_timeout | The command timeout, configurable in seconds. Its default value is 129600 seconds (36 hours) and any other value that you set must be less than 36 hours. QDS checks the timeout for a command every 60 seconds, so if the timeout is set to 80 seconds, the command is killed at the next check, that is, after 120 seconds. By setting this parameter, you can prevent the command from running for the full 36 hours.
time_out | The maximum amount of time, in minutes, that the schedule should wait for its dependencies to be satisfied.
concurrency | Specify how many schedule actions can run at a time. The default value is 1.
dependency_info | Describes the dependencies for this schedule. See Hive Datasets as Schedule Dependency and the dependency_info table below for more information.
notification | An optional parameter that is set to false by default. Set it to true if you want to be notified through email about instance failure. The notification table below provides more information.
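For reference, a request body that overrides several of the optional parameters above might look like the following sketch. All values are illustrative or drawn from examples elsewhere on this page, and it is assumed that parameters you omit are inherited unchanged from the schedule being cloned.
{
    "name": "schedule1",
    "label": "default",
    "frequency": 1,
    "time_unit": "days",
    "time_zone": "Asia/Kolkata",
    "no_catch_up": false,
    "macros": [{"formatted_date": "Qubole_nominal_time.format('YYYY-MM-DD')"}],
    "command_timeout": 3600
}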
notification
Parameter | Description
---|---
is_digest | A notification email type that is set to false by default. Set it to true to receive notifications as a digest email.
notify_failure | If this option is set to true, you receive schedule failure notifications.
notify_success | If this option is set to true, you receive schedule success notifications.
notification_email_list | By default, the current user's email ID is added. You can add additional email IDs as required.
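A notification object built from the fields above might look like the following sketch. The email address is a placeholder, and representing notification_email_list as a JSON array is an assumption; check your account settings for the addresses you want to notify.
{
    "notification": {
        "is_digest": false,
        "notify_failure": true,
        "notify_success": false,
        "notification_email_list": ["user@example.com"]
    }
}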
dependency_info
Parameter | Sub-option | Description
---|---|---
files | | Use this parameter if there is a dependency on Azure blob storage files. It has the following sub-options. For more information, see Configuring S3/Azure Blob Storage Files Data Dependency.
 | path | The Azure blob storage path of the dependent file (with data) based on which the schedule runs.
 | window_start | It denotes the start day or time.
 | window_end | It denotes the end day or time.
hive_tables | | Use this parameter if there is a dependency on Hive table data that has partitions. It has the following sub-options. For more information, see Configuring Hive Tables Data Dependency.
 | schema | The database that contains the partitioned Hive table.
 | name | The name of the partitioned Hive table.
 | window_start | It denotes the start day or time.
 | window_end | It denotes the end day or time.
 | interval | It denotes the dataset interval and defines how often the data is generated. Hive Datasets as Schedule Dependency provides more information. You must also specify the incremental time, for example, in minutes, hours, or days.
 | column | It denotes the partitioned column name. You must also specify the date-time mask used by the partition values.
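As a sketch, a dependency_info object that combines both dependency types could look like the following. It assumes files and hive_tables each take a list of objects with the sub-options above; the storage path, table name, window values, and the nested encoding of interval and column are illustrative and should be verified against Configuring S3/Azure Blob Storage Files Data Dependency and Configuring Hive Tables Data Dependency.
{
    "dependency_info": {
        "files": [
            {
                "path": "wasb://<container>@<account>.blob.core.windows.net/data/%Y-%m-%d/",
                "window_start": "-1",
                "window_end": "0"
            }
        ],
        "hive_tables": [
            {
                "schema": "default",
                "name": "daily_tick_data",
                "window_start": "-1",
                "window_end": "0",
                "interval": {"days": "1"},
                "column": "date1"
            }
        ]
    }
}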
Response
The response contains a JSON object representing the cloned schedule.
Note
There is a limit on the number of schedule reruns that can be processed concurrently at a given point in time. Understanding the Qubole Scheduler Concepts provides more information.
Example
Goal: Clone an existing schedule (for example, schedule ID 3159) to create a new schedule. For more information on how to create a schedule, see Create a Schedule.
In Create a Schedule, we created a schedule that aggregates data every day, for every stock symbol and for each stock exchange. For example, if you want to edit the query to also calculate the total transaction amount for the stock in a day, provide the following request body.
{
"command_type":"HiveCommand",
"command": {
"query": "select stock_symbol, stock_exchange, max(high), min(low), sum(volume) from daily_tick_data where date1='$formatted_date$' group by stock_symbol, stock_exchange"
},
"start_time": "2012-11-01T02:00Z",
"end_time": "2022-10-01T02:00Z"
}
Command
curl -i -X POST -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Accept: application/json" -H "Content-type: application/json" \
-d '{ "name": "schedule1" }' \
"https://api.qubole.com/api/v1.2/scheduler/3159/duplicate"
Note
The above syntax uses https://api.qubole.com as the endpoint. Qubole provides other endpoints to access QDS that are described in Supported Qubole Endpoints on Different Cloud Providers.
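If you also want the clone to pick up the edited query shown earlier, you can send the command override together with the new name in the same duplicate call. The following is a sketch that assumes the duplicate endpoint honors the same overrides listed in the Parameters table above; the single quotes around $formatted_date$ are escaped ('\'') so the shell keeps the JSON payload intact.
curl -i -X POST -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Accept: application/json" -H "Content-type: application/json" \
-d '{"name": "schedule1", "command_type": "HiveCommand", "command": {"query": "select stock_symbol, stock_exchange, max(high), min(low), sum(volume) from daily_tick_data where date1='\''$formatted_date$'\'' group by stock_symbol, stock_exchange"}}' \
"https://api.qubole.com/api/v1.2/scheduler/3159/duplicate"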
Sample Response
{
"time_out":10,
"status":"RUNNING",
"start_time":"2012-07-01 02:00",
"label":"default",
"concurrency":1,
"frequency":1,
"no_catch_up":false,
"template":"generic",
"command":{
"sample":false,"loader_table_name":null,"md_cmd":null,"script_location":null,"approx_mode":false,"query":"select stock_symbol, max(high), min(low), sum(volume) from daily_tick_data where date1='$formatted_date$' group by stock_symbol","loader_stable":null,"approx_aggregations":false
},
"time_zone":"UTC",
"time_unit":"days",
"end_time":"2022-07-01 02:00",
"user_id":108,
"macros":[{"formatted_date":"Qubole_nominal_time.format('YYYY-MM-DD')"}],
"incremental":{},
"command_type":"HiveCommand",
"name":"schedule1",
"dependency_info":{},
"id":3160,
"next_materialized_time":null
}