Edit a Schedule

PUT /api/v1.2/scheduler/(Scheduler ID)

Use this API to edit an existing schedule that was created to run commands automatically at specified intervals. To edit a schedule, send a PUT request containing only the attributes that you want to modify.
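
For example, a minimal sketch of a request that only renames a schedule might look like the following; the schedule ID 123 and the new name are placeholders:

# Minimal sketch: only the attributes present in the request body are changed; the ID and name are placeholders
curl -i -X PUT -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Accept: application/json" -H "Content-type: application/json" \
-d '{ "name": "daily_revenue_report" }' \
"https://api.qubole.com/api/v1.2/scheduler/123"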

Required Role

The following users can make this API call:

  • Users who belong to the system-admin group.

  • Users who belong to a group associated with a role that allows editing a schedule. See Managing Groups and Managing Roles for more information.

Parameters

Note

Parameters marked in bold below are mandatory. Others are optional and have default values.

Parameter

Description

command_type

A valid command type supported by Qubole. For example, HiveCommand, HadoopCommand, PigCommand.

command

JSON object describing the command. Refer to the Command API for more details.

Sub fields can use macros. Refer to the Qubole Scheduler for more details.

name

A user-defined name for a schedule. If name is not specified, then a system-generated Schedule ID is set as the name.

label

Specify a cluster label that identifies the cluster on which the schedule API call must be run.

start_time

Start datetime for the schedule

end_time

End datetime for the schedule

frequency

Set this option or cron_expression but do not set both options. Specify how often the schedule should run. Input is an integer. For example, a frequency of one hour, day, or month (depending on time_unit) is represented as {"frequency":"1"}.

time_unit

Denotes the time unit for the frequency. Its default value is days. Accepted values are minutes, hours, days, weeks, and months.

cron_expression

Set this option or frequency but do not set both options. The standard cron format is "s, m, h, d, M, D, Y" where s is second, m is minute, h is hour, d is date, M is month, and D is day of the week. Only the year (Y) is optional. Example: "cron_expression":"0 0 12 * * ?". A sample request body that uses cron_expression appears after this table. For more information, see Cron Trigger Tutorial.

macros

Expressions to evaluate macros. Macros can be used in parameterized commands.

Refer to the Macros in Scheduler page for more details.

no_catch_up

Set this parameter to true if you want to skip schedule actions that were supposed to have run in the past and run only the current and upcoming schedule actions. By default, this parameter is set to false. When a new schedule is created, the scheduler runs schedule actions from the start time to the current time. For example, if on Dec 1, 2015 you create a daily schedule with a start time of Jun 1, 2015, schedule actions are run for Jun 1, 2015, Jun 2, 2015, and so on. If you do not want the scheduler to run the missed schedule actions for the months before December, set no_catch_up to true. Skipping schedule actions is mainly useful when you suspend a schedule and resume it later; in that case there is more than one pending schedule action and you might want to skip the earlier ones. For more information, see Understanding the Qubole Scheduler Concepts.

time_zone

Timezone of the start and end time of the schedule.

The scheduler understands ZoneInfo identifiers, for example, Asia/Kolkata.

For a list of identifiers, see column 3 in the List of tz database time zones.

Default value is UTC.

command_timeout

The command timeout is configurable in seconds. Its default value is 129600 seconds (36 hours) and any value that you set must be less than 36 hours. QDS checks whether a command has timed out every 60 seconds. If the timeout is set to 80 seconds, the command gets killed at the next check, that is, after 120 seconds. By setting this parameter, you can prevent a command from running for the full 36 hours.

time_out

The maximum amount of time, in minutes, that the schedule should wait for its dependencies to be satisfied.

concurrency

Specify how many schedule actions can run at a time. Default value is 1.

dependency_info

Describe dependencies for this schedule.

Check the Hive Datasets as Schedule Dependency for more information.

notification

It is an optional parameter that is set to false by default. Set it to true if you want to be notified through email about instance failures. The notification table below describes its sub-parameters; a sample request body that sets them follows that table.
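
To illustrate how the top-level parameters combine in a single edit, the following sketch switches a schedule to a cron-based trigger and adjusts its time zone and command timeout. The schedule ID and the specific values are assumptions for the sake of the example; remember that cron_expression replaces frequency, so do not send both.

# Example only: the ID and values are placeholders; send either cron_expression or frequency, not both
curl -i -X PUT -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Accept: application/json" -H "Content-type: application/json" \
-d '{ "cron_expression": "0 0 12 * * ?", "time_zone": "Asia/Kolkata", "command_timeout": 3600 }' \
"https://api.qubole.com/api/v1.2/scheduler/123"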

notification

Parameter

Description

is_digest

It is a notification email type that is set to true if the schedule periodicity is in minutes or hours. If it is set to false, the email type is immediate by default.

notify_failure

If this option is set to true, you receive schedule failure notifications.

notify_success

If this option is set to true, you receive schedule success notifications.

notification_email_list

By default, the current user’s email ID is added. You can add additional email IDs as required.
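
The sketch below shows one way the notification sub-parameters might be sent in an edit request. The schedule ID, the email address, and the exact value format of notification_email_list (shown here as a JSON array) are assumptions; verify the expected format against the Create a Schedule API before relying on it.

# Sketch only: the address and the list format of notification_email_list are assumptions
curl -i -X PUT -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Accept: application/json" -H "Content-type: application/json" \
-d '{ "notification": { "is_digest": true, "notify_failure": true, "notify_success": false, "notification_email_list": ["user@example.com"] } }' \
"https://api.qubole.com/api/v1.2/scheduler/123"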

dependency_info

Parameter

Description

files

Use this parameter if there is a dependency on S3 files; it has the following sub-options. For more information, see Configuring S3/Azure Blob Storage Files Data Dependency.

path

It is the S3 path of the dependent file (with data) based on which the schedule runs.

window_start

It denotes the start day or time.

window_end

It denotes the end day or time.

hive_tables

Use this parameter if there is a dependency on partitioned Hive table data. For more information, see Configuring Hive Tables Data Dependency. A sample request body that uses hive_tables appears after this table.

schema

It is the database that contains the partitioned Hive table.

name

It is the name of the partitioned Hive table.

window_start

It denotes the start day or time.

window_end

It denotes the end day or time.

interval

It denotes the dataset interval and defines how often the data is generated. Hive Datasets as Schedule Dependency provides more information. You must also specify the incremental time that can be in minutes, hours, days, weeks, or months. The usage is "interval":{"days":"1"}. The default interval is 1 day.

column

It denotes the partitioned column name. You must also specify a date-time mask through the the_date parameter, which denotes how to convert the date to a string for the partition. The usage is "columns":{"the_date":"<value>"}, where <value> can be a macro or a string.
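
The following sketch shows one way the hive_tables sub-parameters might be combined in an edit request, assuming a table partitioned by a the_date column. The schema, table name, window values, date mask, and schedule ID are placeholders, and the exact nesting (a list of table objects here) is an assumption; see Configuring Hive Tables Data Dependency for the authoritative format.

# Sketch only: values and the nesting of hive_tables are assumptions; see Configuring Hive Tables Data Dependency
curl -i -X PUT -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Accept: application/json" -H "Content-type: application/json" \
-d '{ "dependency_info": { "hive_tables": [ { "schema": "default", "name": "daily_tick_data", "window_start": "-1", "window_end": "0", "interval": {"days": "1"}, "columns": {"the_date": "%Y-%m-%d"} } ] } }' \
"https://api.qubole.com/api/v1.2/scheduler/123"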

Response

The response contains a JSON object representing the edited schedule.

Note

There is a limit on the number of schedule reruns that can be processed concurrently at a given point in time. Understanding the Qubole Scheduler Concepts provides more information.

Example

Sample 1

Goal: Modify a schedule to run every 30 days

curl -i -X PUT -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Accept: application/json" -H "Content-type: application/json" \
-d '{ "frequency": 30, "time_unit": "days" }' \
"https://api.qubole.com/api/v1.2/scheduler/3159"

Note

The above syntax uses https://api.qubole.com as the endpoint. Qubole provides other endpoints to access QDS that are described in Supported Qubole Endpoints on Different Cloud Providers.

Response

{
 "email_list":"[email protected]",
 "dependency_info":{},
 "end_time":"2022-07-01 02:00",
 "status":"RUNNING",
 "no_catch_up":false,
 "label":"default",
 "concurrency":1,
 "frequency":30,
 "time_zone":"UTC",
 "template":"generic",
 "command":{
            "sample":false,"loader_table_name":null,"md_cmd":null,"approx_mode":false,"query":"select stock_symbol, max(high), min(low), sum(volume) from daily_tick_data where date1='$formatted_date$' group by stock_symbol","loader_stable":null,"script_location":null,"approx_aggregations":false
           },
 "user_id":108,
 "is_digest":false,
 "time_unit":"days",
 "digest_time_hour":0,
 "macros":[{"formatted_date":"Qubole_nominal_time.format('YYYY-MM-DD')"}],
 "incremental":{},
 "bitmap":0,
 "digest_time_minute":0,
 "can_notify":false,
 "command_type":"HiveCommand",
 "name":"3159",
 "start_time":"2012-07-01 02:00",
 "time_out":10,
 "id":3159,
 "next_materialized_time":"2012-07-07 02:00"
}

Sample 2

Goal: Modify a workflow command in a schedule

curl -i -X PUT -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Accept: application/json" -H "Content-type: application/json" -d \
'{"command_type": "CompositeCommand",
  "command":{ "sub_commands":
  [
   {
    "command_type": "SparkCommand",
    "language":"command_line",
    "cmdline": "A=123"
   },
   {
    "command_type": "SparkCommand",
    "language":"command_line",
    "cmdline": "B=456"
   }
  ]}
 }' "https://api.qubole.com/api/v1.2/scheduler/3159"