Create a Pipeline
- POST /api/v1.2/pipelines
Use this API to create a pipeline with your own custom JAR files (bring your own JAR, or BYOJ) or custom code (bring your own code, or BYOC).
Note
This API is not supported for creating pipelines in assisted mode. You can create pipelines in assisted mode from the Pipelines UI. For more information, see Building Assisted Streaming Pipelines.
Required Role
The following users can make this API call:
- Users who belong to the system-user or system-admin group.
- Users who belong to a group associated with a role that allows submitting a command.
- Users who belong to a role that allows access to the Pipelines resource.
See Managing Groups and Managing Roles for more information.
Parameters
Note
Parameters marked in bold below are mandatory. Others are optional and have default values.
Parameter | Description
---|---
**type** | Type of the resource. Possible value: `pipeline`.
**name** | Name of the pipeline.
**create_type** | Type of the pipeline to be created. Possible values: `2` (BYOJ) and `3` (BYOC).
**cluster_label** | Label of the Spark streaming cluster on which the pipeline runs.
jar_path | For BYOJ, the S3 path of the Spark application JAR.
main_class_name | For BYOJ, the main class of the Spark application.
code | For BYOC, the Spark application code, written in Scala or Python.
language | For BYOC, the language of the code. Possible values: `scala` and `python`.
user_arguments | Program command-line arguments.
command_line_options | Spark submit command-line options. For example, `--conf spark.driver.extraLibraryPath=/usr/lib/hadoop2/lib/native`.
can_retry | Boolean that specifies whether to retry the pipeline in case of failure. Retry is attempted twice. Possible values: `true` and `false`.
If you want to set up notifications, use the following parameters:
Parameter | Description
---|---
**type** | Type of the resource. Possible value: `pipeline/alerts`.
notification_channels | Array of notification channel IDs. For example, `[1,2]`.
can_notify | Boolean that specifies whether notifications are enabled. Possible values: `true` and `false`.
Request API Syntax
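All the examples in this section reference an AUTH_TOKEN shell variable. As a minimal setup sketch (the value shown is a placeholder; substitute the API token from your own Qubole account):

export AUTH_TOKEN="<your-qubole-api-token>"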
curl -i -X POST -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" \
-d '{
"data": {
"type": "pipeline",
"attributes": {
"name": "<pipeline-name>",
"description": "<pipeline-description>",
"create_type":<2-or-3>,
"properties": {
"cluster_label":"<cluster-label>",
"can_retry":<true-or-false>,
"command_line_options":"<command-option>",
"user_arguments":"<program-line-arguments>",
"code":"<custom code>"
"language":"<code-language>"
"jar_path":"<s3-path>"
"main_class_name":"<class-name>"
}
}
}
}' \
"https://api.qubole.com/api/v1.2/pipelines"
Use the following syntax to create a pipeline and specify notification channels.
curl -i -X POST -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" \
-d '{
"data": {
"type": "pipeline",
"attributes": {
"name": "<pipeline-name>",
"description": "<pipeline-description>",
"create_type":<2-or-3>,
"properties": {
"cluster_label":"<cluster-label>",
"can_retry":<true-or-false>,
"command_line_options":"<command-option>",
"user_arguments":"<program-line-arguments>",
"code":"<custom code>"
"language":"<code-language>"
"jar_path":"<s3-path>"
"main_class_name":"<class-name>"
}
},
"relationships": {
"alerts": {
"data": {
"type": "pipeline/alerts",
"attributes": {
"can_notify": true,
"notification_channels": [number-of-channels]
}
}
}
}
}
}' \
"https://api.qubole.com/api/v1.2/pipelines"
Sample API Request
The following example shows how to create a pipeline with custom code.
curl -i -X POST -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" \
-d '{
"data": {
"type": "pipeline",
"attributes": {
"name": "CreateComplete BYOC",
"description": "lorem ipsum",
"create_type": 3,
"properties": {
"cluster_label": "spark_24",
"can_retry": true,
"command_line_options": "--conf spark.driver.extraLibraryPath=/usr/lib/hadoop2/lib/native\n--conf testsid123",
"user_arguments": "optional-args",
"code": "import scala.math.random\nimport org.apache.spark._\nval slices = 6\nval n = 100000 * slices\n//spark context is available as sc or spark.\nval count = sc.parallelize(1 to n, slices).map { i =>\n val x = random * 2 - 1\n val y = random * 2 - 1\n if (x*x + y*y < 1) 1 else 0\n}.reduce(_ + _)\nprintln(\"Pi is roughly \" + 4.0 * count / n)",
"language": "scala"
}
}
}
}' \
"https://api.qubole.com/api/v1.2/pipelines"
The following example shows how to create a pipeline with custom code and set notifications.
curl -i -X POST -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" \
-d '{
"data": {
"type": "pipeline",
"attributes": {
"name": "CreateComplete BYOC",
"description": "lorem ipsum",
"create_type": 3,
"properties": {
"cluster_label": "spark_24",
"can_retry": true,
"command_line_options": "--conf spark.driver.extraLibraryPath=/usr/lib/hadoop2/lib/native\n--conf testsid123",
"user_arguments": "optional-args",
"code": "import scala.math.random\nimport org.apache.spark._\nval slices = 6\nval n = 100000 * slices\n//spark context is available as sc or spark.\nval count = sc.parallelize(1 to n, slices).map { i =>\n val x = random * 2 - 1\n val y = random * 2 - 1\n if (x*x + y*y < 1) 1 else 0\n}.reduce(_ + _)\nprintln(\"Pi is roughly \" + 4.0 * count / n)",
"language": "scala"
}
},
"relationships": {
"alerts": {
"data": {
"type": "pipeline/alerts",
"attributes": {
"can_notify": true,
"notification_channels": [
1,
2
]
}
}
}
}
}
}' \
"https://api.qubole.com/api/v1.2/pipelines"