Import a Notebook

POST /api/v1.2/notebooks/import

Use this API to import a notebook from a specified location and add it to the notebooks list in the QDS account. As a prerequisite, you must ensure that the object in cloud storage or on GitHub is publicly accessible.

Required Role

The following users can make this API call:

  • Users who belong to the system-user or system-admin group.
  • Users who belong to a group associated with a role that allows submitting a command. See Managing Groups and Managing Roles for more information.

Parameters

Note

Parameters marked in bold below are mandatory. Others are optional and have default values. Presto is not currently supported on all Cloud platforms; see QDS Components: Supported Versions and Cloud Platforms.

Parameter Description
name The name of the notebook. It is a string and accepts alphanumeric characters.
location The location of the notebook folder. The accepted locations are Users/current_user_email_id, Common, and Users. The default is Users/current_user_email_id, which is equivalent to My Home on the Notebooks UI. You need privileges to create or edit notebooks in Common and Users; see Managing Folder-level Permissions. For more information on notebook folders, see Using Folders in Notebooks.
note_type The type of the notebook. The accepted values are spark and presto.
file Specify this parameter only to import a notebook from a location on the local hard disk. The complete location path must start with @/. For example, "file":"@/home/spark....
nbaddmode Use this parameter only with the file parameter. Its value is import-from-computer.
url The AWS S3 or GitHub location, a valid JSON URL, or an ipynb URL of the notebook that you want to import.
cluster_id The ID of the cluster to which the notebook is assigned. If you specify this parameter, the notebook is imported with the cluster attached.

Request API Syntax

Syntax to use for importing a notebook that is in an AWS S3 bucket.

curl -X "POST" -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: multipart/form-data" -H "Accept: application/json" \
-F "name=<Name>" -F "location=<Location>" -F "note_type=<Note Type>" \
-F "url=<S3 location in the URL format/valid JSON URL/ipynb URL>" \
"https://api.qubole.com/api/v1.2/notebooks/import"

Syntax to use for importing a notebook that is on GitHub.

curl -X "POST" -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: multipart/form-data" -H "Accept: application/json" \
-F "name=<Name>" -F "location=<Location>" -F "note_type=<Note Type>" \
-F "url=<GitHub location in the URL format/valid JSON URL/ipynb URL>" \
"https://api.qubole.com/api/v1.2/notebooks/import"

Syntax to use for importing a notebook that is on a local hard disk.

Note

When specifying the file location on the local hard disk, you must start it with @/. For example, "file":"@/home/spark....

curl -X "POST" -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: multipart/form-data" -H "Accept: application/json" \
-F "name=<Name>" -F "location=<Location>" -F "note_type=<Note Type>" -F "file=@<local hard disk location>" \
-F "nbaddmode=import-from-computer" \
"https://api.qubole.com/api/v1.2/notebooks/import"

Note

The above syntax uses https://api.qubole.com as the endpoint. Qubole provides other endpoints to access QDS that are described in Supported Qubole Endpoints on Different Cloud Providers.

Sample API Request

Here is an example to import a Spark notebook from an AWS S3 bucket.

curl -X "POST" -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: multipart/form-data" -H "Accept: application/json" \
-F "name=SparkNote" -F "location=Users/user1@qubole.com" -F "note_type=spark" \
-F "url=https://s3.amazonaws.com/notebook-samples/spark_examples" \
"https://api.qubole.com/api/v1.2/notebooks/import"

Here is an example to import a Spark notebook from GitHub using the raw GitHub link.

curl -X "POST" -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: multipart/form-data" -H "Accept: application/json" \
-F "name=SparkNote" -F "location=Users/user1@qubole.com" -F "note_type=spark" \
-F "url=https://raw.githubusercontent.com/phelps-sg/python-bigdata/master/src/main/ipynb/intro-python.ipynb" \
"https://api.qubole.com/api/v1.2/notebooks/import"

Here is an example to import a Spark notebook from GitHub using the Gist link.

curl -X "POST" -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: multipart/form-data" -H "Accept: application/json" \
-F "name=SparkNote" -F "location=Users/user1@qubole.com" -F "note_type=spark" \
-F "url=https://gist.githubusercontent.com/user1/fbc05748f4c660ee656d50f1d8cdad11/raw/a6332f4b0cba2fc3cd44eac9956a2fe135744a8f/urltest2.ipynb" \
"https://api.qubole.com/api/v1.2/notebooks/import"

Here is an example to import a Spark notebook from the local hard disk.

curl -X "POST" -H "X-AUTH-TOKEN: $AUTH_TOKEN" -H "Content-Type: multipart/form-data" -H "Accept: application/json" \
-F "name=SparkNote1" -F "location=Users/user2@qubole.com" -F "note_type=spark" -F "file=@/home/spark/SparkNoteb.ipynb" \
-F "nbaddmode=import-from-computer" \
"https://api.qubole.com/api/v1.2/notebooks/import"
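
When the import is scripted, it can be convenient to assemble the curl command programmatically so the url and file cases stay consistent. The helper below is a minimal sketch, not part of any QDS SDK: it only builds the argv list for the curl invocations shown above (including the @ prefix and nbaddmode rule for local files) and leaves execution to the caller, for example via subprocess.run.

```python
API_ENDPOINT = "https://api.qubole.com/api/v1.2/notebooks/import"

def import_notebook_curl(auth_token, name, location, note_type,
                         url=None, file_path=None):
    """Build the curl argv for POST /api/v1.2/notebooks/import.

    Exactly one of `url` (S3/GitHub/raw-ipynb URL) or `file_path`
    (local file, which also requires nbaddmode=import-from-computer)
    must be provided.
    """
    if (url is None) == (file_path is None):
        raise ValueError("specify exactly one of url or file_path")
    cmd = ["curl", "-X", "POST",
           "-H", f"X-AUTH-TOKEN: {auth_token}",
           "-H", "Content-Type: multipart/form-data",
           "-H", "Accept: application/json",
           "-F", f"name={name}",
           "-F", f"location={location}",
           "-F", f"note_type={note_type}"]
    if url is not None:
        cmd += ["-F", f"url={url}"]
    else:
        # curl's @ prefix tells it to read the form value from the file
        cmd += ["-F", f"file=@{file_path}",
                "-F", "nbaddmode=import-from-computer"]
    cmd.append(API_ENDPOINT)
    return cmd
```

For example, `import_notebook_curl(token, "SparkNote", "Users/user1@qubole.com", "spark", url="https://s3.amazonaws.com/notebook-samples/spark_examples")` reproduces the S3 sample request above; the returned list can be passed directly to `subprocess.run`.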