AWS supports server-side encryption with customer-provided encryption keys (SSE-C), which allows you to use your own encryption keys.
Enabling SSE-C in Hadoop and Spark Clusters¶
To enable SSE-C, perform these steps:
- Navigate to the Clusters page and click Edit to edit an existing cluster, or click New to create a new cluster.
- In the cluster's Advanced Configuration tab, under Override Hadoop Configuration Variables, add the SSE-C setting (for example, fs.s3n.sse=SSE-C).
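As a sketch, the override entry added in the steps above can look like the following, assuming the fs.s3n.sse property that this section also uses per command; any additional key-related overrides required by your setup are not shown here:

```
fs.s3n.sse=SSE-C
```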
When SSE-C is enabled in QDS, commands running with these settings may not be able to fetch the result data. Therefore, use these settings only when results are irrelevant, for example, when populating a directory in S3 through a Spark or Hive job.
The same syntax applies to Hive commands; the property is set per command, in the same session as the command it applies to.
set fs.s3n.sse=SSE-C;
CREATE EXTERNAL TABLE New2 (`Col0` STRING, `Col1` STRING, `Col2` STRING)
PARTITIONED BY (`20100102` STRING, `IN` STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://ap-dev-qubole/common/hive/30day_1/30daysmall';
Enabling the Encryption Key¶
Set the following properties to use SSE on the S3a filesystem:

fs.s3a.server-side-encryption-algorithm=SSE-C: Selects the SSE-C algorithm.

fs.s3a.server-side-encryption.key=<key>: The encryption key used to encrypt the data. For the SSE-C algorithm, the value of this property must be the Base64-encoded key.
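A suitable Base64-encoded value for the key property can be generated with a short Python sketch; SSE-C expects a 256-bit (32-byte) AES key:

```python
import base64
import os

# Generate a random 256-bit (32-byte) key, as expected by SSE-C (AES-256).
raw_key = os.urandom(32)

# Base64-encode it for use as the value of fs.s3a.server-side-encryption.key.
encoded_key = base64.b64encode(raw_key).decode("ascii")
print(encoded_key)
```

Store the Base64 string securely; anyone holding it can decrypt objects written with it.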
Enabling SSE-C while using Hadoop DistCp¶
When using Hadoop DistCp, you can set these parameters for server-side encryption along with the other parameters:

s3ServerSideEncryption: Enables encryption of data at the object level as S3 writes it to disk.

s3SSEAlgorithm: The algorithm used for encryption. Specify SSE-C as its value. If you do not specify it but s3ServerSideEncryption is enabled, the AES256 algorithm is used by default.

encryptionKey: The key used to encrypt the data. For SSE-C, you must specify it; otherwise the job fails.
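As a sketch, assuming the parameters above are passed as -D Hadoop properties (the exact invocation and bucket paths are placeholders, and may differ in your QDS setup), a DistCp run with SSE-C might look like:

```shell
# Hypothetical invocation; parameter names are from the list above.
# BASE64_KEY is a placeholder for your Base64-encoded 256-bit key.
hadoop distcp \
  -Ds3ServerSideEncryption=true \
  -Ds3SSEAlgorithm=SSE-C \
  -DencryptionKey=BASE64_KEY \
  s3://source-bucket/path s3://dest-bucket/path
```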
Enabling SSE-C in Presto¶
SSE-C was supported only in Presto 0.157, which is now deprecated. Presto versions later than 0.157 do not support SSE-C.