
Delivery

Delivery Schema

The schema for the Subscriptions API delivery block is as follows:

"delivery": {
"type": "cloud-storage-provider",
"parameters": {
"parameter1-name": "p1-value",
"parameter2-name": "p2-value"
}
}
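For context, here is a minimal sketch of a subscription creation request with the delivery block in place. The endpoint and top-level fields follow the Subscriptions API create call; the source block values (item type, asset type, geometry, start time) and the credentials are illustrative placeholders only:

curl -X POST "https://api.planet.com/subscriptions/v1" \
    -H "Authorization: api-key $PL_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
        "name": "my-subscription",
        "source": {
            "type": "catalog",
            "parameters": {
                "item_types": ["PSScene"],
                "asset_types": ["ortho_analytic_4b"],
                "geometry": {"type": "Polygon", "coordinates": [[[139.5, 35.6], [139.6, 35.6], [139.6, 35.7], [139.5, 35.7], [139.5, 35.6]]]},
                "start_time": "2025-01-01T00:00:00Z"
            }
        },
        "delivery": {
            "type": "amazon_s3",
            "parameters": {
                "bucket": "foo-bucket",
                "aws_region": "us-east-2",
                "aws_access_key_id": "YOUR_ACCESS_KEY",
                "aws_secret_access_key": "YOUR_SECRET_KEY"
            }
        }
    }'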

Delivery Layout

When data is delivered to your cloud storage for your Subscription, the files are organized according to the following layout scheme: <subscription_id>/<item_id>/...

For example, the file 20170716_144316_1041_3B_AnalyticMS.tif for item 20170716_144316_1041, delivered as output for subscription 0ee41665-ab3b-4faa-98d1-25738cdd579c, will be written to the path: 0ee41665-ab3b-4faa-98d1-25738cdd579c/20170716_144316_1041/20170716_144316_1041_3B_AnalyticMS.tif.

Delivery to Cloud Storage

You may choose to have your subscription delivered to a number of cloud storage providers. For any cloud storage provider, you must create an account with both write and delete access. Activation and processing for direct download is not currently supported.

When creating a subscription with cloud delivery, Planet checks the bucket permissions linked to your token by first attempting to deliver a file named planetverify.txt and then immediately deleting it. If Planet has the adequate permissions, you will not see this file. If you see this file in your buckets, we recommend that you review your permissions and make sure that Planet has both write and delete access.

warning

When creating a subscription, you must input credentials for successful cloud delivery of Planet data. This poses a potential security risk. For secure handling of cloud service credentials in the request, ensure that access is limited to the desired delivery path, with no read/write access to any other storage locations or cloud services.


Amazon S3

For Amazon S3 delivery you will need an AWS account with GetObject, PutObject, and DeleteObject permissions.
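In keeping with the security guidance above, you can scope those permissions to the delivery path alone. A minimal sketch of such an IAM policy, assuming the foo-bucket name and folder1/prefix path used in the example below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::foo-bucket/folder1/prefix*"
        }
    ]
}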

Parameters

| Property | Required | Description |
| --- | --- | --- |
| aws_access_key_id | Required | AWS credentials. |
| aws_secret_access_key | Required | AWS credentials. |
| bucket | Required | The name of the bucket that will receive the subscription output. |
| aws_region | Required | The AWS region where the bucket lives. |
| path_prefix | Optional | A string prepended to the files delivered to the bucket. A forward slash (/) is treated as a folder; all other characters are added as a prefix to the file names. |
"delivery": {
"type": "amazon_s3",
"parameters": {
"bucket": "foo-bucket",
"aws_region": "us-east-2",
"aws_access_key_id": "$AWS_ACCESS_KEY_ID",
"aws_secret_access_key": "$AWS_SECRET_KEY",
"path_prefix": "folder1/prefix"
}
}

Google Cloud Storage

For Google Cloud Storage delivery, you will need a GCS account with write and delete permissions.

For instance, you may have set up a service account to handle calls to Planet APIs. When you did so, you would have generated and downloaded a JSON file containing that service account's credentials. You would also have assigned a role to the service account, or created a custom role, that grants permission to write to your Google Cloud Storage bucket, for example the storage.objects.create, storage.objects.get, and storage.objects.delete permissions. If you have such an account set up with the right role and permissions, base64-encode the service account's JSON credentials file and include the output as the credentials value, below. As noted above, access should be limited to the desired delivery path, with no read/write access to any other storage locations or cloud services.
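As a sketch of that setup, assuming hypothetical project, bucket, role, and service account names, you could create a custom role carrying only those three permissions and bind it on the target bucket:

# Create a custom role with only the permissions Planet needs.
gcloud iam roles create planetDelivery --project=my-project \
    --title="Planet Delivery" \
    --permissions=storage.objects.create,storage.objects.get,storage.objects.delete

# Grant that role to the delivery service account on the target bucket only.
gcloud storage buckets add-iam-policy-binding gs://foo-bucket \
    --member="serviceAccount:planet-delivery@my-project.iam.gserviceaccount.com" \
    --role="projects/my-project/roles/planetDelivery"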

Preparing your Google Cloud Storage credentials

The Google Cloud Storage delivery option requires a single-line base64 version of your service account credentials for use by the credentials parameter.

To download your service account credentials in JSON format (not P12) and encode them as a base64 string, you can use a command line operation such as:

cat my_creds.json | base64 | tr -d '\n'
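You can then capture that output in an environment variable and substitute it into the delivery block below; the variable name here is just a convention (on Linux, base64 -w0 produces the same single-line output):

export GCS_CREDENTIALS=$(cat my_creds.json | base64 | tr -d '\n')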

Parameters

| Property | Required | Description |
| --- | --- | --- |
| credentials | Required | GCS credentials, base64-encoded as described above. |
| bucket | Required | The name of the GCS bucket that will receive the subscription output. |
| path_prefix | Optional | A string prepended to the files delivered to the bucket. A forward slash (/) is treated as a folder; all other characters are added as a prefix to the file names. |
"delivery": {
"type": "google_cloud_storage",
"parameters": {
"bucket": "foo-bucket",
"credentials": "$GCS_CREDENTIALS",
"path_prefix":"folder1/prefix"
}
}

Google Earth Engine

The Planet GEE Delivery Integration simplifies incorporating Planet data into GEE projects by directly connecting the Planet Subscriptions API and GEE. To use the integration, you must sign up for an Earth Engine account, create a Cloud Project, enable the Earth Engine API, and grant a Google service account access to deliver data to your GEE project. Follow the steps in our GEE Guide to get started.
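If you prefer to create the target image collection ahead of time, one option is the earthengine command-line tool, assuming it is installed and authenticated; the project and collection names below are placeholders matching the example that follows:

# Create an empty image collection that the subscription can deliver into.
earthengine create collection projects/project-name/assets/gee-collection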

Parameters

| Property | Required | Description |
| --- | --- | --- |
| project | Required | The GEE project name. |
| collection | Required | The GEE image collection name. |
| credentials | Optional | Service account credentials. |
"delivery": {
"type": "google_earth_engine",
"parameters": {
"project": "project-name",
"collection": "gee-collection"
"credentials": "$GEE_CREDENTIALS",
}
}

Microsoft Azure

For Microsoft Azure delivery you will need an Azure account with read, write, delete, and list permissions.

Parameters

| Property | Required | Description |
| --- | --- | --- |
| account | Required | Azure account name. |
| container | Required | The name of the container that will receive the subscription output. |
| sas_token | Required | Azure Shared Access Signature token. Specify the token without a leading ?. (For example, sv=2017-04-17&si=writer&sr=c&sig=LGqc rather than ?sv=2017-04-17&si=writer&sr=c&sig=LGqc.) |
| storage_endpoint_suffix | Optional | To deliver your subscription to a sovereign cloud, set storage_endpoint_suffix appropriately for your cloud. The default is core.windows.net. |
| path_prefix | Optional | A string prepended to the files delivered to the container. A forward slash (/) is treated as a folder; all other characters are added as a prefix to the file names. |
"delivery": {
"type": "azure_blob_storage",
"parameters": {
"account": "account-name",
"container": "container-name",
"sas_token": "$AZURE_SAS_TOKEN",
"storage_endpoint_suffix": "core.windows.net",
"path_prefix": "folder1/prefix"
}
}
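One way to generate a container-scoped SAS token is the Azure CLI; this is a sketch with placeholder account, container, and expiry values, where the permission letters r, w, d, and l grant read, write, delete, and list:

# Generate a SAS token scoped to a single container (requires an authenticated CLI session).
az storage container generate-sas \
    --account-name account-name \
    --name container-name \
    --permissions rwdl \
    --expiry 2026-01-01 \
    --output tsv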

Oracle Cloud Storage

For Oracle Cloud Storage delivery, you need an Oracle account with read, write, and delete permissions. For authentication, you need a Customer Secret Key which consists of an Access Key/Secret Key pair.
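You can create a Customer Secret Key in the Oracle Cloud console or, as sketched below with a placeholder user OCID and display name, with the OCI CLI:

# Create a Customer Secret Key (an Access Key/Secret Key pair) for the given user.
oci iam customer-secret-key create \
    --user-id ocid1.user.oc1..example \
    --display-name "planet-delivery"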

Parameters

| Property | Required | Description |
| --- | --- | --- |
| customer_access_key_id | Required | Customer Secret Key credentials. |
| customer_secret_key | Required | Customer Secret Key credentials. |
| bucket | Required | The name of the bucket that will receive the subscription output. |
| region | Required | The Oracle Cloud region where the bucket lives. |
| namespace | Required | The Object Storage namespace name. |
| path_prefix | Optional | A string prepended to the files delivered to the bucket. A forward slash (/) is treated as a folder; all other characters are added as a prefix to the file names. |
"delivery": {
"type": "oracle_cloud_storage",
"parameters": {
"bucket": "foo-bucket",
"namespace": "ORACLE_NAMESPACE",
"region": "us-sanjose-1",
"customer_access_key_id": "$ORACLE_ACCESS_ID",
"customer_secret_key": "$ORACLE_SECRET_KEY",
"path_prefix": "folder1/prefix"
}
}

S3 Compatible Delivery

S3 compatible delivery allows data to be sent to any cloud storage provider that supports the Amazon S3 API.

To use this delivery method, you'll need an account with read, write, and delete permissions on the target bucket. Authentication is performed using an Access Key and Secret Key pair.

note

While this delivery method is designed to work with any S3-compatible provider, not all integrations have been explicitly tested. Some providers may advertise S3 compatibility but deviate from the API in subtle ways that can cause issues. We encourage testing with your chosen provider to ensure compatibility.

Pay particular attention to the use_path_style parameter, as it's a common source of issues. For example, Oracle Cloud requires use_path_style to be true, while Open Telekom Cloud requires it to be false.
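To make the difference concrete, here is how the two styles address an object, using the endpoint and bucket from the example below (illustrative URLs only):

# use_path_style = true: the bucket name is part of the URL path
https://s3.foo.com/foo-bucket/<object-key>

# use_path_style = false (default): the bucket name is part of the hostname
https://foo-bucket.s3.foo.com/<object-key>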

Parameters

| Property | Required | Description |
| --- | --- | --- |
| access_key_id | Required | Access key for authentication. |
| secret_access_key | Required | Secret key for authentication. |
| bucket | Required | The S3-compatible bucket to send results to. |
| region | Required | The region for the S3-compatible service. |
| endpoint | Required | The S3-compatible service endpoint. |
| use_path_style | Optional | Whether to use path-style addressing (default is false). If true, the bucket name is included in the URL path; if false, it is included in the hostname. |
| path_prefix | Optional | A string to prepend to delivered files. A forward slash (/) is treated as a folder; all other characters are added directly as a prefix to file names. |
"delivery": {
"type": "s3_compatible",
"parameters": {
"endpoint": "https://s3.foo.com",
"bucket": "foo-bucket",
"region": "foo-region",
"access_key_id": "$ACCESS_KEY_ID",
"secret_access_key": "$SECRET_ACCESS_KEY",
"use_path_style": false,
"path_prefix": "folder1/prefix"
}
}
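Following the note above about testing your provider, a quick sanity check is to write and then delete a test object through the same endpoint, mirroring Planet's planetverify.txt check; this sketch uses the AWS CLI with placeholder bucket and endpoint values:

# Upload a small test object, then remove it, to confirm write and delete access.
echo "test" > planet-test.txt
aws s3 cp planet-test.txt s3://foo-bucket/folder1/planet-test.txt --endpoint-url https://s3.foo.com
aws s3 rm s3://foo-bucket/folder1/planet-test.txt --endpoint-url https://s3.foo.com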

Hosting

Instead of delivering to cloud storage, the data for your subscription can be hosted on another cloud platform (namely, Sentinel Hub).

A hosting block replaces the delivery block; specifying both is not allowed.

Image Collection (Sentinel Hub)

You can have items delivered to a collection within your Sentinel Hub account. To enable Sentinel Hub collection delivery, you must first link your Planet user to your Sentinel Hub user; please follow the steps here.

Once you have linked your Planet and Sentinel Hub accounts, you can create a subscription via the Subscriptions API that delivers to a Sentinel Hub Collection.

Parameters

| Property | Required | Description |
| --- | --- | --- |
| type | Required | "sentinel_hub" |
| collection_id | Optional | ID of the target collection to deliver data to. If omitted, a collection will be created on your behalf, and its ID will be returned in the response, with the collection name matching the subscription name. If included, the collection must be compatible with the subscription, which is validated during subscription creation. |

To reuse a collection across multiple subscriptions with the same data type, first omit collection_id in your initial request to auto-create a collection. Then, use the returned collection_id for all subsequent requests. This links all subscriptions to the same collection efficiently. Importantly, subscriptions with different data types cannot share a collection. As an example, Soil Water Content and Land Surface Temperature subscriptions cannot share the same collection.
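As a sketch of that reuse pattern, you could create the first subscription without a collection_id and capture the auto-created ID from the response; this assumes the ID is returned under hosting.parameters.collection_id and that jq is installed:

# Create the first subscription with no collection_id; a collection is auto-created.
COLLECTION_ID=$(curl -s -X POST "https://api.planet.com/subscriptions/v1" \
    -H "Authorization: api-key $PL_API_KEY" \
    -H "Content-Type: application/json" \
    -d @first-subscription.json | jq -r '.hosting.parameters.collection_id')

# Use $COLLECTION_ID in the hosting block of each subsequent subscription.
echo "$COLLECTION_ID"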

You can browse your collections on the Sentinel Hub Dashboard under My Collections.

To learn more about creating collections, check out the Bring Your Own COG API documentation.

No collection ID provided:

"hosting": {
    "type": "sentinel_hub"
}

Collection ID provided:

"hosting": {
    "type": "sentinel_hub",
    "parameters": {
        "collection_id": "my_collection_id"
    }
}


note

When delivering to a Sentinel Hub collection, items (referred to as "tiles" in Sentinel Hub) delivered to your collection may show a warning saying coverGeometry is partially outside tileGeometry. This occurs when the geometry in the delivered metadata does not match the tile's pixel footprint, which can happen due to data-processing intricacies for PlanetScope and SkySat. The data will still be ingested into Sentinel Hub, but nodata pixels may appear within the tile when you request the imagery in Sentinel Hub.