Delivery
Delivery Schema
The schema for the Subscriptions API delivery block is shown below:
"delivery": {
"type": "cloud-storage-provider",
"parameters": {
"parameter1-name": "p1-value",
"parameter2-name": "p2-value"
}
}
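The delivery block is one field of the larger subscription you create. As a rough sketch (not taken from this page), a subscription request carrying a delivery block could be posted with Python's requests library, assuming the standard Subscriptions API endpoint and an API key in the PL_API_KEY environment variable:

import os
import requests

subscription = {
    "name": "example-subscription",
    # "source": {...},  # source block omitted here; see the Subscriptions API source docs
    "delivery": {
        "type": "cloud-storage-provider",
        "parameters": {"parameter1-name": "p1-value"},
    },
}

resp = requests.post(
    "https://api.planet.com/subscriptions/v1",
    json=subscription,
    auth=(os.environ["PL_API_KEY"], ""),  # Planet API key as the basic-auth username
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created subscription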
Delivery Layout
When data is delivered to your cloud storage for your Subscription, the files will be organized according to the following layout scheme: <subscription_id>/<item_id>/...
For example, the file 20170716_144316_1041_3B_AnalyticMS.tif for item 20170716_144316_1041, produced as output for subscription 0ee41665-ab3b-4faa-98d1-25738cdd579c, will be delivered to the path 0ee41665-ab3b-4faa-98d1-25738cdd579c/20170716_144316_1041/20170716_144316_1041_3B_AnalyticMS.tif.
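Because the subscription ID is the top-level folder, it doubles as a convenient prefix for finding everything a subscription has delivered. A small sketch using boto3, assuming Amazon S3 delivery to a hypothetical bucket named foo-bucket:

import boto3

s3 = boto3.client("s3")
subscription_id = "0ee41665-ab3b-4faa-98d1-25738cdd579c"

# Everything delivered for this subscription lives under <subscription_id>/.
resp = s3.list_objects_v2(Bucket="foo-bucket", Prefix=f"{subscription_id}/")
for obj in resp.get("Contents", []):
    print(obj["Key"])  # e.g. <subscription_id>/<item_id>/<asset filename>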
Delivery to Cloud Storage
You may choose to have your subscription delivered to a number of cloud storage providers. For any cloud storage provider, you must create an account with both write and delete access. Activation and processing for direct download is not currently supported.
When creating a subscription with cloud delivery, Planet checks the bucket permissions linked to your token by first attempting to deliver a file named planetverify.txt and then immediately deleting it. If Planet has adequate permissions, you will not see this file. If you do see this file in your buckets, we recommend reviewing your permissions to make sure that Planet has both write and delete access.
When creating a subscription, users must input their credentials for successful cloud delivery of Planet data. This poses a potential security risk. For secure handling of cloud service credentials in the request, please ensure that access is limited to the desired delivery path with no read/write access for any other storage locations or cloud services.
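If you want to confirm ahead of time that the credentials you plan to hand to Planet can both write and delete, you can reproduce the same check yourself. A minimal sketch for Amazon S3 using boto3 (bucket name and key pair are placeholders; other providers' SDKs can do the equivalent):

import boto3

# Use the same key pair you intend to put in the delivery block.
s3 = boto3.client(
    "s3",
    aws_access_key_id="<aws_access_key_id>",
    aws_secret_access_key="<aws_secret_access_key>",
)

# Write a small test object and immediately delete it, mirroring Planet's check.
s3.put_object(Bucket="foo-bucket", Key="planetverify.txt", Body=b"test")
s3.delete_object(Bucket="foo-bucket", Key="planetverify.txt")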
Amazon S3
For Amazon S3 delivery, you will need an AWS account with GetObject, PutObject, and DeleteObject permissions.
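For reference, a minimal IAM policy granting exactly those three actions on a single bucket could look like the following (shown here as a Python dict; the bucket name is a placeholder, and your organization's policy conventions may differ):

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            # Object-level actions apply to keys within the bucket.
            "Resource": "arn:aws:s3:::foo-bucket/*",
        }
    ],
}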
Parameters
Property | Required | Description |
---|---|---|
aws_access_key_id | Required | AWS access key ID. |
aws_secret_access_key | Required | AWS secret access key. |
bucket | Required | The name of the bucket that will receive the subscription output. |
aws_region | Required | The region where the bucket lives in AWS. |
path_prefix | Optional | An optional string that will be prepended to the files delivered to the bucket. A forward slash (/) is treated as a folder. All other characters are added as a prefix to the file names. |
- JSON
- Python SDK
"delivery": {
"type": "amazon_s3",
"parameters": {
"bucket": "foo-bucket",
"aws_region": "us-east-2",
"aws_access_key_id": "$AWS_ACCESS_KEY_ID",
"aws_secret_access_key": "$AWS_SECRET_KEY",
"path_prefix": "folder1/prefix"
}
}
from os import getenv
from planet.subscription_request import amazon_s3
AWS_ACCESS_KEY_ID = getenv("AWS_ACCESS_KEY_ID")
AWS_SECRET_KEY = getenv("AWS_SECRET_KEY")
aws_delivery = amazon_s3(
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_KEY,
aws_region="us-east-2",
bucket="bucket-name",
path_prefix="folder1/prefix",
)
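The aws_delivery object above is just the delivery dictionary; it still needs to be combined with a source into a complete subscription request. A rough sketch, assuming the catalog_source and build_request helpers from the same planet.subscription_request module (item types, asset types, geometry, and start time are placeholders; check the SDK reference for exact signatures):

from datetime import datetime
from planet.subscription_request import build_request, catalog_source

# Placeholder catalog source; swap in your own item/asset types and geometry.
source = catalog_source(
    item_types=["PSScene"],
    asset_types=["ortho_analytic_4b"],
    geometry={"type": "Point", "coordinates": [-122.41, 37.79]},
    start_time=datetime(2024, 1, 1),
)

# Attach the S3 delivery configuration built above.
request = build_request("example-subscription", source=source, delivery=aws_delivery)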
Google Cloud Storage
For Google Cloud Storage delivery, a service account with storage.objects.create, storage.objects.get, and storage.objects.delete permissions is required.
Access should be restricted to the specified delivery path, without read or write permissions to other storage locations.
Preparing your Google Cloud Storage credentials
The Google Cloud Storage delivery option requires a single-line base64 version of your service account credentials for use by the credentials parameter.
After downloading your service account credentials in JSON format (not P12), you can encode them as a single-line base64 string with a command line operation such as:
cat my_creds.json | base64 | tr -d '\n'
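If you prefer, the same single-line encoding can be produced in Python:

import base64

# Read the downloaded service account key and emit a single-line base64 string.
with open("my_creds.json", "rb") as f:
    credentials = base64.b64encode(f.read()).decode("ascii")
print(credentials)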
Parameters
Property | Required | Description |
---|---|---|
credentials | Required | GCS service account credentials, base64-encoded as a single line (see above). |
bucket | Required | The name of the GCS bucket that will receive the subscription output. |
path_prefix | Optional | An optional string that will be prepended to the files delivered to the bucket. A forward slash (/) is treated as a folder. All other characters are added as a prefix to the file names. |
- JSON
- Python SDK
"delivery": {
"type": "google_cloud_storage",
"parameters": {
"bucket": "foo-bucket",
"credentials": "$GCS_CREDENTIALS",
"path_prefix":"folder1/prefix"
}
}
from planet.subscription_request import google_cloud_storage
from os import getenv
GCS_CREDENTIALS = getenv("GCS_CREDENTIALS")
gcs_delivery = google_cloud_storage(
bucket="bucket-name",
credentials=GCS_CREDENTIALS,
path_prefix="folder1/prefix",
)
Google Earth Engine
The Planet GEE Delivery Integration simplifies incorporating Planet data into GEE projects by providing a direct connection between the Planet Subscriptions API and GEE. To use the integration, you must sign up for an Earth Engine account, create a Cloud Project, enable the Earth Engine API, and grant a Google service account access to deliver data to your GEE project. Follow the steps found in our GEE Guide to get started.
Parameters
Property | Required | Description |
---|---|---|
project | Required | The GEE project name. |
collection | Required | The GEE image collection name. |
credentials | Optional | Service account credentials. |
- JSON
"delivery": {
"type": "google_earth_engine",
"parameters": {
"project": "project-name",
"collection": "gee-collection"
"credentials": "$GEE_CREDENTIALS",
}
}
Microsoft Azure
For Microsoft Azure delivery, you will need an Azure account with read, write, delete, and list permissions.
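If you manage the storage account, one way to mint a token with exactly those permissions is the azure-storage-blob package. A sketch, assuming you hold the storage account key (account, container, and key values are placeholders):

from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Container-scoped SAS with the read/write/delete/list permissions Planet needs.
sas_token = generate_container_sas(
    account_name="account-name",
    container_name="container-name",
    account_key="<storage-account-key>",
    permission=ContainerSasPermissions(read=True, write=True, delete=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365),
)
# The returned string has no leading "?", which is the form the sas_token parameter expects.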
Parameters
Property | Required | Description |
---|---|---|
account | Required | Azure account name. |
container | Required | The container name which will receive the subscription output. |
sas_token | Required | Azure Shared Access Signature token. The token should be specified without a leading ?. (For example, sv=2017-04-17&si=writer&sr=c&sig=LGqc rather than ?sv=2017-04-17&si=writer&sr=c&sig=LGqc.) |
storage_endpoint_suffix | Optional | To deliver your subscription to a sovereign cloud, storage_endpoint_suffix should be set appropriately for your cloud. The default is core.windows.net. |
path_prefix | Optional | An optional string that will be prepended to the files delivered to the container. A forward slash (/) is treated as a folder. All other characters are added as a prefix to the file names. |
- JSON
- Python SDK
"delivery": {
"type": "azure_blob_storage",
"parameters": {
"account": "account-name",
"container": "container-name",
"sas_token": "$AZURE_SAS_TOKEN",
"storage_endpoint_suffix": "core.windows.net",
"path_prefix": "folder1/prefix"
}
}
from os import getenv
from planet.subscription_request import azure_blob_storage
AZURE_SAS_TOKEN = getenv("AZURE_SAS_TOKEN")
azure_delivery = azure_blob_storage(
account="account-name",
container="container-name",
sas_token=AZURE_SAS_TOKEN,
storage_endpoint_suffix="core.windows.net",
path_prefix="folder1/prefix",
)
Oracle Cloud Storage
For Oracle Cloud Storage delivery, you need an Oracle account with read, write, and delete permissions. For authentication, you need a Customer Secret Key, which consists of an Access Key/Secret Key pair.
Parameters
Property | Required | Description |
---|---|---|
customer_access_key_id | Required | The Access Key of your Customer Secret Key pair. |
customer_secret_key | Required | The Secret Key of your Customer Secret Key pair. |
bucket | Required | The name of the bucket that will receive the subscription output. |
region | Required | The region where the bucket lives in Oracle. |
namespace | Required | Object Storage namespace name. |
path_prefix | Optional | An optional string that will be prepended to the files delivered to the bucket. A forward slash (/) is treated as a folder. All other characters are added as a prefix to the file names. |
- JSON
- Python SDK
"delivery": {
"type": "oracle_cloud_storage",
"parameters": {
"bucket": "foo-bucket",
"namespace": "ORACLE_NAMESPACE",
"region": "us-sanjose-1",
"customer_access_key_id": "$ORACLE_ACCESS_ID",
"customer_secret_key": "$ORACLE_SECRET_KEY",
"path_prefix": "folder1/prefix"
}
}
from os import getenv
from planet.subscription_request import oracle_cloud_storage
ORACLE_ACCESS_ID = getenv("ORACLE_ACCESS_ID")
ORACLE_SECRET_KEY = getenv("ORACLE_SECRET_KEY")
oracle_delivery = oracle_cloud_storage(
customer_access_key_id=ORACLE_ACCESS_ID,
customer_secret_key=ORACLE_SECRET_KEY,
bucket="bucket-name",
region="us-sanjose-1",
namespace="ORACLE_NAMESPACE",
path_prefix="folder1/prefix",
)
S3 Compatible Delivery
S3 compatible delivery allows data to be sent to any cloud storage provider that supports the Amazon S3 API.
To use this delivery method, you'll need an account with read, write, and delete permissions on the target bucket. Authentication is performed using an Access Key and Secret Key pair.
While this delivery method is designed to work with any S3-compatible provider, not all integrations have been explicitly tested. Some providers may advertise S3 compatibility but deviate from the API in subtle ways that can cause issues. We encourage testing with your chosen provider to ensure compatibility.
Pay particular attention to the use_path_style parameter, as it's a common source of issues. For example, Oracle Cloud requires use_path_style to be true, while Open Telekom Cloud requires it to be false.
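Because these details vary by provider, it can save time to smoke-test the endpoint and addressing style yourself before pointing Planet at it. A sketch using boto3 against the placeholder endpoint, region, and bucket from the example below:

import boto3
from botocore.config import Config

# Force path-style addressing (https://s3.foo.com/foo-bucket/<key>); use
# "virtual" to mimic use_path_style = false (https://foo-bucket.s3.foo.com/<key>).
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.foo.com",
    region_name="foo-region",
    aws_access_key_id="<access_key_id>",
    aws_secret_access_key="<secret_access_key>",
    config=Config(s3={"addressing_style": "path"}),
)
s3.head_bucket(Bucket="foo-bucket")  # fails fast if the endpoint or addressing style is wrong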
Parameters
Property | Required | Description |
---|---|---|
access_key_id | Required | Access key for authentication. |
secret_access_key | Required | Secret key for authentication. |
bucket | Required | S3-compatible bucket to send results to. |
region | Required | Region for the S3-compatible service. |
endpoint | Required | S3-compatible service endpoint. |
use_path_style | Optional | Whether to use path-style addressing (default is false ). If true , the bucket name is included in the URL path; if false , it's included in the hostname. |
path_prefix | Optional | A string to prepend to delivered files. A forward slash (/ ) is treated as a folder; all other characters are added directly as a prefix to file names. |
- JSON
- Python SDK
"delivery": {
"type": "s3_compatible",
"parameters": {
"endpoint": "https://s3.foo.com",
"bucket": "foo-bucket",
"region": "foo-region",
"access_key_id": "$ACCESS_KEY_ID",
"secret_access_key": "$SECRET_ACCESS_KEY",
"use_path_style": false,
"path_prefix": "folder1/prefix"
}
}
from os import getenv
from planet.subscription_request import s3_compatible
ACCESS_KEY_ID = getenv("ACCESS_KEY_ID")
SECRET_ACCESS_KEY = getenv("SECRET_ACCESS_KEY")
delivery = s3_compatible(
endpoint="https://s3.foo.com",
bucket="foo-bucket",
region="foo-region",
access_key_id=ACCESS_KEY_ID,
secret_access_key=SECRET_ACCESS_KEY,
use_path_style=False,
path_prefix="folder1/prefix",
)
Hosting
Instead of delivering to cloud storage, the data for your subscription can be hosted on the Planet Insights Platform.
The hosting block eliminates the need to use the delivery block. Specifying both is not allowed.
Data Collection
You can deliver data to a data collection hosted on the Planet Insights Platform to visualize and stream your data in platform tools. To deliver to a data collection, you must have Processing Units provisioned to your account.
Parameters
Property | Required | Description |
---|---|---|
type | Required | "sentinel_hub" |
collection_id | Optional | ID of the target collection to deliver data to. If omitted, a collection will be created on your behalf, and its ID will be returned in the response with the collection name the same as the subscription name. If included, the collection must be compatible with the subscription, which will be validated during subscription creation. |
To reuse a collection across multiple subscriptions with the same data type, first omit collection_id in your initial request to auto-create a collection. Then, use the returned collection_id for all subsequent requests. This links all subscriptions to the same collection. Importantly, subscriptions with different data types cannot share a collection. As an example, Soil Water Content and Land Surface Temperature subscriptions cannot share the same collection.
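Put together, the reuse flow looks roughly like the sketch below (plain HTTP as in the delivery example near the top of this page; where the created collection's ID appears in the response body is an assumption, so confirm against the actual response):

import os
import requests

auth = (os.environ["PL_API_KEY"], "")
url = "https://api.planet.com/subscriptions/v1"

# First subscription: omit collection_id so a compatible collection is auto-created.
first = {
    "name": "sub-1",
    # "source": {...},  # source block omitted; must match the data type you want hosted
    "hosting": {"type": "sentinel_hub"},
}
created = requests.post(url, json=first, auth=auth).json()
collection_id = created["hosting"]["parameters"]["collection_id"]  # assumed location of the ID

# Later subscriptions of the same data type reuse that collection explicitly.
second = {
    "name": "sub-2",
    # "source": {...},
    "hosting": {"type": "sentinel_hub", "parameters": {"collection_id": collection_id}},
}
requests.post(url, json=second, auth=auth)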
You can browse your collections on the Sentinel Hub Dashboard under My Collections.
To learn more about creating collections, check out the Bring Your Own COG API documentation.
No collection ID provided
- JSON
"hosting": {
"type": "sentinel_hub"
}
Collection ID provided
- JSON
- Python SDK
"hosting": {
"type": "sentinel_hub",
"parameters": {
"collection_id": "my_collection_id"
}
}
from planet.subscription_request import sentinel_hub
sh_delivery_collection = sentinel_hub(collection_id="my-collection-id")
Please note the following:
- For imagery subscriptions, only a limited set of tools is permitted; the clip and file_format (COG) tools are automatically added and cannot be removed.
- No tools are supported for Planetary Variable subscriptions.
When delivering to a Sentinel Hub collection, items (referred to as "tiles" in Sentinel Hub) delivered to your collection may show a warning saying coverGeometry is partially outside tileGeometry. This occurs when the geometry in the delivered metadata does not match the tile's pixel footprint, which can happen due to data processing intricacies for PlanetScope and SkySat. The data will still be ingested in Sentinel Hub, but it may result in nodata pixels within the tile when requesting the imagery in Sentinel Hub.