
Asynchronous Processing API

The Asynchronous Processing API allows you to process more data with a single request than the Processing API. This is possible because the processing results are not returned immediately but are delivered to your object storage after some time. We recommend using the Async API for processing larger images when you prefer not to deal with tiled results and when immediate results are not crucial.

The Async API allows you to process data in a similar way as the Processing API: you define the input data, area of interest, and time range in the body of an Async API request, and the data is processed according to your evalscript (a minimal request sketch follows the list below). When using the Async API, keep in mind that:

  • The output image size cannot exceed 10,000 pixels in either dimension.
  • An evalscript can either be sent directly in the request or stored in object storage and referenced in the async request (see the evalscriptReference parameter in the Async API reference for more details). This allows you to use larger evalscripts.
  • The processing is asynchronous, which means that you do not get results in the response of your request. Instead, they are delivered to your object storage.
  • A copy of each Async API request is also stored in your object storage. After processing completes, this copy is updated with additional details, including cost information.
  • Only a limited number of asynchronous requests can run concurrently per user. The exact limit depends on your account type.
  • Processing time depends on the request size and the current service load. Typically, the first request takes longer, while subsequent requests are faster.
  • When using the Asynchronous Processing API, a multiplication factor of 2/3 is applied to the cost of all requests with an area of at least 10,000 px. This means you can process up to 1.5× more data compared to the Processing API for the same amount of Processing Units (PUs); for example, a request that would cost 300 PUs in the Processing API costs 200 PUs here. Requests defining an area smaller than 10,000 px are charged at the standard rate (no multiplication factor applied).
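
If you want to try the API end to end, here is a minimal sketch in Python using the requests library. It assumes the standard Sentinel Hub OAuth client-credentials flow and a Processing API-style request body with the async delivery block added; the client credentials, bucket, and IAM role ARN are placeholders, and the authoritative request schema is in the Async API reference.

# A minimal sketch of submitting an async request. The body layout mirrors
# the Processing API, with an output.delivery block added for the results.
import requests

TOKEN_URL = "https://services.sentinel-hub.com/oauth/token"
ASYNC_URL = "https://services.sentinel-hub.com/api/v1/async/process"  # AWS EU deployment

# Fetch an OAuth token via the client-credentials flow (placeholders below).
token = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "{client-id}",
        "client_secret": "{client-secret}",
    },
).json()["access_token"]

evalscript = """
//VERSION=3
function setup() {
  return { input: ["B04", "B03", "B02"], output: { bands: 3 } };
}
function evaluatePixel(sample) {
  return [sample.B04, sample.B03, sample.B02];
}
"""

request_body = {
    "input": {
        "bounds": {"bbox": [13.35, 45.82, 13.83, 46.22]},
        "data": [{
            "type": "sentinel-2-l2a",
            "dataFilter": {"timeRange": {
                "from": "2024-06-01T00:00:00Z",
                "to": "2024-06-30T23:59:59Z",
            }},
        }],
    },
    "output": {
        "width": 5000,
        "height": 5000,
        "responses": [{"identifier": "default", "format": {"type": "image/tiff"}}],
        "delivery": {"s3": {
            "url": "s3://{bucket}/{key}",
            "iamRoleARN": "{IAM-role-ARN}",
        }},
    },
    "evalscript": evalscript,
}

response = requests.post(
    ASYNC_URL,
    headers={"Authorization": f"Bearer {token}"},
    json=request_body,
)
print(response.status_code, response.json())  # the response includes the request id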

Async API Deployments

Deployment            API endpoint                                                      Region
AWS EU (Frankfurt)    https://services.sentinel-hub.com/api/v1/async/process            eu-central-1
AWS US (Oregon)       https://services-uswest2.sentinel-hub.com/api/v1/async/process    us-west-2

Data Sources Restrictions

All data sources must be from the same deployment where the request is made.

Object Storage Configuration

The Asynchronous Processing API requires access to object storage for reading evalscripts (optional) and storing processing results. We support two object storage providers:

  • Amazon S3
  • Google Cloud Storage (GCS)

Supported Use Cases

Object storage is used for:

  • Reading evalscript files from storage (optional, evalscripts can also be provided directly in the request)
  • Uploading processing results (required)
  • Storing a copy of each request with additional details including cost information

AWS S3 Configuration

The Asynchronous Processing API supports two authentication methods for AWS S3. We recommend using the IAM Assume Role method for enhanced security and fine-grained access control.

Authentication Methods

Option 1: IAM Assume Role (Recommended)

The IAM Assume Role method provides better security by allowing temporary credentials and fine-grained access control without exposing long-term credentials.

To use this method, provide the ARN of an IAM role that has access to your S3 bucket:

{
  "output": {
    "delivery": {
      "s3": {
        "url": "s3://{bucket}/{key}",
        "iamRoleARN": "{IAM-role-ARN}"
      }
    }
  }
}

Setup Steps:

  1. Create an IAM Policy for S3 Access

Create a policy that grants the necessary permissions to your S3 bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": ["arn:aws:s3:::{bucket}", "arn:aws:s3:::{bucket}/*"]
    }
  ]
}
  2. Create an IAM Role
  • In the AWS IAM console, create a new role
  • Choose "AWS account" as the trusted entity type
  • Select "Another AWS account" and enter account ID: 614251495211
  • Attach the policy created in step 1
  • Note the Role ARN for use in your API requests
  3. Configure Trust Relationship (Optional but Recommended)

For additional security, modify the role's trust policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::614251495211:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "{domain-account-id}"
        },
        "StringLike": {
          "sts:RoleSessionName": "sentinelhub"
        }
      }
    }
  ]
}

Replace {domain-account-id} with your domain account ID from the Dashboard.
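
If you prefer to script the role setup, the following sketch performs steps 1-3 with boto3. The role and policy names are illustrative, and the bucket and external ID are placeholders.

# A sketch of scripting the IAM setup with boto3. The role and policy names
# are illustrative; replace the bucket and external ID with your own values.
import json
import boto3

iam = boto3.client("iam")
bucket = "{bucket}"
external_id = "{domain-account-id}"  # from the Dashboard

# Trust policy: allow the Sentinel Hub account to assume this role (step 3).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::614251495211:root"},
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {"sts:ExternalId": external_id},
            "StringLike": {"sts:RoleSessionName": "sentinelhub"},
        },
    }],
}

# Access policy: grant the required S3 permissions on the bucket (step 1).
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}

# Create the role and attach the access policy inline (step 2).
role = iam.create_role(
    RoleName="sentinelhub-async-delivery",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="sentinelhub-async-delivery",
    PolicyName="sentinelhub-async-s3-access",
    PolicyDocument=json.dumps(access_policy),
)
print(role["Role"]["Arn"])  # use this ARN as iamRoleARN in your requests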

Option 2: Access Key & Secret Key

Alternatively, you can provide AWS access credentials directly:

{
  "output": {
    "delivery": {
      "s3": {
        "url": "s3://{bucket}/{key}",
        "accessKey": "{access-key}",
        "secretAccessKey": "{secret-access-key}",
        "region": "{region}"
      }
    }
  }
}

The access key and secret must be linked to an IAM user with the following permissions on your S3 bucket:

  • s3:GetObject
  • s3:PutObject
  • s3:ListBucket

To create access keys, see the AWS documentation on programmatic access.

Using S3 Configuration in Requests

The S3 configuration can be used in:

  • evalscriptReference.s3 - to specify the bucket where the evalscript .js file is available (optional)
  • output.delivery.s3 - to specify the bucket where the results will be stored (required)

Check the Async API reference for more information.
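
Both fields can appear in a single request body. The sketch below shows the relevant fragment; note that the authentication fields under evalscriptReference.s3 are an assumption mirrored from output.delivery.s3 (this page only documents them for delivery), so verify them against the Async API reference.

# A hypothetical fragment combining both S3 uses in one request body.
# Paths are placeholders; the iamRoleARN field under evalscriptReference.s3
# is an assumption, mirrored from output.delivery.s3.
request_fragment = {
    "evalscriptReference": {
        "s3": {
            "url": "s3://{bucket}/scripts/evalscript.js",
            "iamRoleARN": "{IAM-role-ARN}",  # assumed; confirm in the API reference
        }
    },
    "output": {
        "delivery": {
            "s3": {
                "url": "s3://{bucket}/results/{key}",
                "iamRoleARN": "{IAM-role-ARN}",
            }
        }
    },
}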

Google Cloud Storage Configuration

Google Cloud Storage is supported for both evalscript input and output delivery. Authentication requires a service account with base64-encoded credentials.

Preparing Credentials

  1. Download your service account credentials in JSON format (not P12)
  2. Encode them as a base64 string:
cat my_creds.json | base64
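
The same step can be done in Python if you are scripting the setup:

# Python equivalent of the shell command above: read the service account
# key file and base64-encode its contents.
import base64

with open("my_creds.json", "rb") as f:
    credentials_b64 = base64.b64encode(f.read()).decode("ascii")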

Using GCS for Evalscript Input

To read an evalscript from Google Cloud Storage:

{
  "evalscriptReference": {
    "gs": {
      "url": "gs://{bucket}/{key}",
      "credentials": "{base64-encoded-credentials}"
    }
  }
}

Using GCS for Output

To deliver results to Google Cloud Storage:

{
  "output": {
    "delivery": {
      "gs": {
        "url": "gs://{bucket}/{key}",
        "credentials": "{base64-encoded-credentials}"
      }
    }
  }
}

Required GCS Permissions

The service account must have the following permissions on the specified bucket:

  • storage.objects.create
  • storage.objects.get
  • storage.objects.delete
  • storage.objects.list

These permissions can be granted through IAM roles such as Storage Object Admin or custom roles. If possible, restrict access to the specific delivery path within the bucket for enhanced security.

Using GCS Configuration in Requests

The GCS configuration can be used in:

  • evalscriptReference.gs - to specify the bucket where the evalscript .js file is available (optional)
  • output.delivery.gs - to specify the bucket where the results will be stored (required)

Check the Async API reference for more information.
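
As with S3, both fields can appear in a single request body. A sketch of the fragment, where the bucket names and keys are placeholders and credentials_b64 is the base64 string produced in the encoding step above:

# A fragment combining both GCS uses in one request body: the evalscript is
# read from one path and the results are written to another.
request_fragment = {
    "evalscriptReference": {
        "gs": {
            "url": "gs://{bucket}/scripts/evalscript.js",
            "credentials": credentials_b64,
        }
    },
    "output": {
        "delivery": {
            "gs": {
                "url": "gs://{bucket}/results/{key}",
                "credentials": credentials_b64,
            }
        }
    },
}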

Cross-Cloud and Cross-Region Support

The Asynchronous Processing API provides complete flexibility in choosing storage locations for both input and output. Surcharges apply based on where your processing results are delivered (output storage location).

Storage Configuration Options

The table below shows output storage options for each deployment and their associated costs. Surcharges apply only to the volume of output data transferred to your storage.

Important: Input and output storage can be configured independently - you can mix and match any combination. For example, you can read evalscripts from GCS and write output to S3, or read from S3 in one region and write to S3 in another region. Input storage location does not affect costs.

Deployment            Region         Output Storage Location    Additional PU Cost
AWS EU (Frankfurt)    eu-central-1   S3 eu-central-1            None
AWS EU (Frankfurt)    eu-central-1   S3 (any other region)      0.03 PU/MB
AWS EU (Frankfurt)    eu-central-1   Google Cloud Storage       0.1 PU/MB
AWS US (Oregon)       us-west-2      S3 us-west-2               None
AWS US (Oregon)       us-west-2      S3 (any other region)      0.03 PU/MB
AWS US (Oregon)       us-west-2      Google Cloud Storage       0.1 PU/MB

Output Data Transfer Surcharges Summary:

  • Cross-region (same cloud): 0.03 PU per MB
  • Cross-cloud: 0.1 PU per MB
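
For example, delivering a 500 MB result from the AWS EU (Frankfurt) deployment to an S3 bucket outside eu-central-1 adds 500 × 0.03 = 15 PUs, while delivering the same result to a GCS bucket adds 500 × 0.1 = 50 PUs, in both cases on top of the regular processing cost.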

Important Notes:

  • Surcharges apply only to output data transfer (processing results)
  • Input location (evalscript files) does not affect costs
  • When using a bucket in a different region than the deployment, specify the region parameter in your request:
{
  "output": {
    "delivery": {
      "s3": {
        "url": "s3://{bucket}/{key}",
        "region": "{region}",
        "iamRoleARN": "{IAM-role-ARN}"
      }
    }
  }
}

Checking the Status of the Request

While the request is running, you can check its status (a polling sketch follows below). Once the processing is finished, the request is deleted from our system. If you try to check its status after it has been deleted, you will get a '404 Not Found' response even if the request was processed successfully.
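
Since results are delivered asynchronously, a simple polling loop is a common pattern. The sketch below assumes the status endpoint is the async process URL with the request id appended; the exact path and response shape are assumptions here, so confirm them in the Async API reference.

# A polling sketch. The status endpoint is an assumption (request id appended
# to the base URL); confirm the exact path in the Async API reference.
import time
import requests

ASYNC_URL = "https://services.sentinel-hub.com/api/v1/async/process"
request_id = "{async-request-id}"  # returned when the request is created

while True:
    status = requests.get(
        f"{ASYNC_URL}/{request_id}",
        headers={"Authorization": f"Bearer {token}"},  # token from the earlier sketch
    )
    if status.status_code == 404:
        # The request has finished and been deleted from the system; check
        # your bucket for the results (or error.json on failure).
        break
    print(status.json())
    time.sleep(30)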

Troubleshooting

If anything goes wrong when creating an Async request, we will return an error message immediately. If anything goes wrong after the Async request has been created, we will deliver an "error.json" file with an error message to your object storage.