
Tanager pricing assumptions

This document describes how we have estimated the cost of downloading Tanager methane data.

In the future we will only get search results for data that intersects with our minimum search areas. Given the current limitations of the API, however, we are unable to do this, and instead get all results in a much larger area. This means we will need to pull in a lot more data, most of which may not be relevant to our sites. In the interim, Planet have offered a solution whereby we are only charged the minimum search area once per scene, which means we can safely pull in everything the search returns. For clarity, we have the following explanation from the Planet team:

In the short term we will be accounting for your usage such that we will only be charging you the 100 sqkm minimum per unique item ID. For example, if you download an area where 5 TanagerMethane plumes have been published there you will not be charged 100 sqkm * 5 plumes, you will be charged 100 sqkm so long as each of those plumes comes from the same source image, TanagerScene ID. Within the next month this will be more straightforward and will not be as confusing as we roll out improvements.

We will assume that all assets with the same strip_id, geometry and acquired timestamp come from the same source image, or scene. There is no Tanager Scene ID field in the search result output, but this is the most natural interpretation of a single scene (the key point being that the assets share the same polygon outline and timestamp). From here on we refer to this grouping as a scene.
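
As a concrete illustration, a grouping key along these lines would do. This is a minimal sketch, assuming search results are GeoJSON-style feature dicts with strip_id and acquired in their properties; the SHA-1 digest is just one reasonable way to make the geometry hashable:

```python
import hashlib
import json

def scene_key(feature: dict) -> tuple:
    """Key identifying a scene: assets that share strip_id, geometry and
    acquired timestamp are grouped together."""
    props = feature["properties"]
    # Hash the GeoJSON geometry so the key stays compact and hashable.
    # This assumes assets from one scene come back with identical
    # coordinates, which is what lets a geometry hash identify scenes.
    geom_hash = hashlib.sha1(
        json.dumps(feature["geometry"], sort_keys=True).encode()
    ).hexdigest()
    return (props["strip_id"], geom_hash, props["acquired"])
```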

Process

First we do a search over the Permian and find all scenes that intersect with our locations of interest.
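
For illustration, such a search can be issued against the Planet Data API quick-search endpoint. This is a sketch only: the TanagerMethane item type name is taken from Planet's explanation above, and the AOI coordinates are placeholders, not our actual locations of interest.

```python
import os
import requests

PLANET_API_KEY = os.environ["PL_API_KEY"]
SEARCH_URL = "https://api.planet.com/data/v1/quick-search"

# Placeholder polygon standing in for the Permian region of interest.
permian_aoi = {
    "type": "Polygon",
    "coordinates": [[
        [-104.0, 31.0], [-102.0, 31.0], [-102.0, 33.0],
        [-104.0, 33.0], [-104.0, 31.0],
    ]],
}

search_request = {
    "item_types": ["TanagerMethane"],  # assumed item type name
    "filter": {
        "type": "AndFilter",
        "config": [
            {"type": "GeometryFilter", "field_name": "geometry",
             "config": permian_aoi},
            {"type": "DateRangeFilter", "field_name": "acquired",
             "config": {"gte": "2024-09-01T00:00:00Z",
                        "lte": "2025-03-14T00:00:00Z"}},
        ],
    },
}

resp = requests.post(SEARCH_URL, auth=(PLANET_API_KEY, ""), json=search_request)
resp.raise_for_status()
features = resp.json()["features"]  # pagination via _links omitted for brevity
```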

We then do a specific search at the centroid of the scene polygon, with a time filter matching the acquired timestamp, using a polygon of area 0.1 km^2. This additional search is not strictly necessary, but it keeps a clear record of the fact that we are only interested in a specific point and are then pulling all assets associated with that point. From a billing perspective this makes our intentions clear. The search gives us a list of all assets at that scene, and from that list we download each ortho_ql_ch4 and ql_ch4_json file.
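
A sketch of how the point-search request could be constructed, assuming shapely for the centroid and a rough degrees-per-kilometre conversion (the square's exact area is not critical, only that it stays well under the billing minimum):

```python
from shapely.geometry import shape

def point_search_body(feature: dict, side_km: float = 0.3162) -> dict:
    """Quick-search request for a ~0.1 km^2 square centred on the scene
    polygon's centroid, pinned to the scene's acquired timestamp."""
    centroid = shape(feature["geometry"]).centroid
    # ~111 km per degree of latitude; longitude shrinkage is ignored,
    # which is fine for a footprint this small.
    half = (side_km / 111.0) / 2
    square = {
        "type": "Polygon",
        "coordinates": [[
            [centroid.x - half, centroid.y - half],
            [centroid.x + half, centroid.y - half],
            [centroid.x + half, centroid.y + half],
            [centroid.x - half, centroid.y + half],
            [centroid.x - half, centroid.y - half],
        ]],
    }
    acquired = feature["properties"]["acquired"]
    return {
        "item_types": ["TanagerMethane"],  # assumed item type name
        "filter": {
            "type": "AndFilter",
            "config": [
                {"type": "GeometryFilter", "field_name": "geometry",
                 "config": square},
                # gte == lte pins the search to the exact acquired time.
                {"type": "DateRangeFilter", "field_name": "acquired",
                 "config": {"gte": acquired, "lte": acquired}},
            ],
        },
    }
```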

Pricing is computed per scene, as per the explanation above. The first download from a scene is charged as the minimum area for the imagery type (100 or 10 km^2) multiplied by the applicable rate. The rate depends on the timestamp and the sensitivity mode: if the acquired time is less than 30 days ago we use the higher rate, otherwise we use the older-than-30-days rate, and if the sensitivity is above standard we apply the appropriate multiplier. Any further assets downloaded from the same search group (scene) incur no additional charge.
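
In code, the charging rule looks roughly like the sketch below. The minimum areas, rates and sensitivity multipliers here are placeholder values, and which imagery type carries the 100 km^2 versus the 10 km^2 minimum is an assumption; the real numbers come from our agreement with Planet.

```python
from datetime import datetime, timedelta, timezone

# Placeholder values; the real minimums, rates and multipliers come from
# our agreement with Planet. Which imagery type carries the 100 km^2
# minimum is likewise an assumption.
MIN_AREA_KM2 = {"archive_0_30_days": 100.0, "archive_30_days": 10.0}
RATE_PER_KM2 = {"archive_0_30_days": 0.05, "archive_30_days": 0.01}
SENSITIVITY_MULTIPLIER = {"standard": 1.0, "high": 2.0}

def charge_for_scene(acquired: datetime, sensitivity: str,
                     already_charged: bool) -> float:
    """Tokens charged for one asset download: only the first download
    from a scene is billed, at minimum area times the applicable rate."""
    if already_charged:
        return 0.0
    age = datetime.now(timezone.utc) - acquired
    imagery_type = ("archive_0_30_days" if age < timedelta(days=30)
                    else "archive_30_days")
    # Our point searches are far below the minimum, so the charged area
    # is always the minimum for the imagery type.
    return (MIN_AREA_KM2[imagery_type] * RATE_PER_KM2[imagery_type]
            * SENSITIVITY_MULTIPLIER[sensitivity])
```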

We then move on to the next scene, do another search at its centroid, and repeat the process.
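
Putting the pieces together, the whole process is roughly the following loop, reusing the helpers sketched above; the sensitivity property name is an assumption.

```python
from datetime import datetime

# Reuses scene_key, point_search_body, charge_for_scene, SEARCH_URL and
# PLANET_API_KEY from the earlier sketches.
charged_scenes: set[tuple] = set()
total_tokens = 0.0

for feature in features:  # results of the Permian-wide search
    key = scene_key(feature)
    resp = requests.post(SEARCH_URL, auth=(PLANET_API_KEY, ""),
                         json=point_search_body(feature))
    resp.raise_for_status()
    for item in resp.json()["features"]:
        props = item["properties"]
        acquired = datetime.fromisoformat(
            props["acquired"].replace("Z", "+00:00"))
        # "sensitivity" is an assumed property name for the collection mode.
        total_tokens += charge_for_scene(
            acquired, props.get("sensitivity", "standard"),
            already_charged=key in charged_scenes)
        charged_scenes.add(key)
        # ...then fetch the item's assets and download the ortho_ql_ch4
        # and ql_ch4_json files (omitted here).

print("expected total cost:", total_tokens, "tokens")
```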

First run details

For our first data download we fetched everything that intersects with a Chevron site in the Permian, from September 2024 to now (14th March 2025). We only downloaded data with standard sensitivity as the collection mode; only one scene was excluded by this restriction.

We downloaded a total of 316 assets across 49 unique scenes. Both the ortho_ql_ch4 and ql_ch4_json asset types were downloaded.

The CSV file downloaded_assets.csv lists all downloaded assets along with some useful metadata. The key columns that are not self-explanatory are:

  • geometry: this is a hash of the polygon, included so that unique scenes are easy to identify.
  • tokens_used: this is the cost that we expect for this asset. If this is zero it is because another asset has already been charged within that scene.
  • imagery_type: this is either archive_30_days for data older than 30 days from the time of download or archive_0_30_days for more recent data.
  • charged_area_km2: this is the area of our search or the minimum area for the imagery type, whichever is larger. Since we are effectively doing point searches (polygons of area 0.1 km^2), this is always the minimum for the imagery type.
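
For reference, the charged_area_km2 column follows this rule (minimum-area values are the assumed placeholders from the pricing sketch above):

```python
# Assumed minimum areas per imagery type, as in the pricing sketch above.
MIN_AREA_KM2 = {"archive_0_30_days": 100.0, "archive_30_days": 10.0}

def charged_area_km2(search_area_km2: float, imagery_type: str) -> float:
    """Larger of the search polygon's area and the imagery type minimum;
    for our ~0.1 km^2 point searches this is always the minimum."""
    return max(search_area_km2, MIN_AREA_KM2[imagery_type])
```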

To understand how the cost was computed, sort the table by timestamp and geometry. It is then clear that there is one charge per geometry/timestamp group, and that charge can be read off from the sensitivity, imagery type and charged area columns.
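
This check can also be done programmatically against the CSV; a small pandas sketch, assuming the timestamp column is named acquired:

```python
import pandas as pd

df = pd.read_csv("downloaded_assets.csv")

# Sort so that each geometry/timestamp (scene) group is contiguous.
df = df.sort_values(["acquired", "geometry"])

# Exactly one asset per scene should carry a non-zero charge.
charges_per_scene = df.groupby(["geometry", "acquired"])["tokens_used"].apply(
    lambda tokens: (tokens > 0).sum())
assert (charges_per_scene == 1).all()

print("total cost:", df["tokens_used"].sum(), "tokens")
```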

The total cost computed for this download was 457.875 tokens.

The app.log file contains detailed logs of our process. The download was done in two batches: the first was a test run for September 2024, with a total cost of 6.75 tokens; we then reran the job for the entire time period, giving a cost of 451.875 tokens. The process does not try to download data that has already been downloaded. We had also already downloaded some assets from one scene in January 2025 (202501291_180053). These were not downloaded again, but should be charged from when we did download them a few days earlier, and that charge has been included in the total cost here.